WO2000079786A1 - Reduced power, high speed, increased bandwidth camera - Google Patents
- Publication number
- WO2000079786A1 (PCT/US2000/013763)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- ccd array
- active
- signal
- frame rate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
Definitions
- the present invention is a digital camera, more specifically a very high speed camera that can stop the action of events occurring at very high speeds while consuming less power.
- the power consumed by the CCD imager increases at least proportionally to the increase in the frame rate.
- the present invention provides a high speed camera with various method, computer-implementable method and apparatus improvements to a previous high speed camera design while enabling higher resolution and the use of low speed components, resulting in a high speed camera that provides high speed processing while keeping component costs low.
- One such improvement is a high speed camera capable of processing image data from a two channel CCD array at one-quarter the pixel rate at which that image is provided by the CCD array. Each channel is clocked at a selected frequency to achieve a total pixel rate provided by the CCD array of twice the selected frequency.
- An added benefit to this method is that the data signals from the two channels are automatically demultiplexed with the individual color pixels being in digital form.
- the other channel of the CCD array provides a GBGB data signal of the image, with the analog to digital conversion splitting the GBGB data signal into a GRN2 data signal and a BLU data signal, as sketched below.
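A minimal sketch (not the patent's circuitry) of the demultiplexing described above: two A/D converters per CCD channel, clocked on opposite phases of a half-rate clock, split each interleaved 64 Mpixel/sec. channel into two 32 Mpixel/sec. color streams. The function and variable names are illustrative only.

```python
# Minimal sketch: two A/Ds per channel, clocked on opposite clock phases,
# demultiplex one interleaved CCD channel into its two color streams.

def demux_channel(channel_pixels):
    """Split an interleaved channel (e.g. R,G,R,G,...) into its two colors.

    The A/D clocked on the rising edge digitizes the even-numbered pixels,
    the A/D clocked on the falling edge digitizes the odd-numbered pixels,
    so the conversion itself performs the demultiplexing.
    """
    rising_edge_samples = channel_pixels[0::2]   # e.g. RED (or GRN2)
    falling_edge_samples = channel_pixels[1::2]  # e.g. GRN1 (or BLU)
    return rising_edge_samples, falling_edge_samples

# Example: one short run of the RGRG channel
rgrg = [101, 55, 98, 60, 97, 57]       # R, G, R, G, R, G pixel values
red, grn1 = demux_channel(rgrg)
print(red)   # [101, 98, 97] -> RED stream at half the channel pixel rate
print(grn1)  # [55, 60, 57]  -> GRN1 stream at half the channel pixel rate
```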
- Working at the high frequency at which the camera of the present invention is intended to operate requires that a variable phase shift circuit be used to clock each of the A/D converters that process the signals from the CCD array.
- A variable phase shift circuit based on the half frequency signal is provided for that purpose. That circuit includes a feedforward path and a feedback path.
- In the feedforward path there is a voltage variable, active filter, delay circuit coupled to receive and selectively delay the half frequency clock signal. From the active filter the clock signal is converted by a differential to TTL level translator to a TTL sample clock signal selectively delayed from the half frequency clock signal, with said sample clock signal being disposed to be applied to a corresponding one of the four A/D converters as a clock signal.
- the feedback path includes an exclusive NOR gate having one input terminal coupled to receive the sample clock from said differential to TTL level translator and a second input terminal coupled to the input of the active filter delay circuit to produce a signal that is proportional to the phase difference between the half frequency clock and sample clock signals, together with the polarity of that difference.
- This difference signal is integrated to provide an analog signal that is proportional to the phase difference.
- a D/A converter coupled to the control subsystem of the camera to receive a signal corresponding to the desired phase shift of a particular variable phase shifter circuit and convert that phase shift signal to an analog signal.
- an operational amplifier coupled to receive the analog difference signal from the integrator and the said analog phase shift signal from the D/A converter, and coupled to the active filter delay circuit to create a voltage feedback signal that adjusts the phase delay presented by the variable phase shifter circuit.
- the horizontal drivers of the present invention include a high frequency bipolar buffer stage coupled to receive a corresponding control signal from the control subsystem of the camera, a plurality of capacitors coupled to the buffer stage to compensate for parasitic capacitance, high frequency bipolar NPN and PNP transistors connected in a push-pull arrangement with bases coupled to the buffer stage and emitters coupled to a corresponding interactive terminal of the CCD array, and an R-L network coupled between the combined bases and the collector of one of the NPN and PNP transistors to speed up the transitions of the transistors in the push-pull arrangement. Additionally, signal voltage level shifting diodes to define the minimum and maximum signal levels from the buffer stage to match operational signal levels of the CCD array could be included if needed.
- the high frequency CCD arrays are clocked faster than the internal time constant of the output reset stage of the CCD array resulting in each output data pixel of the image containing a contribution from a previous data pixel of the image.
- the present invention provides a method and apparatus to minimize the residual effect of the previous data pixel on a current data pixel in the data pixel signal stream. This is accomplished by first converting the data pixel signal stream to a digital pixel stream with the digital pixel stream next delayed by a time equivalent to two pixel time periods.
- a value of the previous pixel is multiplied by a preselected factor having a value of less than one to obtain a fractional value of the previous pixel, which is subtracted from the value of the current data pixel to obtain a compensated value for the current pixel. This process is repeated for all of the pixels in the data stream to compensate for the smearing effect, as sketched below.
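A hedged sketch of the single-pixel compensation just described: the previous pixel, scaled by a preselected factor of less than one, is subtracted from the current pixel. The factor value 0.2 is illustrative only, not taken from the patent.

```python
# Single-pixel smear compensation: subtract a fraction of the previous pixel
# from the current pixel, repeated over the whole data stream.

def compensate_smear(pixels, x=0.2):
    corrected = []
    prev = 0  # no contribution ahead of the first pixel in the stream
    for p in pixels:
        corrected.append(p - x * prev)  # remove residue of the previous pixel
        prev = p
    return corrected

print(compensate_smear([100, 40, 40, 120]))  # [100.0, 20.0, 32.0, 112.0]
```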
- Another feature of the present invention utilizing the digitized pixel data signals of the camera is the determination of, and compensation for, the dark reference (i.e., black level) offset of the pixel data streams from the CCD array.
- the pixel data from the dark reference image area of the CCD array is used to determine and remove an average dark reference level from the data pixels originating from the active image receiving area of the CCD array.
- the data signals from the CCD array are first applied to an A/D converter to convert those signals to digital pixels since the dark reference determination is performed in the digital domain in the present invention.
- the average value of a selected number of the dark reference pixels from the digital pixel stream for the same image line is subtracted from the level of each active image pixel.
- This process is performed on the pixel data for each line of the image.
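A short sketch of that per-line correction (an assumed framing, not the patent's gate-level implementation): the average of a selected number of dark reference pixels is subtracted from every active pixel of the same line.

```python
# Per-line dark reference (black level) correction.

def correct_line(dark_ref_pixels, active_pixels):
    black_level = sum(dark_ref_pixels) / len(dark_ref_pixels)
    return [p - black_level for p in active_pixels]

# Example: 4 dark reference pixels followed by the line's active pixels
print(correct_line([10, 12, 11, 11], [110, 75, 90]))  # [99.0, 64.0, 79.0]
```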
- Another technique to further improve the performance of the high speed camera of the present invention is to convert the average dark reference level to an analog offset signal, which is used to hold the DC level of the output operational amplifier from the corresponding channel of the CCD array constant. This provides two advantages: one is to keep the operational amplifier on the output of the CCD array from going into saturation; another is to maximize the usable input dynamic range of the components that follow the CCD array.
- the present invention also provides a high speed electronic camera and a method for reducing the power consumed by a CCD array as the image frame rate increases while maintaining the same operational frequency.
- the high speed electronic camera of the present invention includes a lens assembly disposed to receive an image and focus it onto a CCD array that has an active image receiving area with a first number of lines and a plurality of interactive terminals, with the active image receiving area disposed to receive the image from the lens assembly.
- the CCD array also has an active image storage area with the first number of lines to save an image transferred from the active image area wherein each image includes a second number of lines with the second number being less than said first number.
- the camera further includes an oscillator to define the maximum operational signal frequency of the camera, a control subsystem to generate internal control signals, a plurality of vertical image drivers coupled to corresponding ones of the interactive terminals of the CCD array, with the oscillator and control subsystem vertically advancing image charges through the CCD array a line at a time under control of the oscillator utilizing signals received from the control subsystem, and a pair of horizontal image drivers coupled to corresponding ones of the interactive terminals of the CCD array, with the oscillator and control subsystem horizontally advancing image pixel charges through, and out from, the CCD array as image pixel bit signals under control of the oscillator utilizing signals received from the control subsystem.
- a first of the vertical image drivers provides a first pulsed signal to the active image receiving area of the CCD with a third number of pulses in the first pulsed signal equal to the second number of lines of the image being written into the active image area
- a second of the vertical image drivers provides a second pulsed signal to the active image storage area of the CCD to store the image from the active image area into the active image storage area with a fourth number of pulses in the second pulsed signal
- the method reduces the power consumed by the CCD array as the image frame rate increases while the same operational frequency is maintained. This is accomplished with the first of the vertical image drivers providing a first pulsed signal to the active image receiving area of the CCD with the third number of pulses in the first pulsed signal equal to the second number of lines of the image being written into the active image area, and the second of the vertical image drivers providing a second pulsed signal to the active image storage area of the CCD to store the image from the active image area into the active image storage area with the fourth number of pulses in the second pulsed signal.
- Figure 1 is a block diagram representation of the high speed camera system of the prior art.
- Figure 2a is a block diagram of the implementation of the camera head module of the present invention.
- Figure 2b is a schematic representation of a high frequency horizontal driver of the present invention.
- Figure 3 is a block diagram representation of the analog to digital conversion and data signal demultiplexer configuration of the present invention.
- Figure 4 is a block diagram of the programable phase shifter of the present invention.
- Figure 5 is a detailed block diagram of an individual variable clock phase shifter circuit of the present invention which is a component part of the phase shifter as in Figure 4.
- Figure 6 is a representation of an analog video signal from the CCD imager array of Figures 1 and 2a.
- Figure 7 is a block diagram of the video black level determination technique of the present invention.
- Figure 8 is a schematic representation of a typical output stage of a CCD array.
- Figure 9a is a block diagram that illustrates single pixel smearing compensation for the embodiment of Figure 1 that includes two A/D converters.
- Figure 9b is a block diagram that illustrates double pixel smearing compensation for the embodiment of Figure 1 that includes two A/D converters.
- Figure 9c is a block diagram that illustrates single pixel smearing compensation for the embodiment of Figure 3 that includes four A/D converters.
- Figure 9d is a block diagram that illustrates double pixel smearing compensation for the embodiment of Figure 3 that includes four A/D converters.
- Figures 10a-b are simplified graphical representations of the operation of a CCD imager.
- Figure 11 is a set of graphical representations of the image and storage areas of a CCD imager after an image is copied from the image area to the storage area at various frame rates as per the prior art.
- Figures 12a-e are graphical representations of the image and storage areas of a CCD imager after an image is copied from the image area to the storage area at various frame rates as per the present invention.
- Figures 13a-e are graphical representations of the image and storage areas of a CCD imager after an image is read out from the storage area at various frame rates as per the present invention.
- Figures 14a-c illustrate the waveforms of the vertical clocking signals applied to a CCD imager at various frame rates as per the present invention.
- Figure 1 presents a block diagram of an earlier version of a two channel high speed camera system.
- the overall system concept of that earlier version carries over into the camera system of the present invention that runs faster than that earlier version.
- the present invention incorporates numerous dramatic changes that permit the camera system of the present invention to run at twice the speed of the earlier version of the camera.
- the prior version of the two channel high speed camera is discussed first.
- the earlier version of the camera system runs at a 32 Mpixel/sec. rate per channel (64 Mpixels/sec. overall), whereas the camera system of the present invention runs at a 64 Mpixel/sec. rate per channel (128 Mpixels/sec. overall), with that being achieved through the improvements discussed below that allow the pixel rate to be doubled while still permitting the use of the majority of the lower cost components used by the earlier version of the camera system.
- a camera as in Figure 1 consists of four component structures: camera head module 2; bidirectional interconnection cable 4; control subsystem 6; and a user interface 8.
- For convenience, and to permit camera head module 2 to be as small as possible, the number of components within that module is limited, with the camera head being connected to the control subsystem via bidirectional cable 4, which carries data and control signals in both directions, as well as DC power from the control subsystem to camera head module 2.
- Camera head module 2 includes a lens system 10 that focuses the desired image on a two channel CCD imager array 12 (e.g., TC236).
- Providing the necessary clock, or control, signals to CCD array 12 are clock drivers 14 (e.g., a pair of TMC57253 chips connected in parallel) which in turn receive those several control signals from differential/TTL converter 19, having received differential logic signals from control subsystem 6 via twisted pairs in cable 4 (including RST, SRG, IAG and SAG shown in Figure 1 and discussed more fully below).
- differential/TTL converter 19 converts the differential control signals received via the twisted pairs to standard TTL signals before applying those signals to clock drivers 14.
- Clock drivers 14 then convert those control signals into the power clock signals that are needed to drive CCD array 12, with timing provided by oscillator 16 which operates at a 32 MHz rate.
- oscillator 16 is the master clock for the entire camera system.
- Oscillator 16 is located in camera head 2 since the location of clock signal edges is most critical in the definition of the clock signals that are applied to CCD array 12, i.e., the capturing of the image is the most critical timing operation of the camera. Once the image is captured, provision is made in the downstream circuits to reposition the various clock signal edges as is discussed below. Further, as can be appreciated, if a master clock signal were sent through cable 4, given the length and potential interaction with other signals being carried by that cable, the signal edges of a master clock signal at the opposite end of cable 4 in such an environment could become altered. Additionally, 32 MHz was selected in the previous system, and is also used in a portion of the system of the present invention, since that frequency is approaching the bandwidth limit of many of the components used in both camera head 2 and control subsystem 6.
- Coming from CCD array 12, in response to the image from lens system 10 and the clock driver signals, is a pair of video signals (GBGB [green-blue-green-blue] and RGRG [red-green-red-green] in a color implementation, and two monochrome channels in a black and white implementation), one from each channel. These signals are very small in amplitude, thus they are directed to amplifiers 18 to be amplified before being applied to cable 4 for transmission to control subsystem 6.
- the clock signal (CLK) from oscillator 16 is also sent across cable 4 to control subsystem 6.
- Locating oscillator 16 in camera head 2 allows transmission of the actual clock signal that was used to generate the video signals to control subsystem 6, via cable 4, together with the video signals so that all of the clock edges agree at the point in time that the video signals are applied to cable 4.
- the video and clock signal edges are much closer in time to each other than they would have been if the clock had been generated in control subsystem 6 and transmitted to camera head 2.
- the CCD array being used in both the previous and present invention camera designs needs one clock signal to move the image down (IAG), another clock signal to move the stored image down (SAG), still another clock signal for shuttering the array, yet another clock signal to move the image horizontally (SRG) and still another clock signal to reset the output buffer of each channel (RST) of CCD array 12 after every pixel.
- the two clock signals performing horizontal functions (SRG and RST) have the same frequency as each other.
- all of the clock signals performing vertical functions (IAG and SAG) also have the same frequency as each other; however, the frequency at which the vertical functions are performed is lower than the frequency at which the horizontal functions are performed.
- Control subsystem 6 receives the video signals (GRGR and BGBG) and the clock signal (CLK) as input signals to amplifiers 20 and 21, and programable phase shifter 26, respectively.
- Amplifiers 20 and 21 are similar to amplifiers 18 in camera head 2 and are provided to compensate for signal strength lost by the video signals in cable 4 (i.e., the longer cable 4, the higher the gain of amplifiers 20 and 21), as well as to buffer the inputs of A/D converters 22 and 23 with a low impedance. From amplifiers 20 and 21, the amplified video signals are applied to A/D converters 22 and 23, respectively, to convert the analog video signals to digital video signals.
- CCD array 12 produces two output signals, so there are two channels of video data being processed simultaneously throughout the various elements of the camera before the final processed image is presented to the user in any of the output image subsystems.
- From A/Ds 22 and 23 the corresponding digital video signals are applied to FIFO (First In First Out) memories 24 and 25, respectively.
- FIFOs 24 and 25 provide the corresponding digital video signal to several additional components in control subsystem 6.
- control subsystem 6 also includes RAM 30, filter FPGA (Field Programable Gate Array) 32 and a PCI bus interface 36.
- PCI bus interface 36 is optional and is provided in those camera systems where the user wants to utilize the images in another system that contains a PCI bus (e.g., a PC or network).
- filter FPGA 32 is programed to perform various functions on the digital video data under control of microprocessor 34 (e.g., 68HC11) and control FPGA 28.
- the video signals are received by filter FPGA 32 either directly from FIFOs 24 and 25, or from RAM 30 depending on the time necessary for filter FPGA 32 to perform various tasks assigned to it.
- From filter FPGA 32 the video data, having been packetized into 24 bit words, are transferred via a parallel bus to video encoder 38 where the video data is converted to world standard TV signals. Those standardized TV video signals are then available to the user directly, or they are provided to user interface 8.
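As a hedged illustration only: the text states that the video data are packetized into 24 bit words before the parallel-bus transfer to video encoder 38. One plausible packing, assumed here and not stated in the text, is three 8-bit color values per word.

```python
# Illustrative 24-bit word packing (assumed RGB byte layout, not confirmed
# by the patent text).

def pack_rgb24(r, g, b):
    return ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF)

print(hex(pack_rgb24(0x12, 0x34, 0x56)))  # 0x123456
```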
- control FPGA 28 provides the control signals, including RST, SRG, IAG and SAG, to clock drivers 14 via twisted pairs in cable 4 and differential/TTL converter 19 in camera head 2.
- Microprocessor 34 controls each functional block, makes the image smaller or larger, and performs other functions. Microprocessor 34 also controls control FPGA 28 which in turn controls FIFOs 24 and 25 and PCI bus interface 36.
- the clock signal transitions often have moved relative to corresponding positions in the video signals (RGRG and GBGB) on the control subsystem 6 side of cable 4 with this repositioning being substantially due to the construction of cable 4, the relative position of each wire or cable within cable 4, as well as the length of cable 4.
- programable phase shifter 26 is provided to reposition those clock edges so that the video image can be reconstructed in control subsystem 6.
- Since this movement of clock edges is substantially related to the individual cable, programmable phase shifter 26 needs only to be programed, or reprogrammed, whenever cable 4 is changed. This adjustment of clock signal edges in control subsystem 6 is necessary since the video bit rate is fast and the video signal that control subsystem 6 is trying to capture is a very narrow peak. Accordingly, the clock edge must occur at the same point in time as does the video pulse or else the video image cannot be recovered and no image appears at any of the output points of the system.
- the adjusted clock signals from programable phase shifter 26 are then applied to A/D converters 22 and 23, and control FPGA 28.
- the realigned clock signal edges when applied to A/D converters 22 and 23 cause the digitization of the video signals to occur synchronously at each peak.
- programable phase shifter 26 cannot be automatically programed, and therefore must be programed manually each time the cable is changed. This is typically done at the factory and is performed visually by an operator watching the image on CRT 42 and varying the program setting of programable phase shifter 26 by controlling microprocessor 34 via keyboard 40. This procedure is the same for both the camera system of the previous design, as well as those that include the features of the present invention.
- the basic difference between the camera system of the previous design and that of the present invention is that in the present invention camera system the video pixel rate from the CCD imager array is 64 Mpixel/sec. per channel (128 Mpixels/sec. overall), as opposed to 32 Mpixel/sec. per channel (64 Mpixels/sec. overall) in the prior design, while using double the oscillator 16 clock rate (64 MHz) to capture the image in camera head 2 and still maintaining the same clock rate (32 MHz) for the digital components in control subsystem 6.
- One of the unique things about the present invention is the ability to double the pixel rate while maintaining the same digital component clock rate.
- FIG. 2a is a block diagram of the implementation of camera head module 2' of the present invention.
- oscillator 16' is a 64 MHz clock.
- the vertical drivers for CCD array 12 are implemented with a single TMC57253 driver 14' that receives the vertical clock signals, IAG and SAG, from control FPGA 28 via differential to TTL converter 19'.
- This implementation of the vertical drivers is possible since less power is required for the vertical drivers, thus the present application remains within the bandwidth of the TMC57253 vertical drive circuits even with the doubling of the clock speed.
- the horizontal channels of the TCM57253 chip are not used in the present implementation; instead, two identical discrete high speed/high power driver circuits 15 and 17 are used to power the horizontal Reset (RST) and Serial Register Gate (SRG) signals needed by CCD array 12 to perform the horizontal shifting functions, with the RST and SRG signals from control FPGA 28 being received via differential/TTL converter 19.
- TC236 CCD arrays require these two signals, RST and SRG, and those are the two that have to run at the full pixel rate of 64 Mpixel/sec. in the present invention.
- Figure 2a shows a frequency divider 13 coupled to 64 MHz oscillator 16' to divide the frequency by 2 to provide a 32 MHz clock signal to control subsystem 6, and amplifiers 18 of Figure 1 implemented as two separate operational amplifiers 18' and 18" with their operation discussed in more detail below.
- the RST, SRG, IAG and SAG signals can alternatively be generated elsewhere in the camera system (e.g., by an FPGA located in camera head 2).
- the reason that the two horizontal driver stages of a TCM57253 chip can not be used in the implementation of the present invention is a function of the basic design of that chip and the CCD arrays with which it is designed to interface.
- the drivers that perform the vertical functions are typically operating into a 4000 pfd capacitance load, whereas the horizontal drivers are typically operating into a 10-70 pfd load; thus the vertical drivers in the TCM57253 chip are designed to be very powerful but not that fast, and conversely the horizontal drivers are designed to be fast but not that powerful.
- the two horizontal drivers of conventional clock driver chips, such as the TCM57253, can not be used due to their slow rise and fall times and long delay times.
- the discrete circuit of Figure 2b was designed for horizontal power drivers 15 and 17 of Figure 2a utilizing high frequency bipolar transistors to produce the needed level shifter/clock driver having good edge times while working at 64 MHz.
- a separate combined voltage shifter and high frequency power driver of the type shown in Figure 2b is used for each of the RST and SRG horizontal gate clocks of CCD array 12.
- the input signal for either the RST or SRG function is received from control FPGA 28 via a twisted pair of wires in cable 4 and differential to TTL converter 19.
- That received signal first undergoes a TTL to 12 Vp-p level shift (since CCD array 12 is a CMOS device) by applying the control signal to the base of positively biased PNP transistor 60 and the clamping diodes 76 and 78 connected to the collector of transistor 60.
- the level shifted control signal from diodes 76 and 78 is applied to a pair of high frequency transistors 62 and 64 connected in a push-pull driver circuit arrangement with the power clock signal being delivered to the RST or SRG inputs of CCD array 12 from the connected emitters of transistors 62 and 64.
- the transistors are not driven into saturation to insure the rapid switching times needed.
- a circuit of this design is capable of charging and discharging a capacitance of about 75 pf over the full voltage excursion with a rise and fall time of about 2 nsec. at the 64 MHz rate.
- the level shifting portion of the circuit of Figure 2b also includes capacitor 74 connected in parallel with resistor 72, with both connected in series with the base of transistor 60. These components are provided to overcome the parasitic capacitance of transistor 60 at 64 MHz which otherwise would slow down transistor 60. Additionally, emitter capacitor 66 increases the current on the signal to help transistor 60 turn on and off at the 64 MHz rate.
- In the camera of the previous design, CCD imager array 12 provides two output channels of multiplexed data, each at 32 Mpixel/sec., whereas in the camera of the present invention that same CCD imager array produces the same two channels of multiplexed data at 64 Mpixel/sec. each.
- One of those channels provides RGRG multiplexed data and the other GBGB multiplexed data as indicated in Figure 1 at the output of amplifiers 18 and the inputs to amplifiers 20 and 21.
- the two data channels from CCD imager array 12 have a combined bandwidth of 128 Mpixel/sec.
- The A/D converters used are 10 bit flash converters having a maximum bandwidth of 48 Msamples/sec.
- Figure 3 illustrates the A/D configuration of the present invention to handle the 128 Mpixel/sec. bandwidth from CCD array 12.
- A/Ds 22 and 23 and FIFOs 24 and 25 of Figure 1 are replaced with the configuration shown in Figure 3.
- A/D 22 (Fig. 1) is replaced with A/Ds 22' and 22" with both receiving the GRGR input signal from amplifier 20
- A/D 23 (Fig. 1) is replaced with A/Ds 23' and 23" with both receiving the GBGB input signal from amplifier 21.
- each of A/Ds 22' and 23' is clocked with the 32 MHz CLK signal from frequency divider 13 (see Fig. 2a).
- the CLK signals needed by A/Ds 22' and 23' may be slightly out of phase with each other, as might the CLK-not signals used with A/Ds 22" and 23". Additionally, each CLK-not signal for A/Ds 22" and 23" may not be exactly 180° out of phase with the corresponding CLK signal needed for A/D 22' and 23', respectively.
- all four of the clock signals must be independently adjustable during set-up. Therefore, with this approach it is possible to get a 128 Mpixel/sec. rate with slower, low cost parts.
- the setting of the clocking phase shifts for A/Ds 22', 22", 23' and 23" is a one time set-up per actual cable 4 that is being used.
- These variations of clock phase shift result from variations in line length, orientation of lines with respect to other lines in the cable and varying printed circuit (PC) trace lengths, as well as the fact that the video signals are carried through coaxial cables within cable 4 while other signals (e.g., CLK) are carried by twisted pairs, so the propagation times are different.
- The A/D configuration of Figure 3 also automatically demultiplexes the two multiplexed input signals received from CCD array 12.
- A/Ds 22' and 22" each receive the RGRG signal with red as the digitized output of A/D 22' and with green 1 (GRN1) as the digitized output of A/D 22".
- A/Ds 23' and 23" each receive the GBGB signal with green 2 (GRN2) as the digitized output of A/D 23' and with blue (BLU) as the digitized output of A/D 23".
- the signals from A/Ds 22', 22", 23' and 23" are each at a 32 Mpixel/sec. rate thus allowing the downstream components to operate at the 32 MHz rate as in the camera system of the previous design.
- the present design handles information at twice the pixel rate of the camera of the previous design while still using the same clock rate in control subsystem 6 which permits the use of components that have a maximum upper clock rate of slightly higher than 32 MHz.
- the demultiplexed signals coming from each of the A/Ds are at half the actual channel pixel rate, with filter FPGA 32 being provided data at substantially the same rate as in the camera of the previous design discussed above.
- In Figure 4 there is shown a block diagram of programable phase shifter 26' of the present invention.
- programable phase shifter 26 provides two 32 MHz clock signals that are each individually adjustable and substantially in phase with each other.
- programable phase shifter 26' receives a 32 MHz clock signal from frequency divider 13 in camera head 2 via cable 4.
- the 32 MHz clock signal from frequency divider 13 is then applied to frequency divider 84 and to each of the four identical and independently variable phase shifter circuits 85-88, with the 32 MHz clock signal first passing through inverter 83 for variable phase shift circuits 86 and 88.
- Inverter 83 has been included since A/D converters 22" and 23" (which are driven by phase shift circuits 86 and 88) are triggered on the falling edge of the clock.
- the output from frequency divider 84 provides the 16 MHz clock signal needed by microprocessor 34.
- Each of variable phase shifter circuits 85-88 also receives an input signal from microprocessor 34 that is proportional to the amount of phase shift determined to be necessary during the initial calibration of the system with the selected cable, as discussed above. Then on output lines 50-53 of variable phase shifter circuits 85-88, the individual phase adjusted clock signals for each of A/Ds 22', 22", 23' and 23" of Figure 3 are presented.
- Figure 5 is a block diagram for one of the four variable phase shifter circuits 85-88 with the other three having the same configuration.
- a new circuit, differing from that of the previous design, is necessary for each of the individual phase shifters here due to the increased bandwidth of the signal of the present invention and the need for more temperature stability to support the doubling of the data rate from that of the previous design.
- the two output signals from CCD array 12 are very high frequency (64 MHz) analog signals with video energy concentrated at narrow peaks in each video signal as shown in Figure 6. These signal peaks are separated by large reset clock signals at 64 MHz which propagate the image charges through the output shift register of the CCD array. To capture the video information in those signals, A/Ds 22', 22", 23' and 23" must repeatedly sample the corresponding video signal (RGRG or GBGB) at each exact peak of that signal, which at 64 MHz is a very narrow window. This requires a high precision clock signal, for each of the A/Ds, whose phase relative to the incoming video signal does not drift with temperature, or otherwise. In the application of the present invention a drift of more than 1 nsec. is unacceptable.
- the phase delay circuit of the present invention is a closed loop and can provide a maximum of ⁇ 90° of phase shift.
- the circuit that is used here is unique in that it holds the output clock signals (sample clock in Fig. 4) at a precise phase delay from an incoming clock (CLK IN Fig. 4) under control of an input voltage.
- variable phase shifter circuit (each of phase shifter circuits 85-88 in Fig. 4) of the present invention is a closed loop, high frequency, all pass phase delay circuit; four of these circuits are used in programable phase shifter 26' (see Figs. 3 and 4) to permit individual adjustment of the clock phase before applying the sample clock signal to the corresponding A/D 22', 22", 23' or 23".
- a clock signal (CLK IN) is received from cable 4 and applied to varactors 90 and voltage divider 91 with an attenuated clock signal applied to the positive terminal of operational amplifier 92 from voltage divider 91.
- the output signal from varactor 90 is then applied to the negative terminal of operational amplifier 92.
- the output of operational amplifier 92 is then fed back to varactor 90 and applied to a TTL converter 94 to produce the sample clock signal with TTL logic voltage levels, which is then applied to the corresponding A/D converter (see Fig. 3) and to control FPGA 28 (see Fig. 1).
- exclusive NOR gate 96 receives the phase shifted sample clock signal at one input terminal and the CLK IN signal at the second input terminal.
- the two clock signals, CLK IN and sample clock are compared by exclusive ORing them to produce a signal that is proportional to the phase difference between the two clock signals, as well as the polarity of that difference.
- the resulting signal from gate 96 is then applied to integrator 98 to create an analog signal that is proportional to the phase difference between the two clock signals. This technique has been employed since this circuit has low temperature sensitivity. The average voltage from integrator 98 will thus remain stable regardless of circuit drift, signal speeds, part values, as well as temperature variations.
- operational amplifier 100 receives one input from integrator 98 and a second input from D/A 102 to scale the signal magnitude to be applied to varactors 90 to produce the desired phase delay. Since the signals being applied to operational amplifier 100 are low frequency signals, it is not necessary that amplifier 100 be a high frequency amplifier.
- the output signal from operational amplifier 100 is a DC level that is applied to varactor 90 to adjust the phase delay of the CLK IN signal as it passes therethrough.
- D/A 102 receives a fixed input signal from microprocessor 34, with that signal representing the phase delay having been selected during the initial set-up of the camera to match cable 4 with control subsystem 6 to capture the video signal being processed by the corresponding A/D converter of Figure 3.
- the delay (i.e., phase shift) of the output sample clock is measured and compared against the CLK IN signal, with the error voltage (created as a duty cycle difference by the XOR gate and averaged by a low pass filter) being applied to the varactors to change the delay that is being generated.
- This feedback configuration eliminates the effect of temperature changes on the circuit, producing a very accurate clock position that is programmable in fine digital steps via D/A 102 under control of microprocessor 34 (Fig. 1). While in the above discussion varactors are used in the voltage controllable phase delay feedforward path, there are other techniques that can be employed to accomplish the same result.
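A behavioral sketch of that closed loop, written at an assumption level rather than as the actual analog circuit: the phase detector produces a duty cycle proportional to the phase difference between CLK IN and the delayed sample clock, the averaged duty cycle is compared with the D/A setpoint, and the error adjusts the voltage-variable delay until the programmed delay is held. Loop gain, step count and the target delay are illustrative values only.

```python
# Behavioral model of the programmable phase delay loop (assumed framing).

def xor_duty_cycle(phase_deg):
    """Average XOR output of two square waves offset by phase_deg (0..180)."""
    return abs(phase_deg) / 180.0

def settle_delay(target_deg, loop_gain=20.0, steps=200):
    delay_deg = 0.0                            # state of the variable delay
    target_duty = xor_duty_cycle(target_deg)   # value programmed via the D/A
    for _ in range(steps):
        measured = xor_duty_cycle(delay_deg)   # averaged phase-detector output
        error = target_duty - measured         # op-amp comparison with setpoint
        delay_deg += loop_gain * error         # adjust the delay control voltage
    return delay_deg

print(round(settle_delay(35.0), 2))  # converges near 35 degrees of delay
```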
- In Figure 6 there is shown a typical video signal that is generated by each of the two channels of CCD array 12 and is delivered to each of A/Ds 22', 22", 23' and 23". Since CCD array 12 is running at 64 MHz, the signal variations in the video signal occur at that frequency, with the video information being the height of the peaks of that signal. Thus, when the video signal is sampled by the corresponding A/D 22', 22", 23' or 23", it is the peaks that are being sampled. Hence the need to align the clock pulses with the peaks of the video signal applied to the corresponding A/D, as was discussed above with respect to Figures 3, 4 and 5 in the initial setup for the system with a particular cable 4.
- a typical video signal is a negative going signal with each line of video information beginning with a fixed number of 'black' pulses that have an amplitude in a black region 104 generated by CCD array 12 without being exposed to external light, followed by a long series of 'active' pulses in an active region 106.
- When the video signal is sampled with A/Ds 22', 22", 23' and 23" (see Fig. 3), the peaks of the video signal are being sampled as a result of the phase adjustment of the individual clock signals to the A/Ds, with the pixels from the various A/Ds being either black pixels or active pixels corresponding to whether the information into the A/D was from the black region 104 or the active region 106 of the video signal.
- the black level, or the average amplitude of the pulses in black region 104, must be determined. This is done to permit the shifting of the active pixel levels to eliminate the black level and thus generate a zero reference level that corresponds to the black level.
- filtering the clock from the video signal does not work very well since the magnitude of the desired signal having the actual video information is much smaller than the magnitude of the clock signal.
- That sampling results in the removal of the clock noise in the raw video signal applied to the A/Ds.
- the operation of the camera is further improved by determining the level of the black pixels to permit shifting of the level of the raw video signals to substantially zero out the black level so that a "0" level pixel represents black.
- the black level is typically removed in prior art cameras by clamping the black region to electrical ground to align the video signal before the video signal is applied to the A/Ds to digitize the video information.
- the raw video signal is digitized without first adjusting for the black level, then the video pixels are examined to determine the black level, and then the black level information is fed back to the camera head where the output level of the output amplifiers is shifted to compensate for the black level in the video signal from the CCD array.
- the black level is subtracted from the video signal with a DC offset signal to the amplifiers so that the signal presented to the A/D converters is in the right digitizing range.
- a typical CCD array 12 has two processing channels, one to generate the RGRG signal and the other to generate the GBGB signal.
- Figure 7 is a block diagram that illustrates the generation of a black level corrected digital video signal in the present invention for a single video channel, with that operation being implemented as a part of filter FPGA 32. Since there are two video channels being provided by CCD array 12, this operation has to be performed twice, once for each video channel, since the black level for each channel is independent of the other channel. In Figure 3 each of the two video channels was demultiplexed into individual color pixel streams as discussed above.
- the first several pixels in each line in each color data pixel stream will represent black, or dark reference, pixels. Therefore, to determine the black level of the signal from each channel of the CCD array, a preselected number of the black pixels in the corresponding color data stream for each line must be obtained from the corresponding color data streams. For example, if 16 dark reference pixels are to be used to determine the black level of a channel of the CCD array, then eight of the dark reference pixels from each corresponding color data stream are received from the corresponding FIFOs (either 24'-24" or 25'-25") and applied to sequencer 108 in Figure 7 to reconstruct a partial serial video signal stream for the corresponding video channel (i.e., 16 pixels long in the present example).
- That sequenced serial stream of digital black pixels for the corresponding channel is then applied to one input of adder 110, and the individual pixel values from adder 110 are applied to accumulator 112, where the first pixel is added to zero and fed back to the second input of adder 110 to be added to the pixel value of the second black pixel, which is then transferred to accumulator 112.
- Accumulator 112 then feeds back the accumulated total of the values of the first two black pixels to the second input of adder 110 to be added to the value of the third black pixel with the resultant total of the three black pixels then passed to accumulator 112. This procedure is thus continued until accumulator 112 has an accumulated value for all 16 individual black level values in this example.
- a digital signal of the final accumulated value of the 16 black pixels at the beginning of the line of video data is then applied to divider 114 where the accumulated value is divided by 16 (or the appropriate value if other than 16 black samples are used) to determine the average black pixel value. Note that given that the signal from accumulator 112 is digital and that the accumulated result in this example is to be divided by 16 (2^4), divider 114 can be implemented by shifting the accumulated result by four bits.
- the average black pixel value for the current line of video data is then applied to the minus terminal of first and second color subtractors 116 and 118, with the raw video data from the corresponding FIFOs applied to the plus terminal of each of subtractors 116 and 118 to create a black level corrected individual color digital video data stream.
- the black level could be determined with any number of dark reference pixels that are available from the CCD array, from one to the maximum number available. However, the more black reference pixel values that are used to determine the black level for the active pixels, the less black level noise will be present.
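A sketch of the digital black level path of Figure 7 under the stated 16-pixel example: accumulate the dark reference pixels with the adder/accumulator, divide by 16 with a four-bit right shift as the divider, and subtract the result from every active pixel of the line. The pixel values below are illustrative only.

```python
# Digital black level determination and removal (16 dark sample example).

def black_level_correct(dark_pixels, active_pixels):
    assert len(dark_pixels) == 16           # the example uses 16 dark samples
    acc = 0
    for p in dark_pixels:                   # adder 110 / accumulator 112
        acc += p
    avg_black = acc >> 4                    # divider 114: shift by 4 bits = /16
    return [p - avg_black for p in active_pixels]  # subtractors 116 and 118

line_dark = [9] * 8 + [11] * 8              # 16 dark reference pixels
print(black_level_correct(line_dark, [120, 85, 70]))  # [110, 75, 60]
```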
- the digital black level pixel value from divider 114 is applied to D/A converter 120 with the resultant analog value being applied to the corresponding one of operational amplifiers 18' and 18" in Figure 2a for the video channel being compensated.
- the limits of the amplifiers are less likely to be exceeded, and the raw analog video signal being provided to A/Ds 22', 22", 23' and 23" remains centered in the range of the A/D converters, thus maximizing the dynamic range and allowing better tracking of the video signals by the A/D converters.
- Interpixel smearing of the video signal is also a problem when CCD array 12 is operated at a rate that is faster than the output stage was designed to support.
- In Figure 8 there is shown a simplified output stage of CCD array 12.
- the video data consists of a charge packet that is shifted through the array with the various clock signals discussed above. It might be visualized as a 'bucket brigade' as the charge ripples through the CCD array.
- the final stage of the CCD array applies the accumulated charge to a capacitor 123 that is connected between the gate terminal of FET 122 and signal ground, with the raw video signal provided to the following circuitry on the source or drain of FET 122. If the charge from the CCD array were continually transferred to capacitor 123, capacitor 123 would continue to acquire more and more charge.
- a second FET 124 is connected across capacitor 123 to discharge capacitor 123 to ground at the pixel rate, resulting in a raw video signal as shown in Figure 6 with the video data superimposed on a very strong clock signal.
- the typical CCD array was designed to run at a clock frequency of approximately 7 MHz; thus as the clock frequency is increased as in the present invention, there is a smearing of some of the charge from a previous pixel into one or more following pixels. This results from the fact that FET 124 has a finite impedance between the drain and source when activated. Therefore there is a time constant that results from the internal impedance of FET 124 and capacitor 123. In the present invention, CCD array 12 is being clocked faster than that time constant, which results in incomplete discharging of capacitor 123 each time FET 124 is turned on.
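A rough model of why a fraction of the previous pixel remains, assuming (this is not stated in the patent) that the reset FET on-resistance and capacitor 123 form a simple first-order RC discharge: when the reset window is only a few time constants long, the residual fraction is roughly exp(-t_reset / (R_on * C)). The component values below are illustrative only.

```python
# First-order RC estimate of the charge left on capacitor 123 after reset.
import math

def residual_fraction(t_reset_ns, r_on_ohms, c_pf):
    tau_ns = r_on_ohms * c_pf * 1e-3        # R * C with pF and ns units
    return math.exp(-t_reset_ns / tau_ns)

# Illustrative numbers: a few ns of reset time against a comparable tau
print(round(residual_fraction(t_reset_ns=4.0, r_on_ohms=300.0, c_pf=10.0), 3))
# -> about 0.26, i.e. roughly a quarter of the previous pixel smears forward
```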
- To counteract the smearing, a digital signal processing solution is implemented in filter FPGA 32 (see Fig. 1). Thus, the present invention makes this correction after the video signals have been digitized. Stated in general terms, the current pixel value is multiplied by a fractional constant of less than one, and then that value is subtracted from the pixel value of the next pixel in time in the same channel (i.e., RGRG or GBGB). If the smearing of a pixel is sufficiently great, then a second smaller fractional value of the current pixel value is also subtracted from the second following pixel in time in the same channel.
- As shown in Figures 1 and 3, there are two video channels of data received from CCD array 12, namely RGRG and GBGB.
- the data rate from CCD array 12 is 64 Mpixel/sec., which is processed with one A/D converter for each channel (i.e., A/D 22 for the RGRG channel and A/D 23 for the GBGB channel).
- Figure 3 illustrates the A/D conversion operation for the 128 Mpixel/sec. embodiment.
- each of the RGRG and GBGB channels of video data has been converted to two data streams, yielding RED, GRN1, GRN2 and BLU.
- the current pixel is a RED pixel from FIFO 24'
- the next pixel in time in the same channel is a GRN1 pixel from FIFO 24".
- the second following pixel in time is the next RED pixel
- the third pixel in time is the next GRN1 pixel, etc.
- the current pixel is GRN2 from FIFO 25'
- the next pixel in time in the same channel is a BLU pixel from FIFO 25".
- the second following pixel in that channel is the next GRN2 pixel
- the third in time is the next BLU pixel, etc.
- Figures 9a and 9b illustrate the one and two pixel anti-smearing techniques described in general above using the data streams from FIFOs 24 and 25 of Figure 1, where the two A/D converter configuration is shown.
- Figures 9c and 9d illustrate the one and two pixel anti-smearing techniques discussed above using the data streams from FIFOs 24', 24" (RGRG channel), 25' and 25" (GBGB channel) of Figure 3, where the four A/D converter configuration is shown.
- Figure 9a includes two single pixel anti-smearing paths, one for the RGRG signal stream and one for the GBGB signal stream from FIFOs 24 and 25 in Figure 1.
- the RGRG data signal stream is applied to the positive terminal of subtractor 154 and to a one pixel delay 150.
- the previous pixel (N-l) from delay 150 is then applied to a multiplier 152 where the pixel value is multiplied by a selected fraction value, x (e.g., 0.2).
- the output of multiplier 152 is then applied to the negative terminal of subtractor 154, where the fractional value of the previous pixel is subtracted from the current value of the present pixel, with the data stream from subtractor 154 being the single pixel compensated video signal for the RGRG channel.
- the GBGB data signal stream is applied to the positive terminal of subtractor 154' and to single pixel delay 150'; the previous pixel value is multiplied by a selected factor, x, by multiplier 152' (the value of this x factor may be slightly different from the x factor of multiplier 152); the reduced value previous pixel is then applied to the negative terminal of subtractor 154' where it is subtracted from the current pixel value for the GBGB data stream, yielding from subtractor 154' a single pixel compensated video signal for the GBGB data stream.
- In Figure 9b the technique of compensating the video data stream for smearing from two previous pixels is illustrated for the two A/D embodiment of Figure 1.
- two data streams are applied to the positive terminal of subtractor 154 and 154', respectively, and to a two byte delay 150" and 150"', respectively.
- the most previous pixel byte (N-1) in each data stream from the delay is multiplied by a factor, x (which might be slightly different from each other), by multipliers 152 and 152', respectively, while the second previous pixel byte (N-2) from the delay is multiplied by a factor, y (which also might be slightly different from each other), by multipliers 156 and 156', respectively.
- Figure 9c is a block diagram that illustrates a one pixel anti-smearing technique described in general above using the data stream from Figure 3 where the four A/D converter configuration is discussed.
- the operation described as follows must be performed twice, once for the RGRG channel, and once for the GBGB channel.
- the RED and GRN1 or the GRN2 and BLU pixel data streams are applied to the positive terminal of first and second subtractors 130 and 132, respectively, and to first and second delay lines 126 and 128, respectively, with each delay line being one pixel byte long (e.g., 8 bits).
- the smearing compensated RED (GRN2) pixel is the result of that subtraction.
- the (N-1) RED (GRN2) pixel from first delay line 126 is applied to second multiplier 136, where it is multiplied by the same preselected, less than unity, fractional value factor x (perhaps 0.25), with the resultant reduced value of the (N-1) RED (GRN2) pixel applied to the negative terminal of second subtractor 132.
- the smearing compensated GRNl (BLU) pixel is the result of that subtraction.
- Figure 9d illustrates smearing correction over the previous two pixels in the data stream using an extension of the technique described above in relation to Figure 9a.
- the difference here from Figure 9c is that delay lines 126' and 128' are now two pixels long (e.g., 16 bits) to hold the two previous pixel bytes in each of the demultiplexed data streams.
- the first pixel subtraction components of Figure 9c are shown here with the same reference numbers and they operate in the same manner as described in Figure 9c.
- the one pixel corrected value from first subtractor 130 is applied to the positive terminal of third subtractor 138.
- the pixel value of the (N-2) RED (GRN2) pixel from delay line 126' is then applied to third multiplier 142, where the value is multiplied by a preselected, less than unity, factor y (where y is smaller than x and perhaps has a value of 0.1), and the resultant multiplied value from third multiplier 142 is applied to the negative terminal of third subtractor 138.
- the two pixel, smear corrected current RED (GRN2) pixel is then provided by third subtractor 138.
- the GRN1 (BLU) pixel is corrected using delay line 128', fourth multiplier 144 (using the same multiplication factor y as in third multiplier 142), and fourth subtractor 140 to provide the two pixel smear corrected current GRN1 (BLU) pixel from fourth subtractor 140.
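A hedged sketch of the two-pixel correction of Figures 9b and 9d, written as an equivalent stream operation rather than the exact delay-line and subtractor wiring: each pixel has x times the previous pixel in time and y times the second previous pixel in time, within the same CCD channel, subtracted from it. The demuxed color streams (e.g. RED and GRN1) are re-interleaved here for clarity, and the factor values x = 0.2 and y = 0.05 are illustrative only.

```python
# Two-pixel smear correction applied to the demultiplexed streams of one channel.

def two_pixel_correct(color_a, color_b, x=0.2, y=0.05):
    """color_a/color_b: demuxed streams of one channel (e.g. RED and GRN1)."""
    interleaved = [p for pair in zip(color_a, color_b) for p in pair]
    corrected = []
    for n, p in enumerate(interleaved):
        prev1 = interleaved[n - 1] if n >= 1 else 0   # previous pixel in time
        prev2 = interleaved[n - 2] if n >= 2 else 0   # second previous pixel
        corrected.append(p - x * prev1 - y * prev2)
    return corrected[0::2], corrected[1::2]           # back to two color streams

red, grn1 = two_pixel_correct([100, 90], [50, 60])
print(red, grn1)  # [100.0, 75.0] [30.0, 39.5]
```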
- the anti-smearing function is performed in firmware as one of the functions of filter FPGA 32.
- the technique of the present invention can easily be extended to correct for smearing from any number of previous pixels in the data stream that may be desired.
- the next aspect of the present invention is a power saving technique to permit operation of CCD imager 12 ( Figures 1 and 2a) at higher and higher frame rates without the imager overheating and ceasing operation after a short period of time (e.g., only seconds at higher frame rates).
- the power saving technique of the present invention is discussed with the aid of Figures 10a through 14c.
- Figures 10a and 10b each show a simplified graphical snapshot view of the contents of the two memory areas of a typical CCD imager 12 (a frame transfer imager) at two different points in time.
- In Figure 10a the contents of image area 146 and storage area 148 are illustrated following the capture of image 150 in image area 146 and the copying of image 150 into storage area 148 as secondary image 152.
- image 150 does not utilize the full image area 146; that is illustrated here to show that the user, for various reasons, might decide to capture the image in less than the full image area (i.e., perhaps the proportions of the electronic image need to be different than the proportions of image and storage areas 146 and 148 for processing purposes).
- In Figure 10b the contents of image and storage areas 146 and 148, respectively, are illustrated following the capture of a new image 150' in image area 146 as secondary image 152 is read out from storage area 148.
- in their general timing relationship, the vertical gate signals, IAG and SAG, are off-set in time from each other.
- the IAG signal vertically gates a new image 150 from lens 10 into image area 146 at substantially the same time that the horizontal signals, SRG and RST, horizontally gate the previous secondary image from storage area 148 of CCD imager 12.
- the SAG signal causes the next image 150 in image area 146 to be copied into storage area 148 as the next secondary image 152. Then that pattern repeats for each successive image.
- the two image areas 146 and 148 each have the same proportions with Y lines of X pixels each.
- the vertical clock signals, Image Area Gate (IAG) and Storage Area Gate (SAG) each include recurring bursts of Y pulses (the same number as there are lines in each memory area) to clock image 150 and secondary image 152 through all Y lines of the respective image area 146 and storage area 148.
- CCD imager 12 presents a considerable amount of capacitance to the IAG and SAG signals when vertically moving each line of electronic image 150 as it is captured in image area 146 and as that electronic image is copied into storage area 148 as secondary image 152.
- CCD imager 12 requires more and more power from signals IAG and SAG to perform the desired tasks since all Y lines of both areas must be advanced for successive images more and more often (i.e., the entire image, all Y lines, in each of image area 146 and storage area 148 is advanced each time).
- Figure 11 is a series of views similar to those of Figure 10a for various frame rates, i.e., less than 1000 fps (frames per second); 1000 fps; 2000 fps; 4000 fps; and 8000 fps. Since the master clock frequency of oscillator 16 (Figures 1 and 2a) does not change, as the user selects higher and higher frame rates to capture the images of interest, the number of lines, and the number of pixels in each line, for each captured image are reduced in proportion to the change in frame rate. Thus, in the examples of Figure 11, when compared to image 150, images 154, 158, 162 and 166 are substantially 1/2, 1/3, 1/5 and 1/7 the size (width and height) of image 150, respectively. Similarly, secondary images 156, 160, 164 and 168 have the same size relationship to secondary image 152.
- gate signals IAG and SAG advance the image in image area 146 and storage area 148 by all Y lines (500 lines in a typical CCD imager) for any frame rate at which the camera is operated, requiring more and more power to do so as the frame rate is increased.
- the image (150, 154, 158, 162 and 166) in image area 146 and the secondary image (152, 156, 160, 164 and 168) in storage area 148 will be a single image that is advanced to the bottom of the corresponding memory area as illustrated in Figure 11.
- the actual height of each saved image in both memory areas varies for the different frame rates since the clock rate of oscillator 16 ( Figures 1 and 2a) remains fixed regardless of the frame rate. Stated differently, since the frequency of operation of oscillator 16 remains fixed throughout the operation of the camera of the present invention, the number of lines in the saved images must be reduced in proportion to the increase of the frame rate.
- CCD imager 12 has 500 lines of 680 pixels each, which is typical of many CCD frame transfer imagers.
- Figures 12a-12e graphically illustrate the contents of image area 146 and storage area 148 of CCD imager 12 immediately after the captured image in the image area has been copied into the storage area, for different frame rates: Figure 12a for a frame rate of less than 1000 fps; Figure 12b for 1000 fps; Figure 12c for 2000 fps; Figure 12d for 4000 fps; and Figure 12e for 8000 fps.
- the frame rates stated above are strictly for purposes of this discussion and the present invention is not limited to those frame rates or to frame rates that are no higher than 8000 fps.
- image 154 in Figure 12b for the 1000 fps case will have one half as many lines as image 150, i.e., 210 lines in height; image 158 in Figure 12c for the 2000 fps case will have one third as many lines as in image 150, i.e., 140 lines in height; image 162 in Figure 12d for the 4000 fps case will have one fifth as many lines as in image 150, i.e., 98 lines in height; and image 166 in Figure 12e for the 8000 fps case will have one seventh as many lines as in image 150, i.e., 68 lines in height.
- image 150 was selected to have a height of 420 lines for purposes of this discussion since 420 is evenly divisible by 2, 3, 5, and 7. The number of pixels in each line of the various images for the various frame rates will also vary in the same proportions.
- the number of pulses in each burst of each of the IAG and SAG signals is tailored to the image height and frame rate being used.
- the number of pulses in each burst need only be equal to the number of lines in the height of the image at the selected frame rate. This is true since what is recorded in the lines of image area 146 above the corresponding image is of no importance since those lines are not read in the present invention when the contents of image area 146 are copied into storage area 148.
- each burst of IAG for Figures 12a-12e will contain 420, 210, 140, 98 and 68 pulses, respectively. This is true since images 150, 154, 158, 162 and 166 are each built from the lower edge of image area 146 upward in this graphical representation.
- each burst of the SAG signal of the present invention contains 500 pulses to copy image 150 into secondary image 152 (i.e., 420 pulses to copy image 150 to storage area 148, and 80 pulses to continue to advance secondary image 152 to the bottom of the memory of storage area 148).
- each burst of the SAG signal in the 210 line image case must also contain 500 pulses (i.e., 210 pulses to copy image 154 into storage area 148 and 290 pulses to continue to advance secondary image 156 to the bottom of the memory of storage area 148).
- Figures 12c- 12e are different from those of Figures 12a and 12b, in that more than one image, with a space between each of them, can be stored in storage area 148.
- storage area 148 contains the secondary image most recently copied, secondary image 160, and the immediately preceding secondary image 160' with a blank space between those two secondary images.
- each burst in the SAG signal will contain 180 pulses (140 pulses to advance image 158 into storage area 148 as secondary image 160 and 40 pulses to advance secondary image 160 one half the difference between 420 and 500 lines).
- the space between secondary images 160 and 160', in this example, is therefore 180 lines.
- storage area 148 contains the secondary image most recently copied, secondary image 164, and the immediately preceding two secondary images 164' and 164" with a blank space between each of the adjacent secondary images.
- each burst in the SAG signal will alternately contain 100 and 101 pulses (98 pulses to advance image 162 into storage area 148 as secondary image 164 and 3 or 4 pulses to advance secondary image 164 approximately one fifth the difference between 490 and 500 lines).
- the space between adjacent ones of secondary images 164, 164' and 164", in this example, is therefore 101, 101 and 102 lines, sequentially.
- storage area 148 contains the secondary image most recently copied, secondary image 168, and the immediately preceding three secondary images 168', 168" and 168"'.
- each burst in the SAG signal will contain 72 pulses (68 pulses to advance image 166 into storage area 148 as secondary image 168 and 4 pulses to advance secondary image 168 approximately one seventh the difference between 476 and 500 lines).
- the space between adjacent ones of secondary images 168, 168', 168" and 168"', in this example, is therefore 72 lines.
- Figures 13a-13e illustrate the contents of image area 146 and storage area 148 after one secondary image has been read out of storage area 148 for each of the conditions discussed above with respect to the corresponding one of Figures 12a- 12e.
- Images are read out of storage area 148 horizontally, one line at a time, and IAG saves the next image (150', 154', 158', 162' and 166', respectively) in image area 146 during the horizontal activity.
- all 500 lines of storage area 148 are read out each time, regardless of the frame rate and resulting height of the image.
- only the bottom secondary image (i.e., the oldest secondary image) is read out by only applying horizontal signal SRG to imager 12 long enough to read out the bottom image from storage area 148.
- a line at a time of the bottom secondary image is horizontally read out of storage area 148.
- the remaining information in storage area 148 shifts downward a line at a time.
- the 140, 98 or 68 lines of the bottom secondary image 160', 164" or 168"', respectively, must be read out horizontally, plus the correction factor number of lines for each example, to bring the next image in storage area 148 to the bottom.
- the number of lines to be read out horizontally is equal to the number of lines in the dead space between images in each example (i.e., 220 for the 140 line image; 103 for the 98 line image; and 76 for the 68 line image).
- the horizontal and vertical control signals that are applied to CCD imager 12 are generated by control FPGA 28. Since the bulk of the power consumed by imager 12 results from the vertical capturing and copying of the images in image area 146 and storage area 148, the present invention minimizes the duration of the vertical signals IAG and SAG (a rough illustrative sketch of the resulting saving follows this discussion of Figures 10a through 14c). There is also some additional power and time saving that results from the reduction of the number of lines in each image that need to be read out; however, that power saving is dramatically lower than that saved by the reduction of the vertical movement of the images.
- in Figures 14a-14c there are comparisons of the IAG and SAG signals for the various image sizes and frame rates.
- the signals IAG and SAG are shown on the same time line and beginning at the same point in time. That is done merely to show the relative lengths (i.e., number of pulses in each burst) of those two signals with respect to each other for each example. As discussed above, the bursts of signals IAG and SAG do not always occur at the same time.
- burst 170 of IAG will include 500 pulses to place image 150 in image area 146 each time, regardless of the actual height of that image.
- burst 172 of SAG will include 500 pulses to copy image 150 in image area 146 into secondary image 152 in storage area 148 each time, regardless of the actual height of the image.
- the bursts of pulses in the IAG and SAG signals are equal in length, and length "a" will be equal to the maximum number of lines in the CCD imager.
- burst 170' includes 420 or 210 pulses, respectively (i.e., in general "b" is equal to the number of lines in the corresponding image).
- the SAG signal needs a full 500 pulses in each burst to copy image 150 or 154 into storage area 148 as secondary image 152 or 156, respectively.
- "a" is equal to the full number of lines in the selected imager. Therefore in the examples of Figures 12a and 12b, there will be a power saving during the IAG -function that is inversely proportion to the reduction in the height of the image as compared to the prior art.
- each burst 170' of IAG will contain 420 or 210 pulses (i.e., "b" equals either 420 or 210), whereas each burst 172' of SAG will contain 500 pulses (i.e., "a" equals 500).
- the reason that the length of burst 172" is not exactly equal to the length of burst 170" is that the number of lines in CCD imager 12 (e.g., 500 in this example) is not an integer multiple of the image height at any of the example frame rates discussed above (e.g., 500 is 3.57 times 140).
- the number of pulses in each burst of SAG needs to account for the extra lines in the storage area 148 that do not contain image information.
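The following sketch is illustrative only and is not part of the patent: it simply collects the example vertical burst lengths quoted above for Figures 12a-12e and, under the simplifying assumption that vertical-drive power scales roughly with the number of IAG and SAG pulses applied per frame, compares them against a driver that always clocks all 500 lines of both areas. The names and that assumption are not taken from the specification.

```python
# Illustrative sketch only: example vertical burst lengths from Figures 12a-12e,
# compared against a driver that always advances all 500 lines of both areas.
STORAGE_LINES = 500                     # lines in each of the image and storage areas

# frame rate -> (image lines, IAG pulses per burst, SAG pulses per burst)
EXAMPLES = {
    "<1000 fps": (420, 420, 500),       # Figure 12a
    "1000 fps":  (210, 210, 500),       # Figure 12b
    "2000 fps":  (140, 140, 180),       # Figure 12c
    "4000 fps":  (98,  98,  100),       # Figure 12d (alternating 100/101)
    "8000 fps":  (68,  68,  72),        # Figure 12e
}

PRIOR_ART = STORAGE_LINES + STORAGE_LINES    # full IAG burst + full SAG burst

for rate, (lines, iag, sag) in EXAMPLES.items():
    vertical_pulses = iag + sag
    print(f"{rate}: {lines}-line image, {vertical_pulses} vertical pulses/frame, "
          f"~{vertical_pulses / PRIOR_ART:.0%} of the prior-art clocking")
```

As the comparison shows, the saving grows with the frame rate because both the IAG and (for the higher rates) the SAG bursts shrink, even though the horizontal clocking still runs at the full pixel rate.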
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
Abstract
A high speed electronic camera and a method for reducing the power consumed by a CCD array (12) as the image frame rate increases while maintaining the same operational frequency. The camera includes a lens assembly (10) disposed to receive an image and focus it onto a CCD array (12) to be received by an active image receiving area of a first number of lines. The CCD array (12) also has an active image storage area with the first number of lines to save an image transferred from the active image area wherein each image includes a second number of lines with the second number being less than the first number. The power saving is accomplished with a first of vertical image drivers (14) providing a first pulsed signal to the active image receiving area of the CCD (12) with a third number of pulses in the first pulsed signal equal to the second number of lines of the image being written into the active image area, and a second of the vertical image drivers (14) providing a second pulsed signal to the active image storage area of the CCD (12) to store the image from the active image area into the active image storage area with a fourth number of pulses in the second pulsed signal.
Description
REDUCED POWER, HIGH SPEED, INCREASED BANDWIDTH CAMERA
Cross Reference
This application is a Continuation-In-Part of an application for HIGH SPEED, INCREASED BANDWIDTH CAMERA which was filed June 18, 1998, and has been given serial number 09/099,910.
Field of the Invention
The present invention is a digital camera, more specifically a very high speed camera that can be used to stop action of events that occur at very high speeds at a lower rate of power consumption.
Background of the Invention
Historically, high speed electronic cameras have used an imager that has a multitude of parallel outputs, or channels (e.g., 16, 32, or even 256 or more channels with one for each line in the display), to increase the effective bandwidth of the camera. In such implementations, each channel requires duplicate electronic circuits that run at the same nominal video rate. Eventually, all of the channels have to be recombined to display the final image. There are many drawbacks to such implementations. One is the need for special imagers (rather than off-the-shelf) having many output channels, which dramatically increases the price since such imagers may cost 50 times as much as an off-the-shelf two channel imager. A second is the increased costs for the additional components especially as higher and higher bandwidths are desired. A third is the labor intensive problem of correcting the imbalance in outputs of the large number of channels resulting from tolerances of the various elements used in each channel. With higher and higher speeds in electronic cameras, the differences between the various channels, even slight differences, contribute more and more to undesirable visual effects in the displayed images presented by those cameras.
Additionally, in existing high speed cameras, as the frame rate increases the power consumed by the CCD imager increases at least proportionally to the increase in the frame rate.
It would be desirable to have a fast electronic camera (i.e., high frame rate with shuttering capability) with few channels that can provide improved image resolution at low cost with a few high speed, high cost electronic components in combination with a low cost imager, in lieu of many low cost electronic components and a high cost imager as in the prior art, and that also operates the CCD imager at a reduced rate of power consumption as the frame rate increases. To provide that advantage it would be desirable to multiplex the higher cost components to provide an increased effective bandwidth while using low cost components without having to correct imbalance. The present invention provides such a camera.
Summary of the Invention
The present invention provides a high speed camera with various method, computer-implementable method and apparatus improvements to a previous high speed camera design while enabling higher resolution and use of low speed components, resulting in a high speed camera that provides high speed processing while keeping component costs low.
One such improvement is a high speed camera capable of processing image data from a two channel CCD array at one-quarter the pixel rate at which that image is provided by the CCD array. Each channel is clocked at a selected frequency to achieve a total pixel rate provided by the CCD array of twice the selected frequency. Then, simultaneously and independently, two analog to digital conversions are performed on the image data signal from each channel of the CCD array (four A/D conversions) with one conversion on each data signal being performed on the rising edge of a half frequency signal (half the frequency of the selected signal) and the second conversion being performed on the falling edge of the half frequency signal to produce four pixel data signals of the image at one quarter the pixel rate of the overall image data from the CCD array, thus allowing the use of components capable of operating at half the frequency at which the CCD is clocked.
An added benefit to this method is that the data signals from the two channels are automatically demultiplexed with the individual color pixels being in digital form. This results from one channel of the CCD array providing a RGRG data signal of the image with the analog to digital conversion splitting that signal into a RED data signal and a GRN1 data signal. Similarly, the other channel of the CCD array provides a GBGB data signal of the image with the analog to digital conversion splitting the GBGB data signal into a GRN2 data signal and a BLU data signal. Working at the high frequency at which the camera of the present invention is intended requires that a variable phase shift circuit be used to clock each of the A/D converters to process the signals from any CCD array. The use of a programable phase shift is necessary to finely adjust the clock pulses to the A/Ds to assure that sampling occurs at the peak of each signal. This is true at higher frequencies independent of the number of channels of the CCD array. For the particular circuit configuration discussed above, four phase shift circuits are needed, one for each A/D converter. The present invention also presents a high speed, temperature compensated phase shift circuit based on the half frequency signal for that purpose. That circuit includes a feedforward path and a feedback path. In the feedforward path there is a voltage variable, active filter, delay circuit coupled to receive and selectively delay the half frequency clock signal. From the active filter the clock signal is converted by a differential to TTL level translator to a TTL sample clock signal selectively delayed from the half frequency clock signal, with said sample clock signal being disposed to be applied to a corresponding one of the four A/D converters as a clock signal. The feedback path includes an exclusive NOR gate having one input terminal coupled to receive the sample clock from said differential to TTL level translator and a second input terminal coupled to the input of the active filter delay circuit to produce a signal that is proportional to the phase difference between the half frequency clock and sample clock signals, together with the polarity of that difference. This difference signal is integrated to provide an analog signal that is proportional to the phase difference. Additionally, there is a D/A converter coupled to the control subsystem of the camera to receive a signal corresponding to the desired phase shift of a particular variable phase shifter circuit and convert that phase shift signal to an analog signal. Finally, there is an operational amplifier coupled to receive the analog difference signal from the integrator and said analog phase shift signal from the D/A converter, and coupled to the active filter delay circuit to create a voltage feedback signal that adjusts the phase delay presented by the variable phase shifter circuit.
There are several other features of the present invention that are needed due to the high frequency at which the CCD array is being clocked, and which are independent of how many channels the CCD array has, or the actual configuration of the camera. One of those is a pair of analog, bipolar clock drivers to drive the horizontal functions of the CCD array since the necessary speed and power is not available in the horizontal driver section of commercially available CCD drivers. The horizontal drivers of the present invention include a high frequency bipolar buffer stage coupled to receive a corresponding control signal from the control subsystem of the camera, a plurality of capacitors coupled to the buffer stage to compensate for parasitic capacitance, high frequency bipolar NPN and PNP transistors connected in a push-pull arrangement with bases coupled to the buffer stage and emitters coupled to a corresponding interactive terminal of the CCD array, and an R-L network coupled between the combined bases and the collector of one of the NPN and PNP transistors to speed up the transitions of the transistors in the push-pull arrangement. Additionally, signal voltage level shifting diodes to define the minimum and maximum signal levels from the buffer stage to match operational signal levels of the CCD array could be included if needed.
In the design of the present invention, the high frequency CCD arrays are clocked faster than the internal time constant of the output reset stage of the CCD array resulting in each output data pixel of the image containing a contribution from a previous data pixel of the image. The present invention provides a method and apparatus to minimize the residual effect of the previous data pixel on a current data
pixel in the data pixel signal stream. This is accomplished by first converting the data pixel signal stream to a digital pixel stream with the digital pixel stream next delayed by a time equivalent to two pixel time periods. Then a value of the previous pixel is multiplied by a preselected factor having a value of less than one to obtain a fractional value of the previous pixel which is subtracted from the value of the current data pixel to obtain a compensated value for the current pixel. This process is thus repeated for all of the pixels in the data stream to compensate for the smearing effect.
As the clocking speed is increased, the smearing effect is more severe. Additional previous pixel values can be multiplied by progressively smaller factors with the reduced values of two, three, etc. previous pixels all subtracted from the value of the current pixel.
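As a concrete illustration of the compensation just described, the following minimal sketch operates on a digitized pixel stream. The coefficient values and the function name are assumptions for illustration only; the specification states only that each factor is a preselected value less than one.

```python
# Minimal sketch of the smear compensation described above.  The coefficients
# k1 and k2 (fractions of the one- and two-back pixel values) are placeholders.
def compensate_smear(pixels, k1=0.05, k2=0.0):
    """Return pixels with a fraction of the previous pixel value(s) removed."""
    out = []
    for n, value in enumerate(pixels):
        prev1 = pixels[n - 1] if n >= 1 else 0
        prev2 = pixels[n - 2] if n >= 2 else 0
        # subtract the residual contribution left over from the earlier pixel(s)
        out.append(value - k1 * prev1 - k2 * prev2)
    return out

# example: a bright pixel leaves a small residue on the following pixels,
# which the subtraction removes
print(compensate_smear([1000, 60, 10], k1=0.05))   # -> [1000.0, 10.0, 7.0]
```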
Another feature of the present invention utilizing the digitized pixel data signals of the camera is the determination of, and compensation for, the dark reference (i.e., black level) offset of the pixel data streams from the CCD array. To perform that function the pixel data from the dark reference image area of the CCD array is used to determine and remove an average dark reference level from the data pixels originating from the active image receiving area of the CCD array. As in the other embodiments of the present invention, the data signals from the CCD array are first applied to an A/D converter to convert those signals to digital pixels since the dark reference determination is performed in the digital domain in the present invention.
Next the average value of a selected number of the dark reference pixels from the digital pixel stream for the same image line is subtracted from the level of each active image pixel. This process is performed on the pixel data for each line of the image. Another technique to further improve the performance of the high speed camera of the present invention is to convert the average dark reference level to an analog offset signal, which is used to hold the DC level of the output operational amplifier from the corresponding channel of the CCD array constant. This provides two advantages: one is to keep the operational amplifier on the output of the CCD array from going into saturation; another is to maximize the usable input dynamic range of components that follow the CCD array.
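A hedged sketch of the per-line dark-reference correction described above follows. The number of dark reference pixels per line and the names used are assumptions; the specification says only that a selected number of dark pixels is averaged for each line.

```python
# Sketch of the per-line dark-reference correction: average the dark-reference
# pixels at the start of a line and subtract that level from the active pixels.
def remove_dark_reference(line_pixels, num_dark=16):
    """line_pixels: one line of digitized pixels, dark-reference pixels first."""
    dark = line_pixels[:num_dark]
    average_dark = sum(dark) / len(dark)          # black level for this line
    active = line_pixels[num_dark:]
    # shift the active pixels so that the dark level corresponds to zero
    return [p - average_dark for p in active]
```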
The present invention also provides a high speed electronic camera and a method for reducing the power consumed by a CCD array as the image frame rate increases while maintaining the same operational frequency. The high speed electronic camera of the present invention includes a lens assembly disposed to receive an image and focus it onto a CCD array that has an active image receiving area with a first number of lines and a plurality of interactive terminals with the active image receiving area disposed to receive the image from the lens assembly. The CCD array also has an active image storage area with the first number of lines to save an image transferred from the active image area wherein each image includes a second number of lines with the second number being less than said first number. The camera further includes an oscillator to define the maximum operational signal frequency of the camera, a control subsystem to generate internal control signals, a plurality of vertical image drivers coupled to corresponding ones of the interactive terminals of the CCD array, with the oscillator and control subsystem vertically advancing image charges through the CCD array a line at a time under control of the oscillator utilizing signals received from the control subsystem, and a pair of horizontal image drivers coupled to corresponding ones of the interactive terminals of the CCD array, with the oscillator and control subsystem horizontally advancing image pixel charges through, and out from, the CCD array as image pixel bit signals under control of the oscillator utilizing signals received from the control subsystem. In addition, a first of the vertical image drivers provides a first pulsed signal to the active image receiving area of the CCD with a third number of pulses in the first pulsed signal equal to the second number of lines of the image being written into the active image area, and a second of the vertical image drivers provides a second pulsed signal to the active image storage area of the CCD to store the image from the active image area into the active image storage area with a fourth number of pulses in the second pulsed signal.
The method reduces the power consumed by the CCD array as the image frame rate increases while the same operational frequency is maintained. This is accomplished with the first of the vertical image drivers providing a first pulsed signal to the active image receiving area of the CCD with the third number of pulses in the first pulsed signal equal to the second number of lines of the image being written into the active image area, and the second of the vertical image drivers providing a second pulsed signal to the active image storage area of the CCD to store the image from the active image area into the active image storage area with the fourth number of pulses in the second pulsed signal.
Brief Description of the Figures
Figure 1 is a block diagram representation of the high speed camera system of the prior art.
Figure 2a is a block diagram of the implementation of the camera head module of the present invention. Figure 2b is a schematic representation of a high frequency horizontal
CCD power driver for use in the present invention as in Figure 2a.
Figure 3 is a block diagram representation of the analog to digital conversion and data signal demultiplexer configuration of the present invention. Figure 4 is a block diagram of the programable phase shifter of the present invention.
Figure 5 is a detailed block diagram of an individual variable clock phase shifter circuit of the present invention which is a component part of the phase shifter as in Figure 4.
Figure 6 is a representation of an analog video signal from the CCD imager array of Figures 1 and 2a.
Figure 7 is a block diagram of the video black level determination technique of the present invention.
Figure 8 is a schematic representation of a typical output stage of a CCD array. Figure 9a is a block diagram that illustrates single pixel smearing compensation for the embodiment of Figure 1 that includes two A/D converters. Figure 9b is a block diagram that illustrates double pixel smearing compensation for the embodiment of Figure 1 that includes two A/D converters. Figure 9c is a block diagram that illustrates single pixel smearing compensation for the embodiment of Figure 3 that includes four A/D converters. Figure 9d is a block diagram that illustrates double pixel smearing compensation for the embodiment of Figure 3 that includes four A/D converters.
Figures 10a-b are simplified graphical representations of the operation of a CCD imager.
Figure 11 is a series of graphical representations of the image and storage areas of a CCD imager after an image is copied from the image area to the storage area at various frame rates as per the prior art.
Figures 12a-e are graphical representations of the image and storage areas of a CCD imager after an image is copied from the image area to the storage area at various frame rates as per the present invention.
Figures 13a-e are graphical representations of the image and storage areas of a CCD imager after an image is read out from the storage area at various frame rates as per the present invention.
Figures 14a-c illustrate the waveforms of the vertical clocking signals applied to a CCD imager at various frame rates as per the present invention.
Detailed Description of the Preferred Embodiments
Figure 1 presents a block diagram of an earlier version of a two channel high speed camera system. The overall system concept of that earlier version carries over into the camera system of the present invention that runs faster than that earlier version. As discussed below, the present invention incorporates numerous dramatic changes that permit the camera system of the present invention to run at twice the speed of the earlier version of the camera. To best understand the present invention, the prior version of the two channel high speed camera is discussed first. The earlier version of the camera system runs at a 32 Mpixel/sec. rate per channel (64 Mpixels/sec. overall), whereas the camera system of the present invention runs at a 64 Mpixel/sec. rate per channel (128 Mpixels/sec. overall) with that being achieved through the improvements that are discussed below that allow for the doubling of the pixel rate while still permitting the use of the majority of the lower cost components used by the earlier version of the camera system.
Initially there is presented an operational overview of the previous version of the camera having the basic architecture shown in Figure 1. A camera as in Figure 1 consists of four component structures: camera head module 2; bidirectional interconnection cable 4; control subsystem 6; and a user interface 8. For convenience,
and to permit the camera head module 2 to be as small as possible, the number of components within that module is limited with the camera head being connected to the control subsystem via bidirectional cable 4 which carries data and control signals in both directions, as well as DC power from the control subsystem to camera head module 2.
Camera head module 2 includes a lens system 10 that focuses the desired image on a two channel CCD imager array 12 (e.g., TC236). Providing the necessary clock, or control signals to CCD array 12 are clock drivers 14 (e.g., a pair of TMC57253 chips connected in parallel) which in turn receive those several control signals from differential/TTL converter 19 having received differential logic signals from control subsystem 6 via twisted pairs in cable 4 (including RST, SRG, IAG and SAG shown in Figure 1 and discussed more fully below). Thus, differential/TTL converter 19 converts the differential control signals received via twisted pair to standard TTL signals before applying those signals to clock drivers 14. Clock drivers 14 then use the control signals to convert them into the power clock signals that are needed to drive CCD array 12 with timing provided by oscillator 16 which operates at a 32 MHz rate.
As will be seen in the discussion that follows with respect to the operation of control subsystem 6, oscillator 16 is the master clock for the entire camera system. Oscillator 16 is located in camera head 2 since the location of clock signal edges is most critical in the definition of the clock signals that are applied to CCD array 12, i.e., the capturing of the image is the most critical timing operation of the camera. Once the image is captured, provision is made in the downstream circuits to reposition the various clock signal edges as is discussed below. Further, as can be appreciated, if a master clock signal were sent through cable 4, given the length and potential interaction with other signals being carried by that cable, the signal edges of a master clock signal at the opposite end of cable 4 in such an environment could become altered. Additionally, 32 MHz was selected in the previous system, and is also used in a portion of the system of the present invention, since that frequency is approaching the bandwidth limit of many of the components used in both camera head 2 and control subsystem 6.
Coming from CCD array 12, in response to the image from lens system
10 and the clock driver signals, is a pair of video signals (GBGB [green-blue-green-blue] and RGRG [red-green-red-green] in a color implementation, and two monochrome channels in a black and white implementation), one from each channel, that are very small in amplitude, thus they are directed to amplifiers 18 to be amplified before being applied to cable 4 for transmission to control subsystem 6. The clock signal (CLK) from oscillator 16 is also sent across cable 4 to control subsystem 6. Locating oscillator 16 in camera head 2 allows transmission of the actual clock signal that was used to generate the video signals to control subsystem 6, via cable 4, together with the video signals so that all of the clock edges agree at the point in time that the video signals are applied to cable 4. Thus, when those signals arrive at control subsystem 6, the video and clock signal edges are much closer in time to each other than they would have been if the clock had been generated in control subsystem 6 and transmitted to camera head 2. Even with this approach the clock edges and video signals are not related closely enough for proper image capture due to cross-talk between various wires in cable 4, the length of cable 4, and the length of traces on the PC boards of camera head 2 and control subsystem 6, each of which can have a differing phase shift effect on the clock signal and each of the video signals. In any design, a CCD array typically requires many different clock signals. For example, the CCD array being used in both the previous and present invention camera designs needs one clock signal to move the image down (IAG), another clock signal to move the stored image down (SAG), still another clock signal for shuttering the array, yet another clock signal to move the image horizontally (SRG) and still another clock signal to reset the output buffer of each channel (RST) of CCD array 12 after every pixel. The two clock signals performing horizontal functions (SRG and RST) have the same frequency as each other. Similarly, all of the clock signals performing vertical functions (IAG and SAG) also have the same frequency as each other, however, the frequency at which the vertical functions are performed is lower than the frequency at which the horizontal functions are performed. The vertical functions present a very heavy load since each vertical clock signal must move all of the image pixels together in parallel, whereas the horizontal functions present a much lighter load since each horizontal clock signal is moving only one line of pixels each time instead of the entire image. Thus, the horizontal functions can be driven
much faster. In the previous design, two clock driver chips (TMC57253) were connected in parallel to provide enough current to accomplish the high speed horizontal switching. However, in that design the horizontal drivers in those chips are being run at, or near, the operational limit. Control subsystem 6 receives the video signals (RGRG and GBGB) and the clock signal (CLK) as input signals to amplifiers 20 and 21, and programable phase shifter 26, respectively. Amplifiers 20 and 21 are similar to amplifiers 18 in camera head 2 and are provided to compensate for signal strength lost by the video signals in cable 4 (i.e., the longer cable 4, the higher the gain of amplifiers 20 and 21), as well as to buffer the inputs of A/D converters 22 and 23 with a low impedance. From amplifiers 20 and 21, the amplified video signals are applied to A/D converters 22 and 23, respectively, to convert the analog video signals to digital video signals.
CCD array 12 produces two output signals, so there are two channels of video data being processed simultaneously throughout the various elements of the camera before the final processed image is presented to the user in any of the output image subsystems. From A/Ds 22 and 23 the corresponding digital video signals are applied to FIFO (First In First Out) memories 24 and 25, respectively. In turn, FIFOs 24 and 25 provide the corresponding digital video signal to several additional components in control subsystem 6. These include RAM 30, filter FPGA (Field Programable Gate Array) 32 and a PCI bus interface 36. PCI bus interface 36 is optional and is provided in those camera systems where the user wants to utilize the images in another system that contains a PCI bus (e.g., a PC or network).
In both the previous and present invention camera designs, filter FPGA 32 is programed to perform various functions on the digital video data under control of microprocessor 34 (e.g., 68HC11) and control FPGA 28. The video signals are received by filter FPGA 32 either directly from FIFOs 24 and 25, or from RAM 30 depending on the time necessary for filter FPGA 32 to perform various tasks assigned to it. From filter FPGA 32 the video data, having been packetized into 24 bit words, are transferred via a parallel bus to video encoder 38 where the video data is converted to world standard TV signals. Those standardized TV video signals are then available to the user directly, or they are provided to user interface 8. Additionally, control FPGA 28 provides the control signals, including RST, SRG, IAG and SAG to clock
drivers 14 via twisted pairs in cable 4 and differential/TTL converter 19 in camera head 2.
Microprocessor 34 controls each functional block, makes the image smaller or larger, and performs other functions. Microprocessor 34 also controls controller FPGA 28 which in turn controls FIFOs 24 and 25 and PCI bus interface 36. As stated above, the clock signal transitions often have moved relative to corresponding positions in the video signals (RGRG and GBGB) on the control subsystem 6 side of cable 4 with this repositioning being substantially due to the construction of cable 4, the relative position of each wire or cable within cable 4, as well as the length of cable 4. To correct for that phase shift of the clock edges, programable phase shifter 26 is provided to reposition those clock edges so that the video image can be reconstructed in control subsystem 6. Since this movement of clock edges is substantially related to the individual cable, programmable phase shifter 26 needs only to be programed, or reprogrammed, whenever cable 4 is changed. This adjustment of clock signal edges in control subsystem 6 is necessary since the video bit rate is fast and the video signal that control subsystem 6 is trying to snag is a very narrow peak. Accordingly, the clock edge must occur at the same point in time as does the video pulse or else the video image cannot be recovered and no image appears at any of the output points of the system. The adjusted clock signals from programable phase shifter 26 are then applied to A/D converters 22 and 23, and control FPGA 28. The realigned clock signal edges when applied to A/D converters 22 and 23 cause the digitization of the video signals to occur synchronously at each peak.
Since no two cables, even those of the same length, are exactly the same, programable phase shifter 26 cannot be automatically programed, and therefore must be programed manually each time the cable is changed. This is typically done at the factory and is performed visually by an operator watching the image on CRT 42 and varying the program setting of programable phase shifter 26 by controlling microprocessor 34 via keyboard 40. This procedure is the same for both the camera system of the previous design, as well as those that include the features of the present invention.
The basic difference between the camera system of the previous design
and that of the present invention is that in the present invention camera system the video pixel rate from the CCD imager array is 64 Mpixel/sec. per channel (128 Mpixels/sec. overall), as opposed to 32 Mpixel/sec. per channel (64 Mpixels/sec. overall) of the prior design, while using double the oscillator 16 clock rate (64 MHz) to capture the image in camera head 2 and still maintain the same clock rate (32 MHz) for the digital components in control subsystem 6. One of the unique things about the present invention is the ability to double the pixel rate while maintaining the same digital component clock rate. It is necessary to maintain the same clock rate for the digital components so that the majority of the digital components can be of the inexpensive type since 32 MHz is substantially the fastest that those components as used in the previous design camera system will operate. To accomplish the doubling of the video speed of the camera of Figure 1, several of the blocks must be implemented in different ways than they are currently implemented in the previous design camera. One of those necessary changes is the substitution of a 64 MHz clock for oscillator 16 and the implementation of clock drivers 14. As discussed above, in the prior art design clock drivers 14 were implemented with two TMC57253 CCD driver chips connected in parallel to generate more drive current to perform the horizontal functions, with the input signal to the chips being a TTL signal (0-5 V) and the output signals that are applied to CCD array 12 being CMOS signals (12 Vp-p). That being the case, the drivers of TMC57253 are in essence power drivers.
Figure 2a is a block diagram of the implementation of camera head module 2' of the present invention. Here oscillator 16' is a 64 MHz clock. Also, the vertical drivers for CCD array 12 are implemented with a single TMC57253 driver 14' that receives the vertical clock signals, IAG and SAG, from control FPGA 28 via differential to TTL converter 19'. This implementation of the vertical drivers is possible since less power is required for the vertical drivers, thus the present application remains within the bandwidth of the TMC57253 vertical drive circuits even with the doubling of the clock speed. However, for the horizontal drivers of the TMC57253, the doubling of the clock speed does exceed the available bandwidth of that chip even if multiple TMC57253 driver chips are connected in parallel. Thus, the horizontal channels of the TMC57253 chip are not used in the present implementation,
and two identical discrete high speed/high power driver circuits 15 and 17 are used to power the horizontal Reset (RST) and Serial Register Gate (SRG) signals needed by CCD array 12 to perform the horizontal shifting functions with the RST and SRG signals from control FPGA 28 being received via differential/TTL converter 19. TC236 CCD arrays require these two signals, RST and SRG, and those are the two that have to run at the full pixel rate of 64 Mpixel/sec. in the present invention. Additionally, Figure 2a shows a frequency divider 13 coupled to 64 MHz oscillator 16' to divide the frequency by 2 to provide a 32 MHz clock signal to control subsystem 6, and amplifiers 18 of Figure 1 implemented as two separate operational amplifiers 18' and 18" with their operation discussed in more detail below. Note that the RST, SRG, IAG and SAG signals can alternatively be generated elsewhere in the camera system (e.g., by an FPGA located in camera head 2).
The reason that the two horizontal driver stages of a TMC57253 chip can not be used in the implementation of the present invention is a function of the basic design of that chip and the CCD arrays with which it is designed to interface. The drivers that perform the vertical functions are typically operating into a 4000 pfd capacitance load, whereas the horizontal drivers are typically operating into a 10-70 pfd load, thus the vertical drivers in the TMC57253 chip are designed to be very powerful but not that fast, and conversely the horizontal drivers are designed to be fast but not that powerful. Thus, in the present invention, the two horizontal drivers of conventional clock driver chips, such as TMC57253, can not be used due to the slow rise and fall times and long delay times.
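As a rough back-of-envelope check on why edge speed is the limiting factor, the arithmetic below is derived from the capacitance, voltage swing and edge-time figures quoted in this description for the discrete driver of Figure 2b; the sketch itself is illustrative and not part of the patent.

```python
# Rough estimate, using I = C * dV/dt, of the drive current needed to swing a
# CCD gate load through the 12 V CMOS excursion in about 2 ns.
def drive_current_amps(load_farads, swing_volts=12.0, edge_seconds=2e-9):
    return load_farads * swing_volts / edge_seconds

print(drive_current_amps(75e-12))     # horizontal-type load: roughly 0.45 A peak
print(drive_current_amps(4000e-12))   # a 4000 pF vertical load at the same edge
                                      # rate would need roughly 24 A, which is
                                      # why the vertical drivers trade speed
                                      # for drive capability
```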
Thus the discrete circuit of Figure 2b was designed for horizontal power drivers 15 and 17 of Figure 2a utilizing high frequency bipolar transistors to produce the needed level shifter/clock driver having good edge times while working at 64 MHz. A separate combined voltage shifter and high frequency power driver of the type shown in Figure 2b is used for each of the RST and SRG horizontal gate clocks of CCD array 12. As shown in Figure 2b, on the right side the input signal for either the RST or SRG function is received from control FPGA 28 via a twisted pair of wires in cable 4 and differential to TTL converter 19. That received signal first undergoes a TTL to 12 Vp-p level shift (since CCD array 12 is a CMOS device) by applying the control signal to the base of positively biased PNP transistor 60 and the clamping
diodes 76 and 78 connected to the collector of transistor 60. Following the voltage level shift, the level shifted control signal from diodes 76 and 78 is applied to a pair of high frequency transistors 62 and 64 connected in a push-pull driver circuit arrangement with the power clock signal being delivered to the RST or SRG inputs of CCD array 12 from the connected emitters of transistors 62 and 64. In both stages of the power driver of Figure 2b, the transistors are not driven into saturation to insure the rapid switching times needed. A circuit of this design is capable of charging and discharging a capacitance of about 75 pF over the full voltage excursion with a rise and fall time of about 2 nsec. at the 64 MHz rate. The level shifting portion of the circuit of Figure 2b also includes capacitor 74 connected in parallel with resistor 72 with both connected in series with the base of transistor 60. These components are provided to overcome the parasitic capacitance of transistor 60 at 64 MHz which otherwise would slow down transistor 60. Additionally, emitter capacitor 66 increases the current on the signal to help transistor 60 turn on and off at the 64 MHz rate. Additionally, there is a series resistor 82 and inductor 80 circuit connected between the base of the push-pull transistors 62 and 64 and the negative bias voltage that operates in conjunction with the parasitic capacitance of the push-pull transistors to speed up their operation and thus create faster edge transitions.

Another area where changes were made to accommodate the doubling of the speed of the present invention camera over that of the previous design is A/D converters 22 and 23 of Figure 1. In the previous design, CCD imager array 12 provides two output channels of multiplexed data, each at 32 Mpixel/sec, whereas in the camera of the present invention that same CCD imager array produces the same two channels of multiplexed data at 64 Mpixel/sec. each. One of those channels provides RGRG multiplexed data and the other GBGB multiplexed data as indicated in Figure 1 at the output of amplifiers 18 and the inputs to amplifiers 20 and 21.
Thus, in the present invention the two data channels from CCD imager array 12 have a combined bandwidth of 128 Mpixel/sec. At the time of the present invention there were low cost A/D flash converters with 10 bits having a maximum bandwidth of 48 Msamples/sec. Thus, to accommodate the bandwidth of the high speed camera of the present invention with the same type of A/D converters it is
necessary to use two A/D converters of that type for each channel of multiplexed data from CCD array 12.
Figure 3 illustrates the A/D configuration of the present invention to handle the 128 Mpixel/sec. bandwidth from CCD array 12. Thus, A/Ds 22 and 23 and FIFOs 24 and 25 of Figure 1 are replaced with the configuration shown in Figure 3. So A/D 22 (Fig. 1) is replaced with A/Ds 22' and 22" with both receiving the RGRG input signal from amplifier 20, and A/D 23 (Fig. 1) is replaced with A/Ds 23' and 23" with both receiving the GBGB input signal from amplifier 21. Then, essentially, each of A/Ds 22' and 23' are clocked with the 32 MHz CLK signal from frequency divider 13 (see Fig. 2a) via cable 4 and programable phase shifter 26', and A/Ds 22" and 23" are clocked with CLK-not, the inverse of the clock signal, to the companion A/D in each case. Thus by using both the rising and falling edges of the CLK signal, the effective sampling rate of the A/D function is 128 Mpixels/sec. with a 32 MHz clock signal in control subsystem 6. The A/D clocking scheme described in the previous paragraph is somewhat of an oversimplification of what is actually needed. While the four clock signals to A/Ds 22', 22", 23' and 23" are substantially as described above, in fact, each of the four clock signals needs to be individually adjusted in the initial setup procedure as described above with the programable phase shifter 26 providing four separate clock signals. In other words the CLK signal needed by each of A/Ds 22' and 23' may be slightly out of phase with each other, as might the CLK-not signals used with A/Ds 22" and 23". Additionally, each CLK-not signal for A/Ds 22" and 23" may not be exactly 180° out of phase with the corresponding CLK signal needed for A/D 22' and 23', respectively. Thus, all four of the clock signals must be independently adjustable during set-up. Therefore, with this approach it is possible to get a 128 Mpixel/sec. rate with slower, low cost parts.
As discussed above for the prior art camera, the setting of the clocking phase shifts for A/Ds 22', 22", 23' and 23" is a one time set-up per actual cable 4 that is being used. These variations of clock phase shift result from variations in line length, orientation of lines with respect to other lines in the cable and varying printed circuit (PC) trace lengths, as well as the fact that the video signals are carried through coaxial cables within cable 4 while other signals (e.g., CLK) are carried by twisted
pairs so the propagation times are different.
The A/D configuration of Figure 3 also automatically demultiplexes the two multiplexed input signals received from CCD array 12. Thus, A/Ds 22' and 22" each receive the RGRG signal with red as the digitized output of A/D 22' and with green 1 (GRN1) as the digitized output of A/D 22". Similarly, A/Ds 23' and 23" each receive the GBGB signal with green 2 (GRN2) as the digitized output of A/D 23' and with blue (BLU) as the digitized output of A/D 23". Note that since green has twice as many pixels in the input signals (i.e., green is a component of both multiplexed signals) than does either red or blue, there are two channels of green (GRN1 and GRN2) output signals from the A/Ds. So this configuration of A/Ds demultiplexes the input signals, as well as provides the necessary bandwidth with lower cost, lower bandwidth components. In the camera of the previous design the demultiplexing was performed as one of the functions of filter FPGA 32 (Fig. 1).
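A simple software model (not hardware code; the sample values and names are illustrative assumptions) of what the Figure 3 arrangement accomplishes: sampling each 64 Mpixel/sec. channel on alternating edges of the 32 MHz clock sends the even-position pixels to one A/D and the odd-position pixels to the other, so the colors fall out already separated.

```python
# Illustrative model of the automatic demultiplexing performed by the four A/Ds
# of Figure 3: even-position samples go to the converter clocked on the rising
# edge, odd-position samples to the converter clocked on the falling edge.
def split_channel(samples):
    return samples[0::2], samples[1::2]

rgrg = ["R0", "G0", "R1", "G1", "R2", "G2"]     # channel 1 from CCD array 12
gbgb = ["g0", "B0", "g1", "B1", "g2", "B2"]     # channel 2 (lower case g = GRN2)

red,  grn1 = split_channel(rgrg)   # ['R0', 'R1', 'R2'], ['G0', 'G1', 'G2']
grn2, blu  = split_channel(gbgb)   # ['g0', 'g1', 'g2'], ['B0', 'B1', 'B2']
```

Each of the four resulting streams runs at one quarter of the combined pixel rate, which is what allows the downstream components to keep operating at 32 MHz.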
Thus, the signals from A/Ds 22', 22", 23' and 23" are each at a 32 Mpixel/sec. rate thus allowing the downstream components to operate at the 32 MHz rate as in the camera system of the previous design.
The present design handles information at twice the pixel rate of the camera of the previous design while still using the same clock rate in control subsystem 6 which permits the use of components that have a maximum upper clock rate of slightly higher than 32 MHz. In the present design the demultiplexed signals coming from each of the A/Ds are at half the actual channel pixel rate with filter FPGA 32 being provided data at substantially the same rate as in the camera of the previous design discussed above.
Next, in Figure 4 there is shown a block diagram of programable phase shifter 26' of the present invention. In the camera system of the previous design of Figure 1 there are two A/Ds 22 and 23 with programable phase shifter 26 providing two independently adjustable 32 MHz clock signals that are each individually adjustable and substantially in phase with each other. In the present invention, as discussed above, there are four 32 MHz clock signals required by A/Ds 22', 22", 23' and 23", the phase of each being independently adjustable. Thus, programable phase shifter 26' receives a 32 MHz clock signal from frequency divider 13 in camera head 2 via cable 4. The 32MHz clock signal from frequency divider 13 is then applied to
frequency divider 84 and to each of the four identical and independently variable phase shifter circuits 85-88 with the 32 MHz clock signal first passing through inverter 83 for variable phase shift circuits 86 and 88. Inverter 83 has been included since A/D converters 22" and 23" (which are driven by phase shift circuits 86 and 88) are triggered on the falling edge of the clock. The output from frequency divider 84 provides the 16 MHz clock signal needed by microprocessor 34. Each of variable phase shifter circuits 85-88 also receive an input signal from microprocessor 34 that is proportional to the amount of phase shift determined to be necessary during the initial calibration of the system with the selected cable, as discussed above. Then on output lines 50-53 of each of variable phase shifter circuits 85-88, the individual phase adjusted clock signals for each of A/Ds 22', 22", 23' and 23" of Figure 3 are presented.
Figure 5 is a block diagram for one of the four variable phase shifter circuits 85-88 with the other three having the same configuration. A new circuit, differing from that of the previous design, is necessary for each of the individual phase shifters here due to the increased bandwidth of the signal of the present invention and the need for more temperature stability to support the doubling of the data rate from that of the previous design.
The two output signals from CCD array 12 (RGRG and GBGB) are very high frequency (64 MHz) analog signals with video energy concentrated at narrow peaks in each video signal as shown in Figure 6. These signal peaks are separated by large reset clock signals at 64 MHz which propagate the image charges through the output shift register of the CCD array. To capture the video information in those signals, A/Ds 22', 22", 23' and 23" must repeatedly sample the corresponding video signal (RGRG or GBGB) at each exact peak of that signal which at 64 MHz is a very narrow window. This requires a high precision clock signal, for each of the A/Ds, whose phase relative to the incoming video signal does not drift with temperature, or otherwise. In the application of the present invention a drift of more than 1 nsec. out of 30 nsec. is unacceptable. A maximum drift of ±0.25 nsec. is the goal in this high speed video application. As will be seen in the discussion below, the phase delay circuit of the present invention is a closed loop and can provide a maximum of ±90° of phase shift. The circuit that is used here is unique in that it holds the output clock signals (sample
clock in Fig. 4) at a precise phase delay from an incoming clock (CLK IN Fig. 4) under control of an input voltage.
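To put the stated drift budget in perspective, the percentages below are derived from the numbers above relative to the 32 MHz sample clock; they are not figures given in the patent.

```python
# Quick arithmetic on the drift budget quoted above, relative to the 32 MHz
# sample clock applied to the A/D converters.
period_ns = 1e9 / 32e6                 # ~31.25 ns between sample clock edges
goal_ns = 0.25                         # stated maximum drift goal
print(f"{goal_ns / period_ns:.1%} of a clock period, "
      f"about {360 * goal_ns / period_ns:.1f} degrees of phase")
# -> roughly 0.8% of a period, or about 2.9 degrees
```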
As shown in Figure 5, the variable phase shifter circuit (each of phase shifter circuits 85-88 in Fig. 4) of the present invention is a closed loop, high frequency, all pass phase delay circuit; four of these circuits are used in programable phase shifter 26' (see Figs. 3 and 4) to permit individual adjustment of the clock phase before applying the sample clock signal to the corresponding A/D 22', 22", 23' or 23". In the upper left of Figure 5 a clock signal (CLK IN) is received from cable 4 and applied to varactors 90 and voltage divider 91 with an attenuated clock signal applied to the positive terminal of operational amplifier 92 from voltage divider 91. The output signal from varactor 90 is then applied to the negative terminal of operational amplifier 92. The output of operational amplifier 92 is then fed back to varactor 90 and applied to a TTL converter 94 to produce the sample clock signal with TTL logic voltage levels that is then applied to the corresponding A/D converter (see Fig. 3) and to control FPGA 28 (see Fig. 1).
In the feed-back path, exclusive NOR gate 96 (e.g., 74AC86) receives the phase shifted sample clock signal at one input terminal and the CLK IN signal applied at the second input terminal. The two clock signals, CLK IN and sample clock, are compared by exclusive ORing them to produce a signal that is proportional to the phase difference between the two clock signals, as well as the polarity of that difference. The resulting signal from gate 96 is then applied to integrator 98 to create an analog signal that is proportional to the phase difference between the two clock signals. This technique has been employed since this circuit has low temperature sensitivity. The average voltage from integrator 98 will thus remain stable regardless of circuit drift, signal speeds, part values, as well as temperature variations.
Next in the feed-back path is operational amplifier 100 which receives one input from integrator 98 and a second input from D/A 102 to scale the signal magnitude to be applied to varactors 90 to produce the desired phase delay. Since the signals being applied to operational amplifier 100 are low frequency signals, it is not necessary that amplifier 100 be a high frequency amplifier. The output signal from operational amplifier 100 is a DC level that is applied to varactor 90 to adjust the phase delay of the CLK IN signal as it passes therethrough. D/A 102 receives a fixed
input signal from microprocessor 34, with that signal representing the phase delay having been selected during the initial set-up of the camera to match cable 4 with control subsystem 6 to capture the video signal being processed by the corresponding A/D converter of Figure 3. In summary, the delay (i.e., phase shift) of the output sample clock is measured and compared against the CLK IN with the error voltage (created as a duty cycle difference by the XOR gate and averaged by a low pass filter) being applied to varactors to change the delay that is being generated. This feedback configuration eliminates the effect of temperature changes on the circuit, producing a very accurate clock position that is programmable in fine digital steps via D/A 102 under control of microprocessor 34 (Fig. 1). While in the above discussion varactors are used in the voltage controllable phase delay feedforward path, there are other techniques that can be employed to accomplish the same result.
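A toy discrete-time model of the closed loop just summarized is sketched below. The loop gains, iteration count and the linear delay-versus-control-voltage behavior are assumptions made for illustration; the actual circuit operates on continuous analog voltages.

```python
# Toy model of the closed-loop delay adjustment: the phase detector output is
# proportional to the error between the commanded and the actual delay, the
# integrator averages it, and the varactor-controlled delay tracks the control
# voltage until the error is driven to zero.
target_delay_ns = 3.0        # delay commanded via D/A 102 by microprocessor 34
delay_ns = 0.0               # delay currently produced by the varactor stage
control = 0.0                # integrator output (stands in for the control voltage)

for _ in range(300):
    error = target_delay_ns - delay_ns          # detector output and its polarity
    control += 0.1 * error                      # integrator accumulates the error
    delay_ns = 0.8 * delay_ns + 0.2 * control   # delay follows the control voltage

print(round(delay_ns, 2))    # settles at ~3.0 ns regardless of drift in the start value
```

The point of the feedback arrangement, as in the circuit above, is that the settled delay depends only on the commanded value, not on the particular gain or drift of the delay element.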
In Figure 6 there is shown a typical video signal that is generated by each of the two channels of CCD array 12 and is delivered to each of A/Ds 22', 22", 23' and 23". Since CCD array 12 is running at 64 MHz, the signal variations in the video signal occur at that frequency, with the video information being the height of the peaks of that signal. Thus, when the video signal is sampled by the corresponding A/D 22', 22", 23' and 23", it is the peaks that are being sampled. Hence the need, discussed above with respect to Figures 3, 4 and 5, to align the clock pulses with the peaks of the video signal applied to the corresponding A/D during the initial setup of the system with a particular cable 4.
A typical video signal is a negative going signal with each line of video information beginning with a fixed number of 'black' pulses that have an amplitude in a black region 104 generated by CCD array 12 without being exposed to external light, followed by a long series of 'active' pulses in an active region 106. Thus, when the video signal is sampled by A/Ds 22', 22", 23' and 23" (see Fig. 3), the peaks of the video signal are being sampled as a result of the phase adjustment of the individual clock signals to the A/Ds, with the pixels from the various A/Ds being either black pixels or active pixels corresponding to whether the information into the A/D was from the black region 104 or the active region 106 of the video signal.
To obtain a true measure of the amplitude of the active pixels from the
A/Ds, the black level, or the average amplitude of the pulses in black region 104, must be determined. This is done to permit shifting of the active pixel levels to eliminate the black level and thus generate a zero reference level that corresponds to the black level. Typically, in many camera systems the video signal is filtered to remove the clock information from it. However, at the frequencies at which the camera system of the present invention is operating, filtering the clock from the video signal does not work very well since the magnitude of the desired signal carrying the actual video information is much smaller than the magnitude of the clock signal. Thus, by aligning the clock signal phase with the video signal received by each of the A/Ds, the low amplitude video information is sampled and then digitized, creating the video pixels of interest. That sampling results in the removal of the clock noise in the raw video signal applied to the A/Ds. Once the clock noise has been removed and the video data pixelized, the operation of the camera is further improved by determining the level of the black pixels to permit shifting of the level of the raw video signals to substantially zero out the black level so that a "0" level pixel represents black.
The black level is typically removed in prior art cameras by clamping the black region to electrical ground to align the video signal before the video signal is applied to the A/Ds to digitize the video information. As will be seen in the following discussion, in the present invention the raw video signal is digitized without first adjusting for the black level, then the video pixels are examined to determine the black level, and then the black level information is fed back to the camera head where the output level of the output amplifiers is shifted to compensate for the black level in the video signal from the CCD array. Using this approach, the black level is subtracted from the video signal with a DC offset signal to the amplifiers so that the signal presented to the A/D converters is in the right digitizing range.
There are several approaches that can be used to determine the black level of the incoming video signal. However, given the high frequency of the analog signal, that cannot easily be done in the analog domain. The simplest approach is to use the level of the first black sample in each line of data. At the high frequency at which the present camera is operating there is a considerable amount of noise, thus
relying on a single black pixel is not very reliable. To minimize the effect of that noise in the present invention, all of the leading black pixels in each line are averaged together to determine the offset level that is to be fed back to amplifiers 18 in camera head 2. A typical CCD array 12 has two processing channels, one to generate a
RGRG signal and a second to generate a GBGB signal. Each of those channels includes up to approximately 26 sampling locations that are masked off so that light does not reach them. Thus those first samples in each line of video data on each channel represent the black level from the CCD array. Figure 7 is a block diagram that illustrates the generation of a black level corrected digital video signal in the present invention for a single video channel, with that operation being implemented as a part of filter FPGA 32. Since there are two video channels being provided by CCD array 12, this operation will have to be performed twice, once for each video channel, since the black level for each channel is independent of the other channel. In Figure 3 each of the two video channels was demultiplexed into individual color pixel streams as discussed above. Thus the first several pixels in each line in each color data pixel stream will represent black, or dark reference, pixels. Therefore, to determine the black level of the signal from each channel of the CCD array, a preselected number of the black pixels for each line must be obtained from the corresponding color data streams. For example, if 16 dark reference pixels are to be used to determine the black level of a channel of the CCD array, then eight of the dark reference pixels from each corresponding color data stream are received from the corresponding FIFOs (either 24'-24" or 25'-25") and applied to sequencer 108 in Figure 7 to reconstruct a partial serial video signal stream for the corresponding video channel (i.e., 16 pixels long in the present example). That sequenced serial stream of digital black pixels for the corresponding channel is then applied to one input of adder 110, and then the individual pixel values from adder 110 are applied to accumulator 112 where the first pixel is added to zero and fed back to the second input of adder 110 to be added to the pixel value of the second black pixel, which is then transferred to accumulator 112. Accumulator 112 then feeds back the accumulated total of the values of the first two black pixels to the second input of adder 110 to be
added to the value of the third black pixel, with the resultant total of the three black pixels then passed to accumulator 112. This procedure is continued until accumulator 112 has an accumulated value for all 16 individual black level values in this example. A digital signal of the final accumulated value of the 16 black pixels at the beginning of the line of video data is then applied to divider 114 where the accumulated value is divided by 16 (or the appropriate value if other than 16 black samples are used) to determine the average black pixel value. Note that given that the signal from accumulator 112 is digital and that the accumulated result in this example is to be divided by 16 (2^4), divider 114 can be implemented by shifting the accumulated result right by four bits. The average black pixel value for the current line of video data is then applied to the minus terminal of first and second color subtractors 116 and 118, with the raw video data from the corresponding FIFOs applied to the plus terminal of each of subtractors 116 and 118 to create a black level corrected individual color digital video data stream. The black level could be determined with any number of dark reference pixels that are available from the CCD array, from one to the maximum number available. However, the more black reference pixel values that are used to determine the black level for the active pixels, the less black level noise will be present.
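The accumulate, divide-by-shifting and subtract steps just described reduce to a few lines of arithmetic. The following sketch is illustrative only; the 8-bit pixels, the 16 dark reference samples and the clamping at zero are assumptions made for the example, and the function names are not from the patent.

```python
# Sketch of the dark-reference averaging and black level subtraction
# described above, assuming 8-bit pixels and 16 dark pixels per line
# per channel. Names and the zero clamp are illustrative assumptions.

def black_level(dark_pixels):
    """Average the leading dark (masked) pixels of a line.

    With 16 samples the divide can be done as a 4-bit right shift,
    matching the divide-by-shifting noted in the text.
    """
    total = sum(dark_pixels)                 # adder 110 / accumulator 112
    if len(dark_pixels) == 16:
        return total >> 4                    # divider 114 as a shift
    return total // len(dark_pixels)

def subtract_black(active_pixels, level):
    """Shift the active pixels so the black level reads as zero
    (clamping at zero is an added assumption, not stated in the patent)."""
    return [max(p - level, 0) for p in active_pixels]

if __name__ == "__main__":
    dark = [22, 25, 23, 24, 22, 26, 25, 23, 24, 22, 25, 23, 24, 26, 22, 23]
    line = [140, 180, 90, 210, 35]
    lvl = black_level(dark)                  # 23 for this example line
    print(lvl, subtract_black(line, lvl))
```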
To adjust the analog video signal from amplifiers 18' and 18" in Figure 2a for the black level offset presented by CCD array 12, the digital black level pixel value from divider 114 is applied to D/A converter 120, with the resultant analog value being applied to the corresponding one of operational amplifiers 18' and 18" in Figure 2a for the video channel being compensated. By providing this feedback to amplifiers 18' and 18", the limits of the amplifiers are less likely to be exceeded, and the raw analog video signal being provided to A/Ds 22', 22", 23' and 23" remains centered in the range of the A/D converters, thus maximizing the dynamic range and allowing better tracking of the video signals by the A/D converters.
Interpixel smearing of the video signal is also a problem when CCD array 12 is operated at a rate that is faster than the output stage was designed to support. Referring to Figure 8, there is shown the output stage of CCD array 12. Within a CCD array the video data consists of a charge packet that is shifted through the array with the various clock signals discussed above. It might be
visualized as a 'bucket brigade' as the charge ripples through the CCD array. To convert that charge to a voltage level that is proportional to the corresponding charge, the final stage of the CCD array applies the accumulated charge to a capacitor 123 that is connected between the gate terminal of FET 122 and signal ground, with the raw video signal provided to the following circuitry on the source or drain of FET 122. If the charge from the CCD array were continually transferred to capacitor 123, capacitor 123 would continue to acquire more and more charge. Thus, a second FET 124 is connected across capacitor 123 to discharge capacitor 123 to ground at the pixel rate, resulting in a raw video signal as shown in Figure 6 with the video data superimposed on a very strong clock signal.
The typical CCD array was designed to run at a clock frequency of approximately 7 MHz; thus, as the clock frequency is increased as in the present invention, there is a smearing of some of the charge from a previous pixel into one or more following pixels. This results from the fact that FET 124 has a finite impedance between the drain and source when activated. Therefore there is a time constant that results from the internal impedance of FET 124 and capacitor 123. In the present invention, CCD array 12 is being clocked faster than that time constant allows, which results in incomplete discharging of capacitor 123 each time FET 124 is turned on. So a particular charge packet that was put on capacitor 123 is mostly discharged in a closure of FET 124, but a little of the charge left on capacitor 123 is combined with the charge of the next video sample. Then in the next sample there is a lesser amount of the charge from the two previous samples, and so on, so there is a tapering-off effect where the information of a sample is smeared into the next pixel sample, or several following samples. Thus, some form of filtering is needed to minimize the smearing effect of the high speed at which the CCD array is being operated. If the smearing effect is not compensated, the output image presented to the user will not appear sharp and will have color space errors.
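As a rough illustration of why the residue only becomes noticeable at the elevated clock rate, the left-over charge can be modeled as an exponential RC discharge. The on-resistance, sense capacitance and the assumption that the reset window occupies roughly one quarter of the pixel period in the sketch below are invented for this illustration; the patent does not give component values.

```python
# Back-of-the-envelope sketch relating the reset time constant to the
# smear fraction that the digital filter later subtracts. All component
# values here are assumptions for illustration only.
import math

R_ON = 5.0e3       # assumed on-resistance of reset FET 124, ohms
C_SENSE = 0.4e-12  # assumed capacitance of capacitor 123, farads (tau = 2 ns)

def residual_fraction(reset_time_s):
    """Fraction of the previous charge packet still on the capacitor
    when the reset window closes: exp(-t_reset / (R_on * C))."""
    return math.exp(-reset_time_s / (R_ON * C_SENSE))

if __name__ == "__main__":
    # With these assumed values the residue is negligible at the 7 MHz design
    # rate but grows to roughly the size of the fractional factor "x" used in
    # the correction described next when the array is clocked at 64 MHz.
    for pixel_rate in (7e6, 64e6):
        reset_window = 0.25 / pixel_rate   # assume reset uses ~1/4 of the pixel period
        print(f"{pixel_rate / 1e6:.0f} MHz: residue = {residual_fraction(reset_window):.3%}")
```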
To counteract the smearing, a digital signal processing solution is implemented in filter FPGA 32 (see Fig. 1). Thus, the present invention makes this correction after the video signals have been digitized. Stated in general terms, the current pixel value is multiplied by a fractional constant of less than one, and then that value is subtracted from the pixel value of the next pixel in time in the same channel
(i.e., RGRG or GBGB). If the smearing of a pixel is sufficiently great, then a second, smaller fractional value of the current pixel value is also subtracted from the second following pixel in time in the same channel.
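A minimal sketch of that correction, operating on one channel's pixel stream, is shown below. The coefficients follow the example values given in the text (x on the order of 0.2, y on the order of 0.1); the 8-bit clamping and the function name are assumptions for illustration.

```python
# Sketch of the anti-smearing correction described above: subtract a
# fraction of the previous pixel (and optionally the one before that)
# in the same channel from the current pixel. Clamping to the 8-bit
# range is an added assumption.

def desmear(channel, x=0.2, y=0.0):
    """Apply out[n] = in[n] - x*in[n-1] - y*in[n-2] along one CCD channel
    (e.g. the RGRG stream or the GBGB stream)."""
    out = []
    for n, pixel in enumerate(channel):
        corrected = float(pixel)
        if n >= 1:
            corrected -= x * channel[n - 1]
        if n >= 2:
            corrected -= y * channel[n - 2]
        out.append(min(max(int(round(corrected)), 0), 255))
    return out

if __name__ == "__main__":
    rgrg = [200, 0, 0, 180, 40, 0]        # a bright pixel smearing into its neighbours
    print(desmear(rgrg, x=0.2))            # one previous pixel (Figure 9a style)
    print(desmear(rgrg, x=0.2, y=0.1))     # two previous pixels (Figure 9b style)
```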
Note that in Figures 1 and 3 there are two video channels of data that were received from CCD array 12, namely RGRG and GBGB. In the Figure 1 camera system the data rate from CCD array 12 is 64 Mpixel/sec., which is processed with one A/D converter for each channel (i.e., A/D 22 for the RGRG channel and A/D 23 for the GBGB channel). Figure 3 illustrates the A/D conversion operation for the 128 Mpixel/sec. embodiment. As a result of the A/D operation of Figure 3, as discussed above, each of the RGRG and GBGB channels of video data has been converted to two data streams, yielding RED, GRN1, GRN2 and BLU. However, if the current pixel is a RED pixel from FIFO 24', the next pixel in time in the same channel is a GRN1 pixel from FIFO 24". In turn the second following pixel in time is the next RED pixel, the third pixel in time is the next GRN1 pixel, etc. Similarly, if the current pixel is GRN2 from FIFO 25', the next pixel in time in the same channel is a BLU pixel from FIFO 25". In turn the second following pixel in that channel is the next GRN2 pixel, the third in time is the next BLU pixel, etc.
First the 64 Mpixel/sec. embodiment for anti-smearing will be addressed, with Figures 9a and 9b illustrating the one and two pixel anti-smearing techniques described in general above using the data streams from FIFOs 24 and 25 of Figure 1 where the two A/D converter configuration is shown, and with Figures 9c and 9d illustrating the one and two pixel anti-smearing techniques discussed above using the data streams from FIFOs 24', 24" (RGRG channel), 25' and 25" (GBGB channel) of Figure 3 where the four A/D converter configuration is shown. For each of these discussions parallel arithmetic is assumed. However, other techniques that perform the same function could be implemented. Note that all of the functions of the smearing correction described in relation to Figures 9a-9d are described as performing the arithmetic functions on bytes of data (i.e., the number of bits in a single pixel, e.g., 8 bits) with the resultant byte having the same number of bits as each other pixel byte at each point in the intermediate and final results.
Figure 9a includes two single pixel anti-smearing paths, one for the RGRG signal stream and one for the GBGB signal stream from FIFOs 24 and 25 in
Figure 1. In the top portion of Figure 9a the RGRG data signal stream is applied to the positive terminal of subtractor 154 and to a one pixel delay 150. The previous pixel (N-1) from delay 150 is then applied to a multiplier 152 where the pixel value is multiplied by a selected fractional value, x (e.g., 0.2). The output of multiplier 152 is then applied to the negative terminal of subtractor 154 where the fractional value of the previous pixel is subtracted from the value of the present pixel, with the data stream from subtractor 154 being the single pixel compensated video signal for the RGRG channel. Similarly, the GBGB data signal stream is applied to the positive terminal of subtractor 154' and single pixel delay 150', the previous pixel value is multiplied by a selected factor, x, by multiplier 152' (the value of this x factor may be slightly different from the x factor of multiplier 152), and the reduced value previous pixel is then applied to the negative terminal of subtractor 154' where it is subtracted from the current pixel value for the GBGB data stream, yielding from subtractor 154' a single pixel compensated video signal for the GBGB data stream. In Figure 9b the technique of compensating the video data stream for smearing from two previous pixels is illustrated for the two A/D embodiment of Figure 1. Here, as in Figure 9a, the two data streams are applied to the positive terminals of subtractors 154 and 154', respectively, and to two pixel delays 150" and 150"', respectively. As was the case in Figure 9a, the most recent previous pixel byte (N-1) in each data stream from the delay is multiplied by a factor, x (which might differ slightly between the two streams), by multipliers 152 and 152', respectively, while the second previous pixel byte (N-2) from the delay is multiplied by a factor, y (which also might differ slightly between the two streams), by multipliers 156 and 156', respectively. Additionally, the factor "y" is less than the factor "x" (e.g., "y" might be approximately 0.1). Figure 9c is a block diagram that illustrates the one pixel anti-smearing technique described in general above using the data streams from Figure 3 where the four A/D converter configuration is discussed. Since there are two video data streams (RGRG and GBGB) that have been demultiplexed by the circuit of Figure 3, there will be two similar anti-smearing corrections that need to be performed, with a duplication of the function of Figure 9c for each video data channel. To indicate that Figure 9c is applicable to each video data channel, the input signals from the FIFOs of Figure 3 have been shown in the alternative for each of the two input lines in Figure 9c, namely
either RED and GRN1 for the RGRG channel data from A/Ds 22' and 22", or GRN2 and BLU for the GBGB channel data from A/Ds 23' and 23". Note that since there are two data streams, the operation described as follows must be performed twice, once for the RGRG channel and once for the GBGB channel. Thus, the RED and GRN1 or the GRN2 and BLU pixel data streams are applied to the positive terminals of first and second subtractors 130 and 132, respectively, and to first and second delay lines 126 and 128, respectively, with each delay line being one pixel byte long (e.g., 8 bits). Then the (N-1)st GRN1 (BLU) pixel from second delay line 128 is applied to first multiplier 134 where it is multiplied by a preselected, less than unity, fractional value factor x, with the resultant reduced value of the (N-1)st GRN1 (BLU) pixel applied to the negative terminal of first subtractor 130. The smearing compensated RED (GRN2) pixel is the result of that subtraction.
Similarly, the (N-1)st RED (GRN2) pixel from first delay line 126 is applied to second multiplier 136 where it is multiplied by the same preselected, less than unity, fractional value factor x (perhaps 0.25), with the resultant reduced value of the (N-1)st RED (GRN2) pixel applied to the negative terminal of second subtractor 132. The smearing compensated GRN1 (BLU) pixel is the result of that subtraction. By performing the anti-smearing function in this way without recombining the pixels of a channel, the 128 Mpixel data rate of the present invention can still be processed as if it were a 64 Mpixel rate as discussed above, thus further enabling the use of less expensive, slower digital components.
Figure 9d illustrates smearing correction over the previous two pixels in the data stream using an extension of the technique described above in relation to Figure 9c. The first difference from Figure 9c is that the delay lines 126' and 128' are now two pixels long (e.g., 16 bits) to hold the two previous pixel bytes in each of the demultiplexed data streams. The first pixel subtraction components of Figure 9c are shown here with the same reference numbers, and they operate in the same manner as described for Figure 9c.
To subtract a factor of the value of the second previous pixel in the channel, the pixel of the same color as the pixel being corrected is used, since in the actual data channel the colors alternate (i.e., RGRG or GBGB). Thus, to correct for the smearing from the second previous pixel in the data stream, the one pixel corrected
value from first subtractor 130 is applied to the positive terminal of third subtractor 138. The pixel value of the (N-2) RED (GRN2) pixel from delay line 126' is then applied to third multiplier 142 where the value is multiplied by a preselected, less than unity, factor y (where y is smaller than x and perhaps has a value of 0.1), and the resultant multiplied value from third multiplier 142 is applied to the negative terminal of third subtractor 138. The two pixel, smear corrected current RED (GRN2) pixel is then provided by third subtractor 138.
Similarly, the GRN1 (BLU) pixel is corrected using delay line 128', fourth multiplier 144 (using the same multiplication factor y as in third multiplier 142), and fourth subtractor 140 to provide the two pixel smear corrected current GRN1 (BLU) pixel from fourth subtractor 140.
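The same arithmetic can be applied directly to the demultiplexed color streams, which is essentially what Figures 9c and 9d do in hardware: the pixel one back from a RED sample is the GRN1 sample of the previous interleaved position, and the pixel two back is the previous RED sample. The sketch below is illustrative rather than the patent's implementation; the coefficients and 8-bit clamp are assumed as before.

```python
# Sketch of the correction on the demultiplexed RED / GRN1 streams of one
# RGRG channel (GRN2 / BLU behave identically). Interleaved order is
# R0, G0, R1, G1, ... so R[i]'s previous pixel is G[i-1] and G[i]'s
# previous pixel is R[i]. Names and clamping are illustrative assumptions.

def _clamp8(value):
    return min(max(int(round(value)), 0), 255)

def desmear_demuxed(red, grn1, x=0.25, y=0.0):
    """Return smear-corrected (red, grn1) streams for one channel."""
    red_out, grn_out = [], []
    for i in range(len(red)):
        r = float(red[i])
        if i >= 1:
            r -= x * grn1[i - 1]         # one pixel back (other colour)
            r -= y * red[i - 1]          # two pixels back (same colour)
        g = float(grn1[i]) - x * red[i]  # one pixel back for GRN1 is RED[i]
        if i >= 1:
            g -= y * grn1[i - 1]         # two pixels back for GRN1
        red_out.append(_clamp8(r))
        grn_out.append(_clamp8(g))
    return red_out, grn_out

if __name__ == "__main__":
    red = [200, 10, 0, 0]
    grn1 = [0, 150, 5, 0]
    print(desmear_demuxed(red, grn1, x=0.25))          # Figure 9c style
    print(desmear_demuxed(red, grn1, x=0.25, y=0.1))   # Figure 9d style
```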
In the present invention, the anti-smearing function is performed in firmware as one of the functions of filter FPGA 32. Thus, it can be seen that the technique of the present invention can easily be extended to correct for smearing from any number of previous pixels in the data stream that may be desired. Additionally, while the above discussion, for simplicity, indicated that the fractional factors were the same (i.e., x=x and y=y), it may be necessary to make those factors slightly different from each other given the speed at which the circuits are operating and the inherent tolerances of the various components. The next aspect of the present invention is a power saving technique to permit operation of CCD imager 12 (Figures 1 and 2a) at higher and higher frame rates without the imager overheating and ceasing operation after a short period of time (e.g., only seconds at higher frame rates). The power saving technique of the present invention is discussed with the aid of Figures 10a through 14c. Figures 10a and 10b each show a simplified graphical snapshot view of the contents of the two memory areas of a typical CCD imager 12 (a frame transfer imager) at two different points in time. As discussed above with relation to Figures 1 and 2a, there are several clock signals that move the electronic form of the image captured from lens 10 through the image and storage areas of CCD imager 12. In Figure 10a the contents of image area 146 and storage area 148 are illustrated following the capture of image 150 in image area 146 and the copying of image 150 into storage area 148 as secondary image 152. Note that image 150 does not utilize
the full image area 146; that is illustrated here to show that the user, for various reasons, might decide to capture the image in less than the full image area (i.e., perhaps the proportions of the electronic image need to be different than the proportions of image and storage areas 146 and 148 for processing purposes). Then in Figure 10b the contents of image and storage areas 146 and 148, respectively, are illustrated following the capture of a new image 150' in image area 146 as secondary image 152 is read out from storage area 148.
Thus, the general timing relationship between the vertical gate signals, IAG and SAG, is that they are offset in time from each other. The IAG signal vertically gates a new image 150 from lens 10 into image area 146 at substantially the same time that the horizontal signals, SRG and RST, horizontally gate the previous secondary image out of storage area 148 of CCD imager 12. Following the gating of the previous secondary image out of storage area 148 and the new image having been established in image area 146, the SAG signal causes the next image 150 in image area 146 to be copied into storage area 148 as the next secondary image 152. Then that pattern repeats for each successive image.
Referring again to Figures 10a and 10b, image area 146 and storage area 148 each have the same proportions with Y lines of X pixels each. In normal operation, as in the prior art, the vertical clock signals, Image Area Gate (IAG) and Storage Area Gate (SAG), each include recurring bursts of Y pulses (the same number as there are lines in each memory area) to clock image 150 and secondary image 152 through all Y lines of the respective image area 146 and storage area 148. As also discussed above, CCD imager 12 presents a considerable amount of capacitance to the IAG and SAG signals when vertically moving each line of electronic image 150 as it is captured in image area 146 and as that electronic image is copied into storage area 148 as secondary image 152. Thus, as the user selects a faster and faster frame rate, the capacitance of CCD imager 12 requires more and more power from signals IAG and SAG to perform the desired tasks since all Y lines of both areas must be advanced for successive images more and more often (i.e., the entire image, all Y lines, in each of image area 146 and storage area 148 is advanced each time).
Figure 11 is a series of views similar to those of Figure 10a for various frame rates, i.e., less than 1000 fps (frames per second); 1000 fps; 2000 fps; 4000 fps;
and 8000 fps. Since the master clock frequency of oscillator 16 (Figures 1 and 2a) does not change, as the user selects higher and higher frame rates to capture the images of interest, the number of lines, and the number of pixels in each line, for each captured image are reduced in proportion to the change in frame rate. Thus, in the examples of Figure 11, when compared to image 150, images 154, 158, 162 and 166 are substantially 1/2, 1/3, 1/5 and 1/7 the size (width and height) of image 150, respectively. Similarly, secondary images 156, 160, 164 and 168 have the same size relationship to secondary image 152.
In the prior art, gate signals IAG and SAG advance the image in image area 146 and storage area 148 by all Y lines (500 lines in a typical CCD imager) for any frame rate at which the camera is operated, requiring more and more power to do so as the frame rate is increased. Thus, at any frame rate the image (150, 154, 158, 162 and 166) in image area 146 and the secondary image (152, 156, 160, 164 and 168) in storage area 148 will be a single image that is advanced to the bottom of the corresponding memory area as illustrated in Figure 11. The actual height of each saved image in both memory areas, however, varies for the different frame rates since the clock rate of oscillator 16 (Figures 1 and 2a) remains fixed regardless of the frame rate. Stated differently, since the frequency of operation of oscillator 16 remains fixed throughout the operation of the camera of the present invention, the number of lines in the saved images must be reduced in proportion to the increase of the frame rate.
For purposes of the following discussion it is assumed that CCD imager 12 has 500 lines of 680 pixels each, which is typical of many CCD frame transfer imagers. Then in Figures 12a-e (similar to Figures 10a and 11), image area 146 and storage area 148 of CCD imager 12 graphically illustrate the image contents of each immediately following the captured image from the image area being copied into the storage area for different frame rates: Figure 12a for a frame rate of less than 1000 fps; Figure 12b for a frame rate of 1000 fps; Figure 12c for 2000 fps; Figure 12d for 4000 fps; and Figure 12e for 8000 fps. It should further be noted that the frame rates stated above are strictly for purposes of this discussion and the present invention is not limited to those frame rates or to frame rates that are no higher than 8000 fps.
In each of Figures 12a- 12e, the illustrated image stored in image area 146 for each frame rate looks the same as the images in image area 146 in Figure 11
for the prior art; however, in Figures 12a-12e the secondary images in storage area 148 for the various frame rates are not the same as in the prior art as represented in Figure 11. This difference will become clear as the power saving technique of the present invention is discussed below. Here, image 150 in Figure 12a is for the less than 1000 fps case and is selected to be 420 lines in height for purposes of this discussion, which is not a limitation on the present invention. Thus image 154 in Figure 12b for the 1000 fps case will be one half as many lines, i.e., 210 lines in height; image 158 in Figure 12c for the 2000 fps case will have one third as many lines as image 150, i.e., 140 lines in height; image 162 in Figure 12d for the 4000 fps case will have one fifth as many lines as image 150, i.e., 98 lines in height; and image 166 in Figure 12e for the 8000 fps case will have one seventh as many lines as image 150, i.e., 68 lines in height. As can be seen, image 150 was selected to have a height of 420 lines for purposes of this discussion since 420 is evenly divisible by 2, 3, 5, and 7. The number of pixels in each line of the various images for the various frame rates will also vary in the same proportions.
In the technique of the present invention, instead of clocking through all 500 lines of image area 146 and storage area 148 each time regardless of the actual height of the image (which gets shorter as the frame rate increases), as in the prior art, the number of pulses in each burst of each of the IAG and SAG signals is tailored to the image height and frame rate being used. Thus, for the IAG signal for each example of Figures 12a-12e, the number of pulses in each burst need only be equal to the number of lines in the height of the image at the selected frame rate. This is true since what is recorded in the lines of image area 146 above the corresponding image is of no importance, since those lines are not read in the present invention when the contents of image area 146 are copied into storage area 148. Thus, in this 500 line example, each burst of IAG for Figures 12a-12e will contain 420, 210, 140, 98 and 68 pulses, respectively. This is true since images 150, 154, 158, 162 and 166 are each built from the lower edge of image area 146 upward in this graphical representation.
To determine the number of pulses that are to be included in each burst of the SAG signal, consideration must first be given to how copied images can be accommodated in storage area 148. Considering Figure 12a where the image is 420 lines in height, it is only possible to store one secondary image in storage area 148.
When an image is copied from image area 146 into storage area 148, the lines of image 150 can be thought of as progressing downward through storage area 148 from the top, i.e., line 1 of image 150 is first copied to line 500 of storage area 148, then line 1 moves down to line 499 with line 2 now filling line 500, and so on. Thus, for the 420 line image example of Figure 12a, each burst of the SAG signal of the present invention contains 500 pulses to copy image 150 into secondary image 152 (i.e., 420 pulses to copy image 150 to storage area 148, and 80 pulses to continue to advance secondary image 152 to the bottom of the memory of storage area 148).
Similarly, in the 210 line example of Figure 12b, only one 210 line image can be stored in storage area 148 since a blank space equal to the height of the image must be maintained between images in storage area 148 (this will become clear when Figures 13a-13e are discussed below). Thus, each burst of the SAG signal in the 210 line image case must also contain 500 pulses (i.e., 210 pulses to copy image 154 into storage area 148 and 290 pulses to continue to advance secondary image 156 to the bottom of the memory of storage area 148).
The examples of Figures 12c-12e are different from those of Figures 12a and 12b in that more than one image, with a space between each of them, can be stored in storage area 148. In Figure 12c, the 140 line image case, it is possible to have three image spaces in storage area 148 (i.e., 140 x 3 = 420, which is less than 500). In this view, storage area 148 contains the secondary image most recently copied, secondary image 160, and the immediately preceding secondary image 160' with a blank space between those two secondary images. Here, since three times the image height is less than the actual height of storage area 148 (i.e., it is 80 lines short of 500), each burst in the SAG signal will contain 180 pulses (140 pulses to advance image 158 into storage area 148 as secondary image 160 and 40 pulses to advance secondary image 160 one half the difference between 420 and 500 lines). The space between secondary images 160 and 160', in this example, is therefore 180 lines.
In Figure 12d, the 98 line image case, it is possible to have five image spaces in storage area 148 (i.e., 98 x 5 = 490, which is also less than 500). In this view, storage area 148 contains the secondary image most recently copied, secondary image 164, and the immediately preceding two secondary images 164' and 164" with a blank space between each of the adjacent secondary images. Here, since five times the
image height is less than the actual height of storage area 148 (i.e., it is 10 lines short of 500), each burst in the SAG signal will alternately contain 100 and 101 pulses (98 pulses to advance image 162 into storage area 148 as secondary image 164 and 3 or 4 pulses to advance secondary image 164 approximately one fifth the difference between 490 and 500 lines). The space between adjacent ones of secondary images 164, 164' and 164", in this example, is therefore 101, 101 and 102 lines, sequentially. In Figure 12e, the 68 line image case, it is possible to have seven image spaces in storage area 148 (i.e., 68 x 7 = 476, which is also less than 500). In this view, storage area 148 contains the secondary image most recently copied, secondary image 168, and the immediately preceding three secondary images 168', 168" and
168'" with a blank space between each of the adjacent secondary images. Here, since seven times the image height is less than the actual height of storage area 148 (Le., it is 24 short of 500), each burst in the SAG signal will contain 72 pulses (68 pulses to advance image 166 into storage area 148 as secondary image 168 and 4 pulses, to advance secondary image 168 approximately one seventh the difference between 476 and 500 lines). The space between adjacent ones of secondary images 168, 168', 168" and 168"', in this example, is therefore 72 lines.
Figures 13a-13e illustrate the contents of image area 146 and storage area 148 after one secondary image has been read out of storage area 148 for each of the conditions discussed above with respect to the corresponding one of Figures 12a-12e. Images are read out of storage area 148 horizontally, one line at a time, and IAG saves the next image (150', 154', 158', 162' and 166', respectively) in image area 146 during the horizontal activity. In the prior art all 500 lines of storage area 148 are read out each time, regardless of the frame rate and resulting height of the image. In the present invention, to further reduce the power consumed by CCD imager 12, only the bottom secondary image (i.e., the oldest secondary image) is read out, by applying horizontal signal SRG to imager 12 only long enough to read out the bottom image from storage area 148.
Therefore, a line at a time of the bottom secondary image is horizontally read out of storage area 148. As each line is read out, the remaining information in storage area 148 shifts downward a line at a time. In the 420 and 210 line examples of Figures 13a and 13b, since storage area 148 in each case only contains a
single image, only 420 and 210 lines, respectively, must be read out with horizontal signal SRG to empty storage area 148 of all information of interest (i.e., what is in the remainder of the 500 lines, if anything, is of no interest at the corresponding frame rates). In each of the 140, 98 and 68 line examples of Figures 13c-13e, the 140, 98 or 68 lines of the bottom secondary image 160', 164" or 168"', respectively, must be read out horizontally, plus the correction factor number of lines for each example, to bring the next image in storage area 148 to the bottom. In other words, the number of lines to be read out horizontally is equal to the number of lines in the dead space between images in each example (i.e., 220 for the 140 line image; 103 for the 98 line image; and 76 for the 68 line image).
As discussed above, the horizontal and vertical control signals that are applied to CCD imager 12 are generated by control FPGA 28. Since the bulk of the power consumed by imager 12 results from the vertical capturing and copying of the images in image area 146 and storage area 148, the present invention results in the minimization of the duration of vertical signals IAG and SAG. There is also some additional power and time saving that results from the reduction of the number of lines in each image that need to be read out; however, that power saving is dramatically lower than that saved by the reduction of the vertical movement of the images.
Referring next to Figures 14a-14c, there are comparisons of the IAG and SAG signals for the various image sizes and frame rates. One note before describing the differences in the combinations of the IAG and SAG signals for each of the examples given above in regard to Figures 10a and 10b for the prior art and Figures 12a-13e for the present invention: in Figures 14a-14c the signals IAG and SAG are shown on the same time line and beginning at the same point in time. That is done merely to show the relative lengths (i.e., number of pulses in each burst) of those two signals with respect to each other for each example. As discussed above, the bursts of signals IAG and SAG do not always occur at the same time.
Beginning with Figure 14a and corresponding Figures 10a-10b, the relationship of IAG to SAG is shown for the prior art where all 500 lines of image area 146 and storage area 148 are activated every time. Thus, burst 170 of IAG will include 500 pulses to place image 150 in image area 146 each time, regardless of the actual height of that image. Similarly, burst 172 of SAG will include 500 pulses to
copy image 150 in image area 146 into secondary image 152 in storage area 148 each time, regardless of the actual height of the image. Thus, in Figure 14a for the prior art the bursts of pulses in the IAG and SAG signals are equal in length, and length "a" will be equal to the maximum number of lines in the CCD imager. Then, for the present invention there are basically two relationships between the burst lengths of the IAG and SAG signals. The first of those relationships is illustrated in Figure 14b, which corresponds to Figures 12a (13a) and 12b (13b) where there is only room to store one secondary image in storage area 148. As discussed above, the establishment of image 150 or 154 in image area 146 requires an IAG signal that has bursts with as many pulses as there are lines in the image height. Thus, for the 500 line imager examples for the 420 and 210 line images, burst 170' includes 420 or 210 pulses, respectively (i.e., in general "b" is equal to the number of lines in the corresponding image). Also, as discussed above, in the single image in storage area 148 examples, the SAG signal needs a full 500 pulses in each burst to copy image 150 or 154 into storage area 148 as secondary image 152 or 156, respectively. Thus, in general, "a" is equal to the full number of lines in the selected imager. Therefore in the examples of Figures 12a and 12b, there will be a power saving during the IAG function that is proportional to the reduction in the height of the image as compared to the prior art. During the SAG function, however, there will be no change in the power consumed. In summary, for a 500 line imager each burst 170' of IAG will contain 420 or 210 pulses (i.e., "b" equals either 420 or 210), whereas each burst 172' of SAG will contain 500 pulses (i.e., "a" equals 500). Therefore, in comparison to the prior art example of Figures 10a and 14a, considering the number of pulses in each burst of the IAG and SAG signals, for the example of Figures 12a and 14b there will be approximately an 8% power saving over the prior art (100 - 100 x {[b + a]/[a + a]} = 100 - 100 x {[420 + 500]/[500 + 500]}), and similarly for the example of Figures 12b and 14b there will be a power saving of approximately 29% over the prior art (100 - 100 x {[b + a]/[a + a]} = 100 - 100 x {[210 + 500]/[500 + 500]}).
The second relationship in the present invention for the bursts of the IAG and SAG signals is illustrated in Figure 14c, which corresponds to the examples of Figures 12c-12e. As was the case in Figure 14b, here too burst 170" of IAG need only contain as many pulses as there are lines in the desired image (i.e., 140, 98 or 68
as per the examples of Figures 12c-12e, respectively). Thus "c" will equal either 140, 98 or 68 in correspondence to the respective example. The length of burst 172" for SAG in each example will be substantially equal to the length of burst 170". The reason that the length of burst 172" is not exactly equal to the length of burst 170" is that the number of lines in CCD imager 12 (e.g., 500 in this example) is not an integer multiple of the image height at any of the example frame rates discussed above (e.g., 500 is 3.57 times 140). Thus, as discussed above, the number of pulses in each burst of SAG needs to account for the extra lines in storage area 148 that do not contain image information. Thus, in the example of Figure 12c where there are two images stored in storage area 148, the number of pulses in each burst of SAG must be equal to the line height of the selected image plus one half the difference between the line height of storage area 148 (e.g., 500 lines) and three times the number of lines in the selected image height (e.g., 3 x 140). Thus, for the 500 line/140 line example, each burst 172" of SAG must contain 180 pulses (140 + {500 - [3 x 140]}/2). In this example then "d" = 180. Therefore the power saving over the prior art for the example of Figure 12c is approximately 68% (i.e., [100 - 100 x ({c + d}/{a + a})] or [100 - 100 x ({140 + 180}/{500 + 500})]).
Referring next to the example of Figure 12d where there are three images stored in storage area 148, the number of pulses in each burst of SAG must be equal to the line height of the selected image plus one third the difference between the line height of storage area 148 and five times the number of lines in the selected image height. Thus, for the 500 line/98 line example, each burst 172" of SAG must sequentially contain 101, 101 and 102 pulses (98 + {500 - [5 x 98]}/3), since the difference of 10 divided by 3 must be rounded because there cannot be a fractional number of pulses. In this example then "d" = 101/101/102, sequentially. Therefore the power saving over the prior art for the example of Figure 12d is approximately 80% (i.e., [100 - 100 x ({c + d}/{a + a})] or [100 - 100 x ({98 + 101}/{500 + 500})]).
Referring next to the example of Figure 12e where there are four images stored in storage area 148, the number of pulses in each burst of SAG must be equal to the line height of the selected image plus one fourth the difference between the line height of storage area 148 and seven times the number of lines in the selected image height. Thus, for the 500 line/68 line example, each burst 172" of SAG must
contain 72 pulses (68 + {500 - [7 x 68]}/4), since the difference is 24 divided by 4. In this example then "d" = 72. Therefore the power saving over the prior art for the example of Figure 12e is approximately 86% (i.e., [100 - 100 x ({c + d}/{a + a})] or [100 - 100 x ({68 + 72}/{500 + 500})]). In the various preceding discussions with respect to each of the improvements of the present invention the discussion has generally been for a camera in the color mode. The various techniques and circuit implementations discussed above are also applicable to black and white operation. The only difference in each of the discussions where the various component colors were discussed is that in the black and white mode each pixel will be a gray scale pixel, and the gray scale pixels are operated on in exactly the same way as described for the component color pixels.
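The percentage savings quoted in the preceding examples follow directly from the pulse counts. The short sketch below recomputes them using the same comparison the text uses, two full 500-line bursts per frame in the prior art versus the tailored IAG and SAG bursts; the burst lengths are taken from the worked examples above (the quoted SAG counts for the multi-image cases are used as given), and the layout is only for illustration.

```python
# Recompute the vertical-clock power savings quoted above: pulses per frame
# with tailored IAG and SAG bursts versus two full 500-line bursts in the
# prior art. Burst lengths come from the worked examples in the text.

FULL_HEIGHT = 500                         # lines in the example imager

EXAMPLES = {                              # image height case -> (IAG pulses, SAG pulses)
    "420-line image (<1000 fps)": (420, 500),
    "210-line image (1000 fps)": (210, 500),
    "140-line image (2000 fps)": (140, 180),
    "98-line image (4000 fps)": (98, 101),
    "68-line image (8000 fps)": (68, 72),
}

for name, (iag_pulses, sag_pulses) in EXAMPLES.items():
    prior_art = FULL_HEIGHT + FULL_HEIGHT                  # IAG + SAG, 500 pulses each
    saving = 100 - 100 * (iag_pulses + sag_pulses) / prior_art
    print(f"{name}: ~{saving:.0f}% vertical-clock power saving")
```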
The preceding discussion has been provided to illustrate the techniques of the present invention and at least one possible implementation of each of those techniques, which individually and collectively contribute to the high speed, increased bandwidth camera design of the present invention. Given the ideas presented here, one skilled in the art would be able to derive alternative embodiments to accomplish similar results. Clearly those alternative embodiments are included within the scope of the ideas presented here either directly or as equivalents that one skilled in the art will recognize as such.
Claims
1. A high speed electronic camera comprising: a lens assembly disposed to receive an image; a CCD array having an active image receiving area with a first number of lines and a plurality of interactive terminals with said active image receiving area disposed to receive said image from said lens assembly to generate electronic signals representative of said image and an active image storage area with said first number of lines to save an image transferred from said active image area wherein each image includes a second number of lines where said second number is less than said first number; an oscillator to define the maximum signal frequency of the camera; a control subsystem to generate internal control signals; a plurality of vertical image drivers coupled to corresponding ones of said interactive terminals of said CCD array, said oscillator and said control subsystem to vertically advance image charges through said CCD array a line at a time under control of said oscillator utilizing signals received from said control subsystem; a pair of horizontal image drivers coupled to corresponding ones of said interactive terminals of said CCD array, said oscillator and said control subsystem to horizontally advance image pixel charges through, and out from, said CCD array as image pixel bit signals under control of said oscillator utilizing signals received from said control subsystem; an A/D converter stage coupled to said CCD array to convert said electronic signals from said CCD array to pixel bit streams representative of said image; and an output interface coupled to said A/D converter stage to present said pixel bit streams to a user; wherein a first of said vertical image drivers provides a first pulsed signal to said active image receiving area of said CCD with a third number of pulses in said first pulsed signal equal to said second number of lines of said image being written into said active image area; and wherein a second of said vertical image drivers provides a second pulsed signal to said active image storage area of said CCD to store said image from said active image area into said active image storage area with a fourth number of pulses in said second pulsed signal.
2. A high speed electronic camera as in claim 1 wherein: said second number is greater than one third of said first number; and said fourth number is equal to said first number.
3. A high speed camera as in claim 2 wherein said first number is 500, said second number is 420, said third number is 420, and said fourth number is 500.
4. A high speed camera as in claim 2 wherein said first number is
500, said second number is 210, said third number is 210, and said fourth number is 500.
5. A high speed camera as in claim 1 wherein: said second number is one third of said first number; and said fourth number is equal to said second number.
6. A high speed camera as in claim 5 wherein said first number is 501; said second number is 167; said third number is 167; and said fourth number is 167.
7. A high speed camera as in claim 1 wherein: said second number is less than one third of said first number; a fifth number is a difference between said first number and a maximum integer multiple of said second number wherein said difference is less than said first number; and said fourth number is equal to said second number plus an integer portion of said fifth number.
8. A high speed camera as in claim 7 wherein said first number is 500; said second number is 140; said third number is 140; said fourth number is 140 plus one half of 80; and said fifth number is 80.
9. A high speed camera as in claim 7 wherein said first number is 500; said second number is 98; said third number is 98; said fourth number is 98 plus an integerized one third of 10; and said fifth number is 10.
10. A high speed camera as in claim 7 wherein said first number is 500; said second number is 68; said third number is 68; said fourth number is 68 plus one fourth of 24; and said fifth number is 24.
11. A high speed camera as in claim 1 wherein for a first frame rate of X frames per second said second number equals G; and at a second frame rate of Y frames per second, said second number equals H, with H being equal to (X/Y)·G.
12. A high speed camera as in claim 11 wherein each line in said image is R pixels long at X frames per second; and each line in said image is S pixels long at Y frames per second, with S being equal to (X/Y)·R.
13. A method for reducing the power consumed by a CCD array as the image frame rate increases while maintaining the same operational frequency, said
CCD array having an active image receiving area with a first number of lines and a plurality of interactive terminals with said active image receiving area disposed to receive said image from a lens assembly to generate electronic signals representative of said image and an active image storage area with said first number of lines to save an image transferred from said active image area wherein each image includes a second number of lines with said second number being less than said first number, a pair of vertical image drivers coupled to said CCD array to vertically advance image charges through said active image receiving area and said active image storage area of said CCD array a line at a time, a horizontal image driver coupled to said active image storage area to advance said image pixel charges out therefrom as image pixel bit signals under control of a fixed frequency oscillator utilizing signals from a control subsystem, said method including the steps of: a. a first of said vertical image drivers providing a first pulsed signal to said active image receiving area of said CCD with a third number of pulses in said first pulsed signal equal to said second number of lines of said image being written into said active image area; and b. a second of said vertical image drivers providing a second pulsed signal to said active image storage area of said CCD to store said image from said active image area into said active image storage area with a fourth number of pulses in said second pulsed signal.
14. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 13 wherein: said second number is greater than one third of said first number; and said fourth number is equal to said first number.
15. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 14 wherein said first number is 500, said second number is 420, said third number is 420, and said fourth number is 500.
16. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 14 wherein said first number is 500, said second number is 210, said third number is 210, and said fourth number is 500.
17. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 13 wherein: said second number is one third of said first number; and said fourth number is equal to said second number.
18. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 17 wherein said first number is 501; said second number is 167; said third number is 167; and said fourth number is 167.
19. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 13 wherein: said second number is less than one third of said first number; a fifth number is a difference between said first number and a maximum integer multiple of said second number wherein said difference is less than said first number; and said fourth number is equal to said second number plus an integer portion of said fifth number.
20. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 19 wherein said first number is 500; said second number is 140; said third number is 140; said fourth number is 140 plus one half of 80; and said fifth number is 80.
21. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 19 wherein said first number is 500; said second number is 98; said third number is 98; said fourth number is 98 plus an integerized one third of 10; and said fifth number is 10.
22. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 19 wherein said first number is 500; said second number is 68; said third number is 68; said fourth number is 68 plus one fourth of 24; and said fifth number is 24.
23. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 13 wherein for a first frame rate of X frames per second said second number equals G; and at a second frame rate of Y frames per second, said second number equals H, with H being equal to (X/Y)·G.
24. A method for reducing the power consumed by a CCD array as the image frame rate increases as in claim 23 wherein each line in said image is R pixels long at X frames per second; and each line in said image is S pixels long at Y frames per second, with S being equal to (X/Y)·R.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US33804699A | 1999-06-22 | 1999-06-22 | |
| US09/338,046 | 1999-06-22 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2000079786A1 true WO2000079786A1 (en) | 2000-12-28 |
Family
ID=23323184
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2000/013763 Ceased WO2000079786A1 (en) | 1999-06-22 | 2000-05-19 | Reduced power, high speed, increased bandwidth camera |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2000079786A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1942659A1 (en) * | 2006-12-12 | 2008-07-09 | Axis AB | Improved method for capturing image data |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4426664A (en) * | 1981-04-27 | 1984-01-17 | Sony Corporation | Solid state image sensor |
| US4799108A (en) * | 1986-08-19 | 1989-01-17 | Kappa Messtechnik Gmbh | Method of recording and storing images in rapid sequence |
| US4890165A (en) * | 1987-04-06 | 1989-12-26 | Canon Kabushiki Kaisha | Image pick-up apparatus for producing video signals of high information density in time base direction |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6160578A (en) | High speed, increased bandwidth camera | |
| AU2004219236B2 (en) | High frame rate high definition imaging system and method | |
| EP3389258B1 (en) | Solid-state imaging device, driving method, and electronic device | |
| US4910599A (en) | Imaging apparatus having electronic zooming with high and low speed readout | |
| US7880790B2 (en) | Image-signal processing apparatus for use in combination with an image sensor | |
| CN1034466C (en) | Chroma Processing System | |
| DE69127950T2 (en) | Digital color signal processing with clock signal control for a video camera | |
| US5621477A (en) | Digital decoder and method for decoding composite video signals | |
| WO2000079786A1 (en) | Reduced power, high speed, increased bandwidth camera | |
| US4527190A (en) | Mixing circuit | |
| US7548265B2 (en) | Image pickup apparatus and image pickup method including clocks | |
| US6166779A (en) | Method for analog decimation of image signals | |
| JP4164878B2 (en) | Imaging apparatus and control method thereof | |
| US6989809B1 (en) | Liquid crystal display | |
| JP3180624B2 (en) | Television camera equipment | |
| JPH07129124A (en) | Picture element arrangement display device | |
| JP4549040B2 (en) | Imaging device | |
| JP3018710B2 (en) | CCD delay line device | |
| JPH099149A (en) | Ccd image pickup signal processing circuit | |
| JP2001177773A (en) | Drive timing generation circuit | |
| JPH0723295A (en) | Imaging device | |
| JPH0583723A (en) | Impulse noise removal circuit for video signals | |
| JPH11341360A (en) | Television camera | |
| JPH0792946A (en) | Display signal generator | |
| JPH10257508A (en) | High-speed imaging device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA JP |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |