HK1207549B - Continuous video in a light deficient environment
Description
Cross Reference to Related Applications
This application claims the benefit of U.S. Provisional Patent Application No. 61/676,289, filed on July 26, 2012, and U.S. Provisional Patent Application No. 61/790,487, filed on March 15, 2013, which are hereby incorporated by reference herein in their entireties, including but not limited to those portions that specifically appear hereinafter, with the following exception: in the event that any portion of the above-referenced provisional applications is inconsistent with this application, this application supersedes the above-referenced provisional applications.
Background
Advances in technology have provided advances in imaging capabilities for medical use. One area that has enjoyed some of the most beneficial advances is that of endoscopic surgical procedures, because of the advances in the components that make up an endoscope.
The present disclosure relates generally to electromagnetic sensing and sensors. The present disclosure also relates to low energy electromagnetic input conditions, as well as low energy electromagnetic throughput conditions. The present disclosure relates more particularly, but not necessarily entirely, to systems for producing an image in light deficient environments and associated structures, methods and features, which may include controlling a light source through duration, intensity or both; pulsing a component-controlled light source during the blanking period of the sensor; and maximizing the blanking period to allow optimum light while maintaining color balance.
Features and advantages of the present disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the present disclosure without undue experimentation. The features and advantages of the disclosure may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims.
Drawings
Non-limiting and non-exhaustive implementations of the present disclosure are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified. The advantages of the present disclosure will be better understood with reference to the following description and the accompanying drawings, in which:
FIG. 1 is a schematic view of an embodiment of a system, having a paired sensor and an electromagnetic emitter in operation, for producing an image in a light deficient environment, made in accordance with the principles and teachings of the present disclosure;
FIG. 2 is a schematic diagram of contemplated system hardware;
FIGS. 2A-2D are illustrations of an operating cycle of a sensor for constructing an image frame according to the principles and teachings of the present disclosure;
FIG. 3 is a graphical representation of the operation of an embodiment of an electromagnetic transmitter in accordance with the principles and teachings of the present disclosure;
FIG. 4 is a graphical representation of varying the duration and amplitude of a transmitted electromagnetic pulse to provide exposure control in accordance with the principles and teachings of the present disclosure;
FIG. 5 is a graphical representation of an embodiment of the present disclosure combining the operating cycle of the sensor, the electromagnetic emitter, and the emitted electromagnetic pulses of FIGS. 2-4 to illustrate the imaging system during operation, in accordance with the principles and teachings of the present disclosure;
FIG. 6 is a schematic diagram of two distinct processes over a period of time from t(0) to t(1) for recording a frame of video for full spectrum light and partitioned spectrum light in accordance with the principles and teachings of the present disclosure;
FIGS. 7A-7E illustrate schematic views of processes over a period of time for recording a frame of video for both full spectrum light and partitioned spectrum light, in accordance with the principles and teachings of the present disclosure;
FIGS. 8-12 illustrate the adjustment of both an electromagnetic emitter and a sensor, wherein such adjustment may be accomplished simultaneously in some embodiments, in accordance with the principles and teachings of the present disclosure;
FIGS. 13-21 illustrate sensor calibration methods and hardware schematics for use with a partitioned light system, in accordance with the principles and teachings of the present disclosure;
FIGS. 22-23 illustrate method and hardware schematics for increasing dynamic range within a closed or limited light environment, in accordance with the principles and teachings of the present disclosure;
FIG. 24 illustrates the impact on signal-to-noise ratio of color correction for a typical Bayer-based sensor compared with no color correction;
FIG. 25 illustrates the chromaticity of 3 monochromatic lasers compared with the sRGB gamut;
FIGS. 26-27B illustrate a method and hardware schematics for increasing dynamic range within a closed or limited light environment, in accordance with the principles and teachings of the present disclosure;
FIGS. 28A-28C illustrate the use of a white light emission that is pulsed and/or synced with a corresponding color sensor;
FIGS. 29A and 29B illustrate an implementation having a plurality of pixel arrays for producing a three-dimensional image, in accordance with the teachings and principles of the present disclosure;
FIGS. 30A and 30B illustrate a perspective view and a side view, respectively, of an implementation of an imaging sensor built on a plurality of substrates, wherein a plurality of pixel columns forming the pixel array are located on the first substrate and a plurality of circuit columns are located on a second substrate, and showing the electrical connection and communication between one column of pixels and its associated or corresponding column of circuitry; and
FIGS. 31A and 31B illustrate a perspective view and a side view, respectively, of an implementation of an imaging sensor having a plurality of pixel arrays for producing a three-dimensional image, wherein the plurality of pixel arrays and the image sensor are built on a plurality of substrates; and
FIGS. 32-36 illustrate embodiments of emitters comprising various mechanical filter and shutter configurations.
Detailed Description
The present disclosure relates to methods, systems, and computer-based products directed to digital imaging that may be primarily applicable to medical applications. In the following description of the present disclosure, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration embodiments in which the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
Conventional endoscopes, used in, for example, arthroscopy and laparoscopy, are designed such that the image sensor is typically placed within a handpiece unit. In such a configuration, the endoscope unit must transmit the incident light along its length toward the sensor via a complex set of precisely coupled optical components, with minimal loss and distortion. The cost of the endoscope unit is dominated by the optics, since the components are expensive and the manufacturing process is labor intensive. Moreover, this type of scope is mechanically delicate, and relatively minor impacts can easily damage the components or disturb their relative alignment, thereby causing extensive light loss and rendering the scope unusable. This necessitates frequent, expensive repair cycles in order to maintain image quality. One solution to this issue is to place the image sensor within the endoscope itself at the distal end, thereby potentially approaching the optical simplicity, robustness, and economy that are universally realized within, for example, mobile phone cameras. An acceptable solution for this approach is by no means trivial, however, as it introduces its own set of engineering challenges, not least of which is the fact that the sensor must fit within a highly confined area, especially in the X and Y dimensions, while there is more freedom in the Z dimension.
Placing aggressive constraints on sensor area naturally results in fewer and/or smaller pixels within the pixel array. Lowering the pixel count may directly affect the spatial resolution, while reducing the pixel area may reduce the available signal capacity, and thereby the sensitivity of the pixel, as well as lowering the signal-to-noise ratio (SNR) of each pixel. Lowering the signal capacity reduces the dynamic range, i.e., the ability of the imaging device or camera to simultaneously capture all of the useful information from scenes with large ranges of luminosity. There are various methods to extend the dynamic range of an imaging system beyond that of the pixels themselves. All of them may come with some kind of penalty, however (e.g., in resolution or frame rate), and they can introduce undesirable artifacts, which become problematic in extreme cases. Reducing the sensitivity has the consequence that greater light power is required to bring the darker regions of the scene up to acceptable signal levels. Lowering the f-number (enlarging the aperture) can compensate for a loss in sensitivity, but at the cost of greater spatial distortion and reduced depth of focus.
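By way of illustration of the signal-capacity point above, dynamic range can be expressed as the ratio of a pixel's full-well capacity to its read noise. The Python sketch below shows that relationship; the electron counts are illustrative assumptions, not values for any particular sensor.

```python
import math

def dynamic_range_db(full_well_electrons: float, read_noise_electrons: float) -> float:
    """Dynamic range expressed in dB as the ratio of the pixel's
    full-well capacity to its read-noise floor."""
    return 20.0 * math.log10(full_well_electrons / read_noise_electrons)

# Illustrative (assumed) values: shrinking the pixel cuts full-well
# capacity, which directly narrows the usable dynamic range.
print(dynamic_range_db(10000, 3))  # larger pixel  -> ~70.5 dB
print(dynamic_range_db(2500, 3))   # smaller pixel -> ~58.4 dB
```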
In the sensor industry, CMOS image sensors have largely displaced conventional CCD image sensors in modern camera applications, such as endoscopy, owing to their greater ease of integration and operation, superior or comparable image quality, greater versatility, and lower cost. Typically, they may include the circuitry necessary to convert the image information into digital data, with various levels of digital processing incorporated thereafter. This may range from basic algorithms for the purpose of correcting non-idealities, which may arise, for example, from variations in amplifier behavior, to full Image Signal Processing (ISP) chains providing video data in the standard sRGB color space, for example (cameras-on-chip).
If the control unit or second stage is remote from the sensor, and sits at an appreciable distance from it, it may be desirable to transmit the data in the digital domain, because the digital domain is largely immune to interference noise and signal degradation when compared to transmitting an analog data stream. It will be appreciated that various electrical digital signaling standards may be used, such as LVDS (low voltage differential signaling), sub-LVDS, SLVS (scalable low voltage signaling), or other electrical digital signaling standards.
There may be a strong desire to minimize the number of electrical conductors in order to reduce the number of pads on the sensor that consume space, and to reduce the complexity and cost of sensor manufacture. The addition of analog-to-digital conversion to the sensor may be advantageous because the additional area taken up by the conversion circuitry is offset by the significant reduction in analog buffering power needed due to the early conversion to a digital signal.
In terms of area consumption, given the typical feature sizes available in CMOS Image Sensor (CIS) technologies, it may be preferable in some implementations to have all of the internal logic signals generated on the same chip as the pixel array via a set of control registers and a simple command interface.
Some implementations of the present disclosure may include aspects of a combined sensor and system design that allows for high definition imaging with reduced pixel counts in a highly controlled illumination environment. This may be accomplished by virtue of frame-by-frame pulsing of a single color wavelength and switching or alternating each frame between a single, different color wavelength, using a controlled light source in conjunction with a high frame capture rate and a specially designed corresponding monochromatic sensor. As used herein, a monochromatic sensor refers to an unfiltered imaging sensor. Since the pixels are color agnostic, the effective spatial resolution is appreciably higher than for their color (typically Bayer-pattern filtered) counterparts in conventional single-sensor cameras. They may also have higher quantum efficiency, since far fewer incident photons are wasted between the individual pixels. Moreover, Bayer-based spatial color modulation requires that the Modulation Transfer Function (MTF) of the accompanying optics be lowered compared with monochromatic modulation, in order to blur out the color artifacts associated with the Bayer pattern. This has a detrimental impact on the actual spatial resolution that can be realized with color sensors.
The present disclosure is also concerned with a system solution for endoscopy applications in which the image sensor is resident at the distal end of the endoscope. In striving for a minimal-area sensor-based system, there are other design aspects that can be developed beyond reduction of the pixel count. The area of the distal portion of the chip can be minimized. In addition, the number of connections to the chip (pads) can be minimized. The present disclosure describes novel methods for accomplishing these goals for such a system. This involves the design of a fully custom CMOS image sensor with several novel features.
For the purposes of promoting an understanding of the principles in accordance with the present disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Any alterations and further modifications of the inventive features illustrated herein, and any additional applications of the principles of the disclosure as illustrated herein, which would normally occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the disclosure claimed.
Before the structures, systems, and methods for producing images in light deficient environments are disclosed and described, it is to be understood that this disclosure is not limited to the particular structures, configurations, process steps, and materials disclosed herein as such may vary somewhat. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting, since the scope of the present disclosure will be limited only by the appended claims and equivalents thereof.
When describing and claiming the subject matter of the present disclosure, the following terminology will be used in accordance with the definitions set forth below.
It must be noted that, as used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.
As used herein, the terms "comprises," "comprising," "includes," "including," "characterized by," and grammatical equivalents thereof do not exclude the presence of additional, unrecited elements or method steps that are inclusive or open-ended.
As used herein, the phrase "consisting of … …" and grammatical equivalents thereof excludes any elements, steps, or components that are not specified in the claims.
As used herein, the phrase "consisting essentially of … …" and grammatical equivalents thereof, limits the scope of the claims to specific substances or steps and those substances or steps that do not materially affect the basic and novel characteristics of the disclosure as claimed.
As used herein, the term "proximal" shall refer broadly to the concept of the portion closest to the origin.
As used herein, the term "distal" shall generally refer to the opposite of proximal, and thus refers to the concept of a portion further away or farthest away from the origin, depending on the context.
As used herein, color sensors or multi-spectrum sensors are those sensors known to have a Color Filter Array (CFA) thereon to filter the incoming electromagnetic radiation into its separate components. Such a CFA may be built on the Bayer pattern or a modification thereof in order to separate the green, red, and blue spectrum components of light in the visible range of the electromagnetic spectrum.

Referring now to FIGS. 1-5, a system and method for producing an image in a light deficient environment will now be described. FIG. 1 shows a schematic diagram of a paired sensor and an electromagnetic emitter in operation for producing an image in a light deficient environment. This configuration allows for increased functionality in light controlled or ambient light deficient environments.
It should be noted that, as used herein, the term "light" is both a particle and a wavelength, and is intended to denote electromagnetic radiation that is detectable by the pixel array, and may include wavelengths from the visible and non-visible spectrums of electromagnetic radiation. The term "partition" is used herein to mean a predetermined range of wavelengths of the electromagnetic spectrum that is less than the entire spectrum, or in other words, wavelengths that make up some portion of the electromagnetic spectrum. As used herein, an emitter is a light source that may be controllable as to the portion of the electromagnetic spectrum that is emitted, or that may operate as to the physics of its components, the intensity of its emissions, or the duration of its emissions, or all of the above. An emitter may emit light in any dithered, diffused, or collimated emission, and may be controlled digitally or through analog methods or systems. As used herein, an electromagnetic emitter is a source of a burst of electromagnetic energy and includes light sources, such as lasers, LEDs, incandescent light, or any light source that can be digitally controlled.
The pixel array of an image sensor may be paired with an emitter electronically, such that they are synced during operation for both receiving the emissions and for the adjustments made within the system. As can be seen in FIG. 1, an emitter 100 may be tuned to emit electromagnetic radiation in the form of a laser, which may be pulsed to illuminate an object 110. The emitter 100 may pulse at an interval that corresponds to the operation and functionality of a pixel array 122. The emitter 100 may pulse light in a plurality of electromagnetic partitions 105, such that the pixel array receives electromagnetic energy and produces a data set that corresponds in time with each specific electromagnetic partition 105. For example, FIG. 1 illustrates a system having a monochromatic sensor 120 with supporting circuitry, the monochromatic sensor 120 having a pixel array (black and white) 122, the pixel array 122 being sensitive to electromagnetic radiation of any wavelength. The emitter 100 illustrated in the figure may be a laser emitter that is capable of emitting red, blue, and green electromagnetic partitions 105a, 105b, and 105c in any desired sequence. It will be appreciated that other emitters 100, such as digital- or analog-based emitters, may be used in FIG. 1 without departing from the scope of the present disclosure.
During operation, the data created by the monochromatic sensor 120 for any individual pulse may be assigned a specific color partition, wherein the assignment is based on the timing of the pulsed color partition from the emitter 100. Even though the pixels 122 are not color-dedicated, they can be assigned a color for any given data set based on a priori information about the emitter.
In one embodiment, three data sets representing RED, GREEN, and BLUE electromagnetic pulses may be combined to form a single image frame. It will be appreciated that the present disclosure is not limited to any particular color combination or any particular electromagnetic partition, and that any color combination or any electromagnetic partition may be used in place of RED, GREEN, and BLUE, such as cyan, magenta, and yellow; ultraviolet; infrared; any combination of the foregoing; or any other color combination, including all visible and non-visible wavelengths. In the figure, the object 110 to be imaged contains a red portion 110a, a green portion 110b, and a blue portion 110c. As illustrated in the figure, the reflected light from the electromagnetic pulses contains only the data for the portion of the object having the specified color that corresponds to the pulsed color partition. Those separate color (or color partition) data sets can then be used to reconstruct the image by combining the data sets at 130.
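The combination at 130 can be pictured as stacking three monochrome captures, one per pulsed partition, into a single color frame. The following is a minimal Python sketch of that step, assuming the three captures are already spatially aligned; the array shape and bit depth are illustrative assumptions.

```python
import numpy as np

def reconstruct_frame(red_data: np.ndarray,
                      green_data: np.ndarray,
                      blue_data: np.ndarray) -> np.ndarray:
    """Stack three monochrome captures, one per pulsed color partition,
    into a single RGB image frame (axes: height, width, channel)."""
    return np.stack([red_data, green_data, blue_data], axis=-1)

# Each input is a full-resolution monochrome readout taken while the
# corresponding partition was pulsed (8-bit data assumed for illustration).
h, w = 480, 640
r = np.zeros((h, w), dtype=np.uint8)
g = np.zeros((h, w), dtype=np.uint8)
b = np.zeros((h, w), dtype=np.uint8)
frame = reconstruct_frame(r, g, b)
print(frame.shape)  # (480, 640, 3)
```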
As illustrated in FIG. 2, implementations of the present disclosure may comprise or utilize a special-purpose or general-purpose computer, including computer hardware, such as one or more processors and system memory, as discussed in greater detail below. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) include RAM, ROM, EEPROM, CD-ROM, solid state drives ("SSDs") (e.g., based on RAM), flash memory, phase change memory ("PCM"), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. In one implementation, the sensors and camera control units may be networked to communicate with each other and with other components connected on the network to which they are connected. When information is transferred or provided over a network or another communication connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired computer program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. RAM can also include solid state drives (SSDs or PCIx-based real-time memory tiered storage, such as FusionIO). Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, control units, camera control units, hand-held devices, cell phones, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. It should be noted that any of the above-mentioned computing devices may be provided by or located within a physical location. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Also, where appropriate, the functions described herein may be performed in one or more of the following devices: hardware, software, firmware, digital components, or analog components. For example, one or more Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) may be programmed to perform one or more of the systems and procedures described herein. Certain terms are used throughout the following description and are required to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name but not function.
Fig. 2 is a block diagram illustrating an exemplary computing device 150. Computing device 150 may be used to execute various programs, such as those discussed herein. Computing device 150 may function as a server, a client, or any other computing entity. Computing device 150 may perform various monitoring functions as discussed herein and may execute one or more applications, such as the applications described herein. The computing device 150 may be any of a variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, a camera control unit, a tablet computer, and so forth.
Computing device 150 includes one or more processors 152, one or more memory devices 154, one or more interfaces 156, one or more mass storage devices 158, one or more input/output (I/O) devices 160, and a display device 180, all coupled to a bus 162. The one or more processors 152 include one or more processors or controllers that execute instructions stored in the one or more memory devices 154 and/or the one or more mass storage devices 158. The one or more processors 152 may also include various types of computer-readable media, such as cache memory.
The one or more memory devices 154 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 164) and/or non-volatile memory (e.g., read-only memory (ROM) 166). The one or more memory devices 154 may also include rewritable ROM, such as flash memory.
The one or more mass storage devices 158 include a variety of computer-readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., flash memory), and so forth. As shown in FIG. 2, the particular mass storage device is a hard disk drive 174. The various drives can also be included in one or more mass storage devices 158 to be able to read from and/or write to various computer readable media. The one or more mass storage devices 158 include removable media 176 and/or non-removable media.
The one or more I/O devices 160 include various devices that allow data and/or other information to be input into the computing device 150 or obtained from the computing device 150. For example, one or more exemplary I/O devices 160 include: digital imaging devices, electromagnetic sensors and transmitters, cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Display device 180 includes any type of device capable of displaying information to one or more users of computing device 150. Examples of display device 180 include a monitor, a display terminal, a video projection device, and the like.
The one or more interfaces 156 include various interfaces that allow the computing device 150 to interact with other systems, devices, or computing environments. One or more example interfaces 156 may include any number of different network interfaces 170, such as interfaces to a Local Area Network (LAN), a Wide Area Network (WAN), a wireless network, and the Internet. The one or more other interfaces include a user interface 168 and a peripheral device interface 172. The one or more interfaces 156 may also include one or more user interface elements 168. The one or more interfaces 156 may also include one or more peripheral interfaces, such as interfaces for printers, pointing devices (mouse, track pad, etc.), keyboards, and the like.
The bus 162 allows the one or more processors 152, the one or more memory devices 154, the one or more interfaces 156, the one or more mass storage devices 158, and the one or more I/O devices 160 to communicate with each other and with other devices or components coupled to the bus 162. Bus 162 represents one or more of any of several types of bus structures, such as a system bus, a PCI bus, an IEEE 1394 bus, a USB bus, and so forth.
For purposes of illustration, programs and other executable program components are illustrated herein as discrete blocks, although it is understood that such programs and components may exist at different times in different storage components of the computing device 150 and be executed by one or more processors 152. Alternatively, the systems and processes described herein may be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) may be programmed to perform one or more of the systems and procedures described herein.
FIG. 2A illustrates the operational cycle of a sensor used in a rolling readout mode, or during sensor readout 200. Frame readout may start at and may be represented by vertical line 210. The readout period is represented by the diagonal or slanted line 202. The sensor may be read out row by row, the top of the downward slanted edge being the sensor top row 212 and the bottom of the downward slanted edge being the sensor bottom row 214. The time between the last row readout and the next readout period may be called the blanking time 216. It should be noted that some of the sensor pixel rows might be covered with a light shield (e.g., a metal coating or any other substantially black layer of another material type). These covered pixel rows may be referred to as optical black rows 218 and 220. The optical black rows 218 and 220 may be used as input for correction algorithms. As shown in FIG. 2A, these optical black rows 218 and 220 may be located on the top of the pixel array, at the bottom of the pixel array, or at both the top and the bottom.

FIG. 2B illustrates a process of controlling the amount of electromagnetic radiation (e.g., light) that is exposed to, and thus integrated or accumulated by, a pixel. It will be appreciated that photons are elementary particles of electromagnetic radiation. Photons are integrated, absorbed, or accumulated by each pixel and converted into an electrical charge or current. An electronic shutter or rolling shutter (shown by dashed line 222) may be used to start the integration time by resetting the pixel. The light will then integrate until the next readout phase. The position of the electronic shutter 222 can be moved between two readout periods 202 in order to control the pixel saturation for a given amount of light. Note that this technique allows for a constant integration time between two different rows, but introduces a delay when moving from top rows to bottom rows.

FIG. 2C illustrates the case where the electronic shutter 222 has been removed. In this configuration, the integration of incoming light may start during readout 202 and may end at the next readout period 202, which also defines the start of the next integration.

FIG. 2D shows a configuration without an electronic shutter 222, but with a controlled and pulsed light 230 during the blanking time 216. This ensures that all rows see the same light issued from the same light pulse 230. In other words, each row will start its integration in a dark environment, which may be at the optical black back row 220 of read-out frame (m) for a maximum light pulse width, will then receive a light strobe, and will end its integration in a dark environment, which may be at the optical black front row 218 of the next succeeding read-out frame (m+1) for a maximum light pulse width. In the FIG. 2D example, the image generated from the light pulse will be solely available during frame (m+1) readout, without any interference with frames (m) and (m+2). It should be noted that the condition for a light pulse to be read out in only one frame, without interference with neighboring frames, is for the given light pulse to be fired during the blanking time 216. Because the optical black rows 218, 220 are insensitive to light, the optical black back rows 220 time of frame (m) and the optical black front rows 218 time of frame (m+1) can be added to the blanking time 216 to determine the maximum range of the firing time of the light pulse 230.

As illustrated in FIG. 2A, the sensor may be cycled many times in order to receive data for each pulsed color (e.g., red, green, blue). Each cycle may be timed. In one embodiment, the cycles may be timed to operate within an interval of 16.67 ms. In another embodiment, the cycles may be timed to operate within an interval of 8.3 ms. It will be appreciated that other timing intervals are contemplated by the present disclosure and are intended to fall within its scope.
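The condition described for FIG. 2D, namely that a pulse must fire within the blanking time extended by the optical black row times on either side, lends itself to a simple timing computation. The Python sketch below illustrates it; all millisecond values are illustrative assumptions rather than timings of any disclosed sensor.

```python
def max_pulse_window_ms(blanking_ms: float,
                        ob_back_rows_ms: float,
                        ob_front_rows_ms: float) -> float:
    """Maximum firing window for a light pulse that must be captured in a
    single frame: the blanking time plus the readout time of the optical
    black rows bounding it, since those rows are not light sensitive."""
    return ob_back_rows_ms + blanking_ms + ob_front_rows_ms

# Assumed timings for a sensor cycling at 120 fps (about 8.3 ms per cycle).
cycle_ms = 1000.0 / 120.0
window_ms = max_pulse_window_ms(blanking_ms=2.0,
                                ob_back_rows_ms=0.1,
                                ob_front_rows_ms=0.1)
print(f"cycle: {cycle_ms:.2f} ms, max pulse width: {window_ms:.2f} ms")
```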
FIG. 3 graphically illustrates the operation of an embodiment of an electromagnetic emitter. An emitter may be timed to correspond with the cycles of a sensor, such that electromagnetic radiation is emitted within the sensor operation cycle and/or during a portion of the sensor operation cycle. FIG. 3 illustrates Pulse 1 at 302, Pulse 2 at 304, and Pulse 3 at 306. In one embodiment, the emitter may pulse during the readout portion 202 of the sensor operation cycle. In one embodiment, the emitter may pulse during the blanking portion 216 of the sensor operation cycle. In one embodiment, the emitter may pulse for a duration that spans portions of two or more sensor operational cycles. In one embodiment, the emitter may begin a pulse during the blanking portion 216, or during the optical black portion 220 of the readout period 202, and end the pulse during the readout period 202, or during the optical black portion 218 of the readout period 202 of the next succeeding cycle. It will be understood that any combination of the above is intended to fall within the scope of the present disclosure, as long as the pulse of the emitter corresponds with the cycle of the sensor.
FIG. 4 graphically represents varying the duration and magnitude of the emitted electromagnetic pulses (e.g., Pulse 1 at 402, Pulse 2 at 404, and Pulse 3 at 406) to control exposure. An emitter having a fixed output magnitude may be pulsed during any of the cycles noted above in relation to FIGS. 2D and 3 for an interval to provide the needed electromagnetic energy to the pixel array. An emitter having a fixed output magnitude may be pulsed for a longer interval of time, thereby providing more electromagnetic energy to the pixels, or the emitter may be pulsed for a shorter interval of time, thereby providing less electromagnetic energy. Whether a longer or shorter interval is needed depends upon the operational conditions.
In contrast to adjusting the interval of time over which the emitter pulses at a fixed output magnitude, the magnitude of the emission itself may be increased in order to provide more electromagnetic energy to the pixels. Similarly, decreasing the magnitude of the pulse provides less electromagnetic energy to the pixels. It should be noted that an embodiment of the system may have the ability to adjust both magnitude and duration, if desired. Additionally, the sensor may be adjusted to increase its sensitivity and duration as desired for optimal image quality. FIG. 4 illustrates varying the magnitude and duration of the pulses. In the illustration, Pulse 1 at 402 has a higher magnitude or intensity than either Pulse 2 at 404 or Pulse 3 at 406. Additionally, Pulse 1 at 402 has a shorter duration than Pulse 2 at 404 or Pulse 3 at 406, such that the electromagnetic energy provided by the pulse is illustrated by the area under the pulse shown in the illustration. In the illustration, Pulse 2 at 404 has a relatively low magnitude or intensity and a longer duration when compared to either Pulse 1 at 402 or Pulse 3 at 406. Finally, in the illustration, Pulse 3 at 406 has an intermediate magnitude or intensity and duration when compared to Pulse 1 at 402 and Pulse 2 at 404.
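Because exposure is governed by the area under the pulse, magnitude and duration are interchangeable control knobs. The following minimal Python sketch illustrates the trade-off, loosely patterned on the three pulses of FIG. 4; the numeric magnitudes and durations are illustrative assumptions.

```python
def pulse_energy(magnitude: float, duration_ms: float) -> float:
    """For a rectangular pulse, delivered energy is proportional to the
    area under the pulse: magnitude x duration."""
    return magnitude * duration_ms

# The same exposure can be reached by trading magnitude against duration,
# mirroring Pulses 1-3 of FIG. 4 (values are illustrative assumptions).
p1 = pulse_energy(magnitude=1.0, duration_ms=2.0)   # tall and short
p2 = pulse_energy(magnitude=0.4, duration_ms=5.0)   # low and long
p3 = pulse_energy(magnitude=0.8, duration_ms=2.5)   # intermediate
print(p1, p2, p3)  # 2.0 2.0 2.0 -- equal areas, equal exposure
```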
Fig. 5 is a graphical representation of an embodiment of the present disclosure combining an operating cycle, an electromagnetic emitter, and the emitted electromagnetic pulses of fig. 2-4 to illustrate an imaging system during operation, in accordance with the principles and teachings of the present disclosure. As can be seen in the figure, the electromagnetic emitter pulses the emission primarily during the blanking period 216 of the sensor so that the pixels will be charged and ready to read during the readout portion 202 of the sensor cycle. If additional time is needed or desired to pulse the electromagnetic energy, the dashed portions in the pulse (from FIG. 3) illustrate the possibility or ability to emit electromagnetic energy during the optical black portions 220 and 218 of the read period (sensor period) 200.
Referring now to FIGS. 6-9, FIG. 6 illustrates a schematic of two distinct processes over a period of time from t(0) to t(1) for recording a frame of video for full spectrum light and partitioned spectrum light. It should be noted that a color sensor has a Color Filter Array (CFA) thereon to filter out certain wavelengths of light per pixel, and it is commonly used for full spectrum light reception. An example of a CFA is the Bayer pattern. Because a color sensor may comprise pixels within the array that are made sensitive to a single color from within the full spectrum, a reduced resolution image results, since the pixel array has pixel spaces dedicated to only a single color of light within the full spectrum. Usually such an arrangement is formed in a checkerboard-type pattern across the array.
In contrast, when partitioned spectrums of light are used, a sensor can be made sensitive or responsive to the magnitude of all light energy, because the pixel array will be instructed to sense electromagnetic energy from a predetermined partition of the full spectrum of electromagnetic energy in each cycle. Therefore, to form an image, the sensor need only be cycled through a plurality of differing partitions from within the full spectrum of light, and the image can then be reassembled to display a predetermined mixture of color values for every pixel across the array. Accordingly, a higher resolution image is also provided, because there are reduced distances, as compared to a Bayer sensor, between pixel centers of the same color sensitivity for each of the color pulses. As a result, the formed colored image has a higher Modulation Transfer Function (MTF). Because the image from each color partition frame cycle has a higher resolution, the resultant image created when the partitioned light frames are combined into a full color frame also has a higher resolution. In other words, because each and every pixel within the array (instead of, at most, every second pixel in a sensor with a color filter) is sensing the magnitudes of energy for a given pulse and a given scene, just fractions of time apart, a higher resolution image is created for each scene, with less derived (less accurate) data needing to be introduced.
For example, white or full spectrum visible light is a combination of red, green, and blue light. In the embodiment shown in FIG. 6, it can be seen that in both the partitioned spectrum process 620 and the full spectrum process 610, the time to capture an image is t(0) to t(1). In the full spectrum process 610, white light or full spectrum electromagnetic energy is emitted at 612. At 614, the white or full spectrum electromagnetic energy is sensed. At 616, the image is processed and displayed. Thus, between time t(0) and t(1), the image has been processed and displayed. Conversely, in the partitioned spectrum process 620, a first partition is emitted at 622 and sensed at 624. At 626, a second partition is emitted and then sensed at 628. At 630, a third partition is emitted and sensed at 632. At 634, the image is processed and displayed. It will be appreciated that any system using an image sensor cycle that is at least two times faster than the white light cycle is intended to fall within the scope of the present disclosure.
As can be seen graphically in the embodiment illustrated in FIG. 6 between times t(0) and t(1), the sensor for the partitioned spectrum system 620 has cycled three times for every one of the full spectrum system's cycles. In the partitioned spectrum system 620, the first of the three sensor cycles is for the green spectrum at 622 and 624, the second of the three is for the red spectrum at 626 and 628, and the third is for the blue spectrum at 630 and 632. Thus, in one embodiment in which the display device (LCD panel) operates at 50-60 frames per second, the partitioned light system should operate at 150-180 frames per second to maintain the continuity and smoothness of the displayed video.
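The rate relationship just described follows directly from the number of partitions combined into each displayed frame, as the following Python sketch illustrates (assuming three partitions per full color frame):

```python
def required_sensor_fps(display_fps: float, partitions_per_frame: int) -> float:
    """A partitioned spectrum sensor must cycle once per color partition
    for every displayed frame, so its rate is the display rate times the
    number of partitions combined into each full color frame."""
    return display_fps * partitions_per_frame

# Three partitions (e.g., G, R, B) at 50-60 displayed frames per second:
print(required_sensor_fps(50, 3))  # 150.0
print(required_sensor_fps(60, 3))  # 180.0
```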
In other embodiments, there may be different acquisition and display frame rates. Furthermore, the average acquisition rate may be any multiple of the display rate.
In one embodiment, it may be desired that not all partitions be represented equally within the system frame rate. In other words, not all light sources have to be pulsed with the same regularity, so as to emphasize and de-emphasize aspects of the recorded scene as desired by the user. It should also be understood that non-visible and visible partitions of the electromagnetic spectrum may be pulsed together within a system, with their respective data value(s) being stitched into the video output as desired for display to a user.
Embodiments may include the following pulse cycle patterns:
a green pulse;
a red pulse;
a blue pulse;
a green pulse;
a red pulse;
a blue pulse;
infrared (IR) pulses;
(repeat).
As can be seen in this example, an IR partition may be pulsed at a rate differing from the rates of the other partition pulses. This may be done to emphasize a certain aspect of the scene, with the IR data simply being overlaid with the other data in the video output to make the desired emphasis. It should be noted that the addition of a fourth electromagnetic partition does not necessarily require the serialized system to operate at four times the rate of a full spectrum non-serial system, because every partition does not have to be represented equally in the pulse pattern. As seen in this embodiment, the addition of a partition pulse that is represented less often in the pulse pattern (the IR in the above example) would result in an increase of less than 20% in the cycling speed of the sensor in order to accommodate the irregular partition sampling.
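A short Python sketch of such a pulse schedule follows, using the pattern listed above; the under-20% figure corresponds to the repeating cycle growing from six pulses to seven. The pattern itself comes from the example; everything else is illustrative.

```python
from itertools import cycle, islice

# The repeating pattern from the example above: IR is pulsed once for
# every two G-R-B cycles, so it is represented less often than the
# other partitions.
PATTERN = ["G", "R", "B", "G", "R", "B", "IR"]

def pulse_schedule(n: int) -> list:
    """Return the first n pulses of the repeating partition pattern."""
    return list(islice(cycle(PATTERN), n))

print(pulse_schedule(10))
# ['G', 'R', 'B', 'G', 'R', 'B', 'IR', 'G', 'R', 'B']

# Adding the sparse IR pulse lengthens the repeating cycle from 6 pulses
# to 7, i.e., roughly a 16.7% increase in sensor cycling speed -- under
# the 20% figure noted above, not a 4x serialized rate.
print(round((7 - 6) / 6 * 100, 1))  # 16.7
```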
In one embodiment, an electromagnetic partition may be emitted that is sensitive to dyes or materials used to highlight aspects of a scene. In such an embodiment, it may be sufficient to highlight the location of the dyes or materials without the need for high resolution. In such an embodiment, the dye-sensitive electromagnetic partition may be cycled much less frequently than the other partitions in the system in order to include the emphasized data.
The partition cycles may be divided so as to accommodate or approximate various imaging and video standards. In one embodiment, the partition cycles may comprise pulses of electromagnetic energy in the red, green, and blue spectrum as follows, as best illustrated in FIGS. 7A-7D. In FIG. 7A, the different light intensities have been achieved by modulating the light pulse width or duration within the working range shown by the vertical grey dashed lines. In FIG. 7B, the different light intensities have been achieved by modulating the light power, or the power of the electromagnetic emitter (which may be a laser or LED emitter), but keeping the pulse width or duration constant. FIG. 7C shows the case where both the light power and the light pulse width are being modulated, leading to greater flexibility. The partition cycles may use CMY, IR, and UV light, using a non-visible pulse source mixed with a visible pulse source, and any other color space currently known or yet to be developed that is required to produce an image or approximate a desired video standard. It should also be understood that a system may be able to switch between the color spaces on the fly to provide the desired image output quality.
In an embodiment using the color space Green-Blue-Green-Red (as seen in FIG. 7D), it may be desirable to pulse the luminance components more often than the chrominance components, because users are generally more sensitive to differences in light magnitude than to differences in light color. This principle can be exploited using a monochromatic sensor as illustrated in FIG. 7D. In FIG. 7D, green, which contains the most luminance information, may be pulsed more often or with more intensity in a (G-B-G-R-G-B-G-R…) scheme to obtain the luminance data. Such a configuration would create a video stream that has perceptively more detail, without creating and transmitting unperceivable data.
In one embodiment, duplicating the pulse of a weaker partition may be used to produce an output that has been adjusted for the weaker pulse. For example, blue laser light is considered weak relative to the sensitivity of silicon-based pixels, and is difficult to produce in comparison to red or green light; it may therefore be pulsed more often during a frame cycle to compensate for the weakness of the light. These additional pulses may be done serially over time, or by using multiple lasers that pulse simultaneously to produce the desired compensation effect. It should be noted that by pulsing during the blanking period (the time during which the sensor is not reading out the pixel array), the sensor is insensitive to differences or mismatches between lasers of the same kind, and simply accumulates the light for the desired output. In another embodiment, the maximum light pulse range may be different from frame to frame. This is shown in FIG. 7E, where the light pulses are different from frame to frame. The sensor may be built to be able to program different blanking times with a repeating pattern of 2, 3, 4, or n frames. In FIG. 7E, 4 different light pulses are illustrated, and Pulse 1 may repeat, for example, after Pulse 4, and the frames may have a pattern of 4 frames with different blanking times. This technique can be used to place the most powerful partition on the smallest blanking time, and therefore allow the weakest partition to have a wider pulse on one of the subsequent frames without the need to increase the readout speed. The reconstructed frame can still have its regular pattern from frame to frame, as it is constituted of many pulsed frames.
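The frame-varying blanking scheme of FIG. 7E can be sketched as a small repeating schedule. In the hypothetical Python sketch below, the partition-to-frame assignments and blanking values are assumptions chosen only to show the idea of placing the weakest partition on the frame with the widest window.

```python
# A hypothetical 4-frame repeating pattern in the spirit of FIG. 7E: each
# frame has its own programmed blanking time, and the weakest partition
# (blue here) is scheduled on the frame with the widest blanking window.
# All partition assignments and millisecond values are assumptions.
FRAME_PATTERN = [
    {"partition": "G", "blanking_ms": 1.0},  # strongest: minimum blanking
    {"partition": "R", "blanking_ms": 2.0},
    {"partition": "G", "blanking_ms": 1.0},
    {"partition": "B", "blanking_ms": 3.5},  # weakest: widest pulse window
]

def blanking_for(frame_index: int) -> float:
    """Blanking time of a frame under the repeating n-frame pattern."""
    return FRAME_PATTERN[frame_index % len(FRAME_PATTERN)]["blanking_ms"]

print([blanking_for(i) for i in range(8)])
# [1.0, 2.0, 1.0, 3.5, 1.0, 2.0, 1.0, 3.5]
```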
As can be seen in FIG. 8, because each partitioned spectrum of light may have different energy values, the sensor and/or light emitter may be adjusted to compensate for the differences in the energy values. At 810, the data, which may include data obtained from a histogram of a previous frame, may be analyzed. At 820, the sensor may be adjusted as noted below. Additionally, at 830, the emitter may be adjusted. At 840, an image may be obtained from the adjusted sample time from the sensor, or from the emitted light that was adjusted (increased or decreased), or from a combination of the above. For example, because the red light spectrum is more readily detected by a sensor within the system than the blue light spectrum, the sensor can be adjusted to be less sensitive during the red partition cycle and more sensitive during the blue partition cycle, because of the low quantum efficiency that the blue partition has with respect to silicon (illustrated best in FIG. 9). Likewise, the emitter may be adjusted to provide an adjusted partition (e.g., with higher or lower intensity and duration). Further, adjustments may be made at both the sensor and the emitter level. The emitter may also be designed to emit at one specific frequency, or may be changed to emit multiple frequencies of a specific partition in order to broaden the spectrum of light being emitted, if desired for a specific application.
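The adjust-and-capture loop of FIG. 8 can be sketched as a simple feedback iteration. The following Python sketch is a minimal model, assuming a normalized histogram mean as the exposure measure; the gain variables, target, and step size are illustrative assumptions, not parameters of the disclosed system.

```python
def adjust_exposure(histogram_mean: float, target: float,
                    sensor_gain: float, emitter_level: float,
                    step: float = 0.05) -> tuple:
    """One iteration of a feedback loop in the spirit of FIG. 8: analyze
    the previous frame's histogram (810), then nudge sensor sensitivity
    (820) and/or emitter output (830) toward the target exposure before
    the next frame is captured (840)."""
    if histogram_mean < target:        # previous frame too dark
        sensor_gain += step
        emitter_level += step
    elif histogram_mean > target:      # previous frame too bright
        sensor_gain -= step
        emitter_level -= step
    return sensor_gain, emitter_level

gain, level = adjust_exposure(histogram_mean=0.35, target=0.5,
                              sensor_gain=1.0, emitter_level=1.0)
print(gain, level)  # 1.05 1.05
```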
FIG. 10 shows a schematic of an unshared 4T pixel. The TX signal is used to transfer accumulated charges from the photodiode (PPD) to the floating diffusion (FD). The reset signal is used to reset the FD to the reset bus. If the reset signal and the TX signal are both "on" at the same time, the PPD is constantly reset (each photocharge generated in the PPD is directly collected at the reset bus) and the PPD remains empty. A typical pixel array implementation includes a horizontal reset line that attaches the reset signals of all the pixels within one row, and a horizontal TX line that attaches the TX signals of all the pixels within one row.
In one embodiment, the timing of the sensor sensitivity adjustment is illustrated, and the adjustment can be achieved using a global reset mechanism (i.e., a means of firing all pixel array reset signals at once) and a global TX mechanism (i.e., a means of firing all pixel array TX signals at once). This is shown in FIG. 11. In this case, the light pulse is constant in duration and amplitude, but the light integrated in all pixels starts with the "on" to "off" transition of the global TX and ends with the end of the light pulse. Therefore, modulation is achieved by moving the falling edge of the global TX pulse.
Conversely, the emitter may emit red light at a lesser intensity than blue light in order to produce a correctly exposed image (illustrated best in FIG. 12). At 1210, the data, which may include data obtained from a histogram of a previous frame, may be analyzed. At 1220, the emitter may be adjusted. At 1230, an image may be obtained from the adjusted emitted light. Additionally, in one embodiment, both the emitter and the sensor can be adjusted simultaneously.
In some embodiments, reconstructing the partitioned spectrum frames into a full spectrum frame for later output could be as simple as blending the sensed values for each pixel in the array. Additionally, the blending and mixing of values may be simple averages, or may be tuned to a predetermined lookup table (LUT) of values for the desired output. In an embodiment of a system using partitioned light spectrums, the sensed values may be post-processed or further refined remotely from the sensor by an image or secondary processor, just before being output to a display.
FIG. 13 illustrates a basic example, at 1300, of a monochromatic ISP and how an ISP chain may be assembled for the purpose of generating sRGB image sequences from raw sensor data yielded in the presence of the G-R-G-B light pulsing scheme.
The first stage is concerned with making corrections (see 1302, 1304, and 1306 in FIG. 13) to account for any non-idealities in the sensor technology for which it is most appropriate to work in the raw data domain (see FIG. 21).
At the next stage, two frames (see 1308 and 1310 in FIG. 13) are buffered, because each final frame derives data from three raw frames. Frame reconstruction at 1314 proceeds by sampling data from the current frame and the two buffered frames (1308 and/or 1310). The reconstruction process results in full color frames in the linear RGB color space.
In this example, the white balance coefficients at 1318 and the color correction matrix at 1320 are applied before converting to YCbCr space at 1322 for subsequent edge enhancement at 1324. After edge enhancement at 1324, images are converted back to linear RGB at 1326 for scaling at 1328, if applicable.
Finally, the gamma transfer function at 1330 will be applied to convert the data to the sRGB domain at 1332.
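The stage ordering described above can be summarized in a short skeleton. The Python sketch below is only an ordering illustration: every stage is an identity stub (with a generic gamma approximation standing in for the exact sRGB transfer function), and the function names are hypothetical.

```python
import numpy as np

# Stage stubs: each stands in for the real correction at the numbered
# block of FIG. 13. They are identity operations here; only the stage
# ordering is the point of this sketch.
def white_balance(f): return f                       # 1318
def color_correction_matrix(f): return f             # 1320
def rgb_to_ycbcr(f): return f                        # 1322
def edge_enhance(f): return f                        # 1324
def ycbcr_to_rgb(f): return f                        # 1326
def scale(f): return f                               # 1328
def gamma_to_srgb(f):                                # 1330 (approximate
    return np.clip(f, 0.0, 1.0) ** (1.0 / 2.2)       # gamma, not exact sRGB)

def isp_chain(raw_g: np.ndarray, raw_r: np.ndarray, raw_b: np.ndarray):
    """Skeleton of the ISP chain of FIG. 13: reconstruct a linear RGB
    frame from three buffered raw partition frames (1314), then apply
    the later stages in order, ending in the sRGB domain (1332)."""
    frame = np.stack([raw_r, raw_g, raw_b], axis=-1)  # frame reconstruction
    frame = color_correction_matrix(white_balance(frame))
    frame = ycbcr_to_rgb(edge_enhance(rgb_to_ycbcr(frame)))
    return gamma_to_srgb(scale(frame))
```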
FIG. 14 illustrates an example of color fusion hardware. The color fusion hardware takes an RGBGRGBGRGBG video data stream at 1402 and converts it to a parallel RGB video data stream at 1405. The bit width on the input side may be, e.g., 12 bits per color. The output width for that example would be 36 bits per pixel. Other embodiments may have different initial bit widths and an output width of three times that number. The memory writer block takes as its input the RGBG video stream at 1402 and writes each frame to its correct frame memory buffer at 1404 (the memory writer triggers off the same pulse generator 1410 that runs the laser light source). As illustrated at 1404, writing to the memory follows the pattern red, green 1, blue, green 2, and then starts back again with red. At 1406, the memory reader reads three frames at once to construct an RGB pixel. Each pixel is three times the bit width of an individual color component. The reader also triggers off the laser pulse generator at 1410. The reader waits until the red, green 1, and blue frames have been written, then proceeds to read them out in parallel while the writer continues writing green 2 and starts back on red. When the red frame has been completely written, the reader begins reading from blue, green 2, and red. This pattern continues indefinitely.
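The writer/reader hand-off described above can be modeled as a small buffer rotation. The following Python sketch is a toy model of that pattern; the class, the method names, and the use of strings in place of frame data are illustrative assumptions.

```python
# Write order of the four frame memory buffers at 1404 (per FIG. 14):
WRITE_ORDER = ["red", "green1", "blue", "green2"]

class ColorFusion:
    """Toy model of the memory writer/reader pair: the writer drops each
    incoming monochrome frame into its buffer in R, G1, B, G2 order;
    once red, green1, and blue are present, the reader can assemble a
    parallel RGB output while the writer moves on."""

    def __init__(self):
        self.buffers = {}
        self.slot = 0

    def write(self, frame) -> None:
        """Store an incoming frame in the next buffer of the pattern."""
        self.buffers[WRITE_ORDER[self.slot]] = frame
        self.slot = (self.slot + 1) % len(WRITE_ORDER)

    def read_rgb(self):
        """Three components read in parallel: e.g., three 12-bit colors
        fused into one 36-bit-per-pixel RGB stream."""
        if all(k in self.buffers for k in ("red", "green1", "blue")):
            return (self.buffers["red"], self.buffers["green1"],
                    self.buffers["blue"])
        return None

fusion = ColorFusion()
for name in ("R-frame", "G1-frame", "B-frame"):
    fusion.write(name)
print(fusion.read_rgb())  # ('R-frame', 'G1-frame', 'B-frame')
```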
Referring now to FIGS. 15 and 16, in one embodiment, the RG1BG2RG1BG2 pattern reconstruction illustrated in FIG. 16 allows a 60 fps output with a 120 fps input. Each consecutive frame contains either a red or a blue component from the previous frame. In FIG. 16, each color component is available in 8.3 ms, and the resulting reconstructed frame has a periodicity of 16.67 ms. In general, for this pulsing scheme, the reconstructed frame has a period twice that of the incoming color frame, as shown in FIG. 15. In other embodiments, different pulsing schemes may be employed. For example, an embodiment may be based on the timing of each color component or frame (T1), with the reconstructed frame having a period twice that of the incoming color frame (2 × T1). Different frames within the sequence may have different frame periods, and the average capture rate could be any multiple of the final frame rate.
FIGS. 17-20 illustrate schematic diagrams of color correction methods and hardware for use with a partitioned light system. It is common in digital imaging to manipulate the values within the image data in order to correct the output to meet user expectations or to highlight certain aspects of the imaged object. This is most commonly done with satellite imagery, which is tuned and adjusted to emphasize one type of data over another. Most often, in satellite-acquired data there is a full spectrum of electromagnetic energy available because the light source is not controllable, i.e., the sun is the light source. In contrast, there are imaging conditions in which the light is controlled and even provided by a user. In such situations, calibration of the image data is still desirable, because without calibration, incorrect emphasis may be given to certain data over other data. In systems where the light is controlled by the user, it is advantageous to provide light emissions that may be only a portion of the electromagnetic spectrum, or a plurality of portions of the full electromagnetic spectrum. Calibration remains important in order to meet user expectations and to check for faults within the system. One method of calibration can be a table of expected values for a given imaging condition, against which the sensed data may be compared. One embodiment may include a color-neutral scene having known values that should be output by the imaging device, and the device may be adjusted to meet those known values when it samples the color-neutral scene.
In use, after start-up, the system can sample a color-neutral scene at 1710 by running a plurality of full cycles of the electromagnetic spectrum partitions at 1702 (as shown in fig. 17). A table of values 1708 may be used, and a histogram for the frame may be generated at 1704. At 1706, the values of the frame may be compared with the known or expected values from the color-neutral scene. The imaging device may then be adjusted at 1712 to meet the desired output. In the embodiment shown in fig. 17, the system may include an image signal processor (ISP) that may be adjusted to calibrate the imaging device.
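As a rough software analogue of the calibration loop of fig. 17 (a sketch only; the simple per-partition gain model and all names are assumptions, not the disclosed ISP implementation), per-partition correction factors could be derived by comparing the sampled neutral scene against its known values:

```python
import numpy as np

def calibration_gains(sampled, expected):
    """Compare sampled frames of a color-neutral scene (1704/1706) with
    the known values and return per-partition correction gains (1712).

    sampled:  dict mapping partition name -> 2-D array from the sensor
    expected: dict mapping partition name -> known scene value
    """
    gains = {}
    for name, frame in sampled.items():
        observed = frame.mean()   # stand-in for the histogram built at 1704
        gains[name] = expected[name] / max(observed, 1e-9)
    return gains

# Example: a neutral gray scene that should read 2048 in every partition.
sampled = {"red": np.full((4, 4), 2300.0),
           "green": np.full((4, 4), 2050.0),
           "blue": np.full((4, 4), 1700.0)}
print(calibration_gains(sampled, {c: 2048 for c in sampled}))
```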
It is noted that because each of the partitioned spectra of light may have different energy values, the sensor and/or the light emitter may be adjusted to compensate for the differences in energy values. For example, in one embodiment, because the blue spectrum has a lower quantum efficiency than the red spectrum with respect to silicon-based imagers, the responsiveness of the sensor may be adjusted to be less responsive during the red cycle and more responsive during the blue cycle. Conversely, the emitter may emit blue light at a higher intensity than red light, because of blue light's lower quantum efficiency, in order to produce a correctly exposed image.
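For instance (the quantum-efficiency numbers below are purely illustrative, not values from this disclosure), the emitter drive, or equivalently the sensor responsivity, could be scaled inversely to each partition's relative quantum efficiency:

```python
# Hypothetical relative quantum efficiencies for a silicon-based imager.
relative_qe = {"red": 1.0, "green": 0.9, "blue": 0.5}

# Scale emitter intensity (or sensor responsivity) inversely, so that each
# partition yields a comparably exposed frame.
emitter_scale = {color: 1.0 / qe for color, qe in relative_qe.items()}
print(emitter_scale)  # blue driven at roughly twice the red intensity
```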
In the embodiment shown in fig. 18, where the light source emissions are set and controlled by the system, adjusting those emissions to color correct the image may be done at 1800. Any aspect of the emitted light may be adjusted, such as its amplitude, its duration (i.e., on-time), or its wavelength range within the spectrum partition. Furthermore, in some embodiments both the emitter and the sensor may be adjusted concurrently, as shown in fig. 19.
To reduce the amount of noise and artifacts in the output image stream or video, the sensor or emitter in the system may be adjusted fractionally, as can be seen in fig. 20. Fig. 20 shows a system 2000 in which both the emitter 2006 and the sensor 2008 may be adjusted, but imaging devices in which either the emitter or the sensor alone is adjusted during use, or during a portion of use, are also contemplated and fall within the scope of this disclosure. It may be advantageous to adjust only the emitter during one portion of use, only the sensor during another portion of use, and both together during yet another portion of use. In any of the above embodiments, an improved image may be obtained by limiting the overall adjustment the system can make between frame cycles. In other words, an embodiment may be limited such that the emitter can only be adjusted by a fraction of its operating range at any time between frames. Likewise, the sensor may be limited such that it can only be adjusted by a fraction of its operating range at any time between frames. Furthermore, in one embodiment both the emitter and the sensor may be limited such that, at any time between frames, each can only be adjusted by a fraction of its respective operating range.
In an exemplary embodiment, the fractional adjustment of a component within the system may be, for example, about 1 dB of the component's operating range per frame cycle, to correct the exposure of the previous frame. The 1 dB figure is merely an example, and it should be noted that in other embodiments the allowed adjustment of a component may be any fraction of its operating range. A component may be varied by intensity or duration adjustments that are typically governed by the number of bits (resolution) output by the component. Component resolution may typically range between about 10 and 24 bits, but should not be limited to this range, as it is intended to include resolutions of components yet to be developed in addition to those currently available. For example, if after the first frame it is determined that the scene appears too blue, then the emitter may be adjusted, as discussed above, to reduce the magnitude or duration of the blue-light pulse during the system's blue cycle by the fractional adjustment, e.g., about 1 dB.
In this exemplary embodiment, a correction of more than 10% may be needed, but the system has limited itself to adjusting by 1 dB of the operating range per system cycle. Accordingly, during the next system cycle the blue light can be adjusted again, if needed. Fractional adjustment between cycles has a dampening effect on the output image and reduces noise and artifacts when the emitter and sensor are operated at the extremes of their operating ranges. Any fraction of a component's operating range may be used as the limiting factor, or it may be determined that particular embodiments of the system include components adjustable over their entire operating ranges.
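A minimal sketch of this damped correction loop, assuming (as in the example above) adjustments expressed in dB with a 1 dB-per-cycle limit, might read:

```python
MAX_STEP_DB = 1.0  # fraction of the operating range allowed per cycle

def step_adjustment(remaining_db):
    """Clamp the requested correction to the per-cycle limit, so a large
    error (e.g. a scene that is far too blue) is corrected over several
    system cycles rather than in one jump."""
    return max(-MAX_STEP_DB, min(MAX_STEP_DB, remaining_db))

# A -3 dB blue correction is applied 1 dB at a time over three cycles.
target_db, applied_db = -3.0, 0.0
while abs(target_db - applied_db) > 1e-9:
    applied_db += step_adjustment(target_db - applied_db)
    print(f"blue emitter now at {applied_db:+.1f} dB")
```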
In addition, the optical black area of any image sensor may be used to aid in image correction and noise reduction. In one embodiment, the values read from the optical black area may be compared to those of the active pixel area of the sensor to establish a reference point to be used in image data processing. Fig. 21 shows the kind of sensor correction process that may be employed in a color-pulsed system. CMOS image sensors typically have multiple non-idealities that detrimentally affect image quality, particularly in low light. Chief among these are fixed pattern noise and line noise. Fixed pattern noise (FPN) is a dispersion in the offsets of the sensing elements. Typically most of the FPN is a pixel-to-pixel dispersion, which stems, among other sources, from random variations in dark current from photodiode to photodiode. This looks very unnatural to the observer. Even more objectionable is column FPN, which stems from offsets in the readout chains associated with particular columns of pixels. This manifests as perceived vertical stripes within the image.
Having total control of the illumination has the benefit that entire frames of dark data may periodically be acquired and used to correct for the pixel and column offsets. In the illustrated example, a single frame buffer may be used to keep a running average of whole frames acquired without light, using, e.g., simple exponential smoothing. This dark-average frame is subtracted from every illuminated frame during regular operation.
Line noise is a stochastic temporal variation in the offsets of pixels within each row. Because it is temporal, the correction must be computed anew for each line and each frame. For this purpose there are usually many optical black (OB) pixels within each row of the array, which must be sampled to assess the line offset before the light-sensitive pixels are sampled. The line offset is then simply subtracted during the line noise correction process.
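Both corrections can be illustrated in a few lines of Python (a software sketch under assumed parameter names; the disclosure performs these steps in the sensor/ISP pipeline): an exponentially smoothed dark frame removes pixel and column offsets, and the per-row mean of the optical black pixels removes line noise:

```python
import numpy as np

ALPHA = 0.1  # exponential smoothing weight for dark frames (assumed value)

def update_dark_average(dark_avg, dark_frame, alpha=ALPHA):
    """Running average of unilluminated frames, kept in one frame buffer."""
    return (1.0 - alpha) * dark_avg + alpha * dark_frame

def correct_frame(frame, dark_avg, n_ob_cols=8):
    """Subtract the dark average (fixed pattern noise), then subtract the
    per-row mean of the optical black (OB) pixels (line noise). Assumes,
    for illustration, that the first n_ob_cols columns of each row are OB."""
    out = frame.astype(np.float64) - dark_avg
    line_offset = out[:, :n_ob_cols].mean(axis=1, keepdims=True)
    return out - line_offset
```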
In the example in fig. 21, there are other corrections involved in getting the data into its proper order, monitoring and controlling the voltage offset in the analog domain (black clamp), and identifying/correcting individual defective pixels.
FIGS. 22 and 23 illustrate a method and hardware schematics for increasing dynamic range within an enclosed or limited-light environment. In one embodiment, exposures may be made at differing levels over time and combined to produce greater dynamic range. As can be seen in fig. 22, the imaging system may be cycled at a first intensity for a first cycle at 2202, then at a second intensity for a second cycle at 2204, and the data of the first and second cycles may then be combined into a single frame at 2206, so that greater dynamic range can be achieved. Greater dynamic range may be especially desirable because of the limited-space environments in which an imaging device is used. In limited-space environments that are light deficient or dark, exposure has an exponential relationship to distance. For example, objects near the light source and the optical opening of the imaging device tend to be overexposed, while objects farther away tend to be extremely underexposed because there is little (if any) ambient light present.
As can be seen in fig. 23, a cycle of a system having emissions of electromagnetic energy in a plurality of partitions may be cycled continually according to the partitions of the electromagnetic spectrum at 2300. For example, in an embodiment where the emitter emits laser light in a visible red partition, a visible blue partition, and a visible green partition, the two cycle data sets that are to be combined may be in the form of:
red at intensity one at 2302,
red at intensity two at 2304,
blue at intensity one at 2302,
blue at intensity two at 2304,
green at intensity one at 2302,
green at intensity two at 2304.
Alternatively, the system may cycle in the following form:
red at intensity one at 2302,
blue at intensity one at 2302,
green at intensity one at 2302,
red at intensity two at 2304,
blue at intensity two at 2304,
green at intensity two at 2304.
In such an embodiment, a first image may be derived from the intensity-one values and a second image from the intensity-two values, which are then combined or processed at 2310 as complete image data sets rather than as their component parts.
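One simple way to combine the two data sets at 2310 can be sketched as follows (an illustrative assumption: the saturation-aware merge shown here is just one possible combining rule, not the claimed method):

```python
import numpy as np

def fuse_exposures(img_low, img_high, gain, saturation=4095):
    """Merge two frames of one scene taken at two emitter intensities.

    img_low:  frame captured at intensity one
    img_high: frame captured at intensity two, assumed brighter by `gain`
    Unsaturated pixels come from the brighter frame (rescaled); saturated
    pixels fall back to the darker frame, extending dynamic range.
    """
    low = img_low.astype(np.float64)
    high = img_high.astype(np.float64)
    return np.where(high < saturation, high / gain, low)
```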
It is contemplated within the scope of this disclosure that any number of emission partitions may be used, in any order. As can be seen in fig. 23, "n" is used as a variable denoting any number of electromagnetic partitions and "m" denotes any level of intensity for the "n" partitions. Such a system may be cycled in the form of:
n at intensity m at 2306,
(n + 1) at intensity (m + 1),
(n + 2) at intensity (m + 2),
(n + i) at intensity (m + j) at 2308.
Accordingly, any pattern of serialized cycles may be used to produce the desired image correction, where "i" and "j" are additional values within the operating range of the imaging system.
For the purpose of maximizing the fidelity of color reproduction, digital color cameras include an image processing stage. This is accomplished by a 3 × 3 matrix known as the Color Correction Matrix (CCM):

    [R']   [a b c] [R]
    [G'] = [d e f] [G]
    [B']   [g h i] [B]
the directions in CCM are tuned using a set of reference colors (e.g., from a macbeth table) to provide the best full match to the sRGB standard color space. The diagonal terms a, e and i are the effective white balance gain. Typically, though, white balance is applied separately, and the total number of horizontal lines is constrained to be uniform, so that no net gain is applied by the CCM itself. The off-diagonal term effectively handles color crosstalk in the input channels. Thus the bayer sensor has a higher off-diagonal than a 3-chip camera because the color filter array has multiple response overlaps between channels.
There is a signal-to-noise penalty for color correction that depends on the magnitude of the off-diagonal terms. A hypothetical sensor with channels that perfectly matched the sRGB components would have the identity matrix as its CCM:

    [1 0 0]
    [0 1 0]
    [0 0 1]
the signal-to-noise ratio (SNR) evaluated in the green channel of 10000e of perfect white photo-electric signal per pixel (neglecting readout noise) for this case would be:
any offset from this reduces the SNR. CCMs are obtained, for example, which have values that are unusual for bayer CMOS sensors:
in this case, green SNR:
Fig. 24 shows the results of a full SNR simulation using D65 illumination, comparing the identity-matrix CCM with a CCM tuned for a typical Bayer sensor. The SNR evaluated for the luminance component is about 6 dB worse as a result of the color correction.
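The penalty can be checked numerically with the shot-noise model above (a sketch; the tuned green row used below is hypothetical, chosen only to reproduce a penalty of roughly the reported magnitude):

```python
import numpy as np

def green_snr_db(green_row, n_electrons=10000):
    """Shot-noise-limited SNR (in dB) of the green output channel for a
    perfect white signal of n_electrons per input channel, given the
    green row (d, e, f) of the CCM (assumed to sum to unity)."""
    d, e, f = green_row
    snr = np.sqrt(n_electrons) / np.sqrt(d**2 + e**2 + f**2)
    return 20.0 * np.log10(snr)

print(green_snr_db((0.0, 1.0, 0.0)))    # identity CCM: 40 dB
print(green_snr_db((-0.4, 1.8, -0.4)))  # hypothetical tuned row: ~34.5 dB
```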
The system described in this disclosure uses monochromatic illumination at three discrete wavelengths, and therefore has no color crosstalk per se. The crosses in fig. 25 indicate the positions of the three wavelengths available via laser diode sources (465 nm, 532 nm, and 639 nm), compared with the sRGB gamut indicated by the triangle.
The off-diagonal terms of the CCM are, in this case, greatly reduced compared with Bayer sensors, which provides a significant SNR advantage.
Fig. 26 shows an imaging system providing increased dynamic range via the pixel configuration of its image sensor's pixel array. As can be seen in the figure, adjacent pixels 2602 and 2604 may be set at different sensitivities, such that each cycle includes data produced by pixels that are more and less sensitive relative to one another. Because a plurality of sensitivities can be recorded in a single cycle of the array, the dynamic range may be increased by recording in parallel, as opposed to the time-dependent, serial nature of other embodiments.
In one embodiment, the array may comprise rows of pixels that are placed in rows based on their sensitivities. In another embodiment, pixels of differing sensitivities may alternate, with respect to their nearest neighbors, within a row or column so as to form a checkerboard pattern across the array based on those sensitivities. The above may be accomplished through any pixel-circuit-sharing arrangement or in any stand-alone pixel circuit arrangement.
Wide dynamic range can be achieved by having multiple global TX signals, each firing only a different set of pixels. For example, in global mode, a global TX1 signal fires pixel set 1, a global TX2 signal fires pixel set 2, and a global TXn signal fires pixel set n.
Based on fig. 11, fig. 27A shows a timing example for two different pixel sensitivities (dual pixel sensitivity) in the pixel array. In this case, the global TX1 signal fires half of the pixels of the array and the global TX2 signal fires the other half of the pixels. Because global TX1 and global TX2 have different "on"-to-"off" edge positions, the integrated light differs between the TX1 pixels and the TX2 pixels. Fig. 27B illustrates a different embodiment of the dual-pixel-sensitivity timing. In this case, the light pulse is modulated twice (in pulse duration and/or amplitude). The TX1 pixels integrate the P1 pulse, while the TX2 pixels integrate the P1 + P2 pulses. Separate global TX signals can be generated in a variety of ways; examples include:
differentiating the TX lines from each row; and
sending multiple TX lines per row, each addressing a different set of pixels.
In one implementation, an apparatus is described that provides wide-dynamic-range video utilizing the color pulsing system described in this disclosure. The basis is to have multiple flavors of pixels, or pixels that can be tuned differently, within the same monochrome array, capable of integrating incident light for different durations within the same frame. An example pixel arrangement for such a sensor would be a uniform checkerboard of two independently variable integration times throughout the array. For this case, both red and blue information may be provided within the same frame. In fact, this may be done while also extending the dynamic range for the green frames, where it is most needed, because the two integration times can be adjusted on a frame-by-frame basis. The benefit is that color motion artifacts are less of a problem when all the data is derived from two frames rather than three. There is, of course, a subsequent loss of spatial resolution for the red and blue data, but that is of lesser consequence for image quality than it would be for green, since the luminance component is dominated by the green data.
An inherent property of the monochrome wide-dynamic-range (WDR) array is that the pixels having the longer integration time must integrate a superset of the light seen by the short-integration-time pixels. For regular wide-dynamic-range operation in the green frames, that is desirable. For the red and blue frames it means that the pulsing must be controlled in conjunction with the exposure periods, e.g., to provide blue light from the start of the long exposure and to switch to red at the point at which the short-exposure pixels are turned on (with both pixel types having their charges transferred at the same time).
At the color fusion stage, the two pixel flavors are separated into two buffers. The empty pixels are then filled in using, e.g., linear interpolation. At this point, one buffer contains a complete image of blue data and the other contains red + blue data. The blue buffer may be subtracted from the second buffer to yield pure red data.
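A software sketch of this fusion stage might look as follows (illustrative only: the checkerboard parity assignment, the neighbor-averaging stand-in for linear interpolation, and all names are assumptions):

```python
import numpy as np

def fuse_checkerboard(frame):
    """Split a checkerboard WDR frame into its two exposure flavors, fill
    the empty sites by averaging the four horizontal/vertical neighbors,
    and recover red as (red + blue) - blue, per the buffers described
    above. Even-parity sites are assumed to hold the blue-only flavor and
    odd-parity sites the red + blue flavor (assignment is illustrative)."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    even = (yy + xx) % 2 == 0

    def fill(values, mask):
        out = np.where(mask, values, 0.0)
        padded = np.pad(out, 1)
        valid = np.pad(mask.astype(float), 1)
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:])
        norm = (valid[:-2, 1:-1] + valid[2:, 1:-1] +
                valid[1:-1, :-2] + valid[1:-1, 2:])
        return np.where(mask, out, neigh / np.maximum(norm, 1.0))

    blue = fill(frame.astype(np.float64), even)
    red_plus_blue = fill(frame.astype(np.float64), ~even)
    return red_plus_blue - blue, blue   # (red, blue)
```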
Figs. 28A-28C illustrate the use of white light emission that is pulsed and/or synchronized with a corresponding color sensor, or held constant. As can be seen in fig. 28A, a white light emitter may be configured to emit a beam of light during the blanking period of a corresponding sensor so as to provide a controlled light source in a controlled light environment. The light source may emit a beam of constant amplitude while varying the duration of the pulse, as seen in fig. 28A, or the pulse duration may be held constant while the amplitude is varied in order to obtain correctly exposed data, as illustrated in fig. 28B. Fig. 28C shows a graphical representation of a constant light source that can be modulated by varying the current, the current being controlled by and synchronized with the sensor.
In one embodiment, white light or multi-spectrum light may be emitted as a pulse, if desired, so as to provide data for use within the system (illustrated best in figs. 28A-28C). White light emissions in combination with partitions of the electromagnetic spectrum may be useful for emphasizing and de-emphasizing certain aspects within a scene. Such an embodiment might use the following pulse pattern:
a green pulse;
a red pulse;
a blue pulse;
a green pulse;
a red pulse;
a blue pulse;
white light (multispectral) pulses;
(repetition)
Any system that uses an image sensor cycle at least two times faster than the white light cycle is intended to fall within the scope of this disclosure. It will be appreciated that any combination of partitions of the electromagnetic spectrum is contemplated herein, whether from the visible spectrum, the non-visible spectrum, or both.
Figs. 29A and 29B illustrate perspective and side views, respectively, of an implementation of a monochrome sensor 2900 having a plurality of pixel arrays for producing a three-dimensional image in accordance with the teachings and principles of this disclosure. Such an implementation may be desirable for three-dimensional image capture, wherein the two pixel arrays 2902 and 2904 may be offset during use. In another implementation, a first pixel array 2902 and a second pixel array 2904 may be dedicated to receiving predetermined ranges of wavelengths of electromagnetic radiation, wherein the first pixel array is dedicated to a different range of wavelengths than the second pixel array.
Figs. 30A and 30B illustrate perspective and side views, respectively, of an implementation of an imaging sensor 3000 built on a plurality of substrates. As illustrated, a plurality of pixel columns 3004 forming the pixel array are located on the first substrate 3002, and a plurality of circuit columns 3008 are located on a second substrate 3006. Also illustrated are the electrical connection and communication between one column of pixels and its associated or corresponding column of circuitry. In one implementation, an image sensor that might otherwise be manufactured with its pixel array and supporting circuitry on a single, monolithic substrate/chip may instead have the pixel array separated from all or a majority of the supporting circuitry. The disclosure may use at least two substrates/chips, which are stacked together using three-dimensional stacking technology. The first 3002 of the two substrates/chips may be processed using an image CMOS process. The first substrate/chip 3002 may consist either of a pixel array exclusively or of a pixel array surrounded by limited circuitry. The second or subsequent substrate/chip 3006 may be processed using any process, and does not have to be from an image CMOS process. The second substrate/chip 3006 may be, but is not limited to, a highly dense digital process to integrate a variety and number of functions into a very limited space or area on the substrate/chip, a mixed-mode or analog process to integrate, for example, precise analog functions, an RF process to implement wireless capability, or MEMS (micro-electro-mechanical systems) to integrate MEMS devices. The image CMOS substrate/chip 3002 may be stacked with the second or subsequent substrate/chip 3006 using any three-dimensional technique. The second substrate/chip 3006 may support most or a majority of the circuitry that would otherwise have been implemented in the first image CMOS chip 3002 (if implemented on a monolithic substrate/chip) as peripheral circuits, thereby increasing the overall system area while keeping the pixel array size constant and optimized to the fullest extent possible. The electrical connection between the two substrates/chips may be made through interconnects 3003 and 3005, which may be wire bonds, bumps, and/or TSVs (through-silicon vias).
Figs. 31A and 31B illustrate perspective and side views, respectively, of an implementation of an imaging sensor 3100 having a plurality of pixel arrays for producing a three-dimensional image. The three-dimensional image sensor may be built on a plurality of substrates and may comprise the plurality of pixel arrays and other associated circuitry, wherein a plurality of pixel columns 3104a forming the first pixel array and a plurality of pixel columns 3104b forming the second pixel array are located on respective substrates 3102a and 3102b, and a plurality of circuit columns 3108a and 3108b are located on a single substrate 3106. Also illustrated are the electrical connections and communications between columns of pixels and their associated or corresponding columns of circuitry.
It will be appreciated that the teachings and principles of this disclosure may be used in reusable device platforms, limited-use device platforms, re-posable use device platforms, and single-use/disposable device platforms without departing from the scope of this disclosure. In a reusable device platform, the end user is responsible for cleaning and sterilizing the device. In a limited-use device platform, the device can be used some specified number of times before becoming inoperative. A typical new device is delivered sterile, with additional uses requiring the end user to clean and sterilize the device before each additional use. In a re-posable use device platform, a third party may reprocess the device (e.g., clean, package, and sterilize) a single-use device for additional uses at a lower cost than a new unit. In a single-use/disposable device platform, the device is provided sterile to the operating room and used only once before being disposed of.
Embodiments of the emitter may employ mechanical shutters and filters to create pulsed color light. As shown in fig. 32, an alternative method of producing pulsed color light uses a white light source together with a mechanical color filter and shutter system 3200. A wheel may contain a pattern of translucent color filter windows and opaque sections serving as shutters. The opaque sections do not allow light through, creating the dark periods during which sensor readout occurs. The white light source may be based on any technology: laser, LED, xenon, halogen, metal halide, or other. The white light may be projected through a series of color filters 3207, 3209, and 3211 in the desired pattern of colored light pulses. One embodiment's pattern is red filter 3207, green filter 3209, blue filter 3211, green filter 3209. The filter and shutter system 3200 may be arranged on a wheel that rotates at the desired frequency to synchronize with the sensor, such that knowledge of the arc lengths and rotation rate of the mechanical color filters 3207, 3209, and 3211 and the shutter system 3205 provides timing information for the operation of the corresponding monochromatic image sensor.
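Under the stated relationship, the pulse timing follows directly from the wheel geometry. The following sketch uses purely hypothetical numbers (the window arcs and rotation rate are not specified in the disclosure):

```python
def window_duration_ms(arc_degrees, rotations_per_second):
    """Time the sensor sees light through one filter window of the wheel."""
    return (arc_degrees / 360.0) / rotations_per_second * 1000.0

# Hypothetical wheel: 60-degree R, G, B, G filter windows separated by
# 30-degree opaque shutter segments, spinning at 30 revolutions/second.
for name, arc in [("red", 60), ("green", 60), ("blue", 60), ("shutter", 30)]:
    print(name, round(window_duration_ms(arc, 30.0), 2), "ms")
```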
The embodiment shown in fig. 33 may comprise a pattern of only translucent color filters 3307, 3309, and 3311 on the filter wheel 3300. In this configuration, a different shutter may be used. The shutter may be mechanical, with the "pulse" duration adjusted dynamically by changing its size. Alternatively, the shutter may be electronic and incorporated into the sensor design. The motor rotating the filter wheel 3300 would need to communicate with, or be controlled in conjunction with, the sensor, such that knowledge of the arc lengths and rotation rate of the mechanical color filter system 3307, 3309, and 3311 provides timing information for the operation of the corresponding monochromatic image sensor. The control system would need to know the proper color filter for each frame captured by the sensor so that a full-color image can be reconstructed properly in the ISP. A GBG color pattern is shown, but other colors and/or patterns may be used if advantageous. The relative sizes of the color sections are shown as equal, but could be adjusted if advantageous. The mechanical structure of the filter is shown as a circle moving rotationally, but it could be rectangular with linear movement, or a different shape with a different movement pattern.
As shown in fig. 34, an embodiment for pulsing color light may comprise a mechanical wheel or barrel holding the electronics and heat sinks for red, green, blue, or white LEDs. The LEDs would be spaced at distances related to the rate of rotation of the barrel or wheel, to allow timing of the light pulses consistent with other embodiments in this patent. The wheel or barrel would be rotated by a motor, with a mechanical bracket attaching the wheel or barrel to the motor. The motor would be controlled via a microcontroller, FPGA, DSP, or other programmable device containing a control algorithm for proper timing, as described in this patent. On one side there would be a mechanical aperture optically coupled to an optical fiber, to carry the light to the end of the scope as described in this patent. The coupling might also have a mechanical iris that could be opened and closed to control the amount of light allowed down the fiber optic cable. This could serve as a mechanical shutter device; alternatively, an electronic shutter designed into a CMOS- or CCD-type sensor could be used. Such a device could be difficult to control and calibrate in production, but it is another approach for delivering pulsed light to the system described in this patent.
FIG. 35 illustrates an embodiment of an emitter that includes a linear filter and shutter mechanism to provide pulsed electromagnetic radiation.
FIG. 36 illustrates an embodiment of an emitter that includes a prism filter and shutter mechanism to provide pulsed electromagnetic radiation.
Further, the teachings and principles of the present disclosure may include any and all wavelengths of electromagnetic energy, including the visible and non-visible spectrum, such as Infrared (IR), Ultraviolet (UV), and X-rays.
It will be appreciated that the various features disclosed herein provide significant advantages and advances in the art. The following claims are examples of some of those features.
In the foregoing detailed description of the present disclosure, various features of the present disclosure are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment.
It is to be understood that the above-described arrangements are only illustrative of the application of the principles of the present disclosure. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the spirit and scope of the present disclosure, and the scope of the present disclosure and the appended claims are intended to cover such modifications and arrangements.
Thus, while the disclosure herein has been shown in the drawings and described above with particularity and detail, it will be apparent to those of ordinary skill in the art that numerous modifications (including, but not limited to, variations in size, materials, shape, form, function, manner of operation, assembly, and use) may be made without departing from the principles and concepts set forth herein.
Further, where appropriate, the functions described herein may be performed in one or more of the following: hardware, software, firmware, digital components, or analog components. For example, one or more Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) may be programmed to perform one or more of the systems and procedures described herein. Certain terms may be used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name but not function.
The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Moreover, it should be noted that any and all of the aforementioned alternate embodiments may be used in any desired combination to form additional hybrid embodiments of the present disclosure.
Furthermore, while particular implementations of the disclosure have been described and illustrated, the disclosure is not limited to the specific forms or arrangements of parts so described and illustrated. The scope of the present disclosure is defined by the claims appended hereto, any future claims filed herewith and in different applications, and their equivalents.
Claims (78)
1. A system for digital imaging in an ambient light deficient environment comprising:
an imaging sensor comprising an array of pixels for sensing electromagnetic radiation;
an endoscope for accessing an environment with insufficient ambient light;
a handpiece attached to the endoscope, wherein the endoscope is maneuvered by manipulation of the handpiece;
a control unit comprising a processor, and wherein the control unit is in electrical communication with an imaging sensor;
an emitter configured to emit pulses of electromagnetic radiation; and
a controller configured to coordinate and synchronize the timing of the pulses of electromagnetic radiation from the emitter during a blanking period of the imaging sensor to generate a digital video stream,
wherein the imaging sensor is programmed to include different blanking periods from one frame to another.
2. The system of claim 1, further comprising a connection cable electrically connecting the handpiece and the control unit.
3. The system of claim 1, wherein the imaging sensor is a monochrome sensor.
4. The system of claim 1, wherein the emitter is configured to emit a plurality of electromagnetic wavelengths.
5. The system of claim 4, wherein the emitter is configured to emit three electromagnetic wavelengths.
6. The system of claim 5, wherein the three electromagnetic wavelengths emitted by the emitter comprise:
the wavelength of the electromagnetic radiation in the green color,
wavelength of electromagnetic radiation of red color, and
a blue electromagnetic radiation wavelength.
7. The system of claim 6, wherein the blue, red, green wavelengths of electromagnetic radiation are emitted in a pattern.
8. The system of claim 7, wherein green wavelengths are represented in the pattern at twice the frequency of red and blue wavelengths.
9. The system of claim 1, wherein the pulses of electromagnetic radiation emitted by the emitter are of a wavelength visible to a human.
10. The system of claim 1, wherein the pulses of electromagnetic radiation emitted by the emitter are of a wavelength that is invisible to a human being.
11. The system of claim 4, wherein the plurality of electromagnetic wavelengths includes wavelengths visible to humans and wavelengths invisible to humans.
12. The system of claim 4, wherein the plurality of electromagnetic wavelengths are emitted at different amplitudes.
13. The system of claim 12, wherein the different amplitudes compensate for sensitivity of the imaging sensor to different wavelengths.
14. The system of claim 1, wherein the imaging sensor is disposed within the endoscope at a distal portion of the endoscope relative to the handpiece.
15. The system of claim 1, wherein the imaging sensor is disposed within the handpiece.
16. The system of claim 1, wherein the pulses of electromagnetic radiation are transmitted from the emitter to the tip of the endoscope through an optical fiber.
17. The system of claim 2, wherein the connection cable comprises an optical fiber for transmitting electromagnetic radiation from the emitter to the endoscope, and wherein the connection cable further comprises a conductive wire for providing electrical communication from the control unit to the imaging sensor.
18. The system of claim 1, wherein the controller is disposed within the control unit and is in electrical communication with the emitter and the imaging sensor.
19. The system of claim 1, wherein the controller is disposed within a handpiece and is in electrical communication with an emitter and an imaging sensor.
20. The system of claim 1, wherein the emitter is a laser emitter configured to emit laser light.
21. The system of claim 20, further comprising a despeckling device for uniformly dispersing the laser light.
22. The system of claim 1, wherein the emitter comprises a light emitting diode.
23. A system for digital imaging in an ambient light deficient environment comprising:
an imaging sensor comprising an array of pixels for sensing electromagnetic radiation, the array of pixels formed in a plurality of rows;
an endoscope for accessing the ambient light deficient environment;
a handpiece attached to the endoscope, wherein the endoscope is maneuvered by manipulation of the handpiece;
a control unit comprising a processor, and wherein the control unit is in electrical communication with an imaging sensor;
an emitter configured to emit pulses of electromagnetic radiation; and
a controller configured to coordinate and synchronize the timing of the pulses of electromagnetic radiation from the emitter during a blanking period of the imaging sensor to generate a digital video stream,
wherein the imaging sensor is programmed to include different blanking periods from one frame to another,
wherein the pixel array comprises a plurality of pixel subsets, wherein each of the plurality of pixel subsets has a different sensitivity.
24. A system for digital imaging in an ambient light deficient environment comprising:
an imaging sensor comprising an array of pixels for sensing electromagnetic radiation;
an endoscope for accessing an environment with insufficient ambient light;
a handpiece attached to the endoscope, wherein the endoscope is maneuvered by manipulation of the handpiece;
a control unit comprising a processor, and wherein the control unit is in electrical communication with an imaging sensor;
an emitter configured to emit pulses of electromagnetic radiation; and
a controller configured to coordinate and synchronize the timing of the pulses of electromagnetic radiation from the emitter during a blanking period of the imaging sensor to generate a digital video stream,
wherein the imaging sensor is programmed to include different blanking periods from one frame to another,
wherein the variation of the sensitivity of the different subsets of pixels is achieved by separate global exposure times.
25. The system of claim 24, wherein the composition of the electromagnetic radiation varies during different exposure times.
26. A digital imaging system for use with an endoscope in an ambient light deficient environment, comprising:
an emitter configured to be energized to emit pulses of electromagnetic radiation to cause illumination within the light deficient environment;
wherein the pulse of electromagnetic radiation is within a first wavelength range comprising a first portion of the electromagnetic spectrum;
wherein the emitter is further configured to pulse at predetermined intervals;
a pixel array configured for sensing reflected electromagnetic radiation from the pulse of electromagnetic radiation;
wherein the pixel array is further configured to be stimulated at a sensing interval corresponding to a pulse interval of the emitter;
wherein the pixel array is further configured for blanking at a predetermined blanking interval corresponding to the sensing interval;
wherein the pixel array is configured to contain different blanking periods from one frame to another,
a controller configured to create an image stream by coordinating and synchronizing timing of electromagnetic radiation pulses from the emitters by combining the plurality of frames to produce a digital video stream.
27. The digital imaging system of claim 26, further comprising a despeckling device for despeckling the emitted light, the despeckling device being located in an illumination path between the emitter and the scene.
28. The digital imaging system of claim 26, wherein the pulses of electromagnetic radiation from the emitter are dispersed so as to diffuse light within the light deficient environment.
29. The digital imaging system of claim 26, wherein the emitter is energized to sequentially emit a plurality of pulses of electromagnetic radiation to cause illumination,
wherein the first pulse is in a first range that is only a portion of the electromagnetic spectrum,
wherein the second pulse is in a second range that is only a portion of the electromagnetic spectrum,
wherein the third pulse is in a third range that is only a portion of the electromagnetic spectrum,
the pulses are pulsed at predetermined intervals,
wherein the pixel array is stimulated at a first sensing interval corresponding to a pulse interval of the first pulse,
wherein the pixel array is stimulated at a second sensing interval corresponding to a pulse interval of the second pulse,
wherein the pixel array is stimulated at a third sensing interval corresponding to a pulse interval of the third pulse.
30. The digital imaging system of claim 29, wherein the emitter is not energized to emit light during a calibration interval, and wherein the pixel array is actuated during the calibration interval.
31. The digital imaging system of claim 30, wherein further pulsing is stopped if the pixel array senses light during the calibration interval.
32. The digital imaging system of claim 30, wherein the blanking interval is not concurrent with any interval of the first, second, and third pulses.
33. The digital imaging system of claim 30, wherein the blanking interval is concurrent with a portion of any interval of the first, second, and third pulses.
34. The digital imaging system of claim 29, wherein the first pulse is within the green visible spectrum, the second pulse is within the red visible spectrum, and the third pulse is within the blue visible spectrum.
35. The digital imaging system of claim 29, wherein one of the plurality of pulses of electromagnetic radiation is from the non-visible range of the electromagnetic spectrum.
36. The digital imaging system of claim 29, wherein the pixel array is configured to sense any of the first, second, and third pulses equally.
37. The digital imaging system of claim 29, wherein the pixel array is configured to sense any spectral range of the electromagnetic spectrum.
38. The digital imaging system of claim 26, wherein the pixel array comprises a plurality of pixel subsets, wherein each of the plurality of pixel subsets has a different sensitivity.
39. The digital imaging system of claim 38, wherein the variation in sensitivity of different subsets of pixels is achieved by separate global exposure times.
40. The digital imaging system of claim 39, wherein the composition of the electromagnetic radiation is varied during different exposure times.
41. The digital imaging system of claim 38, wherein the sensitivities of the plurality of subsets of pixels are used for the purpose of extending a dynamic range of the system.
42. The digital imaging system of claim 38, wherein the sensitivity of the plurality of subsets of pixels is used for the purpose of extending a dynamic range of the system.
43. A system for digital imaging in an ambient light deficient environment comprising:
an imaging sensor comprising an array of pixels for sensing electromagnetic radiation;
an endoscope for accessing an environment with insufficient ambient light;
a handpiece attached to the endoscope and wherein the endoscope is motorized by manipulation of the handpiece;
a control unit comprising a processor, and wherein the control unit is in electrical communication with an imaging sensor;
an emitter configured to emit pulses of electromagnetic radiation;
a controller configured to coordinate timing of electromagnetic radiation pulses from the emitter and receive the electromagnetic radiation pulses at the imaging sensor to construct an image and generate a digital video stream; and is
wherein the emitter is electrically coupled to the imaging sensor through the controller such that the emitter emits pulses of electromagnetic radiation during blanking periods of the imaging sensor, a first pulse of electromagnetic radiation being emitted during a first blanking period and a second pulse of electromagnetic radiation being emitted during a second blanking period,
the imaging sensor is programmed to include different blanking periods from one frame to another.
44. The system of claim 43, wherein a pulse comprises a duration, wherein the duration of the electromagnetic radiation pulse begins during a blanking period of the imaging sensor and terminates during the blanking period of the imaging sensor.
45. The system of claim 43, wherein the electromagnetic radiation pulse comprises a duration, wherein the duration of the electromagnetic radiation pulse begins during a blanking period of the imaging sensor and terminates after the blanking period of the imaging sensor.
46. The system of claim 43, wherein the electromagnetic radiation pulse comprises a duration, wherein the duration of the electromagnetic radiation pulse begins before a blanking period of the imaging sensor and terminates after the blanking period of the imaging sensor.
47. The system of claim 43, wherein the electromagnetic radiation pulse comprises a duration, wherein the duration of the electromagnetic radiation pulse begins before and terminates during a blanking period of the imaging sensor.
48. The system of claim 43, wherein the imaging sensor comprises optical black pixels, the optical black pixels comprising optical black front pixels and optical black back pixels.
49. The system of claim 48, wherein the electromagnetic radiation pulse comprises a duration, wherein the duration of the electromagnetic radiation pulse begins during a blanking period of the imaging sensor and terminates when the imaging sensor reads out an optical black front pixel.
50. The system of claim 48, wherein the pulse of electromagnetic radiation comprises a duration, wherein the duration of the pulse of electromagnetic radiation begins when the imaging sensor reads out the optical black back pixels and terminates when the imaging sensor reads out the optical black front pixels.
51. The system of claim 43, wherein the electromagnetic radiation pulse comprises a duration, wherein the duration of the electromagnetic radiation pulse begins before and terminates during a blanking period of the imaging sensor.
52. The system of claim 43, wherein the emitter is configured to emit a plurality of electromagnetic wavelengths.
53. The system of claim 52, wherein the emitter is configured to emit three electromagnetic wavelengths.
54. The system of claim 53, wherein the three electromagnetic wavelengths emitted by the emitter comprise:
the wavelength of the electromagnetic radiation in the green color,
wavelength of electromagnetic radiation of red color, and
a blue electromagnetic radiation wavelength.
55. The system of claim 54, wherein the blue, red, green wavelengths of electromagnetic radiation are emitted in a pattern.
56. The system of claim 55, wherein the green wavelength is represented in the pattern at twice the frequency of the red and blue wavelengths.
57. The system of claim 43, wherein the pulses of electromagnetic radiation emitted by the emitter are of a wavelength visible to a human.
58. The system of claim 43, wherein the pulses of electromagnetic radiation emitted by the emitter are of a wavelength invisible to humans.
59. The system of claim 52, wherein the plurality of electromagnetic wavelengths includes wavelengths visible to humans and wavelengths invisible to humans.
60. The system of claim 52, wherein the plurality of electromagnetic wavelengths are emitted at different amplitudes.
61. The system of claim 60, wherein the different amplitudes compensate for sensitivity of the imaging sensor to different wavelengths.
62. The system of claim 43, wherein the imaging sensor is disposed within an endoscope at a distal portion of the endoscope relative to the handpiece.
63. The system of claim 43, wherein the imaging sensor is disposed within the handpiece.
64. The system of claim 43, wherein the pulses of electromagnetic radiation are transmitted from the emitter to the tip of the endoscope through an optical fiber.
65. The system of claim 43, further comprising a connection cable for electrically connecting the handpiece with the control unit, wherein the connection cable comprises an optical fiber for transmitting electromagnetic radiation from the emitter to the endoscope, and wherein the connection cable further comprises a conductive wire for providing electrical communication from the control unit to the imaging sensor.
66. The system of claim 43, wherein the controller is disposed within the control unit and in electrical communication with the emitter and the imaging sensor.
67. The system of claim 43, wherein the controller is disposed within a handpiece and is in electrical communication with an emitter and an imaging sensor.
68. The system of claim 43, wherein the emitter is a laser emitter configured to emit laser light.
69. The system of claim 68, further comprising a despeckling device for uniformly dispersing the laser light.
70. The system of claim 43, wherein the emitter comprises a light emitting diode.
71. The system of claim 43, wherein the emitter pulses white light.
72. The system of claim 43, wherein the emitter emits constant white light.
73. The system of claim 43, wherein the pulse of electromagnetic radiation comprises a plurality of pulses from within the same partition of the electromagnetic spectrum that are emitted simultaneously to increase pulse power.
74. The system of claim 73, wherein the pulse of electromagnetic radiation is generated by using a plurality of lasers that are simultaneously pulsed to produce a desired compensation effect.
75. The system of claim 43, wherein the imaging sensor comprises an array of monochrome pixels.
76. The system of claim 52, wherein the electromagnetic radiation is controlled and adjusted by pulse duration.
77. The system of claim 52, wherein the plurality of pixels of the imaging sensor comprise controllable and adjustable first and second sensitivities, wherein the electromagnetic radiation is controlled and adjusted by adjustment of the imaging sensor sensitivities of the plurality of pixels.
78. A system for digital imaging in an ambient light deficient environment comprising:
an imaging sensor comprising an array of pixels for sensing electromagnetic radiation;
a control unit comprising a controller, wherein the control unit is in electrical communication with the imaging sensor; and
an emitter configured to emit pulses of electromagnetic radiation;
wherein the emitter is electrically coupled to the imaging sensor through the controller such that the emitter emits a portion of its light pulses during a blanking period of the imaging sensor, and
wherein the controller is configured to synchronize the transmitter and the imaging sensor to produce a digital video stream,
wherein the imaging sensor is programmed to include different blanking periods from one frame to another.
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261676289P | 2012-07-26 | 2012-07-26 | |
| US61/676,289 | 2012-07-26 | ||
| US201361790487P | 2013-03-15 | 2013-03-15 | |
| US61/790,487 | 2013-03-15 | ||
| PCT/US2013/052406 WO2014018936A2 (en) | 2012-07-26 | 2013-07-26 | Continuous video in a light deficient environment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1207549A1 HK1207549A1 (en) | 2016-02-05 |
| HK1207549B true HK1207549B (en) | 2019-08-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11930994B2 (en) | Continuous video in a light deficient environment | |
| US11751757B2 (en) | Wide dynamic range using monochromatic sensor | |
| CN104619237B (en) | The pulse modulated illumination schemes of YCBCR in light deficiency environment | |
| HK1207549B (en) | Continuous video in a light deficient environment | |
| HK1207551B (en) | Ycbcr pulsed illumination scheme in a light deficient environment | |
| HK1207778B (en) | Wide dynamic range using monochromatic sensor |