MX2008009113A - Camera for electronic device - Google Patents
- Publication number
- MX2008009113A (also published as MXMX/A/2008/009113A)
- Authority
- MX
- Mexico
- Prior art keywords
- row
- image
- pixels
- detector
- camera
- Prior art date
Abstract
A digital camera comprises a support structure, a lens carried by the support structure and having an optical axis, a detector carried by the support structure under the lens and comprising a number of adjacent pixel rows, where each pixel row comprises a number of pixels and each pixel includes an image sensor, and an image signal processor unit connected to the detector, including an image scaler configured to scale each row of pixels by a scale factor that is different from that of an adjacent row of pixels. The image scaler is thereby configured to compensate for a slanting angle between the camera detector and an object of which an image is captured. A camera module incorporating the digital camera preferably also includes a built-in image signal processor, such that the camera module is configured to produce scaled output images.
Description
CAMERA FOR ELECTRONIC DEVICE
RELATED APPLICATION This application claims the benefit of and priority to U.S. Provisional Patent Application No. 60/760,899, entitled "Camera for Electronic Device", filed on January 20, 2006, the disclosure of which is incorporated herein by reference as if set forth in its entirety.
FIELD OF THE INVENTION The present invention relates to a camera for use in an electronic device, such as a camera incorporated in a radio communication terminal for use in video telephony. More particularly, the invention relates to a solution for adjusting the viewing direction of a camera of an electronic device having a screen.
BACKGROUND The cell phone industry has seen enormous development worldwide in the past decades. From the initial analog systems, such as those defined by the AMPS (Advanced Mobile Phone System) and NMT (Nordic Mobile Telephone) standards, development in recent years has focused exclusively on standards for digital cellular radio network systems, such as D-AMPS (e.g., as specified in EIA/TIA-IS-54-B and IS-136) and GSM (Global System for Mobile Communications). Currently, cellular technology is entering the 3rd generation (3G) through communication systems such as WCDMA, which offer several advantages over the 2nd generation digital systems referred to above. Many of the advances made in mobile phone technology relate to functional aspects, such as better displays, more efficient and longer-lasting batteries, and means for generating polyphonic sound signals. A functional aspect that has become more and more common is the included camera. Cameras with video camera functionality are currently available on several mobile phones. With the arrival of high-bitrate services, such as EDGE (Enhanced Data Rates for GSM Evolution) and 3G, the availability and utility of video-related services will increase. In particular, mobile video telephony, with simultaneous communication of sound and moving images, has recently become commercially available. For stationary use, video conferencing systems generally include a camera mounted on or beside a communication terminal, such as a personal computer (PC), or integrated into an Internet protocol (IP) enabled telephone. The use of such a system can be quite straightforward, since the user sits in front of the terminal with the camera aimed at the user. However, mobile video conferencing is a bit more problematic. The terminal can be placed in a support unit on a desk, from which a camera in the unit is directed towards the target of interest to be captured, usually the user.
A more common way of using a mobile phone for video conferencing is hand-held, face-to-face use, in which the included camera is manually directed towards the user. When communicating through a hand-held mobile terminal, the user can thus keep the terminal steady in front of the face so that the receiving party can see the face of the user, that is, the sending party. A problem related to video conferencing with a radio terminal is caused by the fact that the included camera is normally placed adjacent to, and parallel with, the screen, that is, the optical axis of the camera is perpendicular to the screen surface. The terminal therefore has to be directed more or less at 90° to the face in order to obtain an appropriate image of the user. However, many users find this way of holding a camera uncomfortable. In addition, for most mobile phone designs it can be difficult to use the terminal when it is placed on a desk without additional support means, since it may require that the user's face be held over the terminal. A related problem is that the terminal may also include a small lamp directed in parallel with the camera to illuminate the target to be captured. When the camera and the lamp are directed towards the user's face at a 90° angle, there is also a risk that reflections of the user's face on the screen surface will disturb the images displayed on the screen. Even in the case where the camera is configured to be held at an angle to the target to be captured, such as the face of a user of the camera, there remains a distortion or perspective problem in the image. This can lead to problems when the accurate representation of target dimensions is crucial. In the case of video telephony, the image captured of the user's face will tend to show a wider chin portion compared to the upper part of the face, if the camera is held at an inclined angle away from the face.
SUMMARY A general objective of the invention is to provide a solution for the formation of digital images wherein the camera can be held at an angle inclined to a target of which an image is captured, which normally results in a distorted image. According to a first aspect, the stated objective is achieved by a digital camera comprising a support structure, a lens carried by the support structure and having an optical axis, a detector carried by the support structure under the lens and comprising a number of rows of adjacent pixels, wherein each row of pixels comprises a number of pixels and each pixel includes an image sensor, and an image signal processor unit connected to the detector, including an image scaler configured to scale each row of pixels by a scale factor that is different from that of an adjacent row of pixels. In one embodiment the image scaler is configured to scale each row of pixels by a scale factor having a magnitude that is proportional to the position of the row between a start row and a final row. In one embodiment, the image scaler is configured to respond to the input of a start row scale factor and a final row scale factor and comprises a calculation function configured to calculate the scale factors for each row between the start row and the final row. In one embodiment, the image scaler is configured to calculate an input row length for a row of pixels as a ratio between the desired output row length, common to all rows of pixels, and the scale factor for said row, and is configured to scale the image signals detected by the pixels of the row that are within the input row length to the desired output row length. In one embodiment, the image scaler is configured to produce an output image with centered rows.
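The progression of per-row scale factors between a start row and a final row described above can be sketched as a simple linear interpolation (a hypothetical Python illustration, not part of the patent; the function name and example factors are invented):

```python
def row_scale_factors(num_rows, start_factor, end_factor):
    """Linearly interpolate a scale factor for each pixel row.

    Row 0 gets start_factor, the last row gets end_factor, and every
    row in between gets a factor proportional to its position -- the
    form S_n = m + n * k with m = start_factor and
    k = (end_factor - start_factor) / (num_rows - 1).
    """
    k = (end_factor - start_factor) / (num_rows - 1)
    return [start_factor + n * k for n in range(num_rows)]

# Example: a 144-row (QCIF-height) frame scaled from 1.0 at the top row
# to 1.2 at the bottom row, to counter a keystone-style distortion.
factors = row_scale_factors(144, 1.0, 1.2)
```

With such a progression, only the two end factors need to be supplied; the calculation function fills in the rest.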
In one embodiment the image scaler is configured to calculate a centered starting point for each detector input row using the formula:
start_n = (l - l_n) / 2,
where start_n is the first pixel to process in row n; l is the number of pixels in the entire row; and l_n is the number of pixels to process in row n.
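The centered-start formula above, together with the input-row-length ratio described earlier, can be illustrated as follows (a hypothetical Python sketch; names and example values are invented):

```python
def input_window(total_pixels, output_len, scale_factor):
    """Return (start_n, l_n) for one detector row.

    l_n, the input row length, is the desired output row length divided
    by the row's scale factor, and start_n = (l - l_n) / 2 centers the
    input window within the full row of l = total_pixels pixels.
    """
    l_n = int(round(output_len / scale_factor))
    start_n = (total_pixels - l_n) // 2
    return start_n, l_n

# Example: a 400-pixel detector row, a 352-pixel output row, scale
# factor 1.1: 352 / 1.1 = 320 input pixels, starting at pixel
# (400 - 320) / 2 = 40, so the window stays centered.
start, length = input_window(400, 352, 1.1)
```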
In one embodiment a camera module is formed by the support structure, and the image signal processor is included in the camera module. In one embodiment the image scaler is configured to determine a position in a predetermined image format of an output pixel of a certain row of pixels, to determine the corresponding position in the detected image by inverse scaling using the scale factor for said row, and to determine an intensity value for the output pixel by interpolation of intensity values as detected by pixels adjacent to said corresponding position in the detected image. In one embodiment the image scaler is configured to calculate the scale factors depending on a pre-set expected angle of inclination between the detector image plane and a target of which an image is captured. In one embodiment, a field of view of the camera is defined by an operative region of the detector surface, which is offset from the center relative to the optical axis of the lens. In one embodiment the image scaler is configured to calculate the scale factors Sn for each row n by means of the function Sn = m + n * k, where m and k are constants. According to a second aspect, the stated objective is achieved by means of an electronic device comprising a housing; and a digital camera module including a support structure, a lens carried by the support structure and having an optical axis, a detector carried by the support structure under the lens and comprising a number of rows of adjacent pixels, wherein each row of pixels comprises a number of pixels and each pixel includes an image sensor, and an image signal processor unit connected to the detector, including an image scaler configured to scale each row of pixels by a scale factor that is different from that of an adjacent row of pixels.
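One possible reading of the inverse-scaling and interpolation steps, combined with the Sn = m + n * k factor progression, is sketched below (an illustrative Python implementation, not the patent's own code; linear interpolation between the two adjacent input pixels is an assumption):

```python
def scale_row(row, scale, out_len):
    """Scale one row of intensity values by inverse mapping.

    For each output pixel x, the corresponding input position is found
    by inverse scaling (dividing by the row's scale factor) about the
    row center, and the intensity is linearly interpolated from the two
    adjacent input pixels.
    """
    l = len(row)
    out = []
    for x in range(out_len):
        # Output position relative to the output center, mapped back
        # into the input row by the inverse of the scale factor.
        src = (l - 1) / 2.0 + (x - (out_len - 1) / 2.0) / scale
        i = max(0, min(int(src), l - 2))
        frac = src - i
        out.append(row[i] * (1.0 - frac) + row[i + 1] * frac)
    return out

def scale_image(rows, start_factor, end_factor, out_len):
    """Apply a different scale factor to each row, as in a keystone
    correction: factors run linearly from start_factor to end_factor."""
    k = (end_factor - start_factor) / (len(rows) - 1)
    return [scale_row(r, start_factor + n * k, out_len)
            for n, r in enumerate(rows)]
```

A uniform-intensity input stays uniform under this mapping, which is a quick sanity check that the interpolation introduces no row-dependent bias.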
In one embodiment, the electronic device comprises a radio signal transceiver and a control unit configured to provide a scaled video signal from the digital camera module to the radio signal transceiver. In one embodiment, the electronic device comprises a screen configured to present a scaled image as provided by the digital camera module. According to a third aspect, the stated objective is achieved by means of a method for capturing an image using a digital camera, comprising the steps of: directing the camera to a target; detecting image signals in a detector comprising a number of adjacent rows of pixels, wherein each row of pixels comprises a number of pixels and each pixel includes an image sensor; processing the detected image signals by scaling each row of pixels by a scale factor that is different from that of an adjacent row of pixels to provide a scaled image; and outputting the scaled image. In one embodiment, the method comprises the step of: scaling each row of pixels by a scale factor having a magnitude that is proportional to the position of the row between a start row and a final row. In one embodiment, the method comprises the steps of: defining a start row scale factor and a final row scale factor; and calculating the scale factors for each row between the start row and the final row. In one embodiment, the method comprises the steps of: calculating an input row length for a row of pixels as a ratio between the output row length, common to all rows of pixels, and the scale factor for said row; and scaling the image signals detected by the pixels of said row that are within the input row length to the desired output row length. In one embodiment, the method comprises the step of: providing a scaled image with centered rows. In one embodiment, the method comprises the step of: calculating a centered starting point for each detector input row using the formula:
start_n = (l - l_n) / 2,
where start_n is the first pixel to process in row n; l is the number of pixels in the entire row; and l_n is the number of pixels to process in row n. In one embodiment, the method comprises the step of: processing the detected image by means of an image signal processor integral with the digital camera in a camera module of an electronic device. In one embodiment, the method comprises the step of:
transmitting the scaled image to a remote receiver using a transceiver of a radio communication terminal. In one embodiment, the method comprises the step of: presenting the scaled image on a screen. In one embodiment, the method comprises the steps of: defining an image format; determining a position in the image format of an output pixel of a certain row of pixels; determining the corresponding position in the detected image by inverse scaling using the scale factor for said row; and determining an intensity value for the output pixel by interpolating intensity values as detected by the pixels adjacent to the corresponding position in the detected image. In one embodiment, the method comprises the step of: calculating scale factors that depend on a pre-established expected inclination angle between an image plane of the detector and a target of which an image is captured.
BRIEF DESCRIPTION OF THE DRAWINGS The aspects and advantages of the present invention will become more evident from the following description of the preferred embodiments with reference to the attached drawings, in which: Figs. 1A and 1B schematically illustrate a hand-held radio communication terminal including a digital camera and a screen according to some embodiments of the invention; Fig. 2 illustrates the terminal of Fig. 1 when used for video conferencing according to some embodiments of the invention; Fig. 3 schematically illustrates the manner in which a camera of a terminal is held at an angle towards a user's face; Fig. 4 schematically illustrates a digital camera module according to some embodiments of the invention; Fig. 5 schematically illustrates a conventional camera phone; Fig. 6 schematically illustrates some aspects of a camera phone according to some embodiments of the invention;
Fig. 7 schematically illustrates some aspects of a camera phone according to further embodiments of the invention; Figs. 8 and 9 schematically illustrate a digital camera module according to some embodiments of the invention; Figs. 10 and 11 schematically illustrate a digital camera module according to further embodiments of the invention; Fig. 12 schematically illustrates a distorted image caused by the camera being held at an angle toward a rectangular target; Figs. 13 and 14 schematically illustrate a distorted image and a corrected image, according to one embodiment of the invention; and Fig. 15 schematically illustrates an image taken of a rectangular object held at an angle inclined to the camera detector surface.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE
INVENTION Embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention, however, can be embodied in many different ways and should not be construed as limited to the embodiments set forth herein. Instead, these embodiments are provided so that this description will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. It will be understood that, although the terms first, second, etc., can be used herein to describe various elements, these elements shall not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element can be called a second element and, similarly, a second element can be called a first element, without departing from the scope of the present invention. As used herein, the term "and/or" includes any combination of one or more of the associated listed items. The terminology used herein is for the purpose of describing particular embodiments and is not intended to limit the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In addition, it will be understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention pertains.
It will further be understood that the terms used herein are to be interpreted as having a meaning consistent with their meaning in the context of this specification and the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. The present description relates to the field of electronic devices including a camera and a screen for presenting images captured by the camera, which are arranged so that a user can observe the screen while the camera is directed towards that same user. Some embodiments of the invention relate to a communication terminal configured for video telephony. Said communication terminal, for example, can be a DECT (Digital Enhanced Cordless Telecommunications) telephone that can be connected to a PSTN (Public Switched Telephone Network) outlet by means of a cable, or an IP telephone having a housing which includes a screen and a camera. In some embodiments, the communication terminal is a radio communication terminal, such as a mobile telephone for communication through a radio base station and/or directly to another radio terminal. The embodiments will now be described with reference to the accompanying drawings. Fig. 1A illustrates an electronic device in the form of a portable communication terminal 10, such as a mobile telephone, according to some embodiments of the invention. Terminal 10 includes a support structure 11 including a housing and a user interface including a keypad 12 and a screen 13. Terminal 10 may also include an audio interface including a microphone and a speaker, radio transceiver circuitry, an antenna, a battery and a microprocessor system including associated software and data memory for radio communication, all carried by the support structure 11 and contained within the housing. In addition to these elements, the device 10 also includes a digital camera 14, the aperture of which is indicated in Fig. 1A.
As shown in Fig. 1A, the aperture of the camera 14 and the screen 13 can be arranged so that both are visible from a common viewing location. For example, both the aperture of the camera 14 and the screen 13 can be directed toward a user while the user looks at the screen 13. Consequently, the communication terminal 10 can be used for video telephony. The embodiments of the present invention can also be described with reference to the schematic illustration of a communication terminal 10 shown in Fig. 1B. Referring now to Fig. 1B, an illustrative communication terminal 10, according to some embodiments of the present invention, includes a keypad 12, a display 13, a transceiver 26, a memory 16, a microphone 15, a speaker 19, and a camera 14 communicating with a control unit or processor 20. The transceiver 26 typically includes a transmitter circuit 27, a receiver circuit 28, and a modem 29, which cooperate to transmit and receive radio frequency signals to remote transceivers via an antenna 25. The radio frequency signals transmitted between the communication terminal 10 and the remote transceivers may comprise traffic and control signals (e.g., paging signals/messages for incoming calls), which are used to establish and maintain communication with another party or destination.
The memory 16 may be a general-purpose memory that is used to store program instructions for the processor 20 as well as data, such as audio data, video data, configuration data and/or other data that may be accessed and/or used by the processor 20. The memory 16 may include a non-volatile read/write memory, a read-only memory and/or a volatile read/write memory. Referring to Fig. 2, the use of a communication terminal 10 for video telephony is illustrated. Typically, in a video telephony session, an image 21 of the remote party is transmitted to terminal 10 and displayed on screen 13 within a dedicated frame 22. At the same time, a smaller image 23 of the user of terminal 10, captured by the camera 14, may be displayed on the screen 13 within a frame 24. The frame 24 may be displayed on a separate screen of the terminal 10 and/or within a secondary frame of the screen 13 as a picture-in-picture. In this way, the user can receive visual feedback on how the camera 14 is directed, and can adjust the terminal 10 to the appropriate direction. A problem related to video telephony has already been described, namely that it may be more convenient to hold the terminal 10 at a certain angle θ towards the user 30, as marked in Fig. 3, instead of parallel to the user's face. Placing the terminal 10 at an oblique angle to the object of which an image is to be formed, usually the face of the user, can also make it easier to use the terminal 10 for video telephony by placing the terminal on a support, e.g., a desktop surface. However, the tilting of a conventional terminal can lead to a displacement of the captured image within its dedicated picture frame, which will be visible on the screen, and of course also to the remote party that receives the captured images. With an increasing angle θ, the user's face may drop toward the edge of the frame.
At some point, the face will drop out of the field of view of the camera, which is normally in the range of 50-70° full angle. With reference to Figs. 1-14, various embodiments of a camera and of an electronic device in the form of a terminal including a camera will now be described. In addition, a camera and a method will be described for correcting or adjusting the perspective of a captured image, which image is distorted due to the angle of inclination. Fig. 4 schematically illustrates a digital camera module 14, for use in an electronic device such as the terminal 10 according to some embodiments of the invention. The camera module 14 includes an optical lens 41 that includes one or more individual lenses made of, e.g., plastic or glass, and having an optical axis 45 indicated by the dotted line. A detector 42 with an upper detector surface 43 is positioned at a distance from and parallel to the lens 41 by means of a support member 44, which may include a sealed plastic housing. The camera module can also include an image signal processor (ISP) 46, which can be connected to the back of the detector 42. Alternatively, the ISP 46 can be connected to the detector 42 by wire, e.g., a flexible cable. The geometry of the camera module 14, including the focal length and aperture of the lens 41 and the size of the image plane defined by the detector surface 43 and its position relative to the lens 41, defines the field of view of the camera module 14. In order to clearly describe the invention, the term main line of sight will be used to denote the main ray through the lens 41 at the center of the image area used. In general, the detector surface 43 is centrally placed below the lens 41, and the main line of sight of the camera 14 therefore coincides with the optical axis 45. The detector surface 43 may be generally rectangular or square, and may be symmetrical about the optical axis 45. Fig.
5 illustrates a conventional terminal 210 that includes a camera 214 and a screen 213. In Fig. 5, the axis 251 indicates the normal direction of the screen 213, i.e., an axis perpendicular to the screen surface. In addition, the optical axis 245, which represents the main line of sight of the camera 214, is indicated as being substantially parallel to the normal direction 251. Figs. 6-7 illustrate terminals 10A, 10B according to some embodiments of the invention in a simplified side view in which only the display 13 and the camera 14 are illustrated. Fig. 6 illustrates a terminal 10A in which the camera 14 has been tilted so that its optical axis and main line of sight 45 is angled relative to the normal direction 51 by an acute angle φ. In some embodiments, the camera 14 can be tilted by an angle φ corresponding to a desired operating angle θ as indicated in Fig. 3. In this manner, the terminal 10A can be used for video telephony when it is held at an angle to the user without displacing the captured image. However, in these embodiments, since the camera 14 is tilted, it can occupy more space in the terminal 10A. In addition, mounting the camera 14 on, e.g., a PCB (Printed Circuit Board) in a tilted orientation in the terminal 10A may require additional mounting apparatus, such as an intermediate wedge element. Fig. 7 illustrates a terminal 10B according to further embodiments of the invention in which the reference number 51 indicates the normal direction of the screen 13, that is, an axis perpendicular to the surface of the screen 13. In addition, the optical axis 45 of a camera 71 is indicated as being substantially parallel to the normal direction 51. The camera 71 may be held within the terminal 10B substantially parallel to the screen 13, e.g., by soldering or another type of attachment to a common PCB.
However, the field of view of the camera 71, having a main line of sight 72, is defined by an operative region of the detector surface 43 that can be offset from the center relative to the optical axis 45 of the lens (see Fig. 4). The operative region may be the entire surface area of a detector 43 whose whole surface is off-center. Alternatively, the operative region may be an off-center portion of an otherwise centered detector surface 43, in which case the camera 71 can be substantially similar to the camera 14 in terms of the elements shown. The difference lies in which pixels of the detector surface are used to read out the image. Figs. 8 and 9 schematically illustrate some aspects of a camera 71 according to some embodiments of the invention. The reference numbers used in Fig. 4 are also used in Figs. 8 and 9 for the corresponding elements. The ISP 46 is left out of Figs. 8 and 9, since it need not be mechanically connected directly to the camera module 71. Fig. 8 is a side view of the camera 71, and Fig. 9 is a perspective view of the camera 71 in which the support member 44 has been left out for simplicity. The detector 42 may include an image sensor having a full-size detector surface 43 within the area defined by the length A and width C, and may include a number of pixels, e.g., 400x400, 640x480, or any other matrix arrangement. In this embodiment, an operative region 91 of the detector surface 43 is defined, offset from where the optical axis 45 of the lens 41 intersects the detector surface 43. It may be possible to define the region 91 to be offset along both the x and y axes indicated in Fig. 9. However, in the illustrated embodiment, region 91 is offset along the x axis only and is centered along the y axis.
The operative region 91 can be offset toward the right side edge along the x axis and can occupy all the pixels out to the right side edge, but not all the pixels toward the left side edge of the detector surface 43. Alternatively, the operative region may be less offset, and may not include the outermost pixels on the right side of the detector surface 43. Along the y axis, the operative region 91 may be narrower than the full width C of the detector surface 43, as exemplified in the drawing. The center of the operative region 91 can be the center of the sensor image plane, and a main line of sight 72 can be defined through the center of the operative region 91 and the optical center of the lens 41. This main line of sight can extend at an acute angle φ to the optical axis 45, where the magnitude of φ will depend on the distance between the center of the operative region 91 and the optical axis 45. Being acute, the angle φ is by definition more than 0° and less than 90°. However, for practical reasons, the angle may be in the range of 5-20° or even 5-10°. As an example, the detector surface 43 may include a 400 x 400 pixel array of image sensors. However, for a video conference, this can be an excessive number of pixels. QCIF (Quarter Common Intermediate Format) is a videoconference format that specifies data rates of 30 frames per second (fps), with each frame containing 144 rows and 176 pixels per row. This is one quarter of the full CIF resolution, which defines 352x288 pixels. QCIF support is required by the ITU H.261 videoconferencing standard, which as such only requires a 176 x 144 matrix. This is less than half the available number of pixels in each direction.
In order to improve the image quality it is therefore possible to make use of twice as many rows with twice as many pixels per row, that is, a CIF, which still fits within the 400x400 matrix. Accordingly, an operative region 91 comprising 352x288 pixels is defined on the 400x400-pixel detector surface 43, extending to one side edge of the detector surface 43 and centered between the other side edges, as shown in Fig. 9. A 3.2 x 3.2 mm detector 42 with a pixel pitch of 3.6 μm has a detector surface (AxC) of about 1.44 x 1.44 mm, and the operative region will have a length B of 288/400 x 1.44 = 1.037 mm. The center of the operative region is then placed 1.44/2 - 1.037/2 ≈ 0.2 mm from the center of the detector surface 43. Assuming that the lens 41 is placed at a height of 1.5 mm above the detector surface 43, the main line of sight 72 will form an angle of approximately φ = arctan(0.2/1.5), or about 7.6°, with the optical axis 45. Using only a QCIF matrix, the corresponding angle would be arctan(1.44 x (1 - 144/400)/(2 x 1.5)), or approximately 17.1°. However, even if a QCIF image is to be used, it may be possible to make use of the CIF image plane to improve the image quality.
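The geometry above can be checked numerically. The following sketch uses the example values from the text (400x400 detector, 3.6 μm pixel pitch, lens 1.5 mm above the detector); the function name and structure are illustrative, not part of the described camera.

```python
import math

# Example values from the text (not fixed by the design).
pitch_mm = 0.0036
side_px = 400
side_mm = side_px * pitch_mm      # 1.44 mm detector surface (A = C)
lens_height_mm = 1.5

def line_of_sight_deg(region_rows):
    """Angle between main line of sight and optical axis when an operative
    region of region_rows rows is pushed flush against one side edge."""
    region_mm = region_rows / side_px * side_mm
    offset_mm = (side_mm - region_mm) / 2  # region center vs. detector center
    return math.degrees(math.atan(offset_mm / lens_height_mm))

print(round(line_of_sight_deg(288), 1))  # CIF rows: ~7.7 (text rounds the offset to 0.2 mm, giving ~7.6)
print(round(line_of_sight_deg(144), 1))  # QCIF rows: ~17.1
```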
It may also be possible to define the operative region by means of a reverse calculation. For example, assume that the camera will be used at a certain angle θ of, e.g., 10°, and that a QCIF matrix will be used. The necessary offset Δ of the operative region 91, when the lens distance is 1.5 mm, is then Δ = 1.5 tan(10°), or approximately 0.26 mm. For this configuration, the operative region 91 will not reach all the way to the edge of the detector surface 43; instead, there will be some 55 rows of unused pixels at the top of the detector surface. It should be understood that the numbers presented above are given only as possible examples, while the concept of offsetting an operative region on a detector surface in order to obtain a field of view with a main line of sight that is angled with reference to the optical axis of the camera lens can be applied to any camera geometry as schematically illustrated in Fig. 4. The detector 42 may be, e.g., a CMOS detector or a CCD detector, and may be a black-and-white, grayscale, or color image detector. In addition, the operative region 91 may be specified in hardware or firmware for the camera ISP as an offset region of interest or window of interest. In some embodiments, the size and position of the operative region 91 can be set as a default value, and can therefore always be used unless the user gives commands to change this setting, e.g., by means of the input interface 12. Fig. 10 illustrates embodiments that can be provided as an alternative to the embodiments described with reference to Figs. 8 and 9, or that may be combined with the embodiments of Figs. 8 and 9. Similar to the camera 14 illustrated in Fig. 4, the camera 101 of Fig. 10 includes a lens 41 and a detector 42 with a detector surface 43, suspended parallel to and spaced from the lens 41 by a support member 44, and potentially a connected ISP (not shown). An optical axis 45 is defined for the lens 41. The detector surface 43 has a length A.
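The reverse calculation just described can be sketched as follows, again with the passage's example numbers (10° desired angle, 1.5 mm lens distance, QCIF region on a 400-row detector); the variable names are illustrative.

```python
import math

# Pick a desired line-of-sight angle first, then derive where the
# operative region must sit (example values from the passage).
pitch_mm = 0.0036
lens_height_mm = 1.5
target_deg = 10.0

offset_mm = lens_height_mm * math.tan(math.radians(target_deg))  # ~0.26 mm
qcif_half_mm = 144 / 2 * pitch_mm        # half-height of a QCIF region
detector_half_mm = 400 / 2 * pitch_mm    # half-height of the detector
unused_mm = detector_half_mm - (offset_mm + qcif_half_mm)
unused_rows = unused_mm / pitch_mm
print(round(offset_mm, 2), round(unused_rows))  # ~0.26 mm offset, ~55 unused rows
```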
However, in this case the detector 42 may not be placed centrally under the lens 41. Instead, the detector 42 is moved laterally relative to the lens 41, so that the optical axis 45 of the lens 41 is off-center with respect to the detector surface 43. In Fig. 10, this is illustrated by the detector 42 being moved laterally on the support member 44. Alternatively, the lens 41 can be displaced laterally on the support member 44. By this arrangement, the main line of sight 102 of the field of view of the camera 101, extending from the center of the detector surface 43, can extend at an acute angle φ from the optical axis 45, where the magnitude of the angle φ is given by arctan(ΔA/h), where ΔA is the relative lateral translation and h is the distance between the lens 41 and the detector surface 43. As an example, if the lateral displacement ΔA of the detector 42 is 0.2 mm as indicated in Fig. 10, and the distance between the lens 41 and the detector surface 43 is 1.5 mm, the main line of sight will have an angle φ of about 7.6°. As indicated, it is possible to combine the embodiments of Figs. 8 and 9 with the embodiments of Fig. 10, whereby an operative region is defined on the right side of the detector 42, and the detector 42 is also moved laterally to the right with reference to the lens 41. As an example, consider the example described for the embodiment of Fig. 9, with a 400x400-pixel detector surface 43 with a pixel pitch of 3.6 μm, disposed 1.5 mm below the lens 41, where further a lateral shift ΔA of the detector 42 of 0.2 mm is used as indicated in Fig. 10. For a CIF mode, the operative region 91 may extend 288 rows from the side edge of the detector surface 43 that is furthest from the optical axis 45.
The center of the operative region is then placed 1.44/2 - 1.037/2 + 0.2 ≈ 0.4 mm from the optical axis, which gives an angle of approximately φ = arctan(0.4/1.5), or about 15°, to the optical axis 45. For a QCIF matrix it is possible to obtain a larger angle, or alternatively to use the CIF image and scale it to a QCIF. For embodiments in which the detector surface 43 is displaced laterally with respect to the optical axis 45, an improved camera can also be obtained by adapting each pixel element to this offset optical geometry. Fig. 11 schematically illustrates certain elements of a camera 110 according to some embodiments of the invention. Fig. 11 illustrates a camera lens 41 and three pixels 110, 120, 130 of a detector 42. A vertical dotted line 114 is shown between the pixels 110 and 120, indicating the center of the detector surface 43 of the detector 42, while the optical axis 45 intersects the center of the lens 41. Normally, the center of the detector surface 43 and the optical axis 45 would coincide, but according to the embodiments described with reference to Fig. 10 they may be separated by a distance ΔA. In order to properly guide the incoming light to the detector elements of the detector 42, each pixel can include a light sensor element 111, 121, 131, such as a photodiode, and a condenser microlens 112, 122, 132. The use of microlenses as part of an image sensor is a common technique for improving the performance of a sensor, as shown in US Patent No. 5,251,038. Consequently, each pixel of the detector 42 may include a condenser microlens on top of the sensor element in order to guide the light rays onto the sensor element. The formation and placement of the microlenses may depend on the chief ray angle of the light beam striking the sensor. This angle differs with the image height, i.e. the distance from the central optical axis 45 of the camera lens 41.
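The combined geometry at the start of this passage (offset operative region plus lateral shift of the detector under the lens) can be verified with a short calculation; the numbers are the example values from the text.

```python
import math

# Combined embodiment (Figs. 8-10): region pushed to one edge of the
# detector, plus the whole detector shifted sideways under the lens.
region_offset_mm = (1.44 - 1.037) / 2  # CIF region vs. detector center, ~0.20 mm
lateral_shift_mm = 0.2                 # detector shifted relative to the lens
h_mm = 1.5                             # lens-to-detector distance

angle = math.degrees(math.atan((region_offset_mm + lateral_shift_mm) / h_mm))
print(round(angle, 1))  # ~15.0 degrees, as in the text
```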
Normally, the further away from the optical axis 45 a sensor element is placed, the shorter the focal length of the condenser microlens must be. In a normal configuration, the focal length of the microlens thus shortens away from the center of the detector surface 43, and for the lens 122 the focal length is trigonometrically dependent on the distance F to the center 114 of the detector surface 43. However, in the case of embodiments according to Fig. 10 with a translation ΔA, the microlenses can instead be adapted to an optical center, as defined by the optical axis 45, which is not the center of the detector surface 43. Therefore, in some embodiments of the invention, an intended optical center 45 is defined for the detector surface 43, which may not coincide with the physical center 114 of the detector surface 43 and which will be the actual optical center when combined with the lens 41. The microlenses of each pixel on the detector surface 43 can then be designed with reference to this defined optical center, normally with decreasing focal length at increasing distance from the optical center. The focal length for the lens 122 is then trigonometrically dependent on the distance E (= F + ΔA) to the optical center, i.e. the optical axis 45. The specific relationships depend on the overall design of the camera, and the considerations that need to be taken are well known to those skilled in the art. As illustrated in Fig. 3, an electronic device such as a mobile telephone used for a video conference is typically held at a slight angle to the user's face. However, a distorted perspective will result from the inclined angle between the user's face and the optical axis of the camera.
Moreover, this drawback applies even when the camera is configured to capture images within a field of view having a main line of sight that is at an angle to the optical axis of the camera, such as in the embodiments described with reference to Figs. 7-11. Because the camera is not parallel to the user's face, the face looks wider in the lower region than in the upper region. The image in Fig. 12 clearly illustrates this effect, where the user holds a rectangular sheet of paper in front of the face while an image is captured using a camera configured in accordance with the embodiment of Figs. 7-9 with an off-center image region. This is often called the "keystone effect". In traditional professional photography this keystone effect can be avoided by using special optics, such as shift and tilt lenses, or specially designed cameras with shift and tilt features. In the field of digital image processing, perspective correction is a common feature; tools such as Adobe® Photoshop provide it. Such features can also be used in electronic devices for videoconferencing, such as mobile phones with built-in cameras, for post-processing of images. However, this kind of perspective correction demands a lot of computing power. It is especially a problem with moving images, that is, video, where many frames per second must be processed. Such post-processing is therefore not well suited for implementation in a system where a main mobile phone processor 20 is simultaneously used to encode video and perform other tasks. It is also convenient to process an image with a resolution and angular coverage greater than the final image, in order to improve the quality of the image. This makes it necessary to transfer a larger image, that is, more data, from the camera to the host. The host must also handle a non-standard image size and convert it to the desired shape.
One embodiment comprises handling the perspective correction within the camera image pipeline, by the image signal processor (ISP). There are several advantages with this design, as will be explained later. It is also convenient to use an image sensor and optics that cover a slightly larger viewing angle than what is expected in the final image. As can be seen from Fig. 12, the image appears very broad in the lower part compared to the upper part. To correct this, a perspective correction principle is implemented in which the lower rows of pixels are shrunk. Considering each row of pixels, a smaller amount of shrinkage is applied moving up in the image. Finally, the top row is shrunk the least, or not at all. The image produced will then be narrower in the lower part than in the upper part. To avoid this it is necessary to crop the image or, as an alternative, to start from a larger image. In the latter case, the perspective correction process works on a longer row of image data at the bottom of the image than at the top. The result is an ISP output image with square corners. A preferred implementation of the perspective correction mechanism is in the hardware or firmware of the camera. A normal digital camera module for a mobile camera that has an integrated ISP has a scaling function. The scaling function can be implemented as a block of digital hardware, as computer code executed by the ISP processor, or as a combination of hardware and computer code. However, as mentioned previously, the ISP unit need not be integrated; it can be connected by cable to the support member or housing of the camera module. In this respect, the digital camera includes an ISP unit, which comprises the associated ISP processor and software. A normal scaler can be set to scale the image horizontally and vertically, and can be configured to scale the two dimensions independently.
Therefore the image can be shrunk in just one dimension, leaving the other untouched. The scaler can be configured to scale the image by a factor n, where n is a floating-point number, e.g., 1.2, etc. In a preferred embodiment, the individual rows are scaled using an interpolation algorithm, e.g., linear interpolation, in order to determine the signal value to give a certain pixel using the detected signal values of the two adjacent pixels in the row in question. According to a preferred embodiment, each row of an image is scaled by a scale factor different from those of the previous and next rows. Preferably, the ISP calculates the scale factors for each row from input values of the starting and ending scale factors, e.g., first-row and last-row scale factors. The scale factors can be expressed as a ratio given by the input row length and the desired output row length, expressed in numbers of pixels. In a preferred embodiment, fixed values of the scale factors are used in the video telephony use case, because the geometry is well defined and the angle θ between the user's face and the phone can be estimated with adequate accuracy. Since the user looks at the screen on which his image, captured by the camera of the electronic device, is shown, the user will automatically hold the electronic device so that the image of the face is more or less centered vertically on the screen. Another important property of the scaler that must be introduced is row centering. The image is therefore preferably scaled in such a way that a central vertical line through the input image is retained in the output image. This can be achieved by calculating the starting point for each input row. The pixels before the starting point of each row are discarded, as are the trailing pixels. In one embodiment, the starting point of each row is calculated from the following equation:
startₙ = (l - lₙ) / 2
where startₙ is the first pixel to process in row n; l is the number of pixels in the entire row; and lₙ is the number of pixels to process in row n. A scaler designed to scale the vertical dimension requires data storage holding two or more rows of image data. A scaler designed to scale only the horizontal dimension requires data storage containing only a small number of pixels or, at most, a full row of image data. Therefore, to create a cost-efficient design, full scaling in both dimensions should be avoided unless it is needed for other purposes. To achieve good image quality the sensor can be designed to have at least four times the resolution required in the output image, i.e., at least twice the number of pixels in both the x and y directions. An example is the use of the 400x400 detector mentioned above for a QCIF output image format. In that case the vertical scaling can be simplified so that only two rows of image data are needed for the vertical scaling. Fig. 13 illustrates an image captured of and by a user with an inclination angle θ between the optical axis of the camera and the face of the user, corresponding to Fig. 3. Even without knowing the person, it can be observed that the user's chin portion appears wider than in reality, compared with the upper portion, since the whole image is tilted. In the image of Fig. 14, the distortion effect of the inclined viewing angle has been corrected according to the invention, by successively scaling each row or line of pixels to a degree corresponding to the inclination angle θ. Therefore, although the image in Fig. 14 is tilted, the perspective is correct. In one embodiment of the invention, an image of a rectangular object with known proportions held at an angle inclined to the camera detector can be used to calculate and set the scale factors.
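The centering formula above can be sketched as a small helper; the function name is illustrative, and integer division is used as a simple way to resolve fractional starting points.

```python
def row_start(l, l_n):
    """First pixel to process in a row so the scaled row stays centred:
    start_n = (l - l_n) / 2, where l is the full input row length and
    l_n the number of pixels to process (integer division as a sketch)."""
    return (l - l_n) // 2

# e.g. a 400-pixel detector row of which 393 pixels are processed
print(row_start(400, 393))  # 3: skip 3 leading pixels, drop the trailing ones
```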
Such calculation and setting of scale factors can be performed in production and then used as a default setting. As an alternative, a user may be prompted to initiate an adjustment sequence by means of operation of the keyboard 12 of the device 10, in which a rectangular object is held in front of and parallel to the face of the user. Preferably, the adjustment sequence instructs the user to hold up an object of known proportions, such as, e.g., an A4 sheet or a letter-size sheet, and to verify via the keyboard 12 which type of object is used. The camera is then operated to capture an image of the object, and a contour detection software application is executed by the processor 20 to identify the image of the object, as shown in Fig. 15. Regardless of whether the calculation and setting of scale factors is performed in production or by a user, additional calculations are needed after the outline of the image has been identified, as described below with reference to Fig. 15. Fig. 15 illustrates an image of a sheet of A4 paper. In this illustrative embodiment the camera detector has 400x400 pixels, which means that there is room for a CIF image of 352x288 pixels. The output image to be produced is a QCIF of 176x144 pixels, but instead of using only a QCIF-sized portion of the detector surface, a CIF image is read out and scaled by a factor 2 in height and width to a QCIF in order to obtain better image quality. An A4 sheet has a height that is the square root of 2 times the width of the sheet. As indicated in Fig. 15, the image of the A4 sheet has a height of b rows of pixels, starting d rows above the starting row for a CIF image. In addition, the image occupies c pixels at the bottom edge and a pixels at the top edge. In order to calculate the scale factors, a pair of constants is defined first:
i = b / (2√2 · a)

j = b / (2√2 · c)

k = (i - j) / b

m = j - k·d

For row n, the scale factor Sₙ is then: Sₙ = m + n·k. In a preferred embodiment, only the pixels that will contribute to the QCIF image are read out and scaled, in order to reduce the amount of calculation, which is particularly beneficial for video imaging. In this embodiment, the desired output length of each row is 176 pixels. This means that the length Lₙ of a row n to be scaled is Lₙ = 176 / Sₙ.
As an example, suppose that the following values have been measured in the image of Fig. 15, counted in numbers of rows for b and d, and numbers of pixels per row for a and c: a = 150, b = 255, c = 200, d = 5. Using the formulas above, we obtain the following results: S₀ = 0.448, S₂₈₇ = 0.616, L₀ = 393. To verify, we calculate the scaled width a' of the upper edge of the sheet and the scaled width c' of the lower edge of the sheet, which will be a' = a·S₂₆₀ = 90 and c' = c·S₅ = 90; consequently the scaled image of the sheet is rectangular. The scale factors for each row n have thereby been calculated and set, and the number of pixels to process for each row of the scaled image has been determined, for an imaging scenario where the angle θ is as in Fig. 15. Preferably, the inclination angle is not measured or detected when the device is used; instead, a certain inclination angle θ is defined as the expected inclination angle that will apply when a user operates the device. Typically, the inclination angle may be less than 20°, such as, e.g., 10°. For any subsequent image captured by the camera 14, each row n is to be scaled to 176 target pixels, i.e. 2x88 pixels placed symmetrically about a vertical center axis. For a row n, a first target pixel is 88 pixels from the central axis, and the corresponding position in the detected image is therefore 88/Sₙ. This position may not fall on a particular pixel of the detector surface from which an image signal value can be retrieved. Instead, an image signal value for the position is preferably interpolated from nearby pixels according to any known scheme. Preferably, the intensity level values and color values are interpolated separately. The image signal values so obtained are assigned to the first target pixel in the output image. The next target pixel to be assigned an image signal value lies at 87/Sₙ, and so on until the vertical center axis is reached.
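The worked A4 example can be reproduced as follows. The split of the constants into i = b/(2√2·a) and j = b/(2√2·c) is a reconstruction of the garbled source formulas, chosen because it reproduces every numeric check in the text (S₀ ≈ 0.448, L₀ ≈ 393, and a scaled sheet width of 90 at both edges); S₂₈₇ comes out as ≈0.617 where the text truncates to 0.616.

```python
import math

# Values measured from Fig. 15: a/c = sheet widths (px) at top/bottom,
# b = sheet height in rows, d = row of the sheet's bottom edge.
a, b, c, d = 150, 255, 200, 5

i = b / (2 * math.sqrt(2) * a)  # required scale at the sheet's top edge
j = b / (2 * math.sqrt(2) * c)  # required scale at the sheet's bottom edge
k = (i - j) / b                 # per-row change of the scale factor
m = j - k * d                   # scale factor of row 0

S = lambda n: m + n * k         # scale factor of row n
L = lambda n: 176 / S(n)        # input pixels needed for 176 output pixels

print(round(S(0), 3), round(S(287), 3), round(L(0)))  # 0.448 0.617 393
print(round(a * S(260)), round(c * S(5)))             # 90 90 -> rectangular
```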
The other side of the central axis is processed in the corresponding way, since the image is scaled symmetrically about the axis. This scaling process is then repeated row by row until all 288 rows of the image have been processed. By performing these steps using the camera's image signal processor, computing power is saved, and the digital signal processor 20 of the device 10 can be used for other purposes. The presented embodiment of the invention, adapted to correct perspective distortions of images, differs from previously proposed solutions, which are based on post-processing of the images, in that this proposed solution places the process within the image pipeline of the camera/ISP. The described design will correct the perspective directly, without interrupting the host processor that operates in a multitasking environment. This makes the invention particularly suitable for portable devices, such as camera phones, where low weight and compact size are important market demands. The preferred embodiments of the proposed design differ further from common scaling solutions in that each row of pixel data can be scaled by a factor different from those of the other rows in the image frame. It can also be implemented without any, or with very little, extra hardware such as gates, and above all without a number of expensive internal row memories. The design is also distinctive in that the scaler can automatically center the image, which is preferred in a videoconferencing application. The proposed solution preferably uses a fixed setting for perspective correction, which is well defined in the video telephony use case.
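The per-row scaling described above can be sketched in software as follows. This is a minimal model, not the camera firmware: it uses half-integer symmetric output positions rather than the text's 88…1 integer indexing from the axis, and clamps (with linear extrapolation) at the row edges.

```python
def scale_row(row, s, out_len=176):
    """Shrink one input row by factor s to out_len output pixels, keeping the
    row centred, with linear interpolation between the two nearest pixels."""
    centre = len(row) / 2.0
    out = []
    for x in range(out_len):
        dx = x - (out_len - 1) / 2.0   # output offset from the centre axis
        pos = centre + dx / s          # corresponding position in the input row
        lo = max(0, min(int(pos), len(row) - 2))
        frac = pos - lo
        out.append(row[lo] * (1 - frac) + row[lo + 1] * frac)
    return out

ramp = list(range(400))
scaled = scale_row(ramp, 0.448)  # bottom-row factor from the A4 example
print(len(scaled))  # 176
```

Applied row by row with the row's own factor Sₙ, this reproduces the symmetric, centred shrinking of the passage.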
In the drawings and specification, typical embodiments of the invention have been described. Although specific terms are used, they are used in a generic and descriptive sense only and not for purposes of limitation, the scope of the invention being set forth in the following claims.
Claims (25)
1. A digital camera comprising: a support structure (44); a lens (41), carried by the support structure (44) and having an optical axis (45); a detector (42), carried by the support structure (44) below the lens (41), comprising a number of adjacent rows of pixels, wherein each pixel row comprises a number of pixels and each pixel includes an image sensor; and an image signal processing unit (46) directly connected to the detector (42), including an image scaler configured to scale each row of pixels by a scale factor that is different from that of the adjacent row of pixels, such that the image is processed within the image pipeline of the camera.
2. The digital camera of claim 1, wherein the scaler is configured to scale each row of pixels by a scale factor having a magnitude which is proportional to the position of the row between the starting row and the final row.
3. The digital camera of claim 1, wherein the image scaler is configured to respond to input of a starting row scale factor and a final row scale factor and comprises a calculation function configured to calculate the scale factors for each row between the starting row and the final row.
4. The digital camera of claim 1, wherein the image scaler is configured to compute an input row length for a pixel row as a ratio between the desired output row length, common to all pixel rows, and the scale factor for that row, and to scale the image signals detected by the pixels of that row which are within the input row length to the desired output row length.
5. The digital camera of claim 1, wherein the image scaler is configured to produce an output image with centered rows.
6. The digital camera of claim 5, wherein the image scaler is configured to calculate the centered starting point for each detector input row using the formula: startₙ = (l - lₙ)/2, where startₙ is the first pixel to process in a row n; l is the number of pixels in the entire row; and lₙ is the number of pixels to process in row n.
7. The digital camera of claim 1, wherein a camera module is formed by the support structure and wherein the image signal processor is included in the camera module.
8. The digital camera of claim 1, wherein the image scaler is configured to determine a position, in a predetermined image format, of an output pixel of a specific row of pixels; to determine the corresponding position in the detected image by inverse scaling using the scale factor for the specific row; and to determine an intensity value of the output pixel by interpolating the intensity values as detected by the pixels adjacent to the corresponding position in the detected image.
9. The digital camera of claim 1, wherein the image scaler is configured to calculate scale factors dependent on a preprogrammed expected inclination angle between the image plane of the detector and an object to be imaged.
10. The digital camera of claim 1, wherein a field of view of the camera is defined by an operative region of the detector surface, which is offset from the center relative to the optical axis of the lens.
11. The digital camera of claim 1, wherein the image scaler is configured to calculate scale factors Sₙ for each row n through the function Sₙ = m + n·k, where m and k are constants.
12. An electronic device comprising: a housing; a digital camera module according to any of claims 1-11; and a main processor (20) positioned to communicate with the camera.
13. The electronic device of claim 12, comprising: a radio signal transceiver; wherein the main processor is configured to route a scaled video signal from the digital camera module to the radio signal transceiver.
14. The electronic device of claim 12, comprising: a display, configured to present a scaled image as provided by the digital camera module.
15. A method for capturing an image using a digital camera of an electronic device, comprising the steps of: focusing the camera on an object; detecting image signals in a detector comprising a number of adjacent rows of pixels, wherein each pixel row comprises a number of pixels and each pixel includes an image sensor; processing the detected image signals within the camera image pipeline by scaling each row of pixels by a scale factor which is different from that of an adjacent row of pixels, to provide a scaled image, using an image signal processor directly connected to the detector; and receiving the scaled image via a main processor.
16. The method of claim 15, comprising the step of: scaling each row of pixels by a scale factor having a magnitude which is proportional to the position of the row between a starting row and a final row.
17. The method of claim 15, comprising the steps of: defining a starting row scale factor and a final row scale factor; and calculating the scale factors for each row between the starting row and the final row.
18. The method of claim 15, comprising the steps of: calculating an input row length for a row of pixels as a ratio between the desired output row length, common to all pixel rows, and the scale factor for that row; and scaling the image signals detected by the pixels of that row which are within the input row length to the desired output row length.
19. The method of claim 15, comprising the step of: providing a scaled image with centered rows.
20. The method of claim 15, comprising the step of: calculating the centered starting point for each input row of the detector using the formula: startₙ = (l - lₙ)/2, where startₙ is the first pixel to process in a row n; l is the number of pixels in the entire row; and lₙ is the number of pixels to process in row n.
21. The method of claim 15, comprising the step of: processing the detected image by means of an image signal processor integrated with the digital camera in a camera module of an electronic device.
22. The method of claim 15, comprising the step of: transmitting the scaled image to a remote receiver using a radio transmission from a radio communication terminal.
23. The method of claim 15, comprising the step of: presenting the scaled image on a screen.
24. The method of claim 15, comprising the steps of: defining an image format; determining a position in the image format of an output pixel of a specific pixel row; determining a corresponding position in the detected image by inverse scaling using the scale factor for the specific row; and determining an intensity value for the output pixel by interpolating the intensity values as detected by the pixels adjacent to the corresponding position in the detected image.
25. The method of claim 15, comprising the step of: calculating scale factors dependent on a preprogrammed expected inclination angle between the image plane of the detector and an object to be imaged.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US60/760,899 | 2006-01-20 | ||
| US11482323 | 2006-07-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| MX2008009113A true MX2008009113A (en) | 2008-09-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7822338B2 (en) | Camera for electronic device | |
| EP2525559B1 (en) | Apparatus, method, and system of image processing | |
| US11477413B2 (en) | System and method for providing wide-area imaging and communications capability to a handheld device | |
| CN105830424B (en) | Image capture control method and device | |
| EP2031561B1 (en) | Method for photographing panoramic picture | |
| KR100469727B1 (en) | Communication terminal and method capable of displaying face image of user at the middle part of screen | |
| CN101150669A (en) | Apparatus and method for capturing panoramic images | |
| WO2012151889A1 (en) | Mobile phone | |
| JP5348687B2 (en) | Terminal device and program | |
| CN106612392A (en) | Image shooting method and device based on double cameras | |
| US7918614B2 (en) | Camera for electronic device | |
| CN101371567B (en) | Communication terminal for video telephone | |
| CN107071277B (en) | Optical drawing shooting device and method and mobile terminal | |
| US20080316329A1 (en) | Camera module | |
| KR100818155B1 (en) | Stereo Stereoscopic Camera System and Angle Adjustment Method for Mobile Devices | |
| JP2005109623A (en) | Multiple-lens imaging apparatus and mobile communication terminal | |
| MX2008009113A (en) | Camera for electronic device | |
| JP2005109622A (en) | Multiple-lens imaging apparatus and mobile communication terminal | |
| JP2010278511A (en) | Electronic equipment | |
| JP2022140417A (en) | IMAGING DEVICE, CALIBRATION SYSTEM, IMAGING DEVICE CONTROL METHOD AND PROGRAM | |
| JP2004228688A (en) | Camera attached mobile phone |