
HK1199342B - Imaging systems and methods using square image sensor for flexible image orientation - Google Patents

Imaging systems and methods using square image sensor for flexible image orientation

Info

Publication number
HK1199342B
HK1199342B (application HK14112791.0A)
Authority
HK
Hong Kong
Prior art keywords
image
square
sub
pixel array
image sensor
Prior art date
Application number
HK14112791.0A
Other languages
Chinese (zh)
Other versions
HK1199342A1 (en)
Inventor
巴赫曼.哈吉-克哈穆尼
哈里什.伊维尔
维诺.马尔加萨哈雅姆
Original Assignee
豪威科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 豪威科技股份有限公司
Publication of HK1199342A1 publication Critical patent/HK1199342A1/en
Publication of HK1199342B publication Critical patent/HK1199342B/en

Abstract

Disclosed are an imaging system and a method that use a square image sensor for flexible image orientation. An imaging system for generating flexibly oriented electronic images includes an image sensor having a square pixel array; imaging optics for forming an optical image on at least a portion of the square pixel array, wherein the portion is within an image circle of the imaging optics and includes at least two rectangular sub-portions differing from each other in aspect ratio and/or orientation; and a processing module capable of generating an electronic image from each of the at least two rectangular sub-portions. An imaging method for generating electronic images of flexible orientation using a square image sensor pixel array includes forming an optical image on at least a portion of the square image sensor pixel array; selecting, according to a desired orientation, a rectangular sub-portion of the at least a portion of the square image sensor pixel array; and generating a final electronic image from the sub-portion.

Description

Image capture system and method using square image sensor for flexible image orientation
RELATED APPLICATIONS
This application claims priority to U.S. provisional application No. 61/816,612, filed on April 26, 2013, which is incorporated herein by reference in its entirety.
Background
A typical digital camera uses an image sensor having a rectangular pixel array. Such a digital camera captures images either in landscape orientation, in which the image width is greater than the image height, or in portrait orientation, in which the image width is less than the image height. A typical image sensor pixel array has an aspect ratio of 4:3, such that one side of the pixel array is 4/3 times as long as the other side.
Today, almost all mobile phones include at least one image sensor, so that the mobile phone can function as a digital camera for taking pictures and recording video. A handheld mobile phone is typically held with the display in a portrait orientation, which means that the image sensor captures images of a scene in a portrait orientation. To take a picture or video in a landscape orientation, the mobile phone is rotated 90 degrees so that the display is in a landscape orientation. For recording video, a landscape orientation is preferred because video playback typically occurs on a landscape display, such as a television or computer screen. Thus, the standard video format is landscape.
Disclosure of Invention
In one embodiment, an imaging system for generating flexibly oriented electronic images includes: (a) an image sensor having a square pixel array; (b) imaging optics for forming an optical image on at least a portion of the square pixel array, wherein the portion is located within an image circle of the imaging optics and includes at least two rectangular sub-portions that differ from each other in at least one of aspect ratio and orientation; and (c) a processing module capable of generating an electronic image from each of the at least two rectangular sub-portions.
In one embodiment, an imaging method for generating electronic images of flexible orientation using a square image sensor pixel array includes: (a) forming an optical image on at least a portion of the square image sensor pixel array; (b) selecting, according to a desired orientation, a rectangular sub-portion of the at least a portion of the square image sensor pixel array; and (c) generating a final electronic image from the selected sub-portion.
Drawings
FIG. 1 illustrates a handheld imaging system that utilizes a square image sensor pixel array to provide flexible image orientation, according to one embodiment.
FIG. 2 illustrates an imaging system that utilizes a square image sensor pixel array to provide flexible image orientation, according to one embodiment.
FIG. 3 shows a square image sensor pixel array in accordance with one embodiment.
FIG. 4 shows how a square image sensor pixel array can facilitate the generation of two rectangular electronic images of different orientations in accordance with one embodiment.
FIG. 5 shows how a square image sensor pixel array can provide rectangular electronic images of different orientations and different aspect ratios in accordance with one embodiment.
FIG. 6 shows how a square image sensor pixel array can provide rectangular electronic images of different orientations and different aspect ratios, including a square electronic image that utilizes all of the pixels of the square image sensor pixel array, in accordance with one embodiment.
FIG. 7 shows a method for generating an electronic image of a desired orientation by using a square image sensor pixel array, in accordance with one embodiment.
FIG. 8 shows an embodiment of the method of FIG. 7 that utilizes capture of a square electronic image.
FIG. 9 shows an embodiment of the method of FIG. 7 in which an electronic image is generated using a portion of a square image sensor pixel array.
FIG. 10 shows an arrangement capable of providing arbitrarily rotated electronic images by using a square image sensor pixel array, in accordance with an embodiment.
FIG. 11 is a diagram showing an electronic image portion overlaid on a portion of a square image sensor pixel array in accordance with one embodiment.
FIG. 12 illustrates a method for generating an electronic image of a desired orientation using a square array of image sensor pixels without the locations of the image pixels needing to correspond directly to the locations of the image sensor pixels, in accordance with one embodiment.
FIG. 13 shows how a square image sensor pixel array can facilitate the generation of an azimuthally stable image, according to one embodiment.
FIG. 14 shows a method of generating an azimuthally-stable electronic image by using a square image sensor pixel array and a gravity orientation sensor, in accordance with one embodiment.
FIG. 15 shows an embodiment of the method of FIG. 14 that utilizes capture of a square electronic image.
FIG. 16 shows an embodiment of the method of FIG. 14 in which an azimuthally-stabilized electronic image is generated using a portion of a square image sensor pixel array.
FIG. 17 illustrates a method of generating an azimuthally-stabilized series of electronic images, in accordance with one embodiment.
Detailed Description
Disclosed herein are imaging systems and methods with flexible image orientation capability. These systems and methods utilize an image sensor having a square pixel array to take pictures or record video in either landscape or portrait orientation while the imaging system itself remains in the same orientation. This eliminates the need to orient the imaging system in a particular way to obtain a desired image orientation. With these imaging systems and methods, a user operating a handheld camera device can hold the camera in the most comfortable or practical orientation regardless of the desired image orientation. The orientation of the generated electronic image is not limited to orientations aligned with a side of the square pixel array. More specifically, the presently disclosed imaging systems and methods for flexible image orientation can generate an electronic image with any orientation in the plane of the square pixel array. In some embodiments, the imaging systems and methods further include a gravity orientation sensor to achieve video stabilization. Video stabilization is advantageously implemented in a handheld camera to correct for changes in camera orientation during video recording. In such embodiments, the imaging system or method may detect a change in camera orientation and adjust the portion of the square pixel array that produces the individual video frames. This may be done in real time as the video is recorded, or in post-capture video processing. The presently disclosed imaging systems and methods for flexible image orientation provide further flexibility with respect to image format. In embodiments, the same imaging system or method is capable of producing electronic images of different aspect ratios, including, for example, 4:3, 16:9, and 1:1.
FIG. 1 shows an exemplary imaging system (handheld camera 100) that utilizes a square image sensor pixel array to provide flexible image orientation. The handheld camera 100 includes an imaging objective 130, a square image sensor pixel array 140, a processing/control module 150, and an interface 160. The interface 160 includes a display 120 and a control panel 125. The control panel 125 may be integrated with the display 120, for example as a touch screen. A user may use the handheld camera 100 to capture an image or a video stream. The imaging objective 130 forms an image on the square image sensor pixel array 140. The processing/control module 150 processes images captured by the square image sensor pixel array 140. In one embodiment, this processing is based at least in part on input received through the interface 160. For example, the user may communicate a desired image orientation (e.g., portrait or landscape) to the processing/control module 150 via the control panel 125. In some embodiments, the processing/control module 150 further controls at least part of the image capture performed by the square image sensor pixel array 140. The display 120 may display captured and/or processed images to the user.
In some embodiments, the handheld camera 100 also includes a gravity orientation sensor 170. The gravity orientation sensor 170 detects the orientation of the handheld camera 100, and thus of the square image sensor pixel array 140, with respect to the direction of gravity. The gravity orientation sensor 170 is communicatively coupled to the processing/control module 150 such that the processing/control module 150 may utilize the gravity orientation detected by the gravity orientation sensor 170. For example, an image captured by the square image sensor pixel array 140 may be processed based on the gravity orientation detected by the gravity orientation sensor 170. In another example, the processing/control module 150 controls image capture by the square image sensor pixel array 140 based at least in part on the gravity orientation detected by the gravity orientation sensor 170.
FIG. 2 shows an exemplary imaging system 200 having a square image sensor pixel array for flexible image orientation. The imaging system 200 includes an image sensor 220. Image sensor 220 includes the square image sensor pixel array 140 (FIG. 1). The imaging system 200 further includes the imaging objective 130 (FIG. 1) for forming an optical image on at least a portion of the square image sensor pixel array 140. The imaging system 200 also includes a processing/control module 250 communicatively coupled to the image sensor 220, and an interface 260 communicatively coupled to the processing/control module 250. The processing/control module 250 includes a processor 230 and a memory 240. The memory 240 includes machine-readable instructions 270 encoded in a non-volatile portion of the memory 240 and, optionally, a data store 280. In some embodiments, the imaging system 200 further includes the gravity orientation sensor 170 communicatively coupled to the processing/control module 250. Optionally, the imaging system 200 includes a housing 290, for example to support and/or protect components of the imaging system 200. The housing 290 may be a camera body.
The processing/control module 250 processes electronic images captured by the square image sensor pixel array 140 according to instructions 270 and input received via the interface 260. Each electronic image represents the optical image formed on the square image sensor pixel array 140. For example, processing/control module 250 receives a desired image orientation (e.g., portrait or landscape) from interface 260 and processes an electronic image received from image sensor 220 in accordance with instructions 270 to generate an electronic image of the desired orientation. During processing, the processing/control module 250 may utilize the data store 280 for storage of, for example, captured images, processed images, and temporary data. Optionally, the processing/control module 250 controls at least a portion of the functions of the image sensor 220, such as image capture by the square image sensor pixel array 140, according to instructions 270 and input received via the interface 260. For example, the processing/control module 250 controls the image capture function of the image sensor 220 according to instructions 270 and a desired image orientation received via the interface 260, so that the image sensor 220 generates an electronic image of the desired orientation. The processing/control module 250 may transfer captured and/or processed images to the interface 260 and/or store such images in the data store 280.
In one embodiment, interface 260 is a user interface and includes, for example, the display 120 and control panel 125 of FIG. 1. In another embodiment, the interface 260 includes one or more of a touch screen, a keypad, a voice interface, and a wired or wireless connection to an external control system (e.g., a remote computer).
The handheld camera 100 of FIG. 1 is an embodiment of the imaging system 200. Processing/control module 150 (FIG. 1) and interface 160 (FIG. 1) are embodiments of processing/control module 250 and interface 260, respectively.
In some embodiments, the imaging system 200 includes the gravity orientation sensor 170 (FIG. 1). The optional gravity orientation sensor 170 is communicatively coupled to the processing/control module 250 such that the detected gravity orientation may be communicated to the processing/control module 250 for use in image processing and/or image capture. In an exemplary use scenario, the processing/control module 250 receives the detected gravitational orientation from the gravitational orientation sensor 170 and the desired image orientation relative to gravity from the interface 260. Processing/control module 250 then utilizes the detected orientation of gravity, the desired image orientation, and instructions 270 to generate an electronic image having the desired orientation relative to gravity. Processing/control module 250 may apply such a procedure to each electronic image of the video stream to generate a video stream that is azimuthally stable with respect to gravity.
The imaging system 200 may advantageously be implemented in, for example, a hand-held camera (as shown in fig. 1), a camera mounted on a ski/bike/motorcycle helmet, or a camera mounted on a vehicle or another non-stationary object. The imaging system 200 may also be advantageously implemented in cameras having a fixed orientation (e.g., webcam or security camera) or a preferred orientation (e.g., mobile phones and other handheld cameras).
FIG. 3 is a schematic diagram of the square image sensor pixel array 140 of FIGS. 1 and 2. Square image sensor pixel array 140 includes a square array of light-sensitive pixels that generate signals in response to incident light. These signals may be processed to generate an electronic image representing the optical image formed on the square image sensor pixel array 140. In one embodiment, the pixels of square image sensor pixel array 140 are monochrome pixels that provide a grayscale or black-and-white electronic image. In another embodiment, the pixels of square image sensor pixel array 140 are color pixels, each formed, for example, by a plurality of photosites sensitive to different colors. In this embodiment, square image sensor pixel array 140 can be used to generate an electronic color image representing the optical image formed on square image sensor pixel array 140. In one embodiment, square image sensor pixel array 140 is implemented in a complementary metal oxide semiconductor (CMOS) image sensor; for example, the image sensor 220 of FIG. 2 is a CMOS image sensor. In another embodiment, square image sensor pixel array 140 is implemented in a charge-coupled device (CCD) image sensor; for example, the image sensor 220 of FIG. 2 is a CCD image sensor.
In some embodiments, the pixels of square image sensor pixel array 140 are arranged in orthogonal rows and columns such that each row and each column includes N pixels, where N is an integer greater than 1. The rows are aligned along a direction 301 and the columns are aligned along a direction 302 perpendicular to the direction 301. In one embodiment, square image sensor pixel array 140 includes 4240x4240 pixels, i.e., each row and each column of square image sensor pixel array 140 includes 4240 pixels. The pixel pitch (i.e., the center-to-center distance between nearest neighbor pixels) may be 1.12 microns.
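As an illustrative sketch (not part of the patent text), the physical dimensions of such an array follow directly from the example pixel count and pitch given above:

```python
# Sketch: physical size of the example N x N square pixel array.
# N = 4240 and the 1.12 micron pitch are the example values given above.
N = 4240            # pixels per row and per column
PITCH_UM = 1.12     # center-to-center pixel pitch, in microns

side_mm = N * PITCH_UM / 1000.0       # physical side length of the array
diagonal_mm = side_mm * 2 ** 0.5      # corner-to-corner diagonal

print(f"side:     {side_mm:.4f} mm")      # 4.7488 mm
print(f"diagonal: {diagonal_mm:.4f} mm")  # about 6.7158 mm
```

The diagonal matters because, as discussed below, the image circle must cover at least the diagonal of whichever rectangular sub-portion is in use.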
In the illustration of FIG. 3, image sensor pixel array 140 includes an array of components 310. For clarity of illustration, only one component 310 is labeled in FIG. 3. Each component 310 may represent a single pixel or a group of pixels. FIG. 3 is not drawn to scale. Image sensor pixel array 140 may include a different number of components 310 than shown in FIG. 3 without departing from the scope hereof.
FIG. 3 also shows an image circle 380 representing the boundary within which an optical image is formed in the plane of square image sensor pixel array 140. For example, the image circle 380 may represent the area in the plane of square image sensor pixel array 140 within which the imaging objective 130 (FIGS. 1 and 2) can form a useful optical image. The image circle 380 may be defined by the radius from the optical axis of the imaging objective 130 at which the illumination intensity falls below a particular threshold, as compared to the illumination intensity at the intersection between the optical axis of the imaging objective 130 and the square image sensor pixel array 140. The threshold may take any value greater than or equal to zero and less than or equal to the illumination intensity at that intersection, without departing from the scope hereof. The characteristics of the imaging objective used to form an optical image on the square image sensor pixel array 140, such as the imaging objective 130, may be selected to produce an image circle 380 that allows the capture of an image of a certain pixel resolution and aspect ratio. Exemplary embodiments are shown in FIGS. 4, 5, 6, and 10.
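The geometric condition implied here — a rectangular sub-portion is usable only if it lies entirely within the image circle — can be sketched as follows. A rectangle centered on the optical axis fits if and only if its half-diagonal does not exceed the image-circle radius. This check, and the example pixel values, are illustrative and not part of the patent text:

```python
import math

def fits_in_image_circle(width_px: float, height_px: float, radius_px: float) -> bool:
    """Return True if a rectangle centered on the optical axis lies
    entirely within an image circle of the given radius.  The farthest
    points of a centered rectangle from the center are its corners,
    at a distance of half the rectangle's diagonal."""
    half_diagonal = math.hypot(width_px / 2.0, height_px / 2.0)
    return half_diagonal <= radius_px

# With a hypothetical 4240 x 4240 array: a full-width 16:9 sub-portion
# (4240 x 2385 pixels) needs a smaller image circle than the full square.
print(fits_in_image_circle(4240, 2385, 2450))   # 16:9 crop fits
print(fits_in_image_circle(4240, 4240, 2450))   # full square does not
```

This is why FIGS. 4, 5, and 6 show progressively larger image circles as sub-portions closer to the full square array are supported.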
FIG. 4 schematically shows how the square image sensor pixel array 140 (FIGS. 1 and 2) can facilitate the generation of two rectangular electronic images of different orientations in one exemplary embodiment. Graph 410 shows the use of square image sensor pixel array 140 to generate an electronic image in portrait orientation, where the longer dimension is parallel to direction 302. Graph 420 shows the use of square image sensor pixel array 140 to generate an electronic image in landscape orientation, where the longer dimension is parallel to direction 301. Both graphs 410 and 420 show an image circle 480 overlaid on the square image sensor pixel array 140. The image circle 480 is an embodiment of the image circle 380 (FIG. 3). Image circle 480 is sized, relative to the square image sensor pixel array 140, such that the square image sensor pixel array 140 can be used to produce electronic images of aspect ratio 16:9 while utilizing the full extent of the square image sensor pixel array 140 along the longer dimension of the electronic image. The 16:9 aspect ratio is often used for high-definition video and matches the aspect ratio of commonly used television screens. The configurations shown in graphs 410 and 420 may be achieved by using the imaging system 200 of FIG. 2, wherein the characteristics of the imaging objective 130 are selected to produce the image circle 480.
Graph 410 shows a rectangular sub-portion 412 of the portion of square image sensor pixel array 140 disposed within image circle 480. The rectangular sub-portion 412 has an aspect ratio of 16:9 and is in portrait orientation, so that the longer dimension of the rectangular sub-portion 412 is parallel to direction 302. The longer dimension of the rectangular sub-portion 412 utilizes the full extent of square image sensor pixel array 140. Graph 420 shows a rectangular sub-portion 422 of the portion of square image sensor pixel array 140 disposed within image circle 480. The rectangular sub-portion 422 has an aspect ratio of 16:9 and is in landscape orientation, so that the longer dimension of the rectangular sub-portion 422 is parallel to direction 301. The longer dimension of the rectangular sub-portion 422 utilizes the full extent of square image sensor pixel array 140. Thus, through rectangular sub-portions 412 and 422, square image sensor pixel array 140 facilitates the generation of 16:9 electronic images in both portrait and landscape orientations without requiring reorientation of square image sensor pixel array 140.
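The pixel dimensions of such sub-portions follow from the array size and the target aspect ratio. A minimal sketch (the function name, the rounding, and the example array size are assumptions for illustration, not the patent's implementation):

```python
def subportion_size(n: int, aspect_w: int, aspect_h: int):
    """Pixel dimensions (width, height) of the landscape and portrait
    sub-portions that use the full array extent (n pixels) along the
    longer image dimension.  Assumes aspect_w >= aspect_h."""
    short_side = round(n * aspect_h / aspect_w)
    landscape = (n, short_side)
    portrait = (short_side, n)
    return landscape, portrait

# For a hypothetical 4240 x 4240 array and a 16:9 aspect ratio:
landscape, portrait = subportion_size(4240, 16, 9)
print(landscape)  # (4240, 2385)
print(portrait)   # (2385, 4240)
```

The same function applied with `aspect_w=4, aspect_h=3` yields the 4:3 sub-portions discussed with FIG. 5.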
Fig. 4 is not to scale. Image sensor pixel array 140 may include a different number of components 310 than shown in fig. 4 without departing from its scope. Although the image circle 480 is shown in fig. 4 as being centered with respect to the square image sensor pixel array 140, the center of the image circle 480 may be offset from the center of the square image sensor pixel array 140 without departing from its scope.
FIG. 5 schematically shows how the square image sensor pixel array 140 (FIGS. 1 and 2) can provide rectangular electronic images of different orientations and different aspect ratios in one exemplary embodiment. Graphs 510 and 520 show the use of square image sensor pixel array 140 to produce electronic images of the same size and aspect ratio but with portrait and landscape orientations, respectively. Likewise, graphs 530 and 540 show the use of square image sensor pixel array 140 to produce electronic images of the same size and aspect ratio but with portrait and landscape orientations, respectively. However, the aspect ratio associated with graphs 510 and 520 is different from the aspect ratio of graphs 530 and 540.
Graphs 510, 520, 530, and 540 show an image circle 580, which is an embodiment of the image circle 380 (FIG. 3). Image circle 580 is sized such that the square image sensor pixel array 140 can be used to produce electronic images of aspect ratios 16:9 and 4:3 while utilizing the full extent of square image sensor pixel array 140 along the longer dimension of the electronic image. The 4:3 aspect ratio is commonly used for photographs.
The configurations shown in graphs 510, 520, 530, and 540 may be achieved by using the imaging system 200 of FIG. 2, wherein the characteristics of the imaging objective 130 are selected to produce the image circle 580. The image circle 580 is larger than the image circle 480 (FIG. 4). Compared to the configuration shown in FIG. 4, achieving the configurations of graphs 510, 520, 530, and 540 may require the imaging objective 130 to produce a larger image circle.
Graph 510 shows the rectangular sub-portion 412 (FIG. 4) of the portion of square image sensor pixel array 140 that lies within image circle 580. Graph 520 shows the rectangular sub-portion 422 (FIG. 4) of the portion of square image sensor pixel array 140 that lies within image circle 580. Graph 530 shows a rectangular sub-portion 532 of the portion of square image sensor pixel array 140 that lies within image circle 580. The rectangular sub-portion 532 has an aspect ratio of 4:3 and is in portrait orientation, so that the longer dimension of the rectangular sub-portion 532 is parallel to direction 302. The longer dimension of the rectangular sub-portion 532 utilizes the full extent of square image sensor pixel array 140. Graph 540 shows a rectangular sub-portion 542 of the portion of square image sensor pixel array 140 that lies within image circle 580. The rectangular sub-portion 542 has an aspect ratio of 4:3 and is in landscape orientation, so that the longer dimension of the rectangular sub-portion 542 is parallel to direction 301. The longer dimension of the rectangular sub-portion 542 utilizes the full extent of square image sensor pixel array 140. Thus, through rectangular sub-portions 412, 422, 532, and 542, square image sensor pixel array 140 facilitates the generation of both 16:9 and 4:3 electronic images in both portrait and landscape orientations.
Fig. 5 is not to scale. Image sensor pixel array 140 may include a different number of components 310 than represented in fig. 5 without departing from its scope. Although the image circle 580 is shown in fig. 5 as being centered with respect to the square image sensor pixel array 140, the center of the image circle 580 may be offset from the center of the square image sensor pixel array 140 without departing from its scope.
FIG. 6 shows an extension of the configuration shown in FIG. 5, in which the image circle has been enlarged still further to additionally facilitate the generation of an electronic image using all of the pixels of the square image sensor pixel array 140.
The configurations shown in graphs 610, 620, 630, 640, and 650 may be achieved by using the imaging system 200 of FIG. 2, wherein the characteristics of the imaging objective 130 are selected to produce an image circle 680. The image circle 680 is larger than the image circle 580 (FIG. 5). Compared to the configuration shown in FIG. 5, achieving the configurations of graphs 610, 620, 630, 640, and 650 may require the imaging objective 130 to produce a larger image circle.
Graph 610 shows the rectangular sub-portion 412 (FIG. 4) of the portion of square image sensor pixel array 140 disposed within image circle 680. Graph 620 shows the rectangular sub-portion 422 (FIG. 4) of the portion of square image sensor pixel array 140 disposed within image circle 680. Graph 630 shows the rectangular sub-portion 532 (FIG. 5) of the portion of square image sensor pixel array 140 disposed within image circle 680. Graph 640 shows the rectangular sub-portion 542 (FIG. 5) of the portion of square image sensor pixel array 140 disposed within image circle 680. Graph 650 shows a rectangular sub-portion 652 of the portion of square image sensor pixel array 140 disposed within image circle 680. Rectangular sub-portion 652 is square (i.e., has an aspect ratio of 1:1) and includes all of square image sensor pixel array 140. Thus, in addition to 16:9 and 4:3 electronic images in both portrait and landscape orientations, square image sensor pixel array 140 facilitates, through rectangular sub-portions 412, 422, 532, 542, and 652, the generation of a square electronic image based on the entirety of square image sensor pixel array 140, without requiring reorientation of square image sensor pixel array 140.
Fig. 6 is not to scale. Image sensor pixel array 140 may include a different number of components 310 than represented in fig. 6 without departing from its scope. Although the image circle 680 is shown in fig. 6 as being centered with respect to the square image sensor pixel array 140, the center of the image circle 680 may be offset from the center of the square image sensor pixel array 140 without departing from its scope.
The above discussion of FIGS. 4, 5, and 6 may be extended to other and/or additional aspect ratios and dimensions of the rectangular sub-portions without departing from the scope hereof. For example, compared to the configuration of FIG. 6, additional rectangular sub-portions may be formed to facilitate the generation of landscape- and portrait-oriented electronic images of more than two different aspect ratios. Likewise, rectangular sub-portions may be formed to facilitate the generation of smaller electronic images having the same aspect ratios as the rectangular sub-portions shown in FIG. 6.
Although FIGS. 4, 5, and 6 show progressively larger image circles, similar results may be achieved with an image circle of a single size by using progressively smaller embodiments of the square image sensor pixel array 140. For example, square image sensor pixel array 140 may be made with a smaller pixel pitch.
FIG. 7 shows one exemplary method 700 for generating an electronic image of a desired orientation using a square image sensor pixel array. The method 700 achieves the desired image orientation without the need to reorient the square image sensor pixel array. The method 700 is performed, for example, by the imaging system 200 of FIG. 2.
Optionally, method 700 includes step 701, wherein an optical image is formed on at least a portion of the square image sensor pixel array. For example, the imaging objective 130 (fig. 1 and 2) forms an optical image on at least a portion of the square image sensor pixel array 140 (fig. 1 and 2), which is defined by an image circle as represented by image circle 380 in fig. 3.
In step 702, a desired image orientation with respect to the square image sensor pixel array is received. For example, processing/control module 250 (FIG. 2) receives a desired image orientation, such as landscape or portrait, from interface 260 (FIG. 2), instructions 270 (FIG. 2), or data store 280 (FIG. 2). In certain embodiments, step 702 may receive other format parameters (e.g., aspect ratio and/or size) in addition to or instead of the image orientation, without departing from the scope hereof. For example, processing/control module 250 (FIG. 2) may receive a desired image orientation (e.g., landscape or portrait) and a desired aspect ratio (e.g., 16:9 or 4:3) from interface 260 (FIG. 2), instructions 270 (FIG. 2), or data store 280 (FIG. 2).
In step 710, a rectangular sub-portion of the portion of the square image sensor pixel array on which the optical image is formed is selected. For example, the processing/control module 250 (FIG. 2) selects a rectangular sub-portion as shown in FIGS. 4, 5, and 6. Processing/control module 250 (FIG. 2) performs this selection in accordance with instructions 270 (FIG. 2), using the image orientation received in step 702 and/or other image format parameters. In step 720, an electronic image is generated from the selected rectangular sub-portion of the square image sensor pixel array. For example, processing/control module 250 (FIG. 2) generates an electronic image from the rectangular sub-portion selected in step 710, in accordance with instructions 270 (FIG. 2). In optional step 730, the electronic image generated in step 720 is output. For example, processing/control module 250 (FIG. 2) outputs the electronic image via interface 260 (FIG. 2).
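Steps 702-710 can be sketched as follows. The function name, the centered placement of the sub-portion, and the (top, left, height, width) return convention are illustrative assumptions, not the patent's implementation:

```python
def select_subportion(n: int, orientation: str, aspect=(16, 9)):
    """Step 710 sketch: choose a centered rectangular sub-portion of an
    n x n pixel array for the desired orientation ('landscape' or
    'portrait') and aspect ratio, using the full array extent along the
    longer dimension.  Returns (top, left, height, width) in pixels."""
    long_side = n
    short_side = round(n * aspect[1] / aspect[0])
    if orientation == "landscape":
        width, height = long_side, short_side
    else:  # portrait
        width, height = short_side, long_side
    top, left = (n - height) // 2, (n - width) // 2
    return top, left, height, width

# With a hypothetical 4240 x 4240 array:
print(select_subportion(4240, "landscape"))  # (927, 0, 2385, 4240)
print(select_subportion(4240, "portrait"))   # (0, 927, 4240, 2385)
```

Note that switching between portrait and landscape changes only the selected rectangle, not the orientation of the array itself, which is the point of the method.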
FIG. 8 shows one exemplary method 800 for generating an electronic image of a desired orientation using a square electronic image captured by a square image sensor pixel array. Method 800 is an embodiment of method 700 (fig. 7) and may be performed by imaging system 200 of fig. 2. Method 800 performs steps 702 and 710 of FIG. 7, either serially or in parallel with step 815. Step 815 optionally follows step 701 of fig. 7. In step 815, a square electronic image is captured using the full extent of the square image sensor pixel array. For example, a square electronic image is captured by image sensor 220 (fig. 2) using the full extent of square image sensor pixel array 140 (figs. 1 and 2). Processing/control module 250 (fig. 2) may store the square electronic image to data store 280 (fig. 2). In an alternative embodiment, the electronic image is smaller than the full extent of the square image sensor pixel array, and may or may not be square. More specifically, the electronic image has a size and shape sufficient to accommodate a predefined plurality of image sizes, aspect ratios, and orientations.
After performing steps 815 and 710, method 800 continues to step 820. In step 820, an electronic image is generated from those pixels of the square electronic image captured in step 815 that are associated with image sensor pixels located within the rectangular sub-portion of the square image sensor pixel array selected in step 710. Step 820 includes step 825, wherein the square electronic image is cropped according to the rectangular sub-portion of the square image sensor pixel array selected in step 710. For example, processing/control module 250 (fig. 2) receives the square electronic image generated in step 815 from image sensor 220 (fig. 2) or from data store 280. Processing/control module 250 (fig. 2) crops the square electronic image, in accordance with instructions 270 (fig. 2), to include the portion of the square electronic image that corresponds to the rectangular sub-portion selected in step 710. Optionally, method 800 includes step 730 (fig. 7).
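The cropping of step 825 may be sketched as follows, again with assumed names and an assumed `(row, col, height, width)` sub-portion representation:

```python
import numpy as np

def crop_to_sub_portion(square_image, sub_portion):
    """Crop a full-frame square capture to a (row, col, height, width) region."""
    row, col, height, width = sub_portion
    # Basic NumPy slicing returns a view onto the selected rows and columns.
    return square_image[row:row + height, col:col + width]

full_frame = np.arange(16).reshape(4, 4)                    # stand-in 4x4 capture
landscape = crop_to_sub_portion(full_frame, (1, 0, 2, 4))   # 2x4 landscape crop
```

Because the crop is a view rather than a copy, no pixel data needs to be duplicated until the cropped image is further processed or stored.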
FIG. 9 shows one exemplary method 900 for generating an electronic image of a desired orientation by using a portion of a square image sensor pixel array. Method 900 is an embodiment of method 700 (fig. 7) and may be performed by image capture system 200 of fig. 2. The method 900 first performs steps 702, 710 and optionally 701 as discussed in connection with fig. 7. After performing step 710, the method 900 continues to step 920. In step 920, an electronic image is generated from the rectangular sub-portion of the square image sensor pixel array selected in step 710. Step 920 includes step 925 in which a rectangular electronic image is captured using the selected rectangular sub-portion of the square image sensor pixel array. For example, image sensor 220 (fig. 2) captures a rectangular electronic image by using pixels of square image sensor pixel array 140 (fig. 1 and 2) that are located within the rectangular sub-portion of square image sensor pixel array 140 selected in step 710. Optionally, method 900 further includes step 730 (fig. 7).
Fig. 10 shows one example configuration 1000 for obtaining an arbitrarily rotated electronic image using square image sensor pixel array 140 (figs. 1 and 2). Configuration 1000 may be achieved using, for example, imaging system 200 (fig. 2). Image circle 1080 is within the boundaries of square image sensor pixel array 140 and thus covers only a portion of square image sensor pixel array 140. Image circle 1080 may be achieved, for example, by appropriate selection of the characteristics of imaging objective 130 (figs. 1 and 2). An arbitrarily rotated electronic image may be produced from the portion of square image sensor pixel array 140 that is included in rectangular sub-portion 1020.
Rectangular sub-portion 1020 comprises a sub-portion of the portion of square image sensor pixel array 140 that is located within image circle 1080. The longer dimension of rectangular sub-portion 1020 is oriented at an angle 1030 away from direction 301. In one embodiment, possible values for angle 1030 include 0 degrees, 360 degrees, and all values between 0 and 360 degrees. In another embodiment, the possible values of angle 1030 include a plurality of discrete values in the range from 0 to 360 degrees, including at least one of 0 and 360 degrees. For example, the plurality of discrete values may be evenly distributed within the range from 0 to 360 degrees. In this example, the plurality of discrete values spans the full angular range at a certain resolution. Exemplary resolutions include, but are not limited to, 0.1, 1, and 5 degrees.
The size and position of image circle 1080 may differ from those shown in configuration 1000 without departing from the scope hereof. For example, image circle 1080 may have a size and position such that image circle 1080 includes all desired orientations of rectangular sub-portions of desired size and aspect ratio. Rectangular sub-portion 1020 may have a different size, aspect ratio, and location than shown in configuration 1000 without departing from the scope hereof. In one embodiment, rectangular sub-portion 1020 has a size, aspect ratio, and position such that all desired orientations of rectangular sub-portion 1020 (i.e., all desired values of angle 1030) are within image circle 1080. In one embodiment, rectangular sub-portion 1020 is square. In this embodiment, angle 1030 represents the angle between direction 301 and a side of rectangular sub-portion 1020.
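The condition that all desired orientations of rectangular sub-portion 1020 lie within image circle 1080 can be checked geometrically: a rectangle centered on the circle's center remains inside the circle at every rotation angle exactly when its half-diagonal does not exceed the circle radius, since the corners are the points farthest from the center. A minimal sketch, with assumed names:

```python
import math

def fits_at_all_angles(width, height, circle_radius):
    """True if a centered width x height rectangle stays inside the image
    circle for every rotation angle (its corners sweep a circle of radius
    equal to the rectangle's half-diagonal)."""
    half_diagonal = math.hypot(width / 2, height / 2)
    return half_diagonal <= circle_radius
```

For example, a 3x4 rectangle has a half-diagonal of 2.5, so it rotates freely inside an image circle of radius 2.5 but not inside a smaller one.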
Fig. 10 is not drawn to scale. Square image sensor pixel array 140 may include a different number of pixels than shown in fig. 10 without departing from the scope hereof.
Fig. 11 is a diagram 1100 showing an electronic image portion 1150 overlaid on a portion 1140 of square image sensor pixel array 140 (figs. 1, 2, and 10). Electronic image portion 1150 is a portion of the electronic image generated from rectangular sub-portion 1020 (fig. 10). Thus, electronic image portion 1150 is shown in fig. 11 in the position of the portion of rectangular sub-portion 1020 (fig. 10) from which electronic image portion 1150 is produced. The electronic image is made up of an array of pixels. Thus, electronic image portion 1150 is made up of image pixels 1151 (only one pixel 1151 is labeled in FIG. 11 for clarity of illustration). However, image pixel 1151 does not directly correspond to a single pixel of image sensor pixel array portion 1140. In the illustrated example of diagram 1100, pixel 1151 of electronic image portion 1150 overlaps four pixels 1141, 1142, 1143, and 1144 of image sensor pixel array portion 1140.
FIG. 12 shows one exemplary method 1200 for generating an electronic image of a desired orientation using a square array of image sensor pixels where the positions of the image pixels do not need to correspond directly to the positions of the image sensor pixels. Thus, method 1200 is one embodiment of method 700 (FIG. 7) and may be utilized to generate electronic images associated with configuration 1000 of FIG. 10 and graph 1100 of FIG. 11. The method 1200 is performed, for example, by the imaging system 200 of fig. 2.
Method 1200 includes steps 702, 710, and optionally 701, as discussed in connection with fig. 7. After performing step 710, method 1200 performs step 1220, which is one embodiment of step 720 of method 700 (FIG. 7). In step 1220, an electronic image is generated from the rectangular sub-portion of the square image sensor pixel array selected in step 710. Step 1220 includes step 1225, in which each image pixel of the electronic image is populated with a value derived from a signal from at least one image sensor pixel of the square image sensor pixel array disposed closest to the location corresponding to the image pixel. Referring to graph 1100 of fig. 11, image pixel 1151 is populated with a value derived from signals from one or more of image sensor pixels 1141, 1142, 1143, and 1144. In one embodiment, this value is derived by using all image sensor pixels that overlap the location of the image pixel. For example, image pixel 1151 is populated with a value derived using all of image sensor pixels 1141, 1142, 1143, and 1144, such as a value representing an average or weighted average of the signals from image sensor pixels 1141, 1142, 1143, and 1144. In the case of a weighted average, the weight associated with each of image sensor pixels 1141, 1142, 1143, and 1144 may be proportional to the overlap between image pixel 1151 and the respective image sensor pixel. In another embodiment, this value is derived from the image sensor pixel having the largest overlap with the location of the image pixel. For example, image pixel 1151 is populated with a value derived from the signal from image sensor pixel 1142, since this is the image sensor pixel with the largest overlap with image pixel 1151. Step 1220 is performed, for example, by processing/control module 250 (fig. 2) in accordance with instructions 270 (fig. 2). Optionally, method 1200 includes step 730 (fig. 7).
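The weighted-average variant of step 1225 may be sketched as follows. The sketch makes a simplifying assumption not stated above: the image pixel's footprint is approximated as an axis-aligned unit square with corner at the continuous sensor-plane location (x, y), under which the overlap-area weights reduce to bilinear weights. All names are illustrative:

```python
import numpy as np

def overlap_weighted_value(sensor, x, y):
    """Fill one image pixel from the up-to-four sensor pixels overlapping the
    unit-square footprint with corner at (x, y); sensor pixel (i, j) is taken
    to cover the cell [j, j+1) x [i, i+1). Assumes (x, y) lies at least one
    pixel away from the right and bottom edges of the array."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))   # top-left covered pixel
    fx, fy = x - x0, y - y0                       # fractional offsets
    # Overlap areas of the footprint with the four covered grid cells.
    w = np.array([[(1 - fy) * (1 - fx), (1 - fy) * fx],
                  [fy * (1 - fx),       fy * fx]])
    patch = sensor[y0:y0 + 2, x0:x0 + 2].astype(float)
    return float((w * patch).sum())
```

In a full implementation, the (x, y) location of each image pixel would itself come from rotating the image-pixel grid by angle 1030; the largest-overlap variant would instead pick the single cell with the greatest weight.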
Step 1225 may be incorporated into step 820 of method 800 (fig. 8) and step 920 of method 900 (fig. 9) without departing from the scope thereof.
Fig. 13 shows how a square image sensor pixel array 140 (fig. 1 and 2) can facilitate the generation of an azimuthally stable image in one exemplary embodiment shown by graphs 1301, 1302, and 1303. All of graphs 1301, 1302, and 1303 show square image sensor pixel array 140 (fig. 1, 2, and 10) and image circle 1080 (fig. 10) specifically formed according to configuration 1000 (fig. 10).
In graph 1301, square image sensor pixel array 140 is aligned to have a side parallel to reference direction 1310. The reference direction is, for example, the direction of gravity. Optical image 1330 is formed within a rectangular sub-portion 1320 of the portion of square image sensor pixel array 140 that is located within image circle 1080. Rectangular sub-portion 1320 is one embodiment of rectangular sub-portion 1020 (fig. 10). In graph 1302, square image sensor pixel array 140 has been rotated to an angle with respect to direction 1310. However, the orientation of optical image 1330 is unchanged with respect to direction 1310. Rectangular sub-portion 1325 is associated with optical image 1330 in graph 1302 in the same manner as rectangular sub-portion 1320 is associated with optical image 1330 in graph 1301. However, since the orientation of square image sensor pixel array 140 changes from graph 1301 to graph 1302, rectangular sub-portion 1325 comprises a different portion of square image sensor pixel array 140 than rectangular sub-portion 1320. Graph 1303 shows the same configuration as graph 1302, now seen in the reference frame of square image sensor pixel array 140. In summary, graphs 1301 and 1302 show the changed orientation of optical image 1330 with respect to square image sensor pixel array 140. Graphs 1301 and 1302 further show that two different rectangular sub-portions (i.e., rectangular sub-portions 1320 and 1325) are used to generate two identically oriented electronic images representing optical image 1330.
FIG. 14 shows one exemplary method 1400 for generating an azimuthally-stabilized electronic image by utilizing a square image sensor pixel array and a gravity orientation sensor. The method 1400 is performed, for example, by one embodiment of the imaging system 200 (fig. 2) including the gravity orientation sensor 170 (fig. 1 and 2).
In step 1410, the orientation of the square image sensor pixel array with respect to gravity, i.e., the gravitational orientation of the square image sensor pixel array, is detected. In one embodiment, the gravity orientation sensor is capable of directly determining the gravitational orientation of the square image sensor pixel array. For example, gravity orientation sensor 170 (figs. 1 and 2) detects the gravitational orientation of square image sensor pixel array 140 (figs. 1 and 2). In another embodiment, the gravity orientation sensor detects the gravitational orientation of itself or of another part of the imaging system incorporating the gravity orientation sensor. The gravitational orientation detected by the gravity orientation sensor is then processed to determine the gravitational orientation of the square image sensor pixel array. For example, gravity orientation sensor 170 (figs. 1 and 2) detects its own gravitational orientation and communicates this gravitational orientation to processing/control module 250 (fig. 2). Processing/control module 250 (fig. 2) derives the gravitational orientation of square image sensor pixel array 140 from the gravitational orientation of gravity orientation sensor 170 (figs. 1 and 2) in accordance with instructions 270 (fig. 2). To this end, instructions 270 (fig. 2) may include information regarding the structural relationship between gravity orientation sensor 170 (figs. 1 and 2) and square image sensor pixel array 140 (figs. 1 and 2). Either of these two embodiments of step 1410 may determine the angle between square image sensor pixel array 140 and direction 1310, as shown in graph 1302 of fig. 13.
In step 1420, a desired image orientation with respect to gravity is received. For example, processing/control module 250 (fig. 2) receives a desired gravitational image orientation (e.g., landscape or portrait) from interface 260 (fig. 2) or from instructions 270 (fig. 2). In step 1430, a desired orientation of the electronic image relative to the square image sensor pixel array is determined. This orientation is determined from the gravitational orientation of the square image sensor pixel array detected in step 1410 and the desired image orientation with respect to gravity received in step 1420. For example, processing/control module 250 (FIG. 2) determines the desired orientation of the electronic image in accordance with instructions 270 (FIG. 2), using the inputs from steps 1410 and 1420.
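Step 1430 may be sketched, for illustration, as a subtraction of angles; the sign convention, degree units, and names are assumptions for this sketch:

```python
def sub_portion_angle(array_tilt_deg, desired_gravity_angle_deg):
    """Angle of the rectangular sub-portion relative to the pixel array
    (angle 1030 of fig. 10), combining the detected tilt of the array with
    respect to gravity and the desired image orientation with respect to
    gravity."""
    # Counter-rotate by the array's tilt so the image stays fixed w.r.t. gravity.
    return (desired_gravity_angle_deg - array_tilt_deg) % 360.0
```

For example, if the array is tilted 15 degrees and a gravity-level (0-degree) image is desired, the sub-portion is placed at 345 degrees relative to the array.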
Optionally, step 701 (FIG. 7) of method 700 is performed in parallel or in series with steps 1410, 1420, and 1430. After performing step 1430, method 1400 performs step 710. For example, processing/control module 250 (fig. 2) selects rectangular sub-portion 1325 (fig. 13) of square image sensor pixel array 140 (figs. 1 and 2), as shown in graph 1303 of fig. 13. Next, method 1400 performs step 1220 of method 1200 (FIG. 12). In an alternative embodiment of method 1400, step 1220 (FIG. 12) is replaced with step 720 (FIG. 7) of method 700. Optionally, method 1400 further includes step 730 of method 700 (fig. 7).
FIG. 15 shows an exemplary method 1500 for generating an azimuthally-stable electronic image by using a square electronic image captured by a square image sensor pixel array together with a gravity orientation sensor. Method 1500 is one embodiment of method 1400 of fig. 14. As discussed in connection with fig. 14, the method 1500 performs steps 1410 (fig. 14), 1420 (fig. 14), 1430 (fig. 14), and 710 (fig. 7). In parallel or in series therewith, the method 1500 performs step 815 (FIG. 8). Optionally, step 815 (FIG. 8) follows step 701 (FIG. 7).
After performing steps 815 and 710, method 1500 performs step 1520. In step 1520, an aligned electronic image is generated from those pixels of the square electronic image captured in step 815 that are associated with image sensor pixels located within the rectangular sub-portion of the square image sensor pixel array selected in step 710. Step 1520 includes step 1525, in which each image pixel of the aligned electronic image is populated with a value derived from at least one pixel of the square electronic image located closest to the pixel of the aligned image. Referring to graph 1100 of FIG. 11, portion 1140 may be interpreted as a portion of the square electronic image captured in step 815. The pixels of the aligned image may be populated in a manner similar to that of image pixel 1151, by using values associated with one or more of image sensor pixels 1141, 1142, 1143, and 1144 (as discussed in connection with fig. 11). For example, processing/control module 250 (fig. 2) receives the square electronic image generated in step 815 from image sensor 220 (fig. 2) or from data store 280. Processing/control module 250 (fig. 2), in accordance with instructions 270 (fig. 2), populates each pixel of the aligned electronic image with a value derived from at least one pixel of the square electronic image located closest to the pixel of the aligned image, wherein each pixel of the aligned electronic image has a position corresponding to a position within the rectangular sub-portion. Optionally, method 1500 also includes step 730 (fig. 7).
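The nearest-pixel fill of step 1525 may be sketched as an inverse rotation of each aligned-image pixel location into the captured square image, followed by rounding to the nearest captured pixel. The rotation convention and names are assumptions for this sketch:

```python
import math
import numpy as np

def align_nearest(square_image, angle_deg, out_shape):
    """Build an aligned image by inverse-rotating each output pixel location
    about the capture's center and copying the nearest captured pixel."""
    h, w = square_image.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2            # center of the capture
    oh, ow = out_shape
    ocy, ocx = (oh - 1) / 2, (ow - 1) / 2        # center of the output image
    t = math.radians(angle_deg)
    out = np.zeros(out_shape, dtype=square_image.dtype)
    for r in range(oh):
        for c in range(ow):
            dy, dx = r - ocy, c - ocx
            # Inverse rotation of the output coordinate into source coordinates.
            sy = cy + dy * math.cos(t) - dx * math.sin(t)
            sx = cx + dy * math.sin(t) + dx * math.cos(t)
            i, j = int(round(sy)), int(round(sx))
            if 0 <= i < h and 0 <= j < w:        # skip samples outside capture
                out[r, c] = square_image[i, j]
    return out
```

A production implementation would vectorize the loop and would typically use the overlap-weighted fill of step 1225 instead of nearest-neighbor rounding when image quality matters.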
FIG. 16 shows one exemplary method 1600 for generating an azimuthally-stabilized electronic image by utilizing a portion of a square image sensor pixel array and a gravity orientation sensor. Method 1600 is one embodiment of method 1400 (FIG. 14). The method 1600 performs steps 1410 (fig. 14), 1420 (fig. 14), 1430 (fig. 14), 710 (fig. 7), and optionally step 701 (fig. 7), as discussed in connection with fig. 14. After performing step 710, method 1600 performs step 1620, wherein an electronic image is generated from the rectangular sub-portion of the square image sensor pixel array selected in step 710. Step 1620 comprises sequential steps 925 (FIG. 9) and 1225 (FIG. 12). Optionally, method 1600 further comprises step 730 (fig. 7).
FIG. 17 shows one example method 1700 for generating an azimuthally-stabilized series of electronic images (e.g., video). In step 1710, the method 1400 of FIG. 14 is performed for all electronic images in the series of electronic images, as discussed in connection with FIG. 14. In one embodiment, method 1400 is performed in accordance with an embodiment thereof, method 1500 (FIG. 15). In another embodiment, method 1400 is performed in accordance with an embodiment thereof, method 1600 (FIG. 16).
Combinations of features
The above-described features, as well as those claimed below, may be combined in various ways without departing from the scope thereof. For example, it will be appreciated that implementations of one imaging system or method described herein using square image sensors for flexible image orientation may be combined or interchanged with features of another imaging system or method described herein using square image sensors for flexible image orientation. The following examples show possible non-limiting combinations of the above-described embodiments. It should be clearly understood that various other changes and modifications can be made herein to the methods and apparatus without departing from the spirit and scope of the invention:
(A) an image capture system for generating a flexibly aligned electronic image may comprise: an image sensor having a square pixel array and an image taking optical element for forming an optical image on at least a portion of the square pixel array, the portion being located within an image circle of the image taking optical element and the portion including at least two rectangular sub-portions different from each other in at least one of aspect ratio and orientation.
(B) The imaging system of (a), further comprising a processing module capable of generating an electronic image from each of the at least two rectangular sub-portions.
(C) The image capture system of (A) and (B), wherein the at least two rectangular sub-portions may include a first sub-portion and a second sub-portion, the first sub-portion having an aspect ratio a:b, wherein a is different from b, and the second sub-portion having the same size and aspect ratio as the first sub-portion, and an orientation perpendicular to the first sub-portion.
(D) The image capture system of (C), the square pixel array may comprise NxN pixels.
(E) The imaging system of (D), the first and second sub-portions may have sides parallel to sides of the square pixel array.
(F) The imaging system of (E), the first sub-portion may comprise columns of N pixels parallel to a first side of the square pixel array.
(G) The imaging system of (F), the second sub-portion may comprise rows of N pixels, the rows being perpendicular to the columns.
(H) The image capture system of (C) through (G), the at least two rectangular sub-portions may further include a third sub-portion having an aspect ratio c:d, wherein c is different from d, and c/d is different from a/b.
(I) The imaging system of (H), the at least two rectangular sub-portions may further comprise a fourth sub-portion having the same size and aspect ratio as the third sub-portion and an orientation perpendicular to the third sub-portion.
(J) The imaging system of (I), the first, second, third and fourth sub-portions may have sides parallel to sides of the square pixel array.
(K) The imaging system of (J), the first and third sub-portions may comprise columns of N pixels parallel to a first side of the square pixel array, and the second and fourth sub-portions comprise rows of N pixels, the rows being perpendicular to the columns.
(L) The image capture system of (A) to (K), the at least two rectangular sub-portions may comprise a square sub-portion having an aspect ratio of 1:1.
(M) the imaging system of (L), the square sub-portion may include all pixels of the square pixel array.
(O) The image capturing system as described in (B) to (M), further comprising a gravity orientation sensor for detecting an orientation of the image capturing system with respect to gravity.
(P) The image capture system of (O), the at least two rectangular sub-portions may comprise a plurality of rectangular sub-portions having aspect ratio a:b and mutually different orientations.
(Q) the imaging system of (P), the processing module capable of selecting one of the plurality of sub-portions as a function of the orientation of gravity such that the electronic image reflects a desired orientation with respect to gravity.
(R) The image capture system of (P) to (Q), the plurality of sub-portions may include all possible orientations of a sub-portion having aspect ratio a:b.
(S) the image capturing system as described in (A) to (R), may be implemented in a hand-held camera.
(T) An image capture method for generating an electronic image of flexible orientation by using a square image sensor pixel array, may include: forming an optical image on at least a portion of the square image sensor pixel array; and selecting a rectangular sub-portion of the at least a portion of the square image sensor pixel array in accordance with a desired orientation.
(U) the image capture method of (T), further comprising generating a final electronic image from the sub-portions.
(V) the image capture method of (U), the selecting step may include selecting a rectangular subsection of the at least a portion of the square image sensor pixel array, wherein the rectangular subsection has a side parallel to a side of the square image sensor pixel array.
(W) The image capturing method as described in (T) to (U), the step of selecting a rectangular sub-portion may include: selecting the rectangular sub-portion from a set consisting of a landscape-oriented sub-portion and a portrait-oriented sub-portion.
(X) the image capture method of (T) to (W), the selecting step may comprise selecting the rectangular sub-portion of the at least a portion of the square image sensor pixel array in accordance with the desired orientation and aspect ratio.
(Y) the image capturing method as described in (T) to (X), may further include the step of receiving the desired orientation.
(Z) the image capturing method as described in (T) to (Y), the selecting step being performed by a processing module.
(AA) the image capturing method as described in (T) to (Z), may further include: a step of determining a gravity orientation of the square image sensor pixel array by using a gravity orientation sensor.
(AB) The image capturing method as described in (T), (U), and (X) to (Z), further comprising the step of determining the gravitational orientation of the square image sensor pixel array using a gravity orientation sensor, wherein the selecting step may comprise selecting a rectangular sub-portion of the at least a portion of the square image sensor pixel array according to a desired orientation relative to the gravitational orientation.
(AC) the image capturing method as described in (AB), may further include: a final electronic image is generated from the sub-portions.
(AD) the image capturing method as described in (AC), the generating step may further include: filling each pixel of the final electronic image with a value that is related to a signal from at least one of the pixels of the square image sensor pixel array that is located closest to the location corresponding to the pixel of the final electronic image.
(AE) the image capturing method as described in (T) to (AD), and the electronic image may be a video stream.
(AF) The image capture method as described in (AC) to (AD), the electronic images may form a video stream, and the method may include repeating the determining, forming, selecting, and generating steps for each of the electronic images in the video stream to generate an azimuthally-stabilized video stream.
(AG) the image capturing method as described in (U), (V) and (AC) to (AF), before the generating step, may further include a step of capturing a square electronic image.
(AH) the image capturing method as described in (T) and (W) to (AB), further comprising: generating a final electronic image from the sub-portions; and capturing the square electronic image before the generating step.
(AI) The image capturing method as described in (AG) to (AH), the generating step may include cropping the square electronic image according to the rectangular sub-portion selected in the selecting step.
(AJ) The image capturing method as described in (U), (V), and (AC) to (AF), the generating step may include capturing a rectangular image by using the rectangular sub-portion selected in the selecting step.
(AK) the image capturing method as described in (T) to (AJ), may be implemented in a hand-held camera.
Changes may be made in the above methods and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description and shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover the generic and specific features described herein, as well as all statements of the scope of the present method and system which, as a matter of language, might be said to fall therebetween.

Claims (18)

1. An image capture system for generating a flexibly aligned electronic image, comprising a mobile phone, said mobile phone comprising:
(a) an image sensor having a square pixel array with pixels arranged in orthogonal rows and columns;
(b) an image capturing optic for forming an optical image on at least a portion of the square pixel array, the at least a portion being located within an image circle of the image capturing optic and the at least a portion comprising a plurality of overlapping and concentric rectangular sub-portions having mutually different orientations, at least one of the rectangular sub-portions having a side that is not parallel to the rows and columns; and
(c) a processing module capable of generating an electronic image from any of said rectangular subparts, wherein said processing module is configured to, for each pixel of said electronic image and when the sides of the rectangular subpart are not parallel to said rows and columns, fill each pixel of said electronic image with a value related to a signal from at least one pixel of said square array of pixels located closest to the location corresponding to each pixel of said electronic image.
2. The image capture system of claim 1, wherein the rectangular sub-portions comprise a first sub-portion and a second sub-portion, the first sub-portion having an aspect ratio a:b, wherein a is different from b, and the second sub-portion having the same size and aspect ratio as the first sub-portion, and an orientation perpendicular to the first sub-portion.
3. The image capture system of claim 2, wherein the square pixel array includes NxN pixels, the first and second sub-portions have sides parallel to sides of the square pixel array, the first sub-portion includes columns of N pixels parallel to the first side of the square pixel array, and the second sub-portion includes rows of N pixels, the rows being perpendicular to the columns.
4. The imaging system of claim 2, wherein the rectangular sub-portions further comprise a third sub-portion and a fourth sub-portion, the third sub-portion having an aspect ratio c:d, wherein c is different from d, and c/d is different from a/b, and the fourth sub-portion having the same size and aspect ratio as the third sub-portion, and an orientation perpendicular to the third sub-portion.
5. The image capture system of claim 4, wherein the first, second, third and fourth sub-portions have sides parallel to sides of the square pixel array, the first and third sub-portions comprising columns of N pixels parallel to a first side of the square pixel array, and the second and fourth sub-portions comprising rows of N pixels, the rows perpendicular to the columns.
6. The imaging system of claim 2, wherein the rectangular sub-portions further comprise a third sub-portion having an aspect ratio of 1:1, the third sub-portion comprising all pixels of the square pixel array.
7. The image capture system of claim 1, wherein the mobile phone further comprises a gravity orientation sensor for detecting a gravity orientation of the image capture system relative to gravity, and wherein the processing module is capable of selecting any of the rectangular sub-portions based on the gravity orientation such that the electronic image reflects a desired orientation relative to gravity.
8. The image capture system of claim 7, wherein the rectangular sub-portions comprise a discrete set of rectangular sub-portions whose orientations are evenly distributed in a range from 0 to 360 degrees, including at least one rectangular sub-portion that is not squarely aligned with the square pixel array.
9. An image capture method for producing an electronic image of flexible orientation using a square image sensor pixel array, wherein the pixels of the square image sensor pixel array are arranged in orthogonal rows and columns, the method comprising:
forming an optical image on at least a portion of the square image sensor pixel array implemented in a mobile phone;
selecting a rectangular subsection of the at least a portion of the square image sensor pixel array in accordance with a desired orientation, the rectangular subsection having the desired orientation; and
generating a final electronic image from the rectangular subsection, the generating including, for each pixel of the final electronic image, when a side of the rectangular subsection is not parallel to the rows and columns, filling each pixel of the final electronic image with a value that is related to a signal from at least one pixel of the square pixel array located closest to a location corresponding to each pixel of the final electronic image.
10. The method of claim 9, wherein the selecting comprises selecting a rectangular sub-portion of the at least a portion of the square image sensor pixel array, the rectangular sub-portion having sides parallel to sides of the square image sensor pixel array.
11. The method of claim 10, wherein said selecting a rectangular sub-portion comprises: selecting the rectangular sub-portion from a set consisting of a landscape-oriented sub-portion and a portrait-oriented sub-portion.
12. The method of claim 9, wherein the selecting comprises selecting the rectangular sub-portion of the at least a portion of the square image sensor pixel array as a function of the desired orientation.
13. The method of claim 9, further comprising receiving the desired orientation.
14. The method of claim 13, wherein the selecting is performed by a processing module using the desired orientation received in the receiving.
15. The method of claim 9, further comprising: determining a gravitational orientation of the square image sensor pixel array using a gravitational orientation sensor implemented on the mobile phone, the selecting comprising selecting a rectangular sub-portion of the at least a portion of the square image sensor pixel array according to a desired orientation relative to the gravitational orientation, the rectangular sub-portion being at a non-right angle to the square image sensor pixel array.
16. The method of claim 15, wherein the final electronic images form a video stream, the method comprising repeating the determining, forming, selecting, and generating for each electronic image in the video stream to produce an orientation-stabilized video stream.
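Claim 16 applies the per-frame selection of claim 15 to video: re-selecting the sub-portion for every frame counter-rotates the readout against the measured device tilt, keeping the output level. A minimal sketch of that loop, with `extract_at_angle` standing in for any rotated-rectangle readout routine (all names hypothetical):

```python
def stabilize_stream(frames, gravity_angles, extract_at_angle):
    """Orientation-stabilized readout: for each square frame, select a
    sub-portion counter-rotated by the measured tilt so the scene stays
    level as the camera rotates (hypothetical sketch)."""
    stabilized = []
    for frame, tilt in zip(frames, gravity_angles):
        # counter-rotate the sub-portion by the per-frame device tilt
        stabilized.append(extract_at_angle(frame, -tilt))
    return stabilized
```

The determining, selecting, and generating steps thus repeat once per frame, which is why the claim recites repeating them "for each electronic image" in the stream.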
17. The method of claim 9, further comprising, prior to said generating, capturing a square electronic image, wherein said generating comprises cropping said square electronic image based on said rectangular sub-portion selected in said selecting.
18. The method of claim 9, wherein said generating comprises reading out a rectangular image using said rectangular sub-portion selected in said selecting.
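Claims 17 and 18 distinguish two generating paths: capturing the full square image and cropping it afterward (claim 17), versus reading out only the selected rectangle from the sensor (claim 18). The claim-17 path for an axis-aligned sub-portion can be sketched in a few lines of Python (function and parameter names are illustrative, not from the patent):

```python
def crop_square_image(square, x0, y0, width, height):
    """Claim-17-style flow: the full square electronic image is captured
    first, then the selected axis-aligned rectangular sub-portion, whose
    top-left corner is at (x0, y0), is cropped from it."""
    return [row[x0:x0 + width] for row in square[y0:y0 + height]]
```

The claim-18 path would instead configure the sensor's readout window so only the selected rows and columns are ever digitized, trading post-processing for readout control.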
HK14112791.0A 2013-04-26 2014-12-22 Imaging systems and methods using square image sensor for flexible image orientation HK1199342B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361816612P 2013-04-26 2013-04-26
US61/816,612 2013-04-26

Publications (2)

Publication Number Publication Date
HK1199342A1 HK1199342A1 (en) 2015-06-26
HK1199342B true HK1199342B (en) 2018-06-22
