
WO2019012660A1 - Image processing device and light field imaging device - Google Patents

Image processing device and light field imaging device Download PDF

Info

Publication number
WO2019012660A1
WO2019012660A1 (PCT/JP2017/025592, JP2017025592W)
Authority
WO
WIPO (PCT)
Prior art keywords
light field
image
value
light
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2017/025592
Other languages
French (fr)
Japanese (ja)
Inventor
有紀 徳橋
智史 渡部
岡村 俊朗
隼一 古賀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corp filed Critical Olympus Corp
Priority to PCT/JP2017/025592 priority Critical patent/WO2019012660A1/en
Publication of WO2019012660A1 publication Critical patent/WO2019012660A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00Special procedures for taking photographs; Apparatus therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems

Definitions

  • the present invention relates to an image processing apparatus and a light field imaging apparatus.
  • Conventionally, there is known a light field imaging apparatus that images the three-dimensional distribution of a subject, comprising an imaging element in which a plurality of pixels are two-dimensionally arranged, and a microlens array, disposed on the subject side of the imaging element, having microlenses corresponding to the plurality of pixels of the imaging element (see, for example, Patent Document 1).
  • An image acquired by a light field imaging apparatus (hereinafter referred to as a light field image) differs from an image acquired by a normal imaging apparatus: the images of a large number of three-dimensionally distributed points overlap one another, so basic information such as the planar position and distance of the subject cannot be grasped intuitively unless image processing is performed.
  • In addition, since a light field image contains an enormous amount of information, it is difficult to process it in real time so as to reconstruct a three-dimensional image composed of a plurality of slice images.
  • The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image processing apparatus and a light field imaging apparatus capable of detecting events, without overlooking them, from light field images acquired over a plurality of frames.
  • One aspect of the present invention is an image processing apparatus to which light field images of a plurality of frames acquired over time are input and which, for at least a partial region of the light field image of at least one of the plurality of frames, calculates and outputs an event evaluation value that is a representative value of numerical values indicating the deviation of the pixel values of the pixels included in the light field image from a predetermined reference value.
  • According to this aspect, when light field images of a plurality of frames acquired over time are input, an event evaluation value, a representative value of numerical values indicating the deviation of the pixel values of the pixels included in the light field image from a predetermined reference value, is calculated and output for at least a partial region of at least one light field image.
  • That is, the occurrence of a significant change (event) at any position, or in any frame, of the input light field images of the plurality of frames can be confirmed easily by means of the event evaluation value, which is a representative value, over at least a partial region, of numerical values indicating the deviation of the pixel values from the predetermined reference value. Compared with confirming the occurrence of events by visually observing all of the light field images, events can thus be detected without being overlooked.
  • In the above aspect, the reference value may be an average value of the pixel values of the corresponding pixels of the light field images. The reference value may also be an average value of the pixel values of a plurality of pixels of the light field images of a plurality of the frames, or an average value of the pixel values of all pixels of all the light field images.
  • In the above aspect, the reference value may be the pixel value of the corresponding pixel in the light field image of a frame that differs in the time axis direction, for example the frame adjacent in the time axis direction.
  • Another aspect of the present invention is a light field imaging apparatus comprising: an imaging optical system that condenses light from a subject and forms an image of the subject; a microlens array having a plurality of microlenses, arranged two-dimensionally at the position of the primary image formed by the imaging optical system or at a position conjugate with the primary image, that focus the light from the imaging optical system; an imaging element having a plurality of pixels that receive the light focused by the microlenses and that generates the light field image by photoelectrically converting the light received at the pixels; and any of the above image processing apparatuses, which processes the light field image generated by the imaging element.
  • In the above aspect, the pixels may be arranged in regions each corresponding to one of the microlenses, and the image processing apparatus may calculate the event evaluation value by weighting each pixel corresponding to each microlens according to the incident angle of the light on that microlens.
  • In the above aspect, the image processing apparatus may divide each light field image into a plurality of regions corresponding to the incident angles of the light on the microlenses and calculate the event evaluation value for each of the divided regions.
  • FIG. 1 is a schematic diagram showing a light field imaging apparatus according to one embodiment of the present invention. FIG. 2 is a diagram showing an example of the change, for each frame, of the event evaluation value output by the image processing apparatus provided in the light field imaging apparatus of FIG. 1.
  • FIG. 7B is a diagram illustrating a case where the subject is disposed behind the focal position of the imaging optical system, and FIG. 7C a case where the subject is disposed in front of the focal position of the imaging optical system.
  • As shown in FIG. 1, the light field imaging apparatus 1 according to the present embodiment comprises: an imaging optical system 3 that condenses light from a subject S (object point) and forms an image of the subject S; a microlens array 5 having a plurality of microlenses 5a that collect the light from the imaging optical system 3; an imaging element 9 comprising a plurality of pixels 9a that receive and photoelectrically convert the light collected by the plurality of microlenses 5a; an image processing apparatus 2 according to the present embodiment, which processes the light field images acquired by the imaging element 9; and a display unit 4.
  • In the microlens array 5, the plurality of microlenses 5a, which have positive power, are arranged in a two-dimensional array along a plane orthogonal to the optical axis L at the focal position of the imaging optical system 3 (the position of the primary image or a position conjugate with the primary image). The plurality of microlenses 5a are arranged at a pitch sufficiently large compared with the pixel pitch of the imaging element 9 (for example, eight times the pixel pitch).
  • The imaging element 9 is likewise configured by two-dimensionally arranging the pixels 9a in directions orthogonal to the optical axis L of the imaging optical system 3, with a plurality of pixels (8 × 8 in the above example) arranged in each region corresponding to one of the microlenses 5a of the microlens array 5. The plurality of pixels 9a photoelectrically convert the detected light and output light intensity signals (pixel values) as light field image information of the subject S.
  • The imaging element 9 sequentially outputs light field image information of a plurality of frames acquired at different times along the time axis, for example by movie shooting or time-lapse shooting.
  • The image processing apparatus 2 is configured by a processor and calculates, according to Equation 1, an event evaluation value Ak for the k-th light field image of a sequence of light field images of a plurality of frames acquired by the imaging element 9 at predetermined time intervals. The event evaluation value Ak is, for each light field image, a representative value of numerical values indicating the deviation of the pixel values of the pixels included in that light field image from the average value over the entire sequence (the predetermined reference value).
  • In Equation 1, n is the pixel position in the light field image, ntotal is the number of pixels of each light field image, ktotal is the total number of frames, Ikn is the pixel value of the n-th pixel in the k-th light field image, and Iavgn is the n-th pixel value of the average image of the entire sequence (the average of the pixel values of the n-th pixels over all light field images of the sequence).
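  • From the variable definitions above, Equation 1 can be sketched in Python as follows (an illustrative reconstruction rather than the patent's own implementation; the function name and the flattened one-row-per-frame array layout are assumptions):

```python
import numpy as np

def event_evaluation_values(sequence):
    """Compute an event evaluation value A_k for each light field image.

    sequence: array of shape (ktotal, ntotal) holding the pixel values
    I_kn of the sequence, with one flattened light field image per row.
    """
    seq = np.asarray(sequence, dtype=float)
    # Iavg_n: per-pixel average image over the entire sequence.
    avg_image = seq.mean(axis=0)
    # A_k: sum over all pixels of |I_kn - Iavg_n|, one value per frame.
    return np.abs(seq - avg_image).sum(axis=1)
```

A frame in which an event occurs deviates strongly from the sequence-average image and therefore yields a larger Ak than quiet frames.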
  • The display unit 4 is a monitor that displays the event evaluation value Ak calculated by the image processing apparatus 2.
  • The operation of the image processing apparatus 2 and the light field imaging apparatus 1 configured as described above will now be described.
  • In the light field imaging apparatus 1 according to the present embodiment, imaging is performed at predetermined time intervals: light from the subject S is condensed by the imaging optical system 3, collected by the respective microlenses 5a of the microlens array 5 disposed at the focal position, and received by the pixels of the imaging element 9, whereby light field image information is acquired. The event evaluation value Ak is then calculated for each frame, a graph showing the change of Ak from frame to frame is generated, and the graph is displayed on the display unit 4.
  • FIG. 2 shows an example of a graph of the change of the event evaluation value for each frame.
  • In the present embodiment, the event evaluation value Ak is calculated using the pixel values of the average image of the whole sequence; however, the sequence may instead be divided into a plurality of sections along the time axis and the pixel values of the average image within each section may be used. For example, in the case of time-lapse photography in which imaging is repeated for 10 seconds every hour, it suffices, when calculating the event evaluation value Ak of a frame in a given shooting section, to use the pixel values of the average image calculated only from the light field images of that shooting section.
  • Alternatively, the pixel values of the average image of the light field images of a plurality of immediately preceding frames (a moving average) may be used. This makes it possible to confirm the occurrence of an event without waiting for the entire sequence to end, and even when the overall pixel values change gradually for reasons unrelated to an event, the occurrence of the event can be detected accurately without being affected by that drift.
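  • A moving-average reference of this kind could be sketched as follows (an illustration; the function name and the window size are assumptions):

```python
import numpy as np

def event_values_moving_average(sequence, window=5):
    """A_k computed against the average image of the `window` frames
    immediately preceding frame k, so events can be confirmed before
    the whole sequence has finished and slow drifts are tolerated."""
    seq = np.asarray(sequence, dtype=float)
    values = np.zeros(len(seq))
    for k in range(1, len(seq)):
        lo = max(0, k - window)
        reference = seq[lo:k].mean(axis=0)  # moving-average image
        values[k] = np.abs(seq[k] - reference).sum()
    return values
```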
  • In the above, the average value is obtained for each pixel position; instead, as shown in Equation 2, the average value Iavg of the pixel values of the entire sequence may be used, where Iavg is the average of all pixel values in the entire sequence. In this case, the event evaluation value Bk may be taken as the sum, over the entire light field image, of the absolute values of the differences.
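  • Assuming Equation 2 simply replaces the per-pixel average image with the single scalar Iavg, the variant evaluation value Bk might look like this (an illustrative sketch; the function name is an assumption):

```python
import numpy as np

def event_values_scalar_reference(sequence):
    """B_k: sum over the whole light field image of |I_kn - Iavg|,
    where Iavg is one scalar, the mean of all pixel values of all
    frames of the sequence."""
    seq = np.asarray(sequence, dtype=float)
    iavg = seq.mean()  # single average over the entire sequence
    return np.abs(seq - iavg).sum(axis=1)
```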
  • By limiting the calculation to a partial region, the sensitivity to changes of the event evaluation value Ak can be improved. That is, when an event occurs in a very small area of the light field image, calculating the event evaluation value Ak over the entire light field image yields a small value that is difficult to distinguish from noise, whereas restricting the calculation to the area of interest makes the event evaluation value Ak large. The weights of unimportant areas can also be reduced to lessen their influence, or set to zero to shorten the calculation time.
  • In the pixels corresponding to the vicinity of the boundaries of the microlenses 5a, the pixel values tend to be greatly reduced. These pixels correspond to the edge of the pupil of the imaging optical system 3, while the pixels corresponding to the vicinity of the center of each microlens 5a correspond to the center of the pupil. The event evaluation value Ak may therefore be calculated by weighting each pixel position corresponding to the microlenses 5a; for example, the weights of the pixels arranged near the boundaries of the microlenses 5a can be reduced to make the evaluation less susceptible to noise.
  • Specifically, when summing the absolute values of the differences between the pixel values and the reference value, the weights may be lowered for pixels near the boundaries of the microlenses 5a and raised for pixels corresponding to the vicinity of their centers. As a weighting method in this case, an image of a uniform plane (for example, an all-white plane) or of empty space may be captured in advance, as shown in FIG. 6, and weights corresponding to its intensity distribution may be adopted. If the amount of light decreases or the image is distorted at high image heights, the image height may also be taken into consideration in the weighting. Further, the image may be divided into areas for each pixel position corresponding to the microlenses 5a, and the evaluation value may be calculated for each area.
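  • One way such weighting could be realized, assuming a calibration light field image of a uniform white plane is available, is to use its normalized intensity distribution as the per-pixel weight (a sketch under that assumption, not the patent's implementation):

```python
import numpy as np

def weighted_event_values(sequence, calibration_image):
    """A_k with per-pixel weights taken from a calibration shot of a
    uniform white plane: dark pixels near the microlens boundaries get
    low weight and are therefore less susceptible to noise."""
    seq = np.asarray(sequence, dtype=float)
    cal = np.asarray(calibration_image, dtype=float)
    weights = cal / cal.max()  # weight follows the intensity distribution
    avg_image = seq.mean(axis=0)
    return (weights * np.abs(seq - avg_image)).sum(axis=1)
```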
  • The light field image contains three-dimensional information and also contains information on events occurring within the subject S, but a clear symptom does not always appear in an event evaluation value Ak calculated from the entire light field image. Therefore, the light field image may be divided into areas in the direction of the optical axis L, and the event evaluation value Ak may be calculated for each area.
  • Since one pixel 9a determines the angle of the light incident on the corresponding microlens 5a, the image of a point away from the in-focus position does not simply blur on the imaging element 9 as it would with a normal camera. Instead, as shown in FIGS. 7A to 7C and FIGS. 8A to 8C, the image of one point of the subject S is distributed, according to the position z1, z2, z3 of that point in the optical axis L direction (that is, the incident angle of the light on the microlens 5a), over a plurality of two-dimensional coordinates in a plane orthogonal to the optical axis L.
  • Accordingly, the light field imaging apparatus 1 captures images containing information from which the three-dimensional coordinates of the original position can be calculated from the intersections of the plurality of light rays corresponding to the plurality of two-dimensional coordinates. A method of simple refocusing using this property is known, and the event evaluation value Ak may be calculated by dividing the image into areas in the optical axis L direction using this method. A scheme may be adopted in which the influence of a point emphasized by refocusing is increased while the influence of a point that is spread over a wide range and weakened by refocusing is reduced. The refocused image itself need not actually be created; the pixels used when calculating the event evaluation value may simply be changed according to the positions z1, z2, z3 in the optical axis L direction.
  • In this case, the event evaluation value Azk is as shown in Equation 4.
  • In Equation 4, z is the position in the optical axis L direction, c is a coefficient determined by the configuration of the imaging optical system 3, Izk(x, y) is the sum of the pixel values of the pixels corresponding to the microlens at the coordinate (x, y) of the k-th frame for the position z in the optical axis L direction, and Iavgz(x, y) is the average, over the entire sequence, of the sums of the pixel values of the pixels corresponding to the microlens 5a at the coordinate (x, y) for the position z in the optical axis L direction. The product c·z is rounded to an integer.
  • c ⁇ z 0, that is, as shown in FIGS. 7A and 8A, in the case where the subject S is disposed at the position z in the optical axis L direction focusing on the microlens 5a
  • FIG. As shown in Equations (5) and (5), pixel values of different pixels corresponding to the same microlens 5a are added.
  • The event evaluation values Azk calculated for the positions z in the optical axis L direction within the range of each area may also be averaged.
  • The difference between Izk(x, y) and the sequence average value (the reference value) is squared so that the influence of a point emphasized by refocusing becomes large.
  • As shown in Equation 6, when abs(Izk(x, y) - Iavgz(x, y)) is less than or equal to a threshold value TH, that component may be excluded from the derivation of the evaluation value, so that the influence of points weakened by refocusing is made small.
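  • As an illustration of the squared-difference and threshold ideas, the following sketch assumes the refocused sums Izk(x, y) for a given depth z have already been computed (the array layout and names are assumptions):

```python
import numpy as np

def refocus_event_values(iz, threshold=0.0):
    """Az_k from precomputed refocused sums Iz_k(x, y).

    iz: array of shape (ktotal, H, W); iz[k] holds, for each microlens
    coordinate (x, y), the sum of the pixel values selected for depth z.
    Squaring the difference from the sequence average emphasises points
    brought into focus; differences at or below `threshold` are dropped,
    suppressing points that refocusing spreads out and weakens.
    """
    iz = np.asarray(iz, dtype=float)
    avg = iz.mean(axis=0)              # Iavgz(x, y) over the sequence
    diff = np.abs(iz - avg)
    diff[diff <= threshold] = 0.0      # threshold cut-off
    return (diff ** 2).sum(axis=(1, 2))
```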
  • The interval between the positions z in the optical axis L direction within each region may be chosen, for example, so that c·z is an integer, or may be set to roughly the theoretical resolution. The interval, or the range of the area, may also be varied according to the position z itself, for example fine where the position z in the optical axis L direction is small and coarse where it is large. The event evaluation value Azk may also be calculated only within a narrow range of positions z in the optical axis L direction.
  • A fractional value of c·z can also be handled by interpolating pixel values from a plurality of pixels. Further, although the pixel values of all the pixels corresponding to all the microlenses 5a are added when calculating the sums of pixel values, the calculation may be performed using only some of the pixels.
  • Alternatively, the sum of the squares of the differences from the pixel values of the corresponding pixels of the light field image of the immediately preceding frame may be used as the event evaluation value Azk.
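  • Using the immediately preceding frame as the reference could be sketched as follows (illustrative; the function name is an assumption):

```python
import numpy as np

def frame_difference_values(sequence):
    """Event value of frame k as the sum of squared differences from
    the immediately preceding frame (reference = previous frame).
    The first frame has no predecessor and is assigned 0."""
    seq = np.asarray(sequence, dtype=float)
    values = np.zeros(len(seq))
    values[1:] = ((seq[1:] - seq[:-1]) ** 2).sum(axis=1)
    return values
```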
  • Machine learning may also be used to calculate the event evaluation value Azk, determine the occurrence of an event, classify the event, and so on from the light field images and the features of the sequence. In that case, an event evaluation value Azk representing the likelihood of an event can be obtained from the acquired light field image sequence on the basis of the learning result. Classification of the detected event may also be performed by learning that includes the type of event or noise component (for example, calcium emission, movement of the subject S, or incidence of external light).
  • In the present embodiment, the value obtained by summing the absolute values of the difference values indicating the deviation from the reference value over the entire sequence is used as the event evaluation value, but any other representative value, for example any statistical value such as an average value, maximum value, minimum value, or median value, may be adopted instead.
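  • Swapping in a different representative statistic might be sketched like this (illustrative helper; the names are assumptions):

```python
import numpy as np

def representative_value(deviations, statistic="sum"):
    """Collapse the per-pixel deviations |I_kn - ref_n| of one frame
    into a single event evaluation value using the chosen statistic."""
    funcs = {"sum": np.sum, "mean": np.mean, "max": np.max,
             "min": np.min, "median": np.median}
    return funcs[statistic](np.asarray(deviations, dtype=float))
```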
  • In the present embodiment, the case where the pixels 9a of the imaging element 9 coincide with the pixels on the light field image used for event detection has been illustrated, but the two need not coincide. For example, the pitch of the microlenses 5a may not be an integral multiple of the pixel pitch, or setting errors such as a slight rotation of the array may occur, in which case a calibration process may be performed to interpolate and rearrange the pixels. In such a case, strictly speaking, the pixels 9a of the imaging element 9 do not coincide with the pixels on the light field image used for event detection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

With the aim of detecting events, without overlooking them, in light field images acquired over a plurality of frames, the image processing apparatus (2) according to the present invention receives light field images of a plurality of frames acquired over time and, for at least a partial region of the light field image of at least one of the plurality of frames, calculates and outputs an event evaluation value that is a representative value of numerical values indicating the deviation of the pixel values of the pixels included in the light field image from a predetermined reference value.

Description

Image processing apparatus and light field imaging apparatus

The present invention relates to an image processing apparatus and a light field imaging apparatus.

Conventionally, there is known a light field imaging apparatus that images the three-dimensional distribution of a subject, comprising an imaging element in which a plurality of pixels are two-dimensionally arranged, and a microlens array, disposed on the subject side of the imaging element, having microlenses corresponding to the plurality of pixels of the imaging element (see, for example, Patent Document 1).

In general, an image acquired by a light field imaging apparatus (hereinafter referred to as a light field image) differs from an image acquired by a normal imaging apparatus: the images of a large number of three-dimensionally distributed points overlap one another, so basic information such as the planar position and distance of the subject cannot be grasped intuitively unless image processing is performed. In addition, since a light field image contains an enormous amount of information, it is difficult to process it in real time so as to reconstruct a three-dimensional image composed of a plurality of slice images.

In addition, by performing movie shooting or time-lapse shooting with a light field imaging apparatus, light field images of a plurality of frames arranged in time series are acquired. In this case, shooting is carried out over a long time in the expectation that a change (event) will occur in the observation target at some timing.

JP 2010-102230 A

However, as described above, it is not easy to detect the occurrence of an event by visually observing light field images, in which, unlike normal images, the images of a large number of three-dimensionally distributed points overlap. Even if a discernible change is present, visually observing a large number of images in their entirety requires labor and time, and there is the disadvantage that an event may be overlooked.

The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image processing apparatus and a light field imaging apparatus capable of detecting events, without overlooking them, from light field images acquired over a plurality of frames.

One aspect of the present invention is an image processing apparatus to which light field images of a plurality of frames acquired over time are input and which, for at least a partial region of the light field image of at least one of the plurality of frames, calculates and outputs an event evaluation value that is a representative value of numerical values indicating the deviation of the pixel values of the pixels included in the light field image from a predetermined reference value.
According to this aspect, when light field images of a plurality of frames acquired over time are input, an event evaluation value, a representative value of numerical values indicating the deviation of the pixel values of the pixels included in the light field image from a predetermined reference value, is calculated and output for at least a partial region of at least one light field image.

That is, the occurrence of a significant change (event) at any position, or in any frame, of the input light field images of the plurality of frames can be confirmed easily by means of the event evaluation value, which is a representative value, over at least a partial region, of numerical values indicating the deviation of the pixel values from the predetermined reference value. Compared with confirming the occurrence of events by visually observing all of the light field images, events can thus be detected without being overlooked.

In the above aspect, the reference value may be an average value of the pixel values of the corresponding pixels of the light field images.
In the above aspect, the reference value may be an average value of the pixel values of a plurality of pixels of the light field images of a plurality of the frames.
In the above aspect, the reference value may be an average value of the pixel values of all pixels of all the light field images.

In the above aspect, the reference value may be the pixel value of the corresponding pixel in the light field image of a frame that differs in the time axis direction.
In the above aspect, the reference value may be the pixel value of the corresponding pixel in the light field image of the frame adjacent in the time axis direction.

Another aspect of the present invention is a light field imaging apparatus comprising: an imaging optical system that condenses light from a subject and forms an image of the subject; a microlens array having a plurality of microlenses, arranged two-dimensionally at the position of the primary image formed by the imaging optical system or at a position conjugate with the primary image, that focus the light from the imaging optical system; an imaging element having a plurality of pixels that receive the light focused by the microlenses and that generates the light field image by photoelectrically converting the light received at the pixels; and any of the above image processing apparatuses, which processes the light field image generated by the imaging element.

In the above aspect, the pixels may be arranged in regions each corresponding to one of the microlenses, and the image processing apparatus may calculate the event evaluation value by weighting each pixel corresponding to each microlens according to the incident angle of the light on that microlens.
In the above aspect, the image processing apparatus may divide each light field image into a plurality of regions corresponding to the incident angles of the light on the microlenses and calculate the event evaluation value for each of the divided regions.

According to the present invention, there is an advantageous effect that events can be detected, without being overlooked, from light field images acquired over a plurality of frames.

FIG. 1 is a schematic diagram showing a light field imaging apparatus according to one embodiment of the present invention.
FIG. 2 is a diagram showing an example of the change, for each frame, of the event evaluation value output by the image processing apparatus provided in the light field imaging apparatus of FIG. 1.
FIG. 3 is a diagram explaining the case where the event evaluation value is calculated for each region into which the light field image is divided by the image processing apparatus of FIG. 1.
FIG. 4 is an enlarged view showing the relationship between the region boundaries and the microlenses in range A of FIG. 3.
FIG. 5 is a diagram showing an example of the change over time of the event evaluation value output by the image processing apparatus of FIG. 3.
FIG. 6 is a diagram showing an example of a light field image obtained by photographing a uniform white plane with the light field imaging apparatus of FIG. 1.
FIG. 7A is a diagram showing the case where the subject is disposed at the focal position of the imaging optical system of the light field imaging apparatus of FIG. 1.
FIG. 7B is a diagram showing the case where the subject is disposed behind the focal position of the imaging optical system.
FIG. 7C is a diagram showing the case where the subject is disposed in front of the focal position of the imaging optical system.
FIG. 8A is a diagram showing an example of a light field image acquired in the case of FIG. 7A.
FIG. 8B is a diagram showing an example of a light field image acquired in the case of FIG. 7B.
FIG. 8C is a diagram showing an example of a light field image acquired in the case of FIG. 7C.
FIG. 9A is a diagram explaining the case where the light field image information is divided into regions in the optical axis direction and the event evaluation value is calculated for a region disposed at the focal position of the imaging optical system.
FIG. 9B is a diagram explaining the case where the light field image information is divided into regions in the optical axis direction and the event evaluation value is calculated for a region disposed at a position shifted from the focal position of the imaging optical system.

 An image processing apparatus 2 and a light field imaging apparatus 1 according to an embodiment of the present invention will be described below with reference to the drawings.
 As shown in FIG. 1, the light field imaging apparatus 1 according to the present embodiment includes: an imaging optical system 3 that condenses light from a subject S (object point) to form an image of the subject S; a microlens array 5 having a plurality of microlenses 5a that condense the light from the imaging optical system 3; an imaging element 9 having a plurality of pixels 9a that receive and photoelectrically convert the light condensed by the plurality of microlenses 5a; an image processing apparatus 2 according to the present embodiment that processes the light field image acquired by the imaging element 9; and a display unit 4.

 As shown in FIG. 1, the microlens array 5 is configured by arranging a plurality of microlenses 5a having positive power two-dimensionally, along a plane orthogonal to the optical axis L, at the focal position of the imaging optical system 3 (the position of the primary image or a position conjugate with the primary image). The microlenses 5a are arranged at a pitch sufficiently larger than the pixel pitch of the imaging element 9 (for example, eight times the pixel pitch of the imaging element 9).

 The imaging element 9 is likewise configured by arranging the pixels 9a two-dimensionally in directions orthogonal to the optical axis L of the imaging optical system 3. A plurality of pixels (for example, 8 × 8 pixels in the above example) are arranged in each region corresponding to one of the microlenses 5a of the microlens array 5. The pixels 9a photoelectrically convert the detected light and output light intensity signals (pixel values) as light field image information of the subject S.

 The imaging element 9 sequentially outputs light field image information of a plurality of frames acquired at different times along the time axis, for example, in movie shooting or time-lapse shooting.

 The image processing apparatus 2 is constituted by a processor and calculates, according to Equation 1, an event evaluation value Ak for the k-th light field image from the start of a sequence of light field images of a plurality of frames acquired by the imaging element 9 at predetermined time intervals.
 The event evaluation value Ak is a representative value, for each light field image, of numerical values indicating the deviation of the pixel values of the pixels included in that light field image from the average value over the entire sequence (a predetermined reference value).

[Equation 1]
    A_k = Σ_{n=1}^{n_total} | I_kn − Iavg_n |

where
 n is the pixel position in the light field image,
 n_total is the number of pixels in each light field image,
 k_total is the total number of frames,
 I_kn is the pixel value of the n-th pixel in the k-th light field image, and
 Iavg_n is the n-th pixel value of the average image of the entire sequence (the average, over all k_total frames, of the pixel values of the n-th pixel), i.e. Iavg_n = (1/k_total) Σ_{k=1}^{k_total} I_kn.
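As an illustrative sketch (not part of the patent text), Equation 1 maps directly onto a few lines of NumPy; the function name and the (frames, height, width) array layout are assumptions:

```python
import numpy as np

def event_scores(seq):
    """Event evaluation value A_k per Equation 1: for each frame k, sum
    over all pixels the absolute deviation from the per-pixel average
    image of the whole sequence (Iavg_n)."""
    seq = np.asarray(seq, dtype=float)          # shape (k_total, H, W)
    avg = seq.mean(axis=0)                      # Iavg_n: per-pixel mean image
    return np.abs(seq - avg).reshape(len(seq), -1).sum(axis=1)

# Toy sequence: four frames of a 2 x 2 light field image;
# one pixel brightens in the third frame (an "event").
frames = np.array([[[1., 1.], [1., 1.]],
                   [[1., 1.], [1., 1.]],
                   [[1., 9.], [1., 1.]],
                   [[1., 1.], [1., 1.]]])
A = event_scores(frames)
print(A)  # the third frame stands out from the rest
```

A frame whose score lies far above the others marks a candidate event, matching the spikes shown in FIG. 2.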

 The display unit 4 is a monitor that displays the event evaluation values Ak calculated by the image processing apparatus 2.

 The operation of the image processing apparatus 2 and the light field imaging apparatus 1 according to the present embodiment, configured as described above, will now be described.
 With the light field imaging apparatus 1 according to the present embodiment, by performing imaging at predetermined time intervals, light from the subject S is condensed by the imaging optical system 3, condensed again by the microlenses 5a of the microlens array 5 disposed at its focal position, and received by the pixels of the imaging element 9, whereby light field image information is acquired.

 The acquired light field image information of the plurality of frames is sent to the image processing apparatus 2, where the event evaluation values Ak are calculated; a graph showing the frame-by-frame change in the event evaluation value Ak is generated and displayed on the display unit 4.

 FIG. 2 shows an example of a graph showing the frame-by-frame change in the event evaluation value Ak.
 According to FIG. 2, it can be seen that an event changing the light intensity acquired by the imaging element 9 may have occurred at the timing at which a frame with a large event evaluation value Ak, or a frame in which the event evaluation value Ak changes abruptly, was captured.

 Therefore, by deriving the event evaluation values Ak from a sequence of light field images of a plurality of frames, event information such as the event occurrence time and duration can be estimated on the basis of the event evaluation values Ak, and the occurrence of an event can be confirmed more simply and quickly than by visual observation.

 In this embodiment, the event evaluation value Ak is calculated using the pixel values of the average image of the entire sequence; however, the sequence may instead be divided into a plurality of sections along the time axis, and the pixel values of the average image within each section may be used. For example, in the case of time-lapse imaging in which 10 seconds of imaging is repeated every hour, when calculating the event evaluation value Ak for a frame in a given imaging section, the pixel values of the average image calculated only from the light field images within that section may be used.

 This has the advantage that, even for a sequence lasting a long time, the occurrence or non-occurrence of an event can be confirmed before the sequence ends.

 When it is desired to detect the occurrence of an event each time a light field image of a frame is acquired, the pixel values of the average image of the immediately preceding plurality of frames (a moving average) may be used.
 This also makes it possible to confirm the occurrence of an event without waiting for the entire sequence to end. In addition, even when the overall pixel values change gradually irrespective of events, the occurrence of an event can be detected accurately without being affected by that drift.
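A moving-average baseline of this kind can be sketched as follows; this is a minimal illustration, and the window length and function names are assumptions, not values from the patent:

```python
import numpy as np

def event_score_moving_avg(history, frame):
    """Score one new frame against the mean image of the immediately
    preceding frames, so slow global drift does not inflate the score."""
    baseline = np.mean(history, axis=0)
    return float(np.abs(frame - baseline).sum())

window = 3  # number of preceding frames in the moving average (assumed)
stream = [np.full((2, 2), v) for v in (1.0, 1.0, 1.0, 5.0, 1.0)]
scores = []
for k in range(window, len(stream)):
    scores.append(event_score_moving_avg(stream[k - window:k], stream[k]))
print(scores)  # the jump to value 5.0 is scored as soon as it arrives
```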

 In the above embodiment, the average value is obtained for each pixel position; instead, as shown in Equation 2, the average of the pixel values over the entire sequence may be used.

[Equation 2]
    A_k = Σ_{n=1}^{n_total} | I_kn − Iavg |

where
 Iavg is the average of the pixel values over the entire sequence, i.e. Iavg = (1/(k_total · n_total)) Σ_{k=1}^{k_total} Σ_{n=1}^{n_total} I_kn.

 According to this method, the event evaluation value Ak does not fluctuate even if the subject S moves, which is effective when only changes in light intensity are of interest.

 Alternatively, instead of the average value, the pixel value of the corresponding pixel of the light field image adjacent (immediately preceding) in the time axis direction may be used as the predetermined reference value, and, as shown in Equation 3, the sum of the absolute differences over the entire light field image may be used as an event evaluation value Bk.

[Equation 3]
    B_k = Σ_{n=1}^{n_total} | I_kn − I_(k−1)n |

 By doing so, the occurrence of an event can be monitored substantially in real time.
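A sketch of Equation 3 follows; the function name is hypothetical, since the patent does not prescribe an implementation:

```python
import numpy as np

def frame_diff_scores(seq):
    """Event evaluation value B_k per Equation 3: the sum of absolute
    pixel differences between each frame and the immediately preceding
    frame, so each frame can be scored as soon as it is captured."""
    seq = np.asarray(seq, dtype=float)
    diffs = np.abs(seq[1:] - seq[:-1])
    return diffs.reshape(len(diffs), -1).sum(axis=1)

seq = np.array([[[0., 0.]], [[0., 0.]], [[3., 1.]], [[3., 1.]]])
B = frame_diff_scores(seq)
print(B)  # nonzero only at the frame where the change occurred
```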

 In the above embodiment, one event evaluation value Ak is calculated for each light field image of each frame; instead, as shown in FIG. 3, the light field image of each frame may be divided into a plurality of regions (for example, 3 × 3 = 9 regions), and the event evaluation value Ak may be calculated for each region.

 In this case, as shown in FIG. 4, it is preferable to make the region boundaries coincide with the boundaries of the pixel ranges corresponding to the microlenses 5a. As a result, as shown in FIG. 5, a number of event evaluation values equal to the number of regions can be obtained.
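Splitting each frame into regions aligned to microlens boundaries might look as follows; this is a sketch, and the region size and names are assumptions, chosen so that region_px is a multiple of the per-lens pixel count:

```python
import numpy as np

def regional_event_scores(seq, region_px):
    """Per-region event evaluation values: split each frame into square
    regions of region_px x region_px pixels, with region edges falling
    on microlens boundaries, and compute an A_k-style score per region."""
    seq = np.asarray(seq, dtype=float)
    k_total, h, w = seq.shape
    dev = np.abs(seq - seq.mean(axis=0))
    gh, gw = h // region_px, w // region_px
    # Reshape so each (gh, gw) cell sums over its own region_px**2 block.
    blocks = dev.reshape(k_total, gh, region_px, gw, region_px)
    return blocks.sum(axis=(2, 4))              # shape (k_total, gh, gw)

seq = np.ones((5, 8, 8))
seq[3, 0:4, 4:8] += 2.0                         # localized event, one region
scores = regional_event_scores(seq, region_px=4)
print(scores[3])                                # only one region responds
```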

 Doing so has the advantage of improving the sensitivity to changes in the event evaluation value Ak. That is, when an event occurs in only a very small region of the light field image, the event evaluation value Ak calculated over the entire light field image becomes a small value that is difficult to distinguish from noise, whereas calculating it region by region yields a large event evaluation value Ak for the affected region.

 Furthermore, when the position at which an event occurs changes over time, calculating the event evaluation value Ak over the entire light field image merely yields a sustained, constant event evaluation value Ak indicating that an event is occurring; calculating the event evaluation value Ak for each region has the advantage that the position and the time of the event within the light field image can be detected simultaneously.

 Even when one event evaluation value Ak is calculated for the entire light field image, the image may be divided into a plurality of regions and the event evaluation value Ak may be calculated with a weight applied to each region. This reduces the influence of unimportant regions and makes event information easier to estimate clearly.

 For example, when it is known in advance that attention should be paid to a specific part of the subject S, the weights of unimportant regions can be reduced to lessen their influence, or can be set to zero to shorten the computation time.

 In a light field image, as shown in FIG. 6, the pixel values tend to drop significantly at pixels corresponding to the vicinity of the boundaries of the microlenses 5a. These pixels correspond to the edge of the pupil of the imaging optical system 3, while the pixels corresponding to the vicinity of the centers of the microlenses 5a correspond to the center of the pupil of the imaging optical system 3.

 Therefore, when calculating the event evaluation value Ak, weights may be applied pixel position by pixel position with respect to the microlenses 5a. For example, pixels located near the boundaries of the microlenses 5a can be given low weights to make them less susceptible to noise. When calculating the event evaluation value Ak, in summing the absolute differences between the pixel values and the reference value, a low weight may be applied to pixels near the boundaries of the microlenses 5a and a high weight to pixels corresponding to the vicinity of the centers of the microlenses 5a.
 As a way of assigning the weights in this case, an image of a uniform surface (for example, a plain white plane) or space, such as that in FIG. 6, may be captured in advance, and weights corresponding to its intensity distribution may be adopted. If the light intensity decreases or the image is distorted at high image heights, the image height may also be taken into account in the weighting. Alternatively, regions may be defined for each pixel position with respect to the microlenses 5a, and an evaluation value may be calculated for each region.
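Deriving the weights from such a flat-field image can be sketched as follows; this is illustrative only, and normalizing by the maximum is an assumption:

```python
import numpy as np

def weights_from_flat_field(flat_img):
    """Per-pixel weights from an image of a uniform white plane: pixels
    near microlens boundaries record little light and get low weights,
    pixels near the lens centers get weights close to one."""
    flat = np.asarray(flat_img, dtype=float)
    return flat / flat.max()

def weighted_event_score(frame, reference, weights):
    """A_k-style score with per-pixel weights applied to the absolute
    deviations from the reference image."""
    return float((weights * np.abs(frame - reference)).sum())

flat = np.array([[0.2, 1.0, 0.2],
                 [1.0, 1.0, 1.0],
                 [0.2, 1.0, 0.2]])          # toy flat field: dark lens edges
w = weights_from_flat_field(flat)
ref = np.zeros((3, 3))
frame = np.ones((3, 3))                     # uniform change of 1 everywhere
print(weighted_event_score(frame, ref, w))  # edge pixels contribute less
```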

 A light field image carries three-dimensional information and thus also contains information about events occurring inside the subject S; however, such events may not appear clearly in an event evaluation value Ak calculated from the entire light field image. Therefore, the light field image may be divided into regions along the optical axis L direction, and the event evaluation value Ak may be calculated for each region.

 In the light field imaging apparatus 1 according to this embodiment, only light rays of a specific angle (light incident on the microlens 5a covering a given pixel 9a within a specific angular range) reach each pixel 9a on the imaging element 9; therefore, specifying a pixel 9a determines the angle of the light incident on the microlens 5a.

 In this case, the image at a position away from the in-focus position is not simply blurred on the imaging element 9 as it would be with an ordinary camera; instead, as shown in FIGS. 7A to 7C, in the light field image the image of a single point on the subject S is distributed to a plurality of two-dimensional coordinates in a plane orthogonal to the optical axis L, as shown in FIGS. 8A to 8C, depending on the position z1, z2, z3 of that point in the optical axis L direction (the incident angle of the light on the microlens 5a).

 This is because, as the positions z1, z2, z3 of the subject S in the optical axis L direction differ, the incident angle of the light on the microlenses 5a changes and the light strikes different pixels. The light field imaging apparatus 1 can calculate the three-dimensional coordinates of the original position on the basis of the intersections of the plurality of light rays corresponding to the plurality of two-dimensional coordinates, and thus captures not only two-dimensional information about the subject S but three-dimensional information as well. A method of simple refocusing that exploits this is known, and this method may be used to divide the image into regions in the optical axis L direction and calculate the event evaluation value Ak. As such a method, for example, one may be adopted that increases the influence of points emphasized by refocusing and decreases the influence of points that are spread over a wide range and weakened by refocusing. It is not always necessary to generate a refocused image; it suffices to change the pixels used according to the positions z1, z2, z3 in the optical axis L direction.

 That is, as shown in FIGS. 9A and 9B, the pixels used in calculating the event evaluation value are changed according to the position in the optical axis L direction. If the number of pixels of the imaging element 9 corresponding to each microlens 5a is m × n and the number of pixels of the light field image is p × q, the event evaluation value Azk is given by Equation 4.

[Equation 4]
    Az_k = Σ_{x,y} ( Iz_k(x, y) − Iavgz(x, y) )², summed over all microlens coordinates (x, y)

where
 z is the position in the optical axis L direction,
 c is a coefficient determined by the configuration of the imaging optical system 3,
 Iz_k(x, y) is the sum of the pixel values of the pixels corresponding to the microlens at coordinates (x, y) in the k-th frame for the optical axis L direction position z, and
 Iavgz(x, y) is the average, over the entire sequence, of the sums of the pixel values of the pixels corresponding to the microlens 5a at coordinates (x, y) for the optical axis L direction position z.
 Also, c × z is rounded to an integer.
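The selection of pixels according to the optical-axis position can be sketched as a shift-and-sum over microlens coordinates. The following is a toy illustration under assumed index conventions (pixel (i, j) taken from the lens displaced by shift lenses per ray-index step, with the central ray at m // 2), not the patent's exact geometry:

```python
import numpy as np

def lens_sums(img, m, shift):
    """Iz_k(x, y) sketch: for each microlens coordinate (x, y), sum pixel
    (i, j) taken from the microlens displaced by shift * (i - c0) lenses,
    where shift plays the role of the rounded c * z; out-of-range lenses
    are simply skipped."""
    h, w = img.shape
    gx, gy = h // m, w // m
    c0 = m // 2                              # central ray index (assumed)
    out = np.zeros((gx, gy))
    for x in range(gx):
        for y in range(gy):
            for i in range(m):
                for j in range(m):
                    lx = x + shift * (i - c0)
                    ly = y + shift * (j - c0)
                    if 0 <= lx < gx and 0 <= ly < gy:
                        out[x, y] += img[lx * m + i, ly * m + j]
    return out

def az_score(lens_sum_seq, k):
    """Az_k per Equation 4: squared deviation of frame k's lens sums from
    the sequence average of the lens sums."""
    avg = np.mean(lens_sum_seq, axis=0)
    return float(((lens_sum_seq[k] - avg) ** 2).sum())

img = np.arange(100.0).reshape(10, 10)   # two 5x5-pixel lenses per side
s0 = lens_sums(img, m=5, shift=0)        # shift 0: plain per-lens totals
print(s0.shape)                          # (2, 2)
```

With shift = 0 the routine reduces to a plain per-lens total, corresponding to the in-focus case of Equation 5.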

 For example, FIGS. 9A and 9B schematically show part of a light field image with m = n = 5.
 When c × z = 0, that is, when the subject S is located at the position z in the optical axis L direction at which it is in focus on the microlenses 5a, as shown in FIGS. 7A and 8A, the pixel values of the different pixels corresponding to the same microlens 5a are added together, as shown in FIG. 9A and Equation 5 (microlens coordinates x, y and per-lens pixel indices i, j taken as zero-based):

[Equation 5]
    Iz_k(x, y) = Σ_{i=0}^{4} Σ_{j=0}^{4} I_k(5x + i, 5y + j)

 When c × z = 1, that is, when the in-focus position is shifted, as shown in FIGS. 7B, 7C, 8B, and 8C, the pixel values of pixels shifted by 5 × i in the x direction and 5 × j in the y direction are added together, as shown in FIG. 9B and Equation 6:

[Equation 6]
    Iz_k(x, y) = Σ_{i=0}^{4} Σ_{j=0}^{4} I_k(5(x + i) + i, 5(y + j) + j)

 When dividing into a plurality of regions according to the position z of the subject S in the optical axis L direction, the event evaluation values Azk calculated for the positions z within the range of each region may be averaged.
 Note that in Equation 4, the difference between Izk(x, y) and the sequence average (the reference value) is squared so that points emphasized by refocusing have a greater effect. Alternatively, as shown in Equation 7, terms with abs(Izk(x, y) − Iavgz(x, y)) at or below a threshold TH may be excluded from the evaluation value, thereby reducing the influence of points weakened by refocusing:

[Equation 7]
    Az_k = Σ_{(x,y) : | Iz_k(x, y) − Iavgz(x, y) | > TH} ( Iz_k(x, y) − Iavgz(x, y) )²

 The spacing of the positions z in the optical axis L direction within each region may be chosen, for example, so that c × z is an integer, or may be set so that the spacing is on the order of the theoretical resolution. The spacing of the positions z used for calculation, and the extent of the regions, may also be varied according to z: for example, fine where the position z is small and coarse where it is large.

 By dividing into regions in the optical axis L direction and calculating the event evaluation values Azk in this way, changes in the event evaluation value Azk caused by the occurrence of an event become more pronounced, and the sensitivity of event detection can be improved. Furthermore, it is possible to determine at which position z in the optical axis L direction the event occurred, so the position z and the time of the event can be grasped simultaneously.

 When there is a prior indication of the position in the optical axis L direction at which an event will occur, the event evaluation values Azk need only be calculated over a narrow range of positions z, which has the advantage of shortening the computation time.
 Although the case where c × z is rounded to an integer has been described, fractional values can also be handled by interpolating pixel values from a plurality of pixels. Moreover, although the pixel values of all the pixels corresponding to all the microlenses 5a are added when calculating the sums of the pixel values, the calculation may use only some of the pixels.

 When it is desired to monitor the event evaluation value Azk in real time, in the same manner as Equation 3, the sum of the squared differences from the corresponding values of the immediately preceding frame may be used as the event evaluation value Azk, as shown in Equation 8:

[Equation 8]
    Az_k = Σ_{x,y} ( Iz_k(x, y) − Iz_(k−1)(x, y) )², summed over all microlens coordinates (x, y)

 Machine learning may also be used to calculate the event evaluation values Azk and to judge and classify events on the basis of features of the light field images and sequences.
 By training in advance on light field images from a large number of sequences so as to learn what constitutes an event, an event evaluation value Azk representing event likelihood can be obtained from an acquired sequence of light field images on the basis of the learning result. Furthermore, by training on the types of events and noise components as well (for example, calcium luminescence, movement of the subject S, and incidence of external light), the events that occur may even be classified.

 In this embodiment, the event evaluation value is a value obtained by summing, over the entire sequence, the absolute values of the differences indicating the deviation from the reference value for the pixels of the light field image of each frame; instead, any other representative value, for example an arbitrary statistic such as the average, maximum, minimum, or median, may be adopted as the event information.

 In this embodiment, the case where the pixels 9a of the imaging element 9 coincide with the pixels of the light field image used for event detection has been illustrated; however, they need not coincide.
 In an actual optical system, the pitch of the microlenses 5a may not be an integer multiple of the pixel pitch, and setting errors such as slight rotational misalignment may occur, so a calibration process that interpolates and rearranges pixels may be performed at the start of image processing. In that case, strictly speaking, the pixels 9a of the imaging element 9 do not coincide with the pixels of the light field image used for event detection.

DESCRIPTION OF SYMBOLS
 1 light field imaging apparatus
 2 image processing apparatus
 3 imaging optical system
 5 microlens array
 5a microlens
 9 imaging element
 9a pixel
 S subject

Claims (9)

 1. An image processing apparatus to which light field images of a plurality of frames acquired over time are input, and which calculates and outputs, for at least a partial region of the light field image of at least one of the plurality of frames, an event evaluation value that is a representative value of numerical values indicating the deviation of the pixel values of the pixels included in that light field image from a predetermined reference value.
 2. The image processing apparatus according to claim 1, wherein the reference value is an average value of the pixel values of the corresponding pixels of the light field images.
 3. The image processing apparatus according to claim 1, wherein the reference value is an average value of the pixel values of a plurality of pixels of the light field images of the plurality of frames.
 4. The image processing apparatus according to claim 1, wherein the reference value is an average value of the pixel values of all the pixels of all the light field images.
 5. The image processing apparatus according to claim 1, wherein the reference value is a pixel value of a corresponding pixel in the light field image of a frame differing in the time axis direction.
 6. The image processing apparatus according to claim 5, wherein the reference value is a pixel value of a corresponding pixel in the light field image of a frame adjacent in the time axis direction.
 7. A light field imaging apparatus comprising: an imaging optical system that condenses light from a subject and forms an image of the subject; a microlens array having a plurality of microlenses arranged two-dimensionally at the position of a primary image formed by the imaging optical system or at a position conjugate with the primary image, the microlenses condensing the light from the imaging optical system; an imaging element having a plurality of pixels that receive the light condensed by the microlenses and that generates the light field images by photoelectrically converting the light received at the pixels; and the image processing apparatus according to any one of claims 1 to 6, which processes the light field images generated by the imaging element.
 8. The light field imaging apparatus according to claim 7, wherein the pixels are arranged in regions each corresponding to one of the microlenses, and the image processing apparatus calculates the event evaluation value by weighting each of the pixels corresponding to each of the microlenses according to the incident angle of light on that microlens.
 9. The light field imaging apparatus according to claim 7, wherein the image processing apparatus divides each of the light field images into a plurality of regions corresponding to the incident angles of light on the microlenses, and calculates the event evaluation value for each of the divided regions.
PCT/JP2017/025592 2017-07-13 2017-07-13 Image processing device and light field imaging device Ceased WO2019012660A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/025592 WO2019012660A1 (en) 2017-07-13 2017-07-13 Image processing device and light field imaging device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/025592 WO2019012660A1 (en) 2017-07-13 2017-07-13 Image processing device and light field imaging device

Publications (1)

Publication Number Publication Date
WO2019012660A1

Family

ID=65001158

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/025592 Ceased WO2019012660A1 (en) 2017-07-13 2017-07-13 Image processing device and light field imaging device

Country Status (1)

Country Link
WO (1) WO2019012660A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013243643A (en) * 2012-04-27 2013-12-05 Toshiba Corp Image processing device, image display device, and method
JP2015122780A (en) * 2012-07-12 2015-07-02 オリンパス株式会社 Imaging apparatus and computer program
JP2015186037A (en) * 2014-03-24 2015-10-22 キヤノン株式会社 Image processing device, control method, and program

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020255399A1 (en) * 2019-06-21 2020-12-24 株式会社ソニー・インタラクティブエンタテインメント Position detection system, image processing device, position detection method, and position detection program
JPWO2020255399A1 (en) * 2019-06-21 2020-12-24
US12106507B2 (en) 2019-06-21 2024-10-01 Sony Interactive Entertainment Inc. Position detection system, image processing device, position detection method, and position detection program
CN113935967A (en) * 2021-10-13 2022-01-14 温州大学大数据与信息技术研究院 No-reference light field image quality evaluation method
DE102023123170A1 (en) * 2023-08-29 2025-03-06 Deutsches Zentrum für Luft- und Raumfahrt e.V. Camera device and method for extracting depth information using a camera device

Similar Documents

Publication Publication Date Title
US9715734B2 (en) Image processing apparatus, imaging apparatus, and image processing method
JP6509027B2 (en) Object tracking device, optical apparatus, imaging device, control method of object tracking device, program
JP6456156B2 (en) Normal line information generating apparatus, imaging apparatus, normal line information generating method, and normal line information generating program
JP4915859B2 (en) Object distance deriving device
US8786718B2 (en) Image processing apparatus, image capturing apparatus, image processing method and storage medium
KR102166372B1 (en) Optical tracking system and optical tracking method
CN109883391B (en) Monocular distance measurement method based on digital imaging of microlens array
US10855903B2 (en) Image processing apparatus, image capturing apparatus, control method and recording medium
JP6786225B2 (en) Image processing equipment, imaging equipment and image processing programs
JP2014158258A (en) Image processing apparatus, image capturing apparatus, image processing method, and program
JP6091318B2 (en) Ranging device and control method thereof
JP6746359B2 (en) Image processing device, imaging device, image processing method, program, and storage medium
JP6353233B2 (en) Image processing apparatus, imaging apparatus, and image processing method
JP2015075526A5 (en) Image data processing device, distance calculation device, imaging device, and image data processing method
JP7378219B2 (en) Imaging device, image processing device, control method, and program
JP2015142364A5 (en)
JP6675510B2 (en) Subject tracking device and its control method, image processing device and its control method, imaging device and its control method, and program
WO2019012660A1 (en) Image processing device and light field imaging device
JP2017158018A (en) Image processing apparatus, control method therefor, and imaging apparatus
JP2010113043A5 (en)
JP2023115356A (en) Measuring device, imaging device, control method and program
KR101857977B1 (en) Image apparatus for combining plenoptic camera and depth camera, and image processing method
US10783646B2 (en) Method for detecting motion in a video sequence
US10891716B2 (en) Process allowing the removal through digital refocusing of fixed-pattern noise in effective images formed by electromagnetic sensor arrays in a light field
WO2019012657A1 (en) Image processing device and light field imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17917540; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17917540; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)