
WO2011145668A1 - Image processing device, image processing circuit, image processing method, and program - Google Patents


Info

Publication number
WO2011145668A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
output
target pixel
determination result
unit
Prior art date
Legal status
Ceased
Application number
PCT/JP2011/061472
Other languages
English (en)
Japanese (ja)
Inventor
泰文 萩原
修 萬羽
浩司 大塚
Current Assignee
Sharp Corp
Original Assignee
Sharp Corp
Priority date
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Publication of WO2011145668A1 publication Critical patent/WO2011145668A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/213Circuitry for suppressing or minimising impulsive noise
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • H04N1/4092Edge or detail enhancement
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation

Definitions

  • the present invention relates to an image processing apparatus or the like that performs edge enhancement processing on image data.
  • a technique for displaying the entire image clearly by enhancing the outline of the input image is known.
  • a sharpening filter (for example, a Laplacian filter)
  • a filter (for example, a Wiener filter)
  • this causes a problem in that noise (ringing) occurs in the vicinity of the edge portion when contour enhancement processing is performed on a portion where the edge is sharp (for example, captions or time displays on a television screen).
  • the Wiener filter coefficient is largest at the center, falls off away from the center, and then rises again; this has caused a problem in that the ringing in the vicinity of the edge portion is further emphasized and becomes conspicuous.
  • an object of the present invention is to provide an image processing apparatus and the like that can suppress noise generated in the vicinity of an edge when edge enhancement processing is performed on an input image.
  • the image processing apparatus of the present invention sets one of the pixels constituting the image data as the target pixel, and comprises: a contour emphasis processing unit that generates a contour emphasis component processed by a contour emphasis processing filter with reference to the target pixel and first peripheral pixels located around the target pixel; a determination result output unit that refers to second peripheral pixels included in a range smaller than the first peripheral pixels, determines whether the target pixel is a flat portion or an edge portion, and outputs a determination result; and a pixel output unit that, when the determination result indicates that the target pixel is an edge portion, superimposes the contour emphasis component on the target pixel and outputs the sum as the output pixel, and when the determination result indicates that the target pixel is a flat portion, outputs the target pixel as the output pixel unchanged.
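A minimal sketch of this structure in Python (the 5×5 filter range, 3×3 determination range, kernel coefficients, and threshold below are illustrative assumptions matching the sizes used in the embodiment later in the text, not values fixed by the claim):

```python
import numpy as np

def edge_enhance(img, ar_th=100.0, ed=1.0):
    """Sketch of the claimed apparatus: a 5x5 enhancement filter over the
    'first peripheral pixels' combined with a 3x3 flat/edge decision over
    the smaller 'second peripheral pixels'. All numeric choices here are
    illustrative assumptions."""
    # A 5x5 sharpening (Laplacian-style) kernel; coefficients sum to zero.
    h = -np.ones((5, 5))
    h[2, 2] = 24.0
    H, W = img.shape
    out = img.astype(float).copy()
    for j in range(2, H - 2):
        for i in range(2, W - 2):
            # Contour emphasis component from the larger (5x5) window.
            sp = np.sum(img[j - 2:j + 3, i - 2:i + 3] * h)
            # Flat/edge determination from the smaller (3x3) window:
            # 1 (edge) if the local variance exceeds the threshold, else 0.
            ar = 1.0 if np.var(img[j - 1:j + 2, i - 1:i + 2]) > ar_th else 0.0
            # Superimpose the component only where an edge was detected.
            out[j, i] = img[j, i] + ed * sp * ar
    return out
```

On a flat region the determination result is 0, so the pixel passes through unchanged; only pixels whose small neighbourhood varies strongly receive the emphasis component.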
  • the determination result output unit includes: an image analysis unit that calculates the variance or edge strength of the second peripheral pixels and outputs the calculation result as an image analysis result; and a determination unit that determines a flat portion when the image analysis result is equal to or less than a predetermined threshold, and an edge portion when it exceeds the threshold.
  • the determination result output unit outputs 0 as the determination result for a flat portion and 1 for an edge portion,
  • and the pixel output unit multiplies the contour emphasis component by the determination result, superimposes the product on the target pixel, and outputs the result as the output pixel.
  • the determination unit determines a flat portion when the image analysis result is equal to or less than a first threshold value, an edge portion when it exceeds a second threshold value, and a texture portion when it exceeds the first threshold value and is equal to or less than the second threshold value.
  • for a texture portion, the determination result output unit outputs a value in the range from 0 to 1 as the determination result.
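The two-threshold determination above can be sketched as follows (the intermediate value of 0.5 is a hypothetical default; the text leaves the intermediate application area determination value MAAV adjustable):

```python
def determination(ian, ar1_th, ar2_th, maav=0.5):
    """Three-way determination result: flat (0) at or below the first
    threshold, edge (1) above the second, and the intermediate value
    MAAV for the texture band between them."""
    if ian <= ar1_th:
        return 0.0   # flat portion: suppress the emphasis component
    if ian > ar2_th:
        return 1.0   # edge portion: apply the emphasis component fully
    return maav      # texture portion: apply it partially
```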
  • the image processing circuit of the present invention sets one of the pixels constituting the image data as the target pixel, and comprises: a contour emphasis processing circuit that generates a contour emphasis component processed by a contour emphasis processing filter with reference to the target pixel and first peripheral pixels located around the target pixel; a determination result output circuit that refers to second peripheral pixels included in a range smaller than the first peripheral pixels, determines whether the target pixel is a flat portion or an edge portion, and outputs a determination result; and a pixel output circuit that, when the determination result indicates that the target pixel is an edge portion, superimposes the contour emphasis component on the target pixel and outputs the sum as the output pixel, and when the determination result indicates that the target pixel is a flat portion, outputs the target pixel as the output pixel unchanged.
  • the image processing method of the present invention sets one of the pixels constituting the image data as the target pixel, and includes: generating a contour emphasis component processed by a contour emphasis processing filter with reference to the target pixel and first peripheral pixels located around the target pixel; referring to second peripheral pixels included in a range smaller than the first peripheral pixels, determining whether the target pixel is a flat portion or an edge portion, and outputting a determination result; and, when the determination result indicates that the target pixel is an edge portion, superimposing the contour emphasis component on the target pixel and outputting the sum as the output pixel, and when the determination result indicates that the target pixel is a flat portion, outputting the target pixel as the output pixel unchanged.
  • when the determination result indicates that the target pixel is an edge portion, the contour emphasis component is superimposed on the target pixel and output as the output pixel; when it indicates a flat portion, the target pixel is output unchanged.
  • a contour emphasis component processed by the contour emphasis processing filter is generated with reference to the target pixel and first peripheral pixels located around the target pixel, while the flat/edge determination refers to a range smaller than the first peripheral pixels.
  • an outline emphasis component is superimposed on the target pixel and output as an output pixel.
  • the target pixel is a flat portion
  • the target pixel is output as the output pixel. Therefore, by making the flat/edge determination over a range smaller than the range of pixels used for the contour enhancement processing, pixels closer to the edge portion can still be determined to belong to the flat portion.
  • FIG. 1 is an external view of a mobile phone (image processing apparatus) in the present embodiment. FIG. 2 illustrates the functional configuration of the entire mobile phone. FIG. 3 illustrates the functional configuration of the contour emphasis unit. FIG. 4 is a schematic diagram illustrating the image data. FIG. 5 illustrates the line memory unit. FIG. 6 illustrates the functional configuration of the contour emphasis processing unit. FIG. 7 illustrates the filter coefficients h(m, n). FIG. 8 illustrates Laplacian filter coefficients. FIG. 9 illustrates the functional configuration of the application area determination unit.
  • the mobile phone 1 includes the image processing apparatus of the present invention.
  • the flow of processing when the present invention is applied, application examples, and modifications to which the present invention can be applied will be described.
  • FIG. 1 is a diagram illustrating an appearance of a mobile phone 1 according to the present embodiment.
  • the mobile phone 1 includes a display unit 60 configured by a liquid crystal display or the like, an operation unit 65 configured by input buttons, a microphone unit 70 that receives voice input, and a speaker unit 75 that outputs voice.
  • a receiving antenna 32 for receiving terrestrial digital broadcasts from a broadcasting station and a communication antenna (not shown) for transmitting and receiving signals to and from a mobile phone base station are also provided.
  • FIG. 2 is a diagram for explaining the functional configuration of the mobile phone 1.
  • the control unit 10 is connected to a wireless communication unit 20, a tuner unit 30, a camera unit 35, a storage unit 40, a video signal processing unit 50, the display unit 60, the operation unit 65, the microphone unit 70, and the speaker unit 75.
  • the control unit 10 is a functional unit for controlling the entire mobile phone 1, and is configured by, for example, a CPU (Central Processing Unit).
  • the wireless communication unit 20 is a functional unit for connecting to a mobile phone communication network, and a communication antenna 22 is connected thereto. Communication data is transmitted / received to / from the base station via the communication antenna 22 and input / output to / from the control unit 10.
  • the functional units of the wireless communication unit 20 and the communication antenna 22 are functional units that are required when the mobile phone 1 is used as a telephone function, and detailed description thereof is omitted.
  • the tuner unit 30 is connected to a receiving antenna 32, and is a functional unit that extracts and outputs transport stream (TS) data from the broadcast wave of the broadcasting station selected by the user, received via the receiving antenna 32.
  • the output TS data is output by the display unit 60 and the speaker unit 75 via the video signal processing unit 50 or stored in the image data storage area 42 of the storage unit 40.
  • the tuner can receive terrestrial digital broadcasting.
  • the camera unit 35 is a functional unit that outputs an image (still image or moving image) formed on an imaging element such as a CCD (Charge Coupled Device) through a lens as image data.
  • the output image data is displayed on the display unit 60 via the video signal processing unit 50 or stored in the image data storage area 42 of the storage unit 40.
  • the storage unit 40 is a functional unit that stores various programs and various data necessary for the operation of the mobile phone 1.
  • the storage unit 40 is configured by, for example, a semiconductor memory, an HDD (Hard Disk Drive), or the like.
  • the storage unit 40 has an image data storage area 42 in which image data is stored.
  • the image data in this embodiment is so-called still image data such as a photograph (for example, JPEG or TIFF format), but may also be so-called moving image data such as video (for example, AVI or MPEG format).
  • the video signal processing unit 50 is a functional unit for inputting image data stored in the tuner unit 30, the camera unit 35, or the image data storage area 42, performing various processes, and outputting the processed data to the display unit 60.
  • the video signal processing unit 50 includes an RGBYUV conversion unit 52, a contour enhancement unit 54, a YUVRGB conversion unit 56, and a gamma correction unit 58.
  • the RGBYUV conversion unit 52 and the YUVRGB conversion unit 56 are functional units for converting a color space. That is, it is a functional unit that converts RGB to YUV (YCbCr / YPbPr), or converts YUV to RGB. Since the method of converting the color space is a known technique, the description thereof is omitted.
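Since the color-space conversion is treated here as a known technique, a sketch using the standard ITU-R BT.601 luma weights may be helpful (these specific coefficients are an assumption; the patent does not fix a particular matrix):

```python
def rgb_to_yuv(r, g, b):
    """RGB -> YCbCr using the ITU-R BT.601 luma weights; illustrative,
    since the patent references the conversion only as a known technique."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)   # = 0.5 * (b - y) / (1 - 0.114)
    cr = 0.713 * (r - y)   # = 0.5 * (r - y) / (1 - 0.299)
    return y, cb, cr

def yuv_to_rgb(y, cb, cr):
    """Approximate inverse of rgb_to_yuv."""
    r = y + 1.402 * cr
    b = y + 1.772 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

The contour emphasis unit 54 then operates only on the luminance component Y, which is why the conversion brackets it on both sides.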
  • the contour emphasizing unit 54 is a functional unit that emphasizes the contour of the input image and outputs it as an output image.
  • the contour emphasis unit 54 will be described later in detail.
  • the gamma correction unit 58 is a functional unit that performs gamma correction of input image data. By performing gamma correction, the display unit 60 outputs an image with appropriate luminance.
  • the gamma correction value may be determined in advance based on the image data, may be a value calculated each time, or may be a value set by the user.
  • the display unit 60 is a functional unit for displaying an image output from the video signal processing unit 50 and displaying information provided to the user.
  • the display unit 60 includes, for example, a liquid crystal display (LCD), an organic EL panel, or the like.
  • the operation unit 65 is a functional unit for receiving inputs such as various operation instructions from the user.
  • the operation unit 65 includes, for example, button keys, a touch panel, and the like.
  • the microphone unit 70 and the speaker unit 75 are functional units for inputting and outputting sound (voice).
  • the sound input from the microphone unit 70 is converted into sound data, output to the control unit 10, and output to each functional unit as appropriate.
  • the speaker unit 75 is a functional unit that converts voice data output from each functional unit into voice and outputs the voice.
  • image data is input from the tuner unit 30, the camera unit 35, or the storage unit 40.
  • the input image data is subjected to edge enhancement processing by the video signal processing unit 50.
  • the image that has undergone the contour enhancement process is output to the display unit 60.
  • the contour emphasis unit 54 includes a line memory unit 100, a contour emphasis processing unit 200, an application area determination unit (determination result output unit) 300, and a pixel output unit 400.
  • the input image will be described with reference to FIG. 4.
  • the input image f is an image having a horizontal width W and a vertical width H, and the horizontal position is represented by i and the vertical position is represented by j.
  • f (i, j) indicates a pixel in the horizontal direction i and the vertical direction j.
  • the pixel f (i, j) of the input image f is input to the line memory unit 100. Then, as many pixels as necessary for processing (for N lines of the vertical filter size centering on the vertical direction j) are stored in the line memory unit 100 (region surrounded by a one-dot chain line in FIG. 4).
  • the contour emphasis processing unit 200, the application region determination unit 300, and the pixel output unit 400 read from the line memory unit 100, as necessary, the target pixel lm(i, j) to be processed and the peripheral pixels lm(i+m, j+n) in an M×N range around it, where m runs from −M/2 to M/2 and n runs from −N/2 to N/2 (fractions truncated after the decimal point; the region surrounded by a dotted line in FIG. 4).
  • the contour enhancement processing unit 200 performs a contour enhancement process on the pixels read from the line memory unit 100, and outputs the processing result to the pixel output unit 400 as a contour enhancement component sp (i, j).
  • the application region determination unit 300 reads out pixels from the line memory unit 100 and receives an application region determination threshold value AR th from the outside. An application area determination is made based on the input value, and an application area determination result ar (i, j) is output to the pixel output unit 400.
  • the pixel output unit 400 is input from the pixel lm (i, j) read from the line memory unit 100, the contour enhancement component sp (i, j) input from the contour enhancement processing unit 200, and the application region determination unit 300.
  • the output pixel g (i, j) is output based on the application region determination result ar (i, j) and the contour enhancement degree ED input from the outside.
  • the line memory unit 100 is a memory unit that can store pixels for the number of lines necessary for the outline enhancement processing unit 200 and the application area determination unit 300 in the input image.
  • the line memory unit 100 is schematically shown in FIG.
  • the outline emphasis processing unit 200 and the application area determination unit 300 can share a line memory, and therefore use a memory for five lines.
  • Necessary pixels are read out from the contour enhancement processing unit 200 and the application area determination unit 300, respectively.
  • the contour emphasis processing unit 200 is a functional unit that performs contour emphasis processing by applying filter processing to the target pixel. As shown in FIG. 6, it reads lm(i+m, j+n), that is, the target pixel lm(i, j) and its peripheral pixels, from the images stored in the line memory unit 100, executes the processing, and outputs the contour enhancement component sp(i, j).
  • using the target pixel lm(i, j) and the contour emphasis component sp(i, j), the contour emphasis processing is calculated by the following equation: sp(i, j) = Σₙ Σₘ h(m, n) · lm(i+m, j+n), with m running over the horizontal filter range and n over the vertical filter range.
  • h (m, n) is an array (for example, the array shown in FIG. 7) representing the coefficients of the filter
  • FIG. 8A is an array showing Laplacian filter coefficients h (m, n) used in this embodiment
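The filter computation can be sketched as follows; since the coefficient arrays of FIG. 7 and FIG. 8 are not reproduced in this text, a common 3×3 Laplacian kernel stands in for h(m, n):

```python
import numpy as np

# Stand-in 3x3 Laplacian coefficients h(m, n); the actual FIG. 8 array
# is not reproduced in the text, so this common kernel is an assumption.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def contour_emphasis(lm, i, j, h=LAPLACIAN):
    """sp(i, j) = sum over n, m of h(m, n) * lm(i+m, j+n): the double
    loop of FIG. 15 written out for a (2k+1)x(2k+1) kernel."""
    k = h.shape[0] // 2
    sp = 0.0
    for n in range(-k, k + 1):        # vertical filter range
        for m in range(-k, k + 1):    # horizontal filter range
            sp += h[n + k, m + k] * lm[j + n, i + m]
    return sp
```

Because the kernel coefficients sum to zero, sp(i, j) is zero over perfectly flat regions and nonzero only where the neighbourhood varies.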
  • the application area determination unit 300 is a functional unit for determining an area to which image processing is applied. As illustrated in FIG. 9, the application area determination unit 300 includes an image analysis unit 310 and a determination unit 320.
  • the image analysis unit 310 reads lm(i+m, j+n), that is, the target pixel lm(i, j) and its peripheral pixels, from the images stored in the line memory unit 100, performs image analysis, and outputs the image analysis result ian to the determination unit 320.
  • (B) When using edge strength: when the edge strength E is used as the image analysis result, the edge strength E (gradient magnitude) is calculated as E = sqrt(fx(x, y)² + fy(x, y)²).
  • fx(x, y) represents the difference in the horizontal direction, and fy(x, y) the difference in the vertical direction.
  • as a method of calculating fx(x, y) and fy(x, y), Sobel filters can be used; the standard 3×3 Sobel kernels are [[−1, 0, 1], [−2, 0, 2], [−1, 0, 1]] for the horizontal difference fx and its transpose for the vertical difference fy.
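A sketch of the Sobel-based edge strength (the kernels are the standard Sobel coefficients; the patent's own equation is not reproduced in this text):

```python
import numpy as np

# Standard Sobel kernels for the horizontal and vertical differences.
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def edge_strength(window):
    """Edge strength E of a 3x3 window: the gradient magnitude
    sqrt(fx^2 + fy^2) with fx, fy from the Sobel operators above."""
    fx = float(np.sum(window * SOBEL_X))
    fy = float(np.sum(window * SOBEL_Y))
    return (fx * fx + fy * fy) ** 0.5
```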
  • alternatively, as the edge intensity E, the output of a Laplacian filter, obtained by adding the result of the second derivative in the horizontal direction and the result of the second derivative in the vertical direction, may be used. In this case, a filter using an arrangement as shown in FIG. 10C is used.
  • the determination unit 320 is a functional unit that compares the image analysis result ian input from the image analysis unit 310 with the application region determination threshold AR th input from the outside, and outputs the application region determination result ar(i, j).
  • as a method for determining the application area determination result ar(i, j), the following three types of methods are conceivable. They are described below with reference to FIG. 11, in which the horizontal axis represents the image analysis result ian and the vertical axis the application area determination result ar(i, j).
  • the portion where the application area determination result ar(i, j) is output as "1" is an edge portion, and the portion where it is output as "0" is a flat portion.
  • the intermediate application area determination value MAAV is output as the application area determination result ar (i, j).
  • the range in which the intermediate application region determination value MAAV is output as the application region determination result ar(i, j) is determined to be the texture portion, so that contour enhancement can be performed appropriately for each of the flat portion, texture portion, and edge portion.
  • the application area determination result of the texture part can be adjusted by changing the intermediate application area determination value MAAV.
  • the application region determination result increases linearly from the application region determination threshold AR2 th toward the application region determination threshold AR1 th, so ar(i, j) can be changed continuously between 0 and 1 and output.
  • the contour emphasis of the texture portion is gradually connected according to the application area determination result ar (i, j), and the output image looks natural.
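A sketch of this linear blending (the generic names lo/hi stand in for the two application region determination thresholds):

```python
def determination_linear(ian, lo, hi):
    """Continuous determination result: 0 at or below the lower
    threshold, 1 at or above the upper one, and a linear ramp in
    between, so the contour emphasis of the texture portion blends
    in gradually rather than switching abruptly."""
    if ian <= lo:
        return 0.0
    if ian >= hi:
        return 1.0
    return (ian - lo) / (hi - lo)
```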
  • the pixel output unit 400 includes multipliers 410 and 420 and an adder 430.
  • the contour emphasis component sp(i, j) and the contour emphasis degree ED are input to the multiplier 410, and their product is output to the multiplier 420.
  • Multiplier 420 multiplies application region determination result ar (i, j) by the input from multiplier 410 and outputs the result to adder 430.
  • the adder 430 adds (superimposes) the output from the multiplier 420 to the pixel of interest lm (i, j) (its luminance component), and outputs an output pixel g (i, j).
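A direct transcription of this three-element datapath:

```python
def pixel_output(lm_ij, sp_ij, ar_ij, ed):
    """Datapath of the pixel output unit: multiplier 410 forms
    sp * ED, multiplier 420 scales that by ar, and adder 430
    superimposes the result on the target pixel."""
    return lm_ij + (sp_ij * ed) * ar_ij
```

With ar = 0 (flat portion) the target pixel passes through untouched; with ar = 1 (edge portion) the full weighted emphasis component is added.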
  • step S1000 Pixel input processing is performed in which pixels (input pixels) of the input image are input (step S1000). Among the input pixels, pixels necessary for downstream processing are stored in the line memory unit 100.
  • step S2000 the edge emphasis process is performed on the pixels input by the pixel input process (step S2000).
  • step S3000 an application area determination process is executed (step S3000).
  • step S4000 pixel output processing is executed (step S4000).
  • step S5000 it is determined whether all the pixels constituting the input image have been processed (step S5000). If all the pixels have been processed, the process is terminated (step S5000; Yes); if pixels remain, the process is repeated from step S1000 (step S5000; No).
  • the target pixel lm(i, j) and the pixels lm(i+m, j+n) including its periphery are stored in the line memory unit 100 from the input image (step S1002).
  • m and n are areas indicating peripheral pixels of the target pixel
  • m is a horizontal filter range
  • n is a vertical filter range.
  • the contour emphasis processing may be processing realized in hardware by the contour emphasis processing unit 200, or processing realized in software by the control unit 10 or the like reading and executing a program stored in the storage unit 40.
  • the variables used in the contour emphasis process of FIG. 15 are: lm(i+m, j+n), the peripheral pixels including the target pixel read from the pixels stored in the line memory unit 100; h(m, n), an array representing the filter coefficients; m, the horizontal filter range; n, the vertical filter range; sp(i, j), the contour enhancement component; temp, a temporary variable; and P, a temporary variable for the filter calculation result.
  • step S2010 when m is smaller than 2, 1 is added (incremented) to m, and the process is repeatedly executed from step S2006 (step S2010; Yes ⁇ step S2012 ⁇ step S2006).
  • step S2010 when m is 2 or more (step S2010; No), it is determined whether n is smaller than 2 (step S2014).
  • step S2004 when n is smaller than 2, 1 is added to n (incremented), and the processing is repeatedly executed from step S2004 (step S2014; Yes ⁇ step S2016 ⁇ step S2004).
  • step S2014 when n is 2 or more (step S2014; No), the value of P is substituted into sp (i, j), and this process is terminated (step S2018).
  • the application area determination process may be processing realized in hardware by the application area determination unit 300, or processing realized in software by the control unit 10 or the like reading and executing a program stored in the storage unit 40.
  • step S3100 an image analysis process for analyzing the input image is executed (step S3100), and a determination process for determining the area of the image process to be applied to the input image is executed (step S3200).
  • step S3200 a determination process for determining the area of the image process to be applied to the input image is executed.
  • the image analysis process in step S3100 and the determination process in step S3200 will be described in detail.
  • the image analysis process, described with reference to FIG. 17, calculates the image analysis result ian by executing an average calculation process (step S3110) and a variance calculation process (step S3150).
  • the average calculation process in step S3110 and the variance calculation process in step S3150 are described in detail below.
  • step S3116 lm (i + m, j + n) is added to sum (step S3116). Then, it is determined whether or not m is smaller than 1. If m is smaller than 1, 1 is added to m, and the process is repeated from step S3116 (step S3118; Yes ⁇ Step S3120 ⁇ Step S3116).
  • step S3118 if m is 1 or more (step S3118; No), it is next determined whether n is smaller than 1 (step S3122).
  • n is smaller than 1, 1 is added to n, and the processing is repeated from step S3114 (step S3122; Yes ⁇ step S3124 ⁇ step S3114).
  • step S3122 when n is 1 or more (step S3122; No), the total value sum is divided by the number of pixels of the target pixel and its peripheral pixels (that is, the number of pixels read as lm(i+m, j+n)), here 9, and the result is substituted into μ as the average value (step S3126). Of course, the divisor changes according to the number of pixels.
  • the variables used in the variance calculation process of FIG. 19 are: lm(i+m, j+n), the pixels read from the line memory unit 100; m, the horizontal filter range; n, the vertical filter range; μ, the average value; σ², the variance; ian, the image analysis result; temp, a temporary variable; and sum, a temporary variable for the total value.
  • step S3156 (lm(i+m, j+n) − μ)² is substituted into temp (step S3156). Then, the value of temp is added to sum (step S3158). Then, it is determined whether m is smaller than 1; if so, 1 is added to m and the process is repeated from step S3156 (step S3160; Yes → step S3162 → step S3156).
  • step S3164 it is next determined whether n is smaller than 1 (step S3164).
  • n is smaller than 1, 1 is added to n, and the processing is repeatedly executed from step S3154 (step S3164; Yes ⁇ step S3166 ⁇ step S3154).
  • step S3164 when n is 1 or more (step S3164; No), sum / 9 is substituted into σ² (step S3168). Since σ² is the variance, it is substituted into the image analysis result ian as it is (step S3170).
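Collapsed into plain Python, the average and variance calculation of FIGs 18 and 19 (3×3 window and divisor 9, as in the text) amounts to:

```python
def image_analysis_variance(window):
    """Mean-then-variance computation for a 3x3 window: mu = sum of
    the 9 pixels / 9, then sigma^2 = sum((p - mu)^2) / 9; sigma^2 is
    used directly as the image analysis result ian."""
    pixels = [p for row in window for p in row]
    mu = sum(pixels) / len(pixels)
    return sum((p - mu) ** 2 for p in pixels) / len(pixels)
```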
  • the determination process is described with reference to FIG. 20. The variables used in the determination process of FIG. 20 are: AR th, the application area determination threshold; ian, the image analysis result; flag, a temporary variable for determining whether the image analysis result is equal to or less than the application area determination threshold; and ar(i, j), the application area determination result.
  • the application area determination threshold AR th is input (step S3202). As described above, AR th may take a plurality of values, but in the determination process of the present embodiment a single value is used to simplify the description. Then, 0 is substituted into the variable flag to initialize it (step S3204).
  • the pixel output process may be processing realized in hardware by the pixel output unit 400, or processing realized in software by the control unit 10 or the like reading and executing a program stored in the storage unit 40.
  • the variables used in the pixel output process of FIG. 21 are: ED, the contour enhancement degree; sp(i, j), the contour enhancement component; ar(i, j), the application area determination result; lm(i, j), the target pixel; g(i, j), the output pixel; and output, a temporary variable used for the pixel output calculation.
  • the edge enhancement level ED is input (step S4002).
  • the variable output is initialized by substituting 0 into the variable output (step S4004).
  • a value obtained by multiplying ED, sp(i, j), and ar(i, j) together and adding lm(i, j) is substituted into the variable output (step S4006).
  • the output is output as an output pixel g (i, j) (step S4008).
  • FIG. 22 is a diagram schematically illustrating an array of input images (pixels stored in the line memory unit 100), and FIG. 23 is a diagram illustrating a numerical example of the input image (luminance). As shown in the figure, five lines of pixels are input to the line memory unit 100 for nine pixels in the horizontal direction. A case where the processing is executed on the sample points S1 to S9 will be described.
  • FIG. 24 is a diagram showing specific pixel values (luminance).
  • f (i, j) is an input pixel
  • r(i, j) in FIG. 24B is the value when only the conventional contour enhancement processing (5×5) is performed.
  • FIG. 24C shows the values of the output pixel g(i, j) in the first embodiment (when the contour enhancement process (5×5) and the application area determination process (3×3) are performed).
  • FIG. 25 is a diagram showing luminance on the vertical axis and sample points S1 to S9 on the horizontal axis.
  • the sample points S1 to S3 constitute flat portions
  • S4 to S6 constitute edge portions
  • S7 to S9 constitute flat portions.
  • FIG. 25A shows the input pixel f (i, j)
  • FIG. 25B shows the case where only the conventional edge enhancement processing is performed
  • FIG. 25C shows the output pixel in the first embodiment, when the contour enhancement process (5 × 5) and the application area determination process (3 × 3) are performed.
  • the application region determination unit 300 makes a determination for S3 (50) and its peripheral pixels (3 × 3 range). Since the image analysis result ian (0) is equal to or less than the application area determination threshold AR th (100), the application area determination result ar (i, j) is (0).
  • the input pixel (50) is output as the output pixel (50). Note that this can be obtained in the same manner for the sample points S1, S2, S7, S8, and S9.
  • the application region determination unit 300 determines S4 (50) and its peripheral pixels (3 × 3 range). Since the image analysis result ian (500) exceeds the application area determination threshold AR th (100), the application area determination result ar (i, j) is (1).
  • a value obtained by adding the input pixel (50) to the product of the edge enhancement component sp (i, j) (−30) and the edge enhancement degree ED (1) is output as the processed pixel (20). This can be obtained in the same manner for the sample points S5 and S6.
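The threshold comparison used at S3 and S4 above can be sketched as follows; the computation of the image analysis result ian itself is not repeated here, and the function name is an assumption:

```python
def application_area_determination(ian, ar_th=100):
    """Return ar(i, j): 1 for an edge portion, 0 for a flat portion.

    ian: image analysis result over the 3x3 peripheral range,
    ar_th: application area determination threshold AR_th.
    """
    return 1 if ian > ar_th else 0

# Sample point S3: ian = 0 <= 100, so the flat portion is left as-is.
print(application_area_determination(0))    # 0
# Sample point S4: ian = 500 > 100, so enhancement is applied.
print(application_area_determination(500))  # 1
```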
  • ringing occurs conventionally in the range A of FIG. 25B, whereas in the first embodiment, the ringing can be suppressed in the range B of FIG. 25C.
  • FIG. 26 is a diagram showing a specific image example.
  • FIG. 26A is an input image (an image that has not been subjected to contour enhancement processing)
  • FIG. 26B is an image that has undergone only conventional contour enhancement processing
  • FIG. 26C is the output image of the first embodiment.
  • the left side is the actual processed image
  • the right side is a schematic representation of the processed image.
  • the dotted line portion of this schematic image schematically represents ringing occurring near the edge.
  • compared with the ringing (range A) generated by the conventional contour enhancement processing in FIG. 26B, the ringing (range B) generated by the first embodiment in FIG. 26C is suppressed. Therefore, it is possible to appropriately emphasize only the contour portion (edge portion, texture portion) without conspicuous noise as compared with the conventional case.
  • FIG. 27 shows an example of the Wiener filter used in the second embodiment.
  • FIG. 27A is an array showing the Wiener filter coefficients h (m, n) used in the second embodiment
  • FIG. 28 is a diagram showing specific pixel values (luminance).
  • f (i, j) denotes the input pixel
  • r (i, j) in FIG. 28B is the value obtained when only the conventional contour enhancement processing (5 × 5) is performed.
  • FIG. 28C shows the values of the output pixel g (i, j) in the second embodiment (when the contour enhancement process (5 × 5) and the application area determination process (3 × 3) are performed).
  • FIG. 29 is a diagram showing luminance on the vertical axis and sample points S1 to S9 on the horizontal axis.
  • the sample points S1 to S3 constitute flat portions
  • S4 to S6 constitute edge portions
  • S7 to S9 constitute flat portions.
  • FIG. 29A shows the input pixel f (i, j)
  • FIG. 29B shows the case where only the conventional edge enhancement processing is performed
  • FIG. 29C shows the output pixel in the second embodiment, when the contour enhancement process (5 × 5) and the application area determination process (3 × 3) are performed.
  • the second embodiment will be described by paying attention to the pixel on the third line.
  • the application region determination unit 300 makes a determination for S3 (50) and its peripheral pixels (3 × 3 range).
  • the input pixel (50) is output as the output pixel (50). Note that this can be obtained in the same manner for the sample points S1, S2, S7, S8, and S9.
  • the application region determination unit 300 determines S4 (50) and its peripheral pixels (3 × 3 range). Since the image analysis result ian (500) exceeds the application area determination threshold AR th (100), the application area determination result ar (i, j) is (1).
  • a value obtained by adding the input pixel (50) to the product of the edge enhancement component sp (i, j) (−30) and the edge enhancement degree ED (1) is output as the processed pixel (20). This can be obtained in the same manner for the sample points S5 and S6.
  • ringing occurs conventionally in the range A of FIG. 29B, whereas in the second embodiment, the ringing can be suppressed in the range B of FIG. 29C.
  • the values of the sample points S3 to S5 change sharply with the conventional edge enhancement processing, and unnatural ringing such as ghosting occurs.
  • FIG. 30 is a diagram showing a specific image example.
  • FIG. 30A is the input image (an image that has not been subjected to contour enhancement processing)
  • FIG. 30B is an image that has undergone only the conventional contour enhancement processing
  • FIG. 30C is the output image of the second embodiment.
  • the left side is the actual processed image
  • the right side is a schematic representation of the processed image.
  • the dotted line portion of this schematic image schematically represents ringing occurring near the edge.
  • the ringing (range B) generated by the second embodiment in FIG. 30C is suppressed compared with the ringing (range A) generated by the conventional contour enhancement processing in FIG. 30B. Therefore, it is possible to appropriately emphasize only the contour portion (edge portion, texture portion) without conspicuous noise as compared with the conventional case.
  • with a filter whose coefficient is largest at the center, falls once toward the outside, and then rises again, there is a problem that ringing near the edge is conventionally noticeable.
  • the second embodiment greatly improves this.
  • FIG. 31 shows a functional configuration of the contour emphasis unit 54b in the third embodiment.
  • the application region determination result ar (i, j) output from the application region determination unit 300 is input to the contour enhancement processing unit 200.
  • the determination unit 320 included in the application region determination unit 300 determines the application region determination result ar (i, j) as illustrated in FIG. 32
  • the horizontal axis represents the image analysis result ian
  • the vertical axis represents the application area determination result ar (i, j).
  • FIG. 32A shows a case where there is one threshold
  • FIG. 32B shows a case where there are two thresholds.
  • FIG. 32B is a diagram illustrating a case where AR1 th and AR2 th are used as the application region determination thresholds. If the image analysis result ian is equal to or smaller than the application area determination threshold AR1 th , the application area determination result ar (i, j) is set to "0". Conversely, if the image analysis result ian is larger than the application area determination threshold AR2 th , the application area determination result ar (i, j) is set to "1".
  • otherwise, the intermediate application area determination value MAAV is output as the application area determination result ar (i, j). In this embodiment, MAAV is "0.5".
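The two-threshold determination of FIG. 32B can be sketched as follows (the function name is an assumption; the threshold behavior and the MAAV value of 0.5 follow the description above):

```python
def determination_two_thresholds(ian, ar1_th, ar2_th, maav=0.5):
    """Return ar(i, j) using two thresholds AR1_th < AR2_th.

    ian <= AR1_th -> 0 (flat portion)
    ian >  AR2_th -> 1 (edge portion)
    otherwise     -> intermediate application area determination
                     value MAAV (0.5 in this embodiment)
    """
    if ian <= ar1_th:
        return 0
    if ian > ar2_th:
        return 1
    return maav

print(determination_two_thresholds(50, 100, 200))   # 0
print(determination_two_thresholds(150, 100, 200))  # 0.5
print(determination_two_thresholds(300, 100, 200))  # 1
```

The intermediate value gives a gradual transition between the flat and edge determinations instead of a hard 0/1 switch.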
  • the contour enhancement processing unit 200 changes the filter coefficient.
  • when ar (i, j) is 1, the filter coefficient h (m, n) uses the value shown in FIG.
  • FIG. 34 shows a pixel output unit 400b according to the third embodiment.
  • since the application region determination result ar (i, j) is not input to the pixel output unit 400b, the pixel output unit 400b includes only one multiplier and one adder.
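Because ar (i, j) has already been applied upstream by switching the filter coefficients, the output calculation of the pixel output unit 400b reduces to a single multiply and add, which can be sketched as (the function name is an assumption):

```python
def pixel_output_b(lm, sp, ED):
    """Third-embodiment pixel output unit 400b: one multiplier and
    one adder; ar(i, j) is not an input here.

    lm: target pixel, sp: edge enhancement component,
    ED: edge enhancement level.
    """
    return ED * sp + lm

print(pixel_output_b(lm=50, sp=-30, ED=1))  # 20
```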
  • the threshold value used in the above-described embodiment may be set by the user. That is, as shown in FIG. 35, a screen for setting a threshold value is displayed on the display screen displayed on the display unit 60. Then, when the user sets a threshold value, it is possible to perform enhancement processing according to the user's preference. In addition, by displaying a preview of the effect on the image on the threshold setting screen, the user can set a more appropriate threshold.
  • the contour emphasis processing unit uses 5 ⁇ 5 peripheral pixels
  • the application area determination unit uses 3 ⁇ 3 peripheral pixels
  • a Laplacian filter is used as the contour enhancement filter. These show only one embodiment, and appropriate processing may be executed according to the type of filter and the range used as the peripheral pixels.
  • the range of peripheral pixels used for the application area determination (flat portion determination) is smaller than the range used for the contour enhancement processing.
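As one concrete, purely illustrative way to analyze such a small 3 × 3 determination range, the image analysis result ian could be a sum of absolute differences from the target pixel; the actual measure used by the image analysis unit is an assumption here:

```python
def image_analysis_3x3(window):
    """Illustrative ian over a 3x3 window (list of 3 rows of 3
    pixels): sum of absolute differences from the center (target)
    pixel. A flat window yields 0; an edge raises the value."""
    center = window[1][1]
    return sum(abs(p - center) for row in window for p in row)

flat = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
edge = [[50, 50, 150], [50, 50, 150], [50, 50, 150]]
print(image_analysis_3x3(flat))  # 0   -> flat portion
print(image_analysis_3x3(edge))  # 300 -> exceeds AR_th = 100, edge
```

Because the window is smaller than the 5 × 5 enhancement range, pixels near an edge still produce a low ian and are treated as flat, which is what suppresses the ringing described above.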
  • the Laplacian filter and the Wiener filter are described as examples of the filter applied by the contour enhancement processing unit.
  • any filter that is effective in enhancing the contour (for example, an unsharp mask filter) may be used.
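As a sketch of the unsharp-mask idea, the enhancement component sp (i, j) could be formed as the target pixel minus a local mean; the 3 × 3 window and box blur below are illustrative assumptions, not the embodiment's filter:

```python
def unsharp_component_3x3(window):
    """Illustrative unsharp-mask enhancement component sp(i, j):
    center pixel minus the 3x3 box-blurred (mean) value."""
    mean = sum(p for row in window for p in row) / 9.0
    return window[1][1] - mean

flat = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
print(unsharp_component_3x3(flat))  # 0.0 (no enhancement on a flat area)
```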
  • the image processing device is applied to a mobile phone.
  • the image processing device can be applied to various display devices such as a television, a car navigation system, and a computer.
  • FIG. 36 illustrates a simple display device 9 connected to a computer.
  • an external input unit 910, a storage unit 920, a video signal processing unit 930, and a display unit 940 are connected to the control unit 900.
  • the video signal processing unit 930 includes a contour enhancement unit 54 and a gamma correction unit 58 to which the present invention is applied.
  • the video signal input by the external input unit 910 is displayed on the display unit 940 via the video signal processing unit 930.
  • since the flatness determination is performed with fewer pixels than are used by the contour enhancement unit for the contour enhancement processing, when approaching the edge portion from the flat portion, pixels closer to the edge portion are still determined to be flat compared with the conventional flat portion determination. Therefore, it is possible to perform enhancement processing while suppressing ringing and to display an image with a more natural finish.
  • a program that operates as an image processing apparatus is a program that controls a CPU or the like (a program that causes a computer to function) so as to realize the functions of the above-described embodiments.
  • Information handled by these devices is temporarily stored in a temporary storage device (for example, RAM) during processing, then stored in various storage devices such as ROM or HDD, and read, corrected, and written by the CPU as necessary.
  • examples of the recording medium for storing the program include semiconductor media (for example, ROM and non-volatile memory cards), optical and magneto-optical recording media (for example, DVD (Digital Versatile Disc), MO (Magneto-Optical disc), MD (Mini Disc), CD (Compact Disc), and BD), and magnetic recording media (for example, magnetic tape and flexible disks).
  • when distributing to the market, the program can be stored in a portable recording medium for distribution, or transferred to a server computer connected via a network such as the Internet.
  • in this case, the storage device of the server computer is also included in the present invention.
  • each device in the above-described embodiment may be realized as an LSI (Large Scale Integration), which is typically an integrated circuit.
  • each functional block of each device may be made into an individual chip, or a part or all of them may be integrated into one chip.
  • the method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor.
  • if integrated circuit technology that replaces LSI emerges with progress in semiconductor technology, it is of course possible to use an integrated circuit based on that technology.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

In the present invention, a first peripheral pixel located in the periphery of a target pixel is referenced and a contour-enhanced component processed by a contour enhancement filter is generated; a second peripheral pixel contained in a smaller range than the first peripheral pixel is referenced and it is determined whether the target pixel is a flat portion or an edge portion; if the target pixel is an edge portion, the contour-enhanced component is superimposed on the target pixel and output as the output pixel; and if the target pixel is a flat portion, the target pixel is output as the output pixel. It is thus possible to provide an image processing device capable of reducing the noise that occurs near edge portions when contour enhancement processing is performed on an input image.
PCT/JP2011/061472 2010-05-20 2011-05-19 Image processing device, image processing circuit, image processing method, and program Ceased WO2011145668A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-116659 2010-05-20
JP2010116659 2010-05-20

Publications (1)

Publication Number Publication Date
WO2011145668A1 true WO2011145668A1 (fr) 2011-11-24

Family

ID=44991761

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/061472 Ceased WO2011145668A1 (fr) 2010-05-20 2011-05-19 Dispositif de traitement d'image, circuit de traitement d'image, procédé de traitement d'image, et programme

Country Status (1)

Country Link
WO (1) WO2011145668A1 (fr)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003132352A (ja) * 2001-10-23 2003-05-09 Image processing method, image processing apparatus, image processing program, and computer-readable recording medium recording the program
JP2006324804A (ja) * 2005-05-17 2006-11-30 Contour enhancement circuit
JP2007213125A (ja) * 2006-02-07 2007-08-23 Image processing apparatus and method, recording medium, and program


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015033695A1 (fr) * 2013-09-06 2015-03-12 Sharp Corporation Image processing device
CN109544490A (zh) * 2018-10-17 2019-03-29 Beijing Dajia Internet Information Technology Co., Ltd. Image enhancement method and apparatus, and computer-readable storage medium
CN109544490B (zh) * 2018-10-17 2021-07-13 Beijing Dajia Internet Information Technology Co., Ltd. Image enhancement method and apparatus, and computer-readable storage medium
US11798489B2 (en) 2021-07-08 2023-10-24 Lg Display Co., Ltd. Gate driver and display device using the same

Similar Documents

Publication Publication Date Title
CN101232574B (zh) Imaging apparatus, noise removing device, and noise removing method
KR101303415B1 (ko) System and method for demosaicing image data using weighted gradients
US9626744B2 (en) Global approximation to spatially varying tone mapping operators
US8165420B2 (en) Noise reduction circuit and image processing device
US8189944B1 (en) Fast edge-preserving smoothing of images
KR101327789B1 (ko) Method and apparatus for simultaneously reducing various noises in an image
US9384533B2 (en) Method and device for converting image resolution, and electronic device having the device
JP2008263475A (ja) Image processing apparatus and method, and program
US7110044B2 (en) Image detail enhancement system
CN102640489A (zh) System and method for detecting and correcting defective pixels in an image sensor
US20110158541A1 (en) Image processing device, image processing method and program
JPWO2011033619A1 (ja) Image processing apparatus, image processing method, image processing program, and storage medium
US8238685B2 (en) Image noise reduction method and image processing apparatus using the same
CN102821230A (zh) Image processing apparatus and image processing method
CN102209180A (zh) Image processing apparatus and image processing method
JP2014115790A (ja) Image processing apparatus, information processing method, and program
US8488899B2 (en) Image processing apparatus, method and recording medium
KR100565065B1 (ko) Method and apparatus for enhancing image detail using a filter bank
US8073282B2 (en) Scaling filter for video sharpening
JP2011065339A (ja) Image processing apparatus, image processing method, image processing program, and storage medium
WO2011145668A1 (fr) Image processing device, image processing circuit, image processing method, and program
JP2001292325A (ja) Edge enhancement apparatus, edge enhancement method, and recording medium
WO2010007933A1 (fr) Image signal processing device and image display device
WO2012147879A1 (fr) Image processing device, display device, image processing method, and image processing program
JP5562812B2 (ja) Transmission/reception switching circuit, wireless apparatus, and transmission/reception switching method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11783602

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11783602

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP