WO2006082979A1 - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number
- WO2006082979A1 (PCT/JP2006/302059; application JP2006302059W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- interest
- image
- attraction
- degree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
Definitions
- The present invention relates to a technique for extracting a region of interest from an image, and more particularly to a technique for extracting a region of interest in accordance with a user's request.
- ROI: region of interest
- Image processing: for example, enlargement or refinement
- Conventionally, various methods for extracting a region of interest from an image have been proposed (see, for example, Non-Patent Document 1 and Non-Patent Document 2).
- However, the conventional techniques for extracting and using a region of interest cannot sufficiently reflect the user's requirements regarding the extraction.
- When a conventional technique extracts a region of interest from an image, it does so only according to a predetermined algorithm (an attractiveness calculation formula, etc.); the user's requirements (the size, position, number of regions, and so on) are not taken into account, so it is difficult to extract the region of interest the user actually wants.
- In Non-Patent Document 1, the human attention position (the position where the line of sight rests or that is being gazed at, corresponding to the position of the region of interest) is extracted from a multi-resolution model (a stepped, pyramid-structured representation of image resolution), but there is no description of the attention range (corresponding to the extent of the region of interest).
- In contrast to the method of Non-Patent Document 1, Non-Patent Document 2 also describes the attention range. However, the size of the attention range is derived from a specific visual model, so both the attention position and the size of the attention range depend only on the target image from which the region of interest is extracted.
- As with Non-Patent Document 1, the problem of being unable to respond to user requests therefore remains: it is not possible to extract regions of interest with the same position, shape, or number across multiple input images.
- That is, the shape and number are uniquely determined by the predetermined algorithm, and in general regions of interest differing in number and shape are extracted.
- Automatic extraction of a region of interest is also discussed in several papers, and methods have been proposed that automatically extract the region of interest using the data structure of JPEG2000 (see, for example, Non-Patent Document 3).
- Even in the case of Non-Patent Document 3, the position and size of the extracted region of interest depend only on the image. This is clearly a serious problem in practical use: even if the human gaze model based on a visual model of Non-Patent Document 3 were used for actual region-of-interest extraction, the result would be uncontrollable and of little practical value. A region-of-interest extraction method intended for practical use must be able to appropriately reflect the user's intentions and instructions.
- Methods have also been disclosed that selectively extract a region of interest based on an instruction input from the user or the like.
- The instruction information acquired from the user or the like may include information on the subject, such as "the part where a person's face appears" or "the part of the subject in the foreground".
- It may also include image attributes, such as information about the impression (features) of the image, for example "a red part" or "a flashy part".
- It may further include information on how the region of interest is to be presented, such as the number, size, or shape of the regions to be extracted, or information related to the intended action or processing, such as "create a thumbnail image (a reduced image for list display) from the photograph" or "extract only the part where a person appears".
- For example, when "person" is instructed, the formula for calculating the degree of attraction is changed so that areas in which a person appears in the image become more attractive.
- Non-Patent Document 1 "A Saliency— Based Search Mechanism For Overt And Covert Shifts Of Visual Attention J (Itti et al., Vision Research, Vol. 40, 2000, ppl489-1506)
- Non-Patent Document 2 "Gaze Model Based on Scale Space Theory” (Science Theory, D-II, Vol. J 86 -D-II, No. 10, ppl490- 1501, October 2003)
- Non-patent document 3 “Automatic extraction and evaluation of region of interest in JPEG2000 transcoder” Takeo Hamada et al., 30th Annual Conference of the Institute of Image Electronics Engineers of Japan, No. 10, pp. 115-116, June 2002. Disclosure of Invention
- For example, the request "extract two (or more) regions in which a red object appears" cannot be met by the conventional methods (at best, they can select the two reddest points in the image and treat the areas around them as regions of interest). Even more simply, when the size, number, or shape of the region of interest is specified without reference to image content such as "a red region" (for example, "extract one region" or "extract a circular region"), the conventional methods either cannot cope at all, or merely select the specified number of points with the highest attractiveness in the image and output them in the specified shape, such as a circle or rectangle.
- An object of the present invention is to provide an image processing apparatus and the like that can extract a region of interest in accordance with the user's wishes when extracting the region of interest from an image.
Means for Solving the Problem
- In order to achieve the above object, an image processing apparatus according to the present invention includes: image input means for acquiring image data representing an image; instruction input means for receiving conditions relating to the extraction of a region of interest from the image; region generation means for generating a region of interest from the image data based on the pixels whose calculated degree of attraction exceeds a predetermined threshold; and determination means for determining whether or not the generated region of interest satisfies the received conditions. When it is determined that the conditions are not satisfied, the threshold is changed and the processing of the region generation means and the determination means is repeated.
- With this configuration, when the region of interest generated based on the degree of attraction does not satisfy the accepted conditions, the region of interest is generated again with a changed attractiveness threshold, so a region of interest that meets the user's request can be extracted.
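- The following is a minimal Python sketch of this threshold-adjustment loop, assuming a per-pixel attractiveness (saliency) array; connected-component labeling stands in for the clustering step, and all names, the step size, and the starting threshold are illustrative rather than taken from the patent.

```python
import numpy as np
from scipy import ndimage

def extract_rois(saliency, target_count, step=0.02, max_iters=100):
    """Threshold the attractiveness map, label connected pixel groups as
    candidate regions of interest, and nudge the threshold until the
    requested number of regions appears (or give up)."""
    threshold = float(saliency.max()) * 0.8
    for _ in range(max_iters):
        labels, n = ndimage.label(saliency > threshold)
        if n == target_count:
            return [np.argwhere(labels == i) for i in range(1, n + 1)]
        # Too many regions: raise the threshold; too few: lower it.
        threshold += step if n > target_count else -step
    return None  # corresponds to the "extraction impossible" status output
```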
- the image processing apparatus can accept the number, shape, size, position, and extraction range of the region of interest as the conditions regarding the region of interest extraction.
- Various kinds of weighting (for example, weighting according to a probability distribution over a specified range, weighting according to the distance from a contour line, or weighting according to the distance from a specified position) can also be applied to the degree of attraction.
- With this configuration, the user's specific request can be reflected by satisfying the accepted conditions on the number, shape, size, position, and extraction range of the region of interest.
- the region generation unit is characterized in that the number of image data to be clustered is changed by changing the threshold value.
- The region generation means may further change the threshold value based on the number of clusters obtained as a result of the clustering. Furthermore, by performing interpolation or extrapolation over a plurality of generated clusterings, an attractiveness threshold that extracts regions of interest meeting the conditions specified through the instruction input means (that is, satisfying the output conditions) can be determined.
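- As a sketch of that interpolation idea (names are illustrative; note that np.interp clamps rather than truly extrapolating at the ends):

```python
import numpy as np

def estimate_threshold(trials, desired_count):
    """Given (threshold, cluster_count) pairs from earlier clustering runs,
    interpolate a threshold expected to yield the desired cluster count."""
    thresholds = np.array([t for t, _ in trials], dtype=float)
    counts = np.array([c for _, c in trials], dtype=float)
    order = np.argsort(counts)  # np.interp needs ascending x values
    return float(np.interp(desired_count, counts[order], thresholds[order]))

# e.g. estimate_threshold([(0.9, 2), (0.6, 6)], 4) guesses a value in between.
```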
- the image processing apparatus may extract an edge or an object, and calculate a degree of attraction based on the result.
- Alternatively, the degree to which a predetermined object is present (the object degree) may be obtained using pattern matching or a neural net, and the degree of attraction may be calculated based on it.
- The "attraction degree" may also be calculated by applying a weight corresponding to the type of object to the "object degree", or based on a human gaze model.
- The image processing apparatus may generate, as the region of interest, an area whose attractiveness in the image is higher than a predetermined value (threshold).
- Alternatively, the region of interest may be generated by clustering based on both the attractiveness of the positions exceeding the threshold and the characteristics of the input image (texture, shading, color, etc.).
- Clustering may also be performed with a plurality of threshold values, and a region from a second clustering that contains a region determined to be a region of interest in the first clustering may itself be generated as the region of interest.
- the image processing apparatus can output status information indicating that extraction is impossible when it is determined that the region of interest in accordance with the designated condition cannot be generated. Furthermore, it is possible to output arbitrary status information indicating the processing progress and processing status.
- the image processing apparatus can generate a region of interest so that the first region of interest and the second region of interest do not overlap.
- the region of interest can be generated so that the second region of interest is included in the first region of interest.
- Alternatively, the regions of interest can be generated with approximately the same size.
- the regions of interest can be generated in different sizes.
- the image processing apparatus performs clustering with the number of clusters controlled so that the number of clusters matches the region of interest generation condition, and outputs the obtained cluster as a region of interest.
- As a method of controlling the number of clusters based on the distribution of attractiveness in the image, the map can be cut at an attractiveness level high enough that each ridge of high attractiveness forms its own area (just like drawing a contour line on a map), and the number of clusters can then be controlled by raising or lowering the corresponding threshold.
- The present invention can be realized not only as such an image processing apparatus but also as an image processing method whose steps correspond to the characteristic units of the apparatus, or as a program or an integrated circuit that causes a personal computer or the like to execute these steps.
- the program can be widely distributed via a recording medium such as a DVD or a transmission medium such as the Internet.
- According to the present invention, a region of interest that meets the user's request (requirements on attributes of the region of interest, such as its shape, size, position, and number) can be extracted while the content and characteristics of the image are taken into account.
- FIG. 1 is a block diagram showing a functional configuration of an image processing apparatus according to the present embodiment.
- FIG. 2 (a) shows an example of an original image.
- Figure 2 (b) is a schematic diagram showing a multi-resolution image as a mosaic image.
- FIG. 3 is a schematic diagram in which edge detection is performed on a mosaic image.
- FIG. 4 (a) shows an example of an original image.
- Fig. 4 (b) is a schematic diagram showing a multi-resolution image as a mosaic image.
- FIGS. 5 (a) and 5 (b) are schematic diagrams showing an example of extraction when the shape and size of the region to be extracted are specified.
- FIGS. 6 (a) and 6 (b) are schematic diagrams showing an example of weight distribution and an example of extraction when the position of the region to be extracted is designated.
- FIG. 7 (a) shows an example of an original image.
- Figure 7 (b) shows an example of extracting the region of interest.
- FIGS. 8 (a) and 8 (b) are diagrams showing an example of extracting a region of interest.
- FIG. 9 (a) and (b) are diagrams schematically showing examples of mosaic images.
- FIGS. 10A and 10B are diagrams schematically showing an example of an edge image.
- FIG. 11 is a diagram schematically and three-dimensionally showing an attractiveness map and a region of interest.
- FIG. 12 is a diagram schematically and three-dimensionally showing an attractiveness map and a region of interest.
- FIG. 13 is a diagram schematically showing an attractiveness map and a region of interest.
- FIG. 14 is a flowchart showing a processing flow of the image output apparatus according to the present invention.
- FIGS. 15 (a) to 15 (d) are diagrams showing the relationship between the distribution of data to be clustered, threshold values, and generated clusters in a two-dimensional schematic diagram.
- FIG. 16 is a diagram that shows one-dimensionally the relationship between the attractiveness distribution, the threshold value, and the generated cluster from another viewpoint.
- FIG. 17 is an example of a graph showing the relationship between the threshold and the number of generated clusters.
- FIG. 1 is a block diagram showing a functional configuration of the image processing apparatus 100 according to Embodiment 1 of the present invention.
- The image processing apparatus 100 is a stand-alone apparatus, or a device provided as part of the functionality of a portable terminal, that can extract a region of interest meeting the user's request while considering the content and characteristics of the image. It includes an image input unit 102, a shape specifying unit 112, a size specifying unit 114, a position/range specifying unit 116, a number specifying unit 118, an attractiveness calculating unit 122, an attractiveness-calculation image processing unit 124, a status display unit 132, a region generation condition setting unit 142, a region generation unit 144, a clustering unit 146, a threshold determination unit 147, an attractiveness map unit 148, an image output unit 152, a status output unit 154, and a region information output unit 156.
- The image input unit 102 includes a storage device such as a RAM and holds the acquired original image (for example, an image taken with a digital still camera or the like).
- the attractiveness calculating image processing unit 124 performs image processing necessary for calculating the attractiveness (also referred to as “attention level”) at each position in the image.
- the attractiveness calculating unit 122 actually calculates the attractiveness of each position.
- “attraction degree” refers to the degree of user's attention to a part of an image (for example, represented by a real number from 0 to 1, an integer from 0 to 255, etc.).
- the status display unit 132 is a liquid crystal panel, for example, and displays a series of processing contents.
- The image output unit 152, the status output unit 154, and the region information output unit 156 send the processed image, the processing status, and information on the region of interest (for example, coordinates and size) to the status display unit 132, an external display device, or the like.
- The region generation condition setting unit 142 receives instructions and conditions from the user or the like via each specifying unit (the shape specifying unit 112, the size specifying unit 114, the position/range specifying unit 116, and the number specifying unit 118) and, based on these, sets the region-of-interest determination conditions that the region generation unit 144 uses to determine the region of interest.
- the region generation condition setting unit is an example of a region generation unit.
- The region generation unit 144 is, for example, a microcomputer including a RAM and a ROM that stores a control program, and controls the entire apparatus 100. The region generation unit 144 also generates a region of interest based on the degree of attraction of each pixel.
- the region generation unit 144 includes a clustering unit 146, a threshold determination unit 147, and an attractiveness map unit 148.
- the attractiveness map unit 148 generates, for each image, an attractiveness map (which will be described later) in which the calculated attractiveness is associated with the position on the XY coordinates.
- The attractiveness map is equivalent to an image in which the luminance value of each pixel is replaced with its attractiveness value. If the degree of attraction is defined per block of some size (n x m pixels, where n and m are positive integers), all pixels in a block share the same attractiveness; with multi-resolution decomposition, the map can also be thought of as a pyramid.
- the clustering unit 146 performs clustering on the above-described attractiveness map according to the distribution of attractiveness.
- clustering means that similar image data (or image patterns) are grouped into the same class.
- Clustering methods include hierarchical methods, such as the shortest-distance (single-linkage) method that groups data points close to each other, and partitioning optimization methods such as the k-means method.
- The clustering method itself is described later; the basic operation is to divide the attractiveness map into several clusters (also called "segments" or "categories") based on the distribution of attractiveness.
- Clustering is also defined as dividing a set of classification targets (here, the set of points at which a degree of attraction is defined) into subsets that achieve "internal connection" but "external separation"; it is a way of grouping similar things together.
- Each such subset is called a "cluster". For example, if the attractiveness on the attractiveness map is concentrated at four locations, clustering corresponds to dividing the map into four groups.
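- As an illustrative sketch of this step (k-means stands in for the generic clustering just described; function and variable names are not from the patent):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_salient_points(saliency, threshold, k):
    """Collect the (y, x) coordinates whose attractiveness exceeds the
    threshold and split them into k internally cohesive groups."""
    points = np.argwhere(saliency > threshold).astype(float)
    _, labels = kmeans2(points, k, minit='++')
    return [points[labels == i] for i in range(k)]
```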
- The threshold determination unit 147 controls the threshold applied to the attractiveness on the attractiveness map. Specifically, the threshold is raised or lowered when the number or size of the clusters produced by the clustering unit 146 does not satisfy the conditions accepted from the user or the like.
- the threshold determination unit is an example of a determination unit.
- each designation unit (the shape designation unit 112, the size designation unit 114, the position range designation unit 116, and the number designation unit 118) will be described in detail below. It should be noted that the input to each of the above-mentioned specifying sections may be performed by the user or input via a control program or the like.
- The shape specifying unit 112, the size specifying unit 114, the position/range specifying unit 116, and the number specifying unit 118 include a keyboard and a mouse (or are realized through execution of a control program), and accept conditions and instructions for region-of-interest extraction from the user or the like.
- the shape designation part, the size designation part, the position range designation part, and the number designation part are examples of instruction input means.
- The shape specifying unit 112 accepts designation of the shape of the region of interest that the user or the like wishes to extract (for example, circular, rectangular, or elliptical). The shape types are not limited to these, and any shape can be accepted. (As described later, FIG. 5(a) shows an example in which two circular regions of interest with different sizes are specified as the shapes to be extracted.)
- The size specifying unit 114 accepts designation of the size of the region of interest (ROI) that the user or the like wishes to extract, for example an absolute size in pixels or a relative size expressed as a ratio of the image's width and height. Besides a direct size, an attribute that stands in for the size can also be accepted, such as "a ratio to the largest extractable region of interest", "the second largest region", or "the largest region contained within a given size"; in that case the actual size may change dynamically with the content of the image (see FIG. 5(a)).
- The way the size of the shape is designated is not limited to the methods described above; any designation method may be used, whether or not the size changes dynamically with the content of the image.
- the position range designation unit 116 accepts designation of the position and range of the region of interest to be extracted. For example, it accepts designations such as absolute position (absolute point) based on the number of pixels and relative position versus point expressed as a ratio to the vertical and horizontal size of the image.
- Arbitrary methods can be used for the number of points, the form of designation, and the way they are used (the rules applied when extracting the region of interest).
- For example, the region of interest may be extracted so as to always include the specified points; priorities may be assigned so that high-priority points must be included; or the region of interest may be extracted as an area containing multiple points. The number of points, the form of designation, and the way they are used can all be chosen freely.
- The number of points that can be specified may be one or more.
- As a condition for extracting the region of interest, inclusion may be required in various ways, for example requiring that all specified points be included, or that at least one be included; even loose, best-effort conditions are possible.
- When a range is designated, its size, number, and usage can likewise be chosen freely, as in the case where a point is designated. For example: "extract the region of interest so that it includes at least 20% of the specified range", "extract the region of interest from within the specified range", or "if multiple ranges are specified, the region of interest must overlap at least one of them by 50% or more".
- Priorities, or weights based on probability distributions, may also be assigned to ranges; any method based on mathematical or statistical processing can be used within the range realizable by a practitioner at the technical level at the time of filing, such as extracting regions so as to maximize a weighted score.
- To accept designation of a specific range from the user or the like, any existing user interface may be used, such as accepting a range drawn with a mouse or a pen, or automatically setting a predetermined range when a point is specified.
- Designations can also be combined with the number condition of the number specifying unit 118: the number of specified points and how they are used can be set together with the number of regions to extract, as in "extract at least one region so that it includes the specified points".
- The number specifying unit 118 accepts designation of the number of regions of interest that the user or the like wishes to extract. As with point designation in the position/range specifying unit 116, the number of regions designated may be one or more, and the form of designation and the rules for extraction and use are arbitrary. (As described later, FIG. 5(a) shows an example in which two regions of interest are specified.)
- The conditions and instructions accepted through the shape specifying unit 112, the size specifying unit 114, the position/range specifying unit 116, and the number specifying unit 118 are each optional, but the region of interest is extracted using at least one accepted condition.
- In the present embodiment, the shape specifying unit 112, the size specifying unit 114, the position/range specifying unit 116, and the number specifying unit 118 are provided as the interface for receiving instructions from the user or the like.
- However, the configuration need not be as above.
- An interface may be provided for separately inputting other elements that control the extraction of the region of interest.
- For example, there may be an interface for specifying whether regions of interest may overlap, an interface for controlling the distance between regions of interest, or an interface for controlling their relative sizes (when extracting multiple regions of interest: "extract only one region larger than the others", "extract all regions of interest with approximately the same size", and so on).
- The separately provided interfaces are not limited to the above; any interface that accepts designations capable of controlling region-of-interest extraction may be provided.
- the degree of attraction is calculated by the degree of attraction calculation unit 122 and the image processing unit 124 for attraction level calculation.
- the attractiveness calculating unit 122 calculates the local attractiveness of the image.
- the degree-of-attraction calculation image processing unit 124 performs image processing necessary to calculate the degree of attraction in the degree-of-attraction calculation unit 122.
- a conventional region of interest extraction method or a human gaze model can be used as the image processing in the image processing unit 124 for calculating the degree of attraction.
- Techniques for obtaining the local attractiveness in an image (human gaze models) are described in the prior art cited above; in both cases, a gaze model is constructed based on local differences in the image.
- The part corresponding to the calculation of attractiveness by the gaze model corresponds to the attractiveness calculating unit 122, and the image processing part, including the difference processing, corresponds to the attractiveness-calculation image processing unit 124.
- Specifically, the image is decomposed into many resolutions (an image pyramid structure); after hue differences between blocks and the like are calculated, the attractiveness values computed at each resolution are summed with predetermined weights, and a weight depending on position is applied to obtain the final "attractiveness".
- the attractiveness calculation image processing unit 124 has a multi-resolution separation and hue conversion function.
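- The following is a rough Python sketch of such a pyramid-style computation, assuming a grayscale array; it uses center-surround luminance differences in place of the hue differences mentioned above, and the per-level weights are illustrative:

```python
import numpy as np
from scipy import ndimage

def multiscale_saliency(gray, weights=(0.4, 0.3, 0.2, 0.1)):
    """Blend center-surround differences computed at several scales, each
    scale weighted by a fixed coefficient, into one attractiveness map."""
    gray = gray.astype(float)
    saliency = np.zeros_like(gray)
    for level, w in enumerate(weights):
        sigma = 2.0 ** level
        center = ndimage.gaussian_filter(gray, sigma)
        surround = ndimage.gaussian_filter(gray, sigma * 4.0)
        saliency += w * np.abs(center - surround)
    return saliency / (saliency.max() or 1.0)
```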
- Existing filter processing can also be used, such as noise reduction, normalization (histogram equalization, dynamic-range adjustment, etc.), smoothing (blur, low-pass filter, Gaussian filter, etc.), edge enhancement, and morphological transformation using OPENING and CLOSING.
- The smoothing process also connects to the scale space of the conventional technique described above: instead of defining and calculating a scale space for each element (each pixel or block) of the image, a Gaussian filter applied to the whole image can serve as a substitute.
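- One possible preprocessing chain built from the filters listed above (an illustrative ordering, not a pipeline mandated by the text):

```python
import numpy as np
from scipy import ndimage

def preprocess(img):
    """Noise reduction, dynamic-range normalization, smoothing, then
    morphological OPENING and CLOSING."""
    img = ndimage.median_filter(img.astype(float), size=3)  # noise reduction
    span = img.max() - img.min()
    img = (img - img.min()) / (span if span else 1.0)       # normalization
    img = ndimage.gaussian_filter(img, sigma=1.0)           # smoothing (scale space)
    img = ndimage.grey_opening(img, size=3)                 # OPENING
    return ndimage.grey_closing(img, size=3)                # CLOSING
```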
- the attractiveness calculation unit 122 calculates the attractiveness of each layer at each resolution, and calculates the final attractiveness in consideration of the weighting of the calculated value in each layer.
- The method of calculating the attractiveness is not limited to the object-agnostic processing described above (a method that processes the image globally); processing specialized for a specified target, as in the conventional techniques described above, may also be used.
- the brain region is extracted from the MRI image as the region of interest.
- Detection, discrimination, and recognition techniques commonly implemented with templates, neural nets, BOOSTING, and the like, such as human face detection, character recognition, and general object detection and recognition, can also be used as extraction methods.
- calculation of the "probability" of an object may be used for calculating the attractiveness.
- The degree of attraction may be obtained by multiplying the probability by a coefficient corresponding to the type of object, for example "2.0" for a face, "1.0" for a flower, and "1.5" for a dog.
- In this way, the difference in attractiveness between object types can be expressed by a coefficient.
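- A small sketch of that weighting, with the coefficients taken from the example above and everything else (names, the fallback weight) illustrative:

```python
# Detector probabilities are assumed to come from any matcher
# (template matching, a neural net, boosting, and so on).
CLASS_COEFFICIENT = {"face": 2.0, "dog": 1.5, "flower": 1.0}

def object_attractiveness(class_name, probability):
    """Attractiveness as detection probability scaled by an object-type
    coefficient; unknown classes fall back to a neutral weight of 1.0."""
    return CLASS_COEFFICIENT.get(class_name, 1.0) * probability
```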
- Image processing in which some information about the object is known in advance is called "top-down" image processing, while processing in which no information about the image content or objects is known is called "bottom-up" image processing; the two are distinguished in this way.
- the state display unit 132 presents the processing status and condition setting status in the attraction level calculation unit 122, the attraction level calculation image processing unit 124, and the region generation condition setting unit 142 described later to the user. For example, each situation is presented to the user using any means such as a liquid crystal panel or LED.
- the image processing result in the attractiveness calculating image processing unit 124 may be displayed.
- the “attraction level” at each part of the image calculated by the attraction level calculation unit 122 may be processed and displayed so as to be visible.
- FIG. 2 schematically shows the original image 200 and, as an example of an image subjected to multi-resolution conversion by the attractiveness-calculation image processing unit 124, the mosaic image 202.
- Each block originally has a gray value; note that in the drawings the shades are rendered in binary black and white by dithering (error diffusion), and the same applies hereafter.
- For simplicity of explanation, the "attraction degree" is defined here using only the edge strength. (The attractiveness can be calculated in various ways; this is merely a simple example.)
- Here, the edge strength is expressed by the density of line segments.
- FIG. 3 is an example of an image on which edge detection has been performed. (Ideally the attractiveness values would be expressed in shades of gray, but since the drawings are binary, they are shown schematically as in FIG. 3.)
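- A minimal sketch of an edge-only attractiveness map of this kind (Sobel magnitude is one common choice; the text does not prescribe a particular edge detector):

```python
import numpy as np
from scipy import ndimage

def edge_attractiveness(gray):
    """Attractiveness defined purely from edge strength: Sobel gradient
    magnitude, normalized to [0, 1]."""
    gray = gray.astype(float)
    magnitude = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
    return magnitude / (magnitude.max() or 1.0)
```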
- the state display unit 132 displays the mosaic image 202 in FIG. 2B and the edge detection image 300 in FIG. Thereby, the user can know the image processing status and the attractiveness distribution.
- The status display unit 132 is not an essential component of Embodiment 1; like each specifying unit (the size specifying unit 114, the position/range specifying unit 116, the number specifying unit 118, etc.), it is a component that can be included as needed.
- functions of the region generation condition setting unit 142 and the region generation unit 144 will be described. As described above, the region generation unit 144 determines the region of interest based on the degree of attraction. The region generation condition setting unit 142 specifies the determination conditions at this time.
- The region generation condition setting unit 142 sets the region-of-interest determination conditions based on the instructions from the user or the like received by each specifying unit (the shape specifying unit 112, the size specifying unit 114, the position/range specifying unit 116, and the number specifying unit 118).
- the region of interest determination condition is set so that the region of interest becomes that shape.
- the region-of-interest determination condition is set so that the region of interest has that size.
- If a number is specified, the region-of-interest determination condition is set so that the number of regions of interest equals the specified number.
- FIG. 4A shows an example of the original image.
- FIG. 4 (a) is a diagram schematically showing a state in which the object A410, the object B412, the object C414, and the object D416 are shown in the original image 400.
- FIG. 4B schematically shows an example of an edge image 440 obtained by performing mosaic processing on the original image 400 and performing edge extraction.
- In the edge image 440, the shading of each block represents the strength of the edge.
- Suppose a circle is specified as the shape of the region of interest via the shape specifying unit 112, and a predetermined size is specified as the size of the region of interest via the size specifying unit 114.
- The condition, then, is that two circular regions of interest of approximately the sizes shown in FIG. 5(a) are to be extracted.
- A range that allows variation in size may also be set; in this case, the allowable variation range 506 for size example A502 is indicated by a broken line in FIG. 5(a).
- Whether a variation allowance 506 exists, its specific diameter, and so on may be defined in advance as presets, or accepted from the user or the like through the specifying units (in the above example, the size specifying unit 114).
- In this way, the region generation condition setting unit 142 sets the conditions for determining the region of interest based on the designations from each specifying unit.
- the region generation unit 144 extracts a region of interest according to the size example A502 and the size example B504. A specific example of extracting an area corresponding to the size example A502 will be described with reference to FIG.
- Here, the edge strength in the edge image 440 (in FIG. 5(b), the darker a block, the stronger its edge) is treated directly as the degree of attraction.
- That is, the edge image 440 serves as an attractiveness map showing the level of attractiveness.
- Hereinafter, the edge image 440 is referred to as the attractiveness map 440 in descriptions involving the attractiveness.
- The size example A502, with its allowable variation width 506, is scanned over the attractiveness map 440 just as in pattern matching. This is equivalent to searching for the position with the highest total attractiveness on the circle (the attractiveness score).
- A slight difference from general pattern matching is that the attractiveness inside the circle does not contribute to the score; only the attractiveness of the blocks on the circumference contributes.
- A general pattern-matching algorithm could be applied as-is, but that would overweight the attractiveness inside the region of interest rather than on its boundary line.
- When the attractiveness map 440 is scanned with size example A502 so as to maximize the attractiveness score, the resulting region of interest is ROI determination example A542 shown in FIG. 5(b); similarly, ROI determination example B544 in FIG. 5(b) corresponds to size example B504. If a strict circle is not required, the shape can also be deformed into an ellipse. In the above example only the pattern-matching method was described, but determining the position of a specific region of interest, including position determination involving deformation of the region's own shape, can also be implemented by methods other than pattern matching.
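- The perimeter-only scan can be sketched as follows (a brute-force search; the radius handling, sampling density, and names are illustrative):

```python
import numpy as np

def best_circle(saliency, radius, samples=64):
    """Slide a circle over the attractiveness map and keep the center whose
    summed attractiveness on the circumference is highest; interior pixels
    deliberately do not contribute, as described above."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    dy = np.rint(radius * np.sin(angles)).astype(int)
    dx = np.rint(radius * np.cos(angles)).astype(int)
    h, w = saliency.shape
    best_center, best_score = None, -np.inf
    for cy in range(radius, h - radius):
        for cx in range(radius, w - radius):
            score = saliency[cy + dy, cx + dx].sum()
            if score > best_score:
                best_center, best_score = (cy, cx), score
    return best_center, best_score
```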
- Dynamic contour extraction (active contour) technology defines an energy for a contour and deforms the contour in the image so as to minimize that energy, extracting the contour by calculation.
- For example, a predetermined number of control points (for example, 20) are placed on the contour, and candidate destinations for movement or deformation are set for each control point.
- The energy can be designed so that its optimum favors a circular shape, or the contour can be corrected to a circle once it has converged. Such energy definitions may also be performed by the region generation condition setting unit 142.
- the configurations of the region generation condition setting unit 142 and the region generation unit 144 are not limited to the above examples, and may be configured using other existing technologies. .
- In this example, two regions of interest were extracted without overlapping; when extracting a plurality of regions of interest, it may be necessary to determine whether they are allowed to overlap. For example, overlap permission may be specified through the specifying units, or the image processing apparatus 100 may be preset not to allow overlap.
- Here too, the degree of attraction is assumed to be calculated based on the edge strength, as before.
- The attractiveness score can be designed to decrease for a region spanning the gap between the region of interest 822 and the region of interest 824, so that a single large region containing both is not extracted as the region of interest.
- Depending on the application, however, there may be no problem at all in outputting the region of interest 822 and the region of interest 824 as the regions of interest.
- FIGS. 9(a) and 9(b) show the original image 800 of FIG. 8 mosaicked with two block sizes; needless to say, this is a schematic example of decomposing the original image 800 into multiple resolutions. The edge strengths obtained from mosaic image A900 in FIG. 9(a) and mosaic image B910 in FIG. 9(b) are shown in FIGS. 10(a) and (b); as in FIG. 3, edge strength is expressed for convenience by the density of line segments. Comparing edge image A1000 and edge image B1010, edge image B1010 captures a more global edge distribution while edge image A1000 captures a more local one.
- FIG. 11 shows an example in which the edge strength is sequentially obtained by multi-resolution as described above, and the edge strength is read as the degree of attraction in the same manner as in the previous examples, and an attraction degree map is generated.
- Fig. 11 schematically represents an attractiveness map when the original image 800 is decomposed into a plurality of multi-resolutions, the edge strength is obtained at each resolution, and the edge strength is read as the attractiveness. (Attraction map 1100).
- the height direction represents the height of the attractiveness.
- The attractiveness map 1100 is shown cut at a certain value (attractiveness level), just as a terrain map is cut along a contour line; the cut cross sections appear as the black areas.
- In FIG. 11 the attractiveness map 1100 has six cut areas, one of which is the region of interest 1110.
- FIG. 12 shows an example in which the height at which this cut is made is changed.
- The attractiveness map 1200 in FIG. 12 is cut at a lower value than in FIG. 11.
- The main cross sections are indicated by black dots.
- The cross section created in FIG. 11 (the region of interest 1110) is shown in FIG. 12 as the region surrounded by a dotted line.
- The higher (more attention-grabbing) area is contained within the lower, wider area.
- By generating candidate regions of interest hierarchically in this way, a region that matches the user's request can be output as the region of interest (a sketch of such hierarchical cuts follows below).
- At the lower cut of FIG. 12, the region corresponding to the region of interest 1110 can no longer be extracted explicitly by the threshold alone.
- In such cases, the judgment can be made by incorporating existing clustering methods (for example, hierarchical methods such as the shortest-distance method, or partitioning optimization methods such as k-means) or BOOSTING, which increases the accuracy.
- Objects in the image may also be extracted using an existing template match or the like (even incomplete object extraction giving only approximate position and shape suffices) and used for the clustering.
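- A minimal sketch of the contour-line-style cuts (connected-component labeling per cut level; the cut levels themselves are illustrative):

```python
from scipy import ndimage

def hierarchical_cuts(saliency, cut_levels):
    """Cut the attractiveness map at several heights, like contour lines on
    a terrain map; regions found at a high cut nest inside the wider regions
    found at a lower cut, giving a hierarchy of ROI candidates."""
    hierarchy = []
    for level in sorted(cut_levels, reverse=True):  # highest (smallest regions) first
        labeled, count = ndimage.label(saliency > level)
        hierarchy.append((level, labeled, count))
    return hierarchy
```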
- FIG. 13 illustrates the operation of the threshold determination unit 147 by simplifying the relationship between the attractiveness maps of FIGS. 11 and 12 and the threshold (cut level).
- FIG. 13 schematically shows the change in the degree of attractiveness when the image is crossed by the scanning line 1310.
- a region of interest is extracted with each of threshold A1302, threshold B1304, and threshold C1306.
- By changing the threshold value it is possible to extract a region that has been adapted according to the size or shape instruction of the region of interest from the user or the like without changing the formula for calculating the degree of attraction.
- A cluster is formed by the attractiveness values corresponding to each ROI obtained when threshold A1302, threshold B1304, or threshold C1306 is used.
- For example, at threshold A1302 there are two clusters with attractiveness above the threshold, and three clusters below it (the region to the left of ROI-3, the region between ROI-3 and ROI-7, that is, to the right of ROI-3 and to the left of ROI-7, and the region to the right of ROI-7).
- The above is a specific example of generating the region of interest and its determination conditions when the shape is specified via the shape specifying unit 112, the size via the size specifying unit 114, and the number via the number specifying unit 118.
- When a position is specified, the region-of-interest determination conditions are set so that the region of interest is extracted at that position, or so that a weighting depending on the distance from the specified position is applied.
- Fig. 6 (a) is a diagram showing an example when weights are set for the attractiveness map. Here, it is shown that the blacker region has a higher weight. By multiplying this weight setting with the attractiveness map, it is possible to extract the region of interest with more emphasis on the center.
- Here, the edge image 440 is treated as an attractiveness map in the same manner as in the description of FIGS. 4 and 5.
- a new attraction map obtained by multiplying the edge image 440 (attraction level map 440) by the weight setting 600 is the weighted edge image 640 in FIG. 6 (b).
- In the weighted edge image 640, the edges (equivalent to attractiveness) near the specified position (the center) are emphasized, while the edges far from the specified position are rendered weakly.
- The region of interest is then determined by pattern matching on the weighted edge image 640, in the same manner as in FIG. 5. Needless to say, a region close to the specified position (the center) is output as the region of interest.
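- A sketch of such a positional weight map (a Gaussian fall-off is one plausible choice; the text only requires some distance-dependent weighting):

```python
import numpy as np

def positional_weight(shape, center, sigma):
    """Weight map peaking at a user-specified position; multiplying it into
    the attractiveness map biases extraction toward that position."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dist2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2))

# weighted_map = attractiveness_map * positional_weight(attractiveness_map.shape,
#                                                       center=(120, 160), sigma=40.0)
```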
- the image output unit 152, the status output unit 154, and the region information output unit 156 include, for example, a liquid crystal panel, and output the processed image, the processing status, and information on the region of interest (such as coordinates and size), respectively.
- The status output unit 154 can output, in addition to information such as whether region-of-interest extraction succeeded, the processing status during each process as a log. It can be used as a monitor of each process in the present embodiment, in the same way as, or in place of, the status display unit 132.
- The image output unit 152 and the status output unit 154 are not indispensable components of the present embodiment; like the specifying units (such as the size specifying unit 114), they are components that can be included as needed.
- FIG. 7 (b) is a diagram showing a result of interest region extraction (interest region extraction image 702) for the original image 200, and is an example of image output in the image output unit 152.
- FIG. 14 is a flowchart showing a process flow in the image processing apparatus 100.
- First, an image is input through the image input unit 102 (S100), and instructions from the user or the like are received through the shape specifying unit 112 through the number specifying unit 118 (S102).
- If a size is specified (S104: Yes) and a shape is also specified (S120: Yes), this is notified to the region generation condition setting unit 142 (S122).
- the region generation unit 144 instructs the attraction degree map unit 148 to generate an attraction degree map based on the above specified conditions (S124). Further, the region generation unit 144 selects an optimal ROI using a method similar to the conventional method (S126).
- the ROI specified by the above processing is displayed (S118).
- In the above description, the region of interest is extracted from the entire image, but it may instead be extracted from a predetermined range or a specified range.
- Also, in the flowchart, the presence or absence of a number designation is determined after the presence or absence of a size designation in S104, but the present invention is not limited to this configuration: the processes corresponding to size designation, shape designation, and number designation can function independently, and their dependency relations (upstream/downstream order in the flowchart, etc.) can be organized arbitrarily according to the specification requirements.
- The region to be output as the ROI may also be decided based on each cluster. In this case, changing the threshold changes the set of data to be clustered (the data distribution) itself.
- Figures 15 (a) to 15 (d) are two-dimensional schematic diagrams showing the relationship between the distribution of data to be clustered, threshold values, and generated clusters.
- In FIGS. 15(a) to 15(d), the horizontal axis is the x direction (image width) and the vertical axis is the y direction (image height), and the coordinates whose attractiveness exceeds threshold A are plotted.
- For example, point A is plotted because the attractiveness corresponding to pixel (x1, y1) exceeds threshold A.
- When FIG. 15(a) is divided into clusters using a general clustering method, it is expected to split roughly into two, as shown in FIG. 15(b). In conventional clustering (optimization and efficiency methods), the theme would be how to form the two optimal regions, or how the two regions might be split into four; in this method, however, the distribution of the image data itself can be modified by changing the threshold.
- FIG. 15(c) shows the image data distribution when threshold A is changed to threshold B (threshold A is greater than threshold B): the star-shaped points exceed threshold A, and the round points do not exceed threshold A but do exceed threshold B.
- If the same general clustering method is applied to FIG. 15(c), the data are expected to be classified into four clusters, as shown in FIG. 15(d). If extraction of four regions of interest was specified as the input instruction, this satisfies it.
- The data belonging to each cluster can be determined using the clustering result at threshold B, and the region of interest to be output can be, for example, an ellipse enclosing each cluster in FIG. 15(d), as sketched below.
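- A minimal sketch of turning one cluster into an output region (an axis-aligned rectangle here; the figure uses an enclosing ellipse, and either enclosure works):

```python
import numpy as np

def enclosing_rectangle(cluster_points):
    """Axis-aligned rectangle enclosing one cluster's (y, x) points."""
    top, left = cluster_points.min(axis=0)
    bottom, right = cluster_points.max(axis=0)
    return int(top), int(left), int(bottom), int(right)
```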
- FIG. 16 is a diagram that shows one-dimensionally the relationship between the attractiveness distribution, the threshold value, and the generated cluster from another viewpoint.
- the horizontal axis represents the coordinates (the image is represented one-dimensionally), and the vertical axis represents the degree of attraction.
- Points whose attractiveness exceeds threshold A or threshold B are indicated by black dots.
- (The attractiveness graph takes discrete values, one per pixel.)
- With the conventional approach, the region where the attractiveness exceeds a predetermined value (here, threshold B) would simply be enclosed, for example by a rectangle that includes or inscribes all such points, and output as the region of interest; drawn one-dimensionally, this corresponds to the conventional ROI (1601).
- In contrast, with the clustering result based on threshold A or threshold B for the attractiveness, the region of interest can be extracted flexibly according to the data distribution.
- The distribution of attractiveness naturally varies from image to image, so there is no general rule relating the threshold, the amount of data obtained, and the number of clusters; but setting the threshold more appropriately can reduce the need for repeated clustering.
- As described above, the image processing apparatus according to the present invention can generate regions of interest from a single still image, a group of still images, or a moving image in accordance with the user's request (the shape, size, and number of regions of interest).
- it is also useful for a system for storing, managing or classifying images.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2007501675A JPWO2006082979A1 (ja) | 2005-02-07 | 2006-02-07 | 画像処理装置および画像処理方法 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2005031113 | 2005-02-07 | ||
| JP2005-031113 | 2005-02-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2006082979A1 true WO2006082979A1 (ja) | 2006-08-10 |
Family
ID=36777356
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2006/302059 Ceased WO2006082979A1 (ja) | 2005-02-07 | 2006-02-07 | 画像処理装置および画像処理方法 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20070201749A1 (ja) |
| JP (1) | JPWO2006082979A1 (ja) |
| WO (1) | WO2006082979A1 (ja) |
Families Citing this family (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2107787A1 (en) * | 2008-03-31 | 2009-10-07 | FUJIFILM Corporation | Image trimming device |
| US9087059B2 (en) | 2009-08-07 | 2015-07-21 | Google Inc. | User interface for presenting search results for multiple regions of a visual query |
| US9135277B2 (en) | 2009-08-07 | 2015-09-15 | Google Inc. | Architecture for responding to a visual query |
| US8670597B2 (en) * | 2009-08-07 | 2014-03-11 | Google Inc. | Facial recognition with social network aiding |
| US8253802B1 (en) * | 2009-09-01 | 2012-08-28 | Sandia Corporation | Technique for identifying, tracing, or tracking objects in image data |
| US9183224B2 (en) * | 2009-12-02 | 2015-11-10 | Google Inc. | Identifying matching canonical documents in response to a visual query |
| US9176986B2 (en) | 2009-12-02 | 2015-11-03 | Google Inc. | Generating a combination of a visual query and matching canonical document |
| US8811742B2 (en) | 2009-12-02 | 2014-08-19 | Google Inc. | Identifying matching canonical documents consistent with visual query structural information |
| US8805079B2 (en) | 2009-12-02 | 2014-08-12 | Google Inc. | Identifying matching canonical documents in response to a visual query and in accordance with geographic information |
| US20110128288A1 (en) * | 2009-12-02 | 2011-06-02 | David Petrou | Region of Interest Selector for Visual Queries |
| US9405772B2 (en) * | 2009-12-02 | 2016-08-02 | Google Inc. | Actionable search results for street view visual queries |
| US8977639B2 (en) | 2009-12-02 | 2015-03-10 | Google Inc. | Actionable search results for visual queries |
| US9852156B2 (en) | 2009-12-03 | 2017-12-26 | Google Inc. | Hybrid use of location sensor data and visual query to return local listings for visual query |
| JP5144789B2 (ja) * | 2011-06-24 | 2013-02-13 | 楽天株式会社 | 画像提供装置、画像処理方法、画像処理プログラム及び記録媒体 |
| US8935246B2 (en) | 2012-08-08 | 2015-01-13 | Google Inc. | Identifying textual terms in response to a visual query |
| US9298980B1 (en) * | 2013-03-07 | 2016-03-29 | Amazon Technologies, Inc. | Image preprocessing for character recognition |
| US10878024B2 (en) * | 2017-04-20 | 2020-12-29 | Adobe Inc. | Dynamic thumbnails |
| JP6938270B2 (ja) | 2017-08-09 | 2021-09-22 | キヤノン株式会社 | 情報処理装置および情報処理方法 |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS58219682A (ja) * | 1982-06-14 | 1983-12-21 | Fujitsu Ltd | 文字画像情報の読取方式 |
| JPH0785275A (ja) * | 1993-06-29 | 1995-03-31 | Fujitsu General Ltd | 画像抽出方法および装置 |
| JP2004220368A (ja) * | 2003-01-15 | 2004-08-05 | Sharp Corp | 安定度検証に特長をもつ画像処理手順設計エキスパートシステム |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1346321B1 (en) * | 2000-12-14 | 2005-09-21 | Matsushita Electric Works, Ltd. | Image processor and pattern recognition apparatus using the image processor |
| US7564994B1 (en) * | 2004-01-22 | 2009-07-21 | Fotonation Vision Limited | Classification system for consumer digital images using automatic workflow and face detection and recognition |
- 2006-02-07: WO PCT/JP2006/302059 patent/WO2006082979A1 (not_active, Ceased)
- 2006-02-07: US 11/547,643 patent/US20070201749A1 (not_active, Abandoned)
- 2006-02-07: JP 2007501675 patent/JPWO2006082979A1 (not_active, Withdrawn)
Cited By (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080187241A1 (en) * | 2007-02-05 | 2008-08-07 | Albany Medical College | Methods and apparatuses for analyzing digital images to automatically select regions of interest thereof |
| US8126267B2 (en) * | 2007-02-05 | 2012-02-28 | Albany Medical College | Methods and apparatuses for analyzing digital images to automatically select regions of interest thereof |
| JP2009003615A (ja) * | 2007-06-20 | 2009-01-08 | Nippon Telegr & Teleph Corp <Ntt> | 注目領域抽出方法、注目領域抽出装置、コンピュータプログラム、及び、記録媒体 |
| JP2009212740A (ja) * | 2008-03-04 | 2009-09-17 | Nittoh Kogaku Kk | 変化要因情報のデータの生成法および信号処理装置 |
| JP2011514789A (ja) * | 2008-03-20 | 2011-05-06 | インスティテュート フュール ラントファンクテクニーク ゲー・エム・ベー・ハー | ビデオ画像の小さな画面サイズへの適合方法 |
| JP2009295081A (ja) * | 2008-06-09 | 2009-12-17 | Iwasaki Electric Co Ltd | 目立ち画像生成装置、及び目立ち画像生成プログラム |
| US8698959B2 (en) | 2009-06-03 | 2014-04-15 | Thomson Licensing | Method and apparatus for constructing composite video images |
| WO2011074198A1 (ja) * | 2009-12-14 | 2011-06-23 | パナソニック株式会社 | ユーザインタフェース装置および入力方法 |
| CN102301316A (zh) * | 2009-12-14 | 2011-12-28 | 松下电器产业株式会社 | 用户界面装置以及输入方法 |
| CN102301316B (zh) * | 2009-12-14 | 2015-07-22 | 松下电器(美国)知识产权公司 | 用户界面装置以及输入方法 |
| US8830164B2 (en) | 2009-12-14 | 2014-09-09 | Panasonic Intellectual Property Corporation Of America | User interface device and input method |
| CN102906790B (zh) * | 2010-05-26 | 2015-10-07 | 松下电器(美国)知识产权公司 | 图像信息处理装置 |
| CN102906790A (zh) * | 2010-05-26 | 2013-01-30 | 松下电器产业株式会社 | 图像信息处理装置 |
| JP5837484B2 (ja) * | 2010-05-26 | 2015-12-24 | パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America | 画像情報処理装置 |
| WO2011148562A1 (ja) * | 2010-05-26 | 2011-12-01 | パナソニック株式会社 | 画像情報処理装置 |
| US8908976B2 (en) | 2010-05-26 | 2014-12-09 | Panasonic Intellectual Property Corporation Of America | Image information processing apparatus |
| JP2012022414A (ja) * | 2010-07-12 | 2012-02-02 | Nippon Hoso Kyokai <Nhk> | 関心密度分布モデル化装置及びそのプログラム |
| JPWO2013128522A1 (ja) * | 2012-02-29 | 2015-07-30 | 日本電気株式会社 | 配色判定装置、配色判定方法および配色判定プログラム |
| US8736634B2 (en) | 2012-02-29 | 2014-05-27 | Nec Corporation | Color scheme changing apparatus, color scheme changing method, and color scheme changing program |
| WO2013128522A1 (ja) * | 2012-02-29 | 2013-09-06 | 日本電気株式会社 | 配色判定装置、配色判定方法および配色判定プログラム |
| WO2013128523A1 (ja) * | 2012-02-29 | 2013-09-06 | 日本電気株式会社 | 配色変更装置、配色変更方法および配色変更プログラム |
| JP5418740B1 (ja) * | 2012-02-29 | 2014-02-19 | 日本電気株式会社 | 配色変更装置、配色変更方法および配色変更プログラム |
| US9600905B2 (en) | 2012-02-29 | 2017-03-21 | Nec Corporation | Color-scheme determination device, color-scheme determination method, and color-scheme determination program |
| KR101341576B1 (ko) * | 2012-11-20 | 2013-12-13 | 중앙대학교 산학협력단 | 등고선 기반 관심영역 결정방법 및 장치 |
| KR102433384B1 (ko) | 2016-01-05 | 2022-08-18 | 한국전자통신연구원 | 텍스처 이미지 처리 장치 및 방법 |
| JP2017224068A (ja) * | 2016-06-14 | 2017-12-21 | 大学共同利用機関法人自然科学研究機構 | 質感評価システム |
| US10878265B2 (en) | 2017-03-13 | 2020-12-29 | Ricoh Company, Ltd. | Image processing device and image processing method for setting important areas in an image |
| CN112132135A (zh) * | 2020-08-27 | 2020-12-25 | 南京南瑞信息通信科技有限公司 | 一种基于图像处理的电网传输线检测方法、存储介质 |
| CN112132135B (zh) * | 2020-08-27 | 2023-11-28 | 南京南瑞信息通信科技有限公司 | 一种基于图像处理的电网传输线检测方法、存储介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20070201749A1 (en) | 2007-08-30 |
| JPWO2006082979A1 (ja) | 2008-06-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2006082979A1 (ja) | 画像処理装置および画像処理方法 | |
| CN109918969B (zh) | 人脸检测方法及装置、计算机装置和计算机可读存储介质 | |
| CN101523412B (zh) | 基于人脸的图像聚类 | |
| US8345974B2 (en) | Hierarchical recursive image segmentation | |
| CN100405388C (zh) | 特定被摄体检测装置 | |
| JP5283088B2 (ja) | 画像検索装置および同画像検索装置に適用される画像検索用コンピュータプログラム | |
| JP6192271B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
| US8488190B2 (en) | Image processing apparatus, image processing apparatus control method, and storage medium storing program | |
| JP2006285944A (ja) | 被写体の構成要素を検出する装置および方法 | |
| JP2008097607A (ja) | 入力イメージを自動的に分類する方法 | |
| JP4772819B2 (ja) | 画像検索装置および画像検索方法 | |
| JP7077046B2 (ja) | 情報処理装置、被写体の判別方法及びコンピュータプログラム | |
| JP2005190400A (ja) | 顔画像検出方法及び顔画像検出システム並びに顔画像検出プログラム | |
| KR100836740B1 (ko) | 영상 데이터 처리 방법 및 그에 따른 시스템 | |
| JP3708042B2 (ja) | 画像処理方法及びプログラム | |
| JP6546385B2 (ja) | 画像処理装置及びその制御方法、プログラム | |
| JP2009123234A (ja) | オブジェクト識別方法および装置ならびにプログラム | |
| JP3720892B2 (ja) | 画像処理方法および画像処理装置 | |
| JP4285640B2 (ja) | オブジェクト識別方法および装置ならびにプログラム | |
| CN114359090A (zh) | 一种口腔ct影像的数据增强方法 | |
| CN113723453A (zh) | 花粉图像分类方法及装置 | |
| JP4285644B2 (ja) | オブジェクト識別方法および装置ならびにプログラム | |
| CN100565556C (zh) | 特定被摄体检测装置 | |
| WO2020208955A1 (ja) | 情報処理装置、情報処理装置の制御方法及びプログラム | |
| CN107545261A (zh) | 文本检测的方法及装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | WWE | Wipo information: entry into national phase | Ref document number: 2007501675; Country of ref document: JP |
| | ENP | Entry into the national phase | Ref document number: 2007201749; Country of ref document: US; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 11547643; Country of ref document: US |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 06713202; Country of ref document: EP; Kind code of ref document: A1 |
| | WWW | Wipo information: withdrawn in national office | Ref document number: 6713202; Country of ref document: EP |