
CN114627376B - Vegetation classification method and device - Google Patents

Vegetation classification method and device

Info

Publication number
CN114627376B
CN114627376B CN202210282035.3A CN114627376A
Authority
CN
China
Prior art keywords
target
image
determining
vegetation
pixel information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210282035.3A
Other languages
Chinese (zh)
Other versions
CN114627376A (en)
Inventor
房铄东
赵宝堃
莫豪文
吴化禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN202210282035.3A
Publication of CN114627376A
Application granted
Publication of CN114627376B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vegetation classification method and device. The method includes preprocessing an obtained remote sensing image of a target area to obtain a target image; determining a buffer area based on the remote sensing image of the target area; counting pixel information of the buffer area and determining a target segmentation threshold based on the pixel information; and classifying the target image based on the target segmentation threshold to obtain a vegetation area. The target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance, determined from the pixel information by iteratively advancing the bisection axis one step length at a time. Because this way of determining the threshold operates directly on planar floating-point data, high-precision floating-point data is not compressed, and vegetation classification accuracy is improved.

Description

Vegetation classification method and device
Technical Field
The invention relates to the technical field of data processing, in particular to a vegetation classification method and device.
Background
Because vegetation brings economic, environmental and other value, accurately identifying vegetation has become an important problem.
At present, vegetation information is generally identified and extracted from ground-object spectral features, i.e., by exploiting the fact that different objects reflect or absorb different wavelengths of visible light differently. Common approaches include the single-band threshold method, the inter-band relation method and the index model method; all three depend strongly on threshold segmentation and belong to the family of threshold classification methods. Vegetation/non-vegetation classification based on a vegetation index model is likewise overly dependent on the choice of threshold, and the quality of the threshold selection method directly determines the accuracy of the final vegetation classification. Typical threshold selection methods include manual empirical selection, histogram analysis, Otsu's method, adaptive thresholding and the like. Manual empirical selection relies heavily on expert judgment and is difficult to automate; histogram analysis is only suitable for images with an obvious contrast between foreground and background colors, such as binary images; adaptive thresholding is usually applied to threshold optimization over local image regions, typically to cope with uneven illumination; and the classical Otsu method only supports integer gray-level values such as {0, 1, 2, ...}, whereas remote sensing indices are usually floating-point data, so it cannot be applied directly.
Therefore, existing threshold selection methods cannot cover all application scenarios, which reduces the accuracy of vegetation classification.
Disclosure of Invention
To address these problems, the invention provides a vegetation classification method and device that improve vegetation classification accuracy.
In order to achieve the above object, the present invention provides the following technical solutions:
A method of vegetation classification comprising:
preprocessing the obtained remote sensing image of the target area to obtain a target image;
determining a buffer area based on the remote sensing image of the target area;
counting pixel information of the buffer area, and determining a target segmentation threshold based on the pixel information, wherein the target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance determined according to a target processing mode and the pixel information, and the target processing mode is a mode of iteratively advancing the bisection axis by one step length at a time;
And classifying the target image based on the target segmentation threshold value to obtain a vegetation region.
Optionally, the preprocessing the obtained remote sensing image of the target area to obtain a target image includes:
screening the obtained remote sensing image of the target area to obtain a first image;
Removing the cloud in the first image, and extracting the median of the first image from which the cloud is removed;
synthesizing the first image with the cloud removed based on the median to obtain a synthesized image, and performing band calculation according to a vegetation index model to obtain a normalized vegetation index;
and determining a target image corresponding to the normalized vegetation index in the synthesized image.
Optionally, the determining the buffer area based on the remote sensing image of the target area includes:
performing edge detection on the remote sensing image of the target area to obtain an edge pixel;
and performing pixel expansion on each edge pixel to obtain a buffer area.
Optionally, the counting pixel information of the buffer area and determining the target segmentation threshold based on the pixel information includes:
processing the vegetation index data of the buffer area to obtain a one-dimensional ordered array;
counting the corresponding pixel information in the one-dimensional ordered array to obtain a minimum value and a maximum value;
determining a bisection axis based on a preset step length and the minimum value;
classifying the one-dimensional ordered array based on the bisection axis, and determining the probability of points falling into each class;
calculating an inter-class variance from the probabilities, adding one step length to the bisection axis to obtain a new bisection axis, and determining the inter-class variance with the new bisection axis, iterating until the bisection axis exceeds the maximum value, at which point iteration stops;
and determining the bisection axis corresponding to the maximum inter-class variance as the target segmentation threshold.
Optionally, the method further comprises:
and rendering the target image based on the color characteristics corresponding to the vegetation region to obtain a rendered image.
A vegetation classification device comprising:
the preprocessing unit is used for preprocessing the obtained remote sensing image of the target area to obtain a target image;
the first determining unit is used for determining a buffer area based on the remote sensing image of the target area;
The second determining unit is used for counting pixel information of the buffer area and determining a target segmentation threshold based on the pixel information, wherein the target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance determined according to a target processing mode and the pixel information, and the target processing mode is a mode of iteratively advancing the bisection axis by one step length at a time;
and the classification unit is used for classifying the target image based on the target segmentation threshold value to obtain a vegetation region.
Optionally, the preprocessing unit includes:
the screening subunit is used for screening the obtained remote sensing image of the target area to obtain a first image;
An extraction subunit, configured to remove a cloud in the first image, and extract a median of the first image from which the cloud is removed;
the calculating subunit is used for synthesizing the first image with the cloud removed based on the median to obtain a synthesized image, and carrying out wave band calculation according to a vegetation index model to obtain a normalized vegetation index;
and the first determining subunit is used for determining a target image corresponding to the normalized vegetation index in the synthetic image.
Optionally, the first determining unit includes:
the detection subunit is used for carrying out edge detection on the remote sensing image of the target area to obtain an edge pixel;
and the expansion processing subunit is used for carrying out pixel expansion on each edge pixel to obtain a buffer area.
Optionally, the second determining unit is specifically configured to:
processing the vegetation index data of the buffer area to obtain a one-dimensional ordered array;
counting the corresponding pixel information in the one-dimensional ordered array to obtain a minimum value and a maximum value;
determining a bisection axis based on a preset step length and the minimum value;
classifying the one-dimensional ordered array based on the bisection axis, and determining the probability of points falling into each class;
calculating an inter-class variance from the probabilities, adding one step length to the bisection axis to obtain a new bisection axis, and determining the inter-class variance with the new bisection axis, iterating until the bisection axis exceeds the maximum value, at which point iteration stops;
and determining the bisection axis corresponding to the maximum inter-class variance as the target segmentation threshold.
Optionally, the apparatus further comprises:
And the rendering unit is used for rendering the target image based on the color characteristics corresponding to the vegetation area to obtain a rendered image.
Compared with the prior art, the vegetation classification method and device preprocess an obtained remote sensing image of a target area to obtain a target image; determine a buffer area based on the remote sensing image of the target area; count pixel information of the buffer area and determine a target segmentation threshold based on the pixel information; and classify the target image based on the target segmentation threshold to obtain a vegetation area. The target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance, determined from the pixel information by iteratively advancing the bisection axis one step length at a time. Because this way of determining the threshold operates directly on planar floating-point data, high-precision floating-point data is not compressed, and vegetation classification accuracy is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a vegetation classification method according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for determining a target segmentation threshold according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of extracting vegetation from a remote sensing image according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vegetation classification device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first and second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to the listed steps or elements but may include steps or elements not expressly listed.
In an embodiment of the present invention, a method for classifying vegetation is provided, referring to fig. 1, the method may include the following steps:
s101, preprocessing the obtained remote sensing image of the target area to obtain a target image.
The target area is the area whose vegetation needs to be classified, for example a designated area of interest. The remote sensing image of the target area can be obtained from an open-source remote sensing image database.
To improve processing accuracy, the obtained remote sensing image of the target area can be preprocessed, for example by removing interfering data or images and extracting key information. In one implementation of the embodiment of the invention, preprocessing the obtained remote sensing image of the target area to obtain a target image includes: screening the obtained remote sensing images of the target area to obtain a first image; removing clouds from the first image and extracting the median of the cloud-removed first image; synthesizing the cloud-removed first image based on the median to obtain a synthesized image, and performing band calculation according to a vegetation index model to obtain a normalized vegetation index; and determining, in the synthesized image, the target image corresponding to the normalized vegetation index.
When the first image is obtained, quality evaluation may be performed on the images to screen out those satisfying the processing conditions, after which a cloud removal algorithm can be used to remove cirrus clouds from the target-area images. The normalized vegetation index (Normalized Difference Vegetation Index, NDVI) is a model that extracts vegetation through ratio operations between bands: because of the composition of leaf cells, vegetation exhibits high reflectivity in the near-infrared band and strong absorption in the red band. NDVI is used to detect vegetation growth status and vegetation coverage and to eliminate some radiometric errors; it also reflects background effects beneath the plant canopy, such as soil, wet ground, snow, dead leaves and surface roughness, and is correlated with vegetation coverage.
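As a rough sketch (not code from the patent), the NDVI band calculation described above can be illustrated as follows; the band arrays and the small epsilon guard against division by zero are assumptions introduced for the example:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized vegetation index: (NIR - Red) / (NIR + Red).

    Vegetation reflects strongly in the near-infrared band and absorbs
    strongly in the red band, so NDVI is high over vegetated pixels.
    eps avoids division by zero on dark pixels (an assumption, not
    part of the patent's description).
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)
```

A vegetated pixel (high NIR, low red reflectance) yields an NDVI near 1, while bare soil or water yields values near or below 0, which is what makes the index separable by a threshold.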
S102, determining a buffer area based on the remote sensing image of the target area.
The corresponding buffer spacing may be set based on the spatial resolution of the vegetation index, and the corresponding buffer determined from that spacing. Alternatively, edge detection may be performed on the remote sensing image of the target area to obtain edge pixels, and pixel dilation applied to each edge pixel to obtain the buffer area. In this embodiment, to remove the influence of irrelevant background ground objects on the result, the Canny edge detection algorithm is used, and on that basis each edge pixel is dilated by a certain proportion to obtain the buffer area, so that the global threshold segmentation algorithm OTSU (Otsu's method) can concentrate on the edge region, improving the accuracy of threshold segmentation. Edge detection is a common feature extraction algorithm in image processing and computer vision; it detects the edge contours of targets in an image based on the fact that the image gradient attains a maximum at edge pixels.
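A minimal sketch of this buffer-generation step follows. It is an illustration under stated assumptions, not the patent's implementation: a simple gradient-magnitude threshold stands in for full Canny detection, and binary dilation is done with shifted-copy unions; `edge_thresh` and `dilate_iters` are hypothetical parameters.

```python
import numpy as np

def edge_buffer(img, edge_thresh=0.2, dilate_iters=2):
    """Mark high-gradient pixels as edges (a stand-in for Canny),
    then dilate them a few times to form a buffer zone around
    object boundaries."""
    gy, gx = np.gradient(img.astype(np.float64))
    edges = np.hypot(gx, gy) > edge_thresh
    buf = edges.copy()
    for _ in range(dilate_iters):
        # 4-neighbour binary dilation via unions of shifted copies
        p = np.pad(buf, 1)
        buf = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return buf
```

Restricting the subsequent threshold statistics to this buffer mask keeps the pixel sample concentrated near foreground/background boundaries, which is the stated motivation for the step.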
S103, counting pixel information of the buffer area, and determining a target segmentation threshold value based on the pixel information.
The method for determining the target segmentation threshold in the embodiment of the invention can be applied to planar floating-point data. It is realized mainly by an improved Otsu method; Otsu's method, also called the maximum inter-class variance method (OTSU), selects a threshold from statistics of pixel gray values so as to divide the image into a foreground (target) part and a background part.
Specifically, the target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance determined according to a target processing mode and the pixel information, where the target processing mode iteratively advances the bisection axis by one step length at a time.
In one implementation of the embodiment of the invention, counting the pixel information of the buffer area and determining the target segmentation threshold based on the pixel information includes: processing the vegetation index data of the buffer area to obtain a one-dimensional ordered array; counting the corresponding pixel information in the one-dimensional ordered array to obtain a minimum value and a maximum value; determining a bisection axis based on a preset step length and the minimum value; classifying the one-dimensional ordered array based on the bisection axis and determining the probability of points falling into each class; calculating the inter-class variance from the probabilities, adding one step length to the bisection axis to obtain a new bisection axis and determining the inter-class variance with the new bisection axis, iterating until the bisection axis exceeds the maximum value; and determining the bisection axis corresponding to the maximum inter-class variance as the target segmentation threshold.
S104, classifying the target image based on the target segmentation threshold value to obtain a vegetation region.
After the target segmentation threshold is determined, image classification can be performed to obtain the vegetation region. For convenient visual display, the target image can be rendered based on the color features corresponding to the vegetation region to obtain a rendered image: the extracted vegetation is overlaid on the true-color synthesized image, and the classification result is displayed as an image.
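The classification and rendering steps can be sketched as follows. This is an illustrative assumption, not the patent's code; the green overlay color and the blend factor `alpha` are arbitrary choices for the example.

```python
import numpy as np

def classify_vegetation(ndvi_img, threshold):
    """Binary classification: pixels whose index exceeds the target
    segmentation threshold are labelled vegetation (1), others 0."""
    return (np.asarray(ndvi_img) > threshold).astype(np.uint8)

def render_overlay(rgb, mask, color=(0, 255, 0), alpha=0.5):
    """Blend a solid color into an RGB image wherever mask == 1, to
    display the classification result on the true-color image."""
    out = np.asarray(rgb, dtype=np.float64).copy()
    sel = mask.astype(bool)
    out[sel] = (1 - alpha) * out[sel] + alpha * np.array(color, dtype=np.float64)
    return out.astype(np.uint8)
```

Calling `render_overlay(true_color, classify_vegetation(ndvi_img, t))` with the threshold `t` from the improved Otsu step would produce the kind of overlaid result image the visualization step describes.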
The invention provides a vegetation classification method that includes preprocessing an obtained remote sensing image of a target area to obtain a target image; determining a buffer area based on the remote sensing image of the target area; counting pixel information of the buffer area and determining a target segmentation threshold based on the pixel information; and classifying the target image based on the target segmentation threshold to obtain a vegetation area. The target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance, determined from the pixel information by iteratively advancing the bisection axis one step length at a time. Because this way of determining the threshold operates directly on planar floating-point data, high-precision floating-point data is not compressed, and vegetation classification accuracy is improved.
Referring to fig. 2, a flowchart of a method for determining a target segmentation threshold according to an embodiment of the present invention is shown. First, the vegetation index data of the edge buffer area is read into memory as an array and sorted to obtain a one-dimensional ordered array.
Next, the minimum and maximum values of the one-dimensional ordered array are determined and denoted min and max, and a step length step is set according to the target precision. The i-th bisection axis is pivot(i) = min + (i+1)·step. Taking the bisection axis pivot as the dividing point, the array is split into Class1 (min < Class1 <= pivot) and Class2 (pivot < Class2 <= max); the fractions of points falling into Class1 and Class2 are taken as probabilities p1 and p2, the means of the two classes are denoted mean1 and mean2, and these are substituted into the simplified inter-class variance formula σ²(i) = p1·p2·(mean1 − mean2)². The inter-class variance is stored in a hash table as map[pivot] = inter-class variance.
One step length is then added to the bisection axis, i.e., i = i + 1, and it is checked whether the new bisection axis is less than or equal to the maximum value max: if so, the computation continues with the new bisection axis; if not, the iteration ends. Finally, the bisection axis corresponding to the maximum inter-class variance in the hash table is determined as the target segmentation threshold.
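Putting the loop of fig. 2 together, a minimal floating-point Otsu sketch might look like this. It is an illustration under stated assumptions: the default `step` value and the `searchsorted`-based class split are implementation choices of the example, not mandated by the patent.

```python
import numpy as np

def float_otsu(values, step=0.01):
    """Floating-point Otsu threshold per the iteration described above:
    slide a bisection axis from min toward max in fixed steps and keep
    the axis that maximizes the inter-class variance
    p1 * p2 * (mean1 - mean2)**2."""
    data = np.sort(np.asarray(values, dtype=np.float64).ravel())
    lo, hi = data[0], data[-1]
    n = data.size
    best_var, best_pivot = -1.0, lo
    pivot = lo + step  # pivot(0) = min + 1 * step
    while pivot <= hi:
        k = np.searchsorted(data, pivot, side="right")  # |Class1| (values <= pivot)
        if 0 < k < n:
            p1, p2 = k / n, (n - k) / n
            m1, m2 = data[:k].mean(), data[k:].mean()
            var = p1 * p2 * (m1 - m2) ** 2
            if var > best_var:
                best_var, best_pivot = var, pivot
        pivot += step  # advance the bisection axis by one step length
    return best_pivot
```

With a step of 0.01 on index values in [−1, 1], the scan runs at most a couple of hundred iterations over the sorted buffer-area pixels, so no quantization of the floating-point data is needed, which is the stated advantage over the classical integer-gray-level Otsu method.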
Referring to fig. 3, a flow chart of extracting vegetation from a remote sensing image according to an embodiment of the present invention is shown. It comprises the following steps:
Data preprocessing: remote sensing images of the designated area are obtained from an open-source remote sensing image database; quality evaluation is performed on the images and those meeting the conditions are screened out; a cloud removal algorithm is used to remove cirrus clouds from the area images; the median of each image is taken and the images are synthesized; and band calculation is performed according to a vegetation index model to obtain the NDVI (normalized vegetation index) result.
Buffer generation: to remove the influence of irrelevant background ground objects on the result, the Canny algorithm is used for edge detection, and on that basis each edge pixel is dilated by a certain proportion to obtain the buffer area, so that OTSU (a global threshold segmentation algorithm) can concentrate on the edge region, improving the accuracy of threshold segmentation.
Threshold segmentation: pixel information is counted over the edge buffer area, and the optimal segmentation threshold is calculated according to the improved OTSU algorithm.
Visualization: the image is binarized with the selected threshold to obtain the vegetation region, and the extracted vegetation is overlaid on the true-color synthesized image to display the classification result.
The index threshold segmentation method of the invention, based on the improved Otsu method, avoids compressing high-precision floating-point data and greatly improves segmentation accuracy; it is real-time, accurate, fast and convenient, and is fully automated. It can be applied to agricultural credit scenarios, saving the labor and material costs of traditional manual farmland measurement and effectively shortening the loan application period.
In another embodiment of the present invention, there is also provided a vegetation classification device, referring to fig. 4, comprising:
a preprocessing unit 401, configured to preprocess an obtained remote sensing image of a target area to obtain a target image;
a first determining unit 402, configured to determine a buffer area based on the remote sensing image of the target area;
A second determining unit 403, configured to count pixel information of the buffer area and determine a target segmentation threshold based on the pixel information, where the target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance determined according to a target processing mode and the pixel information, and the target processing mode is a mode of iteratively advancing the bisection axis by one step length at a time;
and the classification unit 404 is configured to perform classification processing on the target image based on the target segmentation threshold value, so as to obtain a vegetation region.
The embodiment of the invention provides a vegetation classification device comprising a preprocessing unit for preprocessing an obtained remote sensing image of a target area to obtain a target image; a first determining unit for determining a buffer area based on the remote sensing image of the target area; a second determining unit for counting pixel information of the buffer area and determining a target segmentation threshold based on the pixel information; and a classification unit for classifying the target image based on the target segmentation threshold to obtain a vegetation area. The target segmentation threshold is the bisection axis corresponding to the maximum inter-class variance, determined from the pixel information by iteratively advancing the bisection axis one step length at a time. Because this way of determining the threshold operates directly on planar floating-point data, high-precision floating-point data is not compressed, and vegetation classification accuracy is improved.
Optionally, the preprocessing unit includes:
the screening subunit is used for screening the obtained remote sensing image of the target area to obtain a first image;
An extraction subunit, configured to remove a cloud in the first image, and extract a median of the first image from which the cloud is removed;
the calculating subunit is used for synthesizing the first image with the cloud removed based on the median to obtain a synthesized image, and carrying out wave band calculation according to a vegetation index model to obtain a normalized vegetation index;
and the first determining subunit is used for determining a target image corresponding to the normalized vegetation index in the synthetic image.
Optionally, the first determining unit includes:
the detection subunit is used for carrying out edge detection on the remote sensing image of the target area to obtain an edge pixel;
and the expansion processing subunit is used for carrying out pixel expansion on each edge pixel to obtain a buffer area.
Optionally, the second determining unit is specifically configured to:
processing the vegetation index data of the buffer area to obtain a one-dimensional ordered array;
counting the corresponding pixel information in the one-dimensional ordered array to obtain a minimum value and a maximum value;
determining a bisection axis based on a preset step length and the minimum value;
classifying the one-dimensional ordered array based on the bisection axis, and determining the probability of points falling into each class;
calculating an inter-class variance from the probabilities, adding one step length to the bisection axis to obtain a new bisection axis, and determining the inter-class variance with the new bisection axis, iterating until the bisection axis exceeds the maximum value, at which point iteration stops;
and determining the bisection axis corresponding to the maximum inter-class variance as the target segmentation threshold.
Optionally, the apparatus further comprises:
And the rendering unit is used for rendering the target image based on the color characteristics corresponding to the vegetation area to obtain a rendered image.
Based on the foregoing embodiments, embodiments of the present application provide a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the vegetation classification method as described in any of the above.
The embodiment of the invention also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the steps of the vegetation classification method described above.
The processor may be at least one of an Application-Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device implementing the above processor function may be otherwise constituted; embodiments of the invention are not limited in this regard.
The computer storage medium/memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM), or any combination thereof, and may be any terminal including one or more of the above memories, such as a mobile phone, a computer, a tablet device, a personal digital assistant, or the like.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and other divisions are possible in actual implementation, such as combining multiple units or components or integrating them into another system, or omitting or not performing some features. In addition, the components shown or discussed may be coupled, directly coupled, or communicatively connected to each other through interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, each unit may serve as a separate unit, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit. Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware associated with program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the storage medium includes a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the application can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
The foregoing is merely illustrative of the present application and is not intended to limit it; any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed by the present application, which shall fall within its scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be cross-referenced. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method of classifying vegetation, comprising:
preprocessing the obtained remote sensing image of the target area to obtain a target image;
performing edge detection on the remote sensing image of the target area to obtain edge pixels, and performing pixel dilation on each edge pixel to obtain a buffer area;
Counting pixel information of the buffer area, and determining a target segmentation threshold based on the pixel information, wherein the target segmentation threshold is the split axis corresponding to the maximum inter-class variance determined according to a target processing mode and the pixel information, and the target processing mode is a mode of iteratively adding one step length to the split axis;
Classifying the target image based on the target segmentation threshold to obtain a vegetation region;
The counting of the pixel information of the buffer area and the determination of the target segmentation threshold based on the pixel information comprise the following steps:
processing the vegetation index data of the buffer area to obtain a one-dimensional ordered array;
Counting the corresponding pixel information in the one-dimensional ordered array to obtain a minimum value and a maximum value;
determining a split axis based on a preset step length and the minimum value;
classifying the one-dimensional ordered array based on the split axis, and determining the probability of points falling into each class;
calculating the inter-class variance from the probabilities, adding one step length to the split axis to obtain a new split axis, and determining the inter-class variance with the new split axis, stopping the iteration once the split axis exceeds the maximum value;
determining the split axis corresponding to the maximum inter-class variance as the target segmentation threshold;
the method for determining the target segmentation threshold is applicable to threshold segmentation of planar floating point data.
2. The method according to claim 1, wherein preprocessing the obtained remote sensing image of the target area to obtain the target image comprises:
screening the obtained remote sensing image of the target area to obtain a first image;
Removing the cloud in the first image, and extracting the median of the first image from which the cloud is removed;
synthesizing the first image with the cloud removed based on the median to obtain a synthesized image, and performing band calculation according to a vegetation index model to obtain a normalized vegetation index;
and determining a target image corresponding to the normalized vegetation index in the synthesized image.
3. The method according to claim 1, wherein the method further comprises:
and rendering the target image based on the color characteristics corresponding to the vegetation region to obtain a rendered image.
4. A vegetation classification device, comprising:
the preprocessing unit is used for preprocessing the obtained remote sensing image of the target area to obtain a target image;
the first determining unit is used for carrying out edge detection on the remote sensing image of the target area to obtain edge pixels, and carrying out pixel expansion on each edge pixel to obtain a buffer area;
The second determining unit is used for counting pixel information of the buffer area and determining a target segmentation threshold based on the pixel information, wherein the target segmentation threshold is the split axis corresponding to the maximum inter-class variance determined according to a target processing mode and the pixel information, and the target processing mode is a mode of iteratively adding one step length to the split axis;
The classification unit is used for classifying the target image based on the target segmentation threshold value to obtain a vegetation region;
The second determining unit is specifically configured to:
processing the vegetation index data of the buffer area to obtain a one-dimensional ordered array;
Counting the corresponding pixel information in the one-dimensional ordered array to obtain a minimum value and a maximum value;
determining a split axis based on a preset step length and the minimum value;
classifying the one-dimensional ordered array based on the split axis, and determining the probability of points falling into each class;
calculating the inter-class variance from the probabilities, adding one step length to the split axis to obtain a new split axis, and determining the inter-class variance with the new split axis, stopping the iteration once the split axis exceeds the maximum value;
determining the split axis corresponding to the maximum inter-class variance as the target segmentation threshold;
the method for determining the target segmentation threshold is applicable to threshold segmentation of planar floating point data.
5. The apparatus of claim 4, wherein the preprocessing unit comprises:
the screening subunit is used for screening the obtained remote sensing image of the target area to obtain a first image;
An extraction subunit, configured to remove a cloud in the first image, and extract a median of the first image from which the cloud is removed;
the calculating subunit is used for synthesizing the first image with the cloud removed based on the median to obtain a synthesized image, and performing band calculation according to a vegetation index model to obtain a normalized vegetation index;
and the first determining subunit is used for determining a target image corresponding to the normalized vegetation index in the synthetic image.
6. The apparatus of claim 4, wherein the apparatus further comprises:
And the rendering unit is used for rendering the target image based on the color characteristics corresponding to the vegetation region to obtain a rendered image.
CN202210282035.3A 2022-03-22 2022-03-22 Vegetation classification method and device Active CN114627376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210282035.3A CN114627376B (en) 2022-03-22 2022-03-22 Vegetation classification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210282035.3A CN114627376B (en) 2022-03-22 2022-03-22 Vegetation classification method and device

Publications (2)

Publication Number Publication Date
CN114627376A CN114627376A (en) 2022-06-14
CN114627376B true CN114627376B (en) 2025-09-09

Family

ID=81904869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210282035.3A Active CN114627376B (en) 2022-03-22 2022-03-22 Vegetation classification method and device

Country Status (1)

Country Link
CN (1) CN114627376B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116258869B (en) * 2023-01-10 2023-08-18 滁州学院 A Method for Extracting the Boundary Lines of Phyllostachys pubescens Forest Based on Sentinel-2 Remote Sensing Data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648347A (en) * 2019-09-24 2020-01-03 北京航天宏图信息技术股份有限公司 Coastline extraction method and device based on remote sensing image
CN113255452A (en) * 2021-04-26 2021-08-13 中国自然资源航空物探遥感中心 Extraction method and extraction system of target water body

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8145677B2 (en) * 2007-03-27 2012-03-27 Faleh Jassem Al-Shameri Automated generation of metadata for mining image and text data
CN103679202B (en) * 2013-12-17 2017-04-12 中国测绘科学研究院 Method and device suitable for classifying vegetation through optical remote sensing satellite images
CN107194942B (en) * 2017-03-27 2020-11-10 广州地理研究所 Method for determining image classification segmentation scale threshold
CN109034026B (en) * 2018-07-16 2021-06-25 中国科学院东北地理与农业生态研究所 A method and system for extracting mangroves from water and land areas in remote sensing images
CN110390267B (en) * 2019-06-25 2021-06-01 东南大学 Mountain landscape building extraction method and device based on high-resolution remote sensing image
CN110986884A (en) * 2019-11-21 2020-04-10 吉林省水利水电勘测设计研究院 Unmanned aerial vehicle-based aerial survey data preprocessing and vegetation rapid identification method
CN112907587B (en) * 2021-04-01 2022-03-01 西南石油大学 High mountain forest line extraction method based on Otsu and edge detection algorithm of GEE

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648347A (en) * 2019-09-24 2020-01-03 北京航天宏图信息技术股份有限公司 Coastline extraction method and device based on remote sensing image
CN113255452A (en) * 2021-04-26 2021-08-13 中国自然资源航空物探遥感中心 Extraction method and extraction system of target water body

Also Published As

Publication number Publication date
CN114627376A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
Sadeghi-Tehran et al. Multi-feature machine learning model for automatic segmentation of green fractional vegetation cover for high-throughput field phenotyping
US8983200B2 (en) Object segmentation at a self-checkout
CN110648347A (en) Coastline extraction method and device based on remote sensing image
CN112001374B (en) Cloud detection method and device for hyperspectral image
CN111598827A (en) Appearance flaw detection method, electronic device and storage medium
CN111666900B (en) Method and device for acquiring land cover classification map based on multi-source remote sensing images
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN109859257B (en) Skin image texture evaluation method and system based on texture directionality
CN108229232B (en) Method and device for scanning two-dimensional codes in batch
CN109726649B (en) Remote sensing image cloud detection method, system and electronic equipment
CN111339948A (en) An automatic identification method for newly added buildings in high-resolution remote sensing images
CN114627376B (en) Vegetation classification method and device
Wu et al. Automatic kernel counting on maize ear using RGB images
CN110070545B (en) A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density
CN115294447A (en) Tool checking method, system, computer equipment and storage medium
CN111062341A (en) Video image area classification method, device, equipment and storage medium
CN107121681B (en) Residential area extraction system based on high score satellite remote sensing date
CN115908774A (en) Quality detection method and device of deformed material based on machine vision
CN112686222B (en) Method and system for detecting ship target by satellite-borne visible light detector
CN118296170B (en) Warehouse entry preprocessing method and system for remote sensing images
Chen et al. 2d tree detection in large urban landscapes using aerial lidar data
CN115601655A (en) Water body information identification method and device based on satellite remote sensing and readable medium
CN112184745A (en) Image segmentation method, segmentation device and terminal equipment
CN111199228A (en) License plate positioning method and device
CN111079797A (en) Image classification method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant