Disclosure of Invention
The present application provides a remote sensing image classification method, a remote sensing image classification device, an electronic device and a readable storage medium, so as to improve the accuracy of remote sensing image classification.
In a first aspect, a remote sensing image classification method is provided, which includes:
acquiring remote sensing image data to be tested;
and classifying the remote sensing image data to be tested based on a preset remote sensing image classification model to obtain the classification accuracy for the remote sensing image data to be tested.
In one possible implementation, the acquiring remote sensing image data to be tested includes:
acquiring target domain remote sensing image data;
and extracting a preset number of remote sensing image data in the target domain remote sensing image data as to-be-tested remote sensing image data.
In one possible implementation, the method further comprises:
and evaluating the remote sensing image classification model according to the classification accuracy to obtain an evaluation result of the remote sensing image classification model, and determining the network performance of the remote sensing image classification model according to the evaluation result.
In one possible implementation manner, before the classifying the remote sensing image data to be tested based on the preset remote sensing image classification model, the method includes:
constructing a remote sensing image classification model;
obtaining sample remote sensing image data;
and inputting the sample remote sensing image data into an objective function for model training to obtain the remote sensing image classification model.
In one possible implementation, the constructing a remote sensing image classification model includes:
adjusting the number of neurons of a classification layer of a preset first model to obtain a target classifier;
connecting, after the target classifier, a first network consisting of fully connected layers to obtain a source domain classifier;
accessing a certain number of fully connected layers behind an encoder of the first model to form a distance estimator;
and constructing the remote sensing image classification model according to the source domain classifier and the distance estimator.
In one possible implementation manner, the sample remote sensing image data comprises source domain remote sensing image data and residual target domain remote sensing image data, and the residual target domain remote sensing image data is the remote sensing image data which is obtained by removing the remote sensing image data to be tested from the target domain remote sensing image data.
In a second aspect, a remote sensing image classification device is provided, which includes:
the acquisition module is used for acquiring remote sensing image data to be tested;
and the first processing module is used for performing classification processing on the remote sensing image data to be tested based on a preset remote sensing image classification model to obtain the classification accuracy for the remote sensing image data to be tested.
In one possible implementation manner, the obtaining module is configured to obtain target domain remote sensing image data; and extracting a preset number of remote sensing image data in the target domain remote sensing image data as to-be-tested remote sensing image data.
In one possible implementation, the method further includes:
and the second processing module is used for evaluating the remote sensing image classification model according to the classification accuracy to obtain an evaluation result of the remote sensing image classification model, so as to determine the network performance of the remote sensing image classification model according to the evaluation result.
In one possible implementation, the method further includes:
the third processing module is used for constructing a remote sensing image classification model; obtaining sample remote sensing image data; and inputting the sample remote sensing image data into an objective function for model training to obtain the remote sensing image classification model.
In a possible implementation manner, the third processing module is configured to adjust the number of neurons in a classification layer of a preset first model to obtain a target classifier; connect, after the target classifier, a first network consisting of fully connected layers to obtain a source domain classifier; connect a certain number of fully connected layers behind an encoder of the first model to form a distance estimator; and construct the remote sensing image classification model according to the source domain classifier and the distance estimator.
In one possible implementation manner, the sample remote sensing image data comprises source domain remote sensing image data and residual target domain remote sensing image data, and the residual target domain remote sensing image data is the remote sensing image data which is obtained by removing the remote sensing image data to be tested from the target domain remote sensing image data.
In a third aspect, an electronic device is provided, including: a processor and a memory;
the memory for storing a computer program;
the processor is used for executing the remote sensing image classification method by calling the computer program.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when run on a computer, enables the computer to perform the above-described remote sensing image classification method.
By means of the above technical solution, the technical solution provided by the present application has at least the following advantages:
in the present application, a remote sensing image classification model is introduced to classify the remote sensing image data to be tested, and the classification accuracy is improved compared with existing remote sensing image classification approaches.
Detailed Description
The present application provides a method and an apparatus for classifying remote sensing images, an electronic device and a readable storage medium, and the following describes in detail embodiments of the present application with reference to the accompanying drawings.
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
At present, the dominant approach in transfer learning is domain adaptation. Domain adaptation methods mainly learn domain-invariant features to bridge the source domain and target domain data, so that target domain data can be classified and recognized directly using a model trained on the source domain. With the continuous development of deep learning, deep neural networks have become the key technology for image feature extraction and classification, so transfer learning also performs domain adaptation on deep neural networks, so that a deep neural network trained with labelled source domain data achieves good classification performance on unlabelled target domain data.
Some existing partial transfer learning algorithms reduce the difference between the feature distributions of the source domain and target domain data by selecting or re-weighting source domain samples, while others match the source and target feature distributions by mapping the source domain distribution into a feature transformation space of the target domain; both improve the transfer performance of the model. The deep domain confusion (DDC) method adds an adaptation layer to an existing deep neural network to form a new classification model, maps the adaptation-layer features learned by the network into a reproducing kernel Hilbert space, and then adapts the source and target domain data features using the maximum mean discrepancy (MMD), thereby improving the transfer performance of the network. The deep adaptation network improves transfer performance by adapting multiple layers of network features with MMD and multi-kernel functions. The joint adaptation network improves transfer performance by matching the joint distributions of multiple layers of the two domains using a joint maximum mean discrepancy. The deep correlation alignment method aligns the second-order statistics of the source and target domain features through a nonlinear transformation, realizing feature adaptation and reducing the feature distribution difference between the data sets. The adversarial discriminative domain adaptation algorithm applies the idea of generative adversarial training to cross-domain recognition.
In that algorithm, the feature extractors for the source domain and target domain data do not share parameters; the network is trained adversarially so that the trained source and target feature extractors become as similar as possible, thereby achieving domain adaptation.
As shown in fig. 1, a schematic flow chart of a remote sensing image classification method provided by the present application is provided, where the method includes the following steps:
step S101, obtaining remote sensing image data to be tested;
and S102, classifying the remote sensing image data to be tested based on a preset remote sensing image classification model to obtain the classification accuracy of the remote sensing image data to be tested.
In the present application, a remote sensing image classification model is introduced to classify the remote sensing image data to be tested, and the classification accuracy is improved compared with existing remote sensing image classification approaches.
Based on the technical solution provided by the present application, the following explains the technical solution in detail, as shown in fig. 2, which is a specific processing flow chart of a possible implementation manner of the remote sensing image classification method provided by the present application.
In a possible implementation manner, before remote sensing image classification based on the remote sensing image classification model is performed, the model needs to be trained for image classification. The training process can be performed in the electronic device; through continuous training with a large amount of sample remote sensing image data, the classification result obtained when the remote sensing image classification model is used for classification becomes increasingly accurate, thereby improving the classification accuracy.
The construction and training of the remote sensing image classification model can comprise the following processes:
S1. Model construction
Based on the existing AlexNet model, the number of neurons of the AlexNet classification layer (the last fully connected layer) is adjusted to obtain a target classifier; then a residual network (ResidualNet) consisting of two c × c fully connected layers (Res_fc1, Res_fc2) is connected after the target classifier to obtain a source domain classifier, where c is the number of data classes; three fully connected layers (dis_fc1, dis_fc2 and dis_fc3) are connected behind the AlexNet feature extractor to form the Wasserstein distance estimator, where the first two of the three fully connected layers are c × c fully connected layers and the last layer has only one neuron.
The model comprises the following components:
(1) Feature extractor (Encoder): composed of the first 7 layers of the AlexNet model, used to learn cross-domain domain-invariant features. For a sample x ∈ R^n, the feature extractor learns a feature mapping function f_g: R^n → R^d through network training to extract d-dimensional features of the sample remote sensing image data; the feature extractor parameters are θ_g.
(2) Classification layer f_c: used to classify the extracted features; the classification layer parameters are θ_c.
(3) Residual network f_r: used to connect the target classifier and the source classifier for classifier adaptation; the residual network parameters are θ_r.
(4) Wasserstein distance estimator f_w: used for feature adaptation between the source domain data features and the target domain data features; the estimator parameters are θ_w.
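As a rough arithmetic illustration of the construction above (taking the description literally: Res_fc1 and Res_fc2 are c × c fully connected layers, dis_fc1 and dis_fc2 are c × c layers, and dis_fc3 has a single neuron; the true input width of dis_fc1 would depend on the encoder's feature dimension d, which is not stated, so this is only a sketch), the parameters added on top of AlexNet can be counted as follows:

```python
# Hypothetical parameter count for the layers added to AlexNet, as described
# above: two c x c fully connected layers (Res_fc1, Res_fc2) forming the
# residual network, plus two c x c layers (dis_fc1, dis_fc2) and a final
# single-neuron layer (dis_fc3) forming the Wasserstein distance estimator.

def fc_params(n_in, n_out):
    """Parameters of one fully connected layer: weights plus biases."""
    return n_in * n_out + n_out

def added_params(c):
    residual = 2 * fc_params(c, c)                      # Res_fc1, Res_fc2
    estimator = 2 * fc_params(c, c) + fc_params(c, 1)   # dis_fc1 .. dis_fc3
    return residual + estimator

# Example: c = 19 classes, as in the WHU-RS19 data set used later.
print(added_params(19))  # 1540
```

Such a count only covers the newly added layers; the AlexNet backbone itself contributes the vast majority of the model's parameters.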
Based on the construction of the remote sensing image classification model, the source domain data and the target domain data related to the remote sensing image classification model also need to be subjected to feature adaptation:
the Wasserstein distance is a measure of the difference between two data distributions and can be implemented by a weaker topology, which means that it can converge the distributions and facilitate the definition of a continuous mapping function. In addition, since the distribution of the Wasserstein distances on the low-dimensional manifold is continuous and differentiable everywhere, the method for calculating the difference of the feature distributions of the source domain and the target domain by using the Wasserstein distances can provide a stable gradient during network training, so that the difference of the feature distributions of the source domain and the target domain can be measured by using the Wasserstein distances for feature adaptation.
For source domain and target domain features with distributions P and Q, the Wasserstein distance is calculated as:

W(P, Q) = inf_{γ ∈ Π(P, Q)} E_{(x, y)~γ}[d(x, y)]

where Π(P, Q) denotes the set of all joint distributions γ(x, y) whose marginals are P and Q.
According to the Kantorovich-Rubinstein duality, the Wasserstein distance can be approximated by:

W(P, Q) = sup_{||f||_L ≤ 1} E_{x~P}[f(x)] − E_{x~Q}[f(x)]
where L denotes the Lipschitz constraint, ||f||_L = sup |f(x) − f(y)| / d(x, y), and d(x, y) denotes the distance between samples x and y.
Let x_s and x_t denote sample remote sensing image data of the source domain and the target domain respectively, and let P and Q be the feature distributions of the source and target domain data. The Wasserstein distance between the source and target feature distributions is then:

W(P, Q) = sup_{||f_w||_L ≤ 1} E_{x_s}[f_w(f_g(x_s))] − E_{x_t}[f_w(f_g(x_t))]
if the evaluator fwAll satisfying the 1-Lipschitz constraint, the Wasserstein distance can be approximated by the following optimization method:
When learning a domain-invariant feature representation, the Wasserstein distance estimator is first optimized to its optimum, and then the feature extractor is optimized so that it learns a domain-invariant feature representation. The final optimization objective for feature adaptation is therefore:

min_{θ_g} max_{θ_w} E_{x_s}[f_w(f_g(x_s))] − E_{x_t}[f_w(f_g(x_t))]
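The dual formulation above can be made concrete with a toy one-dimensional example (this is purely illustrative and not the patent's training procedure): restricting the critic to linear functions f(x) = w·x with |w| ≤ 1, which are all 1-Lipschitz, the best achievable E_P[f] − E_Q[f] lower-bounds the Wasserstein distance; for equal-size 1-D samples the exact empirical W1 is the mean absolute difference of the sorted samples.

```python
# Toy illustration of the Kantorovich-Rubinstein dual in one dimension.
# A linear critic f(x) = w*x with |w| <= 1 is 1-Lipschitz, so maximizing
# E_P[f] - E_Q[f] over such critics lower-bounds the Wasserstein-1 distance.
# For equal-size 1-D samples, the exact empirical W1 equals the mean
# absolute difference of the sorted samples.

def linear_critic_bound(xs, xt):
    """Best E_P[f] - E_Q[f] over critics f(x) = w*x with |w| <= 1."""
    gap = sum(xs) / len(xs) - sum(xt) / len(xt)
    return abs(gap)  # the optimal w is sign(gap)

def wasserstein_1d(xs, xt):
    """Exact empirical W1 for equal-size 1-D samples."""
    assert len(xs) == len(xt)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(xt))) / len(xs)

source = [0.0, 1.0, 2.0, 3.0]   # stand-in "source domain" features
target = [1.0, 2.0, 3.0, 4.0]   # stand-in "target domain" features (shifted)

print(linear_critic_bound(source, target))  # 1.0
print(wasserstein_1d(source, target))       # 1.0 (pure shift: bound is tight)
```

In the actual model, the critic f_w is the trio of fully connected layers dis_fc1 to dis_fc3, and the min-max objective is optimized by alternating gradient steps on θ_w and θ_g rather than in closed form.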
a classifier adaptation is also required for the source domain classifier and the target domain classifier:
in the transfer learning, a remote sensing image classification model with completed characteristic adaptation is used for testing a source domain sample and a target domain sample, and the fact that the testing accuracy of the remote sensing image classification model on the source domain sample and the target domain sample is still greatly different shows that a source domain classifier fs(x) And a target domain classifier ft(x) There is still a difference between, i.e. fs(x)≠ft(x) Therefore, performing feature adaptation alone cannot completely eliminate the difference between domains. Suppose f can be expressed by a perturbation function Δ f (x)s(x) And ft(x) The difference between them, then the classifier can be reduced by learning the perturbation functionThe difference between them, i.e. classifier adaptation is performed. Assuming that multiple nonlinear fully-connected layers can approximate a complex function, it can be assumed that multiple nonlinear fully-connected layers can approximate the perturbation function Δ f (x), and thus multiple nonlinear fully-connected layers can be used to learn the perturbation function. In the remote sensing image classification model, Res _ fc1 and Res _ fc2 are two fully connected layers which form a residual error network for learning a disturbance function. Source domain classifier fs(x) And a target domain classifier ft(x) Implementation of f by connection through residual network and element addition modules(x)=ft(x)+Δf(x)。
S2. Obtaining sample remote sensing image data
The sample remote sensing image data can comprise source domain remote sensing image data and residual target domain remote sensing image data, wherein the residual target domain remote sensing image data is the remote sensing image data which is residual after the remote sensing image data to be tested is removed from the target domain remote sensing image data.
For the embodiment of the application, before training the remote sensing image classification model, a large amount of sample remote sensing image data is obtained. The sample remote sensing image data may be input manually, extracted from local storage, or obtained by sending a request for the sample remote sensing image data to a server; of course, the way of obtaining the sample remote sensing image data is not limited thereto.
S3. Training process of the remote sensing image classification model
For the embodiment of the application, a large amount of acquired sample remote sensing image data are input into a model to be trained, and the model is continuously perfected through massive training, so that the remote sensing image classification model is obtained; the model can be more accurate and accurate when the remote sensing images are classified through a large amount of training.
In a specific embodiment, for the acquisition of sample remote sensing image data, the source domain data and the target domain data can be obtained from the UC Merced Land-Use data set and the WHU-RS19 data set: all of the source domain data is used for training, 90% of the target domain data is used for training, and the remaining target domain data is used for testing.
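A minimal sketch of the split just described (all source data for training, 90% of the target data for training, the rest held out for testing); the sample identifiers and the helper below are hypothetical, only the splitting logic mirrors the text:

```python
import random

# Sketch of the data split described above: every source-domain sample is
# used for training, 90% of the target-domain samples are used for training,
# and the remaining 10% are held out as the to-be-tested data.

def split_target(target_samples, train_fraction=0.9, seed=0):
    samples = list(target_samples)
    random.Random(seed).shuffle(samples)   # reproducible shuffle
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]    # (residual train set, test set)

target = [f"img_{i:03d}" for i in range(100)]  # hypothetical target-domain ids
train_target, test_target = split_target(target)
print(len(train_target), len(test_target))     # 90 10
```

The held-out 10% corresponds to the "remote sensing image data to be tested" of step S101, while the 90% portion is the "residual target domain remote sensing image data" used together with the source domain data as training samples.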
In one possible implementation, to assist in training the model, an objective function may be used to optimize the model:
when the model is trained, the model also needs to be optimized, and the supervised information is also used for training the model with the classification and recognition performance, so that the supervised information of the source domain data can be used for assisting the training. The empirical error of the classifier on the source domain is:
where J is the cross entropy loss function.
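With J the cross-entropy loss, the empirical source-domain error averages the negative log-probability assigned to each sample's true class; a minimal sketch (softmax outputs and labels below are made up for illustration):

```python
import math

# Cross-entropy loss J for one sample: -log(probability of the true class).
# The source-domain empirical error is its average over labelled source samples.

def cross_entropy(probs, true_class):
    return -math.log(probs[true_class])

def source_empirical_error(predictions, labels):
    """Average cross-entropy over (predicted distribution, true label) pairs."""
    losses = [cross_entropy(p, y) for p, y in zip(predictions, labels)]
    return sum(losses) / len(losses)

preds = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]  # hypothetical softmax outputs
labels = [0, 1]                             # hypothetical true classes
print(source_empirical_error(preds, labels))  # ≈ 0.2899
```

In practice this loss is backpropagated through the classification layer, the residual network and the feature extractor jointly with the other objective terms.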
When the model is optimized, the entropy function of the target domain data is also used for optimizing the network, so that the classifier has better classification performance on the target domain data. Specifically, the following are shown:
wherein H () is a feature distribution
Is determined by the entropy function of (a),
c is the number of the categories,
for inputting data
The probability of prediction as class j.
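The entropy term H can be sketched directly: it is near zero when the classifier is confident on a target sample and largest (log c) for a uniform prediction, so minimizing the average entropy over target predictions pushes the classifier toward confident decisions on the target domain:

```python
import math

# Entropy H(p) = -sum_j p_j * log(p_j) of a predicted class distribution.
# Minimizing the average entropy over unlabelled target-domain predictions
# pushes the classifier toward confident decisions on the target domain.

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0.0)

confident = [0.98, 0.01, 0.01]   # near one-hot prediction: low entropy
uniform = [1.0 / 3] * 3          # maximally uncertain: entropy = log(3)

print(entropy(confident) < entropy(uniform))  # True
```

Unlike the source-domain cross-entropy, this term needs no labels, which is what lets the unlabelled target domain contribute to optimization.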
The final optimization objective function of the model is:
the source domain data and the target domain data are separately input into a model, which is then optimized using an objective function.
For the present application, in one possible implementation, the aforementioned processing of step S101 specifically includes the processing of step S201 described below.
Step S201, obtaining remote sensing image data to be tested.
In a possible implementation manner, for the above-mentioned obtaining of the remote sensing image data to be tested, the electronic device may directly receive data input by a user, or may receive data carried in a request uploaded by an opposite-end device.
In one possible implementation, obtaining remote sensing image data to be tested includes:
acquiring target domain remote sensing image data; and extracting a preset number of remote sensing image data in the target domain remote sensing image data as to-be-tested remote sensing image data.
For the present application, in one possible implementation, the aforementioned processing of step S102 specifically includes the processing of step S202 described below.
And S202, classifying the remote sensing image data to be tested based on the remote sensing image classification model to obtain corresponding classification accuracy.
In a possible implementation manner, after the remote sensing image data to be tested is obtained, the remote sensing image data to be tested is input into the remote sensing image classification model, and the remote sensing image data to be tested is processed by the model, so that the classification accuracy of the remote sensing image data to be tested is obtained.
In one possible implementation, after obtaining the classification accuracy, the following processing may be further included:
and S203, evaluating the remote sensing image classification model according to the classification accuracy to obtain an evaluation result.
In a possible implementation manner, after the classification accuracy of the remote sensing image data to be tested is obtained, the remote sensing image classification model is evaluated according to the classification accuracy to obtain an evaluation result of the remote sensing image classification model, and the network performance of the remote sensing image classification model is determined according to the evaluation result.
In the present application, a remote sensing image classification model is introduced to classify the remote sensing image data to be tested, and the classification accuracy is improved compared with existing remote sensing image classification approaches.
Based on the above technical solution of the remote sensing image classification method provided by the present application, the present application correspondingly provides a schematic structural diagram of a remote sensing image classification device, as shown in fig. 3, the remote sensing image classification device 30 of the present application may include:
the acquisition module 31 is used for acquiring remote sensing image data to be tested;
and the first processing module 32 is configured to perform classification processing on the remote sensing image data to be tested based on a preset remote sensing image classification model, so as to obtain classification accuracy for the remote sensing image data to be tested.
In one possible implementation manner, the obtaining module 31 is configured to obtain target domain remote sensing image data; and extracting a preset number of remote sensing image data in the target domain remote sensing image data as to-be-tested remote sensing image data.
In one possible implementation, the method further includes:
and the second processing module 33 is configured to evaluate the remote sensing image classification model according to the classification accuracy to obtain an evaluation result of the remote sensing image classification model, so as to determine the network performance of the remote sensing image classification model according to the evaluation result.
In one possible implementation, the method further includes:
the third processing module 34 is used for constructing a remote sensing image classification model; obtaining sample remote sensing image data; and inputting the sample remote sensing image data into an objective function for model training to obtain the remote sensing image classification model.
In a possible implementation manner, the third processing module 34 is configured to adjust the number of neurons in a classification layer of a preset first model to obtain a target classifier; connect, after the target classifier, a first network consisting of fully connected layers to obtain a source domain classifier; connect a certain number of fully connected layers behind an encoder of the first model to form a distance estimator; and construct the remote sensing image classification model according to the source domain classifier and the distance estimator.
In one possible implementation manner, the sample remote sensing image data comprises source domain remote sensing image data and residual target domain remote sensing image data, and the residual target domain remote sensing image data is the remote sensing image data which is obtained by removing the remote sensing image data to be tested from the target domain remote sensing image data.
In the present application, a remote sensing image classification model is introduced to classify the remote sensing image data to be tested, and the classification accuracy is improved compared with existing remote sensing image classification approaches.
Referring now to fig. 4, a block diagram of an electronic device (e.g., the terminal device of fig. 1) 400 suitable for implementing embodiments of the present application is shown. The terminal device in the embodiments of the present application may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing device 401, performs the above-described functions defined in the methods of the embodiments of the present application.
It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire remote sensing image data to be tested; and classify the remote sensing image data to be tested based on a preset remote sensing image classification model to obtain the classification accuracy for the remote sensing image data to be tested.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or a backend server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the acquiring unit may also be described as a "unit for acquiring remote sensing image data to be tested".
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
The electronic device provided by the application is applicable to any embodiment of the remote sensing image classification method, and is not described herein again.
In the present application, a remote sensing image classification model is introduced to classify the remote sensing image data to be tested, improving classification accuracy compared with existing remote sensing image classification approaches.
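As a minimal, non-limiting illustration of the claimed flow, the sketch below acquires remote sensing image data to be tested and classifies it with a preset model to obtain the classification accuracy. All names here (`RemoteSensingSample`, `classify_and_score`, the toy model) are hypothetical and are not part of the claimed method or any real library.

```python
# Illustrative sketch only; identifiers are hypothetical, not part of the claims.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RemoteSensingSample:
    """One remote sensing image sample to be tested: features plus true label."""
    features: List[float]
    label: int


def classify_and_score(samples: List[RemoteSensingSample],
                       model: Callable[[List[float]], int]) -> float:
    """Classify each sample with a preset model and return the
    classification accuracy over the data to be tested."""
    if not samples:
        return 0.0
    correct = sum(1 for s in samples if model(s.features) == s.label)
    return correct / len(samples)


# Hypothetical stand-in for a trained remote sensing image classification model.
toy_model = lambda feats: 1 if sum(feats) > 0.5 else 0

data = [RemoteSensingSample([0.9], 1), RemoteSensingSample([0.1], 0),
        RemoteSensingSample([0.7], 1), RemoteSensingSample([0.2], 1)]
accuracy = classify_and_score(data, toy_model)  # 3 of 4 correct -> 0.75
```

A real implementation would replace the toy model with a trained network and the feature lists with image tensors; the accuracy value is what the evaluation step described earlier would consume.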
The present application provides a computer-readable storage medium storing a computer program that causes a computer to execute the remote sensing image classification method shown in the above-described embodiment.
The computer-readable storage medium provided in the present application is applicable to any embodiment of the above remote sensing image classification method, and is not described herein again.
It will be understood by those skilled in the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the aspects specified in the block or blocks of the block diagrams and/or flowchart illustrations disclosed herein.
The modules of the device can be integrated into a whole or can be separately deployed. The modules can be combined into one module, and can also be further split into a plurality of sub-modules.
Those skilled in the art will appreciate that the drawings are merely schematic representations of one preferred embodiment and that the blocks or flow diagrams in the drawings are not necessarily required to practice the present application.
Those skilled in the art will appreciate that the modules in the devices of the embodiments may be distributed in the devices as described in the embodiments, or may be located, with corresponding changes, in one or more devices different from those of the embodiments.
The serial numbers of the above embodiments are merely for description and do not represent the superiority or inferiority of the embodiments.
The above are only specific embodiments of the present application, but the present application is not limited thereto; any variations that would readily occur to those skilled in the art are intended to fall within the scope of the present application.