
CN114998257B - Image evaluation method, device and electronic equipment - Google Patents

Image evaluation method, device and electronic equipment

Info

Publication number
CN114998257B
CN114998257B (application number CN202210611112.5A)
Authority
CN
China
Prior art keywords
aesthetic
score
image
features
evaluated
Prior art date
Legal status
Active
Application number
CN202210611112.5A
Other languages
Chinese (zh)
Other versions
CN114998257A
Inventor
李亚乾
职天武
李雷达
杨宇哲
郭彦东
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210611112.5A
Publication of CN114998257A
Application granted
Publication of CN114998257B
Legal status: Active

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; image merging
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract


An embodiment of the present application discloses an image evaluation method, an image evaluation device, and an electronic device. The method includes: obtaining an image to be evaluated; obtaining an aesthetic anchor knowledge base that contains the aesthetic features of multiple sample images corresponding to multiple score levels, where the score levels are derived from the multiple aesthetic scores of the sample images and the scores of each sample image are output by multiple evaluators; and inputting the image to be evaluated together with the aesthetic anchor knowledge base into a pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the model. When scoring the image with the help of the aesthetic anchor knowledge base, the model can thus refer simultaneously to aesthetic features of different score levels, and hence to the crowd-sourced aesthetic evaluation experience of those levels, making the output aesthetic score more comprehensive and accurate.

Description

Image evaluation method, device, and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image evaluation method, an image evaluation device, and an electronic device.
Background
As technology advances, images can be aesthetically evaluated by electronic devices. However, existing approaches to aesthetic image evaluation on electronic devices sometimes produce results that are neither comprehensive nor accurate.
Disclosure of Invention
In view of the above problems, the present application provides an image evaluation method, an image evaluation device, and an electronic device, so as to improve the above problems.
In a first aspect, the application provides an image evaluation method. The method includes: obtaining an image to be evaluated; obtaining an aesthetic anchor knowledge base, where the knowledge base includes aesthetic features of multiple sample images corresponding to multiple score levels, the score levels are derived from the aesthetic scores of the sample images, and the aesthetic scores of each sample image include scores output by multiple evaluators; and inputting the image to be evaluated and the aesthetic anchor knowledge base into a pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the model.
In a second aspect, the application provides an image evaluation device comprising an image acquisition unit, a knowledge base acquisition unit, and an image evaluation unit. The image acquisition unit obtains the image to be evaluated. The knowledge base acquisition unit obtains the aesthetic anchor knowledge base, which includes aesthetic features of multiple sample images corresponding to multiple score levels, where the score levels are derived from the aesthetic scores of the sample images and the aesthetic scores of each sample image include scores output by multiple evaluators. The image evaluation unit inputs the image to be evaluated and the aesthetic anchor knowledge base into a pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the model.
In a third aspect, the application provides an electronic device comprising at least a processor and a memory, one or more programs stored in the memory and configured to be executed by the processor to implement the above-described method.
In a fourth aspect, the present application provides a computer readable storage medium having program code stored therein, wherein the program code, when executed by a processor, performs the above-described method.
The application provides an image evaluation method, an image evaluation device, and an electronic device. An image to be evaluated is first obtained, together with an aesthetic anchor knowledge base that includes aesthetic features of multiple sample images corresponding to multiple score levels, where the aesthetic scores of each sample image include scores output by multiple evaluators. The image to be evaluated and the aesthetic anchor knowledge base are then input into a pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the model. Because the score levels in the knowledge base are derived from the evaluators' scores across many sample images, the model can refer simultaneously to aesthetic features of different score levels while scoring, and hence to the crowd-sourced aesthetic evaluation experience of those levels, making the output aesthetic score more comprehensive and accurate.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings show only some embodiments of the present application; other drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 shows a schematic diagram of an application scenario of an image evaluation method according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating another application scenario of the image evaluation method according to the embodiment of the present application;
FIG. 3 is a flowchart of an image evaluation method according to an embodiment of the present application;
FIG. 4 is a flowchart of an image evaluation method according to another embodiment of the present application;
FIG. 5 shows a schematic representation of acquiring an aesthetic reference image in the practice of the present application;
FIG. 6 is a flowchart of an image evaluation method according to still another embodiment of the present application;
FIG. 7 is a flowchart of an image evaluation method according to still another embodiment of the present application;
FIG. 8 shows a schematic diagram of each module in an aesthetic evaluation network model in the practice of the present application;
FIG. 9 is a schematic diagram illustrating the retrieval and application of an aesthetic anchor knowledge base in the practice of the present application;
FIG. 10 shows a schematic representation of obtaining an aesthetic score in the practice of the present application;
fig. 11 is a block diagram illustrating a structure of an image evaluation apparatus according to an embodiment of the present application;
fig. 12 is a block diagram showing a structure of another image evaluation apparatus according to an embodiment of the present application;
Fig. 13 is a block diagram showing the structure of another electronic apparatus of the present application for performing the image evaluation method according to the embodiment of the present application;
Fig. 14 is a storage unit for storing or carrying program code for implementing an image evaluation method according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The visual aesthetics of an image measures its visual appeal to the human eye. Because visual aesthetics is a subjective attribute involving emotion and personal taste, automatically assessing image aesthetics is a highly subjective task. As technology advances, such assessments can be performed by electronic devices.
However, the inventors found in their research that related approaches to aesthetic evaluation on electronic devices often score images in a purely data-driven manner, which yields results that are neither comprehensive nor accurate. In particular, some of these approaches ignore the characteristics that humans exhibit when evaluating aesthetics and lack the guidance of aesthetic prior knowledge.
Accordingly, the inventors propose in the present application an image evaluation method, device, and electronic device capable of alleviating the above problems. In the image evaluation method, an image to be evaluated and an aesthetic anchor knowledge base are first acquired, where the knowledge base includes aesthetic features of multiple sample images corresponding to multiple score levels and the aesthetic scores of each sample image include scores output by multiple evaluators. The image to be evaluated and the aesthetic anchor knowledge base are then input into a pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the model.
In this way, the aesthetic anchor knowledge base, with its aesthetic features at multiple score levels, is introduced into the aesthetic evaluation of the image to be evaluated. Because the score levels are derived from the evaluators' scores across many sample images, the aesthetic evaluation network model can refer simultaneously to aesthetic features of different score levels while scoring the image with the help of the knowledge base, and hence to the crowd-sourced aesthetic evaluation experience of those levels, making the output aesthetic score more comprehensive and accurate.
Before further elaborating on the embodiments of the present application, an application environment related to the embodiments of the present application will be described.
The application scenario according to the embodiment of the present application is described first.
In the embodiment of the application, the provided image evaluation method can be executed by the electronic equipment. In this manner performed by the electronic device, all steps in the image evaluation method provided by the embodiment of the present application may be performed by the electronic device. For example, as shown in fig. 1, in the case where all steps in the image evaluation method provided in the embodiment of the present application may be performed by an electronic device, all steps may be performed by a processor of the electronic device 100.
Furthermore, the image evaluation method provided by the embodiment of the application can also be executed by the server. Correspondingly, in this manner executed by the server, the server may start executing the steps in the image evaluation method provided by the embodiment of the present application in response to the trigger instruction. The triggering instruction may be sent by an electronic device used by a user, or may be triggered locally by a server in response to some automation event.
In addition, the image evaluation method provided by the embodiment of the application can be executed cooperatively by the electronic device and the server: part of the steps are executed by the electronic device and the rest by the server. Illustratively, as shown in FIG. 2, the electronic device 100 may acquire the image to be evaluated and transmit it to the server 200; the server 200 then obtains the aesthetic anchor knowledge base, inputs the image to be evaluated and the knowledge base into the pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the model, and returns that score to the electronic device 100.
In this way, the steps performed by the electronic device and the server are not limited to those described in the above examples, and in practical applications, the steps performed by the electronic device and the server may be dynamically adjusted according to practical situations.
It should be noted that the electronic device 100 may be, besides the smartphone shown in figs. 1 and 2, a tablet computer, a smart watch, a smart voice assistant, or another device. The server 200 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (content delivery networks), and big-data and artificial-intelligence platforms. Where the image evaluation method provided by the embodiment of the present application is executed by a server cluster or distributed system, different steps of the method may be executed by different physical servers, or executed in a distributed manner by a server built on the distributed system.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 3, an image evaluation method provided by an embodiment of the present application includes:
s110, acquiring an image to be evaluated.
In the embodiment of the application, the image to be evaluated can be understood as an image to be subjected to aesthetic evaluation. Aesthetic evaluation, among other things, can be understood as aesthetic evaluation of an image to determine how aesthetic the image is in the mind of a user. In the embodiment of the application, various ways of acquiring the image to be evaluated are available.
As one approach, the image to be evaluated may be provided by the user operating the electronic device. Optionally, the electronic device may capture an image in response to a shooting operation triggered by the user and use the captured image as the image to be evaluated. Optionally, the electronic device may use, as the image to be evaluated, an image the user selects from the device's album.
Alternatively, the device itself may select the image to be evaluated; for example, an image already stored on the electronic device may be used as the image to be evaluated.
S120, acquiring an aesthetic anchor point knowledge base, wherein the aesthetic anchor point knowledge base comprises aesthetic characteristics of a plurality of sample images corresponding to a plurality of score grades, the score grades are obtained based on a plurality of aesthetic scores of the sample images, and the aesthetic scores corresponding to the sample images comprise scores output by a plurality of evaluators.
The aesthetic anchor knowledge base stores aesthetic features that characterize aesthetic prior knowledge. In the embodiment of the present application, these features are extracted from sample images, each of which carries multiple aesthetic scores; because each sample image's scores are output by multiple evaluators, they represent how the broad population of users evaluates that image. The sample images are divided into score levels according to their aesthetic scores, and extracting aesthetic features from the sample images of each level yields the features stored in the knowledge base. The aesthetic features of each score level characterize what an image needs in order to reach that level's aesthetic standard.
It should be noted that, as one approach, the aesthetic anchor knowledge base is established and stored before the image to be evaluated is acquired; obtaining the knowledge base therefore means retrieving the pre-established knowledge base.
S130, inputting the image to be evaluated and the aesthetic anchor point knowledge base into a pre-trained aesthetic evaluation network model to obtain aesthetic scores output by the aesthetic evaluation network model.
In the embodiment of the application, the pre-trained aesthetic evaluation network model performs aesthetic evaluation of the currently input image based on the aesthetic anchor knowledge base and outputs the resulting aesthetic score. The model is obtained by training a neural network with the aesthetic anchor knowledge base established in the embodiment of the application. Optionally, a training data set is first acquired, containing multiple sample images each with multiple aesthetic scores output by multiple evaluators; the neural network to be trained is then trained on this data set together with the aesthetic anchor knowledge base to obtain the trained aesthetic evaluation network model.
Optionally, after the training data set is obtained, its sample images may be preprocessed before training. The preprocessing may include at least one of: resizing the sample images to a specified size (for example, 299×299), randomly flipping a portion of the sample images (for example, with probability 0.5), and normalizing the pixel values of all sample images to [0, 1]. As one preprocessing pipeline, all sample images in the training data set are first resized to the specified size, then randomly flipped, and finally their pixel values are normalized, yielding the preprocessed training data set used, together with the aesthetic anchor knowledge base, to train the model.
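A minimal sketch of such a preprocessing pipeline in PyTorch/torchvision might look as follows; the function name build_transform is illustrative and not from the patent:

```python
# Preprocessing sketch, assuming torchvision is used; values follow the text
# (299x299 resize, flip probability 0.5, pixel values normalized to [0, 1]).
from torchvision import transforms

def build_transform() -> transforms.Compose:
    return transforms.Compose([
        # Change every sample image to the specified size (299x299).
        transforms.Resize((299, 299)),
        # Randomly flip part of the sample images with the specified probability.
        transforms.RandomHorizontalFlip(p=0.5),
        # ToTensor scales pixel values into [0, 1].
        transforms.ToTensor(),
    ])
```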
According to the image evaluation method provided by this embodiment, an image to be evaluated is first obtained, together with an aesthetic anchor knowledge base that includes aesthetic features of multiple sample images corresponding to multiple score levels, where the aesthetic scores of each sample image include scores output by multiple evaluators. The image to be evaluated and the knowledge base are then input into a pre-trained aesthetic evaluation network model to obtain the aesthetic score the model outputs. Because the score levels are derived from the evaluators' scores across many sample images, the model can refer simultaneously to aesthetic features of different score levels while scoring, and hence to the crowd-sourced aesthetic evaluation experience of those levels, making the output aesthetic score more comprehensive and accurate.
Referring to fig. 4, an image evaluation method provided by an embodiment of the present application includes:
S210, acquiring an image to be evaluated.
S220, acquiring a plurality of sample images and a plurality of aesthetic scores corresponding to each sample image, wherein the aesthetic scores corresponding to each sample image comprise scores output by a plurality of evaluators.
In embodiments of the present application, a pre-established aesthetic image dataset may be obtained in which each image carries multiple aesthetic scores contributed by multiple users. The images in this dataset, or a subset of them, may then be used as the multiple sample images. Optionally, the pre-established aesthetic image dataset may be the AVA (Aesthetic Visual Analysis) dataset.
And S230, obtaining the average aesthetic score of each sample image according to the corresponding aesthetic scores of each sample image.
As one approach, the average aesthetic score of each sample image is derived by first obtaining the image's aesthetic score distribution from its multiple aesthetic scores and then computing the average aesthetic score from that distribution.
The aesthetic score distribution of each sample image can be derived from its multiple aesthetic scores; the expression of the distribution is

$$p = [p_1, p_2, \ldots, p_N],$$

where $N$ is the maximum value of the discrete aesthetic score and $p_i$ is the probability that the sample image is given the aesthetic score $i$. After the aesthetic score distribution of each sample image is obtained, the average aesthetic score of each sample image can be obtained based on the following formula:

$$S_a = \sum_{i=1}^{N} i \cdot p_i,$$

where $S_a$ is the average aesthetic score of the image.
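A small numpy sketch of these two formulas, assuming the discrete scores are integers in 1..N; the function names are illustrative:

```python
import numpy as np

def score_distribution(rater_scores: list[int], n_max: int = 10) -> np.ndarray:
    """Empirical distribution p_1..p_N over the discrete scores 1..N."""
    counts = np.bincount(rater_scores, minlength=n_max + 1)[1:]
    return counts / counts.sum()

def average_score(p: np.ndarray) -> float:
    """S_a = sum_i i * p_i."""
    return float((np.arange(1, len(p) + 1) * p).sum())

# Five evaluators scored the same sample image:
p = score_distribution([5, 6, 6, 7, 8])
s_a = average_score(p)  # (5 + 6 + 6 + 7 + 8) / 5 = 6.4
```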
And S240, obtaining a plurality of score grades based on the average aesthetic score of each sample image, and acquiring sample images corresponding to the score grades from the sample images, wherein the average aesthetic score of the sample image corresponding to each score grade is matched with the corresponding score grade.
In the embodiment of the application, the obtained score grade is a score grade used for carrying out aesthetic evaluation on the image to be evaluated. In embodiments of the present application, there may be a variety of ways to determine multiple score levels.
As one approach, the average aesthetic scores that match a plurality of specified scores may be treated as the score levels, where matching means being equal to a specified score or differing from it by less than a preset threshold. For example, suppose the acquired sample images are P1 through P5 with average aesthetic scores f1 through f5, and the specified scores are f3, f5, and f7. Matching the average aesthetic scores against the specified scores identifies f3 and f5, which appear among the specified scores, so f3 and f5 are used as score levels.
Alternatively, the score levels may be derived by obtaining the frequency of occurrence of each average aesthetic score and taking the average aesthetic scores whose frequencies satisfy a specified ranking condition as the score levels. The frequency of occurrence of an average aesthetic score is the number of times that score appears among the average aesthetic scores of all sample images.
For example, if the average aesthetic score of the sample image P1 is f1, the average aesthetic score of the sample image P2 is f2, the average aesthetic score of the sample image P3 is f2, the average aesthetic score of the sample image P4 is f4, the average aesthetic score of the sample image P5 is f4, according to this example, the occurrence frequency of the average aesthetic score f2 is 2, the occurrence frequency of the average aesthetic score f4 is 2, and the occurrence frequency of the average aesthetic score f1 is 1.
In this manner, the specified ranking condition may be ranking among the top N by frequency. For example, if the average aesthetic scores across all sample images are f2, f4, and f1, and N is 2, then the average aesthetic scores f2 and f4 are taken as score levels, thereby obtaining a plurality of score levels.
It should be noted that, in the embodiment of the present application, after the average aesthetic score of each sample image is calculated, it may be normalized to [0, 1] so that the subsequent aesthetic evaluation network model can conveniently evaluate the image to be evaluated against the score levels in the aesthetic anchor knowledge base. Optionally, if the images of the AVA dataset are used directly as the sample images, the normalized score levels derived from the frequency of occurrence of the average aesthetic scores may be 0.3, 0.4, 0.5, 0.6, and 0.7.
It should be noted that, when the score levels are determined from the frequency of occurrence of average aesthetic scores, the sample images at those levels are numerous and concentrated around scores that evaluators commonly converge on. Building the aesthetic anchor knowledge base from the sample images of these levels therefore makes the aesthetic features it contains more representative, and better representative of popular aesthetics.
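A sketch of this frequency-based selection of score levels, assuming mean scores on a 1 to 10 scale are normalized by N = 10 and rounded to one decimal before counting (the rounding step is an assumption, not from the patent):

```python
from collections import Counter
import numpy as np

def score_levels(avg_scores: np.ndarray, n_levels: int = 5,
                 n_max: int = 10) -> list[float]:
    # Normalize the average aesthetic scores to [0, 1]; round so that
    # near-identical averages can be counted as the same level.
    normalized = np.round(avg_scores / n_max, 1)
    freq = Counter(normalized.tolist())
    # Keep the n_levels most frequently occurring averages as score levels.
    return sorted(score for score, _ in freq.most_common(n_levels))
```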
S250, obtaining an aesthetic anchor point knowledge base based on the aesthetic characteristics of the sample images corresponding to the score levels.
As one approach, the sample images corresponding to the score levels may be ranked by their scoring variance, and the sample images ranked at the specified forward positions may be used as the aesthetic reference images for those levels. Aesthetic features are then extracted from the aesthetic reference images of each score level to obtain the aesthetic features corresponding to the score levels, and the aesthetic anchor knowledge base is built from those features.
It should be noted that every image belongs to some scene. For example, an image whose content is about a person belongs to a person scene; one about an animal belongs to an animal scene; one about natural scenery belongs to a natural-scenery scene; one about still life belongs to a still-life scene; and one about a city belongs to an urban scene. Images of different scenes may call for different dimensions of aesthetic evaluation. In the embodiment of the application, distinguishing the aesthetic features in the aesthetic anchor knowledge base by scene allows the image to be evaluated against the aesthetic features of each score level separately, with the final aesthetic score then synthesized from those per-level evaluations.
To make the aesthetic features of each score level more comprehensive, after the sample images of the score levels are obtained, images of multiple specified scenes may be selected from the sample images of each level, so that the aesthetic features of each level uniformly cover those scenes in the final aesthetic anchor knowledge base. As one approach, after the sample images corresponding to the score levels are obtained, images of the specified scenes are extracted from them, giving sample images of each specified scene for each score level. Ranking the sample images of each specified scene at each score level by scoring variance, and selecting those ranked at the specified forward positions, yields the aesthetic reference images of each specified scene for that level; the aesthetic reference images of a score level are then obtained from the reference images of all its specified scenes. A specified forward position means a position at the front of the ranking, for example the top three or the top five.
Illustratively, suppose the determined score levels are f1 through f5 and the specified scenes are scene 1 through scene 5. After the score levels are obtained from the average aesthetic scores, as shown in fig. 5, the sample images are further partitioned: for each score level f1 through f5 and each of scenes 1 through 5, the sample images whose average aesthetic score belongs to that level and whose scene is that specified scene are collected.
Further, take the sample images whose average aesthetic score belongs to level f1. Among those belonging to scene 1, the images whose scoring variance satisfies the specified ranking are selected as the aesthetic reference images for level f1 and scene 1; the same selection is performed for scenes 2 through 5, yielding the aesthetic reference images of level f1 for each specified scene.
Correspondingly, for the other score levels, the aesthetic reference images of each specified scene are determined in the same manner, and the aesthetic reference images of all score levels are then obtained from all of the determined reference images.
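A sketch of this per-level, per-scene selection, assuming each sample carries its score level, scene label, scoring variance, and image; the key names and top_k are illustrative, and ranking by lower variance (rater consensus) is an assumption, since the patent only says the ranking is specified:

```python
from collections import defaultdict

def select_reference_images(samples: list[dict], top_k: int = 5) -> dict:
    """For every (score level, specified scene) pair, keep the top_k sample
    images whose scoring variance satisfies the specified ranking."""
    buckets: dict[tuple, list[dict]] = defaultdict(list)
    for s in samples:
        buckets[(s["level"], s["scene"])].append(s)
    references = {}
    for key, items in buckets.items():
        items.sort(key=lambda s: s["variance"])  # assumed: low variance first
        references[key] = [s["image"] for s in items[:top_k]]
    return references
```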
It should be noted that, in the embodiment of the present application, the scene of a sample image may be identified by a pre-trained scene classification model. Optionally, the scene classification model may be obtained by training a ResNet network on labelled training data. Optionally, to keep the scene labels accurate, an image is assigned a scene label only when the classification confidence is greater than or equal to 0.7.
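A sketch of such confidence-gated scene annotation, assuming a torchvision ResNet fine-tuned on labelled scene data; the scene list follows the scenes named above, and everything else is illustrative:

```python
import torch
from torchvision.models import resnet50

SCENES = ["person", "animal", "natural_scenery", "still_life", "urban"]

model = resnet50(num_classes=len(SCENES))  # assumed fine-tuned on scene labels
model.eval()

@torch.no_grad()
def annotate_scene(image: torch.Tensor) -> str | None:
    """Return a scene label only when classification confidence >= 0.7."""
    probs = torch.softmax(model(image.unsqueeze(0)), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return SCENES[idx] if conf.item() >= 0.7 else None
```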
S260, inputting the image to be evaluated and the aesthetic anchor point knowledge base into a pre-trained aesthetic evaluation network model to obtain aesthetic scores output by the aesthetic evaluation network model.
According to the image evaluation method provided by this embodiment, the aesthetic anchor knowledge base with aesthetic features at multiple score levels is introduced into the aesthetic evaluation of the image to be evaluated. Because the score levels are derived from the evaluators' scores across many sample images, the aesthetic evaluation network model can refer simultaneously to aesthetic features of different score levels, and hence to the crowd-sourced aesthetic evaluation experience of those levels, making the output aesthetic score more comprehensive and accurate. In addition, because images of multiple specified scenes are drawn from the sample images of each score level while the knowledge base is built, the aesthetic features in the knowledge base cover those scenes well, which makes the aesthetic evaluation of the image to be evaluated through the knowledge base more comprehensive.
Referring to fig. 6, an image evaluation method provided by an embodiment of the present application includes:
S310, acquiring an image to be evaluated.
S320, obtaining an aesthetic anchor point knowledge base, wherein the aesthetic anchor point knowledge base comprises aesthetic characteristics of a plurality of sample images corresponding to a plurality of score grades, the score grades are obtained based on a plurality of aesthetic scores of the sample images, and the aesthetic scores corresponding to the sample images comprise scores output by a plurality of evaluators.
S330, inputting the image to be evaluated and the aesthetic anchor point knowledge base into a pre-trained aesthetic evaluation network model to obtain aesthetic scores output by the aesthetic evaluation network model.
And S340, obtaining aesthetic features to be evaluated of the image to be evaluated through a feature extraction module of the aesthetic evaluation network model.
In this embodiment, the aesthetic evaluation network model may include a feature extraction module, an anchor reference module (described below), and an aesthetic decision module (described below). In S340, the feature extraction module extracts the features used for aesthetic evaluation as the aesthetic features to be evaluated. Note that the features are extracted from the image to be evaluated in exactly the same way that features were extracted from the sample images (or aesthetic reference images) when the aesthetic anchor knowledge base was established, so that the two sets of features are directly comparable.
And S350, calibrating the aesthetic features to be evaluated of the image to be evaluated through an anchor point reference module of the aesthetic evaluation network model to obtain the calibrated aesthetic features corresponding to the score grades.
As one way, as shown in fig. 7, calibrating the aesthetic feature to be evaluated of the image to be evaluated by the anchor point reference module of the aesthetic evaluation network model to obtain the calibrated aesthetic feature corresponding to each of the plurality of score levels, including:
and S351, dividing the aesthetic features corresponding to each score grade into a plurality of groups through an anchor point reference module of the aesthetic evaluation network model, and obtaining the aesthetic features of the groups corresponding to each score grade, wherein the aesthetic features in each group correspond to the same appointed scene, and the appointed scenes corresponding to the aesthetic features of different groups are different.
It should be noted that, in the process of calibrating the feature to be evaluated through the anchor reference module, the feature may be calibrated against the aesthetic features of each score level in the aesthetic anchor knowledge base. So that the features used for calibration at each level uniformly cover the specified scenes, the aesthetic features of each score level can be divided into groups by specified scene: the features in one group are derived from images of the same scene, meaning they characterize the same specified scene.
And S352, obtaining the feature similarity between the aesthetic features to be evaluated and the aesthetic features of the multiple groups corresponding to each score grade.
The feature similarity may be cosine similarity.
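For two feature vectors $u$ and $v$, the cosine similarity is

$$\mathrm{sim}(u, v) = \frac{u^{\top} v}{\lVert u \rVert \, \lVert v \rVert},$$

which lies in $[-1, 1]$ and is larger the more similar the two features are.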
And S353, acquiring a plurality of groups corresponding to each score grade and target aesthetic features corresponding to the aesthetic features to be evaluated, wherein the target aesthetic features are aesthetic features with the largest feature similarity between the aesthetic features of each group and the aesthetic features to be evaluated.
After the aesthetic features of each score level are divided into groups by scene, the aesthetic feature with the greatest feature similarity to the feature to be evaluated can be selected from each group as that group's target aesthetic feature, yielding multiple target aesthetic features per score level. Illustratively, taking score level f1 as an example, grouping its aesthetic features by the specified scenes yields groups 1 through 5, whose aesthetic features are {t1, t2, t3}, {t4, t5, t6}, {t7, t8, t9}, {t10, t11, t12}, and {t13, t14, t15} respectively. If the features determined to have the greatest similarity to the feature to be evaluated are t2 in group 1, t6 in group 2, t7 in group 3, t12 in group 4, and t13 in group 5, then the target aesthetic features corresponding to score level f1 are t2, t6, t7, t12, and t13.
And S354, obtaining difference features of the aesthetic features to be evaluated, wherein the difference features correspond to the target aesthetic features of the multiple groups corresponding to each score grade.
It should be noted that, in the embodiment of the present application, image features may take vector form, and the difference feature of two features is obtained by subtracting them dimension-wise, i.e. subtracting the values at corresponding element positions of the two vectors. For example, subtracting the vector [d, e, f] from the vector [a, b, c] gives [a-d, b-e, c-f]. Similarly, if the aesthetic feature to be evaluated is [a1, b1, c2] and a target aesthetic feature is [a3, b3, c1], their difference feature is [a1-a3, b1-b3, c2-c1].
And S355, obtaining the calibration aesthetic feature corresponding to each score grade based on the multiple weights corresponding to the aesthetic feature to be evaluated and the difference features of the multiple groups corresponding to each score grade.
Optionally, obtaining the calibrated aesthetic feature of each score level from the multiple weights of the aesthetic feature to be evaluated and the difference features of the groups includes: multiplying, one by one, the weights by the difference features of the groups of the same score level to obtain the features to be concatenated for that level, and concatenating them to obtain the calibrated aesthetic feature of that level. Illustratively, if the difference features of the groups at score level f1 are [a1, b1, c1], [a2, b2, c2], [a3, b3, c3], [a4, b4, c4], and [a5, b5, c5], and the corresponding weights are q1 through q5, then the features to be concatenated are [q1·a1, q1·b1, q1·c1], [q2·a2, q2·b2, q2·c2], [q3·a3, q3·b3, q3·c3], [q4·a4, q4·b4, q4·c4], and [q5·a5, q5·b5, q5·c5], and concatenating them gives the calibrated aesthetic feature [q1·a1, q1·b1, q1·c1, q2·a2, q2·b2, q2·c2, q3·a3, q3·b3, q3·c3, q4·a4, q4·b4, q4·c4, q5·a5, q5·b5, q5·c5].
In the present embodiment, the multiple weights of the aesthetic feature to be evaluated for each score level may be obtained from the aesthetic feature to be evaluated itself. Optionally, the aesthetic feature to be evaluated is passed through a convolution layer whose output is fed to a global average pooling layer, producing the weights for each score level. The convolution layer has a 1×1 kernel, padding of 0, 5 output channels, and a PReLU activation.
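Putting S351 through S355 together, a minimal PyTorch sketch of the anchor reference step for one score level might look as follows; tensor shapes, the module name, and the dual feature-map/feature-vector interface are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorReference(nn.Module):
    """Sketch of calibrating the feature to be evaluated for one score level."""

    def __init__(self, channels: int, n_groups: int = 5):
        super().__init__()
        # 1x1 convolution (padding 0, 5 output channels) + PReLU, followed by
        # global average pooling, gives one weight per scene group.
        self.weight_head = nn.Sequential(
            nn.Conv2d(channels, n_groups, kernel_size=1, padding=0),
            nn.PReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, feat_map: torch.Tensor, feat_vec: torch.Tensor,
                groups: list[torch.Tensor]) -> torch.Tensor:
        # feat_map: (B, C, H, W) feature map of the image to be evaluated;
        # feat_vec: (B, D) vector form of the same feature;
        # groups:   one (K_i, D) tensor of anchor features per scene group.
        weights = self.weight_head(feat_map).flatten(1)  # (B, n_groups)
        pieces = []
        for g, anchors in enumerate(groups):
            # Target aesthetic feature: the anchor with the greatest cosine
            # similarity to the feature to be evaluated.
            sim = F.cosine_similarity(
                feat_vec.unsqueeze(1), anchors.unsqueeze(0), dim=2)  # (B, K_i)
            target = anchors[sim.argmax(dim=1)]          # (B, D)
            diff = feat_vec - target                     # difference feature
            pieces.append(weights[:, g:g + 1] * diff)    # weighted difference
        # Concatenate the weighted differences into the calibrated feature.
        return torch.cat(pieces, dim=1)
```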
And S360, enabling an aesthetic decision module of the aesthetic evaluation network model to score the image to be evaluated based on the calibrated aesthetic features corresponding to the score levels respectively so as to obtain the aesthetic scores output by the aesthetic evaluation network model.
In one manner, the image evaluation method provided by the embodiment of the application further comprises enabling the anchor point reference module of the aesthetic evaluation network model to obtain an aesthetic score prediction difference value corresponding to each score grade based on the corresponding calibration aesthetic feature of each score grade. Wherein the aesthetic score prediction differences for each score level characterize the difference between the aesthetic score of the aesthetic feature of each score level and the aesthetic score of the feature to be evaluated.
Alternatively, the calibrated aesthetic feature may be input to a convolution layer, and the output of the convolution layer is then input to a global average pooling layer to obtain the aesthetic score prediction difference value corresponding to each score grade, where the convolution kernel size of the convolution layer is 1×1, the padding size is 0, the number of output channels is 1, and the activation function is Tanh.
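The score-difference head can be sketched in the same style. Only the 1×1 kernel, zero padding, single output channel, Tanh activation, and global average pooling come from the description; the assumption that the calibrated feature is arranged as a (B, C, H, W) map before entering this head, along with the channel count and the names used, is made here purely for illustration.

```python
import torch
import torch.nn as nn

class ScoreDiffHead(nn.Module):
    """1x1 conv (padding 0, one output channel, Tanh) followed by global
    average pooling, yielding one aesthetic score prediction difference
    per score grade."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1, padding=0)
        self.act = nn.Tanh()
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, calib_map):                   # assumed (B, C, H, W)
        return self.gap(self.act(self.conv(calib_map))).flatten(1)  # (B, 1)

head = ScoreDiffHead(in_channels=640)               # channel count assumed
diff = head(torch.randn(2, 640, 7, 7))              # one value per sample
```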
In this way, the aesthetic decision module of the aesthetic evaluation network model can be enabled to obtain the fusion weight of each score grade based on the aesthetic score prediction difference value corresponding to each score grade, obtain the multi-scale reference feature based on the fusion weight corresponding to each score grade and the calibrated aesthetic feature corresponding to each score grade, and predict the aesthetic score distribution and the average aesthetic score of the image to be evaluated based on the multi-scale reference feature.
The aesthetic score prediction difference value corresponding to each score grade may be input to a first fully connected layer, and the output of the first fully connected layer is then input to a second fully connected layer to obtain the fusion weight of each score grade. The first fully connected layer comprises 128 nodes with a PReLU activation function, and the second fully connected layer comprises 8 nodes with a Softmax activation function.
Optionally, obtaining the multi-scale reference feature based on the fusion weight corresponding to each score grade and the calibration aesthetic feature corresponding to each score grade comprises weighting and summing the calibration aesthetic features corresponding to the score grades based on their respective fusion weights, so as to obtain the multi-scale reference feature.
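Putting the two fully connected layers and the weighted summation together gives a sketch like the one below. The 128-node PReLU layer and the Softmax output are taken from the description; the description specifies 8 nodes for the second layer, while this sketch uses one output node per score grade so that each calibrated feature receives exactly one fusion weight — a simplifying assumption, as are all names and shapes.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """FC(128, PReLU) -> FC(Softmax) producing fusion weights from the
    per-grade score prediction differences, then a weighted sum of the
    calibrated aesthetic features into the multi-scale reference feature."""
    def __init__(self, num_grades: int = 5, hidden: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(num_grades, hidden)
        self.act = nn.PReLU()
        self.fc2 = nn.Linear(hidden, num_grades)  # source text states 8 nodes

    def forward(self, score_diffs, calib_feats):
        # score_diffs: (B, L) one difference per score grade
        # calib_feats: (B, L, D) one calibrated feature per score grade
        weights = torch.softmax(self.fc2(self.act(self.fc1(score_diffs))), dim=1)
        return (weights.unsqueeze(-1) * calib_feats).sum(dim=1)  # (B, D)

fusion = FusionHead()
ref = fusion(torch.randn(2, 5), torch.randn(2, 5, 640))  # (2, 640)
```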
The aesthetic decision module may comprise, arranged in sequence, a convolution layer, a hidden network, a fully connected layer, and an output layer of N nodes. Optionally, a BN (Batch Normalization) layer and a Dropout layer are further included between the fully connected layer and the output layer. The activation function of the output layer is Softmax, N is the maximum value of the discrete aesthetic score, and the output of the output layer is the predicted aesthetic score distribution. The average aesthetic score of the image to be evaluated can then be obtained from this aesthetic score distribution.
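The decision module and the conversion from score distribution to average score can be sketched as follows. The BN, Dropout, and N-node Softmax output follow the description; the hidden-layer widths, the dropout rate, the 1-to-N score range, and the replacement of the leading convolution layer with a linear layer (the multi-scale reference feature is treated as a vector here) are all assumptions for exposition.

```python
import torch
import torch.nn as nn

class AestheticDecisionHead(nn.Module):
    """Hidden network -> FC -> BN -> Dropout -> N-node Softmax output;
    the output is the predicted aesthetic score distribution, from which
    the average aesthetic score is computed as the expected value."""
    def __init__(self, in_dim: int, n_scores: int = 10):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, 256), nn.PReLU())
        self.fc = nn.Linear(256, 64)
        self.bn = nn.BatchNorm1d(64)
        self.drop = nn.Dropout(p=0.5)
        self.out = nn.Linear(64, n_scores)

    def forward(self, ref_feat):                       # (B, in_dim)
        x = self.drop(self.bn(self.fc(self.hidden(ref_feat))))
        dist = torch.softmax(self.out(x), dim=1)       # score distribution
        scores = torch.arange(1, dist.size(1) + 1, dtype=dist.dtype)
        mean = (dist * scores).sum(dim=1)              # average aesthetic score
        return dist, mean

head = AestheticDecisionHead(in_dim=640)
dist, mean = head(torch.randn(2, 640))                 # (2, 10), (2,)
```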
It should be noted that, in the case where the aesthetic evaluation network model for performing aesthetic evaluation on the image to be evaluated includes the anchor point reference module and the aesthetic decision module, in the process of obtaining the aesthetic evaluation network model through training, two different modules in the neural network model to be trained may be trained separately, so as to train one module as the anchor point reference module and train the other module as the aesthetic decision module.
Optionally, in training to obtain the anchor point reference module, a dedicated loss function is employed (its formula is given as an image in the original publication). The training images are iterated over continuously by gradient descent to optimize this loss function until the calculated loss result is smaller than a threshold of 0.013.
The loss function employed in training to obtain the aesthetic decision module (likewise given as a formula image in the original publication) involves, for each image aesthetic score i, the true probability and the predicted probability of that score, a λ set to 0.01, and the average aesthetic score S_a together with its predicted counterpart. The training images are iterated over continuously by gradient descent to optimize this loss function until the calculated loss result is smaller than a threshold of 0.021.
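Judging from the surviving symbol descriptions — true and predicted probabilities for each aesthetic score i, a λ of 0.01, and the score pair S_a and its prediction — one plausible form of the decision-module loss is a distribution term plus a λ-weighted score term, sketched below strictly as an assumption and not as the patent's actual formula:

```python
import torch

def decision_loss(p_true, p_pred, s_true, s_pred, lam: float = 0.01):
    """Assumed form only, not the patent's formula: cross-entropy between
    the true and predicted score distributions plus a lambda-weighted
    absolute error between the true and predicted average scores."""
    ce = -(p_true * torch.log(p_pred.clamp_min(1e-8))).sum(dim=1).mean()
    reg = (s_true - s_pred).abs().mean()
    return ce + lam * reg
```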
It should be noted that the aesthetic decision module and the anchor point reference module have a certain hierarchical relationship, so the two modules may be trained alternately to obtain the final aesthetic evaluation network model.
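As an illustration of such alternating training, the sketch below switches gradient-descent updates between the two modules until each loss falls below its threshold (0.013 and 0.021, as stated above). The optimizer, learning rate, loop structure, and loss-function callables are assumptions for exposition.

```python
import torch

def train_alternately(anchor_module, decision_module, loader,
                      anchor_loss_fn, decision_loss_fn,
                      anchor_thresh=0.013, decision_thresh=0.021):
    """Alternate gradient-descent updates between the anchor point reference
    module and the aesthetic decision module until each module's loss falls
    below its threshold (threshold values taken from the description)."""
    opt_a = torch.optim.SGD(anchor_module.parameters(), lr=1e-3)
    opt_d = torch.optim.SGD(decision_module.parameters(), lr=1e-3)
    done_a = done_d = False
    while not (done_a and done_d):
        for batch in loader:
            if not done_a:
                opt_a.zero_grad()
                loss_a = anchor_loss_fn(anchor_module, batch)
                loss_a.backward()
                opt_a.step()
                done_a = loss_a.item() < anchor_thresh
            if not done_d:
                opt_d.zero_grad()
                loss_d = decision_loss_fn(decision_module, batch)
                loss_d.backward()
                opt_d.step()
                done_d = loss_d.item() < decision_thresh
            if done_a and done_d:
                break
```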
A process for aesthetic evaluation of an image according to an embodiment of the present application is described below with reference to fig. 8.
As shown in fig. 8, the feature extraction module, the anchor point reference module, and the aesthetic decision module collectively constitute the aesthetic evaluation network model.
After the aesthetic image (the image to be evaluated) is obtained, it may be input to the feature extraction module, and the features of the aesthetic image are extracted by the feature extraction module as the general aesthetic feature (the aesthetic feature to be evaluated). It should be noted that the feature extraction module may be composed of the backbone network of the aesthetic evaluation network model, where the backbone network may be a network formed by the convolution layers closest to the input layer (e.g., the first 6 convolution layers).
After the general aesthetic feature is obtained, the general aesthetic feature and the aesthetic anchor point knowledge base may be input together into the anchor point reference module, and the general aesthetic feature is then calibrated by a plurality of anchor point calibration branches established from the aesthetic anchor point knowledge base (e.g., the 5 branches shown in fig. 8), so as to obtain a calibrated aesthetic feature and an aesthetic score difference (i.e., the aesthetic score prediction difference in the foregoing embodiments) corresponding to each anchor point calibration branch. Each anchor point calibration branch may be understood as corresponding to one of the plurality of score grades in the foregoing embodiments.
The aesthetic score differences are then input to the fully connected layers to obtain the fusion weights, the multi-scale reference feature is obtained from the fusion weights and the calibrated aesthetic feature corresponding to each anchor point calibration branch, and the multi-scale reference feature is input to the convolution layer and the fully connected layer to obtain the aesthetic quality score (including the aesthetic score distribution and the average aesthetic score).
Furthermore, the establishment of the aesthetic anchor point knowledge base and its application in the embodiment of the present application are described below with reference to fig. 9. After the aesthetic anchor point knowledge base is obtained from the sample images of the scenes corresponding to the score grades, the prior knowledge in the knowledge base can be used to calibrate the aesthetic feature to be evaluated of the image to be evaluated through the anchor point calibration branches in the anchor point reference module. As shown in fig. 10, the reference features may be the aesthetic features in the aesthetic anchor point knowledge base, and the common features may be the aesthetic features to be evaluated.
According to the image evaluation method provided by this embodiment, in the process of aesthetically scoring the image to be evaluated by means of the aesthetic evaluation network model and the aesthetic anchor point knowledge base, aesthetic features of different score grades can be referred to simultaneously, so that the popular aesthetic evaluation experience of different score grades is drawn upon at the same time and the output aesthetic score is more comprehensive and accurate. In addition, in this embodiment, in the process of obtaining the aesthetic score of the image to be evaluated through the aesthetic evaluation network model, the feature to be evaluated of the image to be evaluated may be calibrated for each score grade, and the multi-scale reference feature used for scoring by the aesthetic evaluation network model is then obtained from the calibrated aesthetic features corresponding to the score grades, so that the aesthetic score output by the aesthetic evaluation network model is more accurate.
Referring to fig. 11, an image evaluation apparatus 400 is provided according to an embodiment of the present application. The apparatus 400 includes:
an image acquisition unit 410 is configured to acquire an image to be evaluated.
The knowledge base obtaining unit 420 is configured to obtain an aesthetic anchor knowledge base, where the aesthetic anchor knowledge base includes aesthetic features of a plurality of sample images corresponding to a plurality of score levels, and the plurality of score levels are obtained based on a plurality of aesthetic scores of the plurality of sample images, where the plurality of aesthetic scores corresponding to each sample image includes scores output by a plurality of evaluators.
The image evaluation unit 430 is configured to input the image to be evaluated and the aesthetic anchor knowledge base into a pre-trained aesthetic evaluation network model, so as to obtain an aesthetic score output by the aesthetic evaluation network model.
The knowledge base obtaining unit 420 is specifically configured to: obtain a plurality of sample images and a plurality of aesthetic scores corresponding to each sample image, where the plurality of aesthetic scores corresponding to each sample image include scores output by a plurality of evaluators; obtain an average aesthetic score of each sample image according to the plurality of aesthetic scores corresponding to that sample image; obtain a plurality of score levels based on the average aesthetic score of each sample image; obtain, from the plurality of sample images, the sample images corresponding to each score level, where the average aesthetic score of the sample images corresponding to each score level matches the corresponding score level; and obtain the aesthetic anchor knowledge base based on the aesthetic features of the sample images corresponding to the score levels.
Optionally, the knowledge base obtaining unit 420 is specifically configured to: sort the sample images corresponding to each score level based on the score variances of the sample images corresponding to that score level; take the sample images whose sorting positions are at the front among the sample images corresponding to each score level as the aesthetic reference images corresponding to that score level; extract the aesthetic features of the aesthetic reference images corresponding to each score level to obtain the aesthetic features corresponding to each score level; and construct the aesthetic anchor knowledge base based on the aesthetic features corresponding to the score levels.
Optionally, the knowledge base obtaining unit 420 is further specifically configured to obtain images of a plurality of specified scenes from sample images corresponding to the plurality of score levels, so as to obtain sample images of the plurality of specified scenes corresponding to each score level. Correspondingly, the knowledge base obtaining unit 420 is further specifically configured to select, from the sample images of each specified scene corresponding to each score level, a plurality of sample images with the ranking positions at the front positions as aesthetic reference images of each specified scene corresponding to each score level, and obtain aesthetic reference images corresponding to each score level based on the aesthetic reference images of each specified scene corresponding to each score level.
Optionally, the knowledge base obtaining unit 420 is specifically configured to obtain aesthetic score distribution information of each sample image according to the respective multiple aesthetic scores of each sample image, and obtain an average aesthetic score of each sample image based on the aesthetic score distribution information of each sample image.
Optionally, the knowledge base obtaining unit 420 is specifically configured to obtain the occurrence frequency of the average aesthetic score of each sample image, and take a plurality of average aesthetic scores, whose corresponding occurrence frequency meets the specified ordering condition, as a plurality of score levels.
The image evaluation unit 430 is specifically configured to obtain an aesthetic feature to be evaluated of the image to be evaluated through the feature extraction module of the aesthetic evaluation network model, calibrate the aesthetic feature to be evaluated of the image to be evaluated through the anchor point reference module of the aesthetic evaluation network model to obtain a calibrated aesthetic feature corresponding to each of the plurality of score levels, and score the image to be evaluated through the aesthetic decision module of the aesthetic evaluation network model based on the calibrated aesthetic feature corresponding to each of the plurality of score levels to obtain an aesthetic score output by the aesthetic evaluation network model.
Optionally, the image evaluation unit 430 is specifically configured to: divide, by the anchor point reference module of the aesthetic evaluation network model, the aesthetic features corresponding to each score level into a plurality of groups to obtain the plurality of grouped aesthetic features corresponding to each score level, where the aesthetic features in each group correspond to the same specified scene and the aesthetic features of different groups correspond to different specified scenes; obtain the feature similarities between the aesthetic feature to be evaluated and the plurality of grouped aesthetic features corresponding to each score level; obtain the target aesthetic features of the plurality of groups corresponding to each score level, where a target aesthetic feature is the aesthetic feature in each group having the largest feature similarity with the aesthetic feature to be evaluated; obtain the difference features between the aesthetic feature to be evaluated and the target aesthetic features of the plurality of groups corresponding to each score level; and obtain the calibration aesthetic feature corresponding to each score level based on the plurality of weights corresponding to the aesthetic feature to be evaluated and the plurality of grouped difference features corresponding to each score level.
Optionally, the image evaluation unit 430 is specifically configured to multiply, one by one, the plurality of weights of each score level corresponding to the aesthetic feature to be evaluated with the plurality of grouped difference features corresponding to the same score level to obtain a plurality of features to be spliced corresponding to each score level, and to splice the plurality of features to be spliced corresponding to each score level to obtain the calibration aesthetic feature corresponding to that score level.
Optionally, the image evaluation unit 430 is specifically configured to cause the anchor point reference module of the aesthetic evaluation network model to obtain an aesthetic score prediction difference value corresponding to each score level based on the calibrated aesthetic feature corresponding to that score level, and to cause the aesthetic decision module of the aesthetic evaluation network model to obtain the fusion weight of each score level based on the aesthetic score prediction difference value corresponding to that score level, obtain the multi-scale reference feature based on the fusion weight and the calibrated aesthetic feature corresponding to each score level, and predict the aesthetic score distribution and the average aesthetic score of the image to be evaluated based on the multi-scale reference feature.
Optionally, the image evaluation unit 430 is specifically configured to perform a weighted summation over the calibration aesthetic features corresponding to the score levels based on their respective fusion weights, so as to obtain the multi-scale reference feature.

As one way, as shown in fig. 12, the apparatus 400 further includes a model training unit 440 configured to acquire a training data set including a plurality of sample images and a plurality of aesthetic scores corresponding to each sample image, where the plurality of aesthetic scores corresponding to each sample image include scores output by a plurality of evaluators, and to train the neural network model to be trained based on the training data set and the aesthetic anchor knowledge base to obtain the trained aesthetic evaluation network model.
It should be noted that, in the present application, the device embodiment and the foregoing method embodiment correspond to each other, and specific principles in the device embodiment may refer to the content in the foregoing method embodiment, which is not described herein again.
An electronic device according to the present application will be described with reference to fig. 10.
Referring to fig. 10, based on the foregoing image evaluation method and apparatus, an embodiment of the present application further provides an electronic device 100 capable of executing the image evaluation method. The electronic device 100 includes one or more (only one is shown) processors 102, a memory 104, and a network module 106 coupled to one another. The memory 104 stores a program capable of executing the contents of the foregoing embodiments, and the processor 102 can execute the program stored in the memory 104.
Wherein the processor 102 may include one or more processing cores. The processor 102 connects various portions of the overall electronic device 100 using various interfaces and lines, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Optionally, the processor 102 may be implemented in hardware in at least one of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 102 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used for handling wireless communication. It will be appreciated that the modem may also not be integrated into the processor 102 and may instead be implemented by a separate communication chip.
The memory 104 may include random access memory (RAM) or read-only memory (ROM). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a stored program area and a stored data area, where the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (e.g., a touch function, a sound playing function, an image playing function), instructions for implementing the various method embodiments described above, and the like. The stored data area may also store data created by the terminal 100 in use (such as phonebook data, audio and video data, and chat records), and the like.
The network module 106 is configured to receive and transmit electromagnetic waves and to implement mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices, such as an audio playback device. The network module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory, and the like. The network module 106 may communicate with various networks, such as the Internet, an intranet, or a wireless network, or may communicate with other devices over a wireless network. The wireless network may include a cellular telephone network, a wireless local area network, or a metropolitan area network. For example, the network module 106 may interact with base stations.
Referring to fig. 11, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 800 has stored therein program code which can be invoked by a processor to perform the methods described in the method embodiments described above.
The computer readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 800 comprises a non-volatile computer readable medium (non-transitory computer-readable storage medium). The computer readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 810 may be compressed, for example, in a suitable form.
In summary, according to the image evaluation method, apparatus and electronic device provided by the present application, an image to be evaluated is first acquired, and an aesthetic anchor point knowledge base is acquired, where the aesthetic anchor point knowledge base includes the aesthetic features of a plurality of sample images corresponding to a plurality of score grades, and the plurality of aesthetic scores corresponding to each sample image include scores output by a plurality of evaluators. The image to be evaluated and the aesthetic anchor point knowledge base are then input into a pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the aesthetic evaluation network model. In this way, an aesthetic anchor point knowledge base comprising aesthetic features of a plurality of score grades can be introduced into the aesthetic evaluation of the image to be evaluated. Since the plurality of score grades in the knowledge base are obtained based on the respective aesthetic scores of the plurality of sample images, and the aesthetic scores corresponding to each sample image include scores output by a plurality of evaluators, the aesthetic evaluation network model can refer simultaneously to aesthetic features of different score grades while evaluating the image by means of the knowledge base, thereby drawing on the popular aesthetic evaluation experience of different score grades at the same time and making the output aesthetic score more comprehensive and accurate.
It should be noted that the above-mentioned embodiments are merely intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the above-mentioned embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. An image evaluation method, characterized in that the method comprises:
acquiring an image to be evaluated;
acquiring a generated aesthetic anchor point knowledge base, wherein the generation process of the aesthetic anchor point knowledge base comprises: acquiring a plurality of sample images and a plurality of aesthetic scores corresponding to each sample image, wherein the plurality of aesthetic scores corresponding to each sample image comprise scores output by a plurality of evaluators; obtaining an average aesthetic score of each sample image according to the plurality of aesthetic scores corresponding to each sample image; obtaining a plurality of score grades based on the average aesthetic score of each sample image; acquiring, from the plurality of sample images, the sample images corresponding to each score grade, wherein the average aesthetic score of the sample images corresponding to each score grade matches the corresponding score grade; sorting the sample images corresponding to each score grade based on the score variances of the sample images corresponding to each score grade; taking the sample images whose sorting positions are at the specified front positions among the sample images corresponding to each score grade as the aesthetic reference images corresponding to each score grade; and extracting the aesthetic features of the aesthetic reference images corresponding to each score grade to obtain the aesthetic features corresponding to each score grade;
Inputting the image to be evaluated and the aesthetic anchor point knowledge base into a pre-trained aesthetic evaluation network model to obtain aesthetic scores output by the aesthetic evaluation network model.
2. The method of claim 1, wherein the acquiring the sample image corresponding to each of the plurality of score levels from the plurality of sample images further comprises:
Respectively acquiring images of a plurality of specified scenes from sample images corresponding to the score levels to obtain sample images of the specified scenes corresponding to the score levels;
The taking the sample images whose sorting positions are at the specified front positions among the sample images corresponding to each score grade as the aesthetic reference images corresponding to each score grade comprises:
selecting a plurality of sample images with sorting positions at the front positions from sample images of each designated scene corresponding to each score grade respectively as aesthetic reference images of each designated scene corresponding to each score grade respectively;
based on the aesthetic reference image of each specified scene corresponding to each score grade, the aesthetic reference images corresponding to the score grades are obtained.
3. The method of claim 1, wherein the deriving an average aesthetic score for each sample image from the respective plurality of aesthetic scores for each sample image comprises:
According to the corresponding aesthetic scores of each sample image, obtaining aesthetic score distribution information of each sample image;
An average aesthetic score for each sample image is derived based on the aesthetic score distribution information for each sample image.
4. The method of claim 1, wherein deriving a plurality of score levels based on the average aesthetic score for each sample image comprises:
acquiring the occurrence frequency of the average aesthetic score of each sample image;
and taking the plurality of average aesthetic scores, corresponding to the occurrence frequencies of which meet the specified ordering condition, as a plurality of score grades.
5. The method of claim 1, wherein said inputting the image to be evaluated and the aesthetic anchor knowledge base into a pre-trained aesthetic evaluation network model to obtain an aesthetic score output by the aesthetic evaluation network model comprises:
obtaining aesthetic features to be evaluated of the image to be evaluated through a feature extraction module of the aesthetic evaluation network model;
Calibrating aesthetic features to be evaluated of the image to be evaluated through an anchor point reference module of the aesthetic evaluation network model and the aesthetic anchor point knowledge base to obtain calibrated aesthetic features corresponding to the score grades;
and enabling an aesthetic decision module of the aesthetic evaluation network model to score the image to be evaluated based on the calibrated aesthetic features corresponding to the score grades so as to obtain aesthetic scores output by the aesthetic evaluation network model.
6. The method of claim 5, wherein calibrating the aesthetic feature to be evaluated of the image to be evaluated by the anchor point reference module of the aesthetic evaluation network model and the aesthetic anchor point knowledge base to obtain the calibrated aesthetic feature corresponding to each of the plurality of score levels, comprises:
Dividing the aesthetic features corresponding to each score grade into a plurality of groups through an anchor point reference module of the aesthetic evaluation network model to obtain the aesthetic features of the groups corresponding to each score grade, wherein the aesthetic features in each group correspond to the same appointed scene, and the appointed scenes corresponding to the aesthetic features of different groups are different;
acquiring feature similarity between the aesthetic features to be evaluated and the aesthetic features of a plurality of groups corresponding to each score grade respectively;
acquiring target aesthetic features corresponding to the aesthetic features to be evaluated and a plurality of groups corresponding to each score grade, wherein the target aesthetic features are aesthetic features with the largest feature similarity with the aesthetic features to be evaluated in the aesthetic features of each group;
Acquiring difference features between the aesthetic features to be evaluated and target aesthetic features of a plurality of groups corresponding to each score grade respectively;
And obtaining the calibration aesthetic feature corresponding to each score grade based on the multiple weights corresponding to the aesthetic feature to be evaluated and the difference features of the multiple groups corresponding to each score grade.
7. The method of claim 6, wherein deriving the calibration aesthetic feature for each score level based on the plurality of weights for each score level corresponding to the aesthetic feature to be evaluated and the respective plurality of groupings of difference features for each score level comprises:
multiplying, one by one, the plurality of weights of each score grade corresponding to the aesthetic feature to be evaluated with the difference features of the plurality of groups corresponding to the same score grade to obtain a plurality of features to be spliced corresponding to each score grade;
And splicing the multiple to-be-spliced features corresponding to each score grade to obtain the corresponding calibration aesthetic features of each score grade.
8. The method of claim 6, wherein the method further comprises:
Enabling an anchor point reference module of the aesthetic evaluation network model to obtain an aesthetic score prediction difference value corresponding to each score grade based on the corresponding calibration aesthetic feature of each score grade;
The enabling an aesthetic decision module of the aesthetic evaluation network model to score the image to be evaluated based on the calibrated aesthetic features corresponding to the score grades so as to obtain the aesthetic score output by the aesthetic evaluation network model comprises:
Enabling an aesthetic decision module of the aesthetic evaluation network model to obtain a fusion weight of each score grade based on the aesthetic score prediction difference value corresponding to each score grade;
based on the fusion weight corresponding to each score grade and the calibration aesthetic feature corresponding to each score grade, obtaining a multi-scale reference feature;
predicting an aesthetic score distribution and an average aesthetic score of the image to be evaluated based on the multi-scale reference features.
9. The method of claim 8, wherein the deriving the multi-scale reference feature based on the fusion weight for each score level and the calibration aesthetic feature for each score level comprises:
And weighting and summing the calibration aesthetic features corresponding to each score grade based on the fusion weight corresponding to each score grade to obtain the multi-scale reference features.
10. The method of any one of claims 1-9, wherein before the inputting of the image to be evaluated and the aesthetic anchor point knowledge base into the pre-trained aesthetic evaluation network model to obtain the aesthetic score output by the aesthetic evaluation network model, the method further comprises:
acquiring a training data set, wherein the training data set comprises a plurality of sample images and a plurality of aesthetic scores corresponding to each sample image, and the aesthetic scores corresponding to each sample image comprise scores output by a plurality of evaluators;
Training the neural network model to be trained based on the training data set and the aesthetic sense anchor point knowledge base to obtain a trained aesthetic evaluation network model.
11. An image evaluation device, characterized in that the device comprises:
An image acquisition unit for acquiring an image to be evaluated;
a knowledge base acquisition unit, configured to acquire a generated aesthetic anchor point knowledge base, wherein the generation process of the aesthetic anchor point knowledge base comprises: acquiring a plurality of sample images and a plurality of aesthetic scores corresponding to each sample image, wherein the plurality of aesthetic scores corresponding to each sample image comprise scores output by a plurality of evaluators; acquiring an average aesthetic score of each sample image according to the plurality of aesthetic scores corresponding to each sample image; acquiring a plurality of score grades based on the average aesthetic score of each sample image; acquiring, from the plurality of sample images, the sample images corresponding to each score grade, wherein the average aesthetic score of the sample images corresponding to each score grade matches the corresponding score grade; sorting the sample images corresponding to each score grade based on the score variances of the sample images corresponding to each score grade; taking the sample images whose sorting positions are at the specified front positions as the aesthetic reference images corresponding to each score grade; and extracting the aesthetic features of the aesthetic reference images corresponding to each score grade to obtain the aesthetic features corresponding to each score grade;
The image evaluation unit is used for inputting the images to be evaluated and the aesthetic anchor point knowledge base into a pre-trained aesthetic evaluation network model so as to acquire aesthetic scores output by the aesthetic evaluation network model.
12. An electronic device comprising a processor and a memory, one or more programs stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-10.
13. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, wherein the program code, when being executed by a processor, performs the method of any of claims 1-10.
GR01 Patent grant