
CN113902762A - New scene adaptation method, device, equipment and storage medium - Google Patents

New scene adaptation method, device, equipment and storage medium

Info

Publication number
CN113902762A
CN113902762A (application CN202111324963.3A)
Authority
CN
China
Prior art keywords
new
target
sample
data set
new scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111324963.3A
Other languages
Chinese (zh)
Inventor
林楚然
王福泉
程力行
袁振华
贾东风
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qiku Software Shenzhen Co Ltd
Original Assignee
Qiku Software Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qiku Software Shenzhen Co Ltd filed Critical Qiku Software Shenzhen Co Ltd
Priority to CN202111324963.3A
Publication of CN113902762A
Legal status: Withdrawn (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of deep learning and discloses a new scene adaptation method, device, equipment and storage medium. The method comprises the following steps: selecting a target positive sample from an old data set; acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample; fusing the target positive sample and the target negative sample to obtain a new sample; determining a classification result corresponding to the new sample; forming a new data set according to the new samples and the corresponding classification results; and training a preset classification model according to the old data set and the new data set. In this manner, sample data in the existing old data set is used as positive samples and scene images are used as negative samples; fusing them generates a new data set, and the preset classification model is trained according to the old data set and the new data set, so that it can be quickly adapted to a new scene without collecting a large amount of new scene sample data containing a classification target, shortening the adaptation time and improving the adaptation efficiency.

Description

New scene adaptation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of deep learning technologies, and in particular, to a new scene adaptation method, apparatus, device, and storage medium.
Background
When an existing classification task model faces a new scene, a large amount of new scene sample data must be collected in advance, classification results must be labeled, and the model must be retrained on that sample set. For example, a classification task model for vehicle classification may classify vehicles in outdoor scenes well, distinguishing three types (cars, vans and other vehicles), but when a new scene requirement arises, such as classifying vehicles in an underground parking lot, its classification performance degrades. The new scene may also be another environment, such as mountainous areas, pastures, night, or rainy days. To handle a new scene, the existing scheme typically collects a large number of vehicle samples from the underground parking lot and retrains the classification task model. The prior art therefore has the following disadvantages: each new scene requires collecting a large amount of data, which takes a long time; the model cannot be adapted to a new scene quickly, adaptation efficiency is low, and the value of the original scene samples is not exploited.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a new scene adaptation method, device, equipment and storage medium, and aims to solve the technical problem that, when an existing classification task model faces a new scene, a large amount of new scene sample data must be collected, which takes a long time and makes adaptation inefficient.
In order to achieve the above object, the present invention provides a new scene adaptation method, which comprises the following steps:
selecting a target positive sample from the old data set;
acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample;
fusing the target positive sample and the target negative sample to obtain a new sample;
determining a classification result corresponding to the new sample;
forming a new data set according to the new samples and the corresponding classification results;
and training a preset classification model according to the old data set and the new data set.
Optionally, the fusing the target positive sample and the target negative sample to obtain a new sample includes:
randomly generating an area ratio, a starting position and an aspect ratio;
intercepting data to be fused from the target negative sample according to the area proportion and the length-width ratio;
and fusing the data to be fused with the target positive sample according to the initial position to obtain a new sample.
Optionally, the determining a classification result corresponding to the new sample includes:
obtaining an initial classification result corresponding to the target positive sample;
and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
Optionally, the obtaining an initial classification result corresponding to the target positive sample includes:
acquiring a target category corresponding to the target positive sample;
and performing one-hot encoding on the target category based on all categories to obtain an initial classification result corresponding to the target positive sample.
Optionally, after the intercepting the data to be fused from the target negative sample according to the area ratio and the aspect ratio, the method further includes:
randomly generating a preset rotation angle;
and fusing the data to be fused with the target positive sample according to the preset rotation angle and the initial position to obtain a new sample.
Optionally, the randomly generating the area ratio includes:
the area proportion is randomly generated based on Dirichlet distribution.
Optionally, the fusing the target positive sample and the target negative sample to obtain a new sample includes:
randomly generating a target area proportion, a target starting position, a target length-width ratio and a plurality of groups of current rotation angles;
intercepting current data to be fused from the target negative sample according to the target area proportion and the target length-width ratio;
carrying out mirror image overturning processing on the current data to be fused to obtain a plurality of current mirror image data;
and fusing each current mirror image data with the target positive sample according to each group of the current rotation angle and the target initial position respectively to obtain a plurality of new samples.
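The mirror-flipping step above can be illustrated with numpy; the small integer patch below is a hypothetical stand-in for the current data to be fused intercepted from the target negative sample.

```python
import numpy as np

# Stand-in for the current data to be fused, intercepted from the target negative sample.
patch = np.arange(6).reshape(2, 3)

# Mirror-flipping produces several current mirror image versions of the same patch,
# each of which can then be fused with the target positive sample.
mirrors = [patch, np.fliplr(patch), np.flipud(patch)]
```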
Optionally, the fusing the target positive sample and the target negative sample to obtain a new sample includes:
randomly generating a plurality of area ratios, a plurality of initial positions and a plurality of length-width ratios;
randomly combining the area proportions, the initial positions and the length-width ratios to obtain current area proportions, current initial positions and current length-width ratios corresponding to groups;
intercepting target data from the target negative sample according to the current area proportion and the current aspect ratio of each group;
and fusing the target data and the target positive sample according to the current initial position of each group to obtain a plurality of new samples.
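The random-combination step above can be sketched as follows. The list lengths, coordinate ranges and the aspect-ratio interval are illustrative assumptions, not values taken from the text.

```python
import random

# Independently generate several candidate values for each parameter
# (lengths and ranges below are assumptions for illustration only).
area_ratios = [random.betavariate(100, 100) for _ in range(4)]
start_positions = [(random.randint(0, 64), random.randint(0, 64)) for _ in range(4)]
aspect_ratios = [random.uniform(0.5, 2.0) for _ in range(4)]

# Randomly combine the three lists into groups of (current area ratio,
# current start position, current aspect ratio); each group yields one new sample.
groups = [(random.choice(area_ratios),
           random.choice(start_positions),
           random.choice(aspect_ratios)) for _ in range(8)]
```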
Optionally, the determining a classification result corresponding to the new sample includes:
obtaining an initial classification result corresponding to the target positive sample;
and determining classification results corresponding to the new samples according to the current area proportion of each group and the initial classification result.
Optionally, after the training of the preset classification model according to the old data set and the new data set, the method further includes:
determining a cross entropy loss value corresponding to the preset classification model;
and when the cross entropy loss value is smaller than a preset threshold value, obtaining the trained preset classification model.
In addition, in order to achieve the above object, the present invention further provides a new scene adaptation apparatus, including:
the selecting module is used for selecting a target positive sample from the old data set;
the intercepting module is used for acquiring a new scene image and intercepting a target area from the new scene image to obtain a target negative sample;
the fusion module is used for fusing the target positive sample and the target negative sample to obtain a new sample;
the classification module is used for determining a classification result corresponding to the new sample;
the generating module is used for forming a new data set according to the new samples and the corresponding classification results;
and the training module is used for training a preset classification model according to the old data set and the new data set.
Optionally, the fusion module is further configured to randomly generate an area ratio, an initial position, and an aspect ratio, intercept data to be fused from the target negative sample according to the area ratio and the aspect ratio, and fuse the data to be fused and the target positive sample according to the initial position to obtain a new sample.
Optionally, the classification module is further configured to obtain an initial classification result corresponding to the target positive sample,
and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
Optionally, the classification module is further configured to obtain a target class corresponding to the target positive sample, and perform one-hot encoding on the target class based on all classes to obtain an initial classification result corresponding to the target positive sample.
Optionally, the fusion module is further configured to randomly generate a preset rotation angle, and fuse the data to be fused with the target positive sample according to the preset rotation angle and the start position to obtain a new sample.
Optionally, the fusion module is further configured to randomly generate an area ratio based on a dirichlet distribution.
Optionally, the fusion module is further configured to randomly generate a target area ratio, a target start position, a target length-width ratio, and multiple groups of current rotation angles, intercept current data to be fused from the target negative sample according to the target area ratio and the target length-width ratio, perform mirror image flipping processing on the current data to be fused to obtain multiple current mirror image data, and fuse each current mirror image data with the target positive sample according to each group of current rotation angles and the target start position to obtain multiple new samples.
Optionally, the fusion module is further configured to randomly generate a plurality of area ratios, a plurality of start positions, and a plurality of aspect ratios, randomly combine the plurality of area ratios, the plurality of start positions, and the plurality of aspect ratios to obtain a plurality of groups of corresponding current area ratios, current start positions, and current aspect ratios, intercept target data from the target negative sample according to the current area ratios and the current aspect ratios of the groups, and fuse the target data and the target positive sample according to the current start positions of the groups to obtain a plurality of new samples.
In addition, to achieve the above object, the present invention further provides a new scene adapting device, including: a memory, a processor and a new scene adaptation program stored on the memory and executable on the processor, the new scene adaptation program being configured to implement the new scene adaptation method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium, which stores a new scene adaptation program, and the new scene adaptation program, when executed by a processor, implements the new scene adaptation method as described above.
The invention selects a target positive sample from an old data set; acquires a new scene image and intercepts a target area from it to obtain a target negative sample; fuses the target positive sample and the target negative sample to obtain a new sample; determines the classification result corresponding to the new sample; forms a new data set according to the new samples and the corresponding classification results; and trains a preset classification model according to the old data set and the new data set. In this manner, sample data in the existing old data set is used as positive samples and scene images are used as negative samples; fusing them generates a new data set, and the preset classification model is trained according to the old data set and the new data set, so that it can be quickly adapted to a new scene without collecting a large amount of new scene sample data containing a classification target, shortening the adaptation time and improving the adaptation efficiency.
Drawings
Fig. 1 is a schematic structural diagram of a new scene adaptation device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of a new scene adaptation method according to the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of the new scene adaptation method according to the present invention;
FIG. 4 is a flowchart illustrating a third embodiment of a new scene adaptation method according to the present invention;
FIG. 5 is a flowchart illustrating a second embodiment of a new scene adaptation method according to the present invention;
fig. 6 is a block diagram of a first embodiment of the new scene adapter of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a new scene adaptation device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the new scene adaptation apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface, a wireless interface. Optionally, the network interface 1004 includes a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk Memory. Alternatively, the memory 1005 may be a storage device independent of the processor 1001.
It will be appreciated by those skilled in the art that the architecture shown in fig. 1 does not constitute a limitation of the new scene adaptation device and may comprise more or less components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a new scene adaptation program.
In the new scene adapter apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the new scene adaptation apparatus of the present invention may be arranged in the new scene adaptation apparatus, and the new scene adaptation apparatus invokes, through the processor 1001, the new scene adaptation program stored in the memory 1005, and executes the new scene adaptation method provided in the embodiment of the present invention.
An embodiment of the present invention provides a new scene adaptation method, and referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of the new scene adaptation method according to the present invention.
In this embodiment, the new scene adaptation method includes the following steps:
step S10: a target positive sample is selected from the old dataset.
It can be understood that the execution subject of this embodiment is a new scene adaptation device, and the new scene adaptation device may be a computer, an intelligent AI box, an intelligent camera, or other devices with AI inference capability, which is not limited in this embodiment.
It should be noted that the old data set is an existing data set containing sample data of existing scenes together with labeled classification results. For example, in the case of vehicle classification, the old data set is an outdoor scene data set containing previously collected image data of each type of vehicle in outdoor scenes and the corresponding labeled classification results. In a specific implementation, each image may serve as a sample; a sample is the object the algorithm model directly operates on; a positive sample is one belonging to a specific class, and samples not belonging to that class are negative samples. Optionally, samples are selected from the old data set in order as target positive samples; optionally, image data of different types is selected from the old data set as target positive samples according to each sample's classification result.
Step S20: and acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample.
It can be understood that a scene is an actual environment in which artificial intelligence is applied, and different scenes often have different characteristics. The scene shown in the new scene image is not among the existing scenes, and the image may contain no classification target.
Step S30: and fusing the target positive sample and the target negative sample to obtain a new sample.
In a specific implementation, one new sample may be generated from one target positive sample and one target negative sample: for example, randomly generate an area ratio, a start position and an aspect ratio, and fuse the two samples according to them to obtain a new sample. Alternatively, a plurality of new samples may be generated from one target positive sample and one target negative sample: for example, randomly generate an area ratio, a start position and an aspect ratio, intercept data to be fused from the target negative sample according to the area ratio and aspect ratio, transform that data into several different versions, and fuse each version with the target positive sample according to the start position to obtain a plurality of new samples. Specifically, a plurality of different new samples can also be generated from several groups of different area ratios, start positions and aspect ratios, or by rotating the data to be fused through different angles.
Step S40: and determining a classification result corresponding to the new sample.
For example, in the case of vehicle classification, the positive sample is a sample corresponding to a cart a in an outdoor scene, that is, the classification result is the cart a, and the classification result of the corresponding new sample is also the cart a.
Further, in order to improve the classification accuracy of the preset classification model, one-hot coding is performed on the classification result of the positive sample: one array represents one classification category, the array contains exactly one significant bit 1, and different positions of that significant bit correspond to different classification categories. The classification result corresponding to the new sample is obtained by multiplying the classification result of the positive sample by a preset fusion coefficient P. For example, define classes for N kinds of vehicles and assume N = 3: cart A, truck B and other vehicles C, where A, B and C denote vehicle categories represented as bracketed arrays. When a positive sample of category A is fused with a negative sample, assuming the preset fusion coefficient P is 0.5, the resulting classification result is A = [0.5, 0, 0]. Specifically, the preset fusion coefficient P is equal to the randomly generated area ratio.
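The soft-label construction just described (one-hot encode the positive sample's category, then scale by the fusion coefficient P) can be sketched as a minimal helper; the function name and signature are illustrative, not from the patent.

```python
import numpy as np

def soft_label(target_index: int, num_classes: int, fusion_coefficient: float) -> np.ndarray:
    """One-hot encode the positive sample's category, then scale the single
    significant bit by the preset fusion coefficient P (equal to the area ratio)."""
    label = np.zeros(num_classes)
    label[target_index] = fusion_coefficient
    return label

# N = 3 vehicle classes: cart A (index 0), truck B (1), other vehicles C (2).
# Fusing a class-A positive sample with P = 0.5 yields A = [0.5, 0, 0].
new_label = soft_label(0, 3, 0.5)
```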
Step S50: and forming a new data set according to the new samples and the corresponding classification results.
Step S60: and training a preset classification model according to the old data set and the new data set.
It should be understood that the sample data in the old data set and the new data set are input into the preset classification model, the model is trained against the classification results, and a loss function value is determined. When the loss function value is greater than the preset threshold, the model is not yet well trained: the model parameters are adjusted and iterative training continues until the loss function value is less than or equal to the preset threshold, yielding a trained task classification model adapted to the new scene. The preset classification model may be a basic classification model, from which task classification models adapted to various scenes are trained on the corresponding sample data sets.
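The iterative training on the merged old and new data sets (compute a loss, adjust parameters, stop once the loss drops below a preset threshold) can be sketched with a toy linear model in numpy. The data shapes, learning rate and threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_cross_entropy(logits, labels):
    """Mean cross entropy against (possibly down-weighted) soft labels."""
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean((labels * log_probs).sum(axis=1))

# Old data set: hard one-hot labels; new data set: labels scaled by P = 0.5.
X_old, Y_old = rng.normal(size=(6, 4)), np.eye(3)[rng.integers(0, 3, 6)]
X_new, Y_new = rng.normal(size=(6, 4)), 0.5 * np.eye(3)[rng.integers(0, 3, 6)]
X, Y = np.vstack([X_old, X_new]), np.vstack([Y_old, Y_new])

W = np.zeros((4, 3))                       # toy linear stand-in for the model
threshold, lr = 0.05, 0.1
loss = softmax_cross_entropy(X @ W, Y)
initial_loss = loss
for _ in range(500):
    if loss <= threshold:                  # trained model obtained
        break
    probs = np.exp(X @ W) / np.exp(X @ W).sum(axis=1, keepdims=True)
    grad = Y.sum(axis=1, keepdims=True) * probs - Y   # d(loss)/d(logits)
    W -= lr * X.T @ grad / len(X)          # adjust model parameters
    loss = softmax_cross_entropy(X @ W, Y)
```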
In another example, the preset classification model is a task classification model corresponding to an old scene, a new data set is generated through an old data set corresponding to the old scene and a small number of new scene images, and the task classification model corresponding to the old scene is subjected to transfer learning through the new data set to obtain a task classification model adaptive to the new scene.
Further, after the step S60, the method further includes: determining a cross entropy loss value corresponding to the preset classification model; and when the cross entropy loss value is smaller than a preset threshold value, obtaining the trained preset classification model.
It should be noted that, for example, the new sample corresponds to the classification result A = [0.5, 0, 0]: although the value changes, the sample still belongs to category A. This amounts to imposing a penalty on category A, because part of the image is now filled with the new scene image and the sample no longer belongs one hundred percent (1 = 100%) to category A; the penalty is therefore down-weighted when training the preset classification model, and the cross entropy loss value is calculated accordingly. Specifically, the cross entropy loss value is calculated by formula (1):
loss(x, class) = weight[class] * (-x[class] + log(∑_j exp(x[j])))    (1)
where weight is the inter-class weight coefficient and class is the category label. For category A, class = 0 and x[class] = a1; when the penalty is imposed on category A, assuming the preset fusion coefficient P is 0.5, x[class] becomes a1 * 0.5.
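Formula (1) can be transcribed directly into code. The logit values below are hypothetical; only the formula itself comes from the text.

```python
import math

def weighted_ce(x, cls, weight):
    """Formula (1): loss(x, class) = weight[class] * (-x[class] + log(sum_j exp(x[j])))."""
    return weight[cls] * (-x[cls] + math.log(sum(math.exp(v) for v in x)))

# Hypothetical model outputs for a class-A sample (class = 0), uniform class weights.
x = [2.0, 0.1, -0.3]
loss_a = weighted_ce(x, 0, [1.0, 1.0, 1.0])
# Penalising class A with P = 0.5 replaces x[class] = a1 by a1 * 0.5:
loss_a_penalised = weighted_ce([2.0 * 0.5, 0.1, -0.3], 0, [1.0, 1.0, 1.0])
```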
It should be understood that this embodiment adapts the existing algorithm model to the new scene without any positive sample of the new scene while achieving the expected model performance; that is, it achieves zero-sample adaptation to the new scene. In the vehicle classification case, the algorithm model can be adapted without collecting any vehicle samples from the underground parking lot: only some negative samples containing no vehicles need to be collected, and the model can then correctly classify vehicles in the underground parking lot.
This embodiment selects a target positive sample from an old data set; acquires a new scene image and intercepts a target area from it to obtain a target negative sample; fuses the target positive sample and the target negative sample to obtain a new sample; determines the classification result corresponding to the new sample; forms a new data set according to the new samples and the corresponding classification results; and trains a preset classification model according to the old data set and the new data set. In this manner, sample data in the existing old data set is used as positive samples and scene images are used as negative samples; fusing them generates a new data set, and the preset classification model is trained according to the old data set and the new data set, so that it can be quickly adapted to a new scene without collecting a large amount of new scene sample data containing a classification target, shortening the adaptation time and improving the adaptation efficiency.
Referring to fig. 3, fig. 3 is a flowchart illustrating a new scene adaptation method according to a second embodiment of the present invention.
Based on the first embodiment, the step S30 of the new scene adaptation method of this embodiment includes:
step S301: the area ratio, starting position and aspect ratio are randomly generated.
It is understood that the fusion of this embodiment is randomized in three respects: a random area ratio (i.e. the preset fusion coefficient P), a random start position, and a random aspect ratio. In a specific implementation, the random area ratio P is any decimal between 0 and 1. The random start position means that the same image block can be fused to different positions; the random aspect ratio means that image blocks of the same area can take different aspect ratios.
Step S302: and intercepting data to be fused from the target negative sample according to the area proportion and the aspect ratio.
In a specific implementation, an intercepting frame is determined according to the area ratio and the aspect ratio, the frame is placed at a random location on the target negative sample, and the framed data is intercepted to obtain the data to be fused.
Step S303: and fusing the data to be fused with the target positive sample according to the initial position to obtain a new sample.
It should be noted that an insertion position is found in the target positive sample according to the start position, and the data to be fused is inserted there. Optionally, the insertion position corresponds to the center of the data to be fused; optionally, it corresponds to the top-left corner of the data to be fused. The data to be fused is pasted onto the target positive sample, covering the corresponding positive sample data, to obtain a new sample.
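Steps S301 to S303 can be sketched as the following crop-and-paste fusion. Solving the patch size from h * w = area and h / w = aspect ratio, and framing the negative sample's top-left corner, are illustrative assumptions; the text only fixes the area ratio, aspect ratio and start position.

```python
import numpy as np

def fuse(positive: np.ndarray, negative: np.ndarray, area_ratio: float,
         aspect_ratio: float, start: tuple) -> np.ndarray:
    """Intercept a patch from the negative sample whose area is area_ratio times
    the positive sample's area, with height/width = aspect_ratio, and paste it
    onto a copy of the positive sample at the given start (top-left) position."""
    ph, pw = positive.shape[:2]
    area = area_ratio * ph * pw
    h = int(round(np.sqrt(area * aspect_ratio)))   # h * w = area, h / w = aspect_ratio
    w = int(round(np.sqrt(area / aspect_ratio)))
    y0, x0 = start
    h = min(h, ph - y0, negative.shape[0])         # clip so the patch stays in bounds
    w = min(w, pw - x0, negative.shape[1])
    fused = positive.copy()
    fused[y0:y0 + h, x0:x0 + w] = negative[:h, :w] # framing the negative's corner for simplicity
    return fused

# 10x10 toy "images": zeros as the positive sample, ones as the negative sample.
new_sample = fuse(np.zeros((10, 10)), np.ones((10, 10)), 0.25, 1.0, (0, 0))
```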
Accordingly, the step S40 includes: obtaining an initial classification result corresponding to the target positive sample; and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
Taking vehicle classification as an example, assume the target positive sample belongs to cart class A, with the corresponding initial classification result represented by the array A = [1, 0, 0]. The class-A positive sample is fused with the negative sample; assuming the area ratio P (i.e. the preset fusion coefficient P) is 0.5, the classification result of the new sample is determined from the area ratio P and the initial classification result as A = [0.5, 0, 0].
Specifically, the obtaining of the initial classification result corresponding to the target positive sample includes: acquiring a target class corresponding to the target positive sample; and one-hot encoding the target class based on all classes to obtain the initial classification result corresponding to the target positive sample.
Note that the target class is one-hot encoded according to all preset classes to determine the array form corresponding to the target class, that is, the initial classification result. For example, if all classes are cart A, truck B, and other vehicles C, and the target class is A, the corresponding array is represented as A = [1, 0, 0].
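The encoding and label-scaling steps can be sketched with two hypothetical helpers (`one_hot` and `fused_label` are illustrative names, not from the document):

```python
def one_hot(target, all_classes):
    # One-hot encode the target class over the preset class list, matching
    # the example: classes [cart A, truck B, other C], target A -> [1, 0, 0].
    return [1 if c == target else 0 for c in all_classes]

def fused_label(initial, p):
    # Scale the one-hot label by the area ratio P, reproducing the earlier
    # example: [1, 0, 0] with P = 0.5 -> [0.5, 0, 0].
    return [p * v for v in initial]
```

With these, the new sample's label is `fused_label(one_hot("A", ["A", "B", "C"]), 0.5)`.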
Further, after the step S302, the method further includes: randomly generating a preset rotation angle; and fusing the data to be fused with the target positive sample according to the preset rotation angle and the initial position to obtain a new sample.
It can be understood that the preset rotation angle is any angle between 0 and 360 degrees. During fusion, the target positive sample is kept upright, the data to be fused is rotated by the preset rotation angle, and the rotated data to be fused is attached to the target positive sample, overwriting the target positive sample's data in that region, to obtain a new sample.
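A hedged sketch of the rotation step; to stay within plain NumPy it only supports multiples of 90 degrees, whereas the document allows any angle in 0–360 degrees (a general angle would need an interpolating rotation from an imaging library):

```python
import numpy as np

def rotate_patch(patch, angle_deg):
    # Illustrative helper: rotate the data to be fused before pasting.
    # Restricted to multiples of 90 degrees so np.rot90 suffices.
    assert angle_deg % 90 == 0
    return np.rot90(patch, k=(angle_deg // 90) % 4)
```

Rotating a 2 × 3 patch by 90 degrees yields a 3 × 2 patch, so the insertion region on the positive sample must be sized after rotation.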
Specifically, the randomly generating the area ratio includes: the area proportion is randomly generated based on Dirichlet distribution.
In this embodiment, a Beta distribution (the two-component special case of the Dirichlet distribution) is used to randomly generate the area ratio. The interface numpy.random.beta(100, 100) outputs a fraction between 0 and 1, with most draws falling at or near 0.5.
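The sampling described above can be reproduced as follows; Beta(100, 100) is the two-component special case of the Dirichlet distribution, and with such large, equal parameters its draws concentrate tightly around 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
# Equivalent of numpy.random.beta(100, 100): Beta(a=100, b=100) has mean 0.5
# and standard deviation of roughly 0.035, so almost all area ratios land
# at or near 0.5, as the text notes.
samples = rng.beta(100, 100, size=10_000)
assert samples.min() > 0 and samples.max() < 1
assert abs(samples.mean() - 0.5) < 0.01
```

Smaller parameters (e.g. Beta(1, 1), the uniform distribution) would spread the area ratio over the whole 0–1 range instead.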
In this embodiment, a target positive sample is selected from an old data set; a new scene image is acquired, and a target area is intercepted from the new scene image to obtain a target negative sample; an area ratio, a start position, and an aspect ratio are randomly generated; data to be fused is intercepted from the target negative sample according to the area ratio and the aspect ratio; the data to be fused is fused with the target positive sample according to the start position to obtain a new sample; a classification result corresponding to the new sample is determined; a new data set is formed from the new samples and the corresponding classification results; and a preset classification model is trained according to the old data set and the new data set. In this way, sample data in the existing old data set serve as positive samples and the scene image serves as a negative sample; based on the randomly generated area ratio, start position, and aspect ratio, the positive and negative samples are fused to generate a new data set, and the preset classification model is trained according to the old data set and the new data set, so that the preset classification model quickly adapts to the new scene without collecting a large amount of new-scene sample data containing the classification targets, shortening the adaptation time and improving the adaptation efficiency.
Referring to fig. 4, fig. 4 is a flowchart illustrating a new scene adaptation method according to a third embodiment of the present invention.
Based on the first embodiment, the step S30 of the new scene adaptation method of this embodiment includes:
step S304: and randomly generating a target area proportion, a target starting position, a target length-width ratio and a plurality of groups of current rotation angles.
Step S305: and intercepting the current data to be fused from the target negative sample according to the target area proportion and the target length-width ratio.
Step S306: and carrying out mirror image turning processing on the current data to be fused to obtain a plurality of current mirror image data.
It can be understood that the mirror flipping process may be a left-right flip or an up-down flip; flipping in different ways yields a plurality of current mirror image data.
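The two flips can be sketched directly with NumPy (`mirror_variants` is an illustrative helper name):

```python
import numpy as np

def mirror_variants(patch):
    # The two flips the text mentions, left-right and up-down, applied to
    # the same patch to produce multiple current mirror data.
    return [np.fliplr(patch), np.flipud(patch)]
```

Each variant has the same shape and area as the original patch, so the downstream fusion step is unaffected by which flip is chosen.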
Step S307: and fusing each current mirror image data with the target positive sample according to each group of the current rotation angle and the target initial position respectively to obtain a plurality of new samples.
It should be noted that the insertion position at which the current mirror data is inserted into the target positive sample is determined according to the target start position, each piece of current mirror data is rotated by each group's current rotation angle, and the rotated current mirror data are inserted at that position to obtain a plurality of new samples. For example, with three groups of current rotation angles and three pieces of current mirror data, there are 9 combinations, generating 9 new samples.
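The 3 × 3 = 9 combination count from the example can be checked with a short sketch (the specific angle values are illustrative):

```python
from itertools import product

angles = [0, 90, 180]           # three groups of current rotation angles (example values)
mirrors = ["orig", "lr", "ud"]  # three mirrored versions of the patch
# Every (angle, mirror) pairing produces one new sample: 3 x 3 = 9.
combos = list(product(angles, mirrors))
assert len(combos) == 9
```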
The embodiment selects a target positive sample from an old data set; acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample; randomly generating a target area proportion, a target starting position, a target length-width ratio and a plurality of groups of current rotation angles; intercepting current data to be fused from a target negative sample according to the target area proportion and the target length-width ratio; carrying out mirror image turning processing on the current data to be fused to obtain a plurality of current mirror image data; fusing each current mirror image data with a target positive sample according to each group of current rotation angles and a target initial position respectively to obtain a plurality of new samples; determining a classification result corresponding to the new sample; forming a new data set according to the new samples and the corresponding classification results; and training a preset classification model according to the old data set and the new data set. 
By the mode, the sample data in the existing old data set is used as a positive sample, the scene image is used as a negative sample, different intercepting and rotating processing is carried out on the negative sample based on the randomly generated area proportion, the initial position, the length-width ratio and the multiple groups of rotating angles, multiple mirror image data are generated according to different overturning modes, the positive sample and the processed data are fused to generate multiple new samples, a new data set is generated, the preset classification model is trained according to the old data set and the new data set, the preset classification model is enabled to be rapidly adapted to the new scene, a large number of training samples are rapidly generated, a large amount of sample data of the new scene containing the classification target does not need to be collected, adaptation time is shortened, and adaptation efficiency is improved.
Referring to fig. 5, fig. 5 is a flowchart illustrating a new scene adaptation method according to a fourth embodiment of the present invention.
Based on the first embodiment, the step S30 of the new scene adaptation method of this embodiment includes:
step S308: several area ratios, several starting positions and several aspect ratios are randomly generated.
Step S309: and randomly combining the area proportions, the initial positions and the length-width ratios to obtain current area proportions, current initial positions and current length-width ratios corresponding to the groups.
It will be appreciated that sets of corresponding current area ratios, current start positions, and current aspect ratios are generated based on the random permutation combination.
Step S310: and intercepting target data from the target negative sample according to the current area proportion and the current aspect ratio of each group.
It should be noted that, the target data corresponding to each group is intercepted from the target negative sample based on the current area ratio and the current aspect ratio of each group, so as to obtain a plurality of different target data.
Step S311: and fusing the target data and the target positive sample according to the current initial position of each group to obtain a plurality of new samples.
It can be understood that the insertion positions corresponding to the target data are determined according to the current starting positions of the groups, and the target data and the target positive samples are fused based on the insertion positions to obtain a plurality of new samples. For example, three area ratios, three start positions, and three aspect ratios are randomly generated, 9 combinations are randomly generated, 9 pieces of target data are randomly extracted from the target negative sample according to the area ratios and the aspect ratios in the 9 combinations, and each piece of target data is fused with the target positive sample according to the start positions in the 9 combinations to obtain 9 new samples.
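One hedged reading of the "random combining" step — a full Cartesian product of three values each would give 27 triples, while the example keeps 9 — is to sample 9 distinct triples from the product (all values illustrative):

```python
import random
from itertools import product

random.seed(0)
areas = [0.3, 0.5, 0.7]                        # illustrative area ratios
starts = [(0.1, 0.1), (0.5, 0.5), (0.8, 0.2)]  # illustrative relative start positions
aspects = [0.5, 1.0, 2.0]                      # illustrative aspect ratios

# The full product has 3 * 3 * 3 = 27 triples; keep 9 distinct random ones,
# matching the example's count of 9 combinations.
all_combos = list(product(areas, starts, aspects))
groups = random.sample(all_combos, 9)
assert len(groups) == 9 and len(set(groups)) == 9
```

Each triple then drives one crop-and-paste, yielding 9 new samples from a single positive/negative pair.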
Accordingly, the step S40 includes: obtaining an initial classification result corresponding to the target positive sample; and determining classification results corresponding to the new samples according to the current area proportion of each group and the initial classification result.
It should be noted that, taking vehicle classification as an example, assume the target positive sample belongs to cart class A, and its initial classification result is represented by the array A = [1, 0, 0]. The class-A positive sample is fused with a plurality of target data; assuming the four groups' corresponding area ratios P are 0.5, 0.6, 0.4, and 0.3, the classification results corresponding to the plurality of new samples are determined as A1 = [0.5, 0, 0], A2 = [0.6, 0, 0], A3 = [0.4, 0, 0], and A4 = [0.3, 0, 0].
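The per-group label computation from the example values can be sketched as:

```python
initial = [1.0, 0.0, 0.0]        # class A one-hot initial classification result
ratios = [0.5, 0.6, 0.4, 0.3]    # the four groups' area ratios P
# Scale the initial result by each group's P to get one label per new sample.
labels = [[p * v for v in initial] for p in ratios]
assert labels == [[0.5, 0.0, 0.0], [0.6, 0.0, 0.0],
                  [0.4, 0.0, 0.0], [0.3, 0.0, 0.0]]
```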
The embodiment selects a target positive sample from an old data set; acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample; randomly generating a plurality of area ratios, a plurality of initial positions and a plurality of length-width ratios; randomly combining the area ratios, the initial positions and the length-width ratios to obtain current area ratios, current initial positions and current length-width ratios corresponding to the groups; intercepting target data from the target negative sample according to the current area proportion and the current length-width ratio of each group; fusing the target data and the target positive sample according to the current initial position of each group to obtain a plurality of new samples; determining a classification result corresponding to the new sample; forming a new data set according to the new samples and the corresponding classification results; and training a preset classification model according to the old data set and the new data set. By the method, the sample data in the existing old data set is used as a positive sample, the scene image is used as a negative sample, the positive sample and the negative sample are fused to generate a plurality of new samples based on a plurality of randomly generated groups of area ratios, initial positions and length-width comparison, the new data set is generated, the preset classification model is trained according to the old data set and the new data set, the preset classification model is enabled to be rapidly adapted to the new scene, a large number of training samples are rapidly generated by adopting different fusion modes, a large number of new scene sample data containing classification targets do not need to be collected, time consumed by adaptation is shortened, and adaptation efficiency is improved.
In addition, an embodiment of the present invention further provides a storage medium, where a new scene adaptation program is stored on the storage medium, and the new scene adaptation program, when executed by a processor, implements the new scene adaptation method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
Referring to fig. 6, fig. 6 is a block diagram illustrating a first embodiment of the new scene adapter of the present invention.
As shown in fig. 6, the new scene adaptation apparatus provided in the embodiment of the present invention includes:
a selecting module 10, configured to select a target positive sample from the old dataset;
the intercepting module 20 is configured to acquire a new scene image, and intercept a target area from the new scene image to obtain a target negative sample;
a fusion module 30, configured to fuse the target positive sample and the target negative sample to obtain a new sample;
a classification module 40, configured to determine a classification result corresponding to the new sample;
a generating module 50, configured to form a new data set according to the new samples and the corresponding classification results;
a training module 60, configured to train a preset classification model according to the old data set and the new data set.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
The embodiment selects a target positive sample from an old data set; acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample; fusing the target positive sample and the target negative sample to obtain a new sample; determining a classification result corresponding to the new sample; forming a new data set according to the new samples and the corresponding classification results; and training the preset classification model according to the old data set and the new data set. By the mode, the sample data in the existing old data set is used as a positive sample, the scene image is used as a negative sample, the new data set is generated by fusion, and the preset classification model is trained according to the old data set and the new data set, so that the preset classification model can be quickly adapted to a new scene, a large amount of new scene sample data containing a classification target does not need to be collected, the adaptation time is shortened, and the adaptation efficiency is improved.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may refer to a new scene adaptation method provided in any embodiment of the present invention, and are not described herein again.
In an embodiment, the fusion module 30 is further configured to randomly generate an area ratio, a start position, and an aspect ratio, intercept data to be fused from the target negative sample according to the area ratio and the aspect ratio, and fuse the data to be fused and the target positive sample according to the start position to obtain a new sample.
In an embodiment, the classification module 40 is further configured to obtain an initial classification result corresponding to the target positive sample,
and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
In an embodiment, the classification module 40 is further configured to obtain a target class corresponding to the target positive sample, and one-hot encode the target class based on all classes to obtain an initial classification result corresponding to the target positive sample.
In an embodiment, the fusion module 30 is further configured to randomly generate a preset rotation angle, and fuse the data to be fused with the target positive sample according to the preset rotation angle and the start position to obtain a new sample.
In an embodiment, the fusion module 30 is further configured to randomly generate an area ratio based on a Dirichlet distribution.
In an embodiment, the fusion module 30 is further configured to randomly generate a target area ratio, a target start position, a target aspect ratio, and multiple groups of current rotation angles, intercept current data to be fused from the target negative sample according to the target area ratio and the target aspect ratio, perform mirror flipping processing on the current data to be fused to obtain multiple current mirror image data, and fuse each current mirror image data with the target positive sample according to each group of current rotation angles and the target start position to obtain multiple new samples.
In an embodiment, the fusion module 30 is further configured to randomly generate a plurality of area ratios, a plurality of start positions, and a plurality of aspect ratios, randomly combine the plurality of area ratios, the plurality of start positions, and the plurality of aspect ratios to obtain a plurality of groups of corresponding current area ratios, current start positions, and current aspect ratios, intercept target data from the target negative sample according to the current area ratios and current aspect ratios of the groups, and fuse the target data and the target positive sample according to the current start positions of the groups to obtain a plurality of new samples.
In an embodiment, the classification module 40 is further configured to obtain an initial classification result corresponding to the target positive sample, and determine classification results corresponding to the new samples according to the current area ratios of the groups and the initial classification result.
In an embodiment, the training module 60 is further configured to determine a cross entropy loss value corresponding to the preset classification model, and obtain the trained preset classification model when the cross entropy loss value is smaller than a preset threshold.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
The invention discloses A1 and a new scene adaptation method, wherein the new scene adaptation method comprises the following steps:
selecting a target positive sample from the old data set;
acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample;
fusing the target positive sample and the target negative sample to obtain a new sample;
determining a classification result corresponding to the new sample;
forming a new data set according to the new samples and the corresponding classification results;
and training a preset classification model according to the old data set and the new data set.
A2, the new scene adaptation method as in a1, the fusing the target positive sample and the target negative sample to obtain a new sample, comprising:
randomly generating an area ratio, a starting position and an aspect ratio;
intercepting data to be fused from the target negative sample according to the area proportion and the length-width ratio;
and fusing the data to be fused with the target positive sample according to the initial position to obtain a new sample.
A3, the method for adapting new scene as in a2, wherein the determining the classification result corresponding to the new sample comprises:
obtaining an initial classification result corresponding to the target positive sample;
and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
A4, the new scene adaptation method as in A3, the obtaining the initial classification result corresponding to the target positive sample, including:
acquiring a target category corresponding to the target positive sample;
and carrying out one-hot encoding on the target class based on all classes to obtain an initial classification result corresponding to the target positive sample.
A5, the new scene adaptation method as in a2, after the intercepting the data to be fused from the target negative sample according to the area ratio and the aspect ratio, the method further comprising:
randomly generating a preset rotation angle;
and fusing the data to be fused with the target positive sample according to the preset rotation angle and the initial position to obtain a new sample.
A6, the new scene adaptation method as in a2, the randomly generating area proportions, comprising:
the area proportion is randomly generated based on Dirichlet distribution.
A7, the new scene adaptation method as in a1, the fusing the target positive sample and the target negative sample to obtain a new sample, comprising:
randomly generating a target area proportion, a target starting position, a target length-width ratio and a plurality of groups of current rotation angles;
intercepting current data to be fused from the target negative sample according to the target area proportion and the target length-width ratio;
carrying out mirror image overturning processing on the current data to be fused to obtain a plurality of current mirror image data;
and fusing each current mirror image data with the target positive sample according to each group of the current rotation angle and the target initial position respectively to obtain a plurality of new samples.
A8, the new scene adaptation method as in a1, the fusing the target positive sample and the target negative sample to obtain a new sample, comprising:
randomly generating a plurality of area ratios, a plurality of initial positions and a plurality of length-width ratios;
randomly combining the area proportions, the initial positions and the length-width ratios to obtain current area proportions, current initial positions and current length-width ratios corresponding to groups;
intercepting target data from the target negative sample according to the current area proportion and the current aspect ratio of each group;
and fusing the target data and the target positive sample according to the current initial position of each group to obtain a plurality of new samples.
A9, the method for adapting new scene as in A8, wherein the determining the classification result corresponding to the new sample comprises:
obtaining an initial classification result corresponding to the target positive sample;
and determining classification results corresponding to the new samples according to the current area proportion of each group and the initial classification result.
A10 new scene adaptation method as in any of A1-A9, further comprising, after training a preset classification model from the old dataset and the new dataset:
determining a cross entropy loss value corresponding to the preset classification model;
and when the cross entropy loss value is smaller than a preset threshold value, obtaining the trained preset classification model.
The invention also discloses B11, a new scene adaptation device, which comprises:
the selecting module is used for selecting a target positive sample from the old data set;
the intercepting module is used for acquiring a new scene image and intercepting a target area from the new scene image to obtain a target negative sample;
the fusion module is used for fusing the target positive sample and the target negative sample to obtain a new sample;
the classification module is used for determining a classification result corresponding to the new sample;
the generating module is used for forming a new data set according to the new samples and the corresponding classification results;
and the training module is used for training a preset classification model according to the old data set and the new data set.
B12, the new scene adapting device as described in B11, where the fusion module is further configured to randomly generate an area ratio, a start position, and an aspect ratio, intercept data to be fused from the target negative sample according to the area ratio and the aspect ratio, and fuse the data to be fused and the target positive sample according to the start position to obtain a new sample.
B13, the new scene adapting device as described in B12, the classification module further configured to obtain an initial classification result corresponding to the target positive sample,
and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
B14, the new scene adapting device according to B13, wherein the classification module is further configured to obtain a target class corresponding to the target positive sample, and one-hot encode the target class based on all classes to obtain an initial classification result corresponding to the target positive sample.
B15, the new scene adapting device according to B12, the fusion module is further configured to randomly generate a preset rotation angle, and fuse the data to be fused with the target positive sample according to the preset rotation angle and the start position to obtain a new sample.
B16, the new scene adaptation device as in B12, the fusion module further configured to randomly generate an area ratio based on dirichlet distribution.
B17, the new scene adapting device according to B11, where the fusion module is further configured to randomly generate a target area ratio, a target start position, a target aspect ratio, and multiple groups of current rotation angles, intercept current data to be fused from the target negative sample according to the target area ratio and the target aspect ratio, perform mirror image flipping processing on the current data to be fused to obtain multiple current mirror image data, and fuse each current mirror image data with the target positive sample according to each group of current rotation angles and the target start position to obtain multiple new samples.
B18, the new scene adapting device according to B11, where the fusion module is further configured to randomly generate a plurality of area ratios, a plurality of start positions, and a plurality of aspect ratios, randomly combine the plurality of area ratios, the plurality of start positions, and the plurality of aspect ratios to obtain a plurality of groups of corresponding current area ratios, current start positions, and current aspect ratios, intercept target data from the target negative sample according to the current area ratios and the current aspect ratios of the groups, and fuse the target data and the target positive sample according to the current start positions of the groups to obtain a plurality of new samples.
The invention also discloses C19, a new scene adaptation device, the device includes: a memory, a processor, and a new scene adaptation program stored on the memory and executable on the processor, the new scene adaptation program configured to implement the new scene adaptation method as recited in any one of a1 to a 10.
The invention also discloses D20, a storage medium having stored thereon a new scene adaptation program which, when executed by a processor, implements the new scene adaptation method as described in any of A1 to A10.

Claims (10)

1. A new scene adaptation method, characterized in that the new scene adaptation method comprises:
selecting a target positive sample from the old data set;
acquiring a new scene image, and intercepting a target area from the new scene image to obtain a target negative sample;
fusing the target positive sample and the target negative sample to obtain a new sample;
determining a classification result corresponding to the new sample;
forming a new data set according to the new samples and the corresponding classification results;
and training a preset classification model according to the old data set and the new data set.
2. The method of claim 1, wherein the fusing the target positive sample and the target negative sample to obtain a new sample comprises:
randomly generating an area ratio, a starting position and an aspect ratio;
intercepting data to be fused from the target negative sample according to the area proportion and the length-width ratio;
and fusing the data to be fused with the target positive sample according to the initial position to obtain a new sample.
3. The method of claim 2, wherein the determining the classification result corresponding to the new sample comprises:
obtaining an initial classification result corresponding to the target positive sample;
and determining a classification result corresponding to the new sample according to the area proportion and the initial classification result.
4. The method of claim 3, wherein the obtaining of the initial classification result corresponding to the target positive sample comprises:
acquiring a target category corresponding to the target positive sample;
and carrying out one-hot encoding on the target class based on all classes to obtain an initial classification result corresponding to the target positive sample.
5. The method of new scene adaptation according to claim 2, wherein after said truncating the data to be fused from the target negative sample according to the area ratio and the aspect ratio, the method further comprises:
randomly generating a preset rotation angle;
and fusing the data to be fused with the target positive sample according to the preset rotation angle and the initial position to obtain a new sample.
6. The new scene adaptation method of claim 2, wherein the randomly generating the area ratio comprises:
randomly generating the area ratio based on a Dirichlet distribution.
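A two-component Dirichlet draw yields a pair of positive values summing to 1, so either component can serve directly as an area ratio in (0, 1). A sketch (the concentration parameters are an illustrative assumption):

```python
import numpy as np

def sample_area_ratio(alpha=(1.0, 3.0), rng=None):
    """Draw a two-component Dirichlet sample and use its first component
    as the area ratio; components are strictly in (0, 1) and sum to 1."""
    rng = np.random.default_rng(rng)
    return float(rng.dirichlet(alpha)[0])
```

Skewing alpha (here toward the second component) biases the sampled ratio toward small patches.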
7. The new scene adaptation method of claim 1, wherein the fusing the target positive sample and the target negative sample to obtain a new sample comprises:
randomly generating a target area ratio, a target starting position, a target aspect ratio and a plurality of sets of current rotation angles;
cropping current data to be fused from the target negative sample according to the target area ratio and the target aspect ratio;
performing mirror flipping on the current data to be fused to obtain a plurality of pieces of current mirrored data;
and fusing each piece of current mirrored data with the target positive sample according to each set of current rotation angles and the target starting position, respectively, to obtain a plurality of new samples.
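Claim 7 multiplies one crop into many new samples by combining mirror flips with rotations. The sketch below simplifies to a square patch and 90-degree rotation steps (arbitrary angles would need interpolation); all names and the 3-mirror/4-angle choice are illustrative assumptions:

```python
import numpy as np

def fuse_multi(positive, negative, n_angles=4, rng=None):
    """One crop from the negative sample, several mirror flips and
    rotations, each fused into the positive sample at one fixed
    target starting position."""
    rng = np.random.default_rng(rng)
    h, w = positive.shape[:2]
    side = min(h, w) // 2                 # square patch so rot90 keeps its shape
    y = rng.integers(0, h - side + 1)     # target starting position
    x = rng.integers(0, w - side + 1)
    patch = negative[:side, :side]
    mirrors = [patch, np.fliplr(patch), np.flipud(patch)]
    new_samples = []
    for m in mirrors:
        for k in range(n_angles):         # rotations in 90-degree steps
            fused = positive.copy()
            fused[y:y + side, x:x + side] = np.rot90(m, k)
            new_samples.append(fused)
    return new_samples
```

With 3 mirror variants and 4 rotation steps, a single positive/negative pair yields 12 new samples.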
8. A new scene adaptation apparatus, characterized in that the new scene adaptation apparatus comprises:
a selection module, configured to select a target positive sample from an old data set;
a cropping module, configured to acquire a new scene image and crop a target area from the new scene image to obtain a target negative sample;
a fusion module, configured to fuse the target positive sample and the target negative sample to obtain a new sample;
a classification module, configured to determine a classification result corresponding to the new sample;
a generation module, configured to form a new data set from the new sample and the corresponding classification result;
and a training module, configured to train a preset classification model according to the old data set and the new data set.
9. A new scene adaptation device, characterized in that the device comprises: a memory, a processor, and a new scene adaptation program stored in the memory and executable on the processor, the new scene adaptation program being configured to implement the new scene adaptation method of any one of claims 1 to 7.
10. A storage medium, characterized in that a new scene adaptation program is stored thereon, and the new scene adaptation program, when executed by a processor, implements the new scene adaptation method of any one of claims 1 to 7.
CN202111324963.3A 2021-11-10 2021-11-10 New scene adaptation method, device, equipment and storage medium Withdrawn CN113902762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111324963.3A CN113902762A (en) 2021-11-10 2021-11-10 New scene adaptation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113902762A true CN113902762A (en) 2022-01-07

Family

ID=79193783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111324963.3A Withdrawn CN113902762A (en) 2021-11-10 2021-11-10 New scene adaptation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113902762A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596452A (en) * 2022-01-25 2022-06-07 深圳大学 Online adaptation method based on mode matching and mobile perception scene


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220107