CN111160569A - Application development method and device based on machine learning model and electronic equipment - Google Patents
- Publication number
- CN111160569A (application number CN201911395248.1A)
- Authority
- CN
- China
- Prior art keywords
- machine learning
- model
- user
- training
- type
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to an application development method and apparatus based on a machine learning model, and to an electronic device. The method comprises the following steps: obtaining the type of machine learning model set by a user; obtaining one or more machine learning models through one or more automatic model-training experiments according to a machine learning strategy corresponding to that type, wherein the machine learning strategy is used for controlling at least one of the data, algorithms, and resources involved in model training; and generating an application according to the obtained machine learning model. The method simplifies the otherwise complex application construction process and addresses the high labor cost of artificial intelligence application development.
Description
Technical Field
The present invention relates to the field of application development technologies, and in particular, to an application development method based on a machine learning model, an application development apparatus based on a machine learning model, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of artificial intelligence technology, the application scenarios of artificial intelligence are becoming more and more extensive. For example, computer vision-related artificial intelligence techniques can be applied to face recognition, license plate recognition, bill recognition, bacteria recognition, and the like.
Developing an artificial intelligence application generally involves many stages and a complex process, and places high demands on developers' skills. Adopting an existing artificial intelligence application can significantly reduce the cost of solving a business problem. However, existing mature applications generally concentrate on a few mainstream scenarios; for example, most computer-vision applications focus on face recognition, vehicle recognition, license plate recognition, and similar fields. For personalized requirements outside these mainstream scenarios (that is, long-tail requirements), such as bill recognition and bacteria recognition, dedicated artificial intelligence applications still need to be developed.
Therefore, it is necessary to provide a new development method for artificial intelligence applications that reduces development difficulty and meets diversified business requirements.
Disclosure of Invention
The invention aims to provide a new technical scheme for application development based on a machine learning model.
According to a first aspect of the present invention, there is provided an application development method based on a machine learning model, including:
obtaining the type of machine learning model set by a user;
obtaining one or more machine learning models through one or more automatic model-training experiments according to the machine learning strategy corresponding to the type, wherein the machine learning strategy is used for controlling at least one of the data, algorithms, and resources involved in model training; and
generating an application according to the obtained machine learning model.
Optionally, the machine learning model is a computer vision-related machine learning model.
Optionally, the type of the machine learning model includes at least one of an image classification type, an object recognition type, a text localization type, and a text recognition type.
Optionally, the obtaining of the type of the machine learning model set by the user includes:
displaying the candidate types of the machine learning models respectively corresponding to various machine learning tasks to a user;
a type of the machine learning model selected by the user from the candidate types is received.
Optionally, the obtaining one or more machine learning models through one or more automatic model-training experiments according to the machine learning strategy corresponding to the type includes:
providing the user with a modeling creation interface for setting a model auto-training task according to the machine learning strategy corresponding to the type;
receiving setting operations performed by the user in the modeling creation interface to acquire the setting items required for automatic model training; and
performing, according to the acquired setting items, one or more automatic model-training experiments based on the annotation data uploaded by the user, to obtain one or more machine learning models.
Optionally, the setting item includes at least one of annotation data uploading, a data preprocessing policy, algorithm configuration, and resource configuration.
Optionally, at least one of the data pre-processing policy, the algorithm configuration and the resource configuration provides different levels of configuration policy.
Optionally, the model training module is configured to provide a default level of at least one of the preprocessing strategy, the algorithm configuration, and the resource configuration according to annotation data and a type of the machine learning model.
Optionally, the modeling creation interface is further configured to display a plate picture, which the user uses to specify a recognition area, and the method further includes: receiving the user's selection of a recognition area in the plate picture, and cropping the annotated images so that the cropped annotated images match the selected recognition area.
Optionally, the method further comprises:
displaying to the user at least one of the experiment version, experiment state, experiment progress, accuracy, creation time, basic experiment information, experiment logs, detailed training indicators, and experiment evaluation of the experiment.
Optionally, the displaying training detail indicators of the experiment to the user includes: and acquiring indexes of multiple training iterations, and displaying an index evolution process among the multiple training iterations.
Optionally, the method further comprises: creating an experiment evaluation task to evaluate the model produced by an experiment, wherein presenting the experiment evaluation to the user comprises: displaying at least one of evaluation index statistics, resource allocation, real-time logs, and error case data under the experiment evaluation task.
Optionally, the one or more experiments of the auto-training model are attributed to the same project, wherein each of the projects generates a respective one of the applications.
Optionally, the creating an experimental assessment task includes: an evaluation data set is selected and resources for the evaluation task are configured.
Optionally, the generating the application according to the obtained machine learning model includes: generating the application based on a single trained machine learning model; or generating the application based on a template process, wherein the template process defines how a plurality of trained machine learning models are orchestrated within the application's processing flow.
Optionally, the generating the application based on the template flow includes:
providing application parameters related in the template process to a user;
and generating the application utilizing the plurality of machine learning models according to the template flow according to the setting of the application parameters by the user.
Optionally, the template process comprises an OCR recognition process, wherein generating the application based on the OCR recognition process comprises:
providing an operation interface for a user, wherein the operation interface shows the OCR plate-type picture and is used for configuring one or more models respectively applied to each recognition area in the OCR plate-type picture; and
and receiving configuration operations performed by the user in the operation interface, to generate an application that applies the one or more models to each recognition area.
Optionally, the plurality of models comprises a localization model and an identification model for the identification zone.
Optionally, the method further comprises: and creating an OCR plate corresponding to the OCR plate picture.
Optionally, the creating an OCR plate corresponding to the OCR plate picture includes:
displaying an OCR sample picture selected by a user in a canvas area;
providing a control for setting an OCR recognition area within or around the canvas area; and
and responding to the operation of the user on the control to set one or more OCR recognition areas with corresponding contents on the displayed OCR sample picture so as to obtain an OCR plate picture.
Optionally, the step of creating an OCR plate further includes:
providing a control within or around the canvas area for editing the OCR sample picture; and
and editing the OCR sample picture in response to the user's operation of the control, where the editing includes at least one of replacing the picture, selecting, moving, cropping, zooming in, and zooming out.
Optionally, the method further comprises: bringing the application online, and visually displaying to the user at least one of application information, resources and instances, resource monitoring, API call monitoring, and application logs.
Optionally, the method further comprises: receiving an example picture uploaded by a user, and displaying a prediction result of an online application for the example picture.
Optionally, the method further comprises: and acquiring the annotation data by issuing an annotation task, and uploading the acquired annotation data for training the model.
Optionally, at least one of the following processes is performed during uploading according to the user's settings: discarding abnormal files, ignoring abnormal annotations, failing the import, using a recommended configuration, and using a custom configuration.
Optionally, the method further comprises presenting a graphical interface regarding the uploaded annotation data in response to a user input, wherein at least one of the following is provided on the graphical interface: details of the upload log or its entry, shortcuts for copying the annotation data path, buttons for viewing the annotation data.
According to a second aspect of the present invention, there is provided an application development apparatus comprising:
the model type acquisition module is used for acquiring the type of the machine learning model set by a user;
the model training module is used for obtaining one or more machine learning models through one or more times of experiments of the automatic training models according to the machine learning strategies corresponding to the types, wherein the machine learning strategies are used for controlling at least one of data, algorithms and resources related to model training;
and the application generation module is used for generating the application according to the obtained machine learning model.
Optionally, the machine learning model is a computer vision-related machine learning model.
Optionally, the type of the machine learning model includes at least one of an image classification type, an object recognition type, a text localization type, and a text recognition type.
Optionally, the model type obtaining module is further configured to:
displaying the candidate types of the machine learning models respectively corresponding to various machine learning tasks to a user;
a type of the machine learning model selected by the user from the candidate types is received.
Optionally, the model training module is configured to:
providing a modeling creation interface for setting a model auto-training task according to a machine learning strategy corresponding to the type to a user;
receiving a setting operation executed by a user in a modeling creation interface to acquire a setting item required by an automatic training model; and
and performing, according to the acquired setting items, one or more automatic model-training experiments based on the annotation data uploaded by the user, to obtain one or more machine learning models.
Optionally, the setting item includes at least one of annotation data uploading, a data preprocessing policy, algorithm configuration, and resource configuration.
Optionally, at least one of the data pre-processing policy, the algorithm configuration and the resource configuration provides different levels of configuration policy.
Optionally, the model training module is configured to provide a default level of at least one of the preprocessing strategy, the algorithm configuration, and the resource configuration according to annotation data and a type of the machine learning model.
Optionally, the modeling creation interface is further configured to display a plate picture, which the user uses to specify a recognition area, and the module is further configured to: receive the user's selection of a recognition area in the plate picture, and crop the annotated images so that the cropped annotated images match the selected recognition area.
Optionally, the model training module is further configured to:
and displaying at least one of the experiment version, the experiment state, the experiment progress, the accuracy, the creation time, the experiment basic information, the experiment log, the training detail index and the experiment evaluation of the experiment to a user.
Optionally, the model training module is further configured to: and acquiring indexes of multiple training iterations, and displaying an index evolution process among the multiple training iterations.
Optionally, the model training module is further configured to: create an experiment evaluation task to evaluate the model produced by an experiment, and present the experiment evaluation to the user, including: displaying at least one of evaluation index statistics, resource allocation, real-time logs, and error case data under the experiment evaluation task.
Optionally, the one or more experiments of the auto-training model are attributed to the same project, wherein each of the projects generates a respective one of the applications.
Optionally, the model training module is further configured to: an evaluation data set is selected and resources for the evaluation task are configured.
Optionally, the application generation module is configured to: generating the application based on the trained single machine learning model; or generating the application based on a template process, wherein the template process is used for defining the arrangement process of the trained machine learning models in the application process.
Optionally, the application generation module is configured to:
providing application parameters related in the template process to a user;
and generating the application utilizing the plurality of machine learning models according to the template flow according to the setting of the application parameters by the user.
Optionally, the template process includes an OCR recognition process, wherein the application generation module is configured to:
providing an operation interface for a user, wherein the operation interface shows the OCR plate-type picture and is used for configuring one or more models respectively applied to each recognition area in the OCR plate-type picture; and
and receiving configuration operation performed in an operation interface by a user to generate application applying the one or more models for each identification area.
Optionally, the plurality of models comprises a localization model and an identification model for the identification zone.
Optionally, the application generation module is further configured to: and creating an OCR plate corresponding to the OCR plate picture.
Optionally, the application generation module is configured to:
displaying an OCR sample picture selected by a user in a canvas area;
providing a control for setting an OCR recognition area within or around the canvas area; and
and responding to the operation of the user on the control to set one or more OCR recognition areas with corresponding contents on the displayed OCR sample picture so as to obtain an OCR plate picture.
Optionally, the application generation module is further configured to:
providing a control within or around the canvas area for editing the OCR sample picture; and editing the OCR sample picture in response to the user's operation of the control, where the editing includes at least one of replacing the picture, selecting, moving, cropping, zooming in, and zooming out.
Optionally, the application generation module is further configured to: bring the application online, and visually display to the user at least one of application information, resources and instances, resource monitoring, API call monitoring, and application logs.
Optionally, the application generation module is further configured to: receiving an example picture uploaded by a user, and displaying a prediction result of an online application for the example picture.
Optionally, the apparatus further includes an annotation data obtaining module, configured to: and acquiring the annotation data by issuing an annotation task, and uploading the acquired annotation data for training the model.
Optionally, the annotation data acquisition module performs at least one of the following processes according to the user's settings: discarding abnormal files, ignoring abnormal annotations, failing the import, using a recommended configuration, and using a custom configuration.
Optionally, the annotation data acquisition module is further configured to present a graphical interface regarding the uploaded annotation data in response to an input from a user, wherein at least one of the following items is provided on the graphical interface: details of the upload log or its entry, shortcuts for copying the annotation data path, buttons for viewing the annotation data.
According to a third aspect of the present invention, there is provided an electronic apparatus comprising:
the apparatus according to the second aspect of the invention; or,
a processor and a memory for storing instructions for controlling the processor to perform the method for machine learning model based application development according to the first aspect of the invention.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method for machine learning model-based application development according to the first aspect of the present invention.
The application development method based on a machine learning model provided by this embodiment allows artificial intelligence services, in particular visual application services, to be built autonomously: annotation data stored under a standard path is accessed, models are built and optimized, and the resulting model is brought online, all in a one-stop manner, so that an online service is provided for the actual business scenario. Together with a monitoring and management suite for data, services, and applications, this realizes integrated, automated, and intelligent management of artificial intelligence development. The complex application construction process is simplified through low-threshold interface operations, which addresses the high labor cost of artificial intelligence application development.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 shows a schematic diagram of an electronic device that may be used to implement an embodiment of the invention.
FIG. 2 shows a flow diagram of a method for machine learning model-based application development, in accordance with an embodiment of the present invention.
Fig. 3 shows a schematic diagram of a plate picture in an example of an embodiment of the invention.
Fig. 4 shows a schematic diagram of an application development device according to an embodiment of the invention.
Fig. 5 shows a schematic view of an electronic device according to an embodiment of the invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
< hardware configuration >
FIG. 1 shows a schematic diagram of an electronic device that may be used to implement an embodiment of the invention.
As shown in fig. 1, the electronic device 1000 includes a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, an output device 1500, and an input device 1600. The processor 1100 is, for example, a central processing unit CPU, a microprocessor MCU, or the like. The memory 1200 is, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, or the like. The interface device 1300 is, for example, a USB interface, a headphone interface, or the like. Communication device 1400 is capable of wired or wireless communication, for example. The output device 1500 is, for example, a liquid crystal display, a touch panel, a speaker, or the like. The input device 1600 is, for example, a touch screen, a keyboard, a mouse, a microphone, etc.
In an embodiment of the present invention, the memory 1200 of the electronic device 1000 is used for storing instructions for controlling the processor 1100 to execute the application development method based on the machine learning model according to the embodiment of the present invention. In the above description, the skilled person will be able to design instructions in accordance with the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
Although a plurality of devices of the electronic apparatus 1000 are illustrated in fig. 1, the present invention may only relate to some of the devices, for example, the electronic apparatus 1000 only relates to the memory 1200, the processor 1100, the output device 1500 and the input device 1600.
The electronic device 1000 shown in fig. 1 is merely illustrative and is in no way intended to limit the present invention, its application, or uses.
< method examples >
The present embodiment provides a method for application development based on a machine learning model, which is implemented by the electronic device 1000 in fig. 1, for example. As shown in fig. 2, the method includes the following steps S1100-S1200.
In step S1100, the type of the machine learning model set by the user is acquired.
In this embodiment, the types of machine learning may be divided according to application scenarios thereof. In one example, the machine learning model is a computer vision-related machine learning model, and the type of the machine learning model comprises at least one of an image classification type, an object recognition type, a text positioning type and a text recognition type.
Image classification distinguishes different image categories according to the semantic information of images. It is an important basic problem in computer vision: different image categories are distinguished according to the images' semantic information, and the images are given different category labels. Image classification underlies higher-level visual tasks such as image detection, instance segmentation, and object tracking, and can be applied to face recognition in the security field, intelligent video analysis, traffic scene recognition in the transportation field, and the like.
Object recognition first locates targets in the picture content and then classifies them. It is the process of detecting, locating, and framing the different objects that may exist in an image according to its semantic information, and then distinguishing and classifying the framed image regions. Because real-world picture data generally describes scenes in which multiple objects overlap one another, image classification alone is often insufficient. Object recognition follows a divide-and-conquer idea: by first locating objects and then classifying them, the accuracy of the recognition result can be greatly improved. It can be applied in fields such as aerospace, medicine, communications, industrial automation, robotics, and the military.
Text positioning identifies the location of text information in a picture. It uses computer vision to intelligently recognize and locate the text information in the picture, generating target candidate boxes with category information, and can be applied to the recognition of bills and certificates carrying various kinds of text.
Text recognition intelligently recognizes the character content in a picture as text that a computer can edit. Given image patches whose main content is characters, it generates the corresponding computer-editable text. It can significantly accelerate business processes and provide valuable information, and is applicable to industries such as finance, insurance, and consulting.
In one example, the step of obtaining the type of the machine learning model set by the user includes the following processes.
First, candidate types of machine learning models respectively corresponding to various machine learning tasks are presented to the user. For example, the candidate types "image classification", "object recognition", "text positioning", and "text recognition" are presented to the user for selection.
Second, a type of the machine learning model selected by the user from the candidate types is received. For example, in response to a user's submit operation, the type of user selection is obtained based on the state of the selection control.
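The two steps above can be sketched as a minimal illustration; the `ModelType` enum and the helper function names below are assumptions made for exposition, not identifiers from the patent:

```python
from enum import Enum

class ModelType(Enum):
    """Candidate machine learning model types for computer vision tasks."""
    IMAGE_CLASSIFICATION = "image classification"
    OBJECT_RECOGNITION = "object recognition"
    TEXT_LOCALIZATION = "text positioning"
    TEXT_RECOGNITION = "text recognition"

def present_candidate_types():
    """Return the candidate types shown to the user in the creation UI."""
    return [t.value for t in ModelType]

def receive_selected_type(selection: str) -> ModelType:
    """Resolve the user's choice (e.g. from a selection control) to a type."""
    for t in ModelType:
        if t.value == selection:
            return t
    raise ValueError(f"unknown model type: {selection!r}")
```

In this sketch, a submit operation would call `receive_selected_type` with the state of the selection control and pass the result on to step S1200.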
In step S1200, one or more machine learning models are obtained through one or more experiments of automatically training the models according to the machine learning strategies of the corresponding types, wherein the machine learning strategies are used for controlling at least one of data, algorithms and resources related to model training.
For different types of machine learning models, the data, algorithms, and resources involved in the model training process are also typically different. In this embodiment, data, algorithms, and resources related to model training are controlled by a machine learning strategy.
Data relevant to model training, for example, is a training data set used by the training task. Algorithms involved in model training are, for example, computational models, training parameters, and training metrics used by the training task. Resources related to model training are, for example, CPU resources, GPU resources, memory resources, and the like allocated by the training task.
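One way to picture a machine learning strategy that controls data, algorithm, and resources per model type is a configuration record. The field names and the example path below are illustrative assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MachineLearningStrategy:
    """Controls the data, algorithm, and resources involved in model training."""
    model_type: str
    dataset_path: str                                     # data: training data set used by the task
    algorithm: str                                        # algorithm: computational model to train
    training_params: dict = field(default_factory=dict)   # algorithm: training parameters
    cpu_cores: int = 4                                    # resources allocated to the training task
    gpu_count: int = 1
    memory_gb: int = 16

# A strategy for the image classification type, with default resource allocation:
strategy = MachineLearningStrategy(
    model_type="image classification",
    dataset_path="/data/annotations",                     # hypothetical standard path
    algorithm="resnet50",
    training_params={"epochs": 30, "lr": 0.01},
)
```

Each automatic training experiment would then read such a record to decide which data set to load, which network and hyperparameters to use, and which compute resources to request.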
In one example, for the image classification type, a network such as ResNet, Inception, or MobileNet may be used to build the machine learning model. The ResNet network was proposed to address the difficulty of training deep networks: it greatly accelerates the training of deep neural networks while keeping the parameter count under control, and substantially improves accuracy. The Inception network introduces the Inception structure, which increases the width of the network so that richer features can be extracted; at the same time, 1 x 1 convolution kernels are used to reduce the number of network parameters, and Batch Normalization is used to accelerate training and reduce overfitting. The MobileNet network uses separable convolutions to reduce the model's parameters and computation, greatly improving the network's cost-performance ratio.
In one example, for the object recognition type, the Faster R-CNN method may be employed to build the machine learning model. Faster R-CNN is a two-stage object recognition method that integrates the four basic steps of object recognition (candidate region generation, feature extraction, classification, and position refinement) into a single deep network framework, avoiding repeated computation and improving running speed.
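Two-stage detectors of this kind produce many overlapping candidate boxes and then keep only the best ones; a standard step in that pipeline is non-maximum suppression (NMS). The following is a generic pure-Python NMS sketch for illustration, not code from the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes, dropping any box that overlaps a kept one."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return kept
```

For example, two nearly identical candidate boxes around one object collapse to the single higher-scoring box, while a distant box survives.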
In one example, for the text localization type, a DeepText model can be employed to build the machine learning model. DeepText is a two-stage model for text localization based on an improvement of Faster R-CNN, and its structure closely follows Faster R-CNN: the feature layer uses VGG-16, and the algorithm consists of an RPN for extracting candidate regions and a Faster R-CNN head for detecting objects.
In one example, for the text recognition type, algorithms such as DenseNet and CTC may be employed to build the machine learning model. The DenseNet-based algorithm uses a backbone network together with a CTC loss function; the structure of the network can be chosen flexibly, for example, DenseNet or SimpleNet can be selected as the backbone network, and a recurrent neural network (RNN) can optionally be included. The CTC (Connectionist Temporal Classification) algorithm extracts features using six layers of CNNs (Convolutional Neural Networks) as the model skeleton, combines temporal features using two layers of bidirectional RNNs, and finally computes the loss and decodes sentences using a CTC decoding method.
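The CTC decoding step mentioned above can be sketched in a few lines. This is the standard greedy (best-path) decoding rule, shown here on integer label sequences rather than real network outputs: repeated frame labels are collapsed, then blank symbols are removed.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks.

    frame_labels: per-frame argmax labels from the network output.
    blank: index of the CTC blank symbol (assumed to be 0 here).
    """
    decoded = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded
```

For example, the frame sequence `[1, 1, 0, 1, 2, 2, 0]` decodes to `[1, 1, 2]`: the repeated `1`s and `2`s collapse, while the blank between the two `1`s preserves them as distinct characters.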
In one example, the step of obtaining one or more machine learning models through one or more experiments of the auto-training model according to the corresponding type of machine learning strategy further includes the following processes.
First, a user is provided with a modeling creation interface for setting up a model auto-training task according to a corresponding type of machine learning strategy.
Secondly, a setting operation performed by the user in the modeling creation interface is received to acquire the setting items required for automatically training the model. The setting items include at least one of annotation data uploading, a data preprocessing strategy, algorithm configuration, and resource configuration.
Annotation data uploading refers to uploading the annotation data used for model training. Annotation means that, in the supervised learning mode, sample data carrying result information, called annotated data, is used for model training. In the field of computer vision, several annotation methods are commonly used for image understanding and scene recognition. For example, in an image classification scenario, an annotation is the classification result of the image data; in an object recognition scenario, annotation refers to framing a target region of interest (ROI) in an image and then distinguishing and determining the framed image region, so the annotation is a composite label containing both the coordinate range of the target region and the final classification result. A set of data consisting of image data and annotation data is called a data set.
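The composite label described above for the object recognition scenario can be sketched as a simple record plus a validity check. The record layout and field names here are assumptions for illustration, not the platform's actual annotation format.

```python
# Illustrative annotation record for object recognition: the coordinate
# range of the target region (ROI) plus the classification result.
def validate_annotation(record, image_width, image_height):
    """Check that an annotation has a non-empty label and an in-bounds ROI box."""
    x1, y1, x2, y2 = record["roi"]
    return (
        bool(record.get("label"))
        and 0 <= x1 < x2 <= image_width
        and 0 <= y1 < y2 <= image_height
    )

annotation = {"label": "invoice_title", "roi": (40, 10, 300, 60)}
```

A check of this kind is one plausible way the upload step could discard abnormal files or labels before they reach training.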
In one example, annotation data is obtained by publishing an annotation task and is uploaded for use in training the model. The annotation task includes, for example, the following requirements: the pictures used in the training process are consistent with the picture environment of the application scene; data integrity is maintained and the data is not contaminated; and the annotated data conforms to a preset format.
In one example, at least one of the following processes is performed according to the user's settings during the uploading process: discarding abnormal files, ignoring abnormal labels, failing the import on error, using a recommended configuration, and using a custom configuration.
In one example, the uploading process further comprises presenting a graphical interface regarding the uploaded annotation data in response to an input by the user, wherein at least one of the following is provided on the graphical interface: details of the upload log or its entry, shortcuts for copying the annotation data path, buttons for viewing the annotation data.
The data preprocessing strategy refers to a strategy for preprocessing the annotated data, such as transformation and enhancement. Data preprocessing generally comprises two parts. The first part is data splitting: the data is split into a training data set and a validation data set according to a certain splitting rule. The platform can support two splitting modes: random splitting, and designating one or more data sets as the validation set. The training set is used for model training, and the validation set is used for evaluating the effect of the model. The second part is data enhancement: certain operations such as transformation and scaling, including processing methods such as cropping, segmentation, and noise, are applied to the training data set so that the model is more adaptable and robust to the various sample pictures found in a real environment. Data enhancement methods commonly seen in the field of computer vision include cropping, rotation, and noise. Cropping means selecting a part of the image, cutting out that part, and resizing it back to the size of the original image. Rotation refers to rotating the picture clockwise or counterclockwise; note that it is preferable to rotate by 90 or 180 degrees, since other angles may cause dimension problems. Finally, the purpose of adding noise is to blur the image and disturb its observable information through a series of mathematical distribution operations on the picture set. The technical logic of other data enhancement methods is similar to the methods described above.
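The two preprocessing parts described above, splitting and enhancement, can be sketched minimally. This is a toy illustration under simplifying assumptions: samples are placeholders rather than image tensors, and the only enhancement shown is the safe 90-degree rotation noted above.

```python
import random

def split_dataset(samples, val_ratio=0.2, seed=42):
    """Randomly split samples into (training set, validation set)."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    n_val = int(len(shuffled) * val_ratio)
    return shuffled[n_val:], shuffled[:n_val]

def rotate90(image_rows):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise.

    Rotations by multiples of 90 degrees keep pixel dimensions well defined,
    which is why the text above prefers them over arbitrary angles.
    """
    return [list(row) for row in zip(*image_rows[::-1])]
```

For example, `rotate90([[1, 2], [3, 4]])` yields `[[3, 1], [4, 2]]`, and `split_dataset` with a 0.2 ratio sends 80% of the samples to training and 20% to validation.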
The algorithm configuration is used for fine-tuning and optimizing the training algorithm of the model, and includes the core structure affecting the training process and the related hyper-parameters. In the training process of a deep learning network, hyper-parameters are parameters set before learning starts, rather than data obtained through training. In general, the hyper-parameters need to be optimized; better hyper-parameters improve learning performance and effect.
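Hyper-parameter optimization of the kind described above can be sketched as a grid search. The scoring function below is a stand-in for a real training-plus-validation run; the grid contents are illustrative assumptions.

```python
import itertools

def grid_search(grid, score_fn):
    """Try every combination in the grid; keep the highest-scoring one.

    grid: dict mapping hyper-parameter name -> list of candidate values.
    score_fn: callable taking a params dict, returning a validation score.
    """
    best_params, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {"lr": [0.1, 0.01], "batch_size": [16, 32]}
```

In practice an automatic-training platform would likely use smarter strategies (random or Bayesian search), but the contract is the same: hyper-parameters in, best validated configuration out.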
The resource configuration is used for controlling the resources allocated to training, and includes GPU, CPU, and memory configuration.
In one example, at least one of the data pre-processing policy, the algorithm configuration, and the resource configuration provides different levels of configuration policy.
For example, for the data preprocessing strategy, three levels of configuration strategy are provided: "intelligent", "fine-tuning", and "expert". The "intelligent" mode is a black-box mode in which various data preprocessing methods are built in according to the data type, without requiring user selection. The "fine-tuning" mode allows the user to make fine adjustments: the data preprocessing configuration lists the methods that can be performed in the preprocessing stage, and the user can select preprocessing methods according to the training requirements. The "expert" mode opens up all adjustable parameters of the preprocessing methods, and the user can adjust the relevant parameters according to the training requirements.
For example, for the algorithm configuration, the same three levels of configuration strategy ("intelligent", "fine-tuning", "expert") are provided. The "intelligent" mode is a black-box mode that automatically provides the user with the optimal model and hyper-parameters according to the project type and data type. The "fine-tuning" mode allows different models to be selected according to the project type and whether transfer learning is needed. The "expert" mode opens up all hyper-parameter configuration entries, and the user can adjust the relevant parameters according to the training requirements.
For example, for the resource configuration, the same three levels of configuration strategy ("intelligent", "fine-tuning", "expert") are provided. The "intelligent" mode is a black-box mode that performs optimal resource allocation and scheduling according to the data type and model type, combined with the user's resource quota and the current resource usage, without requiring the user's own consideration. Since image training occupies GPU resources most heavily, the "fine-tuning" mode lets the user modify the GPU resources and choose whether to enable elastic scaling of the prediction service. The "expert" mode opens up the GPU, CPU, and memory configuration entries, as well as the range of elastically scalable instances, where the number of instances refers to the number of services running simultaneously.
In this embodiment, a default level of at least one of the preprocessing strategy, the algorithm configuration, and the resource configuration is provided according to the annotation data and the type of the machine learning model. For example, optimal values of these configurations are automatically determined according to the annotation data and the type of the machine learning model, and these optimal values are used as the default levels of the configurations, i.e., the default configuration of the "intelligent" mode.
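The default-level selection described above can be sketched as a lookup keyed on model type plus a simple heuristic on the annotation data. Both the mapping table and the small-data threshold are invented for illustration.

```python
# Hypothetical "intelligent"-mode defaults per model type.
DEFAULTS = {
    "image_classification": {"model": "resnet50", "gpu": 1},
    "object_recognition": {"model": "faster_rcnn", "gpu": 2},
}

def default_config(model_type, n_annotated_samples):
    """Pick default configuration values from the model type and data size."""
    cfg = dict(DEFAULTS[model_type])
    # Assumed heuristic: small annotated data sets default to transfer learning.
    cfg["transfer_learning"] = n_annotated_samples < 10_000
    return cfg
```

The point of the sketch is the shape of the decision, not the specific numbers: the platform inspects what the user uploaded and selected, then fills in every configuration the user left at the "intelligent" level.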
It should be noted that this embodiment introduces a model training method based on transfer learning. Transfer learning refers to transferring knowledge from one field (the source field) to another field (the target field), so that a better learning effect can be obtained in the target field; it allows artificial intelligence to be produced in small-data fields and breaks its dependence on big data. In computer vision, transfer learning means that a general model is first trained as a backbone network model using a data set different from the target field, and the backbone network model is then optimized using data from the target scene. For example, in an object detection scene, image features are extracted using the backbone network, and subsequent processing is performed based on the extracted features.
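The transfer-learning setup described above can be sketched abstractly: reuse a pre-trained backbone with its layers frozen, and train only the task-specific head on target-scene data. Layers here are simple records standing in for real network layers; in a real framework the same idea is expressed by disabling gradient updates on backbone parameters.

```python
def build_transfer_model(backbone_layers, head_layers, freeze_backbone=True):
    """Assemble a model whose backbone is frozen and whose head is trainable."""
    model = []
    for name in backbone_layers:
        model.append({"name": name, "trainable": not freeze_backbone})
    for name in head_layers:
        model.append({"name": name, "trainable": True})
    return model

def trainable_layers(model):
    """Names of the layers an optimizer would actually update."""
    return [layer["name"] for layer in model if layer["trainable"]]
```

With the backbone frozen, only the head parameters are optimized on the (possibly small) target data set, which is what lets transfer learning work in small-data fields.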
In this embodiment, when the user chooses to perform transfer learning, candidate backbone network models, including MobileNet, ResNet, and Inception networks, are provided to the user, and the subsequent transfer learning process is performed based on the backbone network model selected by the user.
The application development method in this embodiment also supports distributed training of machine learning models and automatically controls the training strategy in the "black box" mode of the algorithm configuration. Based on the automatically controlled training strategy, the training task of each model may be divided into a plurality of subtasks, and accordingly the training strategy of each model may be divided into a plurality of sub-strategies. In this embodiment, the sub-training strategies are scheduled and distributed according to data, parameters, and training tasks. Each trainer executes a training subtask and submits the output weights to a dedicated evaluator for evaluation; the optimizer produces hyper-parameters according to the evaluation results, and these hyper-parameters guide each trainer in the next round of training, finally producing a model. The multiple subtasks can be executed in parallel in a distributed system, thereby achieving distributed training of the machine learning model and improving model training efficiency.
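The trainer/evaluator/optimizer loop described above can be sketched under simplifying assumptions: each "trainer" runs a subtask for one candidate learning rate, the "evaluator" picks the best result, and the "optimizer" proposes the next round's candidates around it. `train_subtask` is a stand-in for a real training job, and threads stand in for distributed workers.

```python
from concurrent.futures import ThreadPoolExecutor

def train_subtask(lr):
    """Pretend training run: loss is minimized at an 'ideal' lr of 0.1."""
    loss = (lr - 0.1) ** 2
    return {"lr": lr, "loss": loss}

def run_round(candidate_lrs):
    # Trainers: subtasks execute in parallel, one per candidate.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(train_subtask, candidate_lrs))
    # Evaluator: score the submitted results and keep the best.
    best = min(results, key=lambda r: r["loss"])
    # Optimizer: propose hyper-parameters for the next round around the best.
    next_lrs = [best["lr"] * 0.5, best["lr"], best["lr"] * 2]
    return best, next_lrs
```

Repeatedly feeding `next_lrs` back into `run_round` mirrors the round-by-round guidance of trainers by the optimizer until a final model is produced.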
And finally, performing one or more times of automatic model training experiments based on the marking data uploaded by the user according to the acquired setting items to obtain one or more machine learning models.
In this embodiment, a project refers to a combination of a series of tasks directed toward a certain result, and each project generates a corresponding application. An experiment represents one model training operation within a project, and one model can be obtained through one successful experiment. An experiment can be divided into three stages (preprocessing, training, and post-processing) and five states (waiting to start, queued, run failed, run terminated, and run succeeded).
In one example, the user is presented with at least one of the experiment version, the experiment state, the experiment progress, the accuracy and other indexes of the experiment, the creation time, the experiment basic information, the experiment log, the training detail index and the experiment evaluation.
The experiment version refers to the serial number of the current experiment in the project's experiment list; the default experiment version is 1, and the version number increments by one for each subsequently created experiment. The experiment state refers to the running state of the current experiment, including queued, running, run succeeded, and the like. The experiment progress refers to the progress of model training and can be displayed as a progress bar or the like. Accuracy refers to the accuracy of the model generated by the current experiment on the validation set, expressed as a percentage. Creation time is the time at which the current experiment was created. The basic experiment information includes the name of the project to which the experiment belongs, the project type, the experiment version, the experiment state, the experiment progress, the creation time, and the like. The experiment log is the record of the current experiment's running information. The training detail indexes are accuracy-related indexes during the experiment, including the training loss (loss on the training set) and the like. The experimental evaluation refers to the evaluation of the model after the experiment is finished, including the verification accuracy, verification precision, verification recall rate, verification F value, and the like.
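The evaluation indexes listed above (verification accuracy, precision, recall, and F value) have standard definitions; a minimal sketch for a binary task, computed from true and predicted labels, is:

```python
def evaluate(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall, and F1 for a binary task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For multi-class models such as those trained on this platform, the same quantities would typically be computed per class and averaged, but the binary case shows the definitions.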
In one example, training detail indicators for an experiment are presented to a user, including: and acquiring indexes of multiple training iterations, and displaying an index evolution process among the multiple training iterations. For example, a line graph is established by taking the number of iterations as a horizontal axis and the training loss as a vertical axis, and the index evolution process among multiple training iterations is shown through the line graph.
In one example, after the experiment is completed, an experiment evaluation task is created to evaluate the model produced by the experiment. Further, the experimental evaluation is presented to the user, including displaying at least one of the evaluation index statistics, resource configuration, real-time log, and error case data under the experiment evaluation task. The evaluation index statistics are the overall profile of each evaluation index. The resource configuration is the running resources allocated to the experiment evaluation task. The real-time log is a real-time record of the running status of the evaluation task. The error case data are cases in which the model made mistakes during the evaluation task, e.g., misrecognized pictures.
In the above example, creating the experiment evaluation task includes selecting an evaluation data set and configuring resources for the evaluation task. Based on the user's selection, a data set for evaluation can be determined, and an evaluation result reflecting the effect of the corresponding model is obtained by evaluating on that data set. The resource configuration may use the "intelligent" mode, i.e., the default configuration.
In step S1300, an application is generated from the obtained machine learning model.
In this embodiment, step S1300 further includes: generating an application based on the trained single machine learning model; or generating the application based on a template process, wherein the template process is used for defining the arrangement process of the trained machine learning models in the application process.
In some application scenarios, such as OCR recognition scenarios, multiple models are required to cooperate with each other to complete a task. Thus, the present embodiments provide a way to generate applications from multiple machine learning models based on a template flow.
In one example, generating an application based on a template flow includes: providing application parameters related in the template process to a user; an application utilizing a plurality of machine learning models is generated according to a template flow according to the user's settings of application parameters.
Taking an OCR recognition scene as an example, the template process in this scene is an OCR recognition process. Generally speaking, the OCR process can be divided into two links: text positioning and text recognition. The process of generating an application based on the OCR recognition process includes the following steps.
First, an OCR plate corresponding to an OCR plate-type picture is created. This includes: displaying an OCR sample picture selected by the user in a canvas area; providing, within or around the canvas area, a control for setting an OCR recognition area; and setting, in response to the user's operation of the control, one or more OCR recognition areas with corresponding contents on the displayed OCR sample picture, so as to obtain the OCR plate-type picture. The step further includes: providing, within or around the canvas area, a control for editing the OCR sample picture; and editing the OCR sample picture in response to the user's operation of the control, wherein the editing includes at least one of changing pictures, selecting, moving, cropping, enlarging, and reducing.
Secondly, an operation interface which shows the OCR plate-type picture and is used for configuring one or more models respectively applied to each recognition area in the OCR plate-type picture is provided for a user.
Finally, receiving configuration operation executed in the operation interface by a user to generate application applying one or more models for each identification area.
The above process is illustrated below, taking the invoice picture shown in fig. 3 as an example. First, the electronic device 1000 receives a picture uploaded by the user as the OCR sample. The electronic device 1000 then presents the picture in the canvas area and provides editing controls within or around the canvas area so that the user can perform operations such as changing pictures, selecting, moving, cropping, and zooming in and out. Furthermore, an identification box is provided in the operation interface, through which the user locates the recognition areas; for example, four recognition areas are shown in dotted boxes in fig. 3, denoted in order recognition area 1, recognition area 2, recognition area 3, and recognition area 4. The user can designate the four recognition areas, in order, as "ticket name", "billing date", "capitalized amount", and "lowercase amount". Finally, for each recognition area, a respective positioning model and recognition model may be trained; for example, for the "capitalized amount" recognition area, a corresponding "capitalized amount" positioning model and "capitalized amount" recognition model are trained.
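The per-area arrangement described above can be sketched as a pipeline: each recognition area is paired with its own positioning and recognition models, and the generated application applies that pair to the corresponding region of an input picture. The models below are stubs standing in for the trained ones, and the picture representation is an assumption for illustration.

```python
def make_ocr_application(region_models):
    """Build an application from {area name: (locate_fn, recognize_fn)} pairs."""
    def application(picture):
        results = {}
        for area, (locate, recognize) in region_models.items():
            box = locate(picture, area)              # positioning model: find the text box
            results[area] = recognize(picture, box)  # recognition model: read the text
        return results
    return application

# Stub models: look up precomputed boxes and text instead of running networks.
stub_locate = lambda picture, area: picture["boxes"][area]
stub_recognize = lambda picture, box: picture["text_at"][box]
```

Composing one such (locate, recognize) pair per recognition area is exactly the template-flow arrangement: the same input picture yields a structured result with one entry per area, such as the four invoice fields in the example.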
In generating the application, the user may select the application type as "application template" to generate the application based on the template flow. In the example shown in fig. 3, based on the trained positioning model and recognition model for each recognition area, an application capable of recognizing multiple areas of the same picture can be automatically generated based on a preset template flow.
In one example, the modeling creation interface described above is further configured to present a plate-type picture and to receive the user's selection of a recognition area in the plate-type picture, wherein the plate-type picture is used for the user to specify the recognition area. The related process further includes: receiving the user's selection of the recognition area in the plate-type picture, and cropping the annotated image so that the cropped annotated image is consistent with the selected recognition area.
In one example, the application development method further comprises: bringing the application online, and visually showing the user at least one of the application information, resources and instances, resource monitoring, API call monitoring, and application logs. Here, going online refers to deploying the application on the relevant device to provide the corresponding service.
In one example, after the application is online, an example picture uploaded by a user is received, and a prediction result of the online application for the example picture is displayed. Thus, the user can detect the recognition effect of the generated application.
The application development method based on a machine learning model provided by this embodiment can autonomously construct artificial intelligence services, especially visual application services, and supports, in one-stop fashion, accessing and storing annotated data under a standard path, constructing and optimizing the model, and bringing the model online, thereby providing online services for actual business scenarios. Assisted by a monitoring and management suite for data, services, and applications, it achieves integrated, automated, and intelligent artificial intelligence development management. The complex application construction process is simplified through low-threshold interface operation, and the problem of high labor cost in artificial intelligence application development is solved.
< apparatus embodiment >
The embodiment provides an application development device. As shown in fig. 4, the application development apparatus 400 includes a model type acquisition module 410, a model training module 420, and an application generation module 430.
A model type obtaining module 410, configured to obtain a type of the machine learning model set by the user.
The model training module 420 is configured to obtain one or more machine learning models through one or more experiments of automatically training the models according to machine learning strategies of corresponding types, where the machine learning strategies are used to control at least one of data, algorithms, and resources related to model training.
And an application generating module 430, configured to generate an application according to the obtained machine learning model.
In one example, the machine learning model is a computer vision-related machine learning model.
In one example, the type of the machine learning model includes at least one of an image classification type, an object recognition type, a text localization type, and a text recognition type.
In one example, the model type acquisition module 410 is configured to: displaying the candidate types of the machine learning models respectively corresponding to various machine learning tasks to a user; a type of the machine learning model selected by the user from the candidate types is received.
In one example, the model training module 420 is configured to: providing a modeling creation interface for setting a model auto-training task according to a corresponding type of machine learning strategy to a user; receiving a setting operation executed by a user in a modeling creation interface to acquire a setting item required by an automatic training model; and performing one or more times of automatic model training experiments based on the marking data uploaded by the user according to the acquired setting items to obtain one or more machine learning models.
In one example, the setting item includes at least one of annotation data uploading, data preprocessing strategy, algorithm configuration and resource configuration.
In one example, at least one of the data pre-processing policy, the algorithm configuration, and the resource configuration provides different levels of configuration policy.
In one example, the model training module 420 is configured to provide a default level of at least one of preprocessing strategy, algorithm configuration, and resource configuration based on the type of annotation data and machine learning model.
In one example, the modeling creation interface is further configured to present a plate-type picture and to receive the user's selection of a recognition area in the plate-type picture, wherein the plate-type picture is used for the user to specify the recognition area, and the interface is further configured to: receive the user's selection of the recognition area in the plate-type picture, and crop the annotated image so that the cropped annotated image is consistent with the selected recognition area.
In one example, the model training module 420 is further configured to: and displaying at least one of indexes such as experiment versions, experiment states, experiment progress, accuracy and the like, creation time, experiment basic information, experiment logs, training detail indexes and experiment evaluation of the experiments to a user.
In one example, the model training module 420 is configured to: and acquiring indexes of multiple training iterations, and displaying an index evolution process among the multiple training iterations.
In one example, the model training module 420 is further configured to: creating an experiment evaluation task to evaluate the experiment yield model, and presenting the experiment evaluation of the experiment to the user comprises: and displaying at least one item of evaluation index statistics, resource allocation, real-time logs and error case data under the experimental evaluation task.
In one example, one or more experiments of the auto-trained model are attributed to the same project, where each project generates a respective one of the applications.
In one example, the model training module 420 is configured to: an evaluation data set is selected and resources for the evaluation task are configured.
In one example, the application generation module 430 is configured to: generating an application based on the trained single machine learning model; or generating the application based on a template process, wherein the template process is used for defining the arrangement process of the trained machine learning models in the application process.
In one example, the application generation module 430 is configured to: providing application parameters related in the template process to a user; an application utilizing a plurality of machine learning models is generated according to a template flow according to the user's settings of application parameters.
In one example, the template process includes an OCR recognition process, and the application generation module 430 is configured to: providing an operation interface for a user, wherein the operation interface shows the OCR plate-type picture and is used for configuring one or more models respectively applied to each recognition area in the OCR plate-type picture; and receiving configuration operation performed in the operation interface by a user to generate application applying one or more models for each identification area.
In one example, the plurality of models includes a localization model and a recognition model for the recognition area.
In one example, the application generation module 430 is further configured to: and creating an OCR plate corresponding to the OCR plate picture.
In one example, the application generation module 430 is configured to: displaying an OCR sample picture selected by a user in a canvas area; providing a control for setting an OCR recognition area within or around the canvas area; and setting one or more OCR recognition areas with corresponding contents on the displayed OCR sample picture in response to the operation of the user on the control so as to obtain the OCR plate-type picture.
In one example, the application generation module 430 is further configured to: providing a control within or around the canvas area for editing the OCR sample picture; and editing the OCR sample picture in response to the operation of the control by the user, wherein the editing comprises at least one of changing pictures, selecting, moving, cutting, enlarging and reducing.
In one example, the application generation module 430 is further configured to: and the application is on-line, and at least one item of application information, resources and instances, resource monitoring, API call monitoring and application logs is visually shown to a user.
In one example, the application generation module 430 is further configured to: and receiving an example picture uploaded by a user, and displaying a prediction result of the online application for the example picture.
In one example, the apparatus further comprises an annotation data acquisition module configured to: and acquiring the annotation data by issuing an annotation task, and uploading the acquired annotation data for training the model.
In one example, the annotation data acquisition module performs at least one of the following processes according to the user's settings: discarding abnormal files, ignoring abnormal labels, failing the import on error, using a recommended configuration, and using a custom configuration.
In one example, the annotation data acquisition module is further configured to present a graphical interface regarding the uploaded annotation data in response to a user input, wherein at least one of the following is provided on the graphical interface: details of the upload log or its entry, shortcuts for copying the annotation data path, buttons for viewing the annotation data.
< electronic device embodiment >
The present embodiment provides an electronic device including the application development apparatus 400 shown in fig. 4. Alternatively, the electronic device is the electronic device 500 shown in fig. 5, which includes a processor 510 and a memory 520. The memory 520 is used for storing instructions for controlling the processor 510 to perform the machine learning model based application development method described in accordance with the method embodiment of the present invention.
< computer-readable storage Medium embodiment >
The present embodiment provides a computer-readable storage medium. The computer readable storage medium stores executable commands that, when executed by a processor, implement a method for machine learning model-based application development as described in accordance with an embodiment of the method of the present invention.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementations by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.
Claims (10)
1. A machine learning model-based application development method, comprising the following steps:
obtaining a type of a machine learning model set by a user;
obtaining one or more machine learning models through one or more automatic model training experiments according to a machine learning strategy corresponding to the type, wherein the machine learning strategy is used to control at least one of the data, algorithms, and resources involved in model training; and
generating an application according to the obtained machine learning model.
2. The method of claim 1, wherein the machine learning model is a computer vision-related machine learning model.
3. The method of claim 1, wherein the type of the machine learning model comprises at least one of an image classification type, an object recognition type, a text localization type, or a text recognition type.
4. The method of claim 1 or 2, wherein obtaining the type of the machine learning model set by the user comprises:
displaying, to the user, candidate types of machine learning models respectively corresponding to various machine learning tasks; and
receiving the type of the machine learning model selected by the user from the candidate types.
5. The method of claim 1 or 2, wherein obtaining the one or more machine learning models through one or more automatic model training experiments according to the machine learning strategy corresponding to the type comprises:
providing the user with a modeling creation interface for setting a model auto-training task according to the machine learning strategy corresponding to the type;
receiving setting operations performed by the user in the modeling creation interface to acquire the setting items required for automatic model training; and
performing, according to the acquired setting items, one or more automatic model training experiments based on annotation data uploaded by the user, to obtain the one or more machine learning models.
6. The method of claim 5, wherein the setting items comprise at least one of annotation data uploading, data preprocessing strategies, algorithm configurations, and resource configurations.
7. The method of claim 6, wherein at least one of the data preprocessing strategy, the algorithm configuration, and the resource configuration provides different levels of configuration strategies.
8. An application development apparatus, comprising:
a model type acquisition module, configured to obtain a type of a machine learning model set by a user;
a model training module, configured to obtain one or more machine learning models through one or more automatic model training experiments according to a machine learning strategy corresponding to the type, wherein the machine learning strategy is used to control at least one of the data, algorithms, and resources involved in model training; and
an application generation module, configured to generate an application according to the obtained machine learning model.
9. An electronic device, comprising:
the apparatus of claim 8; or
a processor and a memory for storing instructions for controlling the processor to perform the method of any of claims 1-7.
10. A computer-readable storage medium storing executable commands that, when executed by a processor, implement the machine learning model-based application development method of any one of claims 1-7.
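Taken together, claims 1, 5, and 8 describe a pipeline in which a user-selected model type determines a machine learning strategy, the strategy drives one or more automatic training experiments over user-uploaded annotation data, and the best resulting model is packaged into an application. The following minimal, hypothetical Python sketch illustrates that flow; the strategy table, the toy scoring in `run_experiment`, and all names are invented for illustration and are not prescribed by the claims:

```python
# Hypothetical per-type machine learning strategies (claim 1): each strategy
# constrains the algorithms and resources (here, epoch budgets) tried during training.
STRATEGIES = {
    "image_classification": {"algorithms": ["resnet", "mobilenet"], "epochs": [5, 10]},
    "object_recognition": {"algorithms": ["yolo", "ssd"], "epochs": [10, 20]},
}

def run_experiment(algorithm, epochs, annotation_data):
    """Toy stand-in for one automatic model training experiment (claim 5):
    returns a (model, score) pair without doing any real training."""
    score = len(annotation_data) * epochs  # invented scoring rule, for illustration only
    return {"algorithm": algorithm, "epochs": epochs}, score

def develop_application(model_type, annotation_data):
    # Step 1 (claim 1): obtain the model type set by the user.
    strategy = STRATEGIES[model_type]
    # Step 2 (claims 1 and 5): run one or more automatic training experiments
    # according to the strategy for that type, over the uploaded annotation data.
    experiments = [
        run_experiment(algo, ep, annotation_data)
        for algo in strategy["algorithms"]
        for ep in strategy["epochs"]
    ]
    best_model, best_score = max(experiments, key=lambda pair: pair[1])
    # Step 3 (claim 1): generate the application from the obtained model.
    return {"serves": model_type, "model": best_model, "score": best_score}

app = develop_application("image_classification", annotation_data=["cat.png", "dog.png"])
print(app["model"])  # the experiment with the highest toy score is selected
```

The sketch keeps claim 5's separation between the user-facing setup (model type, uploaded data) and the experiment loop, so a real system could swap `run_experiment` for actual training jobs without changing the surrounding flow.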
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911395248.1A CN111160569A (en) | 2019-12-30 | 2019-12-30 | Application development method and device based on machine learning model and electronic equipment |
| PCT/CN2020/141344 WO2021136365A1 (en) | 2019-12-30 | 2020-12-30 | Application development method and apparatus based on machine learning model, and electronic device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111160569A true CN111160569A (en) | 2020-05-15 |
Family
ID=70559199
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911395248.1A Pending CN111160569A (en) | 2019-12-30 | 2019-12-30 | Application development method and device based on machine learning model and electronic equipment |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN111160569A (en) |
| WO (1) | WO2021136365A1 (en) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115639592B (en) * | 2021-07-19 | 2025-10-10 | 中国石油化工股份有限公司 | Seismic exploration big data sample collection and annotation method and device |
| CN114064157B (en) * | 2021-11-09 | 2023-09-15 | 中国电力科学研究院有限公司 | Automated process implementation method, system, equipment and media based on page element recognition |
| CN114138446A (en) * | 2021-12-08 | 2022-03-04 | 苏州盈天地资讯科技有限公司 | A Draggable Machine Learning Workflow Component Scheduling Method |
| CN114417980B (en) * | 2021-12-28 | 2025-11-18 | 中国电信股份有限公司 | A method, apparatus, electronic device, and storage medium for establishing a business model. |
| CN116415678A (en) * | 2021-12-31 | 2023-07-11 | 第四范式(北京)技术有限公司 | A learning circle generation method, device, electronic equipment and storage medium |
| CN114358730A (en) * | 2021-12-31 | 2022-04-15 | 中煤科工集团信息技术有限公司 | Coal business processing method and equipment based on machine learning |
| CN114706568B (en) * | 2022-04-22 | 2024-07-05 | 深圳伯德睿捷健康科技有限公司 | Deep learning online coding method and system |
| CN115035328B (en) * | 2022-04-25 | 2024-11-15 | 上海大学 | Converter image incremental automatic machine learning system and its establishment and training method |
| CN114911768B (en) * | 2022-05-24 | 2025-08-15 | 杭州野乐科技有限公司 | Git-based data set version management method, device, equipment and storage medium |
| CN115712852B (en) * | 2022-11-18 | 2026-01-23 | 中国科学院计算技术研究所 | Training method for neural network classification model for identifying human behaviors |
| CN115618239B (en) * | 2022-12-16 | 2023-04-11 | 四川金信石信息技术有限公司 | Management method, system, terminal and medium for deep learning framework training |
| CN117474125B (en) * | 2023-12-21 | 2024-03-01 | 环球数科集团有限公司 | Automatic training machine learning model system |
| CN118334663B (en) * | 2024-06-13 | 2024-08-13 | 杭州宇泛智能科技股份有限公司 | One-stop artificial intelligent image processing model construction method and device |
| CN119357025B (en) * | 2024-12-26 | 2025-06-06 | 浪潮电子信息产业股份有限公司 | A model training benchmark evaluation method, device, program product and medium |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108881446A (en) * | 2018-06-22 | 2018-11-23 | 深源恒际科技有限公司 | A kind of artificial intelligence plateform system based on deep learning |
| CN109815991A (en) * | 2018-12-29 | 2019-05-28 | 北京城市网邻信息技术有限公司 | Training method, device, electronic equipment and the storage medium of machine learning model |
| CN110163233A (en) * | 2018-02-11 | 2019-08-23 | 陕西爱尚物联科技有限公司 | A method of so that machine is competent at more complex works |
| CN110378463A (en) * | 2019-07-15 | 2019-10-25 | 北京智能工场科技有限公司 | A kind of artificial intelligence model standardized training platform and automated system |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190090774A1 (en) * | 2017-09-27 | 2019-03-28 | Regents Of The University Of Minnesota | System and method for localization of origins of cardiac arrhythmia using electrocardiography and neural networks |
| CN110009174B (en) * | 2018-12-13 | 2020-11-06 | 创新先进技术有限公司 | Risk identification model training method, device and server |
| CN110058922B (en) * | 2019-03-19 | 2021-08-20 | 华为技术有限公司 | A method and apparatus for extracting metadata of machine learning tasks |
| CN110210626A (en) * | 2019-05-31 | 2019-09-06 | 京东城市(北京)数字科技有限公司 | Data processing method, device and computer readable storage medium |
| CN111160569A (en) * | 2019-12-30 | 2020-05-15 | 第四范式(北京)技术有限公司 | Application development method and device based on machine learning model and electronic equipment |
- 2019
  - 2019-12-30: CN CN201911395248.1A patent/CN111160569A/en active Pending
- 2020
  - 2020-12-30: WO PCT/CN2020/141344 patent/WO2021136365A1/en not_active Ceased
Cited By (25)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021136365A1 (en) * | 2019-12-30 | 2021-07-08 | 第四范式(北京)技术有限公司 | Application development method and apparatus based on machine learning model, and electronic device |
| CN111695443A (en) * | 2020-05-21 | 2020-09-22 | 平安科技(深圳)有限公司 | Intelligent traffic artificial intelligence open platform, method, medium and electronic device |
| CN111695443B (en) * | 2020-05-21 | 2023-01-24 | 平安科技(深圳)有限公司 | Intelligent traffic artificial intelligence open platform, method, medium and electronic device |
| CN111723746A (en) * | 2020-06-22 | 2020-09-29 | 江苏云从曦和人工智能有限公司 | Scene recognition model generation method, system, platform, device and medium |
| CN111813084B (en) * | 2020-07-10 | 2022-10-28 | 重庆大学 | Mechanical equipment fault diagnosis method based on deep learning |
| CN111813084A (en) * | 2020-07-10 | 2020-10-23 | 重庆大学 | A fault diagnosis method for mechanical equipment based on deep learning |
| CN114154641A (en) * | 2020-09-07 | 2022-03-08 | 华为云计算技术有限公司 | AI model training method and device, computing equipment and storage medium |
| WO2022048557A1 (en) * | 2020-09-07 | 2022-03-10 | 华为云计算技术有限公司 | Ai model training method and apparatus, and computing device and storage medium |
| CN112101567A (en) * | 2020-09-15 | 2020-12-18 | 厦门渊亭信息科技有限公司 | Automatic modeling method and device based on artificial intelligence |
| CN112364883B (en) * | 2020-09-17 | 2022-06-10 | 福州大学 | American license plate recognition method based on single-stage target detection and deptext recognition network |
| CN112364883A (en) * | 2020-09-17 | 2021-02-12 | 福州大学 | American license plate recognition method based on single-stage target detection and deptext recognition network |
| CN112416301A (en) * | 2020-10-19 | 2021-02-26 | 山东产研鲲云人工智能研究院有限公司 | Deep learning model development method and device, and computer-readable storage medium |
| CN114443831A (en) * | 2020-10-30 | 2022-05-06 | 第四范式(北京)技术有限公司 | Text classification method and device applying machine learning and electronic equipment |
| CN112508769A (en) * | 2020-12-28 | 2021-03-16 | 浪潮云信息技术股份公司 | Method for constructing multi-task computer vision application service based on deep learning |
| CN112734911A (en) * | 2021-01-07 | 2021-04-30 | 北京联合大学 | Single image three-dimensional face reconstruction method and system based on convolutional neural network |
| CN112767205A (en) * | 2021-01-26 | 2021-05-07 | 深圳市恩孚电子科技有限公司 | Machine learning teaching method, device, electronic equipment and storage medium |
| CN112767205B (en) * | 2021-01-26 | 2025-07-04 | 深圳市恩孚电子科技有限公司 | Machine learning teaching method, device, electronic device and storage medium |
| CN112948476A (en) * | 2021-02-25 | 2021-06-11 | 第四范式(北京)技术有限公司 | Data access method, device, system and storage medium of machine learning system |
| CN112948476B (en) * | 2021-02-25 | 2024-05-31 | 第四范式(北京)技术有限公司 | Data access method, device, system and storage medium of machine learning system |
| CN112966439A (en) * | 2021-03-05 | 2021-06-15 | 北京金山云网络技术有限公司 | Machine learning model training method and device and virtual experiment box |
| CN113470448A (en) * | 2021-06-30 | 2021-10-01 | 上海松鼠课堂人工智能科技有限公司 | Simulation method, system and equipment based on scientific experiment for generating countermeasure network |
| CN115114963A (en) * | 2021-09-24 | 2022-09-27 | 中国劳动关系学院 | Intelligent streaming media video big data analysis method based on convolutional neural network |
| CN115660064A (en) * | 2022-11-10 | 2023-01-31 | 北京百度网讯科技有限公司 | Model training method, data processing method and device based on deep learning platform |
| CN115660064B (en) * | 2022-11-10 | 2023-09-29 | 北京百度网讯科技有限公司 | Model training method based on deep learning platform, data processing method and device |
| CN116151323A (en) * | 2023-02-18 | 2023-05-23 | 浙江中控信息产业股份有限公司 | Model generation method, device, electronic device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021136365A1 (en) | 2021-07-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111160569A (en) | Application development method and device based on machine learning model and electronic equipment | |
| US11176423B2 (en) | Edge-based adaptive machine learning for object recognition | |
| US20200410392A1 (en) | Task-aware command recommendation and proactive help | |
| CN118466957B (en) | Method and system for constructing UI (user interface) by using artificial intelligence | |
| US10929159B2 (en) | Automation tool | |
| CN113255819A (en) | Method and apparatus for identifying information | |
| Liang et al. | User behavior data analysis and product design optimization algorithm based on deep learning | |
| CN119904880B (en) | Sample set construction method, question-answer model training method, question-answer processing method, request processing method and task platform | |
| CN115830313A (en) | Image semantic segmentation and annotation method based on deep learning | |
| Chen et al. | Exploring feature sparsity for out-of-distribution detection | |
| CN119649084A (en) | Illegal image detection method, device, equipment and storage medium | |
| Körner et al. | Mastering Azure Machine Learning: Perform large-scale end-to-end advanced machine learning in the cloud with Microsoft Azure Machine Learning | |
| CN115661542B (en) | A small sample target detection method based on feature relationship transfer | |
| KR20240114674A (en) | Method and apparatus for providing customized video contents through real-time object analysis | |
| US12238451B2 (en) | Predicting video edits from text-based conversations using neural networks | |
| CN119645488B (en) | Code task processing, code error correction and code processing model training method | |
| CN114330588B (en) | Picture classification method, picture classification model training method and related devices | |
| CN120122988B (en) | Code annotation analysis method, computer device, storage medium and product | |
| US11507728B2 (en) | Click to document | |
| Akram et al. | From Data Quality to Model Performance: Navigating the Landscape of Deep Learning Model Evaluation | |
| Borg | From Bugs to Decision Support–Leveraging Historical Issue Reports in Software Evolution | |
| Alashjaee et al. | Cloud-Edge Continuum Framework for Admission Data Management Using Deep Learning Model | |
| CN119249149A (en) | A data model optimization method, device, computer equipment and storage medium | |
| Nardviriyaku et al. | Convolutional Neural Networks (CNN) for Industrial Parts Recognition: Advancing Manufacturing Digitalization |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |