
GB2588614A - Method and system for customising a machine learning model - Google Patents


Info

Publication number
GB2588614A
Authority
GB
United Kingdom
Prior art keywords
classifier
class
new class
user
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1915637.1A
Other versions
GB201915637D0 (en)
GB2588614B (en)
Inventor
Zhu Xiatian
Perez-Rua Juan
Xiang Tao
Hospedales Timothy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to GB1915637.1A (GB2588614B)
Publication of GB201915637D0
Priority to KR1020200036344A (KR20210052153A)
Priority to EP20883551.2A (EP3997625A4)
Priority to PCT/KR2020/007560 (WO2021085785A1)
Priority to US16/901,685 (US11797824B2)
Publication of GB2588614A
Application granted
Publication of GB2588614B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N20/00 - Machine learning
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A machine learning (ML) model 112 on a user device 102, such as a smartphone, is customised by: determining whether to add a user-requested new class to the ML model; if so, obtaining at least one extracted feature from a sample of the new class via an ML model comprising a feature extractor 114, 124 and a base portion 116, 126 of a classifier; and storing the extracted feature on the user device as a representation of the new class in a local portion 118 of the classifier. The base portion, which may reside on a server, is a matrix of classifier weight vectors. New classes can be shared. Samples may comprise images, videos or audio. By splitting the classifier into two portions, a base portion with the original classes and a local portion 118 with the new classes, the ML model can be updated quickly, without retraining from scratch, and locally, as global changes to the classifier are not required.

Description

Intellectual Property Office Application No. GB1915637.1 RTM Date: 24 April 2020. The following terms are registered trade marks and should be read as such wherever they occur in this document: Bluetooth, Python, Thread, WiFi, ZigBee. Intellectual Property Office is an operating name of the Patent Office (www.gov.uk/ipo).
Method and System for Customising a Machine Learning Model
Field
[001] The present application generally relates to a method and system for customising a machine learning (ML) model, and in particular to methods for enabling a user to add new classes to a machine learning model to customise the ML model on their devices.
Background
[2] Generally speaking, existing artificial intelligence (AI) based recognition models are trained offline for a fixed base set of categories or classes, and the models may then be provided to devices such as smartphones, robots/robotic devices, or any other image and/or sound recognition systems, to be implemented on those devices. The models, once trained, cannot be altered on the devices to, for example, add in new categories/classes that the model can recognise/identify. This is because existing AI based recognition models typically require many samples of the new classes to be obtained and for the model to be retrained using both the original and new samples (which is time-consuming), and/or require the models to be retrained using cloud computing (which is expensive and therefore undesirable). However, users often desire the ability to personalise an AI model to add in classes which are relevant to the user.
[3] The present applicant has recognised the need for an improved technique for customising a machine learning model.
Summary
[4] In a first approach of the present techniques, there is provided a method for customising a machine learning model on a user device, the method comprising: receiving a user request for a new class; determining whether the new class is new and should be added to the machine learning model; obtaining, when the new class is determined to be new, at least one sample representative of the new class; obtaining, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample; and storing, on the user device, the at least one extracted feature as a representation of the new class in a local portion of the classifier of the machine learning model.
[5] As mentioned above, there is a desire to enable a user to customise a machine learning model that a company has created and provided to the user. For example, a user may purchase a device such as a smartphone, virtual assistant device, or robot which can implement a machine learning model. The machine learning model may be stored on the device and implemented on the device, or may be partly implemented on the device and partly implemented elsewhere (e.g. on a cloud or remote server). The machine learning model may have been trained to perform a particular task such as image classification or object recognition. The machine learning model may have been trained using a set of samples (e.g. images), and a set of classes may have been determined. The classes may be used by a classifier to analyse new samples (e.g. images captured by a camera of a smartphone) for classification/categorisation purposes. However, the original training of the machine learning model may have been performed using a specific set of samples and therefore, a specific set of classes may be created. The specific set of samples may have been chosen to be suitable for most users or the most common or general classification/categorisation purposes (e.g. identifying whether an image contains a dog or a cat). The user may wish for the machine learning model to be customised/personalised so that particular classes that are specific to the user are used by the model. For example, the user may wish the model to not only be able to identify whether an image contains a dog, but also identify whether the image contains their dog. In order to enable this additional, personalised functionality, the classifier of the machine learning model needs to contain a class that describes the user's dog.
[6] The present techniques enable a machine learning or AI model/algorithm to be customised in a time-efficient, resource-efficient and cost-effective manner, while also ensuring the model remains accurate. This is achieved by locally extending the classifier of the machine learning model on the user device (e.g. smartphone). In other words, global changes to the classifier that was created during the training process are not made or required - this means that the model can be updated quickly as the model does not need to be retrained from scratch. Furthermore, this means it is not necessary to use cloud computing to update/customise the model, which is expensive. The model can be updated locally, i.e. on the user's device, which means the customisation process uses available resources in an efficient manner. In the present techniques, the classifier is effectively split into two portions - a base portion containing the original classes of the classifier obtained during the training process, and a local portion containing the new classes that are created specifically for a user based on samples the user inputs. When the machine learning model is run to classify/categorise samples, both the base portion and the local portion of the classifier are used. This means that the customised model still contains all the original information from when the model was originally trained and the new classes created by/for the user, thereby ensuring no loss in model accuracy or functionality. For example, the model may still be able to recognise dogs and cats in images, but it can now also recognise a user's dog.
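As an illustrative sketch only (not the patented implementation), the local extension described above can be pictured as storing a new weight vector, derived from the user's samples, alongside an untouched base classifier. The helper names (`add_local_class`, `extract_features`) are hypothetical:

```python
from typing import Callable, Dict, List

def add_local_class(
    local_classifier: Dict[str, List[float]],
    class_name: str,
    samples: List[List[float]],
    extract_features: Callable[[List[float]], List[float]],
) -> None:
    """Average the extracted features of the user's samples and store the
    result as the weight vector for the new class in the local portion.
    The base portion of the classifier is left untouched."""
    features = [extract_features(s) for s in samples]
    dim = len(features[0])
    mean = [sum(f[i] for f in features) / len(features) for i in range(dim)]
    local_classifier[class_name] = mean

# Usage with a toy "feature extractor" that returns the sample unchanged:
local = {}
add_local_class(local, "my dog", [[1.0, 0.0], [0.0, 1.0]], lambda s: s)
# local["my dog"] is now the mean feature vector [0.5, 0.5]
```

Because only this small dictionary changes, the update can happen entirely on the device, without touching the trained base weights.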
[7] The ML model comprises a feature extractor. Each time the ML model is run on the user device, the feature extractor is used to extract features from an input sample. The extracted features can then be used by the classifier of the ML model (i.e. the base portion and the local portion) to determine whether the features correspond to a particular class. The feature extractor may be provided on the user device. Additionally or alternatively, the feature extractor may reside in an external server/cloud server, and the processing performed by the feature extractor may therefore be performed off-device.
[8] Thus, the step of obtaining at least one extracted feature may comprise: transmitting the user request and at least one sample to a server comprising the feature extractor and base portion of the classifier of the machine learning model; and receiving, from the server, the at least one extracted feature from the feature extractor.
[9] Alternatively, the step of obtaining at least one extracted feature may comprise: applying, on the user device, the feature extractor to the at least one sample; and extracting at least one feature from the at least one sample.
[10] The base portion of the classifier may be a matrix comprising a plurality of columns, where each column is a classifier weight vector corresponding to a class. The step of storing the at least one extracted feature as a representation of the new class may comprise: storing a classifier weight vector corresponding to the new class on the user device. Thus, when the customised ML model is run, a full forward pass through the whole model is performed. That is, each sample goes through the feature extractor and the obtained feature vector is compared to the classifier weight vectors of the classifier. In some cases, the feature vector may be compared to the weight vectors of the base portion of the classifier first (either in the cloud or on the device), and then compared to the weight vector(s) of the local portion of the classifier (on the device). More specifically, the classification process may comprise calculating the dot product between the feature vector obtained for an input sample, and each weight vector in the classifier. The model outputs a class that is most likely representative of the sample, i.e. the class for which cosine distance between the feature vector and weight vector is shortest.
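A minimal sketch of the classification step described above, assuming the base and local portions are held as dictionaries mapping class names to weight vectors (the names and data layout are illustrative, not taken from the patent):

```python
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors (higher = shorter cosine distance)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify(feature: List[float],
             base: Dict[str, List[float]],
             local: Dict[str, List[float]]) -> str:
    """Return the class whose weight vector is closest to the feature
    vector in cosine distance; both portions of the classifier are used."""
    candidates = {**base, **local}
    return max(candidates, key=lambda c: cosine(feature, candidates[c]))

base = {"dog": [1.0, 0.0, 0.0], "cat": [0.0, 1.0, 0.0]}
local = {"my dog": [0.9, 0.1, 0.4]}
print(classify([0.8, 0.1, 0.35], base, local))  # prints "my dog"
```

The base and local portions are searched with the same comparison, so adding a local class never disturbs how the original classes are scored.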
[11] The base portion (i.e. the original portion) of the classifier may be regularised using an orthogonality constraint. This may result in the base portion of the classifier (i.e. the base classifier weight vectors) being structured in a way that they are more compatible with new weight vectors that are added to the classifier (i.e. the local portion). In other words, the orthogonality constraint may be chosen to help make the model customisable and may result in better generalisation performance since the base portion of the classifier is more amenable to extension. The orthogonality constraint may make the base classifier weight vectors more distant (in terms of cosine distance) from each other. Optionally, the method of customising the model may comprise regularising the classifier weight vector corresponding to the new class, using the same orthogonality constraint. However, it may not be necessary to regularise the local portion of the classifier because of the relative size of the local portion compared with the base portion. In other words, regularising the base portion (which may contain hundreds of classes) may lead to efficiencies in matching a class to a new input sample, but regularising the local portion (which may only contain a few additional classes) may make negligible improvements to the processing. Nevertheless, regularising the weight vectors of the local portion of the classifier may be performed on the device as a relatively cheap, fine-tuning step.
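One plausible formulation of such an orthogonality constraint (not necessarily the one used by the patent) is a penalty term that is zero when all classifier weight vectors are mutually orthogonal and grows as they align; minimising it during training pushes the base weight vectors apart in cosine distance:

```python
import math
from typing import List

def orthogonality_penalty(weights: List[List[float]]) -> float:
    """Sum of squared pairwise cosine similarities between distinct
    classifier weight vectors; zero when every pair is orthogonal."""
    def unit(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    units = [unit(w) for w in weights]
    penalty = 0.0
    for i in range(len(units)):
        for j in range(i + 1, len(units)):
            penalty += sum(a * b for a, b in zip(units[i], units[j])) ** 2
    return penalty

print(orthogonality_penalty([[1.0, 0.0], [0.0, 1.0]]))  # 0.0 (orthogonal)
print(orthogonality_penalty([[1.0, 0.0], [1.0, 0.0]]))  # 1.0 (fully aligned)
```

Added as a weighted term to the training loss, a penalty of this kind spreads the base weight vectors out, leaving "room" for new local vectors.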
[12] A user may make a request for a new class in a number of ways. These are described below in more detail with reference to the Figures. For instance, the user request may comprise one or more samples (e.g. an image of the user's dog) representative of the new class, or may comprise at least one keyword (e.g. "my dog") to be associated with the new class, or may comprise one or more samples and one or more keywords. The customisation method may vary depending on the contents of the user request. The customisation method may comprise determining if the new class requested by the user is actually new, or if it is closely matched/very similar to an existing class in the classifier (either in the base portion, or in the local portion if this already exists). In the latter case, the method may comprise suggesting to the user that a similar or substantially identical class already exists. The user may accept the suggestion to, for example, link their keyword(s) to the existing class. Alternatively, the user may reject the suggestion and the method may continue the process to add the user's proposed class to the model.
[13] Thus, the step of receiving a user request for a new class may comprise: receiving at least one keyword to be associated with the new class.
[14] In an example, the step of determining whether the new class is new may comprise: determining whether the at least one keyword matches one of a plurality of predefined keywords in the base portion of the classifier of the machine learning model; identifying, when the at least one keyword matches one of the plurality of predefined keywords, a class corresponding to the matched predefined keyword; and outputting example samples corresponding to the identified class and a suggestion to assign the at least one keyword to the identified class.
[15] In this example, the method may further comprise: receiving user confirmation that the at least one keyword is to be assigned to the identified class; and assigning, responsive to the receiving, the at least one keyword to the identified class. Alternatively, the method may further comprise: receiving user input disapproving of the at least one keyword being assigned to the identified class; and beginning, responsive to the receiving, the steps to add the new class to the machine learning model.
[16] Alternatively, the step of determining whether the new class is new may comprise: determining whether the at least one keyword matches one of a plurality of predefined keywords in the base portion of the classifier of the machine learning model; receiving, when the at least one keyword does not match any of the plurality of predefined keywords, at least one sample representative of the new class; determining whether features of the at least one sample match an existing class in the classifier; and outputting, when the features of the at least one sample match an existing class, example samples corresponding to the matched existing class and a suggestion to assign the at least one keyword to the existing class.
[017] In this example, the method may further comprise: receiving user confirmation that the at least one keyword is to be assigned to the matched existing class; and assigning, responsive to the receiving, the at least one keyword to the matched existing class. Alternatively, the method may further comprise: receiving user input disapproving of the at least one keyword being assigned to the matched existing class; and beginning, responsive to the receiving, the steps to add the new class to the machine learning model.
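The duplicate-class check in the preceding paragraphs might be sketched as follows, with `best_matching_class` standing in for the feature-similarity comparison against existing classes (all names and the threshold value are illustrative assumptions, not from the patent):

```python
from typing import Callable, List, Optional, Tuple

def determine_if_new(
    keyword: str,
    base_keywords: List[str],
    samples: Optional[list] = None,
    best_matching_class: Optional[Callable[[list], Tuple[str, float]]] = None,
    threshold: float = 0.9,
) -> Tuple[bool, Optional[str]]:
    """Return (is_new, suggested_class). The class is treated as existing
    if the keyword matches a predefined keyword, or if the samples'
    features closely match an existing class according to the supplied
    similarity helper."""
    normalised = keyword.strip().lower()
    for existing in base_keywords:
        if normalised == existing.lower():
            return (False, existing)   # suggest linking keyword to this class
    if samples is not None and best_matching_class is not None:
        matched_class, score = best_matching_class(samples)
        if score >= threshold:
            return (False, matched_class)
    return (True, None)                # genuinely new: proceed to add it

print(determine_if_new("Dog", ["dog", "cat"]))     # (False, 'dog')
print(determine_if_new("my dog", ["dog", "cat"]))  # (True, None)
```

When an existing class is suggested and the user rejects the suggestion, the method simply proceeds down the "new class" path, as described in the paragraphs above.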
[018] The present techniques may be advantageous from a user privacy perspective. This is because the new class is stored on the user device, rather than being stored in the cloud or added to the base portion of the classifier which other users can use/access. However, it may be desirable for the new class defined by the user to be shared across the user's other devices (e.g. from a smartphone to their laptop, virtual assistant, robot butler, smart fridge, etc.). Thus, the method may further comprise: sharing the new class stored in the local portion of the classifier with one or more devices used by the user of the user device. This may happen automatically. For example, if the ML model is used as part of a camera application, when the model is updated on a user's smartphone, the model may automatically be shared with any of the user's other devices running the same camera application. Thus, the sharing may form part of software application synchronisation across multiple devices.
[19] In some cases, with the user's permission, the method may comprise: sharing the new class stored in the local portion of the classifier with a server comprising the base portion of the classifier.
[20] The step of obtaining the at least one sample representative of the new class may comprise obtaining one or more of: an image, an audio file, an audio clip, a video, and a frame of a video.
[21] In a related approach of the present techniques, there is provided a non-transitory data carrier carrying processor control code to implement the methods described herein.
[022] As will be appreciated by one skilled in the art, the present techniques may be embodied as a system, method or computer program product. Accordingly, present techniques may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
[023] Furthermore, the present techniques may take the form of a computer program product embodied in a computer readable medium having computer readable program code embodied thereon. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
[024] Computer program code for carrying out operations of the present techniques may be written in any combination of one or more programming languages, including object oriented programming languages and conventional procedural programming languages. Code components may be embodied as procedures, methods or the like, and may comprise sub-components which may take the form of instructions or sequences of instructions at any of the levels of abstraction, from the direct machine instructions of a native instruction set to high-level compiled or interpreted language constructs.
[25] Embodiments of the present techniques also provide a non-transitory data carrier carrying code which, when implemented on a processor, causes the processor to carry out any of the methods described herein.
[26] The techniques further provide processor control code to implement the above-described methods, for example on a general purpose computer system or on a digital signal processor (DSP). The techniques also provide a carrier carrying processor control code to, when running, implement any of the above methods, in particular on a non-transitory data carrier. The code may be provided on a carrier such as a disk, a microprocessor, CD- or DVD-ROM, programmed memory such as non-volatile memory (e.g. Flash) or read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier. Code (and/or data) to implement embodiments of the techniques described herein may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as Python, C, or assembly code, code for setting up or controlling an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or code for a hardware description language such as Verilog (RTM) or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, such code and/or data may be distributed between a plurality of coupled components in communication with one another. The techniques may comprise a controller which includes a microprocessor, working memory and program memory coupled to one or more of the components of the system.
[027] It will also be clear to one of skill in the art that all or part of a logical method according to embodiments of the present techniques may suitably be embodied in a logic apparatus comprising logic elements to perform the steps of the above-described methods, and that such logic elements may comprise components such as logic gates in, for example a programmable logic array or application-specific integrated circuit. Such a logic arrangement may further be embodied in enabling elements for temporarily or permanently establishing logic structures in such an array or circuit using, for example, a virtual hardware descriptor language, which may be stored and transmitted using fixed or transmittable carrier media.
[028] In an embodiment, the present techniques may be realised in the form of a data carrier having functional data thereon, said functional data comprising functional computer data structures to, when loaded into a computer system or network and operated upon thereby, enable said computer system to perform all the steps of the above-described method.
[29] The above-mentioned features described with respect to the first approach also apply to the second and third approaches.
[30] In a second approach of the present techniques, there is provided an electronic user device comprising: a user interface for receiving a user request for a new class; and at least one processor coupled to memory and arranged to: determine whether the new class is new and should be added to the machine learning model, obtain, when the new class is determined to be new, at least one sample representative of the new class, obtain, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample, and store, on the user device, the at least one extracted feature as a representation of the new class in a local portion of the classifier of the machine learning model.
[31] In a third approach of the present techniques, there is provided a system for implementing a machine learning model, the system comprising: a server comprising: a feature extractor and a base portion of a classifier of the machine learning model; and an electronic user device comprising: a user interface for receiving a user request for a new class; and at least one processor coupled to memory and arranged to: determine whether the new class is new and should be added to the machine learning model; obtain, when the new class is determined to be new, at least one sample representative of the new class; obtain, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample; and store, on the user device, the at least one extracted feature as a representation of the new class in a local portion of the classifier of the machine learning model.
[32] In some cases, the user device may not comprise a feature extractor and thus, may not be able to extract features from a received sample (either as part of the process to add in new classes, or as part of the process to categorise/classify samples using the classifier). Accordingly, the step of obtaining at least one extracted feature may comprise: transmitting, using a communication module, the user request and at least one sample from the user device to the server; and receiving, from the server, the at least one extracted feature from the feature extractor.
[33] Alternatively, the electronic user device may comprise a feature extractor. In this case, the step of obtaining at least one extracted feature may comprise: applying, on the user device, the feature extractor to the at least one sample; and extracting at least one feature from the at least one sample.
[34] Whether the feature extractor is applied on the user device or on the server, a weight vector is obtained for the new class in the input sample. The base portion of the classifier is a matrix comprising a plurality of columns, where each column is a classifier weight vector corresponding to a class. The step of storing the at least one extracted feature as a representation of the new class may comprise: storing a classifier weight vector corresponding to the new class on the user device.
[35] As mentioned above, the at least one processor of the user device may regularise the classifier weight vector corresponding to the new class, using an orthogonality constraint.
[36] The customised model may be implemented as follows. The at least one processor of the user device may: receive a sample to be analysed by the customised machine learning model. The processor may obtain at least one feature extracted from the received sample - this may be obtained by either running a feature extractor on the device, or by receiving the feature(s) from the server running the feature extractor. The processor may transmit, from the user device to the server, the at least one feature extracted from the received sample for analysis using the base portion of the classifier. (This may not be necessary if the feature extractor is running on the server). The processor may analyse the at least one feature extracted from the received sample using the local portion of the classifier. The processor may obtain information on whether the extracted feature(s) match a class in the base portion of the classifier - this may be obtained either from the server (which performs the analysis using a base portion stored in the server) or may be obtained by the processor running the analysis on the device itself (if the base portion is stored on the device). The processor may then determine whether the at least one extracted feature matches a class defined by the base portion or the local portion of the classifier. The determination may be output to the user. For example, the processor may output a message via the user interface of the device to indicate if the sample does not match a class in the classifier (e.g. "no match found") or if the sample matches a class in the classifier (e.g. "my dog").
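The split inference described above, where the server scores the base portion and the device scores the local portion, might look like this in outline (the score-combination rule and the threshold are illustrative assumptions):

```python
import math
from typing import Dict, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def classify_on_device(
    feature: List[float],
    server_result: Tuple[str, float],   # best base-portion match and its score
    local: Dict[str, List[float]],
    min_score: float = 0.5,
) -> str:
    """Compare the server's best base-portion match against the on-device
    local portion and return the overall winner, or "no match found" when
    nothing scores above the minimum."""
    best_class, best_score = server_result
    for cls, weights in local.items():
        score = cosine(feature, weights)
        if score > best_score:
            best_class, best_score = cls, score
    return best_class if best_score >= min_score else "no match found"

local = {"my dog": [0.9, 0.1]}
print(classify_on_device([0.8, 0.15], ("dog", 0.6), local))  # "my dog"
```

Only the extracted feature leaves the device; the user's local classes and their weight vectors stay on the device, which is consistent with the privacy point made in paragraph [018].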
[37] The at least one processor of the user device may share the new class stored in the local portion of the classifier with the server. As mentioned above, the new class may be shared with other devices in the system that are owned by the user, and/or with the server.
[38] The step of obtaining the at least one sample representative of the new class may comprise obtaining one or more of: an image, an audio file, an audio clip, a video, and a frame of a video.
Brief description of drawings
[39] Implementations of the present techniques will now be described, by way of example only, with reference to the accompanying drawings, in which:
[40] Figure 1 is a flowchart of example steps to customise a machine learning model;
[41] Figure 2 is a block diagram of a system for customising and implementing a machine learning model;
[42] Figure 3 illustrates an example application of the customisation method;
[43] Figures 4A and 4B show block diagrams of existing techniques for implementing a machine learning model;
[44] Figures 5A and 5B show block diagrams of the present techniques for customising and implementing a machine learning model;
[45] Figure 6A illustrates a technique for customising a machine learning model using samples that are images;
[046] Figure 6B illustrates a technique for customising a machine learning model using samples that are frames of a video; and
[047] Figure 7 illustrates a flow chart of example steps to check whether a user-requested class already exists in the machine learning model.
Detailed description of drawings
[048] Broadly speaking, the present techniques relate to devices, methods and systems for customising a machine learning (ML) model by enabling a user to add new classes to the machine learning model. This may enable the model to, for example, recognise objects that are specific to the user, which may improve/enhance user experience. For example, the machine learning model may be used to classify or categorise objects in images captured by a user using their user device (e.g. smartphone). This may enable a user to more readily search through images on their device to find images belonging to a particular class. In another example, the model may be customised to identify a user's version of an object the model already recognises. A user may wish for a robot butler which has an image recognition functionality to recognise the user's particular mug out of a collection of mugs. Thus, the present techniques enable a machine learning model which is used for recognition to be personalised and customised.
[49] The terms "class" and "classification" are used interchangeably herein with the terms "category" and "categorisation".
[50] Figure 1 is a flowchart of example steps to customise a machine learning model. The method may begin by receiving, on a user device, a user request for a new class (step S100). The user device may be any electronic device, such as, but not limited to, a smartphone, tablet, laptop, computer or computing device, virtual assistant device, robot or robotic device, or image capture system/device. The user may make this request in any suitable way, such as by launching an app on the device which enables the user to interact with the machine learning model. The app may be an app associated with a camera on the device or an app used to collate images and videos captured by the camera, for example.
[051] The method may comprise first determining whether the new class is actually new and therefore, should be added to the machine learning model (step S102). This check may be performed in order to avoid duplication of classes in the model, which could make the model inefficient to run. Some example techniques for determining if the new class requested by the user is actually new are described below with reference to Figure 7.
[52] If at step S102 the new class is determined to be new, the method may comprise obtaining at least one sample representative of the new class (step S104). The at least one sample may be one or more of: an image, an audio file, an audio clip, a video, and a frame of a video. Typically, the at least one sample may be a set of images which all show the same object (or features) which is to be used to define the new class. For example, if the user wishes the machine learning model to identify the user's dog in images and videos, the user may provide one or more photos of the user's dog as the input samples that are representative of the new class. Where multiple samples are obtained, the samples may all be of the same type/file type (e.g. images) or could be of different types (e.g. images and videos). In other words, the user could provide both photos and videos of the user's dog as the input samples.
[53] A single sample that is representative of the new class may be sufficient to customise the machine learning model. However, as with all machine learning techniques, more samples usually result in better outcomes. The method may comprise a step of asking the user to input more samples if the samples that have been provided/obtained are not of a good enough quality or are not sufficient to enable the new class to be defined and added to the model.
[54] In some cases, the user request at step S100 may comprise the sample(s) representative of the new class - in this case, at step S104 the method simply uses the sample(s) already received. In some cases, the user request at step S100 may not include any samples - in this case, at step S104, the method may comprise prompting the user to provide/input the sample(s). Alternatively, the sample(s) may have been received during the checking step S102, and therefore at step S104, the method comprises using the samples obtained during this checking process.
[55] The method may comprise obtaining, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample (step S106). As explained in more detail with respect to Figure 2, the machine learning model may be implemented entirely on the user device, entirely on a remote server/cloud server, or partly on the user device and partly on the server. Whichever way the model is implemented, the extracted features are obtained at step S106, and are then stored on the user device as a representation of the new class in a local portion of the classifier of the machine learning model (S108).
[56] The present techniques provide a customisable machine learning or AI model/algorithm that can be customised in a time-efficient, resource-efficient and cost-effective manner, while also ensuring the model remains accurate. This is achieved by locally extending the classifier of the machine learning model on the user device (e.g. smartphone).
In other words, global changes to the classifier that was created during the training process are not made or required - this means that the model can be updated quickly, as the model does not need to be retrained from scratch. Furthermore, this means it is not necessary to use cloud computing to update/customise the model, which is expensive. The model can be updated locally, i.e. on the user's device, which means the customisation process uses available resources in an efficient manner. In the present techniques, the classifier is effectively split into two portions - a base portion containing the original classes of the classifier obtained during the training process, and a local portion containing the new classes that are created specifically for a user based on samples the user inputs. When the machine learning model is run to classify/categorise samples, both the base portion and the local portion of the classifier are used. This means that the customised model still contains all the original information from when the model was originally trained and the new classes created by/for the user, thereby ensuring no loss in model accuracy or functionality. For example, the model may still be able to recognise dogs and cats in images, but it can now also recognise a user's dog.
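The split-classifier arrangement described above can be sketched in code. The description does not fix how the extracted features are combined into the stored class representation; the sketch below assumes a common few-shot-learning approach known as weight imprinting, in which the new classifier weight vector is the mean of the L2-normalised sample features. All class and function names are illustrative, not from the patent.

```python
import numpy as np

def imprint_weight(sample_features):
    """Combine sample features into one class weight vector (assumed
    approach: mean of L2-normalised features, renormalised)."""
    f = np.asarray(sample_features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)  # normalise each sample
    w = f.mean(axis=0)
    return w / np.linalg.norm(w)                      # renormalise the mean

class SplitClassifier:
    """Base portion (trained once, never modified here) plus a local
    portion that grows as the user registers new classes on-device."""
    def __init__(self, base_weights, base_labels):
        self.base_w = np.asarray(base_weights, dtype=float)
        self.base_labels = list(base_labels)
        self.local_w = []          # local portion: per-user weight vectors
        self.local_labels = []

    def add_class(self, label, sample_features):
        # Only the local portion changes; no retraining from scratch.
        self.local_w.append(imprint_weight(sample_features))
        self.local_labels.append(label)
```

Because `add_class` only appends to the local portion, the base classes (e.g. "dog", "cat", "mug") remain intact, which matches the no-loss-of-functionality property described above.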
[57] Figure 2 is a block diagram of a system 100 that may be used for customising and implementing a machine learning model. The system may comprise at least one electronic user device 102 and at least one server 120. For the sake of simplicity, only one user device and one server are shown in Figure 2, but it will be understood that the system may comprise multiple servers, and a plurality of user devices - a subset of the user devices may belong to the same individual user. The electronic user device 102 may be any user device, such as, but not limited to, a smartphone, tablet, laptop, computer or computing device, virtual assistant device, robot or robotic device, consumer good/appliance (e.g. a smart fridge), an internet of things device, or image capture system/device.
[58] The user device 102 may comprise an app (i.e. a software application) 104, via which the user may be able to make a request to add a new class to the machine learning model. The user device 102 may comprise a communication module 106 to enable the user device to communicate with the server 120 and any other device within the system 100. The communication module 106 may be any communication module suitable for sending and receiving data. The communication module may communicate with the server 120 or other components of the system 100 using any one or more of: wireless communication (e.g. WiFi), hypertext transfer protocol (HTTP), message queuing telemetry transport (MQTT), a wireless mobile telecommunication protocol, short range communication such as radio frequency identification (RFID) or near field communication (NFC), the communication protocols specified by ZigBee, Thread, Bluetooth, Bluetooth LE, IPv6 over Low Power Wireless Standard (6LoWPAN) or Constrained Application Protocol (CoAP), or wired communication. The communication module 106 may use a wireless mobile (cellular) telecommunication protocol to communicate with the server 120 or other components of the system, e.g. 3G, 4G, 5G, 6G etc. The communication module 106 may communicate with other devices in the system 100 using wired communication techniques, such as via metal cables or fibre optic cables. The user device 102 may use more than one communication technique to communicate with other components in the system 100. It will be understood that this is a non-exhaustive list of communication techniques that the communication module 106 may use. It will also be understood that intermediary devices (such as a gateway) may be located between the user device 102 and other components in the system 100, to facilitate communication between the machines/components.
[59] The user device 102 may comprise storage 110. Storage 110 may comprise a volatile memory, such as random access memory (RAM), for use as temporary memory, and/or non-volatile memory such as Flash, read only memory (ROM), or electrically erasable programmable ROM (EEPROM), for storing data, programs, or instructions, for example.
[60] User device 102 may comprise one or more interfaces (not shown) that enable the device to receive inputs and/or generate outputs (e.g. audio and/or visual inputs and outputs, or control commands, etc.). For example, the user device 102 may comprise a display screen to enable app 104 to be displayed on the device and for a user to enter their request for a new class using the app 104. The display screen may be used to display prompts or notifications generated by the system 100 or user device 102.
[061] The user device 102 comprises at least one processor or processing circuitry 108. The processor 108 controls various processing operations performed by the user device 102, such as communication with other components in system 100, and implementing all or part of a machine learning model on the device 102. The processor may comprise processing logic to process data and generate output data/messages in response to the processing. The processor may comprise one or more of: a microprocessor, a microcontroller, and an integrated circuit.
[62] The user device 102 comprises a machine learning model 112. The machine learning model of system 100 comprises a feature extractor and a base portion of a classifier. In some cases, the user device 102 may comprise feature extractor 114 and base portion 116 of a classifier. In other cases, these components may be provided on server 120. The machine learning model 112 comprises a local portion 118 of the classifier - this is stored on the user device 102.
[63] When a user requests a new class to be added to the model (e.g. via app 104), the at least one processor 108 may be coupled to memory and may be arranged to determine whether the new class is new and should be added to the machine learning model. This is described in more detail with reference to Figure 7. The processor 108 may obtain, when the new class is determined to be new, at least one sample representative of the new class. The sample(s) may be obtained via the app 104. The processor 108 may obtain, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample. Depending on how the system 100 is arranged, the extracted feature may be obtained from feature extractor 114 running on user device 102, or from feature extractor 124 running on server 120. The processor 108 may store, on the user device (e.g. in storage 110), the at least one extracted feature as a representation of the new class in a local portion 118 of the classifier of the machine learning model 112.
[64] The server 120 may comprise, among other things, a machine learning model 122. The machine learning model 122 may comprise a feature extractor 124 and a base portion of a classifier 126 of the machine learning model 122. The feature extractor 124 is identical to the feature extractor 114, and the base portion 126 of the classifier is identical to the base portion 116 of the classifier.
[65] Each time the ML model is run on the user device 102, the feature extractor 114 or 124 is used to extract features from an input sample. The extracted features can then be used by the classifier of the ML model (i.e. the base portion and the local portion) to determine whether the features correspond to a particular class. As already explained, the feature extractor 114 may be provided on the user device 102. Additionally or alternatively, the feature extractor 124 may reside in the external server/cloud server 120, and the processing performed by the feature extractor 124 may therefore be performed off-device.
[066] Thus, in order for the processor 108 of user device 102 to obtain at least one extracted feature from the sample(s), the processor may: transmit the user request and at least one sample to the server 120 comprising the feature extractor 124 and base portion 126 of the classifier of the machine learning model; and receive, from the server 120, the at least one extracted feature from the feature extractor 124. Alternatively, the processor 108 may: apply, on the user device 102, the feature extractor 114 to the at least one sample; and extract at least one feature from the at least one sample.
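The two alternatives in the preceding paragraph amount to a simple dispatch between an on-device extractor and a server round trip. In the sketch below, the callables are placeholders for the device-side extractor 114 and the communication with server 120; no real transport API is implied.

```python
def obtain_features(sample, local_extractor=None, send_to_server=None):
    """Obtain extracted features for a sample, either on-device or via
    the server. Exactly one path is expected to be available:
    - local_extractor: callable applying the on-device extractor (114)
    - send_to_server: callable transmitting the sample and returning the
      features produced by the server-side extractor (124)
    """
    if local_extractor is not None:
        return local_extractor(sample)     # on-device path
    if send_to_server is not None:
        return send_to_server(sample)      # off-device path
    raise ValueError("no feature extraction path available")
```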
[067] Figure 3 illustrates an example application of the customisation method. A user device may be able to capture images of an environment. The user device 200 may be, for example, a robot butler device which may be able to move through an environment. A user 206 may wish for the user device 200 to locate their cup/mug, so that the user 206 can retrieve their mug. In order to achieve this, the user device 200 needs to be able to identify the user's mug. The user device 200 may already be able to distinguish mugs/cups from other objects, as the user device 200 may comprise an image recognition system. That is, the base portion of the classifier of the machine learning model underpinning the image recognition system may comprise a classifier weight vector corresponding to the class "mug". However, in order for the user device 200 to find the user's mug in an environment, the ML model needs to learn about the user's mug and store a classifier weight vector corresponding to the user's mug.
Once the model has been customised in this way, if the user 206 queries the user device 200 about the location of their mug, the user device 200 may be able to scan an environment 202 and identify a mug 204 which matches the classifier weight vector corresponding to the user's mug 204.
[068] Figures 4A and 4B show block diagrams of existing techniques for implementing a machine learning model. Figure 4A conceptually shows a non-generalised "few shot learning" method in which a feature extractor 400 is able to extract features from a sample and produce classification weights 402 for new classes only. This is disadvantageous because the resulting model only contains new classes, i.e. it does not contain any of the original base classes.
Thus, by personalising the model using the technique shown in Figure 4A, the resulting classifier only contains classifier weight vectors for the new (personal) classes and none of the base classes, and so cannot perform broader classification.
[69] Figure 4B shows conceptually how existing methods, such as that shown in Figure 4A, lead to inferior model classification performance, as the methods do not regularise the classifier weights.
[70] Figures 5A and 5B show block diagrams of the present techniques for customising and implementing a machine learning model. Figure 5A conceptually shows how in the generalised few shot learning method of the present techniques, a feature extractor 500 is able to extract features from a sample and save the classification weight vector 504 for the new classes in the classifier alongside the original base classification weight vectors 502. That is, the original classes that were produced when the machine learning model was originally trained are retained and used together with the new classes whenever the model is applied/used.
[71] Thus, the base portion 116, 126 of the classifier may be a matrix comprising a plurality of columns, where each column is a classifier weight vector corresponding to a class. Storing the at least one extracted feature as a representation of the new class may comprise the processor 108 storing a classifier weight vector corresponding to the new class on the user device 102, e.g. in storage 110. Thus, when the customised ML model 112 is run, a full forward pass through the whole model is performed. That is, each sample goes through the feature extractor 114, 124 and the obtained feature vector is compared to the classifier weight vectors of the classifier. In some cases, the feature vector may be compared to the weight vectors of the base portion of the classifier first (either in the cloud 120 or on the device 102), and then compared to the weight vector(s) of the local portion 118 of the classifier (on the device 102). More specifically, the classification process may comprise calculating the dot product between the feature vector obtained for an input sample, and each weight vector in the classifier. The model outputs the class that is most likely representative of the sample, i.e. the class for which the cosine distance between the feature vector and the weight vector is smallest.
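The forward pass just described can be sketched as follows. With L2-normalised vectors, ranking by dot product and ranking by smallest cosine distance are equivalent, so the sketch normalises explicitly; the names are illustrative.

```python
import numpy as np

def classify(feature_vec, base_w, base_labels, local_w=(), local_labels=()):
    """Compare an extracted feature vector against the weight vectors of
    both the base portion and the local portion of the classifier, and
    return the label with the highest cosine similarity."""
    weights = list(np.asarray(base_w, dtype=float)) + \
              [np.asarray(w, dtype=float) for w in local_w]
    labels = list(base_labels) + list(local_labels)
    f = np.asarray(feature_vec, dtype=float)
    f = f / np.linalg.norm(f)
    # dot product with each L2-normalised weight vector == cosine similarity
    scores = [float(np.dot(w / np.linalg.norm(w), f)) for w in weights]
    return labels[int(np.argmax(scores))]
```

A feature vector close to a locally stored weight vector (e.g. the user's dog) wins over a generic base class, while inputs far from every local class still fall back to the base classes.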
[72] Figure 5B shows how the base portion 502 of the classifier may be regularised using an orthogonality constraint. This may result in the base portion of the classifier (i.e. the base classifier weight vectors) being structured in a way that makes them more compatible with new weight vectors that are added to the classifier (i.e. the local portion). In other words, the orthogonality constraint may be chosen to help make the model customisable, and may result in better generalisation performance since the base portion of the classifier is more amenable to extension.
Optionally, the method of customising the model may comprise regularising the classifier weight vector corresponding to the new class, using the same orthogonality constraint.
However, it may not be necessary to regularise the local portion of the classifier because of the relative size of the local portion compared with the base portion. In other words, regularising the base portion (which may contain hundreds of classes) may lead to efficiencies in matching a class to a new input sample, but regularising the local portion (which may only contain a few additional classes) may make negligible improvements to the processing.
Nevertheless, regularising the weight vectors of the local portion of the classifier may be performed on the device as a relatively cheap, fine-tuning step. Thus, by regularising the classifier matrix, the performance of the model may be improved relative to the existing techniques shown in Figures 4A and 4B.
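The description does not give the exact form of the orthogonality constraint. A common choice, assumed in the sketch below, is a soft penalty added to the training loss that is zero exactly when the classifier weight vectors are orthonormal: the squared Frobenius norm of (W Wᵀ - I).

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality regulariser over the rows (classifier weight
    vectors) of W: squared Frobenius norm of (W @ W.T - I). Zero when
    the rows are orthonormal; assumed to be added to the training loss
    for the base portion as a weighted penalty term."""
    W = np.asarray(W, dtype=float)
    gram = W @ W.T
    return float(np.sum((gram - np.eye(W.shape[0])) ** 2))
```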
[73] A user may make a request for a new class in a number of ways. For instance, the user request may comprise one or more samples (e.g. an image of the user's dog) representative of the new class, or may comprise at least one keyword (e.g. "my dog") to be associated with the new class, or may comprise one or more samples and one or more keywords. The customisation method may vary depending on the contents of the user request.
[74] Figure 6A illustrates a technique for customising a machine learning model using samples that are images. In this example, a user may wish to find pictures of his dog on his smartphone. However, as a dog lover, the user may have thousands of pictures of other dogs on his device. An image gallery app on a smartphone may be able to locate all pictures containing dogs using text-based keyword searching, but may not be able to locate pictures containing the user's dog because no keyword or class corresponds to the user's dog.
[075] At step S600, the user may launch the image gallery app on their smartphone and enter the settings section of the app. In the settings section, the user may be able to "add a new search category". Thus, the process for customising the machine learning model may be built into the app. The app may prompt the user to enter a request for a new class. The app may prompt the user to enter new category keywords. In this case, at step S602 the user may enter the keywords "German shepherd", "my dog" and "Laika" (the dog's name) to be associated with the new category. At step S604 the user may add pictures of his dog into the app, either by capturing an image of the dog using his camera or from the image gallery. This new category may now be saved. When the user subsequently enters the keywords "my dog" into the search function of the image gallery app, the user is provided with images of his dog (step S606).
[76] A user may also use the settings section of the image gallery app to remove categories he no longer requires, by removing the keywords associated with the categories. This may cause the classifier weight vectors associated with those keywords to be removed from the local portion of the classifier.
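Removing a category, as just described, only deletes entries from the local portion; the base portion is never modified. A minimal sketch with illustrative data structures:

```python
def remove_category(local_labels, local_weights, local_keywords, keyword):
    """Drop every local class whose keyword set contains `keyword` and
    return the filtered local portion. Base classes are unaffected."""
    kept = [i for i, kws in enumerate(local_keywords) if keyword not in kws]
    return ([local_labels[i] for i in kept],
            [local_weights[i] for i in kept],
            [local_keywords[i] for i in kept])
```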
[77] Figure 6B illustrates a technique for customising a machine learning model using samples that are frames of a video. In this example, a user may not like the default open hand gesture that is used by his smartphone to give a command to the camera of the smartphone to take a selfie (as shown in step S610). The user wants to register a new gesture for the "take a selfie" command. At step S612, the user enters the camera settings section and selects "set new selfie-taking gesture". The user begins recording a user-defined gesture, which may be a left-to-right head motion (step S614). At step S616 the user may confirm his choice. The user can now use his new gesture to activate the selfie-taking action (step S618).
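For video samples such as the gesture above, the description does not say how per-frame features become a single class representation. One plausible treatment, sketched below as an assumption, is mean-pooling the per-frame feature vectors and L2-normalising the result; an order-sensitive gesture would instead need a temporal model.

```python
import numpy as np

def video_class_representation(frame_features):
    """Pool per-frame feature vectors into one class representation by
    mean-pooling followed by L2 normalisation (an assumed design choice,
    adequate only when frame order does not matter)."""
    f = np.asarray(frame_features, dtype=float)
    pooled = f.mean(axis=0)
    return pooled / np.linalg.norm(pooled)
```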
[078] As mentioned above, it may be desirable to only register a new class in the local portion of the classifier of the machine learning model if the base portion of the classifier (or the local portion, if this already exists) does not already contain the same or a substantially identical class. If a similar or substantially identical class already exists in the classifier, the user may be informed that the class exists, and the system may propose linking their keywords with the existing class. The user may accept the suggestion or may reject the suggestion, in which case the process to add the user's proposed class to the model may continue.
[079] Figure 7 illustrates a flow chart of example steps to check whether a user-requested class already exists in the machine learning model.
[80] The process begins by receiving a user request for a new class (step S700). This may comprise receiving at least one keyword to be associated with the new class (step S702).
[81] The process may comprise determining whether the at least one keyword matches one of a plurality of predefined keywords in the classifier of the machine learning model (step S704). If a local portion of the classifier already exists, the process may comprise matching the keywords to those associated with both the base portion and the local portion of the classifier.
[82] If the keyword(s) match any predefined keyword, the process may comprise identifying a class corresponding to the matched predefined keyword (step S706). The process may comprise outputting a suggestion to the user (via the user interface of the user device) to assign the at least one keyword to the identified existing class (step S708). The process may also output example samples corresponding to the identified class, to help the user to understand why the requested new class is similar/identical to the identified existing class. At step S710, the process may comprise awaiting a user response to the proposal/suggestion.
In some cases, the process may comprise determining if the user has approved the suggestion. If the user approves the suggestion, the process may comprise receiving user confirmation that the at least one keyword is to be assigned to the identified class; and assigning, responsive to the receiving, the at least one keyword to the identified class (step S712). Alternatively, the process may comprise: receiving user input disapproving of the at least one keyword being assigned to the identified class (at step S710); and beginning, responsive to the receiving, the steps to add the new class to the machine learning model (step S714). For example, the process may continue to step S104 of Figure 1.
[083] If at step S704 the keyword(s) entered by the user do not match any of the plurality of predefined keywords, the process may comprise receiving at least one sample representative of the new class (step S716). The process may then extract features from the at least one received sample (using a feature extractor on the user device 102 or server 120), and determine whether the features match an existing class in the classifier (step S718). This may be determined by calculating the dot product between the feature vector generated using the extracted features from the received sample and each classifier weight vector of the classifier, as described above. If it is determined at step S718 that the extracted features match an existing class, then the process outputs a suggestion to assign the received keyword to the identified class (step S708). The process may also output example samples corresponding to the identified class, to help the user to understand why the requested new class is similar/identical to the identified existing class. At step S710, the process may comprise awaiting a user response to the proposal/suggestion. In some cases, the process may comprise determining if the user has approved the suggestion. If the user approves the suggestion, the process may comprise receiving user confirmation that the at least one keyword is to be assigned to the identified class; and assigning, responsive to the receiving, the at least one keyword to the identified class (step S712). Alternatively, the process may comprise: receiving user input disapproving of the at least one keyword being assigned to the identified class (at step S710); and beginning, responsive to the receiving, the steps to add the new class to the machine learning model (step S714). For example, the process may continue to step S104 of Figure 1. Similarly, if at step S718 it is determined that the extracted features do not match an existing class, the process proceeds to step S714 (i.e.
may continue to step S104 of Figure 1).
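The checking flow of Figure 7 can be summarised as: try a keyword match first (step S704), then fall back to a feature-similarity test (step S718). The similarity threshold below is an assumed parameter; the description only specifies the dot-product comparison, and the data structures are illustrative.

```python
import numpy as np

def check_new_class(keywords, sample_feature, classes, threshold=0.9):
    """Decide whether a requested class already exists.
    `classes` maps label -> (keyword set, L2-normalised weight vector).
    Returns ('suggest', label) to propose an existing class (step S708),
    or ('add', None) to proceed with adding the new class (step S714)."""
    for label, (kws, _) in classes.items():            # step S704
        if any(k in kws for k in keywords):
            return ("suggest", label)
    f = np.asarray(sample_feature, dtype=float)
    f = f / np.linalg.norm(f)
    for label, (_, w) in classes.items():              # step S718
        if float(np.dot(np.asarray(w, dtype=float), f)) >= threshold:
            return ("suggest", label)
    return ("add", None)                               # step S714
```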
[84] The present techniques may be advantageous from a user privacy perspective. This is because, as shown in Figure 2, the new class is stored on the user device, rather than being stored in the cloud or added to the base portion of the classifier which other users can use/access. However, it may be desirable for the new class defined by the user to be shared across the user's other devices (e.g. from a smartphone to their laptop, virtual assistant, robot butler, smart fridge, etc.). Thus, the method may further comprise: sharing the new class stored in the local portion of the classifier with one or more devices used by the user of the user device. This may happen automatically. For example, if the ML model is used as part of a camera application, when the model is updated on a user's smartphone, the model may automatically be shared with any of the user's other devices running the same camera application. Thus, the sharing may form part of software application synchronisation across multiple devices.
[85] In some cases, with the user's permission, the method may comprise: sharing the new class stored in the local portion of the classifier with a server comprising the base portion of the classifier.
[86] Those skilled in the art will appreciate that while the foregoing has described what is considered to be the best mode and where appropriate other modes of performing present techniques, the present techniques should not be limited to the specific configurations and methods disclosed in this description of the preferred embodiment. Those skilled in the art will recognise that present techniques have a broad range of applications, and that the embodiments may take a wide range of modifications without departing from any inventive concept as defined in the appended claims.

Claims (25)

  1. CLAIMS1. A method for customising a machine learning model on a user device, the method comprising: receiving a user request for a new class; determining whether the new class is new and should be added to the machine learning model; obtaining, when the new class is determined to be new, at least one sample representative of the new class; obtaining, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample; and storing, on the user device, the at least one extracted feature as a representation of the new class in a local portion of the classifier of the machine learning model.
  2. 2. The method as claimed in claim 1 wherein the step of obtaining at least one extracted feature comprises: transmitting the user request and at least one sample to a server comprising the feature extractor and base portion of the classifier of the machine learning model; and receiving, from the server, the at least one extracted feature from the feature extractor. 20
  3. 3. The method as claimed in claim 1 wherein the step of obtaining at least one extracted feature comprises: applying, on the user device, the feature extractor to the at least one sample; and extracting at least one feature from the at least one sample.
  4. 4. The method as claimed in any of claims 1 to 3 wherein the base portion of the classifier is a matrix comprising a plurality of columns, where each column is a classifier weight vector corresponding to a class and the step of storing the at least one extracted feature as a representation of the new class comprises: storing a classifier weight vector corresponding to the new class on the user device.
  5. 5. The method as claimed in claim 4 further comprising: regularising the classifier weight vector corresponding to the new class, using an orthogonality constraint.
  6. 6. The method as claimed in any of claims 1 to 5 wherein the step of receiving a user request for a new class comprises: receiving at least one keyword to be associated with the new class.
  7. 7. The method as claimed in claim 6 wherein the step of determining whether the new class is new comprises: determining whether the at least one keyword matches one of a plurality of predefined keywords in the base portion of the classifier of the machine learning model; identifying, when the at least one keyword matches one of the plurality of predefined keywords, a class corresponding to the matched predefined keyword; and outputting example samples corresponding to the identified class and a suggestion to assign the at least one keyword to the identified class.
  8. S. The method as claimed in claim 7 further comprising: receiving user confirmation that the at least one keyword is to be assigned to the identified class; and assigning responsive to the receiving, the at least one keyword to the identified class.
  9. 9. The method as claimed in claim 7 further comprising: receiving user input disapproving of the at least one keyword being assigned to the identified class; and beginning, responsive to the receiving, the steps to add the new class to the machine learning model.
  10. 10. The method as claimed in claim 6 wherein the step of determining whether the new class is new comprises: determining whether the at least one keyword matches one of a plurality of predefined keywords in the base portion of the classifier of the machine learning model; receiving, when the at least one keyword does not match any of the plurality of predefined keywords, at least one sample representative of the new class; determining whether features of the at least one sample match an existing class in the classifier; and outputting, when the features of the at least one sample match an existing class, example samples corresponding to the matched existing class and a suggestion to assign the at least one keyword to the existing class.
  11. 11. The method as claimed in claim 10 further comprising: receiving user confirmation that the at least one keyword is to be assigned to the matched existing class; and assigning, responsive to the receiving, the at least one keyword to the matched existing 5 class.
   12. The method as claimed in claim 10 further comprising: receiving user input disapproving of the at least one keyword being assigned to the matched existing class; and beginning, responsive to the receiving, the steps to add the new class to the machine learning model.
   13. The method as claimed in any preceding claim further comprising: sharing the new class stored in the local portion of the classifier with one or more devices used by the user of the user device.
   14. The method as claimed in any preceding claim further comprising: sharing the new class stored in the local portion of the classifier with a server comprising the base portion of the classifier.
   15. The method as claimed in any preceding claim wherein the step of obtaining the at least one sample representative of the new class comprises obtaining one or more of: an image, an audio file, an audio clip, a video, and a frame of a video.
   16. A non-transitory data carrier carrying code which, when implemented on a processor, causes the processor to carry out the method of any of claims 1 to 15.
   17. An electronic user device comprising: a user interface for receiving a user request for a new class; and at least one processor coupled to memory and arranged to: determine whether the new class is new and should be added to the machine learning model, obtain, when the new class is determined to be new, at least one sample representative of the new class, obtain, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample, and store, on the user device, the at least one extracted feature as a representation of the new class in a local portion of the classifier of the machine learning model.
   18. A system for implementing a machine learning model, the system comprising: a server comprising: a feature extractor and a base portion of a classifier of the machine learning model; and an electronic user device comprising: a user interface for receiving a user request for a new class; and at least one processor coupled to memory and arranged to: determine whether the new class is new and should be added to the machine learning model; obtain, when the new class is determined to be new, at least one sample representative of the new class; obtain, from a machine learning model comprising a feature extractor and a base portion of a classifier, at least one extracted feature from the at least one sample; and store, on the user device, the at least one extracted feature as a representation of the new class in a local portion of the classifier of the machine learning model.
   19. The system as claimed in claim 18 wherein the step of obtaining at least one extracted feature comprises: transmitting, using a communication module, the user request and at least one sample to the server; and receiving, from the server, the at least one extracted feature from the feature extractor.
   20. The system as claimed in claim 18 wherein the electronic user device comprises a feature extractor and wherein the step of obtaining at least one extracted feature comprises: applying, on the user device, the feature extractor to the at least one sample; and extracting at least one feature from the at least one sample.
   21. The system as claimed in any of claims 18 to 20, wherein the base portion of the classifier is a matrix comprising a plurality of columns, where each column is a classifier weight vector corresponding to a class and the step of storing the at least one extracted feature as a representation of the new class comprises: storing a classifier weight vector corresponding to the new class on the user device.
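The arrangement in claim 21, where the base classifier is a matrix of per-class weight vectors and a new class is stored as an additional weight vector on the device, might be sketched as follows. The prototype-style construction (averaging the extracted features of the user's samples and normalising) is an illustrative assumption; the claim itself only requires that a classifier weight vector for the new class be stored.

```python
import numpy as np

class LocalClassifier:
    """On-device (local) portion of the classifier: each user-defined
    class is stored as a single weight vector, analogous to one extra
    column of the base classifier matrix."""

    def __init__(self, dim):
        self.dim = dim
        self.classes = {}  # label -> (dim,) classifier weight vector

    def add_class(self, label, features):
        # features: (n, dim) array extracted by the shared feature
        # extractor from the user's few samples of the new class
        proto = np.mean(features, axis=0)
        self.classes[label] = proto / np.linalg.norm(proto)

    def score(self, feature):
        # Cosine-style score of one extracted feature against every
        # locally stored class weight vector
        f = feature / np.linalg.norm(feature)
        return {lbl: float(w @ f) for lbl, w in self.classes.items()}
```

Because only a single vector per class is stored, the local portion stays small enough to live on the user device while the shared feature extractor and base classifier remain unchanged.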
   22. The system as claimed in claim 21 wherein the at least one processor of the user device: regularises the classifier weight vector corresponding to the new class, using an orthogonality constraint.
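One plausible reading of the orthogonality constraint in claim 22 is that the new class's weight vector is penalised (or projected) so that it carries no component along the existing base classifier columns, keeping the new class separable from the predefined ones. The penalty form and the hard projection below are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def orthogonality_penalty(W_base, w_new):
    """Sum of squared dot products between the new class weight vector
    and every column of the base classifier matrix; zero iff w_new is
    orthogonal to all base class weight vectors."""
    # W_base: (d, C) matrix, one column per base class
    # w_new:  (d,) weight vector for the locally added class
    return float(np.sum((W_base.T @ w_new) ** 2))

def project_orthogonal(W_base, w_new):
    """Hard enforcement: remove the component of w_new that lies in
    the span of the base classifier columns."""
    Q, _ = np.linalg.qr(W_base)  # orthonormal basis of the base span
    return w_new - Q @ (Q.T @ w_new)
```

In training, the penalty would typically be added to the classification loss as a soft regulariser rather than applied as a hard projection.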
   23. The system as claimed in any of claims 18 to 22 wherein the at least one processor of the user device: receives a sample to be analysed by the machine learning model; obtains at least one feature extracted from the received sample; transmits, from the user device to the server, the at least one feature extracted from the received sample for analysis using the base portion of the classifier; analyses the at least one feature extracted from the received sample using the local portion of the classifier; and determines whether the at least one extracted feature matches a class defined by the base portion or the local portion of the classifier.
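The inference flow of claim 23, where a query's extracted feature is scored against both the server-side base classifier and the on-device local classifier and the best match across the two is returned, could be sketched as follows. The cosine-style scoring and the representation of each portion as a matrix of weight-vector columns are illustrative assumptions:

```python
import numpy as np

def classify(feature, W_base, W_local, base_labels, local_labels):
    """Score one extracted feature against the base and local
    classifier columns and return the best-matching class label."""
    f = feature / np.linalg.norm(feature)
    # Concatenate base (server) and local (device) weight vectors:
    # shape (d, C_base + C_local)
    W = np.concatenate([W_base, W_local], axis=1)
    W = W / np.linalg.norm(W, axis=0, keepdims=True)
    scores = W.T @ f
    labels = list(base_labels) + list(local_labels)
    return labels[int(np.argmax(scores))]
```

In the claimed split, the base scores would be computed on the server and the local scores on the device, with only the extracted feature (not the raw sample) sent over the network; the concatenation here just models taking the maximum across both sets of scores.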
   24. The system as claimed in any of claims 18 to 23 wherein the at least one processor of the user device: shares the new class stored in the local portion of the classifier with the server.
   25. The system as claimed in any of claims 18 to 24 wherein the step of obtaining the at least one sample representative of the new class comprises obtaining one or more of: an image, an audio file, an audio clip, a video, and a frame of a video.
GB1915637.1A 2019-10-29 2019-10-29 Method and system for customising a machine learning model Active GB2588614B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB1915637.1A GB2588614B (en) 2019-10-29 2019-10-29 Method and system for customising a machine learning model
KR1020200036344A KR20210052153A (en) 2019-10-29 2020-03-25 Electronic apparatus and method for controlling thereof
EP20883551.2A EP3997625A4 (en) 2019-10-29 2020-06-11 ELECTRONIC DEVICE AND ASSOCIATED CONTROL METHOD
PCT/KR2020/007560 WO2021085785A1 (en) 2019-10-29 2020-06-11 Electronic apparatus and method for controlling thereof
US16/901,685 US11797824B2 (en) 2019-10-29 2020-06-15 Electronic apparatus and method for controlling thereof

Publications (3)

Publication Number Publication Date
GB201915637D0 GB201915637D0 (en) 2019-12-11
GB2588614A true GB2588614A (en) 2021-05-05
GB2588614B GB2588614B (en) 2023-01-11

Family

ID=68768880

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240305572A1 (en) * 2023-03-09 2024-09-12 Abb Schweiz Ag Method for Providing an Efficient Communication in a Hierarchical Network of Distributed Devices

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625876B (en) * 2022-03-17 2024-04-16 北京字节跳动网络技术有限公司 Method for generating author characteristic model, method and device for processing author information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100177956A1 (en) * 2009-01-13 2010-07-15 Matthew Cooper Systems and methods for scalable media categorization
EP2689650A1 (en) * 2012-07-27 2014-01-29 Honda Research Institute Europe GmbH Trainable autonomous lawn mower
US20150254532A1 (en) * 2014-03-07 2015-09-10 Qualcomm Incorporated Photo management
US20150324688A1 (en) * 2014-05-12 2015-11-12 Qualcomm Incorporated Customized classifier over common features
WO2016149147A1 (en) * 2015-03-17 2016-09-22 Qualcomm Incorporated Sample selection for retraining classifiers
US20180039887A1 (en) * 2016-08-08 2018-02-08 EyeEm Mobile GmbH Systems, methods, and computer program products for extending, augmenting and enhancing searching and sorting capabilities by learning and adding concepts on the fly
US20180330238A1 (en) * 2017-05-09 2018-11-15 Neurala, Inc. Systems and methods to enable continual, memory-bounded learning in artificial intelligence and deep learning continuously operating applications across networked compute edges

Also Published As

Publication number Publication date
KR20210052153 (en) 2021-05-10
