EP3611854B1 - Method and apparatus for defending against adversarial attacks - Google Patents
- Publication number
- EP3611854B1 (application EP18188739.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- signals
- machine learning
- portions
- learning system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04K—SECRET COMMUNICATION; JAMMING OF COMMUNICATION
- H04K1/00—Secret communication
Definitions
- Various example embodiments relate generally to methods and apparatus for defending against adversarial attacks.
- Many processing systems operate based on sensor information. The use of machine learning systems in these processing systems is increasing, particularly in sensitive applications including user identification and verification, online shopping and interaction with private data.
- One example is the use of machine learning systems to process an input signal, such as a sampled audio signal, to provide user identification and verification. A machine learning system, typically a trained neural network, processes an input signal and produces a classification outcome, for instance a user identification.
- It is now recognized that many machine learning systems are vulnerable to adversarial attacks. An adversarial attack is a perturbation added to an original signal which can cause a machine learning system to produce an incorrect classification outcome. Often the perturbation is of small amplitude compared to the original signal; in the case of an original audio signal, the addition of the perturbation may be inaudible to a user.
- PCT publication WO2018/085697 A1 discloses a training method for neural networks to improve resistance to adversarial attacks by training the neural network on a variational information bottleneck objective.
- PCT publication WO2016/010989 A1 discloses a system for user authentication using voice recognition.
- Example embodiments will now be described, including methods and apparatus that remove a portion from a first signal. The portion is selected at random from a plurality of portions included in the first signal. The methods and apparatus also replace the portion removed from the first signal with a replacement portion, which may mitigate the vulnerability of machine learning systems to adversarial attacks.
- Functional blocks denoted as "means configured to perform ..." (a certain function) shall be understood as functional blocks comprising circuitry that is adapted for performing or configured to perform a certain function. A means being configured to perform a certain function does, hence, not imply that such means necessarily is performing said function (at a given time instant). Moreover, any entity described herein as "means" may correspond to or be implemented as "one or more modules", "one or more devices", "one or more units", etc.
- When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" or "controller" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional or custom, may also be included. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
- FIG. 1 illustrates an example environment in which embodiments may be employed.
- FIG. 1 shows a user 10, a first device 20 and a second device 30.
- Each of the devices 20, 30 may have at least one microphone (not shown) which records sound incident thereon. For instance, if the user 10 speaks, the sound of the user 10 speaking, U, will arrive at the or each microphone in each device 20, 30 and may be recorded.
- These recordings may be processed by a machine learning system, for example to categorise the identity of the user 10.
- In some cases the machine learning system may be provided on one or more of the devices 20, 30, while in other cases the machine learning system may be in a system remote from the devices 20, 30 and with which the devices 20, 30 may communicate.
- Also shown in FIG. 1 is a perturbation source 40 which emits a perturbation signal P, for instance as a perturbation sound. The perturbation signal P will also be incident on the microphones of the devices 20, 30, and so will also be recorded.
- As a result, the signal that the machine learning system processes is a combination of the user speech U and the perturbation sound signal P. This leads to the possibility of an adversarial attack in which a perturbation sound signal P recorded together with user speech U causes the machine learning system to produce an incorrect classification.
- It will be appreciated by the skilled person that the embodiments are not restricted to two devices; in some embodiments only one of the devices 20, 30 may be present, while in other embodiments more than two devices may be present.
- Referring to FIG. 2, there is shown a plurality of signals S1, S2, ..., Sn, referred to collectively as signals S. In some embodiments each of the signals S may be a recording of the same audio sound, for instance by using a plurality of microphones, one for each signal S, which may be present in one or more of the devices 20, 30.
- Each signal S may be a digitized audio signal comprising a plurality of samples.
- Some embodiments relate to a method 100 as shown in FIG. 2. The method 100 comprises, at 102, removing a portion from each signal S. Each signal S comprises a plurality of portions, and the portion removed from each signal S is selected at random from the plurality of portions.
- In some embodiments, the selection of the portion to be removed may be performed independently and at random for each of the signals S, so that, with high likelihood, different portions are removed from each of the signals S.
- In some embodiments, random sub-sampling may be used to remove the portion from each of the signals S. More than one portion may be removed from each signal S, and a different number of portions may be removed from each of the signals S1...Sn. Each portion removed from each signal S comprises at least one sample.
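As an illustration of the removal at 102, the sketch below removes one or more randomly selected fixed-length portions from a sampled signal, marking the removed samples for later replacement. The function name, the portion length, the portion count and the `None` marker are illustrative assumptions; the embodiments fix none of these.

```python
import random

def remove_random_portions(signal, portion_len=4, n_portions=1, rng=None):
    """Mark randomly selected portions of a sampled signal as removed.

    Illustrative sketch only: portion_len and n_portions are assumptions,
    not values fixed by the embodiments.
    """
    rng = rng or random.Random()
    out = list(signal)
    starts = []
    for _ in range(n_portions):
        # Choose the start of the portion uniformly at random.
        start = rng.randrange(0, len(signal) - portion_len + 1)
        starts.append(start)
        for i in range(start, start + portion_len):
            out[i] = None  # removed sample, to be replaced at 104
    return out, starts
```

Because the selection is drawn independently for each signal, applying this to each of the signals S will, with high likelihood, remove different portions from each of them.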
- The method 100 further comprises, at 104, replacing each portion removed from each signal S with a replacement portion.
- In some embodiments, data imputation techniques may be used to create each replacement portion from the signal with the portion removed therefrom. In some embodiments, generative up-sampling may be used to create each replacement portion from the corresponding signal with the portion removed therefrom.
- In other embodiments, other generative models may be used which characterise, explicitly (such as fully visible belief networks or variational auto-encoders) or implicitly (such as generative stochastic networks or generative adversarial networks), the probabilistic distribution of a signal with the portion removed therefrom, to create the replacement portion for that signal.
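The replacement at 104 can be sketched with a deliberately simple data-imputation scheme: linear interpolation across each removed gap from the nearest surviving samples. This is a stand-in for the generative up-samplers and generative models named above, not the claimed technique; the function name and the `None` gap marker are assumptions for illustration.

```python
def impute_removed(signal_with_gaps):
    """Fill removed (None) samples by linear interpolation between the
    nearest surviving samples - a simple stand-in for data imputation."""
    out = list(signal_with_gaps)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            # Find the end of this gap of removed samples.
            j = i
            while j < n and out[j] is None:
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            gap = j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / gap
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out
```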
- Randomly removing a portion from each signal S and replacing each portion with a replacement portion may reduce susceptibility to adversarial attacks, for instance from the perturbation source 40.
- An attacker does not control which portions are removed from each signal and replaced; because the selection is random, an attacker cannot tailor the perturbation signal P to account for the removed portions.
- In principle, an attacker could try to make the perturbation signal P as sparse as possible in the time domain, to minimise the chance that non-zero components of the perturbation signal P fall within the removed portion(s). However, this would in effect decrease the support of the perturbation signal P and would necessarily increase the amplitude of its non-zero components.
- The increased amplitude of the non-zero components of the perturbation signal P may be noticeable to the user, thereby neutralising the effectiveness of the adversarial attack.
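The sparsity trade-off can be made concrete with a back-of-the-envelope model: if the perturbation's total energy is held fixed while its support (the number of non-zero samples) shrinks, the per-sample amplitude must grow as the square root of the reduction. The fixed-energy framing is an illustrative assumption, not a claim from the text.

```python
import math

def per_sample_amplitude(total_energy, support):
    # A perturbation of fixed total energy spread evenly over `support`
    # non-zero samples; amplitude grows as the support shrinks.
    return math.sqrt(total_energy / support)

wide = per_sample_amplitude(1.0, 1000)  # spread over 1000 samples
sparse = per_sample_amplitude(1.0, 10)  # concentrated into 10 samples
# Shrinking the support 100-fold raises the per-sample amplitude 10-fold,
# pushing the perturbation towards audibility.
```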
- In FIG. 2, each signal S1, S2, ..., Sn with a replacement portion is denoted as signal S1', S2', ..., Sn', respectively; these are denoted collectively as signals S' hereafter.
- In some embodiments, the method 100 further comprises, at 106, aligning the signals S'.
- In practical systems, each of the signals S may have been sampled from a different microphone. There may therefore be variations in the start time at which each of the signals S was first sampled, variations in the schedulers of the processors used to sample the microphones, and differences between the processor loads, each of which might lead to misalignment of the signals S and thus of the signals S'.
- Such misalignment might degrade the quality of a combined signal generated from the signals S', and therefore might decrease the overall performance achievable by a machine learning system that processed such a combined signal.
- FIG. 3 illustrates an example misalignment of two signals S1' and S2'. The signal S1' comprises a sequence of samples 200 which includes a replacement portion comprising replacement sample 210. The signal S2' comprises a sequence of samples 220 which includes a replacement portion comprising replacement sample 230.
- In the illustrated example shown in FIG. 3, sampling of the signal S2' started after sampling of the signal S1'. Further, there is some misalignment of the samples 200, 220 forming the signals S1' and S2', respectively.
- Returning to FIG. 2, at 106 the signals S' are aligned in the time domain. In some embodiments, techniques such as Profile Hidden Markov Models or Continuous Profile Generative Models may be used to align the signals S'. Other suitable techniques known to the skilled person may be used in other embodiments.
- In some embodiments, subsections of the signals S' are aligned; that is, the signals S' are piece-wise aligned, where each piece comprises one or more samples in a sequence of samples.
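A minimal time-domain alignment sketch, using cross-correlation to find the integer sample lag that best matches one signal to a reference. This is far simpler than the Profile Hidden Markov Models or Continuous Profile Generative Models mentioned above; `best_lag` and `align` are hypothetical helpers chosen for illustration.

```python
def best_lag(ref, sig, max_lag=32):
    """Estimate the lag (in samples) that best aligns `sig` to `ref` by
    maximising their cross-correlation over integer lags."""
    def corr(lag):
        pairs = [(ref[i], sig[i - lag]) for i in range(len(ref))
                 if 0 <= i - lag < len(sig)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

def align(ref, sig, max_lag=32):
    """Shift `sig` by the estimated lag, zero-padding where no sample exists."""
    lag = best_lag(ref, sig, max_lag)
    return [sig[i - lag] if 0 <= i - lag < len(sig) else 0.0
            for i in range(len(ref))]
```

A cross-correlation search only recovers a single global shift; the piece-wise alignment described above would apply a similar idea per subsection.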
- In some embodiments, the method 100 further comprises, at 108, combining the signals S' into a combined signal Sout. In embodiments where the signals S' are aligned, the aligned signals S' are combined. In one embodiment, the combined signal Sout may be an average of the signals S'; in other embodiments, alternative transformations may be used to combine the signals S'.
- Referring again to the illustrated example in FIG. 3, the combined signal Sout comprises a sequence of samples 240. The samples 240 are formed by combining the signals S1' and S2'. Dashed lines 'A' in FIG. 3 denote the samples 200, 220 in the signals S1' and S2' which have been aligned and combined to form the samples 240 in the combined signal Sout.
- Combining the signals S' into the combined signal Sout may further reduce susceptibility to adversarial attacks.
- At any given moment in time, the perturbation signal P might be present in only some of the signals S', given the random removal of portions in each signal, ambient noise, and the hardware limitations of most microphones. In addition, due to the distances between the physical locations of the microphones used to capture the signals S, there may be time offsets between the perturbation signal P and the user signal U.
- Combining the signals S' into the combined signal Sout can be considered a form of smoothing/interpolation across the different signals S', such that a perturbation signal P present in only some of the signals S' has a reduced amplitude in the combined signal Sout, which may reduce the effectiveness of the perturbation signal P.
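The smoothing effect of combining at 108 can be sketched with a plain sample-wise average: a perturbation present in only one of n aligned signals survives at 1/n of its amplitude in Sout. The averaging choice mirrors the "average of the signals S'" embodiment above; the helper name is an assumption.

```python
def combine_average(signals):
    # Sample-wise average across the aligned signals S' to form Sout.
    n = len(signals)
    return [sum(s[i] for s in signals) / n for i in range(len(signals[0]))]

clean = [1.0, 1.0, 1.0]
perturbed = [1.0, 2.0, 1.0]  # +1.0 perturbation on the middle sample
sout = combine_average([clean, clean, clean, perturbed])
# The perturbation survives at a quarter of its amplitude: sout[1] == 1.25
```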
- In some embodiments, the method 100 further comprises, at 110, sending the combined signal Sout to a machine learning system.
- In some embodiments, the method 100 may be implemented within a machine learning system as a pre-processing method, in which case 110 may be considered as sending the combined signal Sout from a pre-processor to the machine learning system.
- In some embodiments, there may be only a first signal S1, in which case aligning the signals S' at 106 and combining the signals S' at 108 may be omitted.
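Putting the steps together, the sketch below is a compact end-to-end pre-processing pass over the signals S (alignment at 106 omitted for brevity): random removal at 102, a linear-interpolation replacement at 104 standing in for the generative models, and sample-wise averaging at 108. The function name, portion length, and seeding are illustrative assumptions.

```python
import random

def preprocess(signals, portion_len=4, seed=None):
    """Illustrative sketch of method 100: remove one random portion per
    signal (102), impute it by interpolation (104), average into Sout (108)."""
    rng = random.Random(seed)
    processed = []
    for sig in signals:
        s = list(sig)
        # 102: remove one randomly selected portion per signal
        start = rng.randrange(0, len(s) - portion_len + 1)
        end = start + portion_len
        # 104: replace it, here by interpolating across the gap
        left = s[start - 1] if start > 0 else (s[end] if end < len(s) else 0.0)
        right = s[end] if end < len(s) else left
        for k in range(portion_len):
            t = (k + 1) / (portion_len + 1)
            s[start + k] = left + t * (right - left)
        processed.append(s)
    # 108: combine the signals S' into Sout by sample-wise averaging
    n = len(processed)
    return [sum(s[i] for s in processed) / n for i in range(len(signals[0]))]
```

The resulting Sout would then be sent at 110 to the machine learning system.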
- FIG. 4 depicts a high-level block diagram of an apparatus 300 suitable for use in performing functions described herein according to some embodiments.
- The apparatus 300 will be described with reference to signals S1...Sn (not shown), as described above in relation to FIG. 2, and again collectively referred to as signals S.
- The apparatus 300 may comprise at least one microphone 310, each of which may record one of the signals S1...Sn. In some embodiments, at least one of the signals S1...Sn may be received by the apparatus 300 from another device.
- The apparatus 300 comprises means 320 configured to perform removing a portion from each of the signals S. The portion removed from each signal S is selected at random from a plurality of portions included in that signal.
- As for the method 100, the selection of the portion to be removed may be performed independently and at random for each of the signals S, so that, with high likelihood, different portions are removed from each of the signals S.
- The means 320 may comprise a random sub-sampler used to remove the portion from each of the signals S. The means 320 may be configured to remove more than one portion from each signal S, and may be configured to remove a different number of portions from each of the signals S1...Sn. Each portion removed from each signal S comprises at least one sample.
- The apparatus 300 further comprises means 330 configured to perform replacing each portion removed from each signal S with a replacement portion.
- The means 330 may be configured to use data imputation techniques to create each replacement portion from the signal with the portion removed therefrom. In some embodiments, the means 330 comprises a generative up-sampler that creates each replacement portion from the corresponding signal with the portion removed therefrom. In other embodiments, the means 330 comprises another generative system, such as a fully visible belief network, a variational auto-encoder, a generative stochastic network and/or a generative adversarial network.
- As before, each signal S1, S2, ..., Sn with a replacement portion is denoted as signal S1', S2', ..., Sn' (not shown), respectively, and the signals are denoted collectively as signals S' hereafter.
- The apparatus 300 further comprises means 340 configured to perform aligning the signals S'. In some embodiments, the means 340 is configured to align the signals S' in the time domain using Profile Hidden Markov Models or Continuous Profile Generative Models; other suitable techniques known to the skilled person may be used in other embodiments. In some embodiments, the means 340 is configured to align subsections of the signals S', such that the signals S' are piece-wise aligned, where each piece comprises one or more samples in a sequence of samples.
- The apparatus 300 further comprises means 350 configured to perform combining the signals S' into a combined signal Sout (not shown). In embodiments where the signals S' are aligned, the aligned signals S' are combined. In one embodiment, the means 350 is configured to combine the signals S' by averaging them; in other embodiments, the means 350 may be configured to use alternative transformations to combine the signals S'.
- The apparatus 300 further comprises means 360 configured to perform sending the combined signal Sout to a machine learning system. In some embodiments, the means 360 is configured to send the combined signal Sout to a machine learning system remote from the apparatus 300, for instance over a communications network (not shown).
- In other embodiments, the apparatus 300 may include the machine learning system, and the means 320, 330, 340, 350 and 360 may be implemented as a pre-processor within the machine learning system. Such an arrangement may neutralise the effectiveness of an adversarial attack implemented using malicious software in a device 20, 30, for instance malicious software that adds a perturbation signal P to a signal recorded by a microphone in the device 20, 30.
- In some embodiments, the apparatus 300 may comprise a single microphone 310 configured to record a first signal S1. In such embodiments, where there is only the first signal S1, the means 340 and 350 may be omitted.
- Embodiments described herein may allow applications based on audio interfaces with wearable and mobile devices 30, 40 to be used more safely in the presence of adversarial attacks.
- The devices 30, 40 may be any combination of laptop computer, tablet, phone or watch, each with one microphone.
- A user using such devices to access sensitive information will typically be required to perform a two-factor authentication process. Such authentication is currently performed without the ease of an audio interface, given the present level of insecurity of audio authentication.
- With the embodiments described herein, the user would simply speak; microphones on the devices 30, 40 would be activated and record the user speaking. The user's voice may then be used to provide authentication safely, as the processing of the recorded signals described herein may enable user verification by a machine learning system to be performed safely against adversarial attacks.
- Embodiments described herein may also be employed in earbuds or headsets with a microphone, and may provide additional security, for instance when used together with personal assistant software to place orders or interact with various types of sensitive information.
- The means 320, 330, 340, 350 and 360 may comprise at least one processor 410 (e.g., a central processing unit (CPU) and/or other suitable processor(s)) and at least one memory 420 (e.g., random access memory (RAM), read only memory (ROM), or the like).
- The means 320, 330, 340, 350 and 360 may further comprise computer program code 430 and various input/output devices 440, e.g., a user input device (such as a keyboard, a keypad, a mouse, a microphone, or a touch-sensitive display), a user output device 450 (such as a display or a speaker), and storage devices (such as a tape drive, a floppy drive, a hard disk drive, a compact disk drive, or non-volatile memory).
- The computer program code 430 can be loaded into the memory 420 and executed by the processor 410 to implement functions as discussed herein; thus, the computer program code 430 (including associated data structures) can be stored on a computer readable storage medium, e.g., RAM, a magnetic or optical drive or diskette, or the like.
- A further embodiment is a computer program product comprising a computer readable storage medium having computer readable program code embodied therein, the computer readable program code being configured to implement one of the above methods when loaded on a computer, a processor, or a programmable hardware component. In some embodiments, the computer readable storage medium is non-transitory.
- Embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine- or computer-readable and encode machine-executable or computer-executable programs of instructions, where said instructions perform some or all of the steps of the methods described herein. The program storage devices may be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
- The embodiments are also intended to cover computers programmed to perform said steps of the methods described herein, or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs) programmed to perform said steps of the above-described methods.
- Any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Circuit For Audible Band Transducer (AREA)
- Telephone Function (AREA)
Description
- The invention is defined by the appended claims.
- Example embodiments will now be described with reference to the accompanying drawings, in which:
- FIG. 1 illustrates an example environment in which some embodiments described herein may be employed;
- FIG. 2 shows a method according to some embodiments described herein;
- FIG. 3 illustrates an example misalignment of two signals S1' and S2';
- FIG. 4 shows an apparatus according to some embodiments described herein; and
- FIG. 5 depicts a high-level block diagram of an apparatus 400 suitable for use in performing functions described herein according to some embodiments.
-
FIG. 1 illustrates an example environment in which embodiments may be employed.FIG. 1 shows auser 10, afirst device 20 and asecond device 30. Each of thedevices user 10 speaks, the sound of theuser 10 speaking, U, will arrive at the or each microphone in eachdevice user 10. In some cases, the machine learning system may be provided on one or more of thedevices devices devices - Also shown in
FIG. 1 is aperturbation source 40 which emits a perturbation signal P, for instance as a perturbation sound. The perturbation signal P will also be incident on the microphones of thedevices - It will be appreciated by the skilled person that the embodiments are not restricted to two devices, and in some embodiments only one of the
devices - Referring to
FIG. 2 , there is shown a plurality of signals S1, S2, ... Sn, referred to collectively as signals S. In some embodiments each of the signals S may be a recording of the same audio sound, for instance by using a plurality of microphones, one for each signal S, which may be present in one or more of thedevices - Some embodiments relate to a
method 100 as shown inFIG. 2 . Themethod 100 comprises, at 102, removing a portion from each signal S. Each signal S comprises a plurality of portions, wherein the portion removed from each signal S is selected at random from the plurality of portions. In some embodiments, the selection of the portion to be removed may be performed independently and at random for each of the signals S, with the result that with high likelihood different portions may be removed from each of the signals S. In some embodiments, random sub-sampling may be used to remove the portion from each of the signals S. As will be apparent to a skilled person, in some embodiments more than one portion may be removed from each signal S. Further, in some embodiments a different number of portions may be removed from each of the signals S1...Sn. In some embodiments, each portion removed from each signal S comprises at least one sample. - The
method 100 further comprises, at 104, replacing each portion removed from each signal S with a replacement portion. In some embodiments, data imputation techniques may be used to create each replacement portion from the signal with the portion removed therefrom. In some embodiments, generative up-sampling may be used to create each replacement portion from the corresponding signal with the portion removed therefrom. In other embodiments, other generative models may be used which characterise explicitly (such as fully visible belief networks or variational auto-encoders) or implicitly (such as generative stochastic networks or generative adversarial networks) the probabilistic distribution of a signal with the portion removed therefrom to create the replacement portion for that signal. - Randomly removing a portion from each signal S and replacing each portion with a replacement portion may reduce susceptibility to adversarial attacks, for instance from the
perturbation source 40. An attacker does not control which portions are removed from each signal and replaced. Since the portions are removed at random, an attacker cannot tailor the perturbation signal P to account for the removed portions due to the random selection of the portions. In principle, an attacker could try to make the perturbation signal P as sparse as possible in the time domain as a means to minimise the chances that non-zero components of the perturbation signal P are present in the removed portion(s). However, this would in effect decrease the support of the perturbation signal P and would necessarily result in an increase in the amplitude of the non-zero components of perturbation signal P. The increased amplitude of the non-zero components of perturbation signal P may be noticeable by the user, thereby neutralising the effectiveness of the adversarial attack. - In
FIG. 2 , each signal S1, S2, ..., Sn with a replacement portion is denoted as signal S1', S2', ..., Sn', respectively, and denoted collectively as signals S' hereafter. - In some embodiments, the
method 100 further comprises, at 106, aligning the signals S'. In practical systems, each of the signals S may have been sampled from a different microphone. Thus there may be variations in the start time when each of the signals S was first sampled, variations in the schedulers of the processors used to sample the microphones, and differences between the processor loads, each of which might lead to misalignment of the signals S and thus the signals S'. Such misalignment might deteriorate the quality of a combined signal generated from the signals S', and therefore might decrease the overall performance achievable by a machine learning system that processed such a combined signal. -
FIG. 3 illustrates an example misalignment of two signals S1' and S2'. The signal S1' comprises a sequence ofsamples 200 which includes a replacement portion comprisingreplacement sample 210. The signal S2' comprises a sequence ofsamples 220 which includes a replacement portion comprisingreplacement sample 230. - In the illustrated example shown in
FIG. 3 , sampling of the signal S2' started after sampling of the signal S1'. Further, in the illustrated example shown inFIG. 3 , there is some misalignment of thesamples - Returning now to
FIG. 2 , at 106 the signals S' are aligned in the time domain. In some embodiments, techniques such as Profile Hidden Markov Models or Continuous Profile Generative Models may be used to align the signals S'. Other suitable techniques known to the skilled person may be used in other embodiments. In some embodiments, subsections of the signals S' are aligned - that is, the signals S' are piece-wise aligned where each piece comprises one or more samples in a sequence of samples. - In some embodiments, the
method 100 further comprises, at 108, combining the signals S' into a combined signal Sout. In embodiments where the signals S' are aligned, the aligned signals S' are combined. In one embodiment, the combined signal Sout may be an average of the signals S', however in other embodiments alternative transformations may be used to combine the signals S'. - Referring again to the illustrated example in
FIG. 3, an example combined signal Sout is shown. The combined signal Sout comprises a sequence of samples 240. The samples 240 are formed by combining the signals S1' and S2'. Dashed lines 'A' in FIG. 3 denote the samples in the signals S1' and S2' that are combined to form the samples 240 in the combined signal Sout. - Combining the signals S' into the combined signal Sout may further reduce susceptibility to adversarial attacks. At any given moment in time, the perturbation signal P might be present in only some of the signals S', given the random removal of portions in each signal, ambient noise, and the hardware limitations of most microphones. In addition, due to the distances between the physical locations of the microphones used to capture the signals S, there may be time offsets in the position of the perturbation signal P relative to the user signal U. Combining the signals S' into the combined signal Sout can be considered a form of smoothing/interpolation across the different signals S', such that a perturbation signal P present in only some of the signals S' has a reduced amplitude in the combined signal Sout, which may reduce the effectiveness of the perturbation signal P.
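The alignment at 106 and the combining at 108 can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: it uses NumPy arrays, it substitutes a simple integer-lag cross-correlation for the Profile Hidden Markov Model alignment described above, and it combines by averaging. The function names are illustrative.

```python
import numpy as np

def align_to_reference(ref, sig):
    """Shift sig so that it lines up with ref in the time domain.

    The integer lag is estimated by cross-correlation, a simple
    illustrative stand-in for the Profile HMM alignment at 106."""
    corr = np.correlate(sig, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)   # sig[n + lag] ~ ref[n]
    shifted = sig[lag:] if lag > 0 else np.concatenate([np.zeros(-lag), sig])
    out = np.zeros(len(ref))
    n = min(len(ref), len(shifted))
    out[:n] = shifted[:n]                          # trim/pad to ref length
    return out

def combine_signals(ref, others):
    """Align every signal to ref, then average into Sout (108)."""
    aligned = [np.asarray(ref, dtype=float)]
    aligned += [align_to_reference(ref, s) for s in others]
    return np.mean(aligned, axis=0)

# Toy demonstration: a perturbation present in only one of three
# signals is attenuated by the averaging.
clean = np.zeros(16); clean[5] = 1.0              # user signal U (impulse)
s1 = clean.copy(); s1[10] += 0.9                  # perturbation P only in S1'
s2 = np.concatenate([np.zeros(2), clean])[:16]    # S2' sampled 2 samples late
s3 = clean.copy()
sout = combine_signals(s3, [s1, s2])              # combined signal Sout
```

In this toy example the user impulse survives at full amplitude in Sout, while the perturbation is reduced from amplitude 0.9 to 0.3, since it appears in only one of the three combined signals.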
- Referring now to
FIG. 2, in some embodiments, the method 100 further comprises, at 110, sending the combined signal Sout to a machine learning system. As will be described in more detail below, in some embodiments the method 100 may be implemented within a machine learning system as a pre-processing method, in which case 110 may be considered as sending the combined signal Sout from a pre-processor to the machine learning system. - In some embodiments, there may be only a first signal S1, in which case aligning the signals S' at 106 and combining the signals S' at 108 may be omitted.
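The earlier steps of the method 100, removing a randomly selected portion at 102 and replacing it with a replacement portion at 104, can be sketched in the same spirit. Again this is a minimal illustration under assumptions: NumPy arrays, a single removed portion per signal, and linear interpolation as a simple data-imputation stand-in for the generative up-sampler described below. The function names are illustrative.

```python
import numpy as np

def remove_random_portion(signal, portion_len, rng):
    """Remove one portion, selected at random, from a 1-D signal (102).

    Returns the shortened signal and the start index of the removal."""
    start = int(rng.integers(0, len(signal) - portion_len + 1))
    kept = np.concatenate([signal[:start], signal[start + portion_len:]])
    return kept, start

def impute_portion(kept, start, portion_len):
    """Create a replacement portion (104) by linear interpolation
    between the samples neighbouring the gap (a simple imputation
    stand-in for a generative up-sampler)."""
    left = kept[start - 1] if start > 0 else kept[0]
    right = kept[start] if start < len(kept) else kept[-1]
    fill = np.linspace(left, right, portion_len + 2)[1:-1]
    return np.concatenate([kept[:start], fill, kept[start:]])

rng = np.random.default_rng(seed=0)
sig = np.arange(10.0)                        # toy signal S1
kept, start = remove_random_portion(sig, 3, rng)
s1_prime = impute_portion(kept, start, 3)    # signal S1' with replacement
```

The signal S1' has the same length as the original signal; because the removed samples are replaced with imputed values rather than the originals, an adversarial perturbation embedded in the removed portion does not survive intact.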
-
FIG. 4 depicts a high-level block diagram of an apparatus 300 suitable for use in performing functions described herein according to some embodiments. The apparatus will be described with reference to signals S1...Sn (not shown) as described above in relation to FIG. 2, and again collectively referred to as signals S. The apparatus 300 may comprise at least one microphone 310, each of which may record one of the signals S1...Sn. In some embodiments, at least one of the signals S1...Sn may be received by the apparatus 300 from another device. - The
apparatus 300 comprises means 320 configured to perform removing a portion from each of the signals S. The portion removed from each signal S is selected at random from a plurality of portions included in that signal. In some embodiments, the selection of the portion to be removed may be performed independently and at random for each of the signals S, with the result that, with high likelihood, different portions are removed from each of the signals S. In some embodiments, means 320 may comprise a random sub-sampler used to remove the portion from each of the signals S. As will be apparent to a skilled person, in some embodiments means 320 may be configured to remove more than one portion from each signal S. Further, in some embodiments means 320 may be configured to remove a different number of portions from each of the signals S1...Sn. In some embodiments, each portion removed from each signal S comprises at least one sample. - The
apparatus 300 further comprises means 330 configured to perform replacing each portion removed from each signal S with a replacement portion. In some embodiments, means 330 may be configured to use data imputation techniques to create each replacement portion from the signal with the portion removed therefrom. In some embodiments, means 330 comprises a generative up-sampler that creates each replacement portion from the corresponding signal with the portion removed therefrom. In other embodiments, means 330 comprises another generative system, such as a fully visible belief network, a variational auto-encoder, a generative stochastic network and/or a generative adversarial network. In like manner to the description above in relation to FIG. 2, each signal S1, S2, ..., Sn with a replacement portion is denoted as signal S1', S2', ..., Sn' (not shown), respectively, and denoted collectively as signals S' hereafter. - In some embodiments, the
apparatus 300 further comprises means 340 configured to perform aligning the signals S'. In some embodiments, means 340 is configured to perform aligning the signals S' in the time domain using Profile Hidden Markov Models or Continuous Profile Generative Models. Other suitable techniques known to the skilled person may be used in other embodiments. In some embodiments, means 340 is configured to perform aligning subsections of the signals S', such that the signals S' are piece-wise aligned, where each piece comprises one or more samples in a sequence of samples. - In some embodiments, the
apparatus 300 further comprises means 350 configured to perform combining the signals S' into a combined signal Sout (not shown). In embodiments where the signals S' are aligned, the aligned signals S' are combined. In one embodiment, means 350 is configured to perform combining the signals S' by averaging them; in other embodiments, means 350 may be configured to use alternative transformations to combine the signals S'. - In some embodiments, the
apparatus 300 further comprises means 360 configured to perform sending the combined signal Sout to a machine learning system. In some embodiments, the means 360 is configured to send the combined signal Sout to a machine learning system remote from the apparatus 300, for instance over a communications network (not shown). In other embodiments, the apparatus 300 may include the machine learning system, and the means 360 may send the combined signal Sout to the machine learning system within the same device. - In some embodiments, the
apparatus 300 may comprise a single microphone 310 that is configured to record a first signal S1. In such embodiments, where there is only the first signal S1, the means 340 configured to perform aligning and the means 350 configured to perform combining may be omitted. - Embodiments described herein may allow applications based on audio interfaces with wearable and
mobile devices. - Embodiments described herein may also be employed in earbuds or headsets with a microphone, and may provide additional security, for instance when used together with personal assistant software to place orders or interact with various types of sensitive information.
- Referring now to
FIG. 5, in some embodiments, the means described above may be implemented using a processor 410, a memory 420, computer program code 430 and various input/output devices 440 (e.g., a user input device such as a keyboard, a keypad, a mouse, a microphone or a touch-sensitive display; a user output device 450 such as a display or a speaker; and storage devices such as a tape drive, a floppy drive, a hard disk drive, a compact disk drive or non-volatile memory). The computer program code 430 can be loaded into the memory 420 and executed by the processor 410 to implement functions as discussed herein; thus, the computer program code 430 (including associated data structures) can be stored on a computer readable storage medium, e.g., RAM, a magnetic or optical drive, a diskette, or the like. - It will be appreciated that the functions depicted and described herein may be implemented in software (e.g., via execution of software by one or more processors so as to implement a special purpose computer) and/or may be implemented in hardware (e.g., using a general purpose computer, one or more application specific integrated circuits (ASICs), and/or any other hardware equivalents).
- A further embodiment is a computer program product comprising a computer readable storage medium having computer readable program code embodied therein, the computer readable program code being configured to implement one of the above methods when loaded on a computer, a processor, or a programmable hardware component. In some embodiments, the computer readable storage medium is non-transitory.
- A person of skill in the art would readily recognize that steps of various above-described methods can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions where said instructions perform some or all of the steps of methods described herein. The program storage devices may be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. The embodiments are also intended to cover computers programmed to perform said steps of methods described herein or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs), programmed to perform said steps of the above-described methods.
- It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
- The scope of the invention is defined only by the appended claims.
Claims (14)
- An apparatus (300) comprising means configured to perform:
removing (320) a portion from a first signal, the first signal including a perturbation configured to cause a machine learning system to produce an incorrect classification of the first signal, the portion being selected at random from a plurality of portions included in the first signal;
replacing (330) the portion removed from the first signal with a replacement portion; and
sending (360) the first signal with the replacement portion to a machine learning system as an input thereto.
- The apparatus of claim 1, wherein the means are further configured to perform:
generating the replacement portion from the first signal with the portion removed. - The apparatus of claim 1 or 2 further comprising:
a microphone (310) configured to produce the first signal. - An apparatus (300) comprising means configured to perform:
removing (320) a portion from each of a plurality of signals, each signal including a perturbation configured to cause a machine learning system to produce an incorrect classification of the signal, wherein the plurality of signals includes a first signal and wherein the portion removed from the first signal is selected at random from a plurality of portions included in the first signal;
replacing (330) the portion removed from each signal with a respective replacement portion;
combining (350) the plurality of signals having replacement portions to obtain a combined signal; and
sending (360) the combined signal to a machine learning system as an input thereto.
- The apparatus of claim 4, wherein each signal comprises a plurality of portions, and the means are further configured to perform:
selecting each portion removed from each signal independently and at random from the respective plurality of portions. - The apparatus of claim 4 or 5, wherein the means are further configured to perform:
generating each replacement portion from the respective signal having the respective selected portion removed. - The apparatus of claims 4 to 6, wherein the means are further configured to perform:
aligning (340) the plurality of signals prior to combining the plurality of signals. - The apparatus of claims 4 to 7, wherein the means are further configured to perform:
removing (320) one or more further portions from each of the plurality of signals. - The apparatus of any of claims 4 to 8, further comprising:
a plurality of microphones (310) configured to produce the plurality of signals. - The apparatus of any preceding claim, wherein the means comprises:
at least one processor (410); and
at least one memory (420) including computer program code (430), the at least one memory and computer program code configured to, with the at least one processor, cause the performance of the apparatus.
- A method (100) comprising:
removing (102) a portion from a first signal, the first signal including a perturbation configured to cause a machine learning system to produce an incorrect classification of the first signal, the portion being selected at random from a plurality of portions included in the first signal;
replacing (104) the portion removed from the first signal with a replacement portion; and
sending (110) the first signal with the replacement portion to a machine learning system as an input thereto.
- A method comprising:
removing (102) a portion from each of a plurality of signals, each signal including a perturbation configured to cause a machine learning system to produce an incorrect classification of the signal, each signal comprising a plurality of portions, each portion removed from each of the plurality of signals being selected independently and at random from the respective plurality of portions;
replacing (104) the portion removed from each signal with a respective replacement portion;
aligning (106) the plurality of signals;
combining (108) the plurality of signals having replacement portions to obtain a combined signal; and
sending (110) the combined signal to the machine learning system as an input thereto.
- The method of claim 12, further comprising:
removing one or more further portions from each of the plurality of signals. - A computer readable storage medium storing instructions which, when executed by a computer, cause the computer to perform a method, the method comprising:
removing (102) a portion from a first signal, the first signal including a perturbation configured to cause a machine learning system to produce an incorrect classification of the first signal, the portion being selected at random from a plurality of portions included in the first signal;
replacing (104) the portion removed from the first signal with a replacement portion; and
sending (110) the first signal with the replacement portion to a machine learning system as an input thereto.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18188739.9A EP3611854B1 (en) | 2018-08-13 | 2018-08-13 | Method and apparatus for defending against adversarial attacks |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3611854A1 EP3611854A1 (en) | 2020-02-19 |
EP3611854B1 true EP3611854B1 (en) | 2021-09-22 |
Family
ID=63442390
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18188739.9A Active EP3611854B1 (en) | 2018-08-13 | 2018-08-13 | Method and apparatus for defending against adversarial attacks |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP3611854B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12242613B2 (en) | 2020-09-30 | 2025-03-04 | International Business Machines Corporation | Automated evaluation of machine learning models |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7360252B1 (en) * | 1999-04-30 | 2008-04-15 | Macrovision Corporation | Method and apparatus for secure distribution of software |
EP2557521A3 (en) * | 2003-07-07 | 2014-01-01 | Rovi Solutions Corporation | Reprogrammable security for controlling piracy and enabling interactive content |
US10296733B2 (en) * | 2014-07-14 | 2019-05-21 | Friday Harbor Llc | Access code obfuscation using speech input |
CN109923560A (en) * | 2016-11-04 | 2019-06-21 | 谷歌有限责任公司 | Neural network is trained using variation information bottleneck |
-
2018
- 2018-08-13 EP EP18188739.9A patent/EP3611854B1/en active Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200312 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20210409 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602018023838 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: AT Ref legal event code: REF Ref document number: 1433142 Country of ref document: AT Kind code of ref document: T Effective date: 20211015 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211222 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211222 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1433142 Country of ref document: AT Kind code of ref document: T Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20211223 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220122 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20220124 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602018023838 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20220623 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220813 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220831 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220831 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20220831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220813 Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20180813 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20210922 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240702 Year of fee payment: 7 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240701 Year of fee payment: 7 |