WO2013191662A1 - Method for correcting grammatical errors of an input sentence - Google Patents
Method for correcting grammatical errors of an input sentence
- Publication number: WO2013191662A1
- Authority: WIPO (PCT)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/253—Grammatical analysis; Style critique
Definitions
- Various embodiments relate to correcting grammatical errors of an input sentence, which finds educational applications of natural language processing, and in particular to automatic grammatical error correction.
- Grammatical error correction has been recognized as an important application of natural language processing. This technology can particularly benefit learners of English as a foreign language.
- The dominant paradigm that underlies most grammar correction systems to date is supervised multi-class classification.
- a classifier is trained to predict a word from a confusion set of possible choices, given some feature representation of the surrounding sentence context. During test time, the classifier predicts the most likely word from the confusion set for each instance extracted from the test data. If the prediction differs from the observed word used by the writer and the classifier is sufficiently confident in its prediction, the observed word is replaced by the prediction.
- Embodiments of the invention provide a computer system and method for automatic correction of grammatical errors made in texts written by learners of a language. This enables correction of complete sentences, which can contain multiple and interacting errors.
- a computer system that comprises a decoder model that performs a beam search over possible hypotheses (i.e. corrected versions of the sentence) to find the best possible correction for an input sentence.
- the search starts from the original input sentence.
- a set of proposers generates new hypotheses by making an incremental change to the current hypothesis.
- a set of experts scores these hypotheses on criteria of grammatical correctness. These experts include discriminative classifiers for specific error types, such as article and preposition errors.
- the final score for a hypothesis is a linear combination of the expert scores according to the decoder model.
- the weights of the decoder model are trained on a development set of error-annotated sentences.
- a method for correcting grammatical errors of an input sentence comprising: receiving the input sentence as a current hypothesis; generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed; analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses; comparing the scores of the plurality of new hypotheses; and generating an output sentence from the new hypothesis with the highest score.
- a computer system for correcting grammatical errors of an input sentence
- the computer system comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the computer system at least to perform: receiving the input sentence as a current hypothesis; generating a plurality of new hypotheses from the current hypothesis, each of the new hypotheses being a new sentence originating from the input sentence, with a portion of the input sentence being changed; analysing each of the new hypotheses to compute a score for each of the plurality of new hypotheses; comparing the scores of the plurality of new hypotheses; and generating an output sentence from the new hypothesis with the highest score.
- Figure 1 shows a flowchart that illustrates a method, according to a first embodiment, for correcting grammatical errors of an input sentence.
- Figure 2 shows a computer system, according to a second embodiment, for correcting grammatical errors of an input sentence.
- the phrase "input sentence" depends on the stage at which grammatical error correction is occurring.
- the input sentence may mean an original sentence or a starting sentence, which may contain any number of grammatical errors, including none.
- the input sentence may be a sentence generated from the hypothesis of a previous iteration, i.e. the original sentence having undergone one or more successive iterations of grammatical error correction.
- hypothesis may mean a sentence that is tested for its grammatical correctness or whether its syntax and semantics are in accordance with rules or conventions that are established for the language (e.g. English) in which the sentence is written.
- the form of the hypothesis depends on the stage at which the sentence is being tested for correctness. At initiation, the hypothesis is identical to the original sentence. At a later stage, the hypothesis becomes a sentence that is generated from a previous sentence which may already have undergone one or more successive such sentence generations. At such later stages, the hypothesis thus originates from the input sentence, i.e. it is based on the input sentence, and may be referred to throughout the present specification as a "new hypothesis/hypotheses".
- portion of the sentence may mean a length of a sentence that encompasses only one word or spans two or more words (i.e. a phrase), including any punctuation that exists in the phrase.
- the term "score" may mean a value, between 0.0 and 1.0, that measures a probability of the grammatical correctness of a hypothesis.
- the phrase "incremental change" may mean a slight change to the input sentence, so that the sentence for each new hypothesis differs by a correction of one or more words or a phrase of the input sentence.
- classifier may mean an algorithm programmed to determine whether it is configured to process a portion of each of the plurality of new hypotheses that is passed to the algorithm.
- exemplary classifiers include an article classifier, a preposition classifier, a noun form classifier and a verb form classifier.
- the list may also include classifiers that are programmed to process further grammatical aspects of the language being analysed.
- the phrase "confidence level" may mean a probability score output by each classifier that measures the grammatical correctness of the hypothesis being analysed.
- syntax may mean the order in which the words in a sentence are arranged, whereby the order may or may not be in accordance with the rules or the conventions that are established for the language in which the sentence is written.
- semantics may mean the meaning of the words in a sentence as well as the meaning that is composed of several combined words or all the words in a sentence.
- the term "iteration" may mean that when generating a plurality of new hypotheses of a current hypothesis, the results from processing one or more previous input sentences to correct their respective grammatical errors are considered.
- the plurality of new hypotheses that are generated may be based on earlier hypotheses, which are generated from one or more previous input sentences, having the highest score range.
- Figure 1 shows a flowchart 100 that illustrates a method, according to a first embodiment, for correcting grammatical errors of an input sentence.
- step 102 the input sentence is received as a current hypothesis.
- step 104 a plurality of new hypotheses is generated from the current hypothesis.
- Each of the new hypotheses is a new sentence originating from the input sentence, with a portion of the input sentence being changed.
- each of the new hypotheses is analysed to compute a score for each of the plurality of new hypotheses.
- the scores of the plurality of new hypotheses are compared.
- an output sentence is generated from the new hypothesis with the highest score.
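The steps of the flowchart can be sketched as a single pass, assuming hypothetical helper functions `generate_hypotheses` and `score_hypothesis` that stand in for the proposer and expert modules described later (the names and the toy stand-ins are illustrative, not taken from the patent):

```python
# Sketch of one pass of Figure 1, with invented helper names.
def correct_sentence(input_sentence, generate_hypotheses, score_hypothesis):
    current = input_sentence                                  # receive input
    candidates = generate_hypotheses(current)                 # generate hypotheses
    scored = [(score_hypothesis(h), h) for h in candidates]   # analyse/score
    best_score, best = max(scored)                            # compare scores
    return best                                               # output best hypothesis

# Toy usage with stand-in generator and scorer:
gen = lambda s: [s, s.replace("He leave", "He leaves")]
score = lambda h: 1.0 if "leaves" in h else 0.5
print(correct_sentence("He leave early.", gen, score))  # "He leaves early."
```

In a full system this pass would be repeated iteratively, feeding the best hypothesis back in as the next input, as the later beam-search discussion describes.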
- Embodiments of the invention comprise a beam-search decoder for grammatical error correction that combines the advantages of a classification approach with the ability to correct entire sentences with multiple and interacting errors.
- the beam of the search conducted by the decoder is the size of the search that the decoder performs, i.e., with reference to Figure 1, the number of new hypotheses that are generated to determine a sentence that is grammatically correct and has proper syntax and/or semantics.
- Starting from an original input sentence or a sentence generated from a previous hypothesis, the decoder performs a search over possible new hypotheses to find the best possible correction for the input sentence.
- the task of the decoder is to find the best hypothesis (i.e., corrected sentence) for a given input sentence.
- the decoder needs to be able to generate new hypotheses from current ones, and also to discriminate good hypotheses from bad ones.
- Proposers take a hypothesis and output a set of new hypotheses, where each new hypothesis is the result of making an incremental change to the current hypothesis. Accordingly, proposers generate a plurality of new hypotheses from a current hypothesis. Experts subsequently score these hypotheses on particular aspects of grammaticality. Accordingly, experts analyse each of the new hypotheses to compute a score for each of the plurality of new hypotheses. This can be a general language model score, or scores on specific aspects such as article and preposition choice. The expert scores serve as features for the decoder.
- the final score for a hypothesis may be a linear combination of the expert scores according to the decoder model.
- the weights of the decoder model may be trained on a development set of error-annotated sentences.
- the modular design of the framework makes it easy to extend the decoder to new error categories by adding specific proposer and expert models.
- the proposer modules (also referred to as "proposers") generate new hypotheses from the current hypothesis. Because the space of all possible hypotheses is exponential, each proposer only makes a small incremental change to the current hypothesis in each step. Each change may correspond to a single correction of a word or phrase in the current hypothesis.
- Preposition proposer for each prepositional phrase (PP), propose a set of new hypotheses by changing the observed preposition. For each preposition, define a confusion set of possible correction choices.
- Noun category proposer for each noun in the hypothesis, propose a new hypothesis by changing the noun form from singular to plural or vice versa.
- Verb inflection proposer for each verb in the hypothesis, propose a set of new hypotheses by changing the verb's inflection, for example changing a verb in the base form to the third person singular form or vice versa.
- Punctuation proposer propose new hypotheses by inserting missing commas, periods, and hyphens based on a set of rules.
- Spelling proposer propose new hypotheses by replacing each word that is flagged by a spell checker, with its corrected form as proposed by the spell checker.
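As a rough illustration of the proposer idea, the sketch below implements a toy noun-number proposer. The naive add-or-strip-"s" toggle and the explicitly supplied noun set are stand-ins for the morphological analysis and part-of-speech tagging a real system would use:

```python
# Illustrative noun-number proposer: for each word known to be a noun,
# propose one new hypothesis toggling singular/plural. The trivial
# pluralization rule below is a placeholder, not the patent's tooling.
def noun_number_proposer(hypothesis, nouns):
    """Yield new hypotheses, each differing by one noun-form change."""
    words = hypothesis.split()
    for i, w in enumerate(words):
        if w not in nouns:
            continue
        toggled = w[:-1] if w.endswith("s") else w + "s"  # naive toggle
        yield " ".join(words[:i] + [toggled] + words[i + 1:])

hyps = list(noun_number_proposer("I have two cat .", {"cat"}))
print(hyps)  # ["I have two cats ."]
```

Each proposer thus emits hypotheses that differ from the input by exactly one incremental change, keeping the search space tractable.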
- the expert modules (also referred to as "experts") score each hypothesis on particular aspects of grammaticality. This helps the decoder to discriminate grammatically fluent hypotheses from non-fluent ones.
- Two types of expert models are employed. The first is a standard N-gram language model. An N-gram language model computes the probability of a word conditioned on the N-1 previous words. The probability of a sentence is the product of the probabilities of its words. The probability of the hypothesis under the language model may be used as a feature (see Equation (1) below). To avoid a bias towards shorter hypotheses, the probability is normalized by the length of the hypothesis. The intuition is that grammatical sentences should on average have higher probability than ungrammatical ones. The language model expert is not specialized for a particular type of error.

  score_lm(h) = (1/|h|) log P(h)    (1)

  where |h| is the length of the hypothesis h in words and P(h) is its probability under the language model.
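A minimal sketch of the length-normalized language-model score described by Equation (1), using a hard-coded bigram table; the probabilities are invented for illustration, whereas a real system would query a trained N-gram model:

```python
import math

# Length-normalized LM score: score_lm(h) = (1/|h|) * log P(h).
# The tiny bigram table is invented purely for illustration.
BIGRAM_P = {("he", "leaves"): 0.2, ("leaves", "early"): 0.1,
            ("he", "leave"): 0.01, ("leave", "early"): 0.1}

def lm_score(hypothesis):
    words = hypothesis.lower().split()
    logp = sum(math.log(BIGRAM_P.get(bg, 1e-6))   # back-off for unseen bigrams
               for bg in zip(words, words[1:]))
    return logp / len(words)                       # normalize by length |h|

print(lm_score("He leaves early") > lm_score("He leave early"))  # True
```

The length normalization ensures that a longer, grammatical hypothesis is not penalized merely for containing more words than a shorter, ungrammatical one.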
- the second type of experts is based on supervised classifiers.
- Each of these one or more classifiers analyses a grammatical aspect of each corresponding new sentence (from a respective new hypothesis) to provide a score for each of the plurality of new hypotheses, based on a confidence level provided by these one or more classifiers.
- the confidence level provided by each of the one or more classifiers is derived from a weight assigned to each approximately matching portion of the respective new sentence found in each of the one or more classifiers to which a grammatical aspect of the respective new sentence is mapped.
- the weight assigned to each approximately matching portion of the respective new sentence found in each of the one or more classifiers depends on the syntax and/or semantics of the new sentence for each of the plurality of new hypotheses.
- a classifier is first trained on a set of examples of inputs and their correct classes.
- each of the weights in each of the one or more classifiers is obtained from training each of the one or more classifiers on a set of training sentences and one or more correct syntaxes and/or semantics for each of the training sentences.
- the set of training sentences and the one or more correct syntaxes and/or semantics for each of the training sentences is used to train the one or more classifiers to map the grammatical aspect of the respective new sentence to the appropriate one or more classifiers.
- the classifier can be used to predict the classes of new, unseen examples.
- Supervised classifiers can be used for particular grammatical errors by letting the classifier predict the correct word for a particular sentence context.
- the sentence context is encoded in a set of features which forms the input X; the possible correction choices form the classes Y.
- a classifier can be trained to predict the correct preposition, given a feature representation of the surrounding context, e.g., the words to the left and right of the preposition.
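A toy sketch of such a context-based preposition classifier: the words to the left and right of the preposition are encoded as features X, and the confusion set forms the classes Y. A simple perceptron stands in for whatever supervised learner an actual implementation would use; the training examples, feature encoding, and token spellings are all invented:

```python
from collections import defaultdict

# Toy linear preposition classifier: context features X, confusion set Y.
CLASSES = ["at", "in", "on"]
weights = defaultdict(float)  # u: one weight per (feature, class) pair

def features(left, right):
    return [f"L={left}", f"R={right}"]  # left/right context words

def score(feats, y):
    return sum(weights[(f, y)] for f in feats)

def train(examples, epochs=5):
    for _ in range(epochs):
        for left, right, gold in examples:
            feats = features(left, right)
            pred = max(CLASSES, key=lambda y: score(feats, y))
            if pred != gold:  # perceptron update on mistakes only
                for f in feats:
                    weights[(f, gold)] += 1.0
                    weights[(f, pred)] -= 1.0

train([("leaves", "the_morning", "in"), ("meet", "noon", "at")])
print(max(CLASSES, key=lambda y: score(features("leaves", "the_morning"), y)))
```

After training, the per-class scores serve directly as the numerical confidence values the expert reports to the decoder.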
- the following list shows examples of expert classifiers for typical grammatical errors made by language learners. Natural language processing tools within these supervised classifiers automatically analyse the syntax of the hypothesis to determine which classifier is to be used for each portion of the hypothesis. Additional experts can be added to the framework to accommodate more error types.
- Article classifier expert the classifier predicts the correct article (a/an, the, null article) for a noun phrase (NP).
- Preposition classifier expert the classifier predicts the correct preposition for a prepositional phrase (PP).
- Noun number classifier expert the classifier predicts whether a noun should be in the singular or plural form.
- Verb form classifier expert the classifier predicts the correct morphological form (e.g., base form, third person singular, gerund) for a verb.
- a classifier can output a numerical confidence score (a real number) for each class.
- the class with the highest score does not have to be the class that corresponds to the word choice observed in the hypothesis. For example, assume a hypothesis with the text "He leaves at the morning." The preposition classifier expert would predict a numerical score for each preposition that can take the position of "at". Assume that these scores are: at (0.1), for (-0.3), in (0.9), of (0.2), on (0.1), with (0.1).
- Two types of features may be defined based on the classifier output. The first feature, called average score, is the average score assigned to the word choice observed in the hypothesis (see Equation (2) below).
  score_avg(h) = (1/n) Σ_{i=1..n} u · f(x_i, y_i)    (2)

- where u is the classifier weight vector, x_i and y_i are the feature vector and the hypothesis class, respectively, for the i-th instance extracted from the hypothesis h, and f is a feature map that computes the expert classifier features.
- the average score reflects how much the expert model on average "likes" the hypothesis. In the present example, the average score would just be 0.1, the score assigned to the preposition "at". The intuition is that higher scores on average reflect more grammatical word choices.
- the second feature, called delta score, is the difference between the score of the highest scoring class and the score assigned to the word choice observed in the hypothesis, for any instance extracted from the hypothesis.
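Using the worked example above (the preposition scores for the slot occupied by "at"), both classifier-derived features can be computed directly. With a single instance, the average score is just the observed choice's score, and the delta score is the gap to the top-scoring class:

```python
# Feature computation for the single-instance example "He leaves at the morning."
scores = {"at": 0.1, "for": -0.3, "in": 0.9, "of": 0.2, "on": 0.1, "with": 0.1}
observed = "at"

avg_score = scores[observed]                           # average over one instance
delta_score = max(scores.values()) - scores[observed]  # gap to top class "in"

print(avg_score, round(delta_score, 2))  # 0.1 0.8
```

A large delta indicates the classifier strongly prefers a different word than the one the writer used, which is exactly the signal the decoder needs to consider a correction.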
- a decoder performs a search for the best possible correction.
- the decoder needs to decide which hypotheses to keep pursuing, which hypotheses to discard, and which hypothesis is finally the best correction.
- the decoder combines the features associated with each hypothesis into an overall hypothesis score. This may be done through a linear model of the form described in Equation (4) below:

  score(h) = w · f_E(h)    (4)

- where w is the decoder model weight vector and f_E is a feature map that computes the expert features for all experts in the set E for the hypothesis h.
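The linear combination of Equation (4) is simply a dot product between the decoder weight vector and the vector of expert features; the particular weights and feature values below are invented for illustration:

```python
# score(h) = w . f_E(h): dot product of decoder weights and expert features.
def decoder_score(w, f):
    return sum(wi * fi for wi, fi in zip(w, f))

w = [0.5, 1.0, 0.8]        # illustrative weights, e.g. LM, avg-score, delta-score
f_h = [-1.3, 0.1, 0.8]     # illustrative expert features for one hypothesis h
print(decoder_score(w, f_h))  # approximately 0.09
```

Tuning w on error-annotated development data, as described next, lets the decoder learn how much to trust each expert relative to the others.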
- the weights w are tuned on a development set of error-annotated sentences using an optimization algorithm such as Minimum Error Rate Training or pair- wise ranking optimization.
- the F1 measure between the decoder corrections and the gold error-annotated corrections may be optimized.
- the term "gold error-annotated correction" here refers to a reference correction provided by a human expert. Other optimization objectives are possible.
- the decoder may perform a beam search over possible hypothesis candidates to find the best hypothesis correction h for an input sentence e.
- the decoding process proceeds as follows.
- the decoder starts with the input sentence as the initial hypothesis, i.e., the initial hypothesis is that all words are correct. It then performs a beam search over the space of possible correction hypotheses. The search proceeds in iterations until the beam is empty or the maximum number of iterations has been reached.
- the decoder takes each hypothesis in the beam and generates new hypothesis candidates using all the available proposers.
- the hypotheses are evaluated by the expert models and scored using the decoder model. Hypotheses that have been explored before are not considered again to avoid cycles in the search.
- the search space may be pruned by only accepting the most promising hypotheses to the pool of hypotheses for future consideration.
- the plurality of new hypotheses when performing grammatical error correction of an input sentence, may be based on hypotheses with the highest score range from a previous iteration. If a hypothesis has a higher score compared to the best hypothesis found so far in previous iterations, it is added to the pool. Otherwise, a simulated annealing strategy may be used where a hypothesis with a lower score can still be accepted with a certain probability which depends on the "temperature" of the system. The probability for a hypothesis with a lower score to be accepted decreases with the search as the temperature decreases. From all hypotheses in the pool, the top k hypotheses may be selected and added to the beam for the next search iteration.
- decode(e, w, P, E, k, M): the decoding procedure takes the input sentence e, the decoder model weights w, the set of proposers P, the set of experts E, the beam width k, and the maximum number of iterations M.
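A hedged sketch of the decode procedure as described in the preceding paragraphs: proposers expand each hypothesis in the beam, experts and the weight vector score the candidates, previously explored hypotheses are skipped, and the top k survivors seed the next of at most M iterations. The simulated-annealing acceptance step is replaced here by a purely greedy filter for determinism, and the toy proposer and expert are invented:

```python
def decode(e, w, proposers, experts, k, M):
    """Greedy variant of the beam-search decoder sketched in the text."""
    score = lambda h: sum(wi * ex(h) for wi, ex in zip(w, experts))
    best = e                              # initial hypothesis: input is correct
    beam, explored = [e], {e}
    for _ in range(M):
        pool = []
        for h in beam:
            for propose in proposers:
                for new_h in propose(h):
                    if new_h in explored:
                        continue          # never revisit: avoids search cycles
                    explored.add(new_h)
                    s = score(new_h)
                    if s > score(best):   # greedy acceptance (no annealing)
                        pool.append((s, new_h))
        if not pool:
            break                         # beam empty: stop early
        pool.sort(reverse=True)
        beam = [h for _, h in pool[:k]]   # top-k seed the next iteration
        best = beam[0]
    return best

# Toy usage: one proposer toggles "leave"/"leaves"; one expert rewards it.
prop = lambda h: [h.replace("leave ", "leaves ")]
expert = lambda h: 1.0 if "leaves" in h else 0.0
print(decode("He leave early .", [1.0], [prop], [expert], k=2, M=3))
```

A faithful implementation would additionally accept some lower-scoring hypotheses with a temperature-dependent probability, as the simulated-annealing description above explains.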
- Figure 2 shows a computer system 200, according to a second embodiment, for correcting grammatical errors of an input sentence 203.
- the computer system 200 comprises at least one processor 201 and at least one memory 214 including computer program code.
- the at least one memory 214 and the computer program code are configured to, with the at least one processor 201, cause the computer system 200 at least to perform the following: i) receive the input sentence 203 as a current hypothesis (represented using the arrow labeled 202); ii) generate a plurality of new hypotheses (204A, 204B, ... 204N) from the current hypothesis.
- Each of the new hypotheses is a new sentence originating from the input sentence 203, with a portion of the input sentence 203 being changed; iii) analyse each of the new hypotheses (204A, 204B, ... 204N) to compute a score for each of the plurality of new hypotheses (204A, 204B, ... 204N); iv) compare the scores of the plurality of new hypotheses; and v) generate an output sentence from the new hypothesis with the highest score.
- a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
- Embodiments of the invention may also relate to an apparatus for performing the operations herein.
- This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer.
- a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus.
- any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
- Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261663086P | 2012-06-22 | 2012-06-22 | |
| US61/663,086 | 2012-06-22 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2013191662A1 (fr) | 2013-12-27 |
Family
ID=49769131
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SG2013/000261 Ceased WO2013191662A1 (fr) | 2013-06-24 | Method for correcting grammatical errors of an input sentence |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2013191662A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10360301B2 (en) * | 2016-10-10 | 2019-07-23 | International Business Machines Corporation | Personalized approach to handling hypotheticals in text |
| CN111914540A (zh) * | 2019-05-10 | 2020-11-10 | Alibaba Group Holding Limited | Sentence identification method and apparatus, storage medium, and processor |
| RU2753183C1 (ru) * | 2020-05-21 | 2021-08-12 | Moscow Institute of Physics and Technology (National Research University) (MIPT) | System and method for correcting spelling errors |
| CN114626365A (zh) * | 2022-03-14 | 2022-06-14 | Tencent Technology (Shenzhen) Co., Ltd. | Defect determination method, apparatus, device, and storage medium for a composition error correction model |
| US11593557B2 (en) | 2020-06-22 | 2023-02-28 | Crimson AI LLP | Domain-specific grammar correction system, server and method for academic text |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6012075A (en) * | 1996-11-14 | 2000-01-04 | Microsoft Corporation | Method and system for background grammar checking an electronic document |
| US6085206A (en) * | 1996-06-20 | 2000-07-04 | Microsoft Corporation | Method and system for verifying accuracy of spelling and grammatical composition of a document |
| US7013262B2 (en) * | 2002-02-12 | 2006-03-14 | Sunflare Co., Ltd | System and method for accurate grammar analysis using a learners' model and part-of-speech tagged (POST) parser |
| US7349840B2 (en) * | 1994-09-30 | 2008-03-25 | Budzinski Robert L | Memory system for storing and retrieving experience and knowledge with natural language utilizing state representation data, word sense numbers, function codes, directed graphs and/or context memory |
| US20090192787A1 (en) * | 2007-10-08 | 2009-07-30 | David Blum | Grammer checker |
| US20110313757A1 (en) * | 2010-05-13 | 2011-12-22 | Applied Linguistics Llc | Systems and methods for advanced grammar checking |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13806707; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 13806707; Country of ref document: EP; Kind code of ref document: A1 |