
WO2018151043A1 - Image processing method and computer program - Google Patents


Info

Publication number
WO2018151043A1
Authority
WO
WIPO (PCT)
Prior art keywords
character
image
color
groups
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2018/004598
Other languages
English (en)
Japanese (ja)
Inventor
栄 竹内
克 犬嶋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sofnec Co., Ltd.
Original Assignee
Sofnec Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sofnec Co., Ltd.
Publication of WO2018151043A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control

Definitions

  • The present invention relates to an image processing method capable of creating binary images suitable for extracting meaningful information, such as characters and signs, from an image containing various colors, particularly a photographed image.
  • Patent Document 1 proposes an "image recognition method" capable of accurately recognizing the image of each color in a color document containing various colors.
  • In that method, recognition is performed on each of a plurality of image data obtained by separating the color image data by color, without first binarizing the color image. This makes it possible, for example, to exploit the colors of the original document, in which different colors represent different characters. Also, as long as the character colors differ from the background color of the color document, the characters can all be converted to black, which prevents characters from being lost and the layout from becoming unrecognizable, so that processing can move smoothly to character recognition.
  • An object of the present invention is to create binary images for reliably extracting characters from a natural image containing various colors, such as a moving image broadcast on television.
  • The method comprises a step of identifying the character areas of a color image, excluding the background area; a step of classifying the pixels of the background area into L (L >= 2) groups; a step of classifying the pixels of the character areas into N + L groups, the background area's L groups plus N (N >= 2) groups unique to the character areas; and a step of creating binary images in which the background groups are treated as a single group, the resulting N + 1 groups are divided into two sets, and the pixels of one set are displayed in one color while the pixels of the other set are displayed in the other color.
  • In other words, a plurality of binary images are created after grouping the background area and the character areas. Even when complete character data cannot be extracted from any individual binary image, character data can be extracted with high accuracy by combining the information obtained from the plurality of binary images.
  • Preferably, the input color image is converted, pixel by pixel, into coordinates of the L*a*b* color space, and the character area identification is performed on the converted image.
  • Because the color representation is converted to L*a*b* values, a color expression that reflects the characteristics of human vision better than RGB values, the similarity of colors can be evaluated in a way that feels natural to humans.
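As an illustration of that point, color similarity in L*a*b* can be scored with a plain Euclidean distance (the CIE76 delta-E). The following is a minimal Python sketch; the use of scikit-image for the conversion is an assumption, since the patent names no implementation library.

```python
import numpy as np
from skimage import color  # assumed conversion library, not named in the patent

def delta_e76(rgb1, rgb2):
    """Euclidean distance in L*a*b* (CIE76) between two sRGB colors in [0, 1]."""
    lab1 = color.rgb2lab(np.array([[rgb1]], dtype=float))[0, 0]
    lab2 = color.rgb2lab(np.array([[rgb2]], dtype=float))[0, 0]
    return float(np.linalg.norm(lab1 - lab2))

# Two greens that look close to a human eye score a small distance in L*a*b*:
print(delta_e76((0.0, 0.5, 0.0), (0.1, 0.55, 0.05)))
```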
  • Preferably, the background area is grouped into L colors by the K-means method.
  • Preferably, the character areas are grouped by the K-means method starting from M + L colors, and the process of deleting the group containing the fewest pixels among the M (M > N) character-specific groups is repeated until M reaches the final group count N.
  • The character portions can be grouped appropriately by applying the K-means method separately to the background area and the character areas.
  • When grouping the character areas, if a pixel in a character area has a color close to a representative pixel value obtained when grouping the background area, that pixel is assigned to the background. As a result, even within an area identified as a character area, pixels that actually belong to the background are appropriately classified into the background area, which increases the accuracy of the binarization.
  • The initial M colors used for the character-specific grouping are preferably the eight colors R, G, B, C (cyan), M (magenta), Y (yellow), white, and black. Since many characters appear in pure colors such as black and blue, it is preferable to start the K-means processing for the character areas from these colors.
  • Character data can thus be extracted with high accuracy by combining the binary images.
  • FIG. 1 is a functional block diagram illustrating the configuration of an image processing apparatus according to an embodiment of the present invention. FIG. 2 is a flowchart outlining the processing according to the embodiment. FIG. 3 is an explanatory drawing illustrating the original image and the image after character area identification. FIG. 4 is a flowchart explaining the background area grouping process. FIG. 5 is a flowchart explaining the character area grouping process. FIG. 6 is a diagram for explaining the number of ways of dividing the groups into two in order to produce the binary images. FIG. 7 is a diagram illustrating binary images that are output results of the embodiment.
  • The image processing apparatus 1 is realized by a computer, such as a personal computer or a smartphone, together with a computer program (corresponding to the computer program recited in claims 5 to 8) installed on that computer.
  • The image processing apparatus 1 includes a processing unit 2, a storage unit 3, and a communication interface unit 4.
  • An input operation unit such as a mouse and keyboard for the operator, an output unit such as a display or printer, a camera, and the like are provided as appropriate.
  • The storage unit 3 stores the input image (hereinafter the "processing target image"), learning samples for identifying character areas, various parameters, various intermediate results produced by the processing unit 2, and the like, and is realized by a storage device. The parameters include the parameters of the convolutional neural network (hereinafter "CNN") used to identify character areas, the numbers of groups used when grouping the background area and the character areas, and the initial representative pixel value of each group.
  • The intermediate results include the character areas identified so far and the progress of the K-means processing, such as the group to which each pixel currently belongs.
  • The storage unit 3 also holds the programs that cause the computer to function as the image processing apparatus 1. These programs are read into memory and their code is executed by a CPU (not shown), whereby each part of the processing unit 2 operates. Next, the processing unit 2 will be described.
  • The processing unit 2 includes an image acquisition unit 21, a character area identification processing unit 22, a background area grouping processing unit 23, a character area grouping processing unit 24, and a binary image creation unit 25.
  • An outline of the processing by the image processing apparatus 1 will be described with reference to FIG. 2.
  • The image acquisition unit 21 acquires the processing target image from an external communication network or information processing apparatus via the communication interface unit 4 and converts the color information of each pixel of this image into coordinates in the L*a*b* color space (step S1 in FIG. 2). The character area identification process and the subsequent steps (steps S2 to S5 in FIG. 2) operate on the converted pixels. The conversion is performed because the L*a*b* color space represents colors in coordinates closer to human color perception than the RGB color space, so colors can be separated in a way that follows human color recognition almost exactly.
  • The character area identification processing unit 22 identifies the character areas, excluding the background, in the processing target image using the machine learning function implemented in the character area existence determination unit 22b (step S2 in FIG. 2).
  • The present invention is characterized in that the image is divided into a background area and character areas, and grouping is performed by applying the K-means method to each separately.
  • In other words, the character area identification processing unit 22 determines whether each pixel of the processing target image belongs to the background area or to a character area.
  • The entity behind the character area existence determination unit 22b is a CNN, whose parameters are adjusted in advance by the machine learning unit 22a.
  • The machine learning unit 22a and the character area existence determination unit 22b are described later, in "2. Preprocessing by the image processing apparatus (character area identification)".
  • The character area grouping processing unit 24 groups the pixels of each character area identified by the character area identification processing unit 22 by applying the K-means method.
  • The initial number of groups is M + L: M groups unique to the character areas plus the L groups shared with the background area.
  • M is an integer with M > N.
  • The binary image creation unit 25 creates a plurality of binary images based on the results of the two-stage grouping of the background area and the character areas.
  • The purpose of the present invention is to convert the original image into binary images for character extraction, so for a pixel in the background area the only information needed is that it lies in the background. The L groups of the background area are therefore treated as a single group without distinction, and each pixel of the processing target image is classified into one of the (N + 1) groups formed together with the N character-area groups. The number of ways to color-code (N + 1) groups with two colors (usually white and black) is 2^(N+1); excluding the two all-one-color images leaves 2^(N+1) - 2 usable binary images.
  • The image acquisition unit 21 receives the color image to be processed and, for each pixel, converts its color information from the RGB color space into coordinates in the L*a*b* color space; that is, each pixel is represented by the lightness L* and the chromatic coordinates a* and b*. Subsequently, the character area identification process is performed on the processing target image after conversion into L*a*b* coordinates. This process is described in detail below.
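A minimal sketch of this conversion step (step S1) in Python; scikit-image and the file name are assumptions, as the patent does not specify an implementation.

```python
import numpy as np
from skimage import color, io  # assumed libraries; the patent names no implementation

def load_as_lab(path):
    """Read an RGB image and convert every pixel to L*a*b* coordinates (step S1)."""
    rgb = io.imread(path)[..., :3] / 255.0   # drop any alpha channel, scale to [0, 1]
    return color.rgb2lab(rgb)                # shape (H, W, 3): L*, a*, b* per pixel

lab = load_as_lab("target.png")
```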
  • First, the character areas, in which the character data contained in an image exist, are identified and distinguished from the image area behind the characters (the "background area").
  • For this identification, a CNN, which is a kind of machine learning, is used. Therefore, before describing the processing by the character area identification processing unit 22, the machine learning function of the image processing apparatus 1 will be described.
  • The image processing apparatus 1 includes a machine learning unit 22a, which collects a large number of learning images in advance, extracts learning samples, performs machine learning, verifies the results, and adjusts the parameters. Specifically, learning images are collected, positive samples are extracted from the character areas of these images, and negative samples are extracted from the other areas. A positive sample, which lies entirely inside a character area, is given a likelihood of 1.0 that its center is contained in a character area; a negative sample, which does not lie inside a character area, is given a likelihood of 0.0. This likelihood is the teacher data, and each learning sample is associated with its teacher data. Each time a new learning sample is input, the likelihood is calculated, and if it deviates from the teacher data, the parameters are adjusted.
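The patent does not disclose the CNN's architecture or framework. The following PyTorch sketch shows the training scheme described above under assumed choices: a tiny two-layer convolutional network, 32x32 patches, and a binary cross-entropy loss against the 1.0/0.0 teacher likelihoods.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny CNN mapping a 32x32 L*a*b* patch to the likelihood that its center
    lies inside a character area. The architecture is an assumption."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

def train_step(model, optimizer, patches, teacher):
    """One parameter adjustment: teacher is 1.0 for positive samples (center
    inside a character area) and 0.0 for negative samples."""
    optimizer.zero_grad()
    likelihood = model(patches).squeeze(1)
    loss = nn.functional.binary_cross_entropy(likelihood, teacher)
    loss.backward()
    optimizer.step()
    return loss.item()

model = PatchCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Dummy batch: 8 patches, 3 channels, 32x32; teacher alternates positive/negative.
loss = train_step(model, opt, torch.randn(8, 3, 32, 32), torch.tensor([1., 0.] * 4))
```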
  • Since a learning sample extracted from within a character area necessarily lies inside the character area, its likelihood should be 1.0.
  • Parameter adjustment, that is, training of the CNN, is performed until the desired accuracy is achieved, so as to reduce this difference as much as possible.
  • The parameters adjusted in this way are exported to the character area existence determination unit 22b.
  • Machine learning has been briefly described above; we now return to the description of the character area identification process.
  • The character area identification processing by the character area identification processing unit 22 is performed using the trained CNN as follows.
  • The character area identification processing unit 22 scans the processing target image and extracts small areas (hereinafter "unit areas") of the same size as the learning samples. For example, scanning proceeds from the upper left of the image toward the right edge with a predetermined step; on reaching the right edge, the window moves down by the predetermined step and scans toward the left edge. This is repeated over the entire processing target image.
  • Each extracted unit area is input to the character area existence determination unit 22b, and the character area identification processing unit 22 obtains, as the output, the likelihood that the center of the unit area lies in a character area.
  • After scanning the entire processing target image and acquiring the likelihood for the center of every unit area, the character area identification processing unit 22 judges whether each center is inside a character area according to whether its likelihood is at or above a preset threshold. Based on this judgment, each pixel of the processing target image is assigned to a character area or to the background area. For example, the processing target image shown in FIG. 3A is separated into character areas and a background area as in the binary image of FIG. 3B, in which the background area is blacked out and the character areas are outlined in white. In this example there are three character areas, chA, chB, and chC, which are processed separately in the subsequent character area grouping: characters gathered in one place often share the same color, while characters in separate places often differ in color.
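A sketch of this scan-and-threshold step; the window size, stride, threshold value, and the way likelihoods are written back to pixels are all illustrative assumptions.

```python
import numpy as np

def char_region_mask(lab_image, predict, win=32, stride=8, threshold=0.5):
    """Scan left-to-right, top-to-bottom, ask `predict` for the likelihood that
    each window's center lies in a character area, and mark the pixels around
    centers whose likelihood meets the threshold. win, stride, and threshold
    are illustrative values, not taken from the patent."""
    h, w, _ = lab_image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = lab_image[y:y + win, x:x + win]
            if predict(patch) >= threshold:
                cy, cx = y + win // 2, x + win // 2
                mask[cy - stride // 2:cy + stride // 2,
                     cx - stride // 2:cx + stride // 2] = True
    return mask
```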
  • The machine learning unit 22a described above operates asynchronously from the binarization processing of this embodiment, so it may be realized on a computer other than the image processing apparatus 1.
  • Likewise, the character area existence determination unit 22b, which embodies the result of the machine learning, may be realized on another computer that exchanges data with the character area identification processing unit 22 over a communication line.
  • The background area grouping processing unit 23 classifies each pixel in the background area of the processing target image into a predetermined number L of groups by the K-means method.
  • First, the number of groups L is set, and the K-means processing counter Np is initialized to 1 (step S31).
  • Next, for each pixel of interest, the distance S(l) (l = 1, 2, ..., L) to the provisional representative pixel value of each group is calculated, and the pixel is assigned to the group with the smallest distance (step S33).
  • Here, S(l) is the distance, in the coordinates of the L*a*b* color space, between the pixel of interest and the provisional representative pixel value of group l.
  • In step S36, the average pixel value of the pixels in each group is calculated, the representative pixel value of each group is updated to this average, and 1 is added to the processing counter Np.
  • After step S36, processing returns to step S33; that is, steps S33 to S36 are repeated until the grouping of all pixels stabilizes.
  • Once the K-means processing has been executed at least twice ("Np > 1" in step S34) and has converged (Yes in step S35), the final representative pixel value bgCLR(l) of each of the L groups is fixed (step S37), and these values are referred to in the subsequent character area grouping process.
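The following numpy sketch mirrors steps S31 to S37: assign each background pixel to the nearest provisional representative, update the representatives with group means, and repeat until the grouping is stable. Random initialization of the representatives is an assumption; the patent says only that initial values are held as parameters.

```python
import numpy as np

def group_background(bg_pixels, L, max_iter=100, seed=0):
    """K-means over the background pixels (steps S31-S37). bg_pixels is an
    (n, 3) array of L*a*b* values; returns the L final representative pixel
    values bgCLR."""
    rng = np.random.default_rng(seed)
    reps = bg_pixels[rng.choice(len(bg_pixels), L, replace=False)]  # provisional representatives
    for _ in range(max_iter):
        # Step S33: assign each pixel to the group with the nearest representative.
        dists = np.linalg.norm(bg_pixels[:, None, :] - reps[None, :, :], axis=2)  # S(l)
        labels = dists.argmin(axis=1)
        # Step S36: replace each representative with the mean of its group's pixels.
        new_reps = np.array([bg_pixels[labels == l].mean(axis=0) if np.any(labels == l)
                             else reps[l] for l in range(L)])
        if np.allclose(new_reps, reps):  # step S35: converged, grouping is stable
            break
        reps = new_reps
    return reps  # bgCLR(1..L)

# Usage sketch (char_mask from the identification step; L = 3 is illustrative):
# bgCLR = group_background(lab[~char_mask], L=3)
```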
  • Main processing 2 by the image processing apparatus (grouping of character areas): The character area grouping processing unit 24 classifies each pixel in the character areas of the processing target image into a predetermined number of groups by the K-means method.
  • This K-means processing will be described with reference to FIG. 5.
  • The L groups obtained by grouping the background area as described above are also used in this K-means processing.
  • The number of groups unique to the character area is finally N, but is initially set to eight; that is, the K-means processing starts with a total of (8 + L) groups, eight character-specific groups plus L background groups (step S41 in FIG. 5).
  • At the same time, the K-means processing counter Np is initialized to 1.
  • The eight colors are the three primary colors of light, R, G, and B, plus C (cyan), M (magenta), Y (yellow), white, and black.
  • Let Qij be the pixel value of the pixel of interest (i, j).
  • For each pixel of interest, the distances bgS(l) (l = 1, 2, ..., L) and chS(m) (m = 1, 2, ..., 8) are calculated (step S42), and the pixel is assigned to the group with the smallest distance.
  • Here, bgS(l) is the distance, in the coordinates of the L*a*b* color space, between the pixel of interest and the fixed representative pixel value of background group l.
  • chS(m) is the distance between the pixel of interest and the provisional representative pixel value of character-specific group m.
  • In step S45, the representative pixel value of each character-specific group is recalculated as the average of its pixels; the groups belonging to the background area are not updated.
  • Also in step S45, 1 is added to the processing counter Np.
  • Steps S42 to S45 are repeated with the same number of groups until the grouping stabilizes (No in step S44). When the grouping is stable (Yes in step S44) and the number of character-specific groups still exceeds N (No in step S46), the character-specific group containing the fewest pixels is deleted (step S47).
  • The number of groups then becomes a total of (7 + L): seven character-specific groups plus L background groups. The number of background groups remains L; they are not subject to deletion. After one group is deleted, the processing counter Np is reinitialized to 1 and processing returns to step S42.
  • The pixels that were classified into the group deleted in step S47 are absorbed, in the re-executed step S42, by whichever remaining group has the closest representative pixel value.
  • A group that has absorbed such pixels recalculates its average pixel value, including the absorbed pixels, in the subsequent step S45.
  • As described above, the K-means processing starts with eight character-specific groups in addition to the L background groups; eight is larger than the target N.
  • The initial number of character-specific groups is set to eight because the K-means method is sensitive to initial values, so it is desirable to reduce the count stepwise from a value larger than the target. Furthermore, if the K-means processing were run with a very small number of groups, a character-color group that should be kept might disappear. Considering these points, starting from eight groups is appropriate. However, if eight groups were retained to the end, the number of binary images output by this embodiment would be 510, too many for a practical implementation; the number of groups is therefore reduced step by step.
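Putting steps S41 to S47 together, the following sketch runs K-means over one character area with the L background representatives bgCLR held fixed, starting from the eight pure colors and deleting the smallest character-specific group until N remain. It is a reading of the flowchart, not the patent's actual implementation.

```python
import numpy as np
from skimage import color  # assumed, for converting the eight pure colors to L*a*b*

# Eight initial character-specific representatives: R, G, B, C, M, Y, white, black.
PURE_RGB = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 1],
                     [1, 0, 1], [1, 1, 0], [1, 1, 1], [0, 0, 0]], dtype=float)
PURE_LAB = color.rgb2lab(PURE_RGB.reshape(1, 8, 3))[0]

def group_characters(ch_pixels, bgCLR, N, max_iter=100):
    """Steps S41-S47 for one character area: K-means with the L background
    representatives held fixed, shrinking from 8 character-specific groups to N.
    Returns the final character representatives and the last label assignment
    (labels 0..L-1 are background groups, L.. are character groups)."""
    ch_reps = PURE_LAB.copy()
    L = len(bgCLR)
    while True:
        for _ in range(max_iter):  # steps S42-S45 with a fixed group count
            reps = np.vstack([bgCLR, ch_reps])         # fixed + provisional representatives
            d = np.linalg.norm(ch_pixels[:, None] - reps[None], axis=2)  # bgS(l), chS(m)
            labels = d.argmin(axis=1)
            new = np.array([ch_pixels[labels == L + m].mean(axis=0)
                            if np.any(labels == L + m) else ch_reps[m]
                            for m in range(len(ch_reps))])  # S45: background groups not updated
            if np.allclose(new, ch_reps):              # step S44: grouping stable
                break
            ch_reps = new
        if len(ch_reps) <= N:                          # step S46
            return ch_reps, labels
        counts = np.bincount(labels, minlength=L + len(ch_reps))[L:]
        ch_reps = np.delete(ch_reps, counts.argmin(), axis=0)  # step S47: drop smallest group

# Usage sketch (N = 4 is illustrative):
# ch_reps, labels = group_characters(lab[char_mask], bgCLR, N=4)
```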
  • Main processing 3 by the image processing apparatus (binary image creation for the processing target image):
  • The binary image creation unit 25 receives the processing target image classified into the N + L groups and performs binarization, converting the pixels belonging to each group to white or black.
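A sketch of this creation step: with the background merged into one group there are N + 1 groups, and each of the 2^(N+1) - 2 non-trivial two-colorings yields one binary image. That the two all-one-color images are skipped is an inference from the count of 510 given above for eight character groups.

```python
import itertools
import numpy as np

def binary_images(label_map, n_char_groups):
    """Yield the 2^(N+1) - 2 non-trivial two-colorings of the N character
    groups plus the single merged background group. label_map holds 0 for
    background and 1..N for the character groups."""
    groups = n_char_groups + 1                      # N + 1 groups in total
    for bits in itertools.product((0, 1), repeat=groups):
        if len(set(bits)) < 2:                      # skip all-white / all-black
            continue
        lut = np.array(bits, dtype=np.uint8) * 255  # one of two colors per group
        yield lut[label_map]                        # white/black image

# e.g. N = 8 character groups: 2**9 - 2 == 510 binary images, matching the text above.
```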
  • FIG. 7 shows two of the binary images obtained from the processing target image illustrated in FIG. 3A.
  • From any one of these binary images alone, the character data corresponding to the original image in FIG. 3A cannot be extracted without omission.
  • The omission of character data can, however, be reduced by combining a plurality of binary images.
  • Character data can thus be extracted with high accuracy even from an image in which the character colors carry a gradation or the characters are drawn in a striped pattern.
  • An appropriate value of N may be set in view of the number of colors in the processing target image and the required accuracy.
  • The plurality of output binary images are, for example, sent to an external device that performs character recognition, or displayed on a screen; how the obtained binary images are used is the subject of an invention separate from the present one.
  • In the embodiment above, the original data are converted into the coordinates of the L*a*b* color space, but the original color information may also be used as it is.
  • L*a*b* is merely preferable because it matches the characteristics of human vision.
  • Although eight pure colors are used as initial values when grouping the character areas, the three colors of RGB or the four colors of CMYK may be used instead.
  • Noise removal by smoothing may be performed before the character area identification process: if the binary images obtained at the final stage contain a large amount of noise, the accuracy of subsequent processing based on them (for example, character recognition) decreases, so it is desirable to smooth the image with a filter or the like and output binary images with less noise.
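As one possible realization of this modification, a channel-wise median filter could be applied before character area identification; the filter choice is an assumption, since the text says only "a filter or the like".

```python
import numpy as np
from scipy.ndimage import median_filter  # assumed filter choice

def denoise(lab_image, size=3):
    """Channel-wise median smoothing applied before character area
    identification, so the final binary images carry less noise."""
    return np.stack([median_filter(lab_image[..., c], size=size)
                     for c in range(3)], axis=-1)
```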
  • The purpose of the binarization is the extraction of character data.
  • The present invention may also aim to extract not only characters but pictograms and traffic signs, since these, like characters, appeal to the eye to convey information and call attention.
  • The present invention is suitable for extracting character data overlaid on a television image, but it can also be used for images obtained by scanning color printed matter with a scanner.
  • Reference numerals: 1: Image processing apparatus, 2: Processing unit, 21: Image acquisition unit, 22: Character area identification processing unit, 23: Background area grouping processing unit, 24: Character area grouping processing unit, 25: Binary image creation unit, 3: Storage unit, 4: Communication interface unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Analysis (AREA)
  • Color Image Communication Systems (AREA)
  • Character Input (AREA)

Abstract

The present invention addresses the problem of providing an image processing method and a computer program that make it possible to create, from an image containing various colors, in particular a photographed image, a binary image suitable for extracting text, signs, or other such meaningful information. To this end, the invention provides a method comprising: a step of identifying a text region of a color image from which a background region has been excluded; a step of classifying pixels included in the background region into L groups (with L >= 2); a step of classifying pixels included in the text region into N + L groups, where N + L is the sum of the background region's L groups and N (with N >= 2) groups unique to the text region; and a step of creating a binary image in which, treating the background region groups as one and the same group and dividing the N + 1 groups into two sets, the pixels included in one of the two sets are displayed in a single color while the pixels included in the other set are displayed in another single color.
PCT/JP2018/004598 2017-02-15 2018-02-09 Image processing method and computer program Ceased WO2018151043A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-026282 2017-02-15
JP2017-026282 JP6294524B1 (ja) 2017-02-15 2017-02-15 Image processing method and computer program

Publications (1)

Publication Number Publication Date
WO2018151043A1 (fr) 2018-08-23

Family

ID=61628998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/004598 Ceased WO2018151043A1 (fr) 2017-02-15 2018-02-09 Image processing method and computer program

Country Status (2)

Country Link
JP (1) JP6294524B1 (fr)
WO (1) WO2018151043A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020084720A1 (fr) 2018-10-24 2020-04-30 富士通フロンテック株式会社 Banknote inspection device, banknote inspection method, and banknote inspection program
JP2021047797A (ja) * 2019-09-20 2021-03-25 トッパン・フォームズ株式会社 Machine learning device, machine learning method, and program
JP7431005B2 (ja) * 2019-09-20 2024-02-14 Toppanエッジ株式会社 Learning data generation device, learning data generation method, and program
JP7416614B2 (ja) * 2019-12-24 2024-01-17 Go株式会社 Learning model generation method, computer program, information processing device, and information processing method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001291056A (ja) * 2000-04-05 2001-10-19 Fujitsu Ltd Document image recognition device and recording medium
JP2010067223A (ja) * 2008-09-12 2010-03-25 Canon Inc Image processing apparatus, image processing method, and image processing program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3918143B2 (ja) * 2000-12-28 2007-05-23 独立行政法人科学技術振興機構 Plant recognition system
JP5608511B2 (ja) * 2010-10-25 2014-10-15 日立オムロンターミナルソリューションズ株式会社 Image correction device and image correction method
JP5887242B2 (ja) * 2012-09-28 2016-03-16 日立オムロンターミナルソリューションズ株式会社 Image processing apparatus, image processing method, and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001291056A (ja) * 2000-04-05 2001-10-19 Fujitsu Ltd Document image recognition device and recording medium
JP2010067223A (ja) * 2008-09-12 2010-03-25 Canon Inc Image processing apparatus, image processing method, and image processing program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vol. J83-D-II, No. 5, 2000, pages 1294-1304 *

Also Published As

Publication number Publication date
JP2018132953A (ja) 2018-08-23
JP6294524B1 (ja) 2018-03-14

Similar Documents

Publication Publication Date Title
JP4764231B2 (ja) Image processing apparatus, control method, and computer program
US6865290B2 (en) Method and apparatus for recognizing document image by use of color information
US9524028B2 (en) Visual language for human computer interfaces
CN112614060A (zh) Face image hair rendering method, apparatus, electronic device, and medium
US12277688B2 (en) Multi-task text inpainting of digital images
KR100422709B1 (ko) Image-dependent face region extraction method
EP2645332B1 (fr) Image processing device that divides an image into a plurality of regions
JP2003228712A (ja) Method for identifying text-like pixels from an image
JP6294524B1 (ja) Image processing method and computer program
CN116630984A (zh) OCR character recognition method and system based on seal removal
CN112906819A (zh) Image recognition method, apparatus, device, and storage medium
JP6671613B2 (ja) Character recognition method and computer program
JP2004199622A (ja) Image processing apparatus, image processing method, recording medium, and program
JP3636936B2 (ja) Binarization method for grayscale images and recording medium storing a grayscale image binarization program
JP4370950B2 (ja) Image processing apparatus
CN111062862A (zh) Color-based data augmentation method and system, computer device, and storage medium
US8295602B2 (en) Image processing apparatus and image processing method
CN112927321B (zh) Neural-network-based intelligent image design method, apparatus, device, and storage medium
JP2005210650A (ja) Image processing apparatus
Papamarkou et al. Conversion of color documents to grayscale
JP4228905B2 (ja) Image processing apparatus and program
JPH08123901A (ja) Character extraction device and character recognition device using the device
KR20050014072A (ko) Face region extraction method through color distribution learning
JP2005269269A (ja) Image processing apparatus
KR102343562B1 (ko) Sketch-based activity character generation method for generating a sketch-based character as an activity character in a virtual environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18753780

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18753780

Country of ref document: EP

Kind code of ref document: A1