
WO2022131418A1 - Simplified method for automatically detecting landmarks of three-dimensional dental scan data, and computer readable medium having program recorded thereon for performing same by computer - Google Patents


Info

Publication number
WO2022131418A1
WO2022131418A1 (PCT/KR2020/018786)
Authority
WO
WIPO (PCT)
Prior art keywords
landmark
dimensional
scan data
depth image
direction vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2020/018786
Other languages
French (fr)
Korean (ko)
Inventor
신봉주
김한나
최진혁
김영준
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Imagoworks Inc
Original Assignee
Imagoworks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imagoworks Inc filed Critical Imagoworks Inc
Publication of WO2022131418A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/09 Supervised learning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/51 Apparatus or devices for radiation diagnosis specially adapted for dentistry
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C 9/00 Impression cups, i.e. impression trays; Impression methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present invention relates to a method for automatically detecting landmarks of dental three-dimensional (3D) scan data, and to a computer-readable recording medium on which a program for executing the method on a computer is recorded. More particularly, it relates to a method for automatically detecting landmarks of dental 3D scan data that can reduce the time and effort required to register dental CT images with digital impression models, and to a computer-readable recording medium on which a program for executing the method on a computer is recorded.
  • CT: Computed Tomography
  • CBCT: Cone Beam Computed Tomography
  • CT data is the three-dimensional volume data required when diagnosing oral and maxillofacial conditions or establishing surgical and treatment plans in dentistry and plastic surgery. It includes not only hard tissue corresponding to bone and teeth, but also various information such as soft tissue like the tongue and lips, and the location and shape of the neural tube inside the bone.
  • Due to metallic materials present in the oral cavity from previous treatments, such as implants, orthodontic devices, and dental crowns, metal artifacts occur in CT, an X-ray-based image, and the teeth and surrounding areas are greatly distorted, making identification and diagnosis difficult.
  • To supplement this limited dental and oral information, a three-dimensional digital scan model is acquired and used. The data is obtained either by directly scanning the patient's oral cavity or by scanning a plaster impression model of the patient.
  • When scan data is used together with CT data, a registration process that superimposes the data of the different modalities is performed.
  • In general, registration is performed by the user manually placing corresponding landmarks at the same locations in the scan data and the CT data.
  • In addition, scan data of the same patient acquired at different times, for monitoring treatment progress or for before-and-after comparison, are registered in the same way. Since registration results serve as important basic data for treatment and surgery, increasing the accuracy of registration is very important.
  • The landmarks that serve as the registration reference require high accuracy, because registration is the basis of the planning work that places the implant in the optimal position by locating the neural tube, tissue, and so on.
  • However, manually marking three-dimensional landmarks in two different types of data on a consistent basis, or at fixed locations, is difficult and time-consuming, and results vary between users.
  • An object of the present invention is to provide a method for automatically detecting landmarks of dental 3D scan data, in order to reduce the time and effort required to register dental CT images with 3D scan data.
  • Another object of the present invention is to provide a computer-readable recording medium on which a program for executing the automatic landmark detection method of the dental 3D scan data on a computer is recorded.
  • A method for automatically detecting landmarks of dental 3D scan data includes: generating a 2D depth image by projecting the 3D scan data; detecting 2D landmarks in the 2D depth image using a fully convolutional neural network (FCN) model; and detecting 3D landmarks of the 3D scan data by back-projecting the 2D landmarks onto the 3D scan data.
  • Generating the 2D depth image may include determining a projection direction vector through principal component analysis of the 3D scan data.
  • When n denotes the average of the normal vectors of the 3D scan data, w3 may be determined as the projection direction vector if n · w3 > 0; otherwise, -w3 may be determined as the projection direction vector.
  • The 2D depth image may be formed on a projection plane that has the projection direction vector as its normal vector and is a first distance away from the 3D scan data.
  • Detecting the 3D landmarks may include back-projecting the 2D landmarks onto the 3D scan data in the direction opposite to the projection direction vector, using the projection direction vector.
  • The fully convolutional neural network model may perform a convolution process that detects landmark features in the two-dimensional depth image, and a deconvolution process that adds landmark location information to the detected landmark features.
  • the convolution process and the deconvolution process may be repeatedly performed in the fully convolutional neural network model.
  • The result of the deconvolution process may be heat maps, as many as the number of the two-dimensional landmarks.
  • The pixel coordinates having the largest value in a heat map may indicate the position of the corresponding two-dimensional landmark.
  • Detecting the two-dimensional landmarks may further include training the convolutional neural network model.
  • For training the convolutional neural network model, a training 2D depth image and user-defined landmark information may be provided as input.
  • The user-defined landmark information may include the type of each training landmark and the ground-truth position of the training landmark in the training 2D depth image.
  • A program for executing the method for automatically detecting landmarks of the dental 3D scan data on a computer may be recorded on a computer-readable recording medium.
  • According to the method for automatically detecting landmarks of dental 3D scan data, since the landmarks of the 3D scan data are automatically detected using deep learning, the user's effort and time can be reduced, and the accuracy of the landmarks of the 3D scan data can be increased.
  • Since the landmarks of the 3D scan data are automatically detected using deep learning, the accuracy of the registration of the dental CT image and the 3D scan data is improved, and the user's effort and time for the registration of the dental CT image and the 3D scan data can be reduced.
  • FIG. 1 is a flowchart illustrating a method for automatically detecting a landmark of dental 3D scan data according to the present embodiment.
  • FIG. 2 is a perspective view illustrating an example of a landmark of 3D scan data.
  • FIG. 3 is a conceptual diagram illustrating a method of generating a 2D depth image by projecting 3D scan data.
  • FIG. 4 is a perspective view illustrating an example of a projection direction when generating a two-dimensional depth image.
  • FIG. 5 is a perspective view illustrating an example of a projection direction when generating a two-dimensional depth image.
  • FIG. 6 is a plan view illustrating an example of a two-dimensional depth image.
  • FIG. 7 is a plan view illustrating an example of a two-dimensional depth image.
  • FIG. 8 is a conceptual diagram illustrating an example of training data of a fully convolutional neural network for detecting two-dimensional landmarks.
  • FIG. 9 is a conceptual diagram illustrating a fully convolutional neural network for detecting a two-dimensional landmark.
  • FIG. 10 is a conceptual diagram illustrating a landmark detection unit.
  • FIG. 11 is a plan view illustrating an example of a two-dimensional landmark.
  • FIG. 12 is a conceptual diagram illustrating a method of detecting a 3D landmark by back-projecting a 2D landmark onto 3D scan data.
  • first, second, etc. may be used to describe various elements, but the elements should not be limited by the terms. The above terms may be used for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
  • FIG. 1 is a flowchart illustrating a simplified automatic landmark detection method of dental 3D scan data according to the present embodiment.
  • FIG. 2 is a perspective view illustrating an example of a landmark of 3D scan data.
  • The simplified automatic landmark detection method of the dental 3D scan data may include: generating a 2D depth image by projecting the 3D scan data (S100); detecting 2D landmarks by applying the 2D depth image to a fully convolutional neural network (S200); and detecting 3D landmarks of the 3D scan data by back-projecting the 2D landmarks onto the 3D scan data (S300).
  • Generating the 2D depth image (S100) may be a process of imaging the depth of the 3D scan data as seen from a virtual camera.
  • The automatic 2D landmark detection step (S200) detects landmarks in the 2D image using a fully convolutional neural network deep learning model.
  • In the landmark 3D projection step (S300), the 2D landmarks detected in the preceding automatic 2D landmark detection step (S200) may be converted to 3D and reflected in the scan data.
  • FIG. 2 shows three landmarks (LM1, LM2, LM3) of the three-dimensional scan data.
  • A landmark may be located at predetermined intervals, or at the top of a specific tooth (incisor, canine, molar, etc.), so as to estimate the shape of the dental arch.
  • The landmarks can be automatically detected all at once by applying the same method to every landmark, without additional processing according to the location or characteristics of each landmark.
  • the landmarks of the 3D scan data may be points indicating specific positions of teeth.
  • The landmarks of the 3D scan data may include three points LM1, LM2, and LM3.
  • The 3D scan data may be data representing the patient's upper jaw or data representing the patient's lower jaw (mandible).
  • The first landmark LM1 and the third landmark LM3 of the 3D scan data may each represent the outermost point of a tooth of the 3D scan data in the lateral direction.
  • The second landmark LM2 of the 3D scan data may be a point between the first landmark LM1 and the third landmark LM3 on the dental arch that includes them.
  • The second landmark LM2 of the 3D scan data may indicate the point between the patient's two central incisors.
  • FIG. 3 is a conceptual diagram illustrating a method of generating a 2D depth image by projecting 3D scan data.
  • FIG. 4 is a perspective view illustrating an example of a projection direction when generating a two-dimensional depth image.
  • FIG. 5 is a perspective view illustrating an example of a projection direction when generating a two-dimensional depth image.
  • A depth image is an image that, when the 3D scan data is projected onto a 2D plane, represents the perpendicular distance between each 3D point p(x, y, z) of the scan data and the UV plane defined through principal component analysis (PCA) of the scan data. The pixel value of the 2D image represents the distance d(u, v) from the defined 2D plane to the surface of the scan data.
  • To determine the projection direction, the covariance Σ of the n three-dimensional point coordinates is computed. The covariance indicates how the n point coordinates are distributed along the x, y, and z axes.
  • The result of the eigendecomposition of the covariance Σ can be expressed as the matrix product Σ = WΛW^T.
  • W is the matrix whose column vectors are the eigenvectors w = (p, q, r) of Σ.
  • Λ is the diagonal matrix whose diagonal elements are the eigenvalues λ of Σ.
  • In FIG. 3, w1, having the largest eigenvalue λ, may be the direction connecting both lateral ends of the teeth.
  • w2, having the second largest eigenvalue λ, may point in the patient's anterior or posterior direction.
  • w3, having the smallest eigenvalue λ, may point from the tooth root toward the occlusal surface, or vice versa.
  • The average n of the normal vectors of the triangle set of the 3D scan data is used: if n · w3 > 0, w3 is determined as the projection direction when generating the depth image; otherwise, -w3 is determined as the projection direction.
  • The projection plane has the projection direction vector as its normal vector and is defined at a predetermined distance from the 3D scan data, and the depth image is generated on this plane.
  • In FIG. 4, the three axial directions of the 3D scan data obtained through principal component analysis are w1, w2, and w3, among which the eigenvalue λ of w1 is the largest and the eigenvalue λ of w3 is the smallest.
  • The projection direction is determined using the direction vector w3 with the smallest eigenvalue λ. The average normal vector of the triangle set of the 3D scan data points upward when the teeth extend upward, and downward when the teeth extend downward.
  • In FIG. 4, w3 generally coincides with the direction in which the teeth protrude, so that n · w3 > 0, and the case of using the w3 vector as the projection direction vector is illustrated.
  • In FIG. 5, the three axial directions of the 3D scan data obtained through principal component analysis are likewise w1, w2, and w3, among which the eigenvalue λ of w1 is the largest and the eigenvalue λ of w3 is the smallest.
  • The projection direction is again determined using the direction vector w3 with the smallest eigenvalue λ.
  • In FIG. 5, w3 is substantially opposite to the direction in which the teeth protrude, so that n · w3 < 0, and the case of using the -w3 vector as the projection direction vector is illustrated.
  • With the projection direction chosen in this way, the depth image may be formed so that the teeth do not overlap in the two-dimensional depth image.
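  The projection-direction selection described above can be sketched with NumPy. This is a minimal illustration under stated assumptions (a raw point cloud and per-triangle normals; the function name is hypothetical), not the patent's implementation:

```python
import numpy as np

def projection_direction(points, normals):
    """Pick the depth-image projection direction via PCA.

    points:  (n, 3) vertex coordinates of the scan data
    normals: (m, 3) triangle normal vectors of the scan data
    Returns a unit vector: w3 (the smallest-eigenvalue axis) or -w3,
    flipped so that it agrees with the average surface normal n.
    """
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    w3 = eigvecs[:, 0]                      # axis with smallest eigenvalue
    n_bar = normals.mean(axis=0)            # average normal n
    if np.dot(n_bar, w3) < 0:               # enforce n . w3 > 0
        w3 = -w3
    return w3 / np.linalg.norm(w3)
```

  `np.linalg.eigh` already returns eigenvalues in ascending order, so the first column is w3; the sign flip implements the n · w3 test described above.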
  • FIG. 6 is a plan view illustrating an example of a two-dimensional depth image.
  • FIG. 7 is a plan view illustrating an example of a two-dimensional depth image.
  • The two-dimensional depth image is an image having a depth value d at each two-dimensional coordinate {u, v}; when the two-dimensional depth image is back-projected in the direction opposite to the projection direction, the three-dimensional scan data can be restored.
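  A concrete sketch of the depth-image construction follows, assuming an orthographic projection onto the plane spanned by w1 and w2 (the rasterization resolution and helper name are illustrative assumptions, not from the patent):

```python
import numpy as np

def make_depth_image(points, w1, w2, w3, size=64):
    """Rasterize scan points into a (size, size) depth image.

    Each point p is expressed in the PCA basis: u = p.w1, v = p.w2,
    d = p.w3. Each pixel keeps the largest depth d that falls into it,
    i.e. the surface point closest to the viewer along w3.
    """
    u = points @ np.asarray(w1, dtype=float)
    v = points @ np.asarray(w2, dtype=float)
    d = points @ np.asarray(w3, dtype=float)
    # normalize (u, v) into integer pixel indices
    ui = ((u - u.min()) / (u.max() - u.min() + 1e-9) * (size - 1)).astype(int)
    vi = ((v - v.min()) / (v.max() - v.min() + 1e-9) * (size - 1)).astype(int)
    img = np.full((size, size), -np.inf)
    np.maximum.at(img, (vi, ui), d)   # keep max depth per pixel
    img[np.isinf(img)] = d.min()      # empty pixels get the background depth
    return img
```

  Because only {u, v, d} and the basis vectors are used, the same quantities suffice for the back-projection mentioned above.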
  • FIG. 8 is a conceptual diagram illustrating an example of training data of a fully convolutional neural network for detecting two-dimensional landmarks.
  • FIG. 9 is a conceptual diagram illustrating a fully convolutional neural network for detecting a two-dimensional landmark.
  • FIG. 10 is a conceptual diagram illustrating a landmark detection unit.
  • FIG. 11 is a plan view illustrating an example of a two-dimensional landmark.
  • a landmark deep learning model using a fully convolutional neural network is trained using the depth image and user-defined landmark information as inputs.
  • The user-defined landmark information used during training may include 1) the type of landmark to be found (e.g., distinguished by indices 0, 1, 2) and 2) the ground-truth position coordinates (u_i, v_i) of the landmark in the two-dimensional depth image.
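  Ground-truth targets for such (index, position) training pairs are commonly rendered as Gaussian heatmaps centered on the annotated coordinates. A sketch follows; the image size and σ are illustrative assumptions, not values from the patent:

```python
import numpy as np

def gaussian_heatmaps(landmarks, size=64, sigma=2.0):
    """One heatmap channel per user-defined landmark index.

    landmarks: list of (u_i, v_i) ground-truth pixel coordinates,
    ordered by landmark index (0, 1, 2, ...).
    Returns an array of shape (len(landmarks), size, size) whose
    per-channel maximum sits at the annotated position.
    """
    vv, uu = np.mgrid[0:size, 0:size]   # row (v) and column (u) grids
    maps = []
    for (u, v) in landmarks:
        g = np.exp(-((uu - u) ** 2 + (vv - v) ** 2) / (2 * sigma ** 2))
        maps.append(g)
    return np.stack(maps)
```

  The channel order mirrors the landmark index order, which matches the per-channel heat-map output described later.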
  • the landmark detector for automatic landmark detection may include a fully convolutional neural network.
  • the fully convolutional neural network may be a neural network deep learning model composed of convolutional layers.
  • A fully convolutional neural network largely includes two processes, as shown in FIG. 9.
  • In the convolution process, the features of each landmark are detected and classified in the depth image through a plurality of pre-trained convolution layers. By combining these features with whole-image information through the deconvolution process, location information is added to the features, and the location of each landmark on the image is output as a heat map.
  • Heat map images may be output, one for each user-defined landmark used when training the deep learning model. For example, if there are three user-defined landmarks, three heat map images corresponding to the three landmarks may be output.
  • The convolution process can be described as extracting only the features from the 2D depth image, at the cost of losing location information.
  • The landmark features may be extracted through the convolution process.
  • The deconvolution process can be described as restoring the lost location information for the landmark features extracted in the convolution process.
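  The interplay of the two processes can be illustrated without any learned weights: max-pooling stands in for the resolution-reducing convolution side, and nearest-neighbor upsampling for the resolution-restoring deconvolution side. This is a resolution-bookkeeping sketch only, not the patent's network:

```python
import numpy as np

def downsample(x):
    """2x2 max-pool: the 'convolution' side trades resolution for features."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbor 2x upsampling: the 'deconvolution' side restores resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# an 8x8 'depth image' shrinks to 2x2 features, then regains its resolution
feature_map = downsample(downsample(np.arange(64.0).reshape(8, 8)))
restored = upsample(upsample(feature_map))
```

  In a real FCN the upsampling path also mixes in whole-image context, which is what turns the restored-resolution features into per-landmark heat maps.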
  • For more precise detection, a deep learning model in which the fully convolutional neural network is repeatedly stacked may be used.
  • In this case, the convolution process and the deconvolution process may be performed repeatedly.
  • The number of times the convolution process and the deconvolution process are repeated may be determined in consideration of the accuracy of the landmark detection result.
  • For example, the landmark detector may include four stacked neural networks (four convolution processes and four deconvolution processes).
  • The landmark detection unit may constitute a system that receives the two-dimensional depth image as input and outputs, for each channel, a heat map indicating the location of the desired target landmark, ordered by the user-defined landmark indices of the learning model.
  • The final heat map can be obtained by summing, channel by channel, the heat maps output at each stage of the stacked neural network.
  • The pixel coordinates having the largest value in the resulting heat map indicate the location of the detected landmark. Since the heat maps are output per channel in the order of the user-defined landmark indices used during training, the location information of each desired landmark can be obtained.
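  The read-out described above, summing the per-stage heat maps channel-wise and taking the per-channel argmax, can be sketched as follows (synthetic arrays stand in for the network outputs; the function name is illustrative):

```python
import numpy as np

def landmarks_from_heatmaps(stage_outputs):
    """Recover (u, v) per landmark from stacked-network heat maps.

    stage_outputs: list of arrays of shape (k, h, w), one per stacked
    stage, with one channel per user-defined landmark index.
    Returns a list of (u, v) pixel coordinates, ordered by index.
    """
    total = np.sum(stage_outputs, axis=0)   # channel-wise sum over stages
    coords = []
    for channel in total:                   # index order is preserved
        v, u = np.unravel_index(channel.argmax(), channel.shape)
        coords.append((int(u), int(v)))
    return coords
```

  Summing before the argmax lets agreement between stages reinforce a peak, while a single stage's spurious maximum is diluted.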
  • In FIG. 11, the 2D landmarks in the 2D depth image are denoted L1, L2, and L3.
  • FIG. 12 is a conceptual diagram illustrating a method of detecting a 3D landmark by back-projecting a 2D landmark onto 3D scan data.
  • In the landmark 3D projection step (S300), the two-dimensional coordinates of the landmarks (L1, L2, L3) obtained in the automatic landmark detection step (S200) are converted into the coordinates of the landmarks (LM1, LM2, LM3) of the three-dimensional scan data.
  • The coordinates of the final 3D landmarks may be calculated using the projection information used in the depth image generation step (S100).
  • The two-dimensional landmarks L1, L2, and L3 are back-projected onto the three-dimensional scan data using the projection information used in the depth image generation step (S100), and the three-dimensional landmarks (LM1, LM2, LM3) of the three-dimensional scan data are thereby obtained.
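  The back-projection step can be sketched as follows: the stored projection information (a plane origin, the in-plane basis w1 and w2, the projection direction w3, and the per-pixel depth) suffices to lift a detected pixel back to a 3D point. Helper names and the pixel-to-world scale are illustrative assumptions:

```python
import numpy as np

def backproject_landmark(u, v, depth_image, origin, w1, w2, w3, scale):
    """Lift a detected 2D landmark (u, v) back onto the scan surface.

    origin: 3D point that maps to pixel (0, 0) of the depth image
    w1, w2: in-plane basis vectors; w3: projection direction vector
    scale:  world units per pixel
    The 3D point is origin + u*scale*w1 + v*scale*w2 + d*w3, where
    d = depth_image[v, u] is the stored surface depth at that pixel.
    """
    d = depth_image[v, u]
    return (np.asarray(origin, dtype=float)
            + u * scale * np.asarray(w1, dtype=float)
            + v * scale * np.asarray(w2, dtype=float)
            + d * np.asarray(w3, dtype=float))
```

  In practice the recovered point would then be snapped to the nearest vertex of the scan mesh.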
  • Since the landmarks (LM1, LM2, LM3) of the three-dimensional scan data are automatically detected using deep learning, the user's effort and time for extracting the landmarks of the three-dimensional scan data can be reduced, and the accuracy of the landmarks of the three-dimensional scan data can be increased.
  • Since the landmarks (LM1, LM2, LM3) of the 3D scan data are automatically detected using deep learning, the accuracy of registration between dental CT images and 3D scan data is improved, and the user's effort and time for the registration can be reduced.
  • A computer-readable recording medium on which a program for executing the above-described simplified automatic landmark detection method of dental 3D scan data on a computer is recorded may be provided.
  • the above-described method can be written as a program that can be executed on a computer, and can be implemented in a general-purpose digital computer that operates the program using a computer-readable medium.
  • the structure of data used in the above-described method may be recorded in a computer-readable medium through various means.
  • the computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the art of computer software.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code such as that generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention.
  • the above-described simplified automatic landmark detection method of 3D scan data for dentistry may be implemented in the form of a computer program or application executed by a computer stored in a recording medium.
  • The present invention relates to a simplified automatic landmark detection method of dental three-dimensional scan data and to a computer-readable recording medium on which a program for executing the same on a computer is recorded.
  • The user's effort and time for extracting landmarks of three-dimensional scan data can be reduced, and the effort and time for registering the dental CT image and the digital impression model can be reduced.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Primary Health Care (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Databases & Information Systems (AREA)
  • Robotics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

A method for automatically detecting landmarks of three-dimensional dental scan data comprises the steps of: generating a two-dimensional depth image by projecting three-dimensional scan data; detecting two-dimensional landmarks in the two-dimensional depth image by using a fully convolutional neural network model; and detecting three-dimensional landmarks of the three-dimensional scan data by back-projecting the two-dimensional landmarks onto the three-dimensional scan data.

Description

Simplified method for automatically detecting landmarks of dental three-dimensional scan data, and computer-readable recording medium on which a program for executing the same on a computer is recorded

The present invention relates to a method for automatically detecting landmarks of dental 3D scan data, and to a computer-readable recording medium on which a program for executing the method on a computer is recorded. More particularly, it relates to a method for automatically detecting landmarks of dental 3D scan data that is performed automatically and can reduce the time and effort required to register dental CT images with digital impression models, and to a computer-readable recording medium on which a program for executing the method on a computer is recorded.

CT (Computed Tomography) or CBCT (Cone Beam Computed Tomography) data (hereinafter collectively referred to as CT), the three-dimensional volume data required when diagnosing oral and maxillofacial conditions or establishing surgical and treatment plans in dentistry and plastic surgery, includes not only hard tissue corresponding to bone and teeth, but also various information such as soft tissue like the tongue and lips, and the location and shape of the neural tube inside the bone. However, due to metallic materials present in the oral cavity from previous treatments, such as implants, orthodontic devices, and dental crowns, metal artifacts occur in CT, an X-ray-based image, and the teeth and surrounding areas are greatly distorted, making identification and diagnosis difficult. In addition, it is difficult to specify the shape of the gums or their boundary with the teeth. To supplement this limited dental and oral information, a three-dimensional digital scan model is acquired and used. The data is obtained either by directly scanning the patient's oral cavity or by scanning a plaster impression model of the patient, and is stored in a three-dimensional model file format composed of point and face information, such as STL, OBJ, or PLY (hereinafter, scan data).

When scan data is used together with CT data, a registration process that superimposes the data of the different modalities is performed. In general, registration is performed by the user manually placing corresponding landmarks at the same locations in the scan data and the CT data. In addition, scan data of the same patient acquired at different times, for monitoring treatment progress or for before-and-after comparison, are registered in the same way. Since registration results serve as important basic data for treatment and surgery, increasing the accuracy of registration is very important. In particular, in the case of implants, the landmarks that serve as the registration reference require high accuracy, because registration is the basis of the planning work that places the implant in the optimal position by locating the neural tube, tissue, and so on. However, manually marking three-dimensional landmarks in two different types of data on a consistent basis, or at fixed locations, is difficult and time-consuming, and results vary between users.

If landmarks are obtained by attaching markers directly in the mouth before generating the scan data, the patient may experience discomfort, and because the inside of the mouth is soft tissue, fixing the markers is difficult; this is therefore hardly a suitable approach.

An object of the present invention is to provide a method for automatically detecting landmarks in dental three-dimensional scan data, which can automatically detect the landmarks of the three-dimensional scan data in order to reduce the time and effort required to register dental CT images with three-dimensional scan data.

Another object of the present invention is to provide a computer-readable recording medium on which a program for causing a computer to execute the above method for automatically detecting landmarks in dental three-dimensional scan data is recorded.

A method for automatically detecting landmarks in dental three-dimensional scan data according to an embodiment for realizing the above object of the present invention includes: generating a two-dimensional depth image by projecting the three-dimensional scan data; detecting two-dimensional landmarks in the two-dimensional depth image using a fully convolutional neural network model; and detecting three-dimensional landmarks of the three-dimensional scan data by back-projecting the two-dimensional landmarks onto the three-dimensional scan data.

In an embodiment of the present invention, generating the two-dimensional depth image may include determining the projection direction vector through principal component analysis of the three-dimensional scan data.

In an embodiment of the present invention, determining the projection direction vector may include: arranging the set of n three-dimensional point coordinates pᵢ = {xᵢ, yᵢ, zᵢ} (i = 1, …, n) of the three-dimensional scan data as a matrix P and shifting it to be centered on its mean value μ (P̄ = P − μ); computing the covariance Σ = (1/n)P̄ᵀP̄ of the n three-dimensional points; eigendecomposing the covariance Σ (Σ = WΛWᵀ, where the columns of W are the eigenvectors of Σ and Λ is the diagonal matrix of its eigenvalues λ); and, among w1 = {w1p, w1q, w1r}, w2 = {w2p, w2q, w2r}, and w3 = {w3p, w3q, w3r}, determining the projection direction vector using the direction vector w3 with the smallest eigenvalue λ.

In an embodiment of the present invention, when n̄ is the mean of the normal vectors of the three-dimensional scan data, determining the projection direction vector may determine w3 as the projection direction vector if n̄ · w3 > 0, and −w3 as the projection direction vector if n̄ · w3 < 0.

In an embodiment of the present invention, the two-dimensional depth image may be formed on a projection plane that has the projection direction vector as its normal vector and is defined at a first distance away from the three-dimensional scan data.

In an embodiment of the present invention, detecting the three-dimensional landmarks may back-project the two-dimensional landmarks onto the three-dimensional scan data in the direction opposite to the projection direction vector, using the projection direction vector.

In an embodiment of the present invention, the fully convolutional neural network model may perform a convolution process that detects landmark features in the two-dimensional depth image and a deconvolution process that adds landmark position information to the detected landmark features.

In an embodiment of the present invention, the convolution process and the deconvolution process may be performed repeatedly in the fully convolutional neural network model.

In an embodiment of the present invention, the result of the deconvolution process may take the form of heatmaps corresponding in number to the two-dimensional landmarks.

In an embodiment of the present invention, the pixel coordinates having the largest value in a heatmap may indicate the position of the corresponding two-dimensional landmark.

In an embodiment of the present invention, detecting the two-dimensional landmarks may further include training the convolutional neural network model. Training the convolutional neural network model may take as input training two-dimensional depth images and user-defined landmark information. The user-defined landmark information may use the type of each training landmark and the ground-truth position of the training landmark in the training two-dimensional depth image.

In an embodiment of the present invention, the program for causing a computer to execute the method for automatically detecting landmarks in dental three-dimensional scan data may be recorded on a computer-readable recording medium.

According to the method for automatically detecting landmarks in dental three-dimensional scan data of the present invention, the landmarks of the three-dimensional scan data are detected automatically using deep learning, so the user's time and effort for extracting the landmarks of the three-dimensional scan data can be reduced, and the accuracy of those landmarks can be increased.

In addition, since the landmarks of the three-dimensional scan data are detected automatically using deep learning, the accuracy of registration between the dental CT image and the three-dimensional scan data can be improved, and the user's time and effort for that registration can be reduced.

FIG. 1 is a flowchart illustrating a method for automatically detecting landmarks in dental three-dimensional scan data according to the present embodiment.

FIG. 2 is a perspective view illustrating an example of landmarks of three-dimensional scan data.

FIG. 3 is a conceptual diagram illustrating a method of generating a two-dimensional depth image by projecting three-dimensional scan data.

FIG. 4 is a perspective view illustrating an example of the projection direction when generating a two-dimensional depth image.

FIG. 5 is a perspective view illustrating another example of the projection direction when generating a two-dimensional depth image.

FIG. 6 is a plan view illustrating an example of a two-dimensional depth image.

FIG. 7 is a plan view illustrating another example of a two-dimensional depth image.

FIG. 8 is a conceptual diagram illustrating an example of training data for a fully convolutional neural network that detects two-dimensional landmarks.

FIG. 9 is a conceptual diagram illustrating a fully convolutional neural network that detects two-dimensional landmarks.

FIG. 10 is a conceptual diagram illustrating a landmark detection unit.

FIG. 11 is a plan view illustrating an example of two-dimensional landmarks.

FIG. 12 is a conceptual diagram illustrating a method of detecting three-dimensional landmarks by back-projecting two-dimensional landmarks onto three-dimensional scan data.

As to the embodiments of the present invention disclosed herein, the specific structural and functional descriptions are illustrated merely for the purpose of describing embodiments of the present invention; embodiments of the present invention may be practiced in various forms and should not be construed as limited to the embodiments described herein.

Since the present invention may be variously modified and take many forms, specific embodiments are illustrated in the drawings and described in detail in the text. This is not, however, intended to limit the present invention to any particular disclosed form, and the invention should be understood to include all modifications, equivalents, and substitutes falling within its spirit and technical scope.

Terms such as first and second may be used to describe various components, but those components should not be limited by the terms. The terms may be used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be termed a second component, and similarly a second component may be termed a first component.

When a component is said to be "connected" or "coupled" to another component, it may be directly connected or coupled to the other component, but intervening components may also be present. In contrast, when a component is said to be "directly connected" or "directly coupled" to another component, it should be understood that no intervening components are present. Other expressions describing the relationship between components, such as "between" versus "immediately between" or "adjacent to" versus "directly adjacent to," should be interpreted in the same way.

The terminology used in this application is used only to describe particular embodiments and is not intended to limit the present invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, terms such as "comprise" or "have" are intended to specify the presence of the stated features, numbers, steps, operations, components, parts, or combinations thereof, and should be understood not to preclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by a person of ordinary skill in the art to which the present invention belongs. Terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meaning in the context of the related art and, unless explicitly defined in this application, are not to be interpreted in an idealized or overly formal sense.

Meanwhile, where an embodiment can be implemented differently, the functions or operations specified in a particular block may occur in an order different from that specified in the flowchart. For example, two consecutive blocks may in fact be performed substantially simultaneously, or the blocks may be performed in reverse order depending on the functions or operations involved.

Hereinafter, preferred embodiments of the present invention will be described in more detail with reference to the accompanying drawings. The same reference numerals are used for the same components in the drawings, and duplicate descriptions of the same components are omitted.

FIG. 1 is a flowchart illustrating a simplified method for automatically detecting landmarks in dental three-dimensional scan data according to the present embodiment. FIG. 2 is a perspective view illustrating an example of landmarks of three-dimensional scan data.

Referring to FIGS. 1 and 2, the simplified method for automatically detecting landmarks in the dental three-dimensional scan data may include: generating a two-dimensional depth image by projecting the three-dimensional scan data (S100); detecting two-dimensional landmarks by applying the two-dimensional depth image to a fully convolutional neural network (S200); and detecting three-dimensional landmarks of the three-dimensional scan data by back-projecting the two-dimensional landmarks onto the three-dimensional scan data (S300).

The two-dimensional depth image generation step (S100) may be a process of imaging the depth of the three-dimensional scan data as seen from a virtual camera. The automatic two-dimensional landmark detection step (S200) detects landmarks in the two-dimensional image using a fully convolutional neural network deep learning model. In the three-dimensional landmark projection step (S300), the two-dimensional landmarks detected in the preceding step (S200) may be converted to three dimensions and reflected in the scan data.

FIG. 2 shows three landmarks (LM1, LM2, LM3) of the three-dimensional scan data. In the present embodiment, the landmarks are located at regular intervals, or on top of specific teeth (incisors, canines, molars, etc.), so that the shape of the dental arch can be estimated. The same method is applied to all landmarks without any additional processing depending on a landmark's position or characteristics, so all the landmarks can be detected automatically at once.

The landmarks of the three-dimensional scan data may be points indicating specific positions on the teeth. For example, the landmarks of the three-dimensional scan data may include three points (LM1, LM2, LM3). Here, the three-dimensional scan data may represent either the patient's maxilla or the patient's mandible. For example, the first landmark (LM1) and the third landmark (LM3) may each represent an outermost point of the teeth of the three-dimensional scan data in the lateral direction. The second landmark (LM2) may be a point between the first landmark (LM1) and the third landmark (LM3) on the arch that includes them. For example, the second landmark (LM2) of the three-dimensional scan data may indicate the point between the patient's two central incisors.

FIG. 3 is a conceptual diagram illustrating a method of generating a two-dimensional depth image by projecting three-dimensional scan data. FIGS. 4 and 5 are perspective views illustrating examples of the projection direction when generating a two-dimensional depth image.

Referring to FIGS. 1 to 5, the depth image is an image representing, when the three-dimensional scan data are projected onto a two-dimensional plane, the perpendicular-distance information between each three-dimensional point p(x, y, z) of the scan data and the plane UV defined through principal component analysis of the scan data. Each pixel value of the two-dimensional image represents the distance d(u, v) from the two-dimensional plane defined above to the surface of the scan data.

At this time, principal component analysis (PCA) may be performed to determine the projection direction and plane. First, the set of n three-dimensional point coordinates pᵢ = {xᵢ, yᵢ, zᵢ} (i = 1, …, n) of the scan data is arranged as a matrix P, and the data are shifted so that they are centered on the mean value μ of P (P̄ = P − μ).

Next, the covariance Σ = (1/n)P̄ᵀP̄ of the n three-dimensional point coordinates is computed. The covariance can indicate how the n three-dimensional point coordinates are spread along the x, y, and z axes. The result of eigendecomposing the covariance Σ can be written as Σ = WΛWᵀ. The column vectors of the matrix W are the eigenvectors w = (p, q, r) of Σ, and the diagonal elements of the diagonal matrix Λ are the eigenvalues λ of Σ. Among w = {w1, w2, w3}, the direction vector w3 with the smallest eigenvalue λ may either coincide with the direction from the tooth roots toward the occlusal surface (FIG. 4) or point in the opposite direction (FIG. 5). For example, in FIG. 3, w1, with the largest eigenvalue λ, may be the direction connecting the two lateral ends of the teeth; w2, with the second-largest eigenvalue λ, may point toward the patient's front or back; and w3, with the smallest eigenvalue λ, may point from the tooth roots toward the occlusal surface, or in the opposite direction. The direction vector w3 can be expressed as w3 = {w3p, w3q, w3r}.

To find the w3 that points in the same direction as from the tooth roots toward the occlusal surface, the mean n̄ of the normal vectors of the triangle set of the three-dimensional scan data is used. If n̄ · w3 > 0, w3 is chosen as the projection direction; if n̄ · w3 < 0, −w3 is chosen as the projection direction when generating the depth image. The projection plane is defined with the projection direction vector as its normal vector, at a fixed distance from the three-dimensional scan data, and the depth image is generated.

In FIG. 4, the three axis directions of the three-dimensional scan data obtained through principal component analysis are w1, w2, and w3, among which the eigenvalue λ of w1 is the largest and that of w3 the smallest. Here, the projection direction is determined using the direction vector w3 with the smallest eigenvalue λ. The mean normal vector n̄ of the triangle set of the three-dimensional scan data points upward when the teeth erupt upward, and downward when the teeth erupt downward. FIG. 4 illustrates the case where w3 roughly coincides with the eruption direction of the teeth, so n̄ · w3 > 0 and the vector w3 is used as the projection direction vector.

In FIG. 5, likewise, the three axis directions of the three-dimensional scan data obtained through principal component analysis are w1, w2, and w3, with the eigenvalue λ of w1 the largest and that of w3 the smallest, and the projection direction is determined using w3. FIG. 5 illustrates the case where w3 points roughly opposite to the eruption direction of the teeth, so n̄ · w3 < 0 and the vector −w3 is used as the projection direction vector.

In this way, since the projection direction is determined using the direction vector w3 with the smallest eigenvalue λ from the principal component analysis, the depth image can be formed well, without the teeth overlapping.

FIGS. 6 and 7 are plan views illustrating examples of two-dimensional depth images.

FIGS. 6 and 7 are examples of two-dimensional depth images obtained through the two-dimensional depth image generation step (S100). Bright regions of the image are points at a large distance from the projection plane, and dark regions are points at a small distance from the projection plane. That is, the two-dimensional depth image is an image holding a depth value d for each two-dimensional coordinate {u, v}, and the three-dimensional scan data can be restored by back-projecting the two-dimensional depth image in the direction opposite to the projection direction.
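As a rough illustration of the depth-image generation step (S100), the sketch below splats mesh vertices into a pixel grid and keeps the largest height along the projection axis per pixel. The patent projects the full scan surface; this vertex-only version is an approximation for illustration, and all names are assumptions.

```python
import numpy as np

def depth_image(points, w, size=8):
    """Splat 3D points into a size x size grid of heights along axis w."""
    P = np.asarray(points, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)
    # Build an orthonormal basis (u_axis, v_axis) spanning the projection plane.
    a = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u_axis = np.cross(w, a)
    u_axis = u_axis / np.linalg.norm(u_axis)
    v_axis = np.cross(w, u_axis)
    uv = P @ np.column_stack([u_axis, v_axis])   # in-plane (u, v) coordinates
    d = P @ w                                    # height along the projection axis
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    ij = ((uv - lo) / np.maximum(hi - lo, 1e-9) * (size - 1)).astype(int)
    img = np.zeros((size, size))
    for (i, j), h in zip(ij, d - d.min()):
        img[j, i] = max(img[j, i], h)            # keep the highest surface point per pixel
    return img

# Three base vertices and one raised vertex: the raised one lands mid-grid
# and appears as the brightest (largest-distance) pixel.
img = depth_image([[0, 0, 0], [4, 0, 0], [0, 4, 0], [2, 2, 3.0]], (0, 0, 1.0))
```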

FIG. 8 is a conceptual diagram illustrating an example of training data for a fully convolutional neural network that detects two-dimensional landmarks. FIG. 9 is a conceptual diagram illustrating a fully convolutional neural network that detects two-dimensional landmarks. FIG. 10 is a conceptual diagram illustrating a landmark detection unit. FIG. 11 is a plan view illustrating an example of two-dimensional landmarks.

Referring to FIGS. 1 to 11, a landmark deep learning model using a fully convolutional neural network is trained with the depth images and user-defined landmark information as inputs. As shown in FIG. 10, the user-defined landmark information used during training may be 1) the type of landmark to be found (e.g., distinguished by indices 0, 1, 2) and 2) the ground-truth position coordinates (uᵢ, vᵢ) of that landmark in the two-dimensional depth image.

The landmark detection unit for automatic landmark detection may include a fully convolutional neural network. The fully convolutional neural network may be a neural network deep learning model composed of convolutional layers.

As shown in FIG. 9, the fully convolutional neural network largely comprises two processes. In the convolution process, the features of each landmark are detected and classified in the depth image through a number of pre-trained convolutional layers. By combining these with the full image information through the deconvolution process, position information is added to the features, and the position of each landmark in the image is output as a heatmap. Here, one heatmap image may be output for each of the user-defined landmarks used when training the deep learning model. For example, if the number of user-defined landmarks is three, three heatmap images corresponding to the three landmarks may be output.

That is, the convolution process can be described as a process of extracting only features from the two-dimensional depth image at the cost of losing position information; the features of the landmarks can be extracted through it. The deconvolution process, in contrast, can be described as a process of restoring the lost position information for the landmarks extracted in the convolution process.
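The patent specifies per-landmark heatmap outputs but does not state how the training targets are encoded; a common convention (an assumption here, not taken from the source) is one Gaussian bump per landmark channel, centered on the ground-truth pixel (uᵢ, vᵢ):

```python
import numpy as np

def gaussian_heatmaps(landmarks, size=64, sigma=2.0):
    """landmarks: [(u, v), ...] pixel coordinates -> (k, size, size) target maps."""
    vv, uu = np.mgrid[0:size, 0:size]        # vv = row index, uu = column index
    maps = [np.exp(-((uu - u) ** 2 + (vv - v) ** 2) / (2.0 * sigma ** 2))
            for (u, v) in landmarks]
    return np.stack(maps)

# One channel per user-defined landmark index, peaking at the ground-truth pixel.
hm = gaussian_heatmaps([(10, 20), (40, 50), (32, 8)])
```

Under this encoding, the argmax of each target channel recovers exactly the annotated (u, v), matching the read-out rule described below for the network's outputs.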

In the present embodiment, a deep learning neural network model in which fully convolutional neural networks are repeatedly stacked may be used for more precise detection.

In the fully convolutional neural network model, the convolution process and the deconvolution process may be performed repeatedly. The number of times the convolution process and the deconvolution process are repeated in the fully convolutional neural network model may be determined in consideration of the accuracy of the landmark detection result.

As shown in FIG. 10, for example, the landmark detection unit may include four stacked neural networks (four convolution processes and four deconvolution processes).

The landmark detection unit may constitute a system in which the two-dimensional depth image is input and heatmaps indicating the positions of the desired target landmarks are output, one channel per user-defined landmark index of the trained model. The final result heatmap can be obtained by summing, channel by channel, the output heatmap data of each stage of the stacked neural networks. The pixel coordinates with the largest value in the resulting heatmap data indicate the position of the detected landmark. Since the heatmaps are output per channel in the order of the user-defined landmark indices used during training, the position information of the desired landmark can be obtained.
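The read-out just described can be sketched as below: the per-stage heatmaps are summed channel by channel and the largest pixel per channel gives that landmark's (u, v). This is a minimal illustration; the array layout and names are assumptions.

```python
import numpy as np

def landmarks_from_heatmaps(stage_heatmaps):
    """stage_heatmaps: (stages, channels, H, W) array -> [(u, v), ...] per channel."""
    total = np.asarray(stage_heatmaps, dtype=float).sum(axis=0)  # sum over stages
    positions = []
    for channel in total:
        v, u = np.unravel_index(channel.argmax(), channel.shape)
        positions.append((int(u), int(v)))
    return positions

# Two stages, two landmark channels, each stage voting for the same pixels.
stages = np.zeros((2, 2, 16, 16))
stages[:, 0, 5, 3] = 1.0   # landmark 0 at (u=3, v=5)
stages[:, 1, 9, 12] = 1.0  # landmark 1 at (u=12, v=9)
found = landmarks_from_heatmaps(stages)  # [(3, 5), (12, 9)]
```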

FIG. 11 shows the result of automatically detecting landmarks in the 2D depth image using the fully convolutional neural network model. The 2D landmarks in the 2D depth image are denoted L1, L2, and L3.

FIG. 12 is a conceptual diagram illustrating a method of detecting 3D landmarks by back-projecting the 2D landmarks onto the 3D scan data.

Referring to FIGS. 1 to 12, the two-dimensional coordinates of the landmarks (L1, L2, L3) obtained in the automatic landmark detection step (S200) are converted into the coordinates of the landmarks (LM1, LM2, LM3) of the three-dimensional scan data. The coordinates of the final three-dimensional landmarks may be calculated using the projection information used in the depth image generation step (S100): the two-dimensional landmarks (L1, L2, L3) are back-projected onto the three-dimensional scan data using that projection information to obtain the three-dimensional landmarks (LM1, LM2, LM3) of the scan data.
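Under an orthographic-projection assumption, the back-projection step can be sketched as follows. The frame construction, function names, and the use of a stored per-pixel depth are illustrative assumptions; the actual projection information of step S100 may differ.

```python
import numpy as np

def make_frame(d):
    """Build an orthonormal basis (e1, e2) spanning the projection plane with normal d."""
    d = d / np.linalg.norm(d)
    a = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    return e1, e2

def project(points, origin, d):
    """Orthographic projection used for the depth image: plane coords (u, v) plus depth."""
    e1, e2 = make_frame(d)
    rel = points - origin
    return rel @ e1, rel @ e2, rel @ (d / np.linalg.norm(d))

def back_project(u, v, depth, origin, d):
    """Invert the projection: recover the 3D point from (u, v) and its stored depth."""
    e1, e2 = make_frame(d)
    dn = d / np.linalg.norm(d)
    return origin + u * e1 + v * e2 + depth * dn

# round trip: project two points and recover them from their (u, v, depth) values
pts = np.array([[1.0, 2.0, 3.0], [-0.5, 0.7, 1.2]])
origin = np.zeros(3); d = np.array([0.0, 0.0, 1.0])
u, v, z = project(pts, origin, d)
recovered = np.array([back_project(u[i], v[i], z[i], origin, d) for i in range(2)])
```

In practice the depth stored at a detected 2D landmark pixel selects the surface point of the scan data hit by the reverse-projected ray.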

According to this embodiment, since the landmarks (LM1, LM2, LM3) of the three-dimensional scan data are detected automatically using deep learning, the user's effort and time for extracting them can be reduced, and their accuracy can be increased.

In addition, since the landmarks (LM1, LM2, LM3) of the three-dimensional scan data are detected automatically using deep learning, the accuracy of registration between dental CT images and three-dimensional scan data can be improved, and the user's effort and time for that registration can be reduced.

According to an embodiment of the present invention, a computer-readable recording medium may be provided on which a program for executing the above-described simplified automatic landmark detection method for dental three-dimensional scan data on a computer is recorded. The method described above can be written as a computer-executable program and implemented on a general-purpose digital computer that runs the program from a computer-readable medium. The data structures used in the method may likewise be recorded on a computer-readable medium by various means. The computer-readable medium may include program instructions, data files, and data structures, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the present invention, or may be known and available to those skilled in the computer software arts. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code, such as that produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention.

In addition, the above-described simplified automatic landmark detection method for dental three-dimensional scan data may also be implemented in the form of a computer program or application stored on a recording medium and executed by a computer.

The present invention relates to a simplified method for automatically detecting landmarks in dental three-dimensional scan data and to a computer-readable recording medium on which a program for executing the method on a computer is recorded. It can reduce the user's effort and time for extracting landmarks from three-dimensional scan data, as well as the effort and time required to register dental CT images with digital impression models.

Although the above has been described with reference to preferred embodiments of the present invention, those skilled in the art will understand that the present invention may be variously modified and changed without departing from the spirit and scope of the invention as set forth in the claims below.

Claims (12)

1. A method of automatically detecting landmarks in dental three-dimensional scan data, the method comprising:

generating a two-dimensional depth image by projecting three-dimensional scan data;

detecting two-dimensional landmarks in the two-dimensional depth image using a fully convolutional neural network model; and

detecting three-dimensional landmarks of the three-dimensional scan data by back-projecting the two-dimensional landmarks onto the three-dimensional scan data.

2. The method of claim 1, wherein generating the two-dimensional depth image comprises determining a projection direction vector through principal component analysis of the three-dimensional scan data.

3. The method of claim 2, wherein determining the projection direction vector comprises:

centering the matrix X, which represents the set of the n three-dimensional point coordinates of the scan data, about its mean;

computing the covariance Σ of the n three-dimensional point coordinates;

eigendecomposing the covariance as Σ = WΛW^T, where W = [w1 w2 w3] is the matrix of eigenvectors and Λ is the diagonal matrix of eigenvalues; and

determining the projection direction vector using the direction vector w3 = {w3p, w3q, w3r} having the smallest eigenvalue λ among w1 = {w1p, w1q, w1r}, w2 = {w2p, w2q, w2r}, and w3 = {w3p, w3q, w3r}.

4. The method of claim 3, wherein, when the average of the normal vectors of the three-dimensional scan data is n̄, w3 is determined as the projection direction vector if w3 and n̄ satisfy a first inequality condition, and −w3 is determined as the projection direction vector if they satisfy the opposite condition.

5. The method of claim 2, wherein the two-dimensional depth image is formed on a projection plane that has the projection direction vector as its normal vector and is defined at a first distance from the three-dimensional scan data.

6. The method of claim 2, wherein detecting the three-dimensional landmarks comprises back-projecting the two-dimensional landmarks onto the three-dimensional scan data in the direction opposite to the projection direction vector.

7. The method of claim 1, wherein the fully convolutional neural network model performs: a convolution process that detects landmark features in the two-dimensional depth image; and a deconvolution process that adds landmark position information to the detected landmark features.

8. The method of claim 7, wherein the convolution process and the deconvolution process are performed repeatedly in the fully convolutional neural network model.

9. The method of claim 7, wherein the result of the deconvolution process takes the form of heat maps corresponding in number to the two-dimensional landmarks.

10. The method of claim 9, wherein the pixel coordinates having the largest value in each heat map indicate the position of the corresponding two-dimensional landmark.

11. The method of claim 1, wherein detecting the two-dimensional landmarks further comprises training the convolutional neural network model, the training taking as input a training two-dimensional depth image and user-defined landmark information, the user-defined landmark information comprising the types of the training landmarks and their ground-truth positions in the training two-dimensional depth image.

12. A computer-readable recording medium on which a program for executing the method of any one of claims 1 to 11 on a computer is recorded.
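The projection-direction computation of claims 2 to 4 can be sketched as follows. This is a minimal illustration only: the function name, the use of `numpy.linalg.eigh`, and the sign convention for orienting w3 against the average normal are assumptions, not the patent's specified implementation (the claim states the orientation test as an inequality between w3 and the normal average).

```python
import numpy as np

def projection_direction(points, normals):
    """PCA-based projection direction (sketch of claims 2-4).

    points:  (n, 3) vertex coordinates of the scan data.
    normals: (n, 3) per-vertex normal vectors.
    """
    X = points - points.mean(axis=0)         # center about the mean
    cov = (X.T @ X) / len(X)                 # 3x3 covariance of the point coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    w3 = eigvecs[:, 0]                       # eigenvector of the smallest eigenvalue
    n_bar = normals.mean(axis=0)             # average normal of the scan data
    # assumed orientation rule: make the projection direction face the surface,
    # i.e. point against the average normal
    return -w3 if np.dot(w3, n_bar) > 0 else w3

# flat, slightly noisy patch in the xy-plane: the smallest-variance axis is ~z,
# and the normals all point toward +z
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(size=(200, 2)), 0.01 * rng.normal(size=200)])
nrm = np.tile([0.0, 0.0, 1.0], (200, 1))
d = projection_direction(pts, nrm)
```

For this synthetic patch, `d` comes out (anti)parallel to the z-axis and oriented against the normals, which is the behavior the orientation step of claim 4 is meant to guarantee up to the chosen sign convention.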
PCT/KR2020/018786 2020-12-14 2020-12-21 Simplified method for automatically detecting landmarks of three-dimensional dental scan data, and computer readable medium having program recorded thereon for performing same by computer Ceased WO2022131418A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0174713 2020-12-14
KR1020200174713A KR102334519B1 (en) 2020-12-14 2020-12-14 Automated and simplified method for extracting landmarks of three dimensional dental scan data and computer readable medium having program for performing the method

Publications (1)

Publication Number Publication Date
WO2022131418A1 true WO2022131418A1 (en) 2022-06-23

Family

ID=78936321

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/018786 Ceased WO2022131418A1 (en) 2020-12-14 2020-12-21 Simplified method for automatically detecting landmarks of three-dimensional dental scan data, and computer readable medium having program recorded thereon for performing same by computer

Country Status (2)

Country Link
KR (2) KR102334519B1 (en)
WO (1) WO2022131418A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102758841B1 (en) 2022-03-22 2025-01-23 주식회사 레이 3-dimension face scan system and method of producing 3-dimensional face scan data
KR102684725B1 (en) * 2022-05-17 2024-07-15 주식회사 애마슈 Method and apparatus for detecting landmark from three dimensional volume image
KR102891748B1 (en) 2023-01-04 2025-12-03 주식회사 레이 computer program cmprising a simulation method for prosthodontics
KR102785244B1 (en) * 2023-01-05 2025-03-20 사회복지법인 삼성생명공익재단 3 dimensional heatmap generating method using 2 dimensional medical image and image processing apparatus
KR20240159672A (en) * 2023-04-27 2024-11-06 디디에이치 주식회사 Dental image analysis method and apparatus thereof
KR102726869B1 (en) 2024-04-04 2024-11-06 주식회사 덴티움 Apparatus for providing information required for orthodontic diagnosis using virtual cephalometric images reconstructed from cone beam computed tomography images and method therefor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070081712A1 (en) * 2005-10-06 2007-04-12 Xiaolei Huang System and method for whole body landmark detection, segmentation and change quantification in digital images
KR20140114308A (en) * 2013-03-18 2014-09-26 삼성전자주식회사 system and method for automatic registration of anatomic points in 3d medical images
KR20170126669A (en) * 2016-05-10 2017-11-20 (주)바텍이우홀딩스 Apparatus and method for aligning 3d head image
KR20190137388A (en) * 2018-06-01 2019-12-11 오스템임플란트 주식회사 Cephalo image processing method for orthodontic treatment planning, apparatus, and method thereof
KR20200006506A (en) * 2018-07-10 2020-01-20 주식회사 디오 Method and apparatus for automatically aligning three-dimensional oral image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102099390B1 (en) 2018-08-21 2020-04-09 디디에이치 주식회사 Dental image analyzing method for orthodontic daignosis and apparatus using the same
KR20200083822A (en) 2018-12-28 2020-07-09 디디에이치 주식회사 Computing device for analyzing dental image for orthodontic daignosis and dental image analyzing method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11842484B2 (en) 2021-01-04 2023-12-12 James R. Glidewell Dental Ceramics, Inc. Teeth segmentation using neural networks
US12236594B2 (en) 2021-01-04 2025-02-25 James R. Glidewell Dental Ceramics, Inc. Teeth segmentation using neural networks
US12136208B2 (en) 2021-03-31 2024-11-05 James R. Glidewell Dental Ceramics, Inc. Automatic clean up of jaw scans
US12210802B2 (en) 2021-04-30 2025-01-28 James R. Glidewell Dental Ceramics, Inc. Neural network margin proposal
US12295806B2 (en) 2022-01-10 2025-05-13 James R. Glidewell Dental Ceramics, Inc. Automatic determination of trim-line for aligners

Also Published As

Publication number Publication date
KR102369067B1 (en) 2022-03-03
KR102334519B1 (en) 2021-12-06

Similar Documents

Publication Publication Date Title
WO2022131418A1 (en) Simplified method for automatically detecting landmarks of three-dimensional dental scan data, and computer readable medium having program recorded thereon for performing same by computer
WO2022124462A1 (en) Method for automatically detecting landmark in three-dimensional dental scan data, and computer-readable recording medium with program for executing same in computer recorded thereon
KR102373500B1 (en) Method and apparatus for detecting landmarks from medical 3d volume image based on deep learning
EP2564375B1 (en) Virtual cephalometric imaging
WO2022131419A1 (en) Method for determining registration accuracy of three-dimensional dental ct image and three-dimensional digital impression model, and computer-readable recording medium in which program for executing same in computer is recorded
US20060127854A1 (en) Image based dentition record digitization
JP2010524529A (en) Computer-aided creation of custom tooth setup using facial analysis
CN113052902B (en) Tooth treatment monitoring method
WO2016003257A2 (en) Tooth model generation method for dental procedure simulation
US20210007834A1 (en) Method for evaluating a dental situation with the aid of a deformed dental arch model
KR102533659B1 (en) Automated registration method of 3d facial scan data and 3d volumetric medical image data using deep learning and computer readable medium having program for performing the method
CN114399551B (en) Method and system for positioning tooth root orifice based on mixed reality technology
JP2001112743A (en) Three-dimensional jaw motion display device and method and storage medium storing three-dimensional jaw motion display program
WO2025161076A1 (en) Method for using anatomical landmarks of balance organ to construct three-dimensional cephalometric coordinate system
KR20200012707A (en) Method for predicting anatomical landmarks and device for predicting anatomical landmarks using the same
CN113096236B (en) Virtual articulator design method for functional occlusal surface of dental crown bridge
CN115471506B (en) Position adjustment method of mouth sweeping model, storage medium and electronic equipment
JP7811808B2 (en) A method for automatically matching 3D facial scan data and 3D volumetric medical image data using deep learning, and a computer-readable recording medium having a program recorded thereon for executing the method on a computer
CN119559160A (en) Point cloud-based method, system and device for predicting facial changes after implantation in edentulous jaws
CN118267135A (en) Occlusion relation establishment method for digital model, electronic equipment and storage medium
WO2025263671A1 (en) Method for automatically segmenting teeth from three-dimensional volume data, and computer-readable recording medium having, recorded thereon, program for executing same on computer
KR20240161035A (en) System and method for providing periodontitis diagnosis information using artificial intelligence
CN120678545A (en) Angle classification automatic identification method, electronic device and medium
CN118845261A (en) Method for establishing occlusal relationship of digital model, electronic device and storage medium
WO2023229152A1 (en) Automatic three-dimensional face scan matching device having artificial intelligence applied thereto, method for driving same, and computer program stored in medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20966072

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20966072

Country of ref document: EP

Kind code of ref document: A1