WO2025169329A1 - Information processing device, information processing method, and recording medium - Google Patents
Information processing device, information processing method, and recording medium
- Publication number
- WO2025169329A1 (PCT/JP2024/004051)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- information processing
- target image
- singular
- search target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
Definitions
- This disclosure relates to the technical fields of information processing devices, information processing methods, and recording media.
- This disclosure aims to improve the related technologies mentioned above.
- One aspect of the information processing device disclosed herein includes a detection means that uses a computational model to detect singular points contained in a search target image and each of a plurality of registered pattern images, and a selection means that selects a target image to be matched with the search target image from the plurality of registered pattern images based on the singular points; when a pattern image is input, the computational model outputs singular point information indicating the singular points contained in the pattern image.
- One aspect of the information processing method disclosed herein is an information processing method that uses a computational model executed by a computer to detect singular points contained in a search target image and each of a plurality of registered pattern images, and selects a target image to be matched with the search target image from the plurality of registered pattern images based on the singular points.
- the computational model outputs singular point information indicating the singular points contained in the pattern image.
- One aspect of the recording medium disclosed herein is an information processing method that uses a computational model to detect singular points contained in a search target image and each of a plurality of registered pattern images, and selects a target image from the plurality of registered pattern images to be matched with the search target image based on the singular points.
- the computational model has recorded thereon a computer program that causes a computer to execute the information processing method, which, when a pattern image is input, outputs singular point information indicating the singular points contained in the pattern image.
- FIG. 1 is a block diagram showing a configuration of an information processing device according to the present disclosure.
- FIG. 2 is a flowchart showing a flow of information processing operations in an information processing device according to the present disclosure.
- FIG. 3 is a block diagram showing a configuration of an information processing device according to the present disclosure.
- FIG. 4 is a flowchart showing a flow of information processing operations in an information processing device according to the present disclosure.
- FIG. 5 is a schematic diagram illustrating an outline of a singularity.
- FIG. 6 is a schematic diagram illustrating an information processing method in an information processing device according to the present disclosure.
- FIG. 7 is a block diagram showing a configuration of an information processing device according to the present disclosure.
- FIG. 8 is a schematic diagram illustrating an information processing method in an information processing device according to the present disclosure.
- FIG. 9 is a block diagram showing a configuration of an information processing device according to the present disclosure.
- FIG. 10 is a schematic diagram illustrating an information processing method in an information processing device according to the present disclosure.
- a first embodiment of an information processing device, an information processing method, and a recording medium will be described.
- a first embodiment of an information processing device, an information processing method, and a recording medium will be described using an information processing device 1 according to this disclosure.
- the information processing device 1 includes a calculation device 11, a storage device 12, and a communication device 13.
- the calculation device 11, the storage device 12, and the communication device 13 may be connected via a data bus 16.
- the computing device 11 includes at least one processor (i.e., one processor or multiple processors) as hardware.
- the processor may include, for example, a processor conforming to a von Neumann computer architecture.
- a processor conforming to a von Neumann computer architecture may include at least one of a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit).
- the processor may include, for example, a processor conforming to a non-von Neumann computer architecture.
- a processor conforming to a non-von Neumann computer architecture may include at least one of an FPGA (Field Programmable Gate Array) and an ASIC (Application Specific Integrated Circuit).
- the arithmetic device 11 reads a computer program 121 including at least one of computer program code and computer program instructions.
- the arithmetic device 11 may read a computer program 121 stored in the storage device 12.
- the arithmetic device 11 may read a computer program 121 stored in a computer-readable, non-transitory storage medium using a storage medium reading device (not shown) provided in the information processing device 1.
- the computer program 121 read from the storage medium may be stored in the storage device 12.
- the arithmetic device 11 may acquire (i.e., download or read) the computer program 121 from a device (not shown) located outside the information processing device 1 via the communication device 13 (or another communication device).
- the downloaded computer program 121 may be stored in the storage device 12.
- the arithmetic device 11 executes the loaded computer program 121.
- as a result, logical functional blocks for executing the processing to be performed by the information processing device 1 (e.g., the information processing described below) are realized within the arithmetic device 11.
- the arithmetic device 11, together with the storage device 12 and the like in which the computer program 121 is recorded, can function as a controller or computer for realizing the logical functional blocks for executing the processing to be performed by the information processing device 1.
- the at least one processor provided in the arithmetic device 11, the memory (recording medium) provided in the storage device 12 and the like, and the computer program 121 are together configured so that the information processing device 1 performs the processing it is to perform (e.g., the information processing described below).
- the arithmetic device 11 may output information to another computer, cloud server, or other device (not shown) located outside the information processing device 1 via the communication device 13 (or other communication device).
- the recording medium for recording the computer program 121 executed by the arithmetic device 11 may be at least one of the following: a CD-ROM, CD-R, CD-RW, flexible disk, MO, DVD-ROM, DVD-RAM, DVD-R, DVD+R, DVD-RW, DVD+RW, and optical disks such as Blu-ray (registered trademark), magnetic media such as magnetic tape, magneto-optical disk, semiconductor memory such as USB memory, and any other medium capable of storing a program.
- the recording medium may include a device capable of recording the computer program 121 (for example, a general-purpose device or dedicated device in which the computer program 121 is implemented in a state in which it can be executed in at least one of the forms of software and firmware).
- each process or function included in the computer program 121 may be realized by a logical processing block realized within the arithmetic device 11 when the arithmetic device 11 (i.e., processor) executes the computer program 121, or may be realized by hardware such as a predetermined gate array (FPGA (Field Programmable Gate Array), ASIC (Application Specific Integrated Circuit)) included in the arithmetic device 11, or may be realized in a form that combines logical processing blocks and partial hardware modules that realize some of the hardware elements.
- a computational model M that can be constructed by machine learning is implemented within the computational device 11 by the computational device 11 executing the computer program 121.
- An example of a computational model M that can be constructed by machine learning is a computational model M that includes a neural network (so-called artificial intelligence (AI)).
- learning of the computational model M may include learning of neural network parameters (e.g., at least one of weights and biases).
- the computational device 11 executes at least the detection process using the computational model M.
- a computational model M that has been constructed by machine learning may be implemented in the computational device 11.
- a computational model M that has been constructed by offline machine learning using training data may be implemented in the computational device 11.
- the computational model M implemented in the computational device 11 may be updated by online machine learning on the computational device 11.
- the calculation device 11 may execute information processing using a computational model M implemented in a device external to the calculation device 11 (i.e., a device provided outside the information processing device 1), in addition to or instead of the computational model M implemented in the calculation device 11.
- FIG. 1 shows an example of logical functional blocks implemented within the calculation device 11 to perform information processing.
- a detection unit 111 and a selection unit 112 are implemented within the calculation device 11.
- the detection unit 111 executes the process of detecting singular points using the calculation model M described above. The processes performed by the detection unit 111 and the selection unit 112 will be explained with reference to FIG. 2.
- the storage device 12 includes at least one memory capable of storing desired data.
- the storage device 12 may store a computer program 121 executed by the arithmetic device 11.
- the storage device 12 (memory) may be used as the above-mentioned recording medium for recording the computer program 121 executed by the arithmetic device 11.
- the storage device 12 may temporarily store data used by the arithmetic device 11 when the arithmetic device 11 is executing the computer program 121.
- the storage device 12 may also store data that the information processing device 1 will store long-term.
- the storage device 12 may include at least one of RAM (Random Access Memory), ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device.
- the storage device 12 may include a non-transitory recording medium.
- the communication device 13 is capable of communicating with devices external to the information processing device 1 and the information processing device 2 via a communication network (not shown).
- the communication device 13 may be a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), or USB (Universal Serial Bus).
- Figure 2 is a flowchart showing an example of the flow of the information processing method executed by information processing device 1.
- the detection unit 111 uses the computational model M to detect the singular points S contained in the search target image Q to be searched and in each of the multiple registered pattern images (step S11).
- the computational model M outputs singular point information indicating the singular points S contained in the pattern image.
- the detection unit 111 detects singular points contained in the search target image Q (sometimes referred to as "query singular points") and singular points contained in each of the multiple registered pattern images (sometimes referred to as "target singular points").
- the selection unit 112 selects a target image to be matched with the search target image from among the multiple registered pattern images based on the singular points (step S12).
- the query singular points and the target singular points may be detected at different times. For example, the target singular points may be detected when a registered pattern image is enrolled and stored together with that registered pattern image.
- the information processing device 1 selects, in accordance with the search target image Q, the target images to be matched against it. In other words, the information processing device 1 limits the targets of matching, thereby reducing the processing load of matching.
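The detect-then-select flow of steps S11 and S12 can be sketched in Python. This is an illustrative sketch only, not part of the disclosure: the model M is abstracted as a callable stub, and every name (`SingularPoint`, `select_targets`, `is_candidate`) is an assumption.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SingularPoint:
    kind: str                      # e.g. "core" or "delta"
    position: Tuple[float, float]  # coordinates in the pattern image
    direction: float               # orientation in radians

# The computational model M is abstracted as any callable that maps a
# pattern image to the singular-point information it contains.
Model = Callable[[object], List[SingularPoint]]

def detect(model: Model, image: object) -> List[SingularPoint]:
    """Detection step (S11): run the model on one pattern image."""
    return model(image)

def select_targets(model: Model, query: object, registered: List[object],
                   is_candidate: Callable[[List[SingularPoint],
                                           List[SingularPoint]], bool]) -> List[object]:
    """Selection step (S12): keep only registered pattern images whose
    singular points are compatible with the query's singular points."""
    query_points = detect(model, query)
    return [img for img in registered
            if is_candidate(query_points, detect(model, img))]
```

The compatibility test `is_candidate` is left open here; later embodiments make it concrete (for example, comparing positional relationships of singular points).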
- a second embodiment of an information processing device, an information processing method, and a recording medium will be described. Below, a second embodiment of an information processing device, an information processing method, and a recording medium will be described using an information processing device 2 disclosed herein.
- Figure 3 is a block diagram showing the configuration of the information processing device 2 disclosed herein.
- the information processing device 2 may further include an input device 14 and an output device 15 in addition to the calculation device 11, the storage device 12, and the communication device 13. However, the information processing device 2 does not have to include at least one of the input device 14 and the output device 15.
- the calculation device 11, the storage device 12, the communication device 13, the input device 14, and the output device 15 may be connected via a data bus 16.
- a reception unit 213, an extraction unit 214, and a matching unit 215 are further implemented within the calculation device 11 in the second embodiment.
- a registered palm print image database DB may be implemented within the storage device 12 in the second embodiment.
- the registered palm print image database DB may also be implemented in a storage device external to the information processing device 2.
- Figure 4 is a flowchart showing the flow of the information processing operation in the information processing device 2.
- the reception unit 213 receives a request to match a search target image Q (step S21).
- the request to match a search target image Q is a request to match the search target image Q with a registered palm print image R registered in the registered palm print image database DB.
- FIG. 5(a) shows an example of a search target image Q and a registered palm print image R.
- Registered palm print image R is an image of an imprinted palm print. It is preferable that registered palm print image R is an image of the entire palm print.
- search target image Q may be an image of the entire palm print or of only a portion of the palm print. Search target image Q may be an image of an imprinted palm print or of a left-behind (latent) palm print. Note that an imprinted palm print is a palm print obtained by deliberately imprinting the palm.
- the detection unit 211 uses the computational model M to detect singular points S contained in the search target image Q and each of the multiple registered palm print images R (step S22).
- Figure 5(b) shows an example of the detected query singular points QS and target singular points RS.
- when a pattern image is input, the computational model M outputs singularity information indicating the singularity S contained in the pattern image. As illustrated in FIG. 5, when a search target image Q is input, the computational model M outputs singularity information indicating the query singularity QS contained in the search target image Q. Furthermore, when a registered palm print image R is input, the computational model M outputs singularity information indicating the target singularity RS contained in the registered palm print image R.
- the singularity information may indicate at least one of the type of singularity S, the position of the singularity S, and the direction of the singularity S.
- the computational model M is constructed by machine learning.
- the computational model M may be constructed by machine learning using training data.
- the training data may be data in which a pattern image is accompanied by correct answer information indicating at least one of the type of singular point contained in the pattern image, the position of the singular point, and the direction of the singular point.
- the computational model M is trained so that when a pattern image is input, it can output correct answer information.
- the computational model M is constructed so that when a pattern image is input, it can infer at least one of the type of singular point contained in the pattern image, the position of the singular point, and the direction of the singular point.
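The training setup described above pairs a pattern image with "correct answer" information on singular-point type, position, and direction. A minimal sketch of what one such training record and its numeric target might look like follows; the file name, field names, and encoding are all illustrative assumptions, not taken from the patent.

```python
# Hypothetical encoding of singular-point types for a training target.
KIND_IDS = {"core": 0, "delta": 1}

def to_target(annotations):
    """Encode annotations of the form (kind, x, y, direction) into the
    numeric targets a model M could be trained to predict."""
    return [(KIND_IDS[kind], x, y, direction)
            for kind, x, y, direction in annotations]

# One hypothetical training record: a pattern image plus its
# ground-truth singular points (type, position, direction in radians).
sample = {
    "image": "palm_0001.png",  # stand-in for the pixel data
    "annotations": [("core", 120.0, 88.0, 1.57),
                    ("delta", 40.0, 200.0, 0.52)],
}
```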
- as shown in FIG. 6(a), the computational model M may extract the center Sc of the looped ridge as the position of the core. Furthermore, as shown in FIG. 6(b), the computational model M may extract a position Sd within a delta formed by the ridges as the position of the delta.
- the position of the singular point S may be represented by coordinates in the pattern image. The position of the singular point S may also be a relative position in the palm print region.
- the direction of the singular point S may be determined according to the direction of the ridges surrounding the singular point S.
- the computational model M may infer the direction of the singular point S as shown by the arrow in Figure 5(a).
- the computational model M may also infer the direction of the singular point S as shown by the arrow in Figure 5(b).
- the computational model M may infer three directions.
- based on the singularity information, the detection unit 211 performs at least one of the following: identifying the type of the detected singularity, identifying the position of the singularity, and identifying the direction of the singularity.
- the detection unit 211 may detect the query singularity and the target singularity at different times. For example, the detection unit 211 may detect the target singularity at the time when the registered palm print image R is registered in the registered palm print image database DB. In this case, information indicating the target singularity may be registered in association with the registered palm print image R in the registered palm print image database DB.
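The registration-time detection described above can be sketched as a small enrollment store that runs the model once per registered palm print image and caches the target singular points alongside it. This is an illustrative sketch under assumed names (`RegisteredPalmprintDB`, `register`), not the patent's implementation of the registered palm print image database DB.

```python
class RegisteredPalmprintDB:
    """Sketch of a database that caches target singular points at
    enrollment time, so matching requests need not re-run the model
    on every registered image."""

    def __init__(self, model):
        self.model = model   # stand-in for the computational model M
        self.entries = {}    # image_id -> (image, target singular points)

    def register(self, image_id, image):
        """Enroll an image and detect+store its singular points once."""
        self.entries[image_id] = (image, self.model(image))

    def singular_points(self, image_id):
        """Return the cached target singular points for an enrolled image."""
        return self.entries[image_id][1]
```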
- the selection unit 212 selects a target image to be matched with the search target image Q from the multiple registered palm print images R based on the singular points S (step S23).
- the selection unit 212 may select multiple target images. In other words, from the multiple registered palm print images R, the selection unit 212 excludes those that need not be used as targets for matching and retains those that should be.
- the selection unit 212 thereby narrows down the targets of the processing from step S24 onward; that is, the information processing device 2 performs a filtering process before feature point matching.
- the extraction unit 214 extracts feature points from the search target image Q and each of the target images (step S24).
- the extraction unit 214 may extract the end points and branch points of palm print ridges as feature points.
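Extracted feature points (ridge end points and branch points) can be compared by their positional relationship. A minimal sketch of one such positional similarity, counting how many query points have a nearby counterpart, is shown below; the function name, tolerance, and scoring rule are assumptions for illustration, not the patent's actual matching algorithm.

```python
from math import hypot

def positional_similarity(query_pts, target_pts, tol=5.0):
    """Fraction of query feature points that have a counterpart within
    `tol` pixels in the target image (a simple positional similarity)."""
    if not query_pts:
        return 0.0
    matched = sum(
        1 for qx, qy in query_pts
        if any(hypot(qx - tx, qy - ty) <= tol for tx, ty in target_pts)
    )
    return matched / len(query_pts)
```

A matching unit could compare such a score against a threshold to decide whether the search target image and the target image match.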
- the matching unit 215 matches the search target image Q with the target image based on the feature points (step S25). For example, the matching unit 215 may compute a similarity between the positional relationship of the feature points included in the search target image Q and that of the feature points included in the target image, and match the two images accordingly.
[2-3: Technical Effects of Information Processing Device 2]
- compared to fingerprints, palm prints have a larger area to be matched. For this reason, palm print matching often incurs higher computational costs than fingerprint matching.
- the information processing device 2 detects singularities and reduces the number of matching targets, thereby reducing the computational cost of matching compared to the case where the number of matching targets is not reduced. Furthermore, the information processing device 2 can detect singularities that are useful for selecting target images using a computational model constructed by machine learning.
[3: Third embodiment]
- a third embodiment of an information processing device, an information processing method, and a recording medium will be described.
- a third embodiment of an information processing device, an information processing method, and a recording medium will be described using an information processing device 3 according to this disclosure.
- the information processing device 3, like the information processing devices 1 and 2, is configured as a device for selecting a target image. Furthermore, like the information processing device 2, the information processing device 3 may be configured as a device for matching palm prints.
- the third embodiment differs from the first and second embodiments in the operation of the selection unit 312.
- the selection unit 312 selects a target image based on a comparison between the positional relationship of the singular points S included in the search target image Q and the positional relationship of the singular points S included in the registered palm print image R. As illustrated in FIG. 8, the selection unit 312 may designate the singular point S located closest to the ball of the thumb as A and the adjacent singular points S in counterclockwise order as B, C, and D, and compare the positional relationships of the singular points S.
- FIG. 8(a) shows an example of a query singular point QS
- FIG. 8(b) shows an example of a target singular point RS.
- the selection unit 312 may select a target image based on the similarity between the positional relationship of the query singular points QS included in the search target image Q and the positional relationship of the target singular points RS included in the registered palmprint image R. If the similarity between the positional relationship of the query singular points QS included in the search target image Q and the positional relationship of the target singular points RS included in the registered palmprint image R is greater than a criterion, the selection unit 312 may select the registered palmprint image R as the target image, and if it is less than the criterion, exclude the registered palmprint image R from the target images.
- the selection unit 312 may select a target image based on at least one of: the difference, between the search target image Q and the registered palm print image R, in the distance between two corresponding singular points S; and the differences in the respective angles of a triangle formed by three corresponding points.
- FIG. 8(c) shows an example of the distance between two points and the angle of a triangle formed by three points in the search target image Q
- FIG. 8(d) shows an example of the distance between two points and the angle of a triangle formed by three points in the registered palm print image R.
- the selection unit 312 may select as the target image a registered palm print image R for which the difference in the distance between corresponding singular points S is less than a predetermined value and the difference in the angles formed by the corresponding singular points S is less than a predetermined value.
- the selection unit 312 may also select as the target image a registered palm print image R for which the difference in the x coordinates of corresponding singular points S, the difference in their y coordinates, and the difference in the angles they form are each less than a predetermined value.
- otherwise, the selection unit 312 may exclude the corresponding registered palm print image R from being subjected to matching.
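The geometric comparison of the third embodiment (pairwise distances between corresponding singular points, and angles of a triangle formed by three points, each under a threshold) can be sketched as follows. All names and threshold values are illustrative assumptions, not the patent's concrete criteria.

```python
from math import dist, acos

def triangle_angles(a, b, c):
    """Interior angles (radians) at vertices a, b, c of triangle abc."""
    def angle_at(q, p, r):  # angle at vertex q between q->p and q->r
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (dist(q, p) * dist(q, r))
        return acos(max(-1.0, min(1.0, cos_a)))  # clamp for safety
    return (angle_at(a, b, c), angle_at(b, a, c), angle_at(c, a, b))

def compatible(q_pts, r_pts, d_tol=10.0, a_tol=0.2):
    """True if every corresponding pairwise distance differs by less
    than d_tol, and each angle of the triangle formed by the first
    three points differs by less than a_tol, between query and
    registered images."""
    n = len(q_pts)
    distances_ok = all(
        abs(dist(q_pts[i], q_pts[j]) - dist(r_pts[i], r_pts[j])) < d_tol
        for i in range(n) for j in range(i + 1, n))
    angles_ok = all(
        abs(qa - ra) < a_tol
        for qa, ra in zip(triangle_angles(*q_pts[:3]),
                          triangle_angles(*r_pts[:3])))
    return distances_ok and angles_ok
```

Because the comparison uses distances and angles rather than raw coordinates, a registered image whose singular points are merely translated relative to the query still passes, while one whose geometry is stretched is excluded.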
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
This disclosure relates to the technical fields of information processing devices, information processing methods, and recording media.
It is known that features extracted from palm print images are used for palm print matching. For example, Patent Document 1 discloses palm print matching by extracting features from pre-registered palm print images and from palm print images acquired during matching.
This disclosure aims to improve the related technologies mentioned above.
One aspect of the information processing device disclosed herein includes a detection means that uses a computational model to detect singular points contained in a search target image and each of a plurality of registered pattern images, and a selection means that selects a target image to be matched with the search target image from the plurality of registered pattern images based on the singular points; when a pattern image is input, the computational model outputs singular point information indicating the singular points contained in the pattern image.
One aspect of the information processing method disclosed herein is an information processing method, executed by a computer, that uses a computational model to detect singular points contained in a search target image and each of a plurality of registered pattern images, and selects a target image to be matched with the search target image from the plurality of registered pattern images based on the singular points. When a pattern image is input, the computational model outputs singular point information indicating the singular points contained in the pattern image.
One aspect of the recording medium disclosed herein has recorded thereon a computer program that causes a computer to execute an information processing method that uses a computational model to detect singular points contained in a search target image and each of a plurality of registered pattern images, and selects a target image from the plurality of registered pattern images to be matched with the search target image based on the singular points; when a pattern image is input, the computational model outputs singular point information indicating the singular points contained in the pattern image.
Hereinafter, embodiments of an information processing device, an information processing method, and a recording medium will be described with reference to the drawings.
[1: First embodiment]
A first embodiment of an information processing device, an information processing method, and a recording medium will be described. Hereinafter, a first embodiment of an information processing device, an information processing method, and a recording medium will be described using an information processing device 1 according to this disclosure.
[1-1: Configuration of information processing device 1]
図1を参照しながら、この開示にかかる情報処理装置1の構成について説明する。図1は、この開示にかかる情報処理装置1の構成を示すブロック図である。 The configuration of the information processing device 1 according to this disclosure will be described with reference to Figure 1. Figure 1 is a block diagram showing the configuration of the information processing device 1 according to this disclosure.
図1に示すように、情報処理装置1は、演算装置11と、記憶装置12と、通信装置13とを備えている。演算装置11と、記憶装置12と、通信装置13とは、データバス16を介して接続されていてもよい。 As shown in FIG. 1, the information processing device 1 includes a calculation device 11, a storage device 12, and a communication device 13. The calculation device 11, the storage device 12, and the communication device 13 may be connected via a data bus 16.
演算装置11は、少なくとも一つのプロセッサ(つまり、一つのプロセッサ又は複数のプロセッサ)をハードウェアとして含む。プロセッサは、例えば、ノイマン型のコンピュータアーキテクチャに準拠したプロセッサを含んでいてもよい。ノイマン型のコンピュータアーキテクチャに準拠したプロセッサは、CPU(Central Processing Unit)及びGPU(Graphics Processing Unit)の少なくとも一つを含んでいてもよい。プロセッサは、例えば、非ノイマン型のコンピュータアーキテクチャに準拠したプロセッサを含んでいてもよい。非ノイマン型のコンピュータアーキテクチャに準拠したプロセッサは、FPGA(Field Programmable Gate Array)及びASIC(Application Specific Circuit)のうちの少なくとも一つを含んでいてもよい。 The computing device 11 includes at least one processor (i.e., one processor or multiple processors) as hardware. The processor may include, for example, a processor conforming to a von Neumann computer architecture. A processor conforming to a von Neumann computer architecture may include at least one of a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit). The processor may include, for example, a processor conforming to a non-von Neumann computer architecture. A processor conforming to a non-von Neumann computer architecture may include at least one of an FPGA (Field Programmable Gate Array) and an ASIC (Application Specific Circuit).
演算装置11は、コンピュータプログラムコード及びコンピュータプログラム指令の少なくとも一つを含むコンピュータプログラム121を読み込む。例えば、演算装置11は、記憶装置12が記憶しているコンピュータプログラム121を読み込んでもよい。例えば、演算装置11は、コンピュータで読み取り可能であって且つ一時的でない記録媒体が記憶しているコンピュータプログラム121を、情報処理装置1が備える図示しない記録媒体読み取り装置を用いて読み込んでもよい。記録媒体から読み取られたコンピュータプログラム121は、記憶装置12に記憶されてもよい。演算装置11は、通信装置13(或いは、その他の通信装置)を介して、情報処理装置1の外部に配置される不図示の装置からコンピュータプログラム121を取得してもよい(つまり、ダウンロードしてもよい又は読み込んでもよい)。ダウンロードされたコンピュータプログラム121は、記憶装置12に記憶されてもよい。 The arithmetic device 11 reads a computer program 121 including at least one of computer program code and computer program instructions. For example, the arithmetic device 11 may read a computer program 121 stored in the storage device 12. For example, the arithmetic device 11 may read a computer program 121 stored in a computer-readable, non-transitory storage medium using a storage medium reading device (not shown) provided in the information processing device 1. The computer program 121 read from the storage medium may be stored in the storage device 12. The arithmetic device 11 may acquire (i.e., download or read) the computer program 121 from a device (not shown) located outside the information processing device 1 via the communication device 13 (or another communication device). The downloaded computer program 121 may be stored in the storage device 12.
演算装置11は、読み込んだコンピュータプログラム121を実行する。その結果、演算装置11内には、情報処理装置1が行うべき処理(例えば、後述する情報処理)を実行するための論理的な機能ブロックが実現される。言い換えれば、演算装置11は、コンピュータプログラム121が記録された記憶装置12等と共に(言い換えれば、記憶装置12と記憶装置12等に記録されたコンピュータプログラム121と共に)、情報処理装置1が行うべき処理を実行するための論理的な機能ブロックを実現するためのコントローラ又はコンピュータとして機能可能である。つまり、演算装置11が備える少なくとも一つのプロセッサと共に、記憶装置12等が備えるメモリ(記録媒体)とコンピュータプログラム121とは、情報処理装置1が行うべき処理(例えば、後述する情報処理)を情報処理装置1が行うように構成されている。演算装置11は、通信装置13(或いは、その他の通信装置)を介して、情報処理装置1の外部に設けられている他のコンピュータ、クラウドサーバ等の不図示の装置に情報を出力してもよい。 The arithmetic device 11 executes the loaded computer program 121. As a result, logical functional blocks for executing the processing to be performed by the information processing device 1 (e.g., the information processing described below) are realized within the arithmetic device 11. In other words, the arithmetic device 11, together with the storage device 12, etc. in which the computer program 121 is recorded (in other words, together with the storage device 12 and the computer program 121 recorded in the storage device 12, etc.), can function as a controller or computer for realizing the logical functional blocks for executing the processing to be performed by the information processing device 1. In other words, the at least one processor provided in the arithmetic device 11, the memory (recording medium) provided in the storage device 12, etc., and the computer program 121 are configured so that the information processing device 1 performs the processing to be performed by the information processing device 1 (e.g., the information processing described below). The arithmetic device 11 may output information to another computer, cloud server, or other device (not shown) located outside the information processing device 1 via the communication device 13 (or other communication device).
尚、演算装置11が実行するコンピュータプログラム121を記録する記録媒体としては、CD-ROM、CD-R、CD-RWやフレキシブルディスク、MO、DVD-ROM、DVD-RAM、DVD-R、DVD+R、DVD-RW、DVD+RW及びBlu-ray(登録商標)等の光ディスク、磁気テープ等の磁気媒体、光磁気ディスク、USBメモリ等の半導体メモリ、及び、その他プログラムを格納可能な任意の媒体の少なくとも一つが用いられてもよい。記録媒体には、コンピュータプログラム121を記録可能な機器(例えば、コンピュータプログラム121がソフトウェア及びファームウェア等の少なくとも一方の形態で実行可能な状態に実装された汎用機器又は専用機器)が含まれていてもよい。更に、コンピュータプログラム121に含まれる各処理や機能は、演算装置11(つまり、プロセッサ)がコンピュータプログラム121を実行することで演算装置11内に実現される論理的な処理ブロックによって実現されてもよいし、演算装置11が備える所定のゲートアレイ(FPGA(Field Programmable Gate Array)、ASIC(Application Specific Integrated Circuit))等のハードウェアによって実現されてもよいし、論理的な処理ブロックとハードウェアの一部の要素を実現する部分的ハードウェアモジュールとが混在する形式で実現してもよい。 The recording medium for recording the computer program 121 executed by the arithmetic device 11 may be at least one of the following: optical discs such as CD-ROM, CD-R, CD-RW, flexible disks, MO, DVD-ROM, DVD-RAM, DVD-R, DVD+R, DVD-RW, DVD+RW, and Blu-ray (registered trademark); magnetic media such as magnetic tape; magneto-optical disks; semiconductor memories such as USB memory; and any other medium capable of storing a program. The recording medium may include a device capable of recording the computer program 121 (for example, a general-purpose device or dedicated device in which the computer program 121 is implemented in a state in which it can be executed in at least one of the forms of software and firmware). Furthermore, each process or function included in the computer program 121 may be realized by a logical processing block realized within the arithmetic device 11 when the arithmetic device 11 (i.e., the processor) executes the computer program 121, may be realized by hardware such as a predetermined gate array (an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit)) included in the arithmetic device 11, or may be realized in a form in which logical processing blocks are combined with partial hardware modules that realize some elements in hardware.
演算装置11内には、演算装置11がコンピュータプログラム121を実行することで、機械学習によって構築可能な演算モデルMが実装されている。機械学習によって構築可能な演算モデルMの一例として、例えば、ニューラルネットワークを含む演算モデルM(いわゆる、人工知能(AI:Artificial Intelligence))があげられる。この場合、演算モデルMの学習は、ニューラルネットワークのパラメータ(例えば、重み及びバイアスの少なくとも一つ)の学習を含んでいてもよい。演算装置11は、演算モデルMを用いて、少なくとも検出処理を実行する。演算装置11には、機械学習により構築済みの演算モデルMが実装されていてもよい。演算装置11には、教師データを用いたオフラインでの機械学習により構築済みの演算モデルMが実装されていてもよい。また、演算装置11に実装された演算モデルMは、演算装置11上においてオンラインでの機械学習によって更新されてもよい。或いは、演算装置11は、演算装置11に実装されている演算モデルMに加えて又は代えて、演算装置11の外部の装置(つまり、情報処理装置1の外部に設けられる装置)に実装された演算モデルMを用いて、情報処理を実行してもよい。 A computational model M that can be constructed by machine learning is implemented within the computational device 11 by the computational device 11 executing the computer program 121. An example of a computational model M that can be constructed by machine learning is a computational model M that includes a neural network (so-called artificial intelligence (AI)). In this case, learning of the computational model M may include learning of neural network parameters (e.g., at least one of weights and biases). The computational device 11 executes at least the detection process using the computational model M. A computational model M that has been constructed by machine learning may be implemented in the computational device 11. A computational model M that has been constructed by offline machine learning using training data may be implemented in the computational device 11. Furthermore, the computational model M implemented in the computational device 11 may be updated by online machine learning on the computational device 11. Alternatively, the calculation device 11 may execute information processing using a calculation model M implemented in a device external to the calculation device 11 (i.e., a device provided external to the information processing device 1) in addition to or instead of the calculation model M implemented in the calculation device 11.
図1には、情報処理を実行するために演算装置11内に実現される論理的な機能ブロックの一例が示されている。図1に示すように、演算装置11内には、検出部111と、選択部112とが実現される。検出部111は、上述した演算モデルMを用いて、特異点の検出処理を実行する。尚、検出部111、及び選択部112のそれぞれが行う処理については、図2を参照しながら説明する。 FIG. 1 shows an example of logical functional blocks implemented within the calculation device 11 to perform information processing. As shown in FIG. 1, a detection unit 111 and a selection unit 112 are implemented within the calculation device 11. The detection unit 111 executes the process of detecting singular points using the calculation model M described above. The processes performed by the detection unit 111 and the selection unit 112 will be explained with reference to FIG. 2.
記憶装置12は、所望のデータを記憶可能な少なくとも一つのメモリを含む。言い換えれば、記憶装置12は、所望のデータを含む少なくとも一つのメモリを含む。例えば、記憶装置12は、演算装置11が実行するコンピュータプログラム121を記憶していてもよい。この場合、記憶装置12(メモリ)は、演算装置11が実行するコンピュータプログラム121を記録する上述した記録媒体として用いられてもよい。記憶装置12は、演算装置11がコンピュータプログラム121を実行している場合に演算装置11が一時的に使用するデータを一時的に記憶してもよい。記憶装置12は、情報処理装置1が長期的に保存するデータを記憶してもよい。尚、記憶装置12は、RAM(Random Access Memory)、ROM(Read Only Memory)、ハードディスク装置、光磁気ディスク装置、SSD(Solid State Drive)及びディスクアレイ装置のうちの少なくとも一つを含んでいてもよい。つまり、記憶装置12は、一時的でない記録媒体を含んでいてもよい。 The storage device 12 includes at least one memory capable of storing desired data. In other words, the storage device 12 includes at least one memory containing desired data. For example, the storage device 12 may store a computer program 121 executed by the arithmetic device 11. In this case, the storage device 12 (memory) may be used as the above-mentioned recording medium for recording the computer program 121 executed by the arithmetic device 11. The storage device 12 may temporarily store data used by the arithmetic device 11 when the arithmetic device 11 is executing the computer program 121. The storage device 12 may also store data that the information processing device 1 will store long-term. The storage device 12 may include at least one of RAM (Random Access Memory), ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device. In other words, the storage device 12 may include a non-temporary recording medium.
通信装置13は、不図示の通信ネットワークを介して、情報処理装置1及び情報処理装置2の外部の装置と通信可能である。通信装置13は、イーサネット(登録商標)、Wi-Fi(登録商標)、Bluetooth(登録商標)、USB(Universal Serial Bus)等の規格に基づく通信インターフェースであってもよい。
[1-2:情報処理装置1が実行する情報処理方法]
The communication device 13 is capable of communicating with devices external to the information processing device 1 and the information processing device 2 via a communication network (not shown). The communication device 13 may be a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), or USB (Universal Serial Bus).
[1-2: Information Processing Method Executed by Information Processing Device 1]
図2を参照しながら、情報処理装置1が実行する情報処理方法について説明する。図2は、情報処理装置1が実行する情報処理方法の流れの一例を示すフローチャートである。 The information processing method executed by information processing device 1 will be described with reference to Figure 2. Figure 2 is a flowchart showing an example of the flow of the information processing method executed by information processing device 1.
図2に示す様に、検出部111は、演算モデルMを用いて、検索対象の検索対象画像Q及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点Sを検出する(ステップS11)。演算モデルMは、紋様画像が入力されると、紋様画像に含まれる特異点Sを示す特異点情報を出力する。検出部111は、検索対象画像Qに含まれる特異点(「クエリ特異点」と称する場合がある)、及び、登録されている複数の登録紋様画像の各々に含まれる特異点(「ターゲット特異点」と称する場合がある)を検出する。 As shown in FIG. 2, the detection unit 111 uses the computational model M to detect singular points S contained in the image for the search target image Q to be searched and for each of the multiple registered pattern images (step S11). When a pattern image is input, the computational model M outputs singular point information indicating the singular points S contained in the pattern image. The detection unit 111 detects singular points contained in the search target image Q (sometimes referred to as "query singular points") and singular points contained in each of the multiple registered pattern images (sometimes referred to as "target singular points").
選択部112は、特異点に基づいて、複数の登録紋様画像から、検索対象画像と照合される対象画像を選択する(ステップS12)。選択部112は、特異点に基づいて、複数の登録紋様画像から、検索対象画像との照合の対象を選択すると言い換えてもよい。なお、クエリ特異点とターゲット特異点とは、異なるタイミングで検出されてもよい。例えば、ターゲット特異点は、登録紋様画像が登録されるタイミングで検出され、登録紋様画像とともに登録されていてもよい。
[1-3:情報処理装置1の技術的効果]
The selection unit 112 selects a target image to be matched with the search target image from among the multiple registered pattern images based on the singular points (step S12). In other words, the selection unit 112 selects a target to be matched with the search target image from among the multiple registered pattern images based on the singular points. Note that the query singular points and the target singular points may be detected at different times. For example, the target singular points may be detected when the registered pattern image is registered and registered together with the registered pattern image.
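As a rough illustration, the detect-then-select flow of steps S11 and S12 might be sketched as below. This is a minimal sketch under assumed interfaces: the `model` callable, the image format, and the similarity predicate are hypothetical stand-ins for the computational model M and the selection criterion, not part of the disclosure.

```python
# Sketch of steps S11-S12: detect singular points with a computational
# model, then select candidate registered images for matching.

def detect_singular_points(model, image):
    """Step S11: run the (trained) computational model on one image."""
    return model(image)

def select_candidates(model, query_image, registered_images, is_similar):
    """Step S12: keep only registered images whose singular points are
    judged similar to the query's by the supplied predicate."""
    query_points = detect_singular_points(model, query_image)
    return [reg_id
            for reg_id, reg_image in registered_images.items()
            if is_similar(query_points,
                          detect_singular_points(model, reg_image))]

# Toy stand-in for the model: report coordinates of nonzero pixels.
fake_model = lambda img: [(x, y) for y, row in enumerate(img)
                                 for x, v in enumerate(row) if v]
query = [[0, 1], [1, 0]]
registry = {"r1": [[1, 0], [0, 1]], "r2": [[0, 0], [0, 0]]}
selected = select_candidates(fake_model, query, registry,
                             lambda q, t: len(q) == len(t))
```

Only `"r1"` survives the selection here, since the toy predicate keeps images with the same number of detected points as the query; a real selection criterion would compare the singular points themselves, as the later embodiments describe.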
[1-3: Technical Effects of Information Processing Device 1]
この開示にかかる情報処理装置1は、検索対象画像Qとの照合の対象である対象画像を、検索対象画像Qに応じて選択する。すなわち、情報処理装置1は、照合の対象を限定するので、照合の処理負荷を小さくすることができる。
[2:第2実施形態]
The information processing device 1 according to this disclosure selects a target image to be matched with the search target image Q in accordance with the search target image Q. In other words, the information processing device 1 limits the targets for matching, thereby reducing the processing load for matching.
[2: Second embodiment]
情報処理装置、情報処理方法、及び、記録媒体の第2実施形態について説明する。以下では、この開示にかかる情報処理装置2を用いて、情報処理装置、情報処理方法、及び記録媒体の第2実施形態について説明する。 A second embodiment of an information processing device, an information processing method, and a recording medium will be described. Below, a second embodiment of an information processing device, an information processing method, and a recording medium will be described using an information processing device 2 disclosed herein.
第2実施形態は、紋様画像である掌紋画像にこの開示を適用する場合を説明する。なお、この開示は、掌紋画像の他、足の裏の紋様の画像等の、複数の特異点Sを含み得る、比較的広い範囲の紋様画像に適用することができる。
[2-1:情報処理装置2の構成]
In the second embodiment, a case where this disclosure is applied to a palm print image, which is a pattern image, is described. Note that this disclosure can be applied to a relatively wide range of pattern images that may include multiple singular points S, such as images of patterns on the soles of feet, in addition to palm print images.
[2-1: Configuration of information processing device 2]
図3を参照しながら、この開示にかかる情報処理装置2の構成について説明する。図3は、この開示にかかる情報処理装置2の構成を示すブロック図である。 The configuration of the information processing device 2 disclosed herein will be described with reference to Figure 3. Figure 3 is a block diagram showing the configuration of the information processing device 2 disclosed herein.
図3に示すように、情報処理装置2は、演算装置11と、記憶装置12と、通信装置13とに加え、入力装置14と、出力装置15とを更に備えていてもよい。但し、情報処理装置2は、入力装置14及び出力装置15のうちの少なくとも一つを備えていなくてもよい。演算装置11と、記憶装置12と、通信装置13と、入力装置14と、出力装置15とは、データバス16を介して接続されていてもよい。 As shown in FIG. 3, the information processing device 2 may further include an input device 14 and an output device 15 in addition to the calculation device 11, the storage device 12, and the communication device 13. However, the information processing device 2 does not have to include at least one of the input device 14 and the output device 15. The calculation device 11, the storage device 12, the communication device 13, the input device 14, and the output device 15 may be connected via a data bus 16.
図3に示すように、第2実施形態における演算装置11内には、検出部211と、選択部212とに加え、受付部213と、抽出部214と、照合部215とが更に実現される。また、第2実施形態における記憶装置12内には、登録掌紋画像データベースDBが実現されていてもよい。なお、登録掌紋画像データベースDBは、情報処理装置2外の記憶装置に実現されてもよい。 As shown in FIG. 3, in addition to a detection unit 211 and a selection unit 212, a reception unit 213, an extraction unit 214, and a matching unit 215 are further implemented within the calculation device 11 in the second embodiment. Furthermore, a registered palm print image database DB may be implemented within the storage device 12 in the second embodiment. The registered palm print image database DB may also be implemented in a storage device external to the information processing device 2.
入力装置14は、情報処理装置2の外部からの情報処理装置2に対する情報の入力を受け付ける装置である。例えば、入力装置14は、情報処理装置2のオペレータが操作可能な操作装置(例えば、キーボード、マウス及びタッチパネルのうちの少なくとも一つ)を含んでいてもよい。例えば、入力装置14は情報処理装置2に対して外付け可能な記録媒体にデータとして記録されている情報を読み取り可能な読取装置を含んでいてもよい。 The input device 14 is a device that accepts information input to the information processing device 2 from outside the information processing device 2. For example, the input device 14 may include an operating device (e.g., at least one of a keyboard, mouse, and touch panel) that can be operated by an operator of the information processing device 2. For example, the input device 14 may include a reading device that can read information recorded as data on a recording medium that can be attached externally to the information processing device 2.
出力装置15は、情報処理装置2の外部に対して情報を出力する装置である。例えば、出力装置15は、情報を画像として出力してもよい。つまり、出力装置15は、出力したい情報を示す画像を表示可能な表示装置(いわゆる、ディスプレイ)を含んでいてもよい。例えば、出力装置15は、情報を音声として出力してもよい。つまり、出力装置15は、音声を出力可能な音声装置(いわゆる、スピーカ)を含んでいてもよい。例えば、出力装置15は、紙面に情報を出力してもよい。つまり、出力装置15は、紙面に所望の情報を印刷可能な印刷装置(いわゆる、プリンタ)を含んでいてもよい。 The output device 15 is a device that outputs information to the outside of the information processing device 2. For example, the output device 15 may output information as an image. That is, the output device 15 may include a display device (a so-called display) that can display an image showing the information to be output. For example, the output device 15 may output information as sound. That is, the output device 15 may include an audio device (a so-called speaker) that can output sound. For example, the output device 15 may output information on paper. That is, the output device 15 may include a printing device (a so-called printer) that can print desired information on paper.
情報処理装置2は、情報処理装置1と同様に、対象画像を選択するための装置として構成されている。さらに、情報処理装置2は、掌紋を照合する装置として構成されている。情報処理装置2は、選択された対象画像を用いて、掌紋を照合する。
[2-2:情報処理装置2が実行する情報処理方法]
The information processing device 2 is configured as a device for selecting a target image, similar to the information processing device 1. Furthermore, the information processing device 2 is configured as a device for matching palm prints. The information processing device 2 matches palm prints using the selected target image.
[2-2: Information Processing Method Executed by Information Processing Device 2]
図4を参照しながら、情報処理装置2における情報処理動作について説明する。図4は、情報処理装置2における情報処理動作の流れを示すフローチャートである。 The information processing operation in the information processing device 2 will be described with reference to Figure 4. Figure 4 is a flowchart showing the flow of the information processing operation in the information processing device 2.
図4に示す様に、受付部213は、検索対象画像Qの照合要求を受け付ける(ステップS21)。検索対象画像Qの照合要求とは、検索対象画像Qと登録掌紋画像データベースDBに登録されている登録掌紋画像Rとの照合の要求である。図5(a)は、検索対象画像Q、及び登録掌紋画像Rの一例を示している。 As shown in FIG. 4, the reception unit 213 receives a request to match a search target image Q (step S21). The request to match a search target image Q is a request to match the search target image Q with a registered palm print image R registered in the registered palm print image database DB. FIG. 5(a) shows an example of a search target image Q and a registered palm print image R.
登録掌紋画像Rは押捺掌紋の画像である。登録掌紋画像Rは掌紋の全体的な画像であることが好ましい。これに対し、検索対象画像Qは、掌紋の全体的な画像であってもよいし、掌紋の一部分の画像であってもよい。検索対象画像Qは、押捺掌紋の画像であってもよいし、遺留された掌紋の画像であってもよい。なお、押捺掌紋とは、押捺により取得された掌紋である。 The registered palm print image R is an image of an imprinted palm print. The registered palm print image R is preferably an image of the entire palm print. In contrast, the search target image Q may be an image of the entire palm print or an image of a portion of the palm print. The search target image Q may be an image of an imprinted palm print or an image of a latent (left-behind) palm print. Note that an imprinted palm print is a palm print obtained by impression.
検出部211は、演算モデルMを用いて、検索対象の検索対象画像Q及び、登録されている複数の登録掌紋画像Rの各々について、画像に含まれる特異点Sを検出する(ステップS22)。図5(b)は、検出されたクエリ特異点QS、及びターゲット特異点RSの一例を示している。 The detection unit 211 uses the computational model M to detect singular points S contained in the search target image Q and each of the multiple registered palm print images R (step S22). Figure 5(b) shows an example of the detected query singular points QS and target singular points RS.
演算モデルMは、紋様画像が入力されると、紋様画像に含まれる特異点Sを示す特異点情報を出力する。図5に例示するように、演算モデルMは、検索対象画像Qが入力されると、検索対象画像Qに含まれるクエリ特異点QSを示す特異点情報を出力する。また、演算モデルMは、登録掌紋画像Rが入力されると、登録掌紋画像Rに含まれるターゲット特異点RSを示す特異点情報を出力する。特異点情報は、特異点Sの種別、特異点Sの位置、及び特異点Sの方向の少なくとも1つを示していてもよい。 When a pattern image is input, the computational model M outputs singularity information indicating the singularity S contained in the pattern image. As illustrated in FIG. 5, when a search target image Q is input, the computational model M outputs singularity information indicating the query singularity QS contained in the search target image Q. Furthermore, when a registered palmprint image R is input, the computational model M outputs singularity information indicating the target singularity RS contained in the registered palmprint image R. The singularity information may indicate at least one of the type of singularity S, the position of the singularity S, and the direction of the singularity S.
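The singular point information above (type, position, direction) could be represented by a small record type such as the following sketch; the class name, field names, and the three-directions rule for deltas (drawn from the description of deltas below) are illustrative assumptions, not structures taken from the disclosure.

```python
from dataclasses import dataclass

CORE = "core"    # point where the ridge flow changes abruptly
DELTA = "delta"  # point where the ridges form a triangular region

@dataclass(frozen=True)
class SingularPoint:
    kind: str          # CORE or DELTA
    position: tuple    # (x, y) coordinates in the pattern image
    directions: tuple  # one direction for a core, three for a delta

    def __post_init__(self):
        if self.kind == DELTA and len(self.directions) != 3:
            raise ValueError("a delta singular point has three directions")

core_point = SingularPoint(CORE, (120, 85), (1.57,))
delta_point = SingularPoint(DELTA, (40, 200), (0.0, 2.1, 4.2))
```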
演算モデルMは、機械学習により構築されている。演算モデルMは、教師データを用いた機械学習により構築されていてもよい。教師データは、紋様画像に、紋様画像に含まれる特異点の種別、特異点の位置、及び特異点の方向の少なくとも1つを示す正解情報を付したデータであってもよい。この場合、演算モデルMは、紋様画像が入力されると、正解情報を出力できるように学習される。演算モデルMは、紋様画像が入力されると、紋様画像に含まれる特異点の種別、特異点の位置、及び特異点の方向の少なくとも1つを推論するように構築される。 The computational model M is constructed by machine learning. The computational model M may be constructed by machine learning using training data. The training data may be data in which a pattern image is accompanied by correct answer information indicating at least one of the type of singular point contained in the pattern image, the position of the singular point, and the direction of the singular point. In this case, the computational model M is trained so that when a pattern image is input, it can output correct answer information. The computational model M is constructed so that when a pattern image is input, it can infer at least one of the type of singular point contained in the pattern image, the position of the singular point, and the direction of the singular point.
特異点Sの種別は、少なくともコア(「中心」と言い換えてもよい)と、デルタ(「三角州」と言い換えてもよい)とを含む。図6(a)に例示するように、掌紋の隆線が流れる方向が急激に変化する箇所をコアと称する。また、図6(b)に例示するように、掌紋の隆線により三角州が形成される箇所をデルタと称する。 Types of the singular point S include at least a core (which may also be called a "center") and a delta. As illustrated in Figure 6(a), a point where the direction in which the palm print ridges flow changes abruptly is called a core. As illustrated in Figure 6(b), a point where the palm print ridges form a delta (a triangular region) is called a delta.
例えば、演算モデルMは、図6(a)に例示するように、ループ状の隆線の中心Scを、コアの位置として抽出してもよい。また、演算モデルMは、図6(b)に例示するように、隆線により形成された三角州内の位置Sdを、デルタの位置として抽出してもよい。特異点Sの位置は、紋様画像における座標で表してもよい。特異点Sの位置は、掌紋領域における相対的な位置であってもよい。 For example, as shown in FIG. 6(a), the computational model M may extract the center Sc of the looped ridge as the position of the core. Furthermore, as shown in FIG. 6(b), the computational model M may extract a position Sd within a delta formed by the ridge as the position of the delta. The position of the singular point S may be represented by coordinates in the pattern image. The position of the singular point S may also be a relative position in the palm print region.
特異点Sの方向は、特異点Sの周囲の隆線の方向に応じて定めてもよい。例えば、演算モデルMは、図5(a)に例示する矢印のように、特異点Sの方向を推論してもよい。また、演算モデルMは、図5(b)に例示する矢印のように、特異点Sの方向を推論してもよい。すなわち、演算モデルMは、特異点Sの種別がデルタの場合、3つの方向を推論してもよい。 The direction of the singularity point S may be determined according to the direction of the ridges surrounding the singularity point S. For example, the computational model M may infer the direction of the singularity point S as shown by the arrow in Figure 5(a). The computational model M may also infer the direction of the singularity point S as shown by the arrow in Figure 5(b). In other words, when the type of the singularity point S is delta, the computational model M may infer three directions.
検出部211は、特異点情報に基づき、検出した特異点の種別の特定、特異点の位置の特定、及び特異点の方向の特定の少なくとも何れかを行う。 Based on the singularity information, the detection unit 211 performs at least one of the following: identifying the type of the detected singularity, identifying the position of the singularity, and identifying the direction of the singularity.
なお、検出部211は、クエリ特異点とターゲット特異点とを、異なるタイミングで検出してもよい。例えば、検出部211は、ターゲット特異点を、登録掌紋画像Rが登録掌紋画像データベースDBに登録されるタイミングで検出してもよい。この場合、登録掌紋画像データベースDBには、登録掌紋画像Rに対応付けてターゲット特異点を示す情報が登録されていてもよい。 Note that the detection unit 211 may detect the query singularity and the target singularity at different times. For example, the detection unit 211 may detect the target singularity at the time when the registered palm print image R is registered in the registered palm print image database DB. In this case, information indicating the target singularity may be registered in association with the registered palm print image R in the registered palm print image database DB.
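The registration-time detection described here can be sketched as a simple in-memory database that stores each registered image together with its precomputed target singular points, so that matching-time requests reuse them. The class, method names, and record layout are assumptions made for illustration.

```python
# Sketch: detect target singular points once, when a palm print image is
# registered, and store them alongside the image (as suggested above).

class RegisteredPalmPrintDB:
    def __init__(self, detector):
        self._detect = detector   # stand-in for computational model M
        self._records = {}

    def register(self, image_id, image):
        # Singular points are computed at registration time, so later
        # matching requests do not re-run detection on this image.
        self._records[image_id] = {
            "image": image,
            "singular_points": self._detect(image),
        }

    def lookup(self, image_id):
        return self._records[image_id]

# Toy detector: report coordinates of nonzero pixels.
detector = lambda img: [(x, y) for y, row in enumerate(img)
                               for x, v in enumerate(row) if v]
db = RegisteredPalmPrintDB(detector)
db.register("R-001", [[1, 0], [0, 1]])
record = db.lookup("R-001")
```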
選択部212は、特異点Sに基づいて、複数の登録掌紋画像Rから、検索対象画像Qと照合される対象画像を選択する(ステップS23)。選択部212は、複数の対象画像を選択してもよい。選択部212は、複数の登録掌紋画像Rから、照合の対象とする必要のないものを除外すると言い換えてもよい。選択部212は、複数の登録掌紋画像Rから、照合の対象とする必要のあるものを残すと言い換えてもよい。 The selection unit 212 selects a target image to be matched with the search target image Q from the multiple registered palm print images R based on the singular points S (step S23). The selection unit 212 may select multiple target images. In other words, the selection unit 212 excludes from the multiple registered palm print images R those that do not need to be used as targets for matching. In other words, the selection unit 212 retains from the multiple registered palm print images R those that need to be used as targets for matching.
すなわち、選択部212は、ステップS24以降の処理の対象を削減する。選択部212は、ステップS24以降の処理の対象を絞ると言い換えてもよい。また、情報処理装置2は、特徴点照合の前にフィルタリング処理を実施すると言い換えてもよい。 In other words, the selection unit 212 reduces the targets of processing from step S24 onwards. In other words, the selection unit 212 narrows down the targets of processing from step S24 onwards. In other words, the information processing device 2 performs filtering processing before feature point matching.
抽出部214は、検索対象画像Q、及び対象画像の各々から特徴点を抽出する(ステップS24)。抽出部214は、特徴点として、掌紋の隆線の端点、分岐点を抽出してもよい。 The extraction unit 214 extracts feature points from the search target image Q and each of the target images (step S24). The extraction unit 214 may extract the end points and branch points of palm print ridges as feature points.
照合部215は、特徴点に基づき検索対象画像Qと対象画像とを照合する(ステップS25)。例えば、照合部215は、検索対象画像Qに含まれる特徴点の位置関係と対象画像に含まれる特徴点の位置関係との類似度を求め、検索対象画像Qと対象画像とを照合してもよい。
[2-3:情報処理装置2の技術的効果]
The matching unit 215 matches the search target image Q with the target image based on the feature points (step S25). For example, the matching unit 215 may find a similarity between the positional relationship of the feature points included in the search target image Q and the positional relationship of the feature points included in the target image, and match the search target image Q with the target image.
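One way the positional-relationship similarity of step S25 might be computed is sketched below. The tolerance and the scoring rule are illustrative, since the disclosure leaves the concrete matching method open.

```python
import math

def match_score(query_points, target_points, tol=5.0):
    """Fraction of query feature points (e.g., ridge endings and
    bifurcations) that have a target feature point within `tol` pixels.
    A purely illustrative similarity; practical matchers also compare
    angles, ridge counts, and other relationships."""
    if not query_points:
        return 0.0
    hits = sum(
        1 for qx, qy in query_points
        if any(math.hypot(qx - tx, qy - ty) <= tol
               for tx, ty in target_points)
    )
    return hits / len(query_points)

# One of the two query points has a nearby target point.
score = match_score([(10, 10), (50, 40)], [(12, 11), (90, 90)])
```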
[2-3: Technical Effects of Information Processing Device 2]
押捺掌紋は、指紋に比べて、照合対象となる部分が多い。このため、掌紋の照合は、指紋の照合と比較して、演算コストが大きくなる場合が多い。 Compared to fingerprints, palm prints have a larger area to be matched. For this reason, palm print matching often requires higher computational costs than fingerprint matching.
この開示にかかる情報処理装置2は、特異点検出を行い、照合対象を削減するので、照合対象を削減しない場合と比べて、照合における演算コストを削減することができる。また、情報処理装置2は、機械学習により構築された演算モデルを用いて、対象画像の選択に有用な特異点を検出することができる。
[3:第3実施形態]
The information processing device 2 according to this disclosure detects singularities and reduces the number of matching targets, thereby reducing the computational cost for matching compared to when the number of matching targets is not reduced. Furthermore, the information processing device 2 can detect singularities that are useful for selecting target images using a computational model constructed by machine learning.
[3: Third embodiment]
情報処理装置、情報処理方法、及び、記録媒体の第3実施形態について説明する。以下では、この開示にかかる情報処理装置3を用いて、情報処理装置、情報処理方法、及び記録媒体の第3実施形態について説明する。
[3-1:情報処理装置3が実行する情報処理方法]
A third embodiment of an information processing device, an information processing method, and a recording medium will be described. Hereinafter, a third embodiment of an information processing device, an information processing method, and a recording medium will be described using an information processing device 3 according to this disclosure.
[3-1: Information processing method executed by information processing device 3]
情報処理装置3は、情報処理装置1、及び情報処理装置2と同様に、対象画像を選択するための装置として構成されている。さらに、情報処理装置3は、情報処理装置2と同様に、掌紋を照合する装置として構成されていてもよい。第3実施形態は、第1実施形態、及び第2実施形態と、選択部312の動作が異なる。 Information processing device 3, like information processing device 1 and information processing device 2, is configured as a device for selecting a target image. Furthermore, like information processing device 2, information processing device 3 may be configured as a device for matching palm prints. The third embodiment differs from the first and second embodiments in the operation of the selection unit 312.
第3実施形態において、選択部312は、検索対象画像Qに含まれる特異点Sの位置関係と、登録掌紋画像Rに含まれる特異点Sの位置関係との比較に基づき、対象画像を選択する。図8に例示するように、選択部312は、母指球の最も近くに位置する特異点SをAとし、反時計回りに隣り合う特異点Sを順にB,C,Dとして、特異点Sの位置関係を比較してもよい。図8(a)はクエリ特異点QSの一例を示し、図8(b)はターゲット特異点RSの一例を示している。 In the third embodiment, the selection unit 312 selects a target image based on a comparison between the positional relationship of the singular points S included in the search target image Q and the positional relationship of the singular points S included in the registered palm print image R. As illustrated in FIG. 8, the selection unit 312 may designate the singular point S located closest to the thenar eminence (the ball of the thumb) as A, and the adjacent singular points S in a counterclockwise direction as B, C, and D, and compare the positional relationships of the singular points S. FIG. 8(a) shows an example of query singular points QS, and FIG. 8(b) shows an example of target singular points RS.
例えば、選択部312は、検索対象画像Qに含まれるクエリ特異点QSの位置関係と、登録掌紋画像Rに含まれるターゲット特異点RSの位置関係との類似度に基づき、対象画像を選択してもよい。選択部312は、検索対象画像Qに含まれるクエリ特異点QSの位置関係と登録掌紋画像Rに含まれるターゲット特異点RSの位置関係との類似度が、基準よりも大きい場合は登録掌紋画像Rを対象画像として選択し、基準よりも小さい場合は登録掌紋画像Rを対象画像から除外してもよい。 For example, the selection unit 312 may select a target image based on the similarity between the positional relationship of the query singular points QS included in the search target image Q and the positional relationship of the target singular points RS included in the registered palmprint image R. If the similarity between the positional relationship of the query singular points QS included in the search target image Q and the positional relationship of the target singular points RS included in the registered palmprint image R is greater than a criterion, the selection unit 312 may select the registered palmprint image R as the target image, and if it is less than the criterion, exclude the registered palmprint image R from the target images.
具体的に、選択部312は、検索対象画像Qと登録掌紋画像Rとの対応する特異点Sの、2点の間の距離の差、及び、3点からなる三角形の角度の各々の差の少なくとも一部に基づき、対象画像を選択してもよい。図8(c)は検索対象画像Qにおける2点の間の距離、3点からなる三角形の角度の一例を示し、図8(d)は登録掌紋画像Rにおける2点の間の距離、3点からなる三角形の角度の一例を示している。例えば、選択部312は、対応する特異点S間の距離の差が所定値未満、かつ、対応する特異点Sからなる角度が所定値未満の登録掌紋画像Rを、対象画像として選択してもよい。また、選択部312は、対応する特異点Sのx座標の差が所定値未満、かつ、対応する特異点Sのy座標の差が所定値未満、かつ、対応する特異点Sからなる角度が所定値未満の登録掌紋画像Rを、対象画像として選択してもよい。または、選択部312は、対応する特異点S間の距離の差が所定値以上の場合、又は、対応する特異点Sからなる角度が所定値以上の場合、該当する登録掌紋画像Rを照合の対象から除外してもよい。 Specifically, the selection unit 312 may select a target image based on at least a portion of the difference in distance between two corresponding singular points S between the search target image Q and the registered palm print image R, and the difference in each of the angles of a triangle formed by three points. FIG. 8(c) shows an example of the distance between two points and the angle of a triangle formed by three points in the search target image Q, and FIG. 8(d) shows an example of the distance between two points and the angle of a triangle formed by three points in the registered palm print image R. For example, the selection unit 312 may select as the target image a registered palm print image R in which the difference in distance between corresponding singular points S is less than a predetermined value and the angle formed by the corresponding singular points S is less than a predetermined value. The selection unit 312 may also select as the target image a registered palm print image R in which the difference in x coordinates of corresponding singular points S is less than a predetermined value, the difference in y coordinates of corresponding singular points S is less than a predetermined value, and the angle formed by the corresponding singular points S is less than a predetermined value. 
Alternatively, if the difference in distance between corresponding singular points S is equal to or greater than a predetermined value, or if the angle formed by corresponding singular points S is equal to or greater than a predetermined value, the selection unit 312 may exclude the corresponding registered palm print image R from being subjected to matching.
または、選択部312は、検索対象画像Qに含まれる特異点Sの全てについて、2点の間の距離の差、及び、3点からなる三角形の角度の各々の差を求め、差を累積し、累積された差が所定値未満の登録掌紋画像Rを、対象画像として選択してもよい。選択部312は、特異点Sの位置関係が検索対象画像Qと明らかに異なる登録掌紋画像Rを照合対象から除外することができればよい。 Alternatively, the selection unit 312 may calculate the difference in distance between two points and the difference in each angle of a triangle formed by three points for all of the singular points S included in the search target image Q, accumulate the differences, and select as the target image a registered palm print image R for which the accumulated difference is less than a predetermined value. The selection unit 312 only needs to be able to exclude from the matching target a registered palm print image R whose positional relationship of the singular points S is clearly different from that of the search target image Q.
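The distance-and-angle accumulation described in this passage might be sketched as follows. It assumes the query and target singular points have already been put into a common order (for example A, B, C, D as above); the threshold value and the equal-weight accumulation of distance and angle differences are illustrative assumptions, not values from the disclosure.

```python
import math
from itertools import combinations

def pair_distances(points):
    """Distance between every pair of singular points."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def triangle_angles(points):
    """Interior angle at each vertex of every 3-point triangle."""
    def angle_at(v, p, q):
        d1 = (p[0] - v[0], p[1] - v[1])
        d2 = (q[0] - v[0], q[1] - v[1])
        dot = d1[0] * d2[0] + d1[1] * d2[1]
        norm = math.hypot(*d1) * math.hypot(*d2)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return [angle_at(v, p, q)
            for a, b, c in combinations(points, 3)
            for v, p, q in ((a, b, c), (b, a, c), (c, a, b))]

def accumulated_difference(query_pts, target_pts):
    """Sum of |distance differences| and |angle differences| over
    corresponding pairs/triangles (points assumed in matching order)."""
    dist_diff = sum(abs(a - b) for a, b in
                    zip(pair_distances(query_pts), pair_distances(target_pts)))
    ang_diff = sum(abs(a - b) for a, b in
                   zip(triangle_angles(query_pts), triangle_angles(target_pts)))
    return dist_diff + ang_diff

def is_candidate(query_pts, target_pts, threshold=30.0):
    """Keep the registered image only when the accumulated difference is
    below the (illustrative) threshold."""
    return accumulated_difference(query_pts, target_pts) < threshold

q = [(0, 0), (10, 0), (0, 10)]
keep = is_candidate(q, [(0, 0), (10, 0), (0, 10)])   # identical layout
drop = is_candidate(q, [(0, 0), (40, 0), (0, 5)])    # clearly different
```

A registered image whose singular-point layout clearly differs from the query accumulates a large difference and is excluded, matching the filtering intent described above.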
さらに、選択部312は、特異点Sの方向を用いてもよい。例えば、選択部312は、距離と角度の制限に、検索対象画像Qと登録掌紋画像Rとの対応する特異点Sの方向の差が所定値未満か否かの制限を加え、登録掌紋画像Rを、対象画像として選択してもよい。 Furthermore, the selection unit 312 may use the direction of the singular point S. For example, the selection unit 312 may add a restriction to the distance and angle restrictions, such as whether the difference in direction of the corresponding singular point S between the search target image Q and the registered palm print image R is less than a predetermined value, and select the registered palm print image R as the target image.
また、選択部312は、検索対象画像Qと登録掌紋画像Rとの対応する位置に存在する特異点の種別の比較を行ってもよい。例えば、選択部312は、検索対象画像Qと登録掌紋画像Rとの対応する位置に存在する特異点Sであっても、種別が異なる場合(例えば、一方がコアであり、他方がデルタである場合等)は、特異点Sは対応しないと判断してもよい。 The selection unit 312 may also compare the types of singular points present at corresponding positions between the search target image Q and the registered palm print image R. For example, the selection unit 312 may determine that singular points S do not correspond even if they exist at corresponding positions between the search target image Q and the registered palm print image R, if the types are different (for example, one is a core and the other is a delta).
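The type check above can be expressed as an extra condition on top of positional proximity. The dictionary layout, type labels, and tolerance below are assumptions made for illustration only.

```python
import math

def corresponds(q_point, r_point, pos_tol=15.0):
    """Two singular points count as corresponding only when they lie at
    (approximately) the same position AND share the same type: a core in
    the query does not correspond to a delta in the registered image,
    even at a matching position."""
    close = math.dist(q_point["pos"], r_point["pos"]) < pos_tol
    same_type = q_point["type"] == r_point["type"]
    return close and same_type
```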
なお、登録掌紋画像Rにおける2点のターゲット特異点RSの間の距離、及び、登録掌紋画像Rにおける3点のターゲット特異点RSからなる三角形の角度の各々は、事前に求められていてもよい。この場合、距離、及び角度を示す情報は、登録掌紋画像Rに対応付けられて、登録掌紋画像データベースDBに登録されていてもよい。
[3-2:情報処理装置3の技術的効果]
It should be noted that the distance between two target singular points RS in the registered palm print image R and the angle of a triangle formed by three target singular points RS in the registered palm print image R may be determined in advance. In this case, information indicating the distance and angle may be associated with the registered palm print image R and registered in the registered palm print image database DB.
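The registration-time precomputation mentioned above might look like the following sketch, where a plain dictionary stands in for the registered palm print image database DB. Only pairwise distances are shown; triangle angles could be stored the same way. The schema and names are assumptions.

```python
import math
from itertools import combinations

def precompute_geometry(points):
    """Compute the distances between pairs of target singular points RS once,
    at registration time, so the screening step only has to compute the
    query-side geometry at search time."""
    dists = {(i, j): math.dist(points[i], points[j])
             for i, j in combinations(range(len(points)), 2)}
    return {"points": points, "pair_distances": dists}

registered_db = {}  # stand-in for the registered palm print image database DB

def register(image_id, singular_points):
    # Store the singular points together with their precomputed distances.
    registered_db[image_id] = precompute_geometry(singular_points)
```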
[3-2: Technical Effects of Information Processing Device 3]
この開示にかかる情報処理装置3は、全体的に検索対象画像Qとは類似しない登録掌紋画像Rを照合の対象外とするので、照合の処理負荷を小さくしながらも、照合の精度を維持することができる。
[4:第4実施形態]
The information processing device 3 disclosed herein excludes registered palm print images R that are not similar overall to the search target image Q from the matching target, thereby reducing the processing load for matching while maintaining matching accuracy.
[4: Fourth embodiment]
情報処理装置、情報処理方法、及び、記録媒体の第4実施形態について説明する。以下では、この開示にかかる情報処理装置4を用いて、情報処理装置、情報処理方法、及び記録媒体の第4実施形態について説明する。
[4-1:情報処理装置4が実行する情報処理方法]
A fourth embodiment of the information processing device, information processing method, and recording medium will be described below, using the information processing device 4 according to this disclosure.
[4-1: Information processing method executed by information processing device 4]
情報処理装置4は、情報処理装置1から情報処理装置3と同様に、対象画像を選択するための装置として構成されている。さらに、情報処理装置4は、情報処理装置2、及び情報処理装置3と同様に、掌紋を照合する装置として構成されていてもよい。第4実施形態は、第1実施形態から第3実施形態と、選択部412の動作が異なる。 Like the information processing devices 1 through 3, the information processing device 4 is configured as a device for selecting a target image. Furthermore, like the information processing devices 2 and 3, the information processing device 4 may be configured as a device for matching palm prints. The fourth embodiment differs from the first through third embodiments in the operation of the selection unit 412.
第4実施形態において、選択部412は、検索対象画像Qに含まれる何れかの特異点の種別に基づいて、対象画像を選択する。選択部412は、検索対象画像Qに含まれる何れかの特異点の種別、及び方向に基づいて、対象画像を選択してもよい。選択部412は、検索対象画像Qに含まれる1の特異点の種別、及び方向に基づいて、対象画像を選択してもよい。
[a:検索対象画像Qが掌紋の部分の画像である場合]
In the fourth embodiment, the selection unit 412 selects a target image based on the type of any singular point included in the search target image Q. The selection unit 412 may select a target image based on the type and direction of any singular point included in the search target image Q. The selection unit 412 may select a target image based on the type and direction of one singular point included in the search target image Q.
[a: When the search target image Q is an image of a palm print portion]
第4実施形態において、検索対象画像Qは、遺留された掌紋の画像等の掌紋の一部を含む画像であってもよい。図10は、選択部412の動作の概要を例示する。図10(a)は、第4実施形態における検索対象画像Qを例示している。図10(a)に例示される検索対象画像Qは、1つのコアSを含んでいる。 In the fourth embodiment, the search target image Q may be an image including part of a palm print, such as an image of a latent palm print that has been left behind. FIG. 10 illustrates an outline of the operation of the selection unit 412. FIG. 10(a) illustrates the search target image Q in the fourth embodiment; the search target image Q illustrated in FIG. 10(a) includes one core S.
図10(b)及び図10(c)は、登録掌紋画像Rを例示している。図10(b)に例示する登録掌紋画像Rbは、1つのコアb、及び4つのデルタを含んでいる。また、図10(c)に例示する登録掌紋画像Rcは、2つのコアc1,c2、及び2つのデルタを含んでいる。 FIGS. 10(b) and 10(c) show examples of registered palm print images R. The registered palm print image Rb shown in FIG. 10(b) includes one core b and four deltas. The registered palm print image Rc shown in FIG. 10(c) includes two cores c1 and c2 and two deltas.
図10(b)に例示する場合であれば、第4実施形態における選択部412は、登録掌紋画像Rbにおけるコアbの所定の周辺領域Abを対象画像として選択してもよい。また、図10(c)に例示する場合であれば、第4実施形態における選択部412は、登録掌紋画像Rcにおけるコアc1の所定の周辺領域Ac1、及び登録掌紋画像Rcにおけるコアc2の所定の周辺領域Ac2の各々を対象画像として選択してもよい。つまり、第4実施形態における選択部412は、対象画像として、登録掌紋画像Rの部分領域を選択する。選択部412は、照合の対象とする領域を限定すると言い換えてもよい。 In the case illustrated in FIG. 10(b), the selection unit 412 in the fourth embodiment may select, as the target image, a predetermined peripheral area Ab of the core b in the registered palm print image Rb. In the case illustrated in FIG. 10(c), the selection unit 412 may select, as the target image, each of a predetermined peripheral area Ac1 of the core c1 and a predetermined peripheral area Ac2 of the core c2 in the registered palm print image Rc. In other words, the selection unit 412 in the fourth embodiment selects a partial region of the registered palm print image R as the target image; put differently, it limits the region subjected to matching.
また、図10に例示するように、選択部412は、検索対象画像Qの特異点Sの方向に合わせて、特異点Sの所定の周辺領域を対象画像として選択してもよい。 Also, as illustrated in FIG. 10, the selection unit 412 may select a predetermined peripheral area of the singular point S as the target image, in accordance with the direction of the singular point S of the search target image Q.
所定の周辺領域Aは、特異点Sと関連し得る領域であってもよい。所定の周辺領域は、特異点Sの影響を受け得る領域であってもよい。
[b:検索対象画像Qが掌紋の全体の画像である場合]
The predetermined surrounding area A may be an area that may be associated with the singular point S. The predetermined surrounding area A may be an area that may be influenced by the singular point S.
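Cropping the predetermined peripheral region around a core can be sketched as below. The window size is an assumption, and the rotation that would align the patch with the direction of the query's singular point is omitted for brevity and only noted in the docstring.

```python
import numpy as np

def crop_around_point(image, center, size=128):
    """Return a size x size patch of `image` centred on `center` (x, y),
    zero-padded where the window extends past the image border.
    Rotating the patch to match the direction of the query's singular
    point would be an additional step, omitted here."""
    h, w = image.shape[:2]
    x, y = int(round(center[0])), int(round(center[1]))
    half = size // 2
    patch = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    patch[y0 - (y - half):y1 - (y - half),
          x0 - (x - half):x1 - (x - half)] = image[y0:y1, x0:x1]
    return patch
```

For a registered image like the one in FIG. 10(c), this would be called once per core, yielding one candidate patch per peripheral area (Ac1, Ac2), each matched against the query separately.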
[b: When the search target image Q is an image of the entire palm print]
または、選択部412は、例えば図8(a)に例示するような掌紋の全体の画像である検索対象画像Qから、特異点Sの所定の周辺領域を切り取り、特異点Sの種別、及び特異点Sの方向に基づいて、登録掌紋画像Rの部分領域を切り取り、対象画像として選択してもよい。 Alternatively, the selection unit 412 may cut out a predetermined area surrounding the singular point S from the search target image Q, which is an image of the entire palm print, as shown in FIG. 8(a), and then cut out a partial area of the registered palm print image R based on the type and direction of the singular point S, and select it as the target image.
なお、第4実施形態においても、第3実施形態における選択部312による選択動作を実施してもよい。例えば、特異点Sの位置関係の比較に基づき登録掌紋画像Rを選択し、選択した登録掌紋画像Rから部分領域を切り取り、対象画像として選択してもよい。
[4-2:情報処理装置4の技術的効果]
In the fourth embodiment, the selection operation by the selection unit 312 in the third embodiment may also be performed. For example, a registered palm print image R may be selected based on a comparison of the positional relationships of the singular points S, and a partial region may be cut out from the selected registered palm print image R and selected as the target image.
[4-2: Technical Effects of Information Processing Device 4]
この開示にかかる情報処理装置4は、検索対象画像Qが掌紋の部分の画像の場合、同様な特徴を有している可能性の高い部分領域を、特異点に基づいて選択することができる。また、情報処理装置4は、検索対象画像Qが掌紋の全体の画像である場合にも、例えば、照合に有用な部分領域を選択することができる。このように、情報処理装置4は、照合の処理負荷を小さくしながらも、照合の精度を維持することができる。
[5:付記]
When the search target image Q is an image of a portion of a palm print, the information processing device 4 according to this disclosure can select, based on singular points, partial regions that are likely to have similar features. Furthermore, even when the search target image Q is an image of the entire palm print, the information processing device 4 can select, for example, partial regions that are useful for matching. In this way, the information processing device 4 can maintain matching accuracy while reducing the processing load of matching.
[5: Supplementary Note]
以上説明した実施形態に関して、更に以下の付記のようにも記載されうるが、以下には限られない。
[付記1]
演算モデルを用いて、検索対象の検索対象画像及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点を検出する検出手段と、
前記特異点に基づいて、前記複数の登録紋様画像から、前記検索対象画像と照合される対象画像を選択する選択手段と
を備え、
前記演算モデルは、紋様画像が入力されると、前記紋様画像に含まれる特異点を示す特異点情報を出力する
情報処理装置。
[付記2]
前記演算モデルは、機械学習に基づき構築されている
付記1に記載の情報処理装置。
[付記3]
前記紋様画像は掌紋画像である
付記1に記載の情報処理装置。
[付記4]
前記演算モデルは、前記紋様画像が入力されると、前記紋様画像に含まれる前記特異点の種別、前記特異点の位置、及び前記特異点の方向の少なくとも1つを推論する
付記1に記載の情報処理装置。
[付記5]
前記特異点情報は、前記特異点の種別、前記特異点の位置、及び前記特異点の方向の少なくとも1つを示す
付記1に記載の情報処理装置。
[付記6]
前記検出手段は、前記特異点情報に基づき、前記特異点の種別の特定、前記特異点の位置の特定、及び前記特異点の方向の特定の少なくとも何れかを行う
付記5に記載の情報処理装置。
[付記7]
前記選択手段は、前記検索対象画像に含まれる特異点の位置関係と、前記登録紋様画像に含まれる特異点の位置関係との比較に基づき、前記対象画像を選択する
付記1に記載の情報処理装置。
[付記8]
前記選択手段は、前記検索対象画像に含まれる特異点の位置関係と、前記登録紋様画像に含まれる特異点の位置関係との類似度に基づき、前記対象画像を選択する
付記1に記載の情報処理装置。
[付記9]
前記選択手段は、前記検索対象画像と前記登録紋様画像との対応する2点の特異点の間の距離の差、及び、前記検索対象画像と前記登録紋様画像との対応する3点からなる三角形の角度の各々の差の少なくとも一部に基づき、前記対象画像を選択する
付記1に記載の情報処理装置。
[付記10]
前記選択手段は、前記検索対象画像と前記登録紋様画像との対応する特異点の種別の比較に基づき、前記対象画像を選択する
付記1に記載の情報処理装置。
[付記11]
前記選択手段は、前記検索対象画像に含まれる何れかの特異点の種別に基づいて、前記登録紋様画像の部分領域を選択する
付記1に記載の情報処理装置。
[付記12]
前記選択手段は、前記検索対象画像に含まれる何れかの特異点の種別、及び方向に基づいて、前記登録紋様画像の部分領域を選択する
付記1に記載の情報処理装置。
[付記13]
前記検索対象画像、及び前記対象画像の各々から特徴点を抽出する抽出手段と、
前記特徴点に基づき前記検索対象画像と前記対象画像とを照合する照合手段と
を備える付記1に記載の情報処理装置。
[付記14]
コンピュータが実行する、
演算モデルを用いて、検索対象の検索対象画像及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点を検出し、
前記特異点に基づいて、前記複数の登録紋様画像から、前記検索対象画像と照合される対象画像を選択する
情報処理方法であり、
前記演算モデルは、紋様画像が入力されると、前記紋様画像に含まれる特異点を示す特異点情報を出力する
情報処理方法。
[付記15]
演算モデルを用いて、検索対象の検索対象画像及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点を検出し、
前記特異点に基づいて、前記複数の登録紋様画像から、前記検索対象画像と照合される対象画像を選択する
情報処理方法であり、
前記演算モデルは、紋様画像が入力されると、前記紋様画像に含まれる特異点を示す特異点情報を出力する
情報処理方法をコンピュータに実行させるコンピュータプログラムが記録された記録媒体。
The above-described embodiment may be further described as follows, but is not limited to the following.
[Appendix 1]
An information processing device comprising:
a detection means for detecting, using a computational model, singular points included in a search target image and in each of a plurality of registered pattern images; and
a selection means for selecting, based on the singular points, a target image to be matched with the search target image from the plurality of registered pattern images,
wherein, when a pattern image is input, the computational model outputs singular point information indicating the singular points included in the pattern image.
[Appendix 2]
The information processing device according to Appendix 1, wherein the computational model is constructed based on machine learning.
[Appendix 3]
The information processing device according to Appendix 1, wherein the pattern image is a palm print image.
[Appendix 4]
The information processing device according to Appendix 1, wherein, when the pattern image is input, the computational model infers at least one of the type of the singular point included in the pattern image, the position of the singular point, and the direction of the singular point.
[Appendix 5]
The information processing device according to Appendix 1, wherein the singular point information indicates at least one of the type of the singular point, the position of the singular point, and the direction of the singular point.
[Appendix 6]
The information processing device according to Appendix 5, wherein the detection means performs, based on the singular point information, at least one of identifying the type of the singular point, identifying the position of the singular point, and identifying the direction of the singular point.
[Appendix 7]
The information processing device according to Appendix 1, wherein the selection means selects the target image based on a comparison between the positional relationship of singular points included in the search target image and the positional relationship of singular points included in the registered pattern image.
[Appendix 8]
The information processing device according to Appendix 1, wherein the selection means selects the target image based on the degree of similarity between the positional relationship of singular points included in the search target image and the positional relationship of singular points included in the registered pattern image.
[Appendix 9]
The information processing device according to Appendix 1, wherein the selection means selects the target image based on at least some of the differences in distance between two corresponding singular points in the search target image and the registered pattern image, and the differences in the angles of a triangle formed by three corresponding points in the search target image and the registered pattern image.
[Appendix 10]
The information processing device according to Appendix 1, wherein the selection means selects the target image based on a comparison of the types of corresponding singular points in the search target image and the registered pattern image.
[Appendix 11]
The information processing device according to Appendix 1, wherein the selection means selects a partial region of the registered pattern image based on the type of any singular point included in the search target image.
[Appendix 12]
The information processing device according to Appendix 1, wherein the selection means selects a partial region of the registered pattern image based on the type and direction of any singular point included in the search target image.
[Appendix 13]
The information processing device according to Appendix 1, further comprising:
an extraction means for extracting feature points from each of the search target image and the target image; and
a matching means for matching the search target image with the target image based on the feature points.
[Appendix 14]
An information processing method executed by a computer, the method comprising:
detecting, using a computational model, singular points included in a search target image and in each of a plurality of registered pattern images; and
selecting, based on the singular points, a target image to be matched with the search target image from the plurality of registered pattern images,
wherein, when a pattern image is input, the computational model outputs singular point information indicating the singular points included in the pattern image.
[Appendix 15]
A recording medium on which is recorded a computer program that causes a computer to execute an information processing method comprising:
detecting, using a computational model, singular points included in a search target image and in each of a plurality of registered pattern images; and
selecting, based on the singular points, a target image to be matched with the search target image from the plurality of registered pattern images,
wherein, when a pattern image is input, the computational model outputs singular point information indicating the singular points included in the pattern image.
この開示は、請求の範囲及び明細書全体から読み取ることのできる発明の要旨又は思想に反しない範囲で適宜変更可能であり、そのような変更を伴う情報処理装置、情報処理方法、及び記録媒体もまたこの開示の技術思想に含まれる。 This disclosure may be modified as appropriate within the scope of the claims and the spirit or concept of the invention as can be read from the entire specification, and information processing devices, information processing methods, and recording media incorporating such modifications are also included within the technical concept of this disclosure.
1,2,3,4 情報処理装置
111,211 検出部
112,212,312,412 選択部
213 受付部
214 抽出部
215 照合部
DB 登録掌紋画像データベース
Q 検索対象画像
R 登録掌紋画像
S 特異点
QS クエリ特異点
RS ターゲット特異点
1, 2, 3, 4 Information processing device
111, 211 Detection unit
112, 212, 312, 412 Selection unit
213 Reception unit
214 Extraction unit
215 Matching unit
DB Registered palm print image database
Q Search target image
R Registered palm print image
S Singular point
QS Query singular point
RS Target singular point
Claims (15)
演算モデルを用いて、検索対象の検索対象画像及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点を検出する検出手段と、
前記特異点に基づいて、前記複数の登録紋様画像から、前記検索対象画像と照合される対象画像を選択する選択手段と
を備え、
前記演算モデルは、紋様画像が入力されると、前記紋様画像に含まれる特異点を示す特異点情報を出力する
情報処理装置。 An information processing device comprising:
a detection means for detecting, using a computational model, singular points included in a search target image and in each of a plurality of registered pattern images; and
a selection means for selecting, based on the singular points, a target image to be matched with the search target image from the plurality of registered pattern images,
wherein, when a pattern image is input, the computational model outputs singular point information indicating the singular points included in the pattern image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the computational model is constructed based on machine learning.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the pattern image is a palm print image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein, when the pattern image is input, the computational model infers at least one of the type of the singular point included in the pattern image, the position of the singular point, and the direction of the singular point.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the singular point information indicates at least one of the type of the singular point, the position of the singular point, and the direction of the singular point.
請求項5に記載の情報処理装置。 The information processing device according to claim 5, wherein the detection means performs, based on the singular point information, at least one of identifying the type of the singular point, identifying the position of the singular point, and identifying the direction of the singular point.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the selection means selects the target image based on a comparison between the positional relationship of singular points included in the search target image and the positional relationship of singular points included in the registered pattern image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the selection means selects the target image based on the degree of similarity between the positional relationship of singular points included in the search target image and the positional relationship of singular points included in the registered pattern image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the selection means selects the target image based on at least some of the differences in distance between two corresponding singular points in the search target image and the registered pattern image, and the differences in the angles of a triangle formed by three corresponding points in the search target image and the registered pattern image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the selection means selects the target image based on a comparison of the types of corresponding singular points in the search target image and the registered pattern image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the selection means selects a partial region of the registered pattern image based on the type of any singular point included in the search target image.
請求項1に記載の情報処理装置。 The information processing device according to claim 1, wherein the selection means selects a partial region of the registered pattern image based on the type and direction of any singular point included in the search target image.
前記特徴点に基づき前記検索対象画像と前記対象画像とを照合する照合手段と
を備える請求項1に記載の情報処理装置。 The information processing device according to claim 1, further comprising:
an extraction means for extracting feature points from each of the search target image and the target image; and
a matching means for matching the search target image with the target image based on the feature points.
コンピュータが実行する、
演算モデルを用いて、検索対象の検索対象画像及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点を検出し、
前記特異点に基づいて、前記複数の登録紋様画像から、前記検索対象画像と照合される対象画像を選択する
情報処理方法であり、
前記演算モデルは、紋様画像が入力されると、前記紋様画像に含まれる特異点を示す特異点情報を出力する
情報処理方法。 An information processing method executed by a computer, the method comprising:
detecting, using a computational model, singular points included in a search target image and in each of a plurality of registered pattern images; and
selecting, based on the singular points, a target image to be matched with the search target image from the plurality of registered pattern images,
wherein, when a pattern image is input, the computational model outputs singular point information indicating the singular points included in the pattern image.
演算モデルを用いて、検索対象の検索対象画像及び、登録されている複数の登録紋様画像の各々について、画像に含まれる特異点を検出し、
前記特異点に基づいて、前記複数の登録紋様画像から、前記検索対象画像と照合される対象画像を選択する
情報処理方法であり、
前記演算モデルは、紋様画像が入力されると、前記紋様画像に含まれる特異点を示す特異点情報を出力する
情報処理方法をコンピュータに実行させるコンピュータプログラムが記録された記録媒体。 A recording medium on which is recorded a computer program that causes a computer to execute an information processing method comprising:
detecting, using a computational model, singular points included in a search target image and in each of a plurality of registered pattern images; and
selecting, based on the singular points, a target image to be matched with the search target image from the plurality of registered pattern images,
wherein, when a pattern image is input, the computational model outputs singular point information indicating the singular points included in the pattern image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2024/004051 WO2025169329A1 (en) | 2024-02-07 | 2024-02-07 | Information processing device, information processing method, and recording medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025169329A1 true WO2025169329A1 (en) | 2025-08-14 |
Family
ID=96699368
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2024/004051 Pending WO2025169329A1 (en) | 2024-02-07 | 2024-02-07 | Information processing device, information processing method, and recording medium |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025169329A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10177650A (en) * | 1996-12-16 | 1998-06-30 | Nec Corp | Device for extracting picture characteristic, device for analyzing picture characteristic, and system for collating picture |
| JP2014232373A (en) * | 2013-05-28 | 2014-12-11 | 日本電気株式会社 | Collation object extraction system, collation object extraction method, and collation object extraction program |
| JP2019121022A (en) * | 2017-12-28 | 2019-07-22 | 富士通株式会社 | Biometric authentication device, biometric authentication program, and biometric authentication method |
| JP2021532453A (en) * | 2019-06-18 | 2021-11-25 | ユーエービー “ニューロテクノロジー” | Extraction of fast and robust skin imprint markings using feedforward convolutional neural networks |
| US20220067331A1 (en) * | 2020-09-01 | 2022-03-03 | Samsung Display Co., Ltd. | Fingerprint authentication device, display device including the same, and method of authenticating fingerprint of display device |