
WO2018124372A1 - Appareil et procédé de génération de base de données de récupération de contenu visuel - Google Patents

Appareil et procédé de génération de base de données de récupération de contenu visuel Download PDF

Info

Publication number
WO2018124372A1
WO2018124372A1 PCT/KR2017/001130 KR2017001130W WO2018124372A1 WO 2018124372 A1 WO2018124372 A1 WO 2018124372A1 KR 2017001130 W KR2017001130 W KR 2017001130W WO 2018124372 A1 WO2018124372 A1 WO 2018124372A1
Authority
WO
WIPO (PCT)
Prior art keywords
attribute
image
search
database
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2017/001130
Other languages
English (en)
Korean (ko)
Inventor
김성민
윤경용
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yap Co
Original Assignee
Yap Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yap Co filed Critical Yap Co
Publication of WO2018124372A1 publication Critical patent/WO2018124372A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43: Querying
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning

Definitions

  • the present invention relates to an apparatus and method for generating a database for visual content search.
  • (Patent Document 1) Korean Patent Publication No. 10-0754157 (registered 2007.09.03)
  • the present invention extracts attribute values from images obtained by transforming visual content containing any one food into various forms, and uses the extracted attribute values to generate search comparison information to be used when searching for that food.
  • an apparatus and method for generating a database for visual content search to this end are provided.
  • the apparatus for generating a database for visual content search is an apparatus for generating a database for visual content search in which a plurality of pieces of basic information about a plurality of foods are stored. It includes an input unit that receives a deformed image corresponding to any one piece of basic information in the visual content search database, and an attribute extraction unit that, by analyzing the deformed image, extracts as attribute values for the deformed image at least one of the color of the content, the position of the content within the image, the shape of the content, and the shape of the container holding the content.
  • the apparatus further includes a data generation unit that updates an attribute training set composed of a plurality of classification attributes, each carrying a representative feature matched to a piece of basic information and distinguishing it from the others, together with the data values matched to each classification attribute, and that generates, based on the updated attribute training set, search comparison information connected to the corresponding basic information in the visual content search database.
  • the apparatus for generating a database for visual content search may further include a photographing environment setting unit that sets the shooting environment information of the camera capturing the deformed image, or receives the shooting environment information through linkage with the camera, and provides it to the attribute extraction unit; the attribute extraction unit may extract the photographing environment information provided through the photographing environment setting unit as an attribute value of the deformed image and provide it to the data generation unit.
  • the apparatus for generating a database for visual content search may further include an image storage unit that maps the deformed image to the extracted attribute values using any one piece of basic information as an index and stores the mapped data in an image database.
  • the data generation unit may generate the search comparison information for that piece of basic information through machine learning using the deformed images and attribute values stored by the image storage unit.
  • a method for generating a database for visual content search is a method for generating a database for visual content search in which a plurality of pieces of basic information about a plurality of foods are stored. The method includes receiving a deformed image corresponding to any one piece of basic information in the visual content search database; extracting, by analyzing the deformed image, attribute values of the deformed image such as the color of the content, the position of the content within the image, the shape of the content, and the shape of the container holding the content; updating an attribute training set composed of a plurality of classification attributes and the data values matched to each of them; and generating, using the updated attribute training set, the search comparison information connected to that piece of basic information.
  • the method for generating a database for visual content search may further include setting shooting environment information of the camera capturing the deformed image, or receiving shooting environment information through linkage with the camera, and the extracting may include extracting the set or received photographing environment information as an attribute value of the deformed image.
  • the method for generating a database for visual content search may further include mapping the deformed image to the extracted attribute values using any one piece of basic information as an index and storing the mapped data in the image database, and the generating may include generating the search comparison information for that basic information through machine learning using the stored deformed images and attribute values.
  • according to the present invention, attribute values are extracted from images obtained by transforming visual content containing any one food into various forms, and the search comparison information to be used when searching for that food is generated using the extracted attribute values.
  • FIG. 1 is an overall network diagram showing a visual content retrieval system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a detailed configuration of a database generating device for visual content search according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a process of constructing an image database by the apparatus for generating a database for visual content search according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a process of generating search comparison information based on data stored in an image database according to an exemplary embodiment of the present invention.
  • FIG. 5 is a block diagram illustrating a detailed configuration of an apparatus for processing visual content search request according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating an interface to a visual browser displayed on a display unit of a device on which a device for processing a visual content search request according to an embodiment of the present invention is installed.
  • FIG. 7 is a flowchart illustrating a process of searching visual content according to an embodiment of the present invention.
  • a "part" includes a unit realized by hardware, a unit implemented by software, and a unit implemented using both; one unit may be realized using two or more pieces of hardware, and two or more units may be realized by one piece of hardware.
  • FIG. 1 is an overall network diagram showing a visual content retrieval system according to an embodiment of the present invention.
  • the visual content search system includes a database generating device 100 for visual content search, an image database 110, a search database 120, a visual content search request processing device 130, a search engine 140, a content database 150, a recipe database 160, and the like.
  • the apparatus 100 for generating a database for visual content search of FIG. 1 may be connected to the image database 110 and the database for search 120 by wire or wirelessly.
  • the apparatus 100 for generating a database for visual content search may be stored in a recording medium in a form executable by at least one processor.
  • the apparatus 100 for generating a database for visual content search, executed by at least one processor, extracts attributes from an image, for example an image corresponding to food-related visual content, and then maps the image, its attributes, and the shooting environment information using the basic information as an index and stores them in the image database 110. Machine learning is then performed based on the data stored in the image database 110, and data such as annotation information, corresponding to the search comparison information and indexed by the basic information, may be generated and stored in the search database 120.
  • the search database 120 connected to the visual content search database generating apparatus 100 classifies the basic information about content including a plurality of foods into a plurality of categories, and stores a plurality of pieces of search comparison information matched and connected to each piece of basic information.
  • the search comparison information refers to the comparison data used for visual content search, and the basic information refers to the search result data, that is, the name assigned to each content item.
  • for example, the basic information may include jjajangmyeon, champon, gimbap, pasta, pizza, and so on; jjajangmyeon and champon may be classified and stored under a Chinese category, while pasta and pizza may be classified and stored under a European category.
  • the search comparison information connected to each basic information may be an attribute value for the corresponding basic information.
  • the apparatus 100 for generating a database for visual content search may store images in the image database 110 as described above, or update the search comparison information connected to each piece of basic information in the search database 120.
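As a concrete illustration of the category structure described above, the following Python sketch models the search database 120 as a nested mapping from category to basic information to search comparison information. All names, foods, and comparison values here are illustrative assumptions, not the patent's actual schema.

```python
# Hypothetical model of the search database 120: basic information (food
# names) grouped into categories, each linked to search comparison info.
search_db = {
    "Chinese": {
        "jjajangmyeon": {"comparison_info": {"color": "black", "container": "bowl"}},
        "champon": {"comparison_info": {"color": "red", "container": "bowl"}},
    },
    "European": {
        "pasta": {"comparison_info": {"color": "yellow", "container": "plate"}},
        "pizza": {"comparison_info": {"color": "mixed", "container": "plate"}},
    },
}

def lookup_comparison_info(basic_info):
    """Return (category, comparison info) for a piece of basic information."""
    for category, entries in search_db.items():
        if basic_info in entries:
            return category, entries[basic_info]["comparison_info"]
    return None  # unknown basic information
```

Here the basic information itself serves as the index, mirroring the text's description of mapping data using the basic information as an index.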
  • FIG. 2 is a block diagram showing a detailed configuration of the database generating apparatus 100 for visual content search according to an embodiment of the present invention.
  • the apparatus 100 for generating a database for visual content search includes an input unit 210, a shooting environment information setting unit 220, an attribute extracting unit 230, an image storage unit 240, a data generation unit 250, and the like.
  • the input unit 210 may receive a plurality of modified images of data for visual content corresponding to one piece of basic information.
  • for example, the input unit 210 may receive a plurality of modified images of the visual content corresponding to jjajangmyeon, that is, photographed images in which the color of the content, the position (coordinate value) of the content (food) within the image, the shape of the content, or the shape of the container have been modified or changed.
  • the input unit 210 may be linked with the camera 200 for capturing content.
  • the input unit 210 may receive a plurality of modified images and provide them to the attribute extractor 230.
  • the shooting environment information setting unit 220 may provide an interface for setting shooting environment information or may receive shooting environment information from the camera 200 through linkage with the camera 200.
  • the shooting environment information provided from the camera 200 may be provided to the attribute extractor 230.
  • the photographing environment information may include data related to the operation of the camera 200, for example, a zoom state, a flash operation state, and a surrounding environment state (data such as weather, photographing time, season, etc.).
  • the attribute extractor 230 may analyze the plurality of images input through the input unit 210 and extract attribute values for each image based on the photographing environment information provided through the photographing environment information setting unit 220. Specifically, the attribute extractor 230 separates the content from the container holding it through analysis of the image, and extracts attribute values such as the color of the content, the position of the content within the image, the shape of the content, the shape of the container, and the photographing environment information. The extracted attribute values of each image may be provided to the image storage unit 240.
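A minimal sketch of this kind of attribute extraction, assuming the content (food) region has already been separated from its container by a boolean mask; the attribute names and the use of a centroid and bounding box as position and shape proxies are simplifying assumptions, not the patent's method.

```python
import numpy as np

def extract_attributes(image, content_mask):
    """Extract simple attribute values (color, position, shape) from an
    image whose content region is given by a boolean mask."""
    ys, xs = np.nonzero(content_mask)
    mean_color = image[content_mask].mean(axis=0)    # average RGB of the content
    position = (float(xs.mean()), float(ys.mean()))  # centroid of the content
    height = int(ys.max() - ys.min() + 1)            # bounding box as a shape proxy
    width = int(xs.max() - xs.min() + 1)
    return {
        "color": tuple(mean_color.round(1)),
        "position": position,
        "shape": (height, width),
    }

# Toy example: a 4x4 image whose upper-left 2x2 block is reddish "content".
img = np.zeros((4, 4, 3), dtype=float)
img[:2, :2] = [200.0, 30.0, 30.0]
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
attrs = extract_attributes(img, mask)
```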
  • the image storage unit 240 may map each image to its extracted attribute values using the basic information as an index and store the mapped data in the image database 110.
  • the data generator 250 may receive the image stored in the image database 110 and the attribute value mapped to the image to perform machine learning to generate search comparison information about the basic information.
  • the data generator 250 is for learning basic information, and may generate data to be compared with attribute values of visual content requested to be searched, that is, search comparison information.
  • the data generator 250 may manage the attribute training set 260 for each basic information.
  • the data generator 250 may classify each attribute in an image and, for each piece of basic information, manage a plurality of classification attributes each holding a plurality of data values (attribute values).
  • each of the plurality of classification attributes may hold a plurality of data values (attribute values for images) and carry classification information, that is, a representative feature, that distinguishes it from the other classification attributes.
  • the classification information may include the color associated with the basic information, the position value, the shapes of the content and the container, and each parameter of the shooting environment information.
  • the data generator 250 determines which classification attribute in the attribute training set 260 an attribute value of an image stored in the image database 110 belongs to, and then updates the data values of the plurality of classification attributes by including the attribute value among the data values of the determined classification attribute.
  • after determining which classification attribute an attribute value of an image belongs to, if the difference (distance) between the attribute value and the data values in the determined classification attribute is greater than a predetermined classification reference value, the data generator 250 may generate a sub-classification attribute to update the attribute training set 260.
  • for example, when an attribute value is red, the color of the content, the data generator 250 determines that the attribute value belongs to the color classification attribute, and then determines that the difference between the mostly black data values in the color classification attribute and the red attribute value of the image is greater than the predetermined classification reference value. It may therefore generate a red classification attribute holding that attribute value as a data value, connect it to the color classification attribute as a sub-classification attribute, and update the attribute training set 260 accordingly.
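The sub-classification logic above can be sketched as follows, using a scalar color distance for illustration; the threshold value, the dictionary structure, and the use of the nearest data value as the comparison point are assumptions made for the sketch.

```python
CLASSIFICATION_THRESHOLD = 100.0  # illustrative "classification reference value"

def update_classification(attribute, value, threshold=CLASSIFICATION_THRESHOLD):
    """Add `value` to the classification attribute's data values, or, if it
    lies too far from the existing data, attach it as a new sub-classification
    attribute (as when a red image arrives in a mostly-black color attribute)."""
    if attribute["values"]:
        nearest = min(attribute["values"], key=lambda v: abs(v - value))
        if abs(nearest - value) > threshold:
            sub = {"representative": value, "values": [value], "subs": []}
            attribute["subs"].append(sub)  # e.g. a "red" sub-attribute under "color"
            return sub
    attribute["values"].append(value)
    return attribute

# Mostly-black color attribute receiving one close and one distant value:
color_attr = {"representative": "color", "values": [10.0, 15.0, 12.0], "subs": []}
update_classification(color_attr, 11.0)   # close to the black values: absorbed
update_classification(color_attr, 220.0)  # far from black: new sub-attribute
```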
  • for a position value, sub-classification attributes each holding a coordinate range value (a coordinate range of the content within the image) are connected as data values under the classification information; the data generator 250 determines which coordinate range the position attribute value is close to, and then updates the data values of the sub-classification attribute corresponding to the determined coordinate range with the attribute value.
  • the data generator 250 associates the classification attribute and sub-classification attribute corresponding to the attribute values of the image, together with the data values connected to them, with the basic information, and can thereby update the search comparison information connected to that basic information in the search database 120, for example the annotation information for the basic information.
  • FIG. 3 is a flowchart illustrating a process of constructing an image database 110 by the apparatus 100 for generating a visual content search database according to an embodiment of the present invention.
  • the input unit 210 receives deformed images of the visual content corresponding to any one piece of basic information through interworking with the camera 200 and provides them to the attribute extractor 230 (S300). Specifically, the input unit 210 receives from the camera 200 visual content including images in which the shape of the content, its color, the container shape, the size, the position of the content, and so on have been modified.
  • the photographing environment information setting unit 220 provides the photographing environment information provided from the camera 200 to the attribute extracting unit 230 (S302).
  • the attribute extractor 230 extracts attribute values of the deformed image, for example the color of the content in the image, the position of the content within the image, the shape of the content, the shape of the container, and the photographing environment information, using analysis of the deformed image together with the photographing environment information (S304).
  • the image storage unit 240 updates the image database 110 by generating the data to which the attribute value and the image are mapped and then connecting it to the basic information (S306).
  • the image database 110 may store basic information connected to a plurality of modified images and data mapped with attribute values.
  • the apparatus 100 for generating a database for visual content search may perform machine learning based on the data stored in the image database 110 to update the attribute training set 260 and generate the search comparison information used for searching. This will be described with reference to FIG. 4.
  • FIG. 4 is a flowchart illustrating a process of generating search comparison information based on data stored in the image database 110 according to an exemplary embodiment of the present invention.
  • the data generator 250 sequentially receives a plurality of images connected to any one piece of basic information and data mapped thereto (attribute values) from the image database 110 (S400).
  • the data generator 250 selects the classification attribute corresponding to an attribute value of the image from among the plurality of classification attributes connected to the basic information in the attribute training set 260 (S402), and analyzes the difference, that is, the distance, between the data values connected to the selected classification attribute and the attribute value of the image (S404).
  • the data generator 250 determines whether the distance is equal to or greater than a preset classification reference value (S406).
  • if the distance is less than the classification reference value, the data generator 250 updates the attribute training set 260 by adding the attribute value of the image to the data values of the selected classification attribute (S408).
  • otherwise, the data generator 250 generates a sub-classification attribute connected to the selected classification attribute using the attribute value of the image (S410), and connects the attribute value of the image as a data value of the sub-classification attribute to update the attribute training set 260 (S412).
  • the data generator 250 repeats steps S400 to S412 for at least a predetermined number of deformed images and their attribute values, for example 700 or more, corresponding to any one piece of basic information, thereby updating the attribute training set 260, and then generates the search comparison information bound to the basic information using the updated attribute training set 260 (S414).
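The loop of steps S400 to S414 can be sketched as below. The record format, the scalar distance, and the use of a simple mean as the generated comparison information are all assumptions for illustration; the patent leaves the machine-learning step unspecified.

```python
def train_attribute_set(records, training_set, threshold=100.0):
    """Replay S400-S412: for each (classification name, attribute value)
    record from the image database, update the matching classification
    attribute or spawn a sub-classification attribute."""
    for name, value in records:                       # S400: sequential input
        attr = training_set[name]                     # S402: select classification attr
        nearest = (min(attr["values"], key=lambda v: abs(v - value))
                   if attr["values"] else None)       # S404: distance analysis
        if nearest is None or abs(nearest - value) < threshold:
            attr["values"].append(value)              # S408: below reference value
        else:
            attr["subs"].append({"values": [value]})  # S410/S412: sub-classification

def generate_comparison_info(training_set):
    """S414: derive search comparison information (here, representative means)."""
    return {name: sum(a["values"]) / len(a["values"])
            for name, a in training_set.items() if a["values"]}

ts = {"color": {"values": [], "subs": []}}
train_attribute_set([("color", 10.0), ("color", 12.0), ("color", 220.0)], ts)
info = generate_comparison_info(ts)
```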
  • the visual content search request processing device 130 may communicate by wire or wirelessly and may be executed by at least one processor; it may be included in devices such as mobile devices, computers, and TVs.
  • the visual content retrieval request processing device 130 may be an application stored in a recording medium in an executable form by at least one or more processors.
  • the device including the visual content search request processing device 130 may be connected to the search engine 140 linked with the search database 120 of FIG. 1 by wire or wirelessly.
  • the visual content search request processing device 130 extracts a search attribute value (optionally including shooting environment information) for the visual content, requests a search based on it, and, in response to the request, may be provided with a search result, for example the type (name) of the food in the visual content.
  • the visual content search request processing device 130 extracts unique identification information corresponding to the location information in a beacon signal and transmits it with the search request, so that various kinds of information available at the location where the visual content was captured, that is, POI information, can be provided.
  • the POI information may be information about the restaurant that serves the type of food in the visual content, or various benefit information (discounts, coupons, etc.) offered by the restaurant.
  • the apparatus for processing a visual content search request as described above includes an image capturing unit 500, a visual content obtaining unit 510, a beacon signal receiving unit 520, a feature information extracting unit 530, a search request unit 540, a search result providing unit 550, a display unit 560, and the like.
  • the image capturing unit 500 may provide capturing environment information to the feature information extracting unit 530 when capturing an image.
  • the photographing environment information may be operation state and external condition information (environmental information such as weather, temperature, etc.) of the camera which is the image capturing unit 500 when capturing an image.
  • the visual content acquiring unit 510 may display an image including food, that is, the visual content acquired by the image capturing unit 500, on the display unit 560 through execution of a visual browser stored in the device's recording medium in an executable form.
  • the beacon signal receiver 520 may be activated as the visual content is acquired or the visual browser is executed to receive the beacon signal broadcast within a preset radius, and may provide the received beacon signal to the search requester 540.
  • the beacon signal receiving unit 520 may receive a beacon signal in conjunction with a Bluetooth, an infrared communication module, a Wi-Fi module, and an ultrasonic signal receiving module (eg, a microphone) in the device.
  • the visual browser 600 executed in conjunction with the visual content search request processing device 130 may, as shown in FIG. 6, provide a control interface 610 for controlling the operation of the beacon signal receiving unit 520.
  • the control interface 610 provides an operation interface for activating the beacon signal receiving unit 520, which can be activated by a user operation, for example a touch on the control interface 610. The beacon signal receiving unit 520 then receives beacon signals in cooperation with modules in the device (Bluetooth, infrared communication module, Wi-Fi module, or an ultrasonic signal receiving module such as a microphone), extracts the beacon signals whose signal strength exceeds a threshold value, and provides them to the search result providing unit 550.
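The threshold-based beacon filtering described above can be sketched as follows; the RSSI field name, the dBm threshold, and the strongest-first ordering are assumptions, since the patent only says signals above a threshold are kept.

```python
RSSI_THRESHOLD = -70  # dBm; illustrative threshold value

def filter_beacons(beacons, threshold=RSSI_THRESHOLD):
    """Return beacons whose signal strength exceeds the threshold,
    strongest first, as candidates for the search result providing unit."""
    strong = [b for b in beacons if b["rssi"] > threshold]
    return sorted(strong, key=lambda b: b["rssi"], reverse=True)

received = [
    {"id": "store-a", "rssi": -55},
    {"id": "store-b", "rssi": -82},  # too weak: filtered out
    {"id": "store-c", "rssi": -63},
]
candidates = filter_beacons(received)
```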
  • the feature information extractor 530 extracts the search attribute values required for searching for the visual content by analyzing the visual content, for example an image including food, together with the photographing environment information provided from the image capturing unit 500, and may provide the extracted search attribute values to the search requester 540.
  • specifically, the feature information extractor 530 analyzes the visual content and may extract search attribute values based on the color of the content (food) in the visual content, the position of the content within the visual content, the shape of the content, the shape of the container holding the content, and the photographing environment information, and then provide them to the search requester 540. The search requester 540 may then request a search by transmitting the search attribute values to the search engine 140.
  • the search requester 540 may extract unique identification information from a high-frequency-band sound signal among the beacon signals (a beacon signal received through the device's microphone) having a predetermined variation pattern that encodes the unique identification information corresponding to the location of a specific building, and may then include the extracted unique identification information and the device's unique information in the search attribute values transmitted to the search engine 140 to request the search.
  • the search result providing unit 550 may receive the search result from the search engine 140 and display the search result on the display unit 560. In other words, the search result providing unit 550 may display the search result provided from the search engine 140 on the display unit 560 through interworking with the visual browser.
  • the search engine 140 may transmit a search result for the visual content, that is, a search result for determining what the food in the visual content is in response to the search request of the search requester 540.
  • the search engine 140 determines which classification attribute in the attribute training set connected to each piece of basic information contains the search attribute value by comparing the search attribute value with the data values in each classification attribute; if it is not contained there, the search engine compares the search attribute value with the data values of the sub-classification attributes connected to that classification attribute to determine whether it is contained in a sub-classification attribute.
  • based on the result, the response to the search request may be transmitted using the basic information, for example the name of the food, as the search result.
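The two-level matching described above can be sketched as below, assuming scalar attribute values and a tolerance-based "contains" test; both are illustrative stand-ins for whatever comparison the trained attribute set actually uses.

```python
def match_basic_info(search_value, training_sets, tolerance=50.0):
    """For each piece of basic information, test whether the search attribute
    value falls among the classification attribute's data values; if not,
    fall back to its sub-classification attributes."""
    def contains(values, value):
        return any(abs(v - value) <= tolerance for v in values)

    for basic_info, attr in training_sets.items():
        if contains(attr["values"], search_value):
            return basic_info                 # matched at the top level
        for sub in attr["subs"]:              # fall back to sub-attributes
            if contains(sub["values"], search_value):
                return basic_info
    return None                               # no basic information matched

training_sets = {
    "jjajangmyeon": {"values": [10.0, 15.0], "subs": [{"values": [220.0]}]},
    "pasta": {"values": [120.0], "subs": []},
}
```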
  • the search engine 140 may be connected to a content database 150 that manages, for each piece of unique identification information, content such as restaurant information, discount information, and coupon information provided at a specific location (the location corresponding to the unique identification information). Accordingly, the search engine 140 searches the content database 150 for content based on the unique identification information provided from the search requester 540, extracts food-related information from the retrieved content, and may generate a search result by comparing the search comparison information connected to at least one piece of basic information retrieved from the search database 120 on the basis of that food-related information with the search attribute values received from the visual content search request processing device 130.
  • the search engine 140 may provide the content searched in the content database 150 to the visual content search request processing device 130 based on the unique identification information provided from the visual content search request processing device 130.
  • the search engine 140 may also search the recipe database 160 for recipe information for cooking the food identified in the search result, that is, the result indicating what kind of food appears in the visual content, and provide the retrieved recipe information to the visual content search request processing device 130.
  • in another embodiment, the feature information extractor 530, described above as included in the visual content search request processing device 130 of a device connected to the search engine 140 by wire or wirelessly, may instead be included in the search engine 140.
  • in this case, the search requester 540 may request the search by transmitting the visual content acquired by the visual content acquisition unit 510 and the shooting environment information provided from the image capturing unit 500 to the search engine 140.
  • FIG. 7 is a flowchart illustrating a process of searching visual content according to an embodiment of the present invention.
  • the visual content search request processing device 130 analyzes the visual content received through the image capturing unit 500, for example, visual content including a food image, extracts search attribute values using the shooting environment information provided from the image capturing unit 500 (S700), and extracts unique identification information by analyzing the beacon signal (a sound signal with a variation pattern encoding the unique identification information, which carries location information) received through the beacon signal receiving unit 520 (S702).
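The beacon decoding in S702 can be pictured as mapping a time-varying tone back to an identification code. The sketch below is only an illustration of that idea: the carrier band, symbol spacing, and hex-digit alphabet are assumptions, not values taken from this patent.

```python
# Hedged sketch: decoding a unique identification code from a sound beacon
# whose tone frequency varies over time (the "variation pattern" above).
# BASE_HZ, STEP_HZ, and the symbol alphabet are illustrative assumptions.

BASE_HZ = 18000.0             # assumed start of an inaudible carrier band
STEP_HZ = 50.0                # assumed spacing between symbol frequencies
SYMBOLS = "0123456789ABCDEF"  # assumed 16-symbol (hex digit) alphabet

def freq_to_symbol(freq_hz: float) -> str:
    """Map a dominant tone frequency to its nearest symbol."""
    index = round((freq_hz - BASE_HZ) / STEP_HZ)
    index = max(0, min(len(SYMBOLS) - 1, index))
    return SYMBOLS[index]

def decode_beacon(dominant_freqs: list[float]) -> str:
    """Assemble the unique identification string from per-slot tones."""
    return "".join(freq_to_symbol(f) for f in dominant_freqs)

# Example: four time slots whose loudest tones encode "1A2F".
print(decode_beacon([18050.0, 18500.0, 18100.0, 18750.0]))  # -> 1A2F
```

In practice the dominant frequency per time slot would come from an FFT over the microphone stream; here it is passed in directly to keep the sketch self-contained.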
  • the visual content search request processing device 130 transmits the search attribute values, the unique identification information, and the unique information of the device on which the visual content search request processing device 130 operates to the search engine 140 to request a search (S704).
  • the search engine 140 searches the content database 150 for at least one piece of content mapped to the unique identification information (S706), and extracts food-related content from the retrieved content (S708).
  • the search engine 140 searches the search database 120, based on the food-related content, for at least one piece of basic information to be used in the search and the search comparison information linked to it (S710).
  • the search engine 140 derives a search result indicating which food the food image in the visual content represents, through comparison between the search attribute values and the search comparison information linked to the retrieved at least one piece of basic information, and provides the search result to the visual content search request processing device 130 (S712).
  • to describe the process by which the search engine 140 derives a search result in detail: the search engine 140 selects a classification attribute within any one piece of basic information based on a search attribute value, and determines whether the search attribute value falls under the selected classification attribute based on the difference between the search attribute value and the data value linked to that classification attribute. This determination may be performed for each of the search attribute values to decide which piece of basic information each search attribute value can belong to, thereby obtaining the search result.
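The classification step just described can be sketched as a nearest-value match with a tolerance. The attribute names, data values, and tolerance below are invented for illustration; the patent does not specify them.

```python
# Hedged sketch of the classification step: for a search attribute value,
# pick the classification attribute whose stored data value is closest,
# and accept the match only when the difference is within a tolerance.
# TOLERANCE and the example attributes are illustrative assumptions.
from typing import Optional

TOLERANCE = 10.0  # assumed maximum allowed difference

def classify(search_value: float, classification_attrs: dict) -> Optional[str]:
    """Return the classification attribute the value falls under, if any."""
    best_attr = min(classification_attrs,
                    key=lambda a: abs(classification_attrs[a] - search_value))
    if abs(classification_attrs[best_attr] - search_value) <= TOLERANCE:
        return best_attr
    return None

# Illustrative "color hue" classification attributes for one piece of
# basic information (e.g. one food entry).
hue_attrs = {"red_sauce": 8.0, "yellow_broth": 55.0, "green_garnish": 120.0}
print(classify(12.0, hue_attrs))   # within tolerance of "red_sauce"
print(classify(90.0, hue_attrs))   # no attribute close enough -> None
```

Running this for every search attribute value (color, position, shape, container shape) against every candidate piece of basic information yields the vote from which a search result could be derived.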
  • the search engine 140 searches the recipe database 160, based on the search result, for the recipe information needed to cook the food corresponding to the food image, extracts from the content retrieved in the content database 150 the content related to the search result, for example, discount information, restaurant information, and coupon information related to the food image, and then provides the extracted content and recipe information to the visual content search request processing device 130 (S714).
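The overall flow S700–S714 can be summarized as a chain of lookups. The sketch below stands in for the content database 150, search database 120, and recipe database 160 with in-memory dictionaries; all field names, entries, and the hue-based matching rule are illustrative assumptions.

```python
# Hedged sketch of the search flow: content lookup by unique id (S706),
# food-related filtering (S708), basic-information matching (S710/S712),
# and recipe/content assembly (S714). All data below is invented.

CONTENT_DB = {  # unique identification info -> content at that location
    "LOC-001": [
        {"type": "restaurant", "name": "Noodle House"},
        {"type": "coupon", "food": "ramen", "discount": "10%"},
    ],
}
SEARCH_DB = {  # basic information (food name) -> search comparison values
    "ramen": {"hue": 40.0, "container": "bowl"},
    "pizza": {"hue": 25.0, "container": "plate"},
}
RECIPE_DB = {"ramen": "Boil broth, add noodles, top with egg."}

def search(unique_id: str, attrs: dict) -> dict:
    contents = CONTENT_DB.get(unique_id, [])                       # S706
    food_contents = [c for c in contents
                     if "food" in c or c["type"] == "restaurant"]  # S708
    # S710/S712: pick the basic information whose comparison values best
    # match the search attribute values (hue-only match for brevity).
    food = min(SEARCH_DB, key=lambda f: abs(SEARCH_DB[f]["hue"] - attrs["hue"]))
    related = [c for c in food_contents
               if c.get("food") == food or c["type"] == "restaurant"]
    return {"food": food, "recipe": RECIPE_DB.get(food), "content": related}  # S714

result = search("LOC-001", {"hue": 42.0, "container": "bowl"})
print(result["food"])  # -> ramen
```

A production search engine would of course compare all attribute values and rank candidates, but the shape of the data flow between the three databases is the same.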

Abstract

The present invention relates to an apparatus and method for generating a visual content search database in which a plurality of pieces of basic information about a plurality of foods is stored. To this end, provided is an apparatus for generating a visual content search database, comprising: an input unit for inputting a modified image corresponding to any one piece of basic information in the visual content search database; an attribute extraction unit for extracting, as attribute values for the modified image, at least one of the color of the food contents, the position value of the food contents within the image, the shape of the food contents, and the shape of a container holding the food contents, by analyzing the modified image; an attribute training set composed of a plurality of classification attributes corresponding to any one piece of basic information, each having representative characteristics that distinguish it from the others, and of data values mapped to each of the plurality of classification attributes; and a data generation unit for updating the attribute training set using the attribute values extracted by the attribute extraction unit, generating search comparison information linked to any one piece of basic information based on the updated attribute training set, and then generating data linked to that basic information in the visual content search database based on the search comparison information.
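The database-generation loop in the abstract can be pictured as folding each extracted attribute value into the attribute training set, then deriving search comparison information from the updated set. The running-mean update and the field names below are illustrative assumptions; the patent does not prescribe this particular update rule.

```python
# Hedged sketch of the data generation unit: attribute values extracted
# from a modified image update an attribute training set (here, a running
# mean per classification attribute), and the updated set yields the
# search comparison information stored with the basic information.

def update_training_set(training_set: dict, attr_values: dict) -> dict:
    """Fold newly extracted attribute values into per-attribute running means."""
    for attr, value in attr_values.items():
        mean, count = training_set.get(attr, (0.0, 0))
        training_set[attr] = ((mean * count + value) / (count + 1), count + 1)
    return training_set

def comparison_info(training_set: dict) -> dict:
    """Derive search comparison information from the updated training set."""
    return {attr: mean for attr, (mean, _) in training_set.items()}

ts: dict = {}
update_training_set(ts, {"hue": 40.0, "position_y": 0.5})  # first modified image
update_training_set(ts, {"hue": 44.0, "position_y": 0.7})  # second modified image
print(comparison_info(ts))
```

Each additional modified image of the same food nudges the stored comparison values, which is the sense in which the training set is "updated" before the search comparison information is regenerated.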
PCT/KR2017/001130 2016-12-29 2017-02-02 Appareil et procédé de génération de base de données de récupération de contenu visuel Ceased WO2018124372A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020160182529A KR101986804B1 (ko) 2016-12-29 2016-12-29 시각적 콘텐츠 검색용 데이터베이스 생성 장치 및 방법
KR10-2016-0182529 2016-12-29

Publications (1)

Publication Number Publication Date
WO2018124372A1 true WO2018124372A1 (fr) 2018-07-05

Family

ID=62709371

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/001130 Ceased WO2018124372A1 (fr) 2016-12-29 2017-02-02 Appareil et procédé de génération de base de données de récupération de contenu visuel

Country Status (2)

Country Link
KR (1) KR101986804B1 (fr)
WO (1) WO2018124372A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102293779B1 (ko) * 2019-12-06 2021-08-26 주식회사 네일25 레시피 유통 방법 및 레시피 유통 시스템

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070211964A1 (en) * 2006-03-09 2007-09-13 Gad Agam Image-based indexing and classification in image databases
US20080118160A1 (en) * 2006-11-22 2008-05-22 Nokia Corporation System and method for browsing an image database
KR20100086502A (ko) * 2007-11-22 2010-07-30 인터내셔널 비지네스 머신즈 코포레이션 이미지 데이터베이스 장치, 이미지 저장 방법 및 컴퓨터 프로그램 제품
KR101151851B1 (ko) * 2011-11-08 2012-06-01 (주)올라웍스 이미지 클러스터링을 이용한 이미지 태깅 방법, 장치, 및 이 방법을 실행하기 위한 컴퓨터 판독 가능한 기록 매체
US9367756B2 (en) * 2010-08-31 2016-06-14 Google Inc. Selection of representative images

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100754157B1 (ko) 2000-05-31 2007-09-03 삼성전자주식회사 멀티미디어 콘텐츠를 위한 데이터베이스 구축 방법
GB0607143D0 (en) * 2006-04-08 2006-05-17 Univ Manchester Method of locating features of an object
KR101117549B1 (ko) * 2010-03-31 2012-03-07 경북대학교 산학협력단 얼굴 인식 시스템 및 그 얼굴 인식 방법
KR101433472B1 (ko) * 2012-11-27 2014-08-22 경기대학교 산학협력단 상황 인식 기반의 객체 검출, 인식 및 추적 장치, 방법 및 컴퓨터 판독 가능한 기록 매체
KR102486699B1 (ko) * 2014-12-15 2023-01-11 삼성전자주식회사 영상 인식 방법, 영상 검증 방법, 장치, 및 영상 인식 및 검증에 대한 학습 방법 및 장치


Also Published As

Publication number Publication date
KR20180077807A (ko) 2018-07-09
KR101986804B1 (ko) 2019-07-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17885969; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17885969; Country of ref document: EP; Kind code of ref document: A1)