WO2020085694A1 - Image capturing device and control method therefor - Google Patents
Image capturing device and control method therefor
- Publication number
- WO2020085694A1 (application PCT/KR2019/013346)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- background
- electronic device
- information
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Definitions
- the present disclosure relates to an artificial intelligence (AI) system that simulates functions of the human brain, such as cognition and judgment, by utilizing machine learning algorithms such as deep learning, and to applications thereof.
- the present disclosure relates to an apparatus for obtaining an image using an artificial intelligence system and a control method thereof.
- AI technology consists of machine learning (deep learning) and elemental technologies utilizing machine learning.
- Machine learning is an algorithmic technology that classifies and learns the characteristics of input data by itself, and elemental technology utilizes machine learning algorithms such as deep learning and consists of technical fields such as linguistic understanding, visual understanding, reasoning/prediction, knowledge representation, and motion control.
- Linguistic understanding is a technology for recognizing and applying/processing human language/characters, and includes natural language processing, machine translation, dialogue systems, question answering, and speech recognition/synthesis.
- Visual understanding is a technology for recognizing and processing objects as human vision does, and includes object recognition, object tracking, image search, human recognition, scene understanding, spatial understanding, and image enhancement.
- Inference/prediction is a technology for logically inferring and predicting information by judging information, and includes knowledge/probability-based reasoning, optimization prediction, preference-based planning, and recommendation.
- Knowledge representation is a technology for automatically processing human experience information into knowledge data, and includes knowledge building (data generation/classification), knowledge management (data utilization), and the like.
- Motion control is a technology for controlling the autonomous driving of a vehicle and the movement of a robot, and includes movement control (navigation, collision avoidance, driving) and operation control (behavior control).
- artificial intelligence technology can also be used to acquire images such as photos and videos.
- an apparatus for obtaining an image using an artificial intelligence (AI) system and a control method thereof may be provided.
- An image acquisition method of an electronic device may include: acquiring a first image including at least one object and a background; detecting the at least one object and background based on feature information for each of the at least one object and background in the acquired first image; determining image filters representing different image effects to be applied to each of the detected at least one object and background; and generating a second image by applying the determined image filters to each of the at least one object and the background.
- An electronic device may be provided that includes: a display for displaying at least one image; a storage unit for storing one or more instructions; and a processor for executing the one or more instructions, wherein the processor, by executing the one or more instructions, acquires a first image including at least one object and a background, detects the at least one object and background based on feature information for each of the at least one object and background in the acquired first image, determines image filters representing different image effects to be applied to each of the detected at least one object and background, and generates a second image by applying the determined image filters to each of the at least one object and the background.
- A computer program product may be provided that includes a computer-readable storage medium having an image acquisition method recorded thereon, the storage medium including instructions for performing the operations of: acquiring a first image including at least one object and a background; detecting the at least one object and background based on feature information for each of the at least one object and background in the acquired first image; determining image filters representing different image effects to be applied to each of the detected at least one object and background; and generating a second image by applying the determined image filters to each of the at least one object and the background.
- An electronic device acquiring an image according to the present disclosure may generate an image using image filters representing an image effect that matches the user's intention.
- FIG. 1 is a diagram for describing an image acquisition method performed by an electronic device according to an embodiment.
- FIG. 2 is a flowchart illustrating a method for an electronic device to acquire an image, according to an embodiment.
- FIG. 3 is a diagram for describing a method of determining location information of at least one object area and a background area included in a first image by an electronic device according to an embodiment.
- FIG. 4 is a diagram for describing a method for an electronic device to acquire feature information from an image, according to an embodiment.
- FIG. 5 is a flowchart illustrating a method for an electronic device to detect at least one object area and a background area from an image, according to an embodiment.
- FIG. 6 is a diagram for explaining a structure of a neural network used by an electronic device according to an embodiment.
- FIG. 7 is a flowchart illustrating a method for an electronic device to acquire an image according to another embodiment.
- FIG. 8 is a diagram for describing a method in which an electronic device applies a filter based on a composition of an image, according to an embodiment.
- FIG. 9 is a diagram for explaining a method of applying a filter using information on a light source in an image by an electronic device according to an embodiment.
- FIG. 10 is a flowchart illustrating a method for an electronic device to determine a filter using information about a light source in an image, according to an embodiment.
- FIG. 11 is a diagram illustrating an example of additional information used by an electronic device according to an embodiment.
- FIG. 12 is a diagram for explaining a method of applying a filter using additional information by an electronic device according to an embodiment.
- FIG. 13 is a diagram for explaining a method of using additional information for controlling an operation of an electronic device by an electronic device according to an embodiment.
- FIG. 14 is a diagram for describing a method in which an electronic device manually or automatically applies a filter using additional information, according to an embodiment.
- FIG. 15 is a diagram for describing a method for an electronic device to determine a filter according to another embodiment.
- FIG. 16 is a diagram for describing a method of generating a second image using a server according to an embodiment.
- FIG. 17 is a diagram for describing a method of generating a second image using a server according to another embodiment.
- FIG. 18 is a diagram for describing a method for an electronic device to analyze an image using a plurality of learning models, according to an embodiment.
- FIG. 19 is a diagram for explaining learning data of a neural network used by an electronic device according to an embodiment.
- FIG. 20 is a block diagram illustrating an electronic device according to an embodiment.
- FIG. 21 is a block diagram illustrating an electronic device according to an embodiment.
- FIG. 22 is a block diagram of a server according to an embodiment.
- An image acquisition method of an electronic device may include: acquiring a first image including at least one object and a background; detecting the at least one object and background based on feature information for each of the at least one object and background in the acquired first image; determining image filters representing different image effects to be applied to each of the detected at least one object and background; and generating a second image by applying the determined image filters to each of the at least one object and the background.
- An electronic device may be provided that includes: a display that displays at least one image; a storage unit for storing one or more instructions; and a processor for executing the one or more instructions, wherein the processor, by executing the one or more instructions, acquires a first image including at least one object and a background, detects the at least one object and background based on feature information for each of the at least one object and background in the acquired first image, determines image filters representing different image effects to be applied to each of the detected at least one object and background, and generates a second image by applying the determined image filters to each of the at least one object and the background.
- A computer-readable recording medium may be provided that stores a program for performing the operations of: obtaining a first image including at least one object and a background; detecting the at least one object and background based on feature information for each of the at least one object and background in the acquired first image; determining image filters representing different image effects to be applied to each of the detected at least one object and background; and generating a second image by applying the determined image filters to each of the at least one object and the background.
- FIG. 1 is a diagram for describing an image acquisition method performed by the electronic device 1000 according to an embodiment.
- the electronic device 1000 may acquire an image using a camera included in the electronic device 1000.
- the electronic device 1000 may acquire a first image using a camera and generate a second image using the acquired first image.
- the first image acquired by the electronic device 1000 may include a preview image displayed on the display of the electronic device 1000, or a pre-stored image stored in the memory of the electronic device 1000 or received from a server.
- the second image may be the first image corrected based on image filters representing different image effects.
- the first image acquired by the electronic device 1000 may include at least one object and a background.
- the first image may include a first object region 112 and a second object region 114, each including an image of a portion corresponding to at least one object, and the electronic device 1000 may detect the first object region 112 and the second object region 114 in the acquired first image.
- the object area may be an area including pixels of a part corresponding to an object in the first image
- the background area may be an area including pixels of a part corresponding to the background in the first image.
- the electronic device 1000 may apply image filters representing different image effects to the detected first object region 112, the detected second object region 114, and the background region obtained by removing the first object region and the second object region from the acquired first image. For example, referring to the output image 104 of FIG. 1, the electronic device 1000 may apply an ink-painting filter to the background region from which the first object region 112 and the second object region 114 have been removed, so that an ink painting effect is exhibited in the background area.
- the electronic device 1000 may display objects clearly by applying a sharpening filter to the first object region 112 and the second object region 114, and may delineate the lines constituting the background area by applying an outline filter to the background region. Also, referring to the output image 108, the electronic device 1000 may represent a plurality of objects in the first image as cartoon characters by applying a cartoonization filter to the first object region 112 and the second object region 114.
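- As an illustration of this kind of per-region filtering, the following is a minimal sketch in Python with OpenCV, assuming a single rectangular object region; the region coordinates and input file name are hypothetical stand-ins for the detection result described above:

```python
import cv2
import numpy as np

first_image = cv2.imread("first_image.jpg")          # hypothetical input file
x, y, w, h = 120, 80, 200, 260                       # assumed object region box

second_image = first_image.copy()

# Sharpening filter applied only inside the object region.
sharpen_kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]])
second_image[y:y+h, x:x+w] = cv2.filter2D(
    first_image[y:y+h, x:x+w], -1, sharpen_kernel)

# Outline filter applied only to the background: Canny edges replace all
# pixels outside the object region.
gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
edges = cv2.cvtColor(cv2.Canny(gray, 100, 200), cv2.COLOR_GRAY2BGR)
object_mask = np.zeros(first_image.shape[:2], dtype=bool)
object_mask[y:y+h, x:x+w] = True
second_image[~object_mask] = edges[~object_mask]

cv2.imwrite("second_image.jpg", second_image)
```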
- the first image acquired by the electronic device 1000 may further include at least one reference line for determining the composition of the first image.
- the electronic device 1000 may generate reference lines for determining the composition of the first image from the obtained first image, divide the first image into a plurality of divided regions using the generated reference lines, and generate a second image by applying image filters representing different image effects to the plurality of divided regions.
- the reference lines generated by the electronic device 1000 according to the present disclosure may mean a plurality of vanishing lines intersecting at vanishing points in the first image.
- the vanishing point described herein may mean a point at which parallel straight lines in physical space, when projected onto an image, appear to converge due to the perspective effect.
- the image effects that the electronic device 1000 intends to display using the image filters applied to each of the at least one object and the background in the first image may include an ink painting effect, blurring effect, sharpening effect, outline effect, cartoonization effect, 3D effect, noise removal effect, noise addition effect, mosaic effect, fresco effect, pastel effect, paint effect, sponge effect, watercolor effect, and black-and-white effect, but are not limited thereto.
- the electronic device 1000 may generate a second image by applying a plurality of filters to the first image using a deep learning algorithm, which is basically structured as a deep neural network with multiple layers.
- the neural network used by the electronic device according to the present disclosure may include, but is not limited to, a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), and a bidirectional recurrent deep neural network (BRDNN).
- the neural network used by the electronic device 1000 may be a structure in which a fully-connected layer is connected to a CNN structure in which a convolution layer and a pooling layer are repeatedly used.
- the electronic device 1000 may use a plurality of neural networks to generate a second image by applying a plurality of filters to the first image.
- the electronic device 1000 may detect at least one object and background from the acquired first image using a first neural network, and may use a second neural network to determine a plurality of filters to be applied to the first image.
- the electronic device 1000 may be implemented in various forms.
- the electronic device 1000 described herein may include a digital camera, a mobile terminal, a smart phone, a laptop computer, a tablet PC, an electronic book terminal, a digital broadcasting terminal, personal digital assistants (PDAs), portable multimedia players (PMPs), navigation devices, MP3 players, and the like, but is not limited thereto.
- the electronic device 1000 described in this specification may be a wearable device.
- Wearable devices may include accessory-type devices (e.g., watches, rings, cuff bands, ankle bands, necklaces, glasses, contact lenses), head-mounted devices (HMDs), fabric- or garment-integrated devices (e.g., electronic clothing), body-attached devices (e.g., skin pads), or bio-implantable devices (e.g., implantable circuits).
- FIG. 2 is a flowchart illustrating a method for an electronic device to acquire an image, according to an embodiment.
- the electronic device 1000 may obtain a first image including at least one object and background.
- the electronic device 1000 may acquire a first image using a camera included in the electronic device, or may acquire an image from an external server or another electronic device connected by at least one of wired and wireless methods.
- the first image obtained using the camera may include a preview image for acquiring the final image to be stored in the electronic device 1000, and the preview image may be displayed on the display of the electronic device 1000 or on an external display device connected to the electronic device 1000 by at least one of wired and wireless methods.
- the electronic device 1000 may detect at least one object and background from the acquired first image. For example, the electronic device 1000 may obtain feature information for each of the at least one object and background in the obtained first image, and may detect the at least one object and background from the first image based on the obtained feature information.
- when the electronic device 1000 according to the present disclosure detects at least one object and background from the first image, this may mean determining the locations of the at least one object region and the background region included in the first image, and determining the types of the object and background respectively included in the located object region and background region.
- the electronic device 1000 may detect at least one object and background from the first image using an object model and a background model that have been previously trained and stored in the electronic device.
- when the acquired first image is input, the electronic device 1000 may obtain feature information for each of the at least one object and the background by using a first neural network that outputs feature information for identifying an object region including an image of a portion corresponding to the at least one object and a background region including an image of a portion corresponding to the background, and may detect the at least one object and background from the first image based on the acquired feature information.
- the first neural network according to the present disclosure may be trained based on the object model and the background model previously trained and stored in the electronic device.
- the feature information according to the present disclosure may include information on the types of the at least one object and the background in the first image and information for determining the locations of the at least one object area and the background area in the first image.
- the electronic device 1000 may display a marker indicating at least one detected object and a background area.
- the electronic device 1000 may display a marker indicating the detected at least one object and background, together with the first image, on a display included in the electronic device 1000 or on a display device connected by at least one of wired and wireless methods.
- the electronic device 1000 may display markers representing the detected at least one object and background in various ways.
- the electronic device 1000 may display a marker, such as a star shape or a number symbol, in the vicinity of the object area including at least one object and the background area.
- the electronic device 1000 may determine image filters to be applied to each of the detected at least one object and background. For example, the electronic device 1000 may determine an image filter for each of the at least one object and background in order to display different image effects on each of the at least one object and background in the first image. According to an embodiment of the present disclosure, the electronic device 1000 may also determine the image filters by using a second neural network that outputs the image filters to be applied to the first image when the feature information for each of the at least one object and the background is input.
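- Purely as a shape-level sketch of such a second neural network, the following assumes the feature information has already been encoded as a fixed-length vector; the encoding, layer sizes, and filter list are all assumptions, not details from this disclosure:

```python
import torch
import torch.nn as nn

FILTERS = ["sharpen", "outline", "ink_painting", "cartoon", "blur"]  # assumed list

second_net = nn.Sequential(          # hypothetical filter-selection head
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, len(FILTERS)),
)

feature_vec = torch.randn(1, 16)     # stands in for encoded region feature information
filter_idx = second_net(feature_vec).argmax(dim=1).item()
print(FILTERS[filter_idx])           # filter chosen for this region
```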
- the electronic device 1000 may generate a second image by applying the determined image filters to each of at least one object and background.
- the electronic device 1000 may store the generated second image in a memory of the electronic device 1000 or a server or other electronic device connected to the electronic device 1000.
- the electronic device 1000 according to the present disclosure may apply a plurality of overlapping image filters to each detected object area and background area. For example, a sharpening filter and a contour filter may be applied in an overlapping manner to the first object region 112, and an ink-painting filter and a black-and-white filter may be applied to the second object region 114.
- the electronic device 1000 may adjust the size and coverage of the determined image filter for each of the at least one object and the background, and may generate a second image by applying the size- and coverage-adjusted image filters to the at least one object region and the background region in the first image.
- the electronic device 1000 may generate a second image, and then obtain feedback from the user on the generated second image.
- the electronic device 1000 may re-train the second neural network by refining the weights of the layers in the second neural network and the connection strengths between the layers based on the obtained feedback.
- FIG. 3 is a diagram for describing a method of determining location information of at least one object area and a background area included in a first image by an electronic device according to an embodiment.
- the electronic device 1000 may acquire a first image and resize the obtained first image to a predefined size.
- the electronic device 1000 may generate grid cells 402 of a predetermined size by dividing the resized first image.
- the electronic device 1000 according to the present disclosure may resize the obtained first image to the size required by the first neural network in order to determine location information for detecting at least one object region and a background region from the first image.
- the electronic device 1000 may generate 40 grid cells, but is not limited thereto.
- the electronic device 1000 may generate a predetermined number of boundary cells 406 and 409 dependent on each grid cell by using the first neural network with the first image as an input, and may obtain the center coordinates of each of the generated boundary cells 406 and 409 and the probability that at least one object is included in each boundary cell. According to an embodiment, the electronic device 1000 may generate two boundary cells for each of the generated grid cells, and the number of boundary cells dependent on each grid cell may be changed.
- the generated boundary cells may be identified using information on the center coordinates of the boundary cells and the probability that an image corresponding to at least one object (e.g., pixels representing pixel values of the image corresponding to the object) exists in each boundary cell.
- the electronic device 1000 may detect, as object regions 404, 408, and 410, boundary cells having the highest probability that an image corresponding to the object exists among the boundary cells generated using the first neural network.
- among the boundary cells 404, 406, and 409 overlapping an image corresponding to at least one object existing in the first image (e.g., an image representing a cookie), the electronic device 1000 may detect the boundary cell 404 having the highest probability that an object exists as an object area.
- based on whether the probability that an image (for example, pixels) corresponding to at least one object exists in each of the boundary cells 404, 406, and 409 is equal to or greater than a preset threshold, the electronic device 1000 may remove the boundary cells 406 and 409 whose probability is below the threshold.
- the electronic device 1000 may use a non-maximum suppression (NMS) algorithm to remove boundary cells in which the probability that an image corresponding to an object exists is below a preset threshold.
- by repeatedly detecting, as an object region, the boundary cell having the highest probability that an object exists among the boundary cells including images corresponding to at least one object existing in the first image, the electronic device 1000 may detect at least one object region from the first image.
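- The selection-and-removal loop described above is essentially non-maximum suppression. A minimal sketch, with an assumed box format (x1, y1, x2, y2) and assumed probability and overlap thresholds:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    # Drop boundary cells whose object probability is below the threshold,
    # then repeatedly keep the highest-probability cell and discard cells
    # that overlap it heavily.
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]

boxes = [(10, 10, 110, 110), (15, 12, 118, 112), (200, 50, 300, 160)]
scores = [0.92, 0.85, 0.40]
print(nms(boxes, scores))   # -> [(10, 10, 110, 110)]
```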
- the electronic device 1000 may determine location information for determining an object region by using boundary cells corresponding to at least one object present in the first image.
- the electronic device 1000 may set an origin for setting coordinates in the first image, and may obtain the center pixel coordinates of the detected boundary cells and the boundary pixel coordinates of the boundary cells based on the set origin.
- the electronic device 1000 may determine the location information of the object regions using the center pixel coordinates and boundary pixel coordinates of the boundary cell set for each of the at least one object in the first image.
- the electronic device 1000 may determine the location information of the background region by using the center pixel coordinates and boundary pixel coordinates of the boundary cell corresponding to each of the at least one object in the first image and the boundary pixel coordinates of the entire first image.
- FIG. 4 is a diagram for describing a method for an electronic device to acquire feature information from an image, according to an embodiment.
- the electronic device 1000 may detect at least one object region (322, 323, 324, 326, 328, 329, and 332) from the first image using the determined location information of the object regions, and may obtain feature information on an object by using at least one of location information indicating the location of the object regions and information on the type of object included in the detected object regions. Also, the electronic device 1000 may acquire feature information about the background using at least one of location information indicating the location of the background region and information on the type of background included in the detected background region. The electronic device 1000 may detect at least one object and background from the first image based on the feature information of each of the at least one object and the background.
- a method of acquiring feature information for each object and background will be described in detail.
- the electronic device 1000 may detect the at least one object region using the first neural network and the location information of the at least one object region (322, 323, 324, 326, 328, 329, and 332), and may determine the types of objects included in the detected object areas.
- the electronic device 1000 may obtain feature information about the object using the location information of the detected object regions and information on the type of objects included in the object regions.
- the electronic device 1000 may mask the detected object regions, using the location information of the object regions, in order to distinguish the detected object regions from the regions in which no object was detected. For example, the electronic device 1000 may divide the first image into two areas, the detected at least one object area and the area in which no object was detected, and mask them separately. In the present specification, the masking of the at least one object region detected by the electronic device 1000 may correspond to binarizing the pixel values of the pixels included in the detected at least one object region.
- the electronic device 1000 may detect the background area by removing the at least one masked object area from the first image. That is, the electronic device 1000 may detect the object regions based on the location information of the at least one object region, and remove the detected object regions from the first image to detect the background region from the first image. According to another embodiment, the electronic device 1000 may determine location information for determining the background region using the location information of the at least one masked object region and the coordinates of all boundary pixels of the first image, and may detect the background region using the determined location information.
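- A minimal sketch of this binary masking and background removal, assuming rectangular object boxes and using zeroed pixels to stand in for the removed areas:

```python
import numpy as np

def split_object_background(image, object_boxes):
    """Binary-mask object boxes, then zero them out to leave the background."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x, y, w, h in object_boxes:
        mask[y:y+h, x:x+w] = 1                      # binarized object mask
    objects = image * mask[..., None]               # object pixels only
    background = image * (1 - mask)[..., None]      # object areas removed
    return objects, background

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in image
objects, background = split_object_background(image, [(100, 50, 80, 120)])
```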
- the electronic device 1000 may determine location information indicating the location of each of the at least one object region and the background region in the obtained first image, and may directly detect the object region and the background region from the first image using the determined location information of each region. That is, rather than removing the detected at least one object area from the first image, the electronic device may detect the at least one object area and the background area from the first image based on the location information of each of the at least one object area and the background area.
- the electronic device 1000 according to the present disclosure may determine the type of background included in the detected background area based on the location information of the background area.
- the electronic device 1000 may obtain feature information on the background using the type of background included in the determined background region and location information of the background region.
- the electronic device 1000 may determine information on the types of the at least one object and the background included in the at least one object area and the background area in the obtained first image, and may obtain feature information by using at least one of the determined information on the types of the at least one object and background and the location information of the determined at least one object area and background area.
- Feature information according to the present disclosure may be obtained for each of at least one object and background.
- the feature information on the background may include at least one of location information on the background region and information on the type of background included in the background region, and the feature information on the object may include at least one of location information on the object region and information on the type of object included in the object region.
- the electronic device 1000 may generate a feature information table 380 using feature information acquired for each of at least one object and background.
- the feature information table 380 generated by the electronic device 1000 may include categories for the index 382, the image 384, the type 386, and the location information 388.
- the category for the index 382 indicates an identification number for distinguishing the detected objects and background, and the category for the image 384 indicates information about the pixel values of the pixels representing the at least one object and background.
- the category for the type 386 indicates the type of object or background included in each of the at least one object area and the background area, and the category for the location information 388 indicates the location information for each of the at least one object area and the background area.
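- One possible in-memory representation of the feature information table 380, with field names that are assumptions rather than terms from this disclosure:

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class FeatureInfo:
    index: int                            # identification number of the detected region
    pixels: Any                           # pixel values of the region (e.g., an ndarray)
    kind: str                             # type of the object or background, e.g., "person"
    location: Tuple[int, int, int, int]   # (x, y, width, height) of the region

table = [
    FeatureInfo(0, None, "cookie", (120, 80, 60, 60)),
    FeatureInfo(1, None, "background", (0, 0, 640, 480)),
]
```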
- FIG. 5 is a flowchart illustrating a method for an electronic device to detect at least one object area and a background area from an image, according to an embodiment.
- the electronic device 1000 may generate a plurality of grid cells by dividing the acquired first image. For example, the electronic device 1000 may resize the obtained first image and divide the resized first image to generate grid cells of a predetermined size. According to an embodiment, the electronic device 1000 may generate a plurality of grid cells by inputting the obtained first image to the first neural network.
- the electronic device 1000 may generate a plurality of boundary cells depending on the generated grid cells.
- the boundary cells generated by the electronic device 1000 are dependent on the generated grid cells, and may indicate a probability that an image corresponding to at least one object exists in the boundary cells.
- the electronic device 1000 may detect at least one object region from the first image using the location information of the object region.
- the electronic device 1000 may detect the background area by removing the detected object area from the first image. That is, the electronic device 1000 may first determine location information for determining an object region in the acquired first image, and detect an object region and a background region from the first image based on the determined location information.
- alternatively, the electronic device 1000 may first determine location information for determining both the object region and the background region in the acquired first image, and may detect the object region and the background region from the first image based on the determined location information.
- FIG. 6 is a diagram for explaining a structure of a neural network used by an electronic device according to an embodiment.
- the first neural network used by the electronic device 1000 according to the present disclosure to obtain feature information for each of the at least one object and background in a first image may include at least one convolution layer that extracts convolutional features through convolution operations, a fully connected layer that is connected to one end of the convolution layers and outputs information on the types of the objects and background respectively included in the detected at least one object region and background region, a fully connected layer that outputs the location information of the at least one object region and the background region, and a fully connected layer that outputs the masked at least one object region and background region.
- the first neural network may further include pooling layers arranged alternately with the convolution layers, in addition to the convolution layers and the fully connected layers.
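- A rough PyTorch sketch of this layout, with alternating convolution and pooling layers feeding three fully connected heads (type, location, mask); every layer size and class count below is a placeholder, not a value from this disclosure:

```python
import torch
import torch.nn as nn

class FirstNetwork(nn.Module):
    def __init__(self, num_types=10, grid=7):
        super().__init__()
        # Alternating convolution and pooling layers.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * 56 * 56                            # for a 224x224 input
        self.type_head = nn.Linear(flat, num_types)    # object/background type
        self.loc_head = nn.Linear(flat, 4)             # (x, y, w, h) location
        self.mask_head = nn.Linear(flat, grid * grid)  # coarse region mask

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.type_head(z), self.loc_head(z), self.mask_head(z)

net = FirstNetwork()
types, locs, masks = net(torch.randn(1, 3, 224, 224))
```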
- the electronic device 1000 may determine the filters to be applied to each of the at least one object and the background in the first image by inputting the feature information for each of the at least one object area and the background area to the second neural network.
- the electronic device may use a plurality of neural network models to generate a second image by applying image filters representing different image effects to each of the at least one object and background included in the acquired first image, or may generate the second image using a single neural network model. That is, the first neural network and the second neural network used by the electronic device 1000 may be implemented as a single neural network model.
- the neural network model used by the electronic device 1000 to generate the second image is an artificial intelligence model that operates to process input data according to a predefined operation rule stored in the memory, and may be composed of a plurality of neural network layers.
- each of the plurality of neural network layers has a plurality of weight values, and performs a neural network operation through calculation between the calculation result of the previous layer and the plurality of weights.
- the plurality of weights of the plurality of neural network layers may be optimized by the training results of the artificial intelligence model. For example, the plurality of weights may be updated such that a loss value or a cost value obtained from the artificial intelligence model is reduced or minimized during the training process.
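- The weight-update rule described above is the ordinary gradient-descent training loop; a generic illustration with a stand-in model and random data:

```python
import torch

model = torch.nn.Linear(4, 2)                      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for step in range(100):
    x, target = torch.randn(8, 4), torch.randn(8, 2)
    loss = loss_fn(model(x), target)               # loss value to be reduced
    optimizer.zero_grad()
    loss.backward()                                # gradients w.r.t. each weight
    optimizer.step()                               # update weights to reduce the loss
```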
- the artificial neural network may include a deep neural network (DNN), for example, a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), or deep Q-networks, but is not limited to the above-described examples.
- FIG. 7 is a flowchart illustrating a method for an electronic device to acquire an image according to another embodiment.
- the electronic device 1000 may determine a photographing mode for the electronic device photographing the first image by using feature information for each of the background and at least one object in the first image.
- the electronic device 1000 may determine the photographing mode for photographing the first image using a neural network trained based on a pre-trained photographing model.
- the electronic device 1000 may determine a shooting mode suitable for photographing the first image based on the feature information of the object and the background included in the first image, and the shooting modes according to the present disclosure may include a close-up mode, text mode, landscape mode, night view mode, tripod mode, sports mode, night portrait mode, and backlight mode, but are not limited thereto.
- the shooting mode according to the present disclosure may further include a multi-shooting mode in which one or more shooting modes are applied together.
- the electronic device 1000 may determine photographing parameters determined according to the determined photographing mode.
- the shooting parameters according to the present disclosure may include the aperture value, sensitivity, shutter speed, and the like, which are adjustable when the electronic device 1000 photographs an image.
- the electronic device 1000 may determine a plurality of shooting modes for photographing the first image, and may determine shooting parameters according to the determined plurality of shooting modes.
- in step S740, the electronic device 1000 may determine the relative positional relationship between the at least one object and the background using the feature information for each of the at least one object and the background in the first image, and may analyze the composition of the first image based on the determined relative positional relationship.
- the electronic device 1000 may analyze the composition of the first image using the feature information about each of the at least one object and the background in the first image and the shooting parameters determined according to the shooting mode of the electronic device.
- the composition of the image according to the present disclosure may include, but is not limited to, golden-ratio composition, horizontal composition, vertical composition, rule-of-thirds composition, vanishing-point composition, and the like. The method in which the electronic device 1000 applies a filter using the analyzed composition of the first image will be described in detail with reference to FIG. 8.
- the electronic device 1000 may determine information about a light source in the first image. For example, when the proportion of the background area in the first image is equal to or greater than a preset threshold, the electronic device 1000 may determine the information about the light source in the first image based on the pixel values of the pixels in the first image.
- the information about the light source may include information about the light source center and the light source boundary.
- the electronic device 1000 may determine the pixel having the greatest brightness value among the pixels in the first image using the pixel values of the pixels in the first image, and may determine the coordinates of the pixel having the greatest brightness value as the center of the light source.
- the electronic device 1000 may detect sets of pixels having a predetermined pixel value or more among the pixels in the first image using the pixel values of the pixels in the first image, and may determine the coordinates of the pixels located at the boundary of the detected pixel sets as the light source boundary.
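- A minimal sketch of this light source estimation, using a grayscale brightness reading; the brightness threshold and input file name are assumptions:

```python
import cv2
import numpy as np

# The brightest pixel becomes the light source center; pixels at or above
# an assumed brightness threshold form the light source set whose extreme
# coordinates approximate the light source boundary.
gray = cv2.cvtColor(cv2.imread("first_image.jpg"), cv2.COLOR_BGR2GRAY)

center = np.unravel_index(np.argmax(gray), gray.shape)   # (row, col) of max brightness
bright = gray >= 230                                      # assumed threshold
ys, xs = np.nonzero(bright)
if xs.size:                                               # at least one bright pixel
    boundary = (xs.min(), ys.min(), xs.max(), ys.max())   # crude bounding box
```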
- in step S780, the electronic device 1000 may acquire additional information. The additional information acquired by the electronic device 1000 according to the present disclosure will be described in detail with reference to FIG. 11.
- the electronic device 1000 may determine the image filters to be applied to each of the at least one object and the background based on at least one of the determined shooting mode, the composition of the first image, the information about the light source in the first image, and the additional information. That is, the electronic device 1000 may determine the image filters using only the shooting parameters according to the determined shooting mode, using only the composition of the first image, using only the information about the light source in the first image, or based only on the obtained additional information. According to another embodiment, the electronic device 1000 may determine the image filters using all of the photographing parameters according to the determined photographing mode, the composition of the first image, the information about the light source in the first image, and the additional information.
- FIG. 8 is a diagram for describing a method in which an electronic device applies a filter based on a composition of an image, according to an embodiment.
- the electronic device 1000 may apply an image filter to each of the detected at least one object.
- the electronic device 1000 may apply a black-and-white filter only to building 2 (814), which is an object detected in the input image 802, to show a black-and-white effect in the area where building 2 is located.
- the electronic device 1000 may analyze the composition of the first image based on the feature information, and may determine, using the analyzed composition of the first image, the filter to apply to each of the at least one object and the background in the first image.
- the electronic device 1000 may obtain feature information for each of building 1 (813), building 2 (814), and the background from the input image 802.
- the electronic device 1000 may generate a plurality of reference lines (e.g., vanishing lines 816, 818, 820, and 822) for determining the composition of the input image using the feature information of each of the detected building 1 (813), building 2 (814), and background, and may divide the input image 802 using the generated plurality of reference lines to generate a plurality of divided regions 832 and 836.
- the reference lines according to the present disclosure may converge to at least one vanishing point in the first image.
- the electronic device 1000 may generate a plurality of divided regions by dividing the input image 802 using the plurality of reference lines 816, 818, 820, and 822, and may apply a contour filter only to the fourth divided area 836 among the generated plurality of divided areas.
- referring to the output image 808, the electronic device 1000 may generate a plurality of divided areas by dividing the input image 802 using the plurality of reference lines, and may apply a sharpening filter to sharpen the shapes of the images included in the first divided region 832 and the second divided region 836 among the generated plurality of divided areas.
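- The disclosure derives its reference lines from feature information; purely as a generic stand-in, candidate lines can be obtained with a probabilistic Hough transform and used to mask a divided region. The file name and all thresholds are assumptions:

```python
import cv2
import numpy as np

image = cv2.imread("input_image.jpg")                      # hypothetical input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=80, maxLineGap=10)

if lines is not None:
    x1, y1, x2, y2 = lines[0][0]                           # one candidate reference line
    h, w = gray.shape
    pts = np.array([[x1, y1], [x2, y2], [w, h], [0, h]], dtype=np.int32)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)                         # area below the line
    region = cv2.bitwise_and(image, image, mask=mask)      # one divided region
```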
- FIG. 9 is a diagram for explaining a method of applying a filter using information on a light source in an image by an electronic device according to an embodiment.
- the electronic device 1000 may determine information about a light source in the first image and determine an image filter to be applied to the first image using the determined information. For example, the electronic device 1000 may detect the background area 902 from the first image and determine the ratio of the detected background area 902 to the first image. When the ratio of the area occupied by the background area in the acquired first image is equal to or greater than a preset threshold (e.g., 50% or more), the electronic device 1000 may determine the information about the light source in the first image based on the pixel values of the pixels in the first image.
- the information about the light source may include information about the light source center and the light source boundary.
- the electronic device 1000 may use the pixel values of the pixels in the first image to determine the pixel with the greatest brightness value among the pixels in the first image, and may determine the coordinates of that pixel as the center of the light source. Also, the electronic device 1000 may detect sets of pixels having a predetermined pixel value or more among the pixels in the first image, and may determine the coordinates of the pixels located at the boundary of the detected pixel sets as the light source boundary.
- the electronic device 1000 may determine a light source region using the determined light source center and light source boundary.
- the electronic device 1000 may generate a second image by applying an image filter to the light source region determined based on the light source information.
- the electronic device 1000 may apply different image filters for each light source area determined based on the light source information, or may apply different image filters according to the distance from the center of the light source.
- the electronic device 1000 may determine, in the background region detected from the first image, a light source region 932 having a square shape, an upper region 934 including the light source region 932, and a lower region 936 not including the light source region 932, and may apply different image filters to each region.
- the electronic device 1000 may apply, to the light source region 932 and the upper region 934, an image filter for reducing the brightness values of the pixels included in those regions, and may apply, to the lower region 936, which mainly includes pixels having low brightness values because it contains relatively little of the light source, an image filter for increasing the brightness values of the pixels included in the lower region 936.
- the electronic device 1000 may apply a filter showing different image effects for each distance away from the center of the light source to the background area.
- the electronic device 1000 may apply an image filter for reducing the brightness value by -3 to the light source region 942 having the smallest radius from the light source center as the origin, and may apply an image filter for reducing the brightness value by -1 to the light source region having the largest radius from the light source center as the origin.
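- A minimal sketch of this distance-dependent adjustment: the -3 delta for the innermost ring and -1 for the outermost come from the description above, while the ring radii and the intermediate -2 step are assumptions:

```python
import numpy as np

def radial_dim(gray, center, radii=(50, 100, 150), deltas=(-3, -2, -1)):
    """Darken concentric rings around the light source center."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    out = gray.astype(np.int16)
    inner = 0.0
    for radius, delta in zip(radii, deltas):
        ring = (dist >= inner) & (dist < radius)
        out[ring] += delta              # strongest reduction nearest the center
        inner = radius
    return np.clip(out, 0, 255).astype(np.uint8)
```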
- FIG. 10 is a flowchart illustrating a method for an electronic device to determine a filter using information about a light source in an image, according to an embodiment.
- in step S920, the electronic device 1000 may detect a background area from the acquired first image. Since step S920 may correspond to step S240 of FIG. 2, a detailed description will be omitted.
- the electronic device 1000 may determine whether the ratio occupied by the detected background area is greater than or equal to a threshold. For example, the electronic device 1000 may determine whether the ratio occupied by the detected background area is greater than or equal to the threshold value by using the ratio of the number of pixels included in the background area to the total number of pixels included in the first image. According to an embodiment, when the proportion of the first image occupied by the background area is 50% or more, the electronic device 1000 may determine the information about the light source in the first image based on the pixel values of the pixels in the first image.
- the electronic device 1000 may generate a histogram for the brightness value based on the pixel values of the pixels in the first image.
- the electronic device 1000 may detect the light source region from the first image using the generated histogram. For example, the electronic device 1000 may generate a histogram of the brightness values of the pixel coordinates using the pixel values of the pixels in the first image, and may determine the light source center and the light source boundary using the brightness values indicated in the generated histogram. The electronic device 1000 may then determine the light source area using the determined light source center and light source boundary.
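- A minimal histogram-based version of this step, where the cut-off brightness separating light source pixels is taken from the cumulative histogram; the 99% level and file name are assumptions:

```python
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("first_image.jpg"), cv2.COLOR_BGR2GRAY)

# 256-bin histogram of brightness values.
hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()

# Take the brightness level below which 99% of pixels fall as the
# cut-off; pixels at or above it are treated as light source pixels.
cdf = np.cumsum(hist) / hist.sum()
cutoff = int(np.searchsorted(cdf, 0.99))
light_mask = gray >= cutoff
```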
- the electronic device 1000 may determine an image filter representing different image effects based on at least one of the determined light source area and a distance away from the light source center. Step S980 may correspond to the operation of the electronic device 1000 of FIG. 9, so a detailed description thereof will be omitted.
- FIG. 11 is a diagram illustrating an example of additional information used by an electronic device according to an embodiment.
- the electronic device 1000 may acquire additional information related to the first image, and may further use the obtained additional information to determine the image filters to apply to the at least one object and background in the first image.
- the additional information acquired by the electronic device 1000 according to the present disclosure may include information 1021 about the time when the first image was taken, information 1022 about the place where the first image was taken, information 1023 about the direction of the electronic device that photographed the first image, and the like.
- additional information acquired by the electronic device 1000 may be previously stored in a table format in a memory in the electronic device 1000, or may be obtained in the form of metadata attached to the first image. According to an embodiment, the information 1021 about the time when the first image was taken may be obtained using a time stamp attached to the first image. The information 1023 about the direction of the electronic device that photographed the first image according to the present disclosure may indicate whether the obtained first image was captured using the front camera or the rear camera included in the electronic device 1000. Also, the preference filter information 1027 of the electronic device user may indicate the filters mainly used and preferred by the user of the electronic device 1000 for each object and background.
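- A sketch of reading this kind of metadata (e.g., the capture time stamp) from an image file with Pillow; tag availability depends on the file, and the file name is hypothetical:

```python
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("first_image.jpg")            # hypothetical input file
exif = img.getexif()
info = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
taken_at = info.get("DateTime")                # time stamp attached to the image
print(taken_at)
```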
- the operation control information of the electronic device may indicate whether the electronic device 1000 automatically or manually applies the determined image filter to the first image.
- the electronic device 1000 may automatically apply the determined image filters to each of at least one object and background included in the acquired first image.
- when the operation control information of the electronic device is manual, the electronic device 1000 may provide candidate filter images applicable to each of the at least one object and background included in the first image, and may manually apply the image filters based on a user input selecting at least one candidate image filter among the provided candidate filter images.
- FIG. 12 is a diagram for explaining a method of applying a filter using additional information by an electronic device according to an embodiment.
- the electronic device 1000 may acquire a first image and additional information related to the first image, and may apply image filters to each of the at least one object and the background in the first image using the obtained additional information. Additional information according to the present disclosure may be combined with the first image in a metadata format. For example, information about the subject including the object and the background, information 1073 about the time when the first image was captured, information 1074 about the place where the first image was captured, and information 1075 about the weather of that place at the time of capture may be combined with the first image acquired by the electronic device 1000.
- the electronic device 1000 may detect at least one object and a background from the first image using a neural network trained in advance in the electronic device, and may determine image filters based on feature information 1077 for each of the detected object and background, composition information 1078 about the composition of the first image, information 1079 about the shooting mode of the electronic device 1000 that captured the first image, and information 1081 about the light source in the first image.
- the electronic device 1000 may further acquire additional information including the user's filter modification history 1084, and may generate a second image by applying image filters to each object and the background in the first image using at least one of the acquired additional information, the feature information 1077, the composition information 1078, the shooting mode information 1079, and the light source information 1081.
- the electronic device 1000 may generate a second image by applying a bright filter 1085 to the face portion of the person in the first image, a cartoon filter 1087 to the hat portion, and a blur (out-focus) filter to the background portion.
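- The per-region application can be pictured as mask-based compositing. The sketch below is illustrative only: the three masks are assumed to come from the segmentation step, and posterize and Gaussian blur stand in for the cartoon and out-focus effects:

```python
import numpy as np
from PIL import Image, ImageFilter, ImageOps

def compose_second_image(first: Image.Image, face_mask: np.ndarray,
                         hat_mask: np.ndarray, bg_mask: np.ndarray) -> Image.Image:
    """Masks are H x W boolean arrays from the object/background detection."""
    base = np.asarray(first).astype(np.float32)
    bright = np.clip(base * 1.3, 0, 255)                            # "bright" filter stand-in
    cartoon = np.asarray(ImageOps.posterize(first, 3), np.float32)  # crude cartoon effect
    blurred = np.asarray(first.filter(ImageFilter.GaussianBlur(8)), np.float32)
    out = base.copy()
    out[face_mask] = bright[face_mask]   # 1085: face portion
    out[hat_mask] = cartoon[hat_mask]    # 1087: hat portion
    out[bg_mask] = blurred[bg_mask]      # background portion
    return Image.fromarray(out.astype(np.uint8))
```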
- FIG. 13 is a diagram for explaining a method by which an electronic device uses additional information to control its operation, according to an embodiment.
- In step S1310, the electronic device 1000 may obtain additional information about the first image. Since the additional information acquired in step S1310 may correspond to the information included in the additional information table of FIG. 11, a detailed description is omitted.
- In step S1320, the electronic device 1000 may determine whether the electronic device control information is included in the acquired additional information.
- the electronic device control information acquired by the electronic device 1000 in S1320 may correspond to the electronic device control information (camera option, 1026) included in the additional information table of FIG. 11, so a detailed description thereof will be omitted.
- In step S1330, when the electronic device control information is included in the acquired additional information, the electronic device 1000 may determine whether the electronic device control information is set to manual.
- In step S1340, when the electronic device control information included in the additional information is manual, the electronic device 1000 may display candidate image filters to be applied to each of the at least one object and the background in the first image.
- the electronic device 1000 may generate a second image by applying an image filter to each of the background and at least one object in the first image, based on a user input selecting candidate image filters provided on the display.
- In step S1350, when the electronic device control information included in the additional information is automatic, the electronic device 1000 may automatically apply image filters to each of the at least one object and the background in the first image. That is, when the electronic device control information is automatic, the electronic device 1000 may generate the second image without additional user input.
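- The branch in steps S1330 through S1350 can be summarized as follows (a sketch under assumed names; `device` and its methods are hypothetical stand-ins for the operations described above):

```python
def handle_first_image(first_image, additional_info: dict, device):
    control = additional_info.get("camera_option")   # electronic device control info
    filters = device.determine_filters(first_image)  # per object/background filters
    if control == "AUTO":                            # step S1350: apply automatically
        return device.apply_filters(first_image, filters)
    if control == "MANUAL":                          # step S1340: show candidates
        device.display_candidates(filters)
        chosen = device.wait_for_user_selection()
        return device.apply_filters(first_image, chosen)
    return first_image                               # no control info in the metadata
```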
- FIG. 14 is a diagram for describing a method in which an electronic device manually or automatically applies a filter using additional information, according to an embodiment.
- the electronic device 1000 may acquire the input image 1340 and additional information 1342 in which the camera option is set to automatic.
- When the camera option included in the additional information is automatic, the electronic device 1000 according to the present disclosure may generate an output image 1350 by automatically applying image filters representing different image effects to each of the background and the at least one object included in the first image.
- the electronic device 1000 may acquire the input image 1360 and additional information 1362 in which the camera option is set to manual.
- the electronic device 1000 may provide, on the display, candidate image filters representing different image effects for each of the background and the at least one object included in the first image, and may generate a second image by applying the image filters to each of the background and the at least one object based on a user input selecting at least one of the provided candidate image filters.
- FIG. 15 is a diagram for describing a method for an electronic device to determine a filter according to another embodiment.
- the electronic device 1000 may provide, on the display, candidate image filters to be applied to each of the background and the at least one object in the acquired first image by using at least one neural network model.
- According to an embodiment, a neural network model may be trained in advance to output at least one image filter when characteristic information is input, the characteristic information including information 1540 about the types of the objects and the background included in the acquired first image and location information of the objects and the background, and at least one candidate image filter may be provided using the pre-trained neural network model.
- According to another embodiment, the electronic device 1000 may further acquire additional information 1550 about the first image in addition to the first image; a neural network model may be trained in advance to output an image filter to be applied to each of the objects and the background in the first image when the characteristic information for each of the at least one object and the background in the acquired first image and the additional information are input, and at least one candidate image filter may be provided using the pre-trained neural network model.
- the electronic device 1000 according to the present disclosure may generate a plurality of categories 1570 distinguished according to at least one of the information about the types of the objects and the background included in the first image, the location information of the objects and the background, and the additional information 1550, and may provide different candidate image filters for each category by using a neural network model trained in advance for each of the generated categories.
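- One way to read this category mechanism (the category keys, filter names, and `FilterModel` class are invented for the sketch; only the idea of one pre-trained model per category 1570 comes from the text):

```python
from typing import Dict, List, Tuple

class FilterModel:
    """Stand-in for a neural network trained for one category."""
    def __init__(self, filters: List[str]):
        self.filters = filters

    def predict(self, features: dict) -> List[str]:
        return self.filters  # a real model would rank filters from the features

category_models: Dict[Tuple[str, str], FilterModel] = {
    ("person", "outdoor"): FilterModel(["bright", "out-focus"]),
    ("food", "indoor"): FilterModel(["warm", "sharpen"]),
}

def candidate_filters(obj_type: str, bg_type: str, features: dict) -> List[str]:
    model = category_models.get((obj_type, bg_type))
    return model.predict(features) if model else []
```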
- FIG. 16 is a diagram for describing a method of generating a second image using a server according to an embodiment.
- the electronic device 1000 may determine image filters to be applied to each of the at least one object and the background in the first image by using a neural network mounted on the server 2000 connected to the electronic device 1000 by wire or wirelessly.
- In step S1610, the electronic device 1000 may transmit the acquired first image to the server. That is, when the first image is obtained, the electronic device 1000 may form a communication link with the server 2000 including the first and second neural networks, and may transmit the first image to the server 2000 using the formed communication link.
- In step S1620, when the first image is input, the server 2000 may obtain feature information for each object and the background in the first image by using the first neural network, which outputs feature information for identifying an object area including an image corresponding to the at least one object and a background area including an image corresponding to the background.
- the server 2000 may detect at least one object and background in the first image based on the acquired feature information.
- the server 2000 may analyze the first image based on the feature information for each of the detected at least one object and the background. For example, when the feature information is input, the server 2000 according to the present disclosure may analyze the first image transmitted from the electronic device 1000 by using a second neural network that outputs image filters to be applied to the first image.
- the operation of analyzing the first image by the server 2000 according to the present disclosure may include an operation of analyzing the composition of the first image and an operation of determining information about a light source in the first image, and may further include an operation of determining the shooting mode of the electronic device that captured the first image.
- the server 2000 may determine image filters representing different image effects to be applied to each object and background in the first image, based on the analysis result of the first image. In step S1660, the server 2000 may transmit information on the determined image filters to the electronic device 1000.
- the electronic device 1000 may apply image filters representing different image effects to each of the at least one object and the background in the first image by using the information on the image filters received from the server 2000.
- the electronic device 1000 may generate a second image by applying filters representing different image effects to each of at least one object and background in the first image.
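- The round trip of FIG. 16 might look as follows on the device side (a sketch only: the endpoint, payload shape, and `device` helper are assumptions, not part of the disclosure):

```python
import requests

SERVER_URL = "https://example.com/filters"  # hypothetical server endpoint

def generate_second_image(device, first_image_path: str):
    # Step S1610: transmit the first image over the communication link.
    with open(first_image_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f})
    resp.raise_for_status()
    # Steps S1620-S1650 run on the server; S1660 returns one filter per region.
    filters = resp.json()["filters"]  # e.g. {"object_0": "bright", "background": "blur"}
    # Final step: apply the received filters on-device to produce the second image.
    return device.apply_filters(first_image_path, filters)
```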
- FIG. 17 is a diagram for describing a method of generating a second image using a server according to another embodiment.
- the electronic device 1000 may acquire feature information for each of the at least one object and the background in the first image by using a first neural network mounted in the electronic device 1000, and may determine image filters to be applied to each of the at least one object and the background in the first image by using a second neural network mounted on the server 2000 connected to the electronic device 1000 by wire or wirelessly.
- the electronic device 1000 may obtain feature information for each of the at least one object and the background in the first image by using the first neural network, which outputs feature information for identifying at least one object area and a background area.
- the electronic device 1000 may detect at least one object and background in the first image based on the acquired feature information.
- the electronic device 1000 may transmit information on the detected at least one object and background to the server 2000.
- when the feature information for each of the at least one object and the background in the acquired first image is output from the first neural network, the electronic device 1000 may form a communication link with the server 2000 including the second neural network, and may transmit information on the at least one object and the background detected based on the feature information to the server 2000 using the formed communication link.
- the server 2000 may analyze the first image using a second neural network that outputs image filters to be applied to the first image.
- the operation of analyzing the first image by the server 2000 according to the present disclosure may include determining the shooting mode of the electronic device that captured the first image, analyzing the composition of the first image, and determining information about the light source in the first image.
- the server 2000 may determine image filters to be applied to each of the at least one object and the background in the first image based on the analysis result of the first image. For example, the server 2000 may determine the filters based on at least one of the determined shooting mode, the composition of the first image, and the information about the light source included in the first image.
- the server 2000 may transmit information on the determined image filters to the electronic device 1000.
- the electronic device 1000 may apply image filters to each of the at least one object and the background in the first image by using the information on the image filters received from the server 2000.
- the electronic device 1000 may generate a second image by applying image filters to each of at least one object and background in the first image.
- FIG. 18 is a diagram for describing a method for an electronic device to analyze an image using a plurality of learning models, according to an embodiment.
- When the first image 1810 is input, the electronic device 1000 according to the present disclosure may detect at least one object and a background from the first image by using a pre-trained first neural network that outputs feature information for each of the at least one object and the background in the first image.
- the first neural network used by the electronic device 1000 to detect objects and backgrounds may be trained based on the object and background model 1822, and the object and background model 1822 may be updated based on information about the objects and backgrounds detected in the first image. That is, the electronic device 1000 according to the present disclosure may detect at least one object and a background from the first image using the first neural network, update the object and background model based on information about the detected at least one object and background, and retrain the first neural network by using the updated model to update the weights of the layers in the first neural network and the connection strengths between the layers.
- the electronic device 1000 may determine the photographing mode in which the first image was captured by using the second neural network, which takes as input the feature information for each of the at least one object and the background in the first image, and may determine image filters to be applied to the first image based on the photographing parameters determined according to the determined photographing mode.
- the second neural network used by the electronic device 1000 to determine the shooting mode may be trained based on the shooting model 1832, and the shooting model 1832 may be updated based on information about the shooting mode determined for the acquired first image.
- That is, the electronic device 1000 may determine the photographing mode for the first image using the second neural network, and may retrain the second neural network by using the shooting model updated based on the information about the determined photographing mode to update the weights of the layers in the second neural network and the connection strengths between the layers.
- the electronic device 1000 may determine the composition of the first image 1810 and information about the light source in the image by using the second neural network, which takes as input the characteristic information for each of the at least one object and the background in the first image.
- the second neural network that the electronic device 1000 uses to determine the composition of the first image and the information about the light source in the first image may be trained based on the composition and light source model 1842, and the composition and light source model 1842 may be updated based on the composition and the light source information determined for the first image.
- That is, the electronic device 1000 may determine the composition of the first image and the information about the light source included in the first image using the second neural network, and may retrain the second neural network by using the composition and light source model updated based on the determined composition and light source information to update the weights of the layers in the second neural network and the connection strengths between the layers.
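- In framework terms, the re-learning described above is ordinary fine-tuning: the updated model supplies fresh training pairs and the network weights are adjusted by gradient steps. A minimal PyTorch sketch, with arbitrary layer sizes standing in for the second neural network:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def retrain(batches):
    """batches: iterable of (features, labels) pairs built from the updated model data."""
    model.train()
    for features, labels in batches:
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()   # gradients for layer weights / connection strengths
        optimizer.step()  # weight update
```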
- FIG. 19 is a diagram for explaining learning data of a neural network used by an electronic device according to an embodiment.
- the neural network model used by the electronic device 1000 according to the present disclosure may be trained based on the original first image 1910 acquired by the electronic device 1000. It may also be trained based on the first image 1920 to which an image filter has been applied, in order to provide image filters better suited to the user's intention. In addition, since the model can be trained based on the first image 1930 in which the applied image filter has been modified, candidate image filters can be provided that reflect the intention of a user who modifies the candidate image filters suggested by the model.
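- The three kinds of training data in FIG. 19 suggest pairing rules like the following (the record keys are hypothetical; the preference for user-modified results reflects the stated goal of matching user intention):

```python
def build_training_pairs(records):
    """records: dicts with optional keys 'original' (1910), 'filtered' (1920),
    and 'user_modified' (1930)."""
    pairs = []
    for r in records:
        if r.get("user_modified") is not None:
            pairs.append((r["original"], r["user_modified"]))  # user intention wins
        elif r.get("filtered") is not None:
            pairs.append((r["original"], r["filtered"]))
        # originals alone can still be used, e.g. for detection training
    return pairs
```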
- FIGS. 20 and 21 are block diagrams for describing an electronic device according to an embodiment.
- the electronic device 1000 may include a display 1100, a processor 1300, a communication unit 1500, and a storage unit 1700.
- However, not all of the illustrated components are essential components.
- the electronic device 1000 may be implemented by more components than the illustrated components, and the electronic device 1000 may also be implemented by fewer components.
- the electronic device 1000 may include a sensing unit 1400, an A/V input unit 1600, and a memory 1700 in addition to the user input unit 1100, the output unit 1200, the processor 1300, and the communication unit 1500.
- the user input unit 1100 refers to a means for a user to input data for controlling the electronic device 1000.
- the user input unit 1100 may include a key pad, a dome switch, a touch pad (a contact capacitive type, a pressure resistive film type, an infrared sensing type, a surface ultrasonic conduction type, an integral tension measurement type, a piezo effect type, etc.), a jog wheel, a jog switch, and the like, but is not limited thereto.
- the user input unit 1100 may receive a user input for selecting at least one image filter among the candidate image filters that the electronic device 1000 provides on the display for application to the first image.
- the output unit 1200 may output an audio signal, a video signal, or a vibration signal, and may include a display unit 1210, an audio output unit 1220, and a vibration motor 1230.
- the display unit 1210 includes a screen for displaying and outputting information processed by the electronic device 1000. Also, the screen can display an image. For example, at least a portion of the screen may display at least a portion of the first image and a second image to which the at least one image filter is applied.
- the audio output unit 1220 outputs audio data received from the communication unit 1500 or stored in the memory 1700. Also, the sound output unit 1220 outputs sound signals related to functions (for example, call signal reception sound, message reception sound, and notification sound) performed by the electronic device 1000.
- the processor 1300 typically controls the overall operation of the electronic device 1000.
- the processor 1300 may execute programs stored in the memory 1700 to control the user input unit 1100, the output unit 1200, the sensing unit 1400, the communication unit 1500, the A/V input unit 1600, and the like as a whole.
- the processor 1300 may perform functions of the electronic device 1000 illustrated in FIGS. 1 to 20 by executing programs stored in the memory 1700.
- the processor 1300 may be composed of one or a plurality of processors, each of which may be a general-purpose processor such as a CPU, an AP, or a digital signal processor (DSP), a graphics-dedicated processor such as a GPU or a vision processing unit (VPU), or an artificial intelligence (AI)-dedicated processor such as an NPU. According to an embodiment, when the processor 1300 includes a general-purpose processor, an AI-dedicated processor, and a graphics-dedicated processor, the AI-dedicated processor may be implemented as a chip separate from the general-purpose processor or the graphics-dedicated processor.
- the electronic device 1000 may detect at least one object and a background in the first image, and may generate a second image representing different image effects by applying image filters to each of the detected at least one object and background, using at least one of an artificial intelligence (AI) processor, a graphics-dedicated processor, or a general-purpose processor.
- the electronic device 1000 may use a general-purpose processor for the general operations of the electronic device (e.g., obtaining the first image and applying image filters to the first image) and for displaying the generated second image on the display.
- In generating the second image, the electronic device 1000 may determine the necessary processing resources and, based on the determined processing resources, efficiently use at least one of a general-purpose processor, a graphics-dedicated processor, or an AI-dedicated processor.
- the processor 1300 may control the user input unit 1100 to receive a user's text, image, and video input.
- the processor 1300 may control the microphone 1620 to receive a user's voice input.
- the processor 1300 may execute an application that performs an operation of the electronic device 1000 based on the user input, and may control to receive the user input through the executed application.
- the processor 1300 may control to receive a user's voice input through the microphone 1620 by executing a Voice Assistant Application and controlling the executed application.
- the processor 1300 may control the output unit 1200 and the memory 1700 of the electronic device 1000 so that the first image and the second image are displayed.
- the processor 1300 may provide, on the display, candidate image filters to be applied to each of the at least one object and the background in the first image, and may control the output unit 1200 and the memory 1700 so that the first image before a candidate image filter is applied and the second image to which the candidate image filter is applied are displayed together.
- the processor 1300 may train an artificial intelligence model that detects at least one object and a background in the first image, determines an image filter for each of the at least one object and the background, and generates a second image by applying the image filters to each of the at least one object and the background in the first image.
- the processor 1300 may train the artificial intelligence model by using training data including image data before the image filter is applied or image data to which the image filter is applied. Also, the processor 1300 may train an artificial intelligence model based on an object and a background model, a shooting model, or a composition light source model previously stored in a memory or a database.
- the processor 1300 may acquire learning data for learning the artificial intelligence model from an input device in the electronic device or an external device that can communicate with the electronic device. For example, the processor 1300 may acquire original image data for training an artificial intelligence model or image data to which an image filter has been applied, from another electronic device or server connected to the electronic device. Also, the processor 1300 may receive an object and a background model, a shooting model, or a composition light source model for learning an artificial intelligence model from another electronic device or server connected to the electronic device.
- the processor may pre-process data acquired for learning the artificial intelligence model.
- the processor may process the acquired data in a preset format.
- the processor may select learning data for training the AI model according to predetermined criteria (e.g., the region where the training data was generated, the time at which the training data was generated, the size of the training data, the genre of the training data, the creator of the training data, the types of objects in the training data, etc.), and the method of selecting these criteria may itself be learned.
- one or a plurality of processors in the electronic device may control to process input data according to predefined operation rules or artificial intelligence models stored in the memory.
- the AI-only processor may be designed with a hardware structure specialized for processing a specific AI model.
- when the processor 1300 is implemented with a plurality of processors, or with a graphics-dedicated processor or an AI-dedicated processor such as an NPU, at least some of them may be mounted on the electronic device 1000 or on another electronic device or the server 2000 connected to the electronic device 1000.
- a predefined operation rule or an artificial intelligence model for operating the electronic device 1000 is created through learning. Here, being created through learning means that a basic artificial intelligence model is trained on a plurality of training data by a learning algorithm, thereby creating a predefined operation rule or an artificial intelligence model set to perform a desired characteristic (or purpose).
- Such learning may be performed on a device on which artificial intelligence according to the present disclosure is performed, or may be performed through a separate server and / or system. Examples of learning algorithms include supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but are not limited to the examples described above.
- the neural network model that the processor 1300 according to the present disclosure uses to generate the second image is an artificial intelligence model that processes input data according to a predefined operation rule stored in the memory, and may be composed of a plurality of trained neural network layers.
- the neural network models that the processor 1300 uses to generate the second image may be trained based on a plurality of learning models (the object and background model 1822, the shooting model 1832, and the composition and light source model 1842) stored in the memory 1700 or the server 2000.
- the processor 1300 may acquire a first image, acquire feature information for each of the at least one object and the background included in the acquired first image using a first neural network stored in the memory, and generate a second image by applying an image filter to each of the at least one object and the background in the first image using a second neural network that outputs at least one image filter to be applied to the first image when the feature information is input.
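- The two-stage pipeline the processor runs can be condensed as follows (the callables are placeholders for the stored first and second neural networks and the filter routine; their shapes are assumptions):

```python
def run_filter_pipeline(first_image, first_nn, second_nn, apply_filter):
    features = first_nn(first_image)   # feature info per object and background
    filters = second_nn(features)      # one image filter per detected region
    second_image = first_image
    for region, image_filter in filters.items():
        second_image = apply_filter(second_image, region, image_filter)
    return second_image
```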
- the sensing unit 1400 may detect a state of the electronic device 1000 or a state around the electronic device 1000 and transmit the sensed information to the processor 1300.
- the sensing unit 1400 may be used to generate some of the specification information of the electronic device 1000, status information of the electronic device 1000, surrounding environment information of the electronic device 1000, user status information, and user device usage history information.
- the sensing unit 1400 may include at least one of a magnetic sensor 1410, an acceleration sensor 1420, a temperature/humidity sensor 1430, an infrared sensor 1440, a gyroscope sensor 1450, a position sensor (e.g., GPS) 1460, a barometric pressure sensor 1470, a proximity sensor 1480, and an RGB sensor (illuminance sensor) 1490, but is not limited thereto.
- the communication unit 1500 may include one or more components that allow the electronic device 1000 to communicate with other devices (not shown) and the server 2000.
- Another device may be a computing device such as the electronic device 1000 or a sensing device, but is not limited thereto.
- the communication unit 1500 may include a short-range communication unit 1510, a mobile communication unit 1520, and a broadcast reception unit 1530.
- the short-range wireless communication unit 1510 may include a Bluetooth communication unit, a Bluetooth Low Energy (BLE) communication unit, a near field communication (NFC) unit, a WLAN (Wi-Fi) communication unit, a Zigbee communication unit, an infrared data association (IrDA) communication unit, a Wi-Fi Direct (WFD) communication unit, an ultra wideband (UWB) communication unit, an Ant+ communication unit, and the like, but is not limited thereto.
- the mobile communication unit 1520 transmits and receives wireless signals to and from at least one of a base station, an external terminal, and a server on a mobile communication network.
- the wireless signal may include various types of data according to transmission and reception of a voice call signal, a video call signal, or a text / multimedia message.
- the broadcast receiving unit 1530 receives a broadcast signal and / or broadcast-related information from the outside through a broadcast channel.
- the broadcast channel may include a satellite channel and a terrestrial channel.
- the electronic device 1000 may not include the broadcast receiving unit 1530.
- the communication unit 1500 may transmit information about the background and the object detected from the first image or the first image to the server 2000.
- the communication unit 1500 may transmit, to the server 2000, at least a portion of the first image acquired by the electronic device 1000, or the first image acquired by the electronic device 1000 and stored in the memory 1700.
- the communication unit 1500 may transmit, to the server 2000, feature information on the at least one object and the background detected from the first image (for example, information about the types of the at least one object and the background, and information about the locations of the object area and the background area).
- the communication unit 1500 may transmit, to the server 2000, information about an image stored in another electronic device connected to the electronic device 1000 and about at least one object and a background in the image.
- the communication unit 1500 may transmit the identifier (eg, URL, metadata) of the first image to the server 2000.
- the communication unit 1500 may receive information about an image filter to be applied to the first image from the server. According to an embodiment, the communication unit 1500 may receive a second image in which an image effect is applied by applying an image filter to the first image from the server.
- the A / V (Audio / Video) input unit 1600 is for inputting an audio signal or a video signal, which may include a camera 1610 and a microphone 1620.
- the camera 1610 may obtain a video frame such as a still image or a video through an image sensor in a video call mode or a shooting mode.
- the image captured through the image sensor may be processed through the processor 1300 or a separate image processing unit (not shown).
- the image photographed by the camera 1610 may be used as context information of the user.
- the microphone 1620 receives external sound signals and processes them as electrical voice data.
- the microphone 1620 may receive an acoustic signal from an external device or user.
- the microphone 1620 may receive a user's voice input.
- the microphone 1620 may use various noise removal algorithms to remove noise generated in the process of receiving an external sound signal.
- the memory 1700 may store a program for processing and controlling the processor 1300, and may store data input to or output from the electronic device 1000. Also, the memory 1700 may store images, the results of searching for images stored in the memory 1700, and information related to images stored in the electronic device 1000. For example, the memory 1700 may store the path in which an image is stored, additional information related to an image including the image capture time, the object and background model 1822, the shooting model 1832, the composition and light source model 1842, and the like.
- the memory 1700 may further store information about the neural network trained based on the object and background model 1822, the shooting model 1832, and the composition and light source model 1842, the layers specifying the structure of the neural network, and the weights between the layers. For example, in addition to the trained neural network, the memory 1700 may store the acquired original image, the image to which a filter has been applied, and, when the user modifies an image filter already applied, the image in which the applied image filter has been modified.
- the memory 1700 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM (random access memory), SRAM (static random access memory), ROM (read-only memory), EEPROM (electrically erasable programmable read-only memory), PROM (programmable read-only memory), magnetic memory, a magnetic disk, and an optical disk.
- Programs stored in the memory 1700 may be classified into a plurality of modules according to their functions.
- For example, the programs stored in the memory 1700 may be classified into a UI module 1710, a touch screen module 1720, and a notification module 1730.
- the UI module 1710 may provide specialized UIs, GUIs, and the like interlocked with the electronic device 1000 for each application.
- the touch screen module 1720 may detect a touch gesture on the user's touch screen and transmit information regarding the touch gesture to the processor 1300.
- the touch screen module 1720 according to some embodiments may recognize and analyze a touch code.
- the touch screen module 1720 may be configured with separate hardware including a controller.
- the notification module 1730 may generate a signal for notifying the occurrence of an event in the electronic device 1000. Examples of events generated in the electronic device 1000 include call signal reception, message reception, key signal input, and schedule notification.
- the notification module 1730 may output a notification signal in the form of a video signal through the display unit 1210, in the form of an audio signal through the sound output unit 1220, or in the form of a vibration signal through the vibration motor 1230.
- FIG. 22 is a block diagram of a server according to an embodiment.
- the server 2000 may include a processor 2300, a communication unit 2500, and a database (DB) 2700.
- the communication unit 2500 may include one or more components that enable communication with the electronic device 1000.
- the communication unit 2500 may receive, from the electronic device 1000, the first image or information about the at least one object and the background detected by the electronic device 1000 from the first image (for example, feature information for each object and the background). Also, the communication unit 2500 may transmit information on the image filters to be applied to each of the background and the at least one object in the first image to the electronic device 1000.
- the DB 2700 may store the object and background model 1822, the shooting model 1832, the composition and light source model 1842, and data based on the plurality of learning models 1822, 1832, and 1842.
- the DB 2700 may store the first neural network, which outputs feature information for each of the at least one object and the background in the first image, and the second neural network, which outputs an image filter to be applied to each of the at least one object and the background in the first image when the feature information is input.
- the DB 2700 may further store information related to images stored in the electronic device 1000.
- For example, the DB 2700 may store an original image to which no image filter has been applied, an image to which at least one image filter has been applied, and, when a user modifies the applied image filter, an image in which the applied image filter has been modified.
- the processor 2300 typically controls the overall operation of the server 2000.
- the processor 2300 may overall control the DB 2700 and the communication unit 2500 by executing programs stored in the DB 2700 of the server 2000.
- the processor 2300 may perform some of the operations of the electronic device 1000 in FIGS. 1 to 20 by executing programs stored in the DB 2700.
- the processor 2300 may perform at least one of a function of acquiring feature information for each of the at least one object and the background from the first image, a function of analyzing the composition of the first image based on the acquired feature information, a function of determining the photographing mode of the electronic device 1000 that captured the first image, a function of determining information about the light source included in the first image, and a function of acquiring additional information related to the first image.
- the processor 2300 may manage at least one of data required to obtain feature information for each of the at least one object and the background from the first image, data required to analyze the composition of the first image based on the acquired feature information, data required to determine information about the light source in the first image, and data required to determine the photographing mode of the electronic device that captured the first image.
- a method of acquiring an image by an electronic device may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable recording medium.
- Computer readable recording media can be any available media that can be accessed by a computer, and includes both volatile and nonvolatile media, removable and non-removable media.
- computer-readable recording media may include both computer storage media and communication media.
- Computer storage media include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data, for example ROM, RAM, and flash memory.
- Examples of program instructions include high-level language code that can be executed by a computer using an interpreter, etc., as well as machine language codes produced by a compiler.
- a computer program device or a computer program product including a recording medium in which a program for allowing an electronic device to perform a method for acquiring an image is stored may be provided.
- the “part” may be a hardware component such as a processor or a circuit, and/or a software component executed by a hardware component such as a processor.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Biodiversity & Conservation Biology (AREA)
- Image Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
Abstract
The present disclosure relates to an image capturing device and a control method therefor. Specifically, disclosed is a method comprising: acquiring a first image; obtaining feature information for each of at least one object and a background in the first image by using a first neural network; detecting the at least one object and the background based on the obtained feature information; determining image filters representing different image effects to be applied to each of the detected at least one object and background by using a second neural network; and generating a second image by applying the determined image filters to each of the at least one object and the background in the first image.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/283,137 US11699213B2 (en) | 2018-10-23 | 2019-10-11 | Image-capturing device and method for controlling same |
| KR1020217005416A KR102500760B1 (ko) | 2018-10-23 | 2019-10-11 | 이미지 획득 장치 및 그의 제어 방법 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20180126860 | 2018-10-23 | ||
| KR10-2018-0126860 | 2018-10-23 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020085694A1 true WO2020085694A1 (fr) | 2020-04-30 |
Family
ID=70331578
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2019/013346 Ceased WO2020085694A1 (fr) | 2018-10-23 | 2019-10-11 | Dispositif de capture d'image et procédé de commande associé |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11699213B2 (fr) |
| KR (1) | KR102500760B1 (fr) |
| WO (1) | WO2020085694A1 (fr) |
Families Citing this family (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102824640B1 (ko) * | 2016-09-07 | 2025-06-25 | 삼성전자주식회사 | 뉴럴 네트워크에 기초한 인식 장치 및 뉴럴 네트워크의 트레이닝 방법 |
| WO2018088794A2 (fr) * | 2016-11-08 | 2018-05-17 | 삼성전자 주식회사 | Procédé de correction d'image au moyen d'un dispositif et dispositif associé |
| JP6841345B2 (ja) * | 2017-12-06 | 2021-03-10 | 日本電気株式会社 | 画像認識モデル生成装置、画像認識モデル生成方法および画像認識モデル生成プログラム |
| JP6705533B2 (ja) * | 2018-10-19 | 2020-06-03 | ソニー株式会社 | センサ装置、パラメータ設定方法 |
| CN113168713B (zh) * | 2018-12-14 | 2024-09-06 | 富士胶片株式会社 | 小批量学习装置及其工作程序、工作方法及图像处理装置 |
| KR102697346B1 (ko) * | 2018-12-20 | 2024-08-21 | 삼성전자주식회사 | 영상에서 오브젝트를 인식하는 전자 장치 및 그 동작 방법 |
| US11748870B2 (en) * | 2019-09-27 | 2023-09-05 | Intel Corporation | Video quality measurement for virtual cameras in volumetric immersive media |
| US11656353B2 (en) * | 2019-10-10 | 2023-05-23 | Orbital Insight, Inc. | Object measurement using deep learning analysis of synthetic aperture radar backscatter signatures |
| CN114981836B (zh) * | 2020-01-23 | 2025-05-23 | 三星电子株式会社 | 电子设备和电子设备的控制方法 |
| JP2021149446A (ja) * | 2020-03-18 | 2021-09-27 | 株式会社日立製作所 | 注視物体認識システム及び方法 |
| KR102775308B1 (ko) * | 2020-04-13 | 2025-03-05 | 삼성전자주식회사 | 광원 정보를 출력하는 방법 및 장치 |
| WO2022077348A1 (fr) * | 2020-10-15 | 2022-04-21 | 京东方科技集团股份有限公司 | Procédé de calcul de volume d'aliment, procédé de calcul de calories, appareil électronique, dispositif électronique, et support de stockage |
| US11727678B2 (en) * | 2020-10-30 | 2023-08-15 | Tiliter Pty Ltd. | Method and apparatus for image recognition in mobile communication device to identify and weigh items |
| KR102742533B1 (ko) * | 2021-04-08 | 2024-12-12 | 서강대학교산학협력단 | 학습 데이터 생성장치 및 학습 데이터 생성장치의 동작방법 |
| CN117616447A (zh) | 2021-07-09 | 2024-02-27 | 三星电子株式会社 | 电子装置和电子装置的操作方法 |
| US12056791B2 (en) * | 2021-08-20 | 2024-08-06 | Adobe Inc. | Generating object-based layers for digital image editing using object classification machine learning models |
| KR20230105233A (ko) * | 2022-01-03 | 2023-07-11 | 삼성전자주식회사 | 이미지 기반의 이미지 효과를 제공하는 전자 장치 및 그 제어 방법 |
| EP4394709A4 (fr) * | 2022-01-03 | 2025-01-22 | Samsung Electronics Co., Ltd. | Dispositif électronique fournissant un effet d'image basé sur une image et son procédé de commande |
| CN117319812A (zh) * | 2022-06-20 | 2023-12-29 | 北京小米移动软件有限公司 | 图像处理方法及装置、移动终端、存储介质 |
| WO2024129082A1 (fr) * | 2022-12-15 | 2024-06-20 | Google Llc | Système et procédé de superposition de vidéo sur vidéo |
| US12373919B2 (en) | 2023-01-04 | 2025-07-29 | Samsung Electronics Co., Ltd. | Image-filtering interface |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101471199B1 (ko) * | 2008-04-23 | 2014-12-09 | 주식회사 케이티 | 영상을 전경과 배경으로 분리하는 방법 및 장치, 영상을전경과 배경으로 분리하여 배경을 대체하는 방법 및 장치 |
| JP2014068227A (ja) | 2012-09-26 | 2014-04-17 | Nikon Corp | 撮像システム |
| US20140354768A1 (en) * | 2013-05-30 | 2014-12-04 | Microsoft Corporation | Socialized Mobile Photography |
| US9412046B2 (en) | 2014-10-10 | 2016-08-09 | Facebook, Inc. | Training image adjustment preferences |
| US9569697B1 (en) * | 2015-02-19 | 2017-02-14 | Google Inc. | Object oriented image editing |
| KR102359391B1 (ko) | 2016-11-08 | 2022-02-04 | 삼성전자주식회사 | 디바이스가 이미지를 보정하는 방법 및 그 디바이스 |
| CN106851063A (zh) * | 2017-02-27 | 2017-06-13 | 努比亚技术有限公司 | 一种基于双摄像头的曝光调节终端及方法 |
| US10706512B2 (en) * | 2017-03-07 | 2020-07-07 | Adobe Inc. | Preserving color in image brightness adjustment for exposure fusion |
| US10497105B2 (en) * | 2017-11-01 | 2019-12-03 | Google Llc | Digital image auto exposure adjustment |
| US10630903B2 (en) * | 2018-01-12 | 2020-04-21 | Qualcomm Incorporated | Systems and methods for image exposure |
| US10614347B2 (en) * | 2018-01-25 | 2020-04-07 | Adobe Inc. | Identifying parameter image adjustments using image variation and sequential processing |
| KR102661983B1 (ko) * | 2018-08-08 | 2024-05-02 | 삼성전자주식회사 | 이미지의 인식된 장면에 기반하여 이미지를 처리하는 방법 및 이를 위한 전자 장치 |
| CN112840635A (zh) * | 2018-10-15 | 2021-05-25 | 华为技术有限公司 | 智能拍照方法、系统及相关装置 |
- 2019-10-11 US US17/283,137 patent/US11699213B2/en active Active
- 2019-10-11 KR KR1020217005416A patent/KR102500760B1/ko active Active
- 2019-10-11 WO PCT/KR2019/013346 patent/WO2020085694A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20080076170A (ko) * | 2007-02-15 | 2008-08-20 | 연세대학교 산학협력단 | 지문 영상 생성을 위한 영상 필터 조합 생성 방법 |
| US20150370907A1 (en) * | 2014-06-19 | 2015-12-24 | BrightSky Labs, Inc. | Systems and methods for intelligent filter application |
| KR20160051390A (ko) * | 2014-11-03 | 2016-05-11 | 삼성전자주식회사 | 전자장치 및 전자장치의 필터 제공 방법 |
| KR20170098089A (ko) * | 2016-02-19 | 2017-08-29 | 삼성전자주식회사 | 전자 장치 및 그의 동작 방법 |
| WO2018088794A2 (fr) * | 2016-11-08 | 2018-05-17 | 삼성전자 주식회사 | Procédé de correction d'image au moyen d'un dispositif et dispositif associé |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102251076B1 (ko) * | 2020-08-26 | 2021-05-13 | 아주대학교산학협력단 | 실내 이미지를 사용하여 설계도면을 추정하는 방법 |
| KR20220076178A (ko) * | 2020-11-30 | 2022-06-08 | 삼성전자주식회사 | 영상의 ai 복호화를 위한 장치, 및 방법 |
| EP4181067A4 (fr) * | 2020-11-30 | 2023-12-27 | Samsung Electronics Co., Ltd. | Dispositif et procédé de codage et de décodage d'images par ia |
| US12266089B2 (en) | 2020-11-30 | 2025-04-01 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding of image |
| KR102830406B1 (ko) * | 2020-11-30 | 2025-07-07 | 삼성전자주식회사 | 영상의 ai 복호화를 위한 장치, 및 방법 |
Also Published As
| Publication number | Publication date |
|---|---|
| KR102500760B1 (ko) | 2023-02-16 |
| US20210390673A1 (en) | 2021-12-16 |
| US11699213B2 (en) | 2023-07-11 |
| KR20210030466A (ko) | 2021-03-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2020085694A1 (fr) | Dispositif de capture d'image et procédé de commande associé | |
| WO2018212538A1 (fr) | Dispositif électronique et procédé de détection d'événement de conduite de véhicule | |
| WO2018117428A1 (fr) | Procédé et appareil de filtrage de vidéo | |
| WO2018117704A1 (fr) | Appareil électronique et son procédé de fonctionnement | |
| EP3602497A1 (fr) | Dispositif électronique et procédé de détection d'événement de conduite de véhicule | |
| WO2019027141A1 (fr) | Dispositif électronique et procédé de commande du fonctionnement d'un véhicule | |
| WO2019132518A1 (fr) | Dispositif d'acquisition d'image et son procédé de commande | |
| WO2020080773A1 (fr) | Système et procédé de fourniture de contenu sur la base d'un graphe de connaissances | |
| WO2019151735A1 (fr) | Procédé de gestion d'inspection visuelle et système d'inspection visuelle | |
| WO2018117662A1 (fr) | Appareil et procédé de traitement d'image | |
| WO2020231153A1 (fr) | Dispositif électronique et procédé d'aide à la conduite d'un véhicule | |
| WO2019031714A1 (fr) | Procédé et appareil de reconnaissance d'objet | |
| WO2016126007A1 (fr) | Procédé et dispositif de recherche d'image | |
| EP3539056A1 (fr) | Appareil électronique et son procédé de fonctionnement | |
| WO2019059505A1 (fr) | Procédé et appareil de reconnaissance d'objet | |
| WO2019093819A1 (fr) | Dispositif électronique et procédé de fonctionnement associé | |
| WO2020235852A1 (fr) | Dispositif de capture automatique de photo ou de vidéo à propos d'un moment spécifique, et procédé de fonctionnement de celui-ci | |
| WO2019124963A1 (fr) | Dispositif et procédé de reconnaissance vocale | |
| WO2018117538A1 (fr) | Procédé d'estimation d'informations de voie et dispositif électronique | |
| WO2021206221A1 (fr) | Appareil à intelligence artificielle utilisant une pluralité de couches de sortie et procédé pour celui-ci | |
| WO2021006482A1 (fr) | Appareil et procédé de génération d'image | |
| WO2020262746A1 (fr) | Appareil à base d'intelligence artificielle pour recommander un parcours de linge, et son procédé de commande | |
| EP3545685A1 (fr) | Procédé et appareil de filtrage de vidéo | |
| WO2020013676A1 (fr) | Dispositif électronique et procédé de fonctionnement pour commander la luminosité d'une source de lumière | |
| WO2020256169A1 (fr) | Robot destiné à fournir un service de guidage au moyen d'une intelligence artificielle, et son procédé de fonctionnement |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19876903; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 20217005416; Country of ref document: KR; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19876903; Country of ref document: EP; Kind code of ref document: A1 |