WO2023234061A1 - Data acquisition device, data acquisition method, and data acquisition stand - Google Patents
Data acquisition device, data acquisition method, and data acquisition stand
- Publication number
- WO2023234061A1 (PCT/JP2023/018641)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- light
- emitting panel
- image
- light emitting
- data acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05B—ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
- H05B47/00—Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
- H05B47/10—Controlling the light source
- H05B47/105—Controlling the light source in response to determined parameters
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- The present disclosure relates to a data acquisition device, a data acquisition method, and a data acquisition stand.
- Conventionally, systems for generating learning data used for training in semantic segmentation and the like have been known (see, for example, Patent Document 1).
- A data acquisition device according to one embodiment of the present disclosure includes a control unit configured to be able to control a light-emitting panel and to be able to acquire a captured image of the light-emitting panel and an object located in front of the light-emitting panel.
- The control unit causes the light-emitting panel to emit light and generates mask data of the object based on the captured image.
- A data acquisition method according to one embodiment of the present disclosure includes causing a light-emitting panel to emit light, and generating mask data of an object based on a captured image of the light-emitting panel and the object located in front of the light-emitting panel.
- A data acquisition stand according to one embodiment of the present disclosure includes a light-emitting panel that emits light in a predetermined color, and a light-transmitting member located between the light-emitting panel and an object placed in front of the light-emitting panel.
- FIG. 1 is a block diagram illustrating a configuration example of a data acquisition system according to an embodiment.
- FIG. 2 is a plan view showing a configuration example of the data acquisition system.
- FIG. 3 is a sectional view taken along line A-A in FIG. 2.
- FIG. 4A is a diagram showing an example of the brightness of each pixel of a captured image of an object.
- FIG. 4B is a diagram showing an example of a mask image generated based on the captured image of FIG. 4A.
- FIG. 5 is a plan view showing an example of an object located on a light-emitting panel.
- FIG. 6A is a diagram showing an example of a captured image of the light-emitting panel in a light-emitting state.
- FIG. 6B is a diagram showing an example of a captured image of an object located on the light-emitting panel in a light-emitting state.
- FIG. 6C is a diagram showing an example of a mask image generated based on the difference between the captured image in FIG. 6A and the captured image in FIG. 6B.
- FIG. 7A is a diagram showing an example of a captured image of an object located on the light-emitting panel with the panel turned off.
- FIG. 7B is a diagram showing an example of the same mask image as FIG. 6C.
- FIG. 7C is a diagram showing an example of an extracted image obtained by applying the mask image of FIG. 7B to the captured image of FIG. 7A to extract an image of the object.
- FIG. 8 is a diagram showing an example of teacher data generated by superimposing the extracted image of FIG. 7C on a background image.
- FIG. 9 is a flowchart illustrating an example procedure of a data acquisition method.
- FIG. 10 is a plan view showing an example of an object that is located on the light-emitting panel and has a side surface.
- FIG. 11A is a plan view showing an example in which the emission color of the light-emitting panel and the color of the side surface of the object are the same.
- FIG. 11B is a plan view showing an example in which the emission color of the light-emitting panel and the color of the side surface of the object are different.
- FIG. 12A is a diagram showing an example of a mask image generated when the emission color of the light-emitting panel and the color of the side surface of the object are the same.
- FIG. 12B is a diagram showing an example of a mask image generated when the emission color of the light-emitting panel and the color of the side surface of the object are different.
- FIG. 12C is a diagram showing an example of a mask image generated by calculating the logical OR of each pixel in FIG. 12A and each pixel in FIG. 12B.
- FIG. 13 is a flowchart showing an example procedure of a data acquisition method including a procedure for causing the light-emitting panel to emit light in at least two colors.
- FIG. 14 is a schematic diagram showing a configuration example of a robot control system.
- A data acquisition system 1 according to an embodiment of the present disclosure acquires teacher data for generating a trained model that outputs a recognition result of a recognition target included in input information.
- The trained model may include a CNN (Convolutional Neural Network) having multiple layers. In each layer of the CNN, convolution based on predetermined weighting coefficients is applied to the information input to the trained model. In training the trained model, the weighting coefficients are updated.
- The trained model may include a fully connected layer.
- The trained model may be configured by VGG16 or ResNet50.
- The trained model may be configured as a transformer.
- The trained model is not limited to these examples and may be configured as various other models.
- The data acquisition system 1 includes a data acquisition device 10, a light-emitting panel 20, and a photographing device 30.
- The light-emitting panel 20 has a light-emitting surface and is configured such that an object 50 for which teacher data is to be acquired can be placed on the light-emitting surface.
- The photographing device 30 is configured to photograph the light-emitting panel 20 and the object 50 placed on the light-emitting panel 20.
- The photographing device 30 may also photograph the light-emitting panel 20 with no object 50 placed on it.
- The data acquisition device 10 controls the light-emitting state of the light-emitting panel 20.
- The data acquisition device 10 acquires an image of the object 50 from the photographing device 30.
- The data acquisition device 10 is configured to be able to acquire captured images.
- Based on the captured image, the data acquisition device 10 can generate data that allows the object 50 to be recognized.
- For example, the data acquisition device 10 can generate and acquire training data of the object 50 based on the captured image.
- The data acquisition device 10 includes a control unit 12, a storage unit 14, and an interface 16.
- The control unit 12 is configured to be able to control the light-emitting panel 20 and to be able to acquire at least one captured image of the light-emitting surface of the light-emitting panel 20.
- The control unit 12 may include at least one processor in order to provide the control and processing capabilities for performing its various functions.
- The processor may execute programs that implement the various functions of the control unit 12.
- The processor may be implemented as a single integrated circuit. An integrated circuit is also called an IC (Integrated Circuit).
- The processor may be implemented as a plurality of communicatively connected integrated circuits and discrete circuits. The processor may be implemented based on various other known technologies.
- The storage unit 14 may include an electromagnetic storage medium such as a magnetic disk, or may include a memory such as a semiconductor memory or a magnetic memory.
- The storage unit 14 stores various kinds of information.
- The storage unit 14 stores programs and the like executed by the control unit 12.
- The storage unit 14 may be configured as a non-transitory readable medium.
- The storage unit 14 may function as a work memory for the control unit 12. At least a part of the storage unit 14 may be configured separately from the control unit 12.
- The interface 16 is configured to input and output information or data to and from the light-emitting panel 20 and the photographing device 30.
- The interface 16 may include a communication device configured to be able to communicate by wire or wirelessly.
- The communication device may be configured to be able to communicate using communication methods based on various communication standards.
- The interface 16 can be constructed using known communication technologies.
- The interface 16 may include a display device.
- The display device may include various displays, such as a liquid crystal display.
- The interface 16 may include an audio output device such as a speaker.
- The interface 16 is not limited to these and may include various other output devices.
- The interface 16 may include an input device that accepts input from the user.
- The input device may include, for example, a keyboard or physical keys, a touch panel or touch sensor, or a pointing device such as a mouse.
- The input device is not limited to these examples and may include various other devices.
- The light-emitting panel 20 has a light-emitting surface.
- The light-emitting panel 20 may be configured as a diffuser plate that disperses the light emitted from a light source and emits it in a planar manner.
- The light-emitting panel 20 may be configured as a self-emitting panel.
- The light-emitting panel 20 may be configured to emit light in one predetermined color.
- The light-emitting panel 20 may be configured to emit a single color, such as white, as its emission color.
- The light-emitting panel 20 is not limited to white and may be configured to emit light in various colors.
- The light-emitting panel 20 may be configured to emit light in a predetermined color.
- The light-emitting panel 20 may be configured to emit light in at least two colors.
- The light-emitting panel 20 may be configured to control the spectrum of its emission color, for example, by combining the luminance values of the RGB (Red, Green, Blue) colors.
- The light-emitting panel 20 may have multiple pixels.
- The light-emitting panel 20 may be configured to be able to control the state of each pixel to a lit state or an unlit state.
- The light-emitting panel 20 may be configured to be able to control the color of light emitted by each pixel.
- The light-emitting panel 20 may be configured to control the emission color or emission pattern of the light-emitting panel 20 as a whole by the states of the pixels or a combination of their emission colors.
- The photographing device 30 may include various image sensors, cameras, or the like.
- The photographing device 30 is arranged so as to be able to photograph the light-emitting surface of the light-emitting panel 20 or the object 50 placed on the light-emitting surface. That is, the photographing device 30 is configured to be able to photograph, together with the light-emitting panel 20, the object 50 located in front of the light-emitting panel 20 as seen from the photographing device 30.
- The photographing device 30 may be configured to photograph the light-emitting surface of the light-emitting panel 20 from various directions.
- The photographing device 30 may be arranged so that the normal direction of the light-emitting surface of the light-emitting panel 20 and the optical axis of the photographing device 30 coincide.
- The data acquisition system 1 may further include a darkroom that accommodates the light-emitting panel 20 and the photographing device 30.
- Inside the darkroom, the side of the object 50 facing the photographing device 30 is not illuminated by ambient light.
- In that case, the photographing device 30 photographs the object 50 against the light emitted from the light-emitting panel 20 as the background, and thereby acquires a silhouette image of the object 50 as the captured image.
- The data acquisition system 1 may further include an illumination device 40, although it is not essential.
- The illumination device 40 is configured to emit illumination light 42 that illuminates the object 50.
- The illumination device 40 may be configured to emit the illumination light 42 as light of various colors.
- The photographing device 30 may photograph the object 50 while the object 50 is illuminated with the illumination light 42 and ambient light.
- The photographing device 30 may photograph the object 50 while the object 50 is illuminated with the illumination light 42.
- The photographing device 30 may photograph the object 50 while the object 50 is illuminated with ambient light.
- In the data acquisition system 1, the data acquisition device 10 acquires teacher data used in learning for generating a trained model that recognizes the object 50 from an image of the object 50.
- The image of the object 50 includes the background of the object 50.
- The control unit 12 of the data acquisition device 10 may acquire teacher data from a captured image 60 having 25 pixels arranged in a 5×5 grid, for example, as shown in FIG. 4A.
- The numerical value written in the cell corresponding to each pixel of the captured image 60 corresponds to the brightness of that pixel when its color is expressed in grayscale.
- The numerical values represent brightness in 256 levels from 0 to 255. The larger the value, the closer the pixel is to white. When the value is 0, the color of the pixel corresponding to that cell is assumed to be black. When the value is 255, the color of the pixel corresponding to that cell is assumed to be white.
- The pixels corresponding to the 12 cells with a value of 255 are assumed to be the background. The pixels corresponding to the 13 cells whose values are 190, 160, 120, or 100 are assumed to be pixels that represent the object 50.
- From this captured image, the control unit 12 may generate a mask image 70 as illustrated in FIG. 4B.
- The numerical value written in each cell of the mask image 70 indicates whether the pixel belongs to the mask portion or to the transparent portion.
- A pixel corresponding to a cell with a value of 1 corresponds to the transparent portion.
- The transparent portion corresponds to the pixels that are extracted as the image of the object 50 from the captured image 60 when the mask image 70 is superimposed on the captured image 60.
- A pixel corresponding to a cell with a value of 0 corresponds to the mask portion.
- The mask portion corresponds to the pixels that are not extracted from the captured image 60 when the mask image 70 is superimposed on the captured image 60.
- Whether each pixel of the captured image represents the object or the background is determined based on the brightness of that pixel.
- If the luminance of a pixel in the captured image is equal to or higher than a threshold value, that pixel is determined to be a pixel representing the background.
- If the luminance of a pixel in the captured image is less than the threshold value, that pixel is determined to be a pixel representing the object.
- If the background is close to black, it is difficult to distinguish pixels showing the object from pixels showing the background.
- Therefore, the data acquisition system 1 causes the photographing device 30 to photograph the object 50 so that the light emitted from the light-emitting panel 20 forms the background of the object 50.
- With such a bright, backlit background, the background and the object 50 can be easily separated.
- As a result, the transparent portion of the mask image 70 used to extract the image of the object 50 tends to match the shape of the image of the object 50. In other words, the image of the object 50 is extracted with high accuracy.
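As a rough illustration of this threshold-based classification, the following Python sketch builds a binary mask in which 1 marks the transparent portion (object) and 0 marks the mask portion (backlit background). The NumPy representation, the threshold value of 200, and the example pixel arrangement are assumptions made for illustration, not details taken from the disclosure; the arrangement is merely consistent with the 12 background pixels and 13 object pixels described for FIG. 4A.

```python
import numpy as np

def mask_from_brightness(gray_image: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Classify each pixel by brightness: pixels at or above the threshold are
    treated as the backlit background (mask portion, 0), darker pixels as the
    object (transparent portion, 1)."""
    return (gray_image < threshold).astype(np.uint8)

# Hypothetical 5x5 grayscale image with 12 background pixels (255) and
# 13 object pixels (190/160/120/100), consistent with the counts in FIG. 4A.
captured = np.array([
    [255, 255, 190, 255, 255],
    [255, 190, 160, 190, 255],
    [190, 160, 120, 100, 190],
    [255, 190, 160, 190, 255],
    [255, 255, 190, 255, 255],
], dtype=np.uint8)

mask = mask_from_brightness(captured)  # 1 where the object 50 is shown, 0 elsewhere
```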
- The control unit 12 of the data acquisition device 10 acquires training data for generating a trained model that recognizes the object 50 placed on the light-emitting panel 20, as shown in FIG. 5.
- The object 50 illustrated in FIG. 5 is a bolt-shaped component.
- The object 50 is not limited to a bolt and may be any of various other parts; it is not even limited to a part and may be any of various other articles.
- The control unit 12 acquires a captured image 60, illustrated in FIG. 6A, taken with the light-emitting panel 20 lit and the object 50 not placed on the light-emitting panel 20.
- The captured image 60 in FIG. 6A includes a light-on image 24 of the light-emitting panel 20 in the lit state.
- The control unit 12 also acquires a captured image 60, illustrated in FIG. 6B, taken with the light-emitting panel 20 lit and the object 50 placed on the light-emitting panel 20.
- The captured image 60 in FIG. 6B includes an object image 62, which is a photograph of the object 50, as the foreground, and the light-on image 24 of the lit light-emitting panel 20 as the background.
- The control unit 12 generates a mask image 70 as shown in FIG. 6C by taking the difference between the captured image 60 of FIG. 6A, which does not include the object image 62, and the captured image 60 of FIG. 6B, which includes the object image 62.
- The mask image 70 is also referred to as mask data.
- In other words, the control unit 12 may generate the mask data of the object 50 based on, among the at least one captured image 60, a captured image 60 of the light-emitting panel 20 and the object 50 located in front of the light-emitting panel 20 taken while the light-emitting panel 20 is emitting light, and a captured image 60 of the light-emitting panel 20 taken while the object 50 is not located in front of the light-emitting panel 20.
- The captured image 60 of FIG. 6A, which does not include the object image 62, is also referred to as a background image.
- The background image may be a captured image 60 of only the light-emitting panel 20, or a captured image 60 of the light-emitting panel 20 together with some kind of indicator.
- The image of FIG. 6B, which includes the object image 62, is also referred to as a foreground image.
- The control unit 12 can generate the mask data based on the foreground image and the background image.
- The mask image 70 includes a mask portion 72 and a transparent portion 74.
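A minimal sketch of this background-difference approach (FIGS. 6A to 6C) is given below; the NumPy arrays, the absolute-difference comparison, and the threshold of 30 are assumptions chosen for illustration rather than details taken from the disclosure.

```python
import numpy as np

def mask_from_background_difference(background: np.ndarray,
                                     foreground: np.ndarray,
                                     threshold: int = 30) -> np.ndarray:
    """Generate mask data from a lit background-only image (FIG. 6A) and a lit
    image that also shows the object (FIG. 6B).

    Pixels that change substantially between the two images are taken as the
    object (transparent portion, 1); unchanged pixels are taken as the
    light-emitting panel (mask portion, 0)."""
    diff = np.abs(background.astype(np.int16) - foreground.astype(np.int16))
    if diff.ndim == 3:          # color images: use the largest channel difference
        diff = diff.max(axis=-1)
    return (diff > threshold).astype(np.uint8)
```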
- The control unit 12 may control the light-emitting panel 20 so as to increase the contrast between the light-emitting panel 20 in the emitting state and the object 50.
- The control unit 12 may determine the emission color of the light-emitting panel 20 based on the color of the object 50.
- The light-emitting panel 20 and the photographing device 30 may be housed in a darkroom so as to increase the contrast between the light-emitting panel 20 in the emitting state and the object 50.
- In that case, the control unit 12 can obtain a captured image 60 in a state in which the object 50 and the light-emitting panel 20 are not exposed to ambient light.
- The control unit 12 may control the illumination light 42 of the illumination device 40 so as to increase the contrast between the light-emitting panel 20 in the emitting state and the object 50.
- The control unit 12 may set the emission luminance of the light-emitting panel 20 so that the luminance of the pixels showing the light-emitting panel 20 in the captured image 60 is higher than the luminance of the pixels showing the object 50.
- The photographing device 30 may photograph a light-off image with the object 50 placed on the light-emitting panel 20 and the light-emitting panel 20 turned off.
- The photographing device 30 may photograph a light-on image with the object 50 placed on the light-emitting panel 20 and the light-emitting panel 20 turned on.
- The control unit 12 may generate the mask image 70 as mask data based on the difference between the light-off image and the light-on image. In other words, the control unit 12 may further generate the mask data of the object 50 based on a difference image between a captured image 60 taken when the light-emitting panel 20 is emitting light and a captured image 60 taken when the light-emitting panel 20 is not emitting light.
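One plausible reading of this lit/unlit difference is sketched below: background pixels change strongly when the panel is switched on and off, while object pixels change comparatively little. The helper, the NumPy representation, and the threshold of 30 are assumptions made for illustration, not the disclosed implementation.

```python
import numpy as np

def mask_from_lighting_difference(light_on: np.ndarray,
                                  light_off: np.ndarray,
                                  threshold: int = 30) -> np.ndarray:
    """Generate mask data from a light-on image and a light-off image of the
    same scene with the object 50 in place.

    Pixels whose value changes little between the two exposures are taken as
    the object (transparent portion, 1); pixels that change strongly are taken
    as the light-emitting panel (mask portion, 0)."""
    diff = np.abs(light_on.astype(np.int16) - light_off.astype(np.int16))
    if diff.ndim == 3:
        diff = diff.max(axis=-1)
    return (diff <= threshold).astype(np.uint8)
```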
- The control unit 12 may also generate the mask data based only on the foreground image. For example, the control unit 12 may generate the mask data of the object 50 by determining, within the foreground image, the portion in which the light-emitting panel 20 is shown and the portion in which the object 50 is shown. In other words, the control unit 12 may generate the mask data of the object 50 based on, among the at least one captured image, a captured image 60 of the light-emitting panel 20 and the object 50 located in front of the light-emitting panel 20 taken while the light-emitting panel 20 is emitting light.
- The control unit 12 extracts the object image 62 from a captured image 60 using the generated mask image 70 and generates an extracted image 64 (see FIG. 7C). Specifically, the control unit 12 acquires a captured image 60, illustrated in FIG. 7A, taken with the light-emitting panel 20 turned off and the object 50 placed on the light-emitting panel 20.
- The captured image 60 in FIG. 7A includes the object image 62 of the object 50 as the foreground and a light-off image 22 of the light-emitting panel 20 in the turned-off state as the background.
- The control unit 12 may generate the extracted image 64 by extracting the image data of the object 50 from the captured image 60 used to generate the mask data.
- The control unit 12 may also generate the extracted image 64 by extracting, based on the mask data of the object 50, the image data of the object 50 from an image of the object 50 taken at the same position as when the captured image 60 was taken.
- The control unit 12 generates the extracted image 64 shown in FIG. 7C by applying the mask image 70 shown in FIG. 7B to the captured image 60 of FIG. 7A and extracting the object image 62.
- The extracted image 64 includes a foreground made up of pixels depicting the object 50 and a background made up of transparent pixels.
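A simple sketch of applying the mask to obtain the extracted image 64 is shown below; representing the extracted image as an RGBA array with a fully transparent background is an assumption chosen for illustration.

```python
import numpy as np

def extract_object(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply a binary mask (1 = transparent portion, 0 = mask portion) to an
    H x W x 3 captured image and return an RGBA extracted image whose
    background pixels are fully transparent."""
    height, width = mask.shape
    extracted = np.zeros((height, width, 4), dtype=np.uint8)
    extracted[..., :3] = image[..., :3]   # keep the original color values
    extracted[..., 3] = mask * 255        # alpha: opaque object, transparent background
    return extracted
```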
- The control unit 12 may generate teacher data using the extracted image 64. Specifically, the control unit 12 may generate, as a composite image 80, an image in which the extracted image 64 is combined with an arbitrary background image 82, as illustrated in FIG. 8. The control unit 12 may output the composite image 80 as teacher data.
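Compositing the extracted image 64 onto an arbitrary background image 82 can be sketched as a straightforward alpha blend; the function below assumes same-size RGBA and RGB NumPy arrays and is illustrative only. The mask used for extraction can then serve as the segmentation label paired with the composite image.

```python
import numpy as np

def composite_training_image(extracted: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Superimpose an RGBA extracted image onto an RGB background image of the
    same size to obtain one composite image 80 usable as teacher data."""
    alpha = extracted[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * extracted[..., :3] + (1.0 - alpha) * background[..., :3].astype(np.float32)
    return blended.astype(np.uint8)
```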
- When generating the extracted image 64, the image of the object 50 may be an image in which the object 50 is exposed to ambient light. Furthermore, when generating the extracted image 64, the image of the object 50 may be an image of the object 50 placed at a location different from the light-emitting panel 20.
- The control unit 12 may photograph the object 50 while controlling the illumination device 40. That is, in order to increase the diversity of the training data, the object 50 may be photographed under an illumination environment in which the position or brightness of the illumination light 42 is controlled. The object 50 may also be photographed under a plurality of illumination environments.
- The data acquisition device 10 may execute a data acquisition method including the steps of the flowchart illustrated in FIG. 9.
- The data acquisition method may be realized as a data acquisition program executed by the processor constituting the control unit 12 of the data acquisition device 10.
- The data acquisition program may be stored on a non-transitory computer-readable medium.
- The control unit 12 photographs the light-emitting panel 20 using the photographing device 30 (step S1). Specifically, the control unit 12 may light up the light-emitting panel 20 so that it emits light, and photograph the light-emitting panel 20 with the photographing device 30 while the object 50 is not placed on the light-emitting panel 20.
- The control unit 12 may obtain an image of the light-emitting panel 20 that is lit and emitting light.
- The control unit 12 photographs the light-emitting panel 20 with the photographing device 30 while the object 50 is placed on the light-emitting panel 20 and the light-emitting panel 20 is lit and emitting light (step S2).
- The control unit 12 may acquire the image photographed by the photographing device 30.
- The control unit 12 generates mask data based on the difference between the image of the light-emitting panel 20 taken with no object 50 placed on it and the image of the light-emitting panel 20 taken with the object 50 placed on it (step S3).
- The control unit 12 may generate the mask image 70 as the mask data.
- The control unit 12 extracts the image of the object 50 from the captured image 60 using the mask data and generates an extracted image 64 (step S4).
- The control unit 12 generates teacher data using the extracted image 64 (step S5). After executing the procedure of step S5, the control unit 12 ends the execution of the procedure of the flowchart of FIG. 9.
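The flow of steps S1 to S5 can be sketched end to end as follows, reusing the helper functions sketched earlier. The `panel` and `camera` objects and their methods are hypothetical handles standing in for the light-emitting panel 20 and the photographing device 30; they are assumptions of this sketch, not an API defined in the disclosure.

```python
def acquire_training_sample(panel, camera, background_image):
    """Illustrative sequence of steps S1-S5 of FIG. 9."""
    panel.turn_on()
    background_only = camera.capture()      # S1: lit panel without the object 50
    # The object 50 is then placed on the light-emitting panel 20.
    foreground = camera.capture()           # S2: lit panel with the object 50
    mask = mask_from_background_difference(background_only, foreground)  # S3
    panel.turn_off()
    unlit = camera.capture()                # image used for extraction (FIG. 7A)
    extracted = extract_object(unlit, mask)                              # S4
    return composite_training_image(extracted, background_image)        # S5
```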
- As described above, according to the data acquisition system 1, the data acquisition device 10, and the data acquisition method of the present embodiment, the contrast between the object 50 and the background can be increased in the captured image 60 of the object 50.
- Because the contrast is increased, mask data for extracting the object 50 can be generated with high accuracy.
- Because the mask data is generated with high accuracy, manual correction of the image of the object 50 may become unnecessary, and annotation can be simplified.
- The object 50 may include a top surface 52 and a side surface 54, as illustrated in FIG. 10.
- When the light-emitting panel 20 lights up and emits light, the light emitted from the light-emitting panel 20 may be reflected by the side surface 54 and enter the photographing device 30.
- When the light reflected by the side surface 54 enters the photographing device 30, the side surface 54 of the object 50 may appear to be emitting light in the captured image 60.
- In that case, even though the light-emitting panel 20 and the side surface 54 of the object 50 have different colors, the side surface 54 may become difficult to distinguish from the light-emitting panel 20 in the captured image 60.
- As a result, in the mask image 70, only the top surface 52 of the object 50 may be set as the transparent portion 74, and the side surface 54 may be set as the mask portion 72.
- If the emission color of the light-emitting panel 20 and the color of the side surface 54 of the object 50 are significantly different, the light-emitting panel 20 and the side surface 54 of the object 50 become easier to distinguish in the captured image 60.
- If the emission color of the light-emitting panel 20 and the color of the side surface 54 of the object 50 are complementary to each other, the light-emitting panel 20 and the side surface 54 of the object 50 can be easily distinguished in the captured image 60.
- In that case, both the top surface 52 and the side surface 54 of the object 50 may be set as the transparent portion 74 in the mask image 70.
- By causing the light-emitting panel 20 to emit light in at least two colors and generating mask data for each color, the influence of the light reflected on the side surface 54 can be reduced.
- The control unit 12 may cause the light-emitting panel 20 to emit light in the same color as the side surface 54 of the object 50 as a first color, and to emit light in a color different from the side surface 54 as a second color. The light-emitting panel 20 illustrated in FIG. 11A is assumed to emit light in the first color.
- An image of the mask data generated based on a captured image of the light-emitting panel 20 illustrated in FIG. 11A is illustrated as FIG. 12A.
- The image of the mask data illustrated in FIG. 12A is the image obtained when the light-emitting panel 20 emits light in the first color and is referred to as a first mask image 70A. The light-emitting panel 20 illustrated in FIG. 11B is assumed to emit light in the second color.
- An image of the mask data generated based on a captured image of the light-emitting panel 20 illustrated in FIG. 11B is illustrated as FIG. 12B.
- The image of the mask data illustrated in FIG. 12B is the image obtained when the light-emitting panel 20 emits light in the second color and is referred to as a second mask image 70B.
- In the mask images, the cells surrounded by a thicker frame than the other cells represent the pixels corresponding to the side surface 54 of the object 50.
- In the first mask image 70A, the pixels corresponding to the side surface 54 belong to the mask portion 72.
- In the second mask image 70B, the pixels corresponding to the side surface 54 belong to the transparent portion 74. That is, depending on whether the light-emitting panel 20 emits light in the first color or the second color, the pixels corresponding to the side surface 54 become part of the mask portion 72 or of the transparent portion 74.
- The control unit 12 may generate the mask image 70 by calculating the logical OR of the first mask image 70A of FIG. 12A and the second mask image 70B of FIG. 12B. Specifically, the control unit 12 can generate the mask image 70 illustrated in FIG. 12C by calculating the logical OR of each pixel of the first mask image 70A and the corresponding pixel of the second mask image 70B. In other words, the control unit 12 may generate the mask data of the object 50 using a plurality of mask data corresponding to the respective emission colors, each based on a captured image 60 taken while the light-emitting panel 20 emits light in that emission color. In the mask image 70 of FIG. 12C, the pixels corresponding to the side surface 54 of the object 50 belong to the transparent portion 74.
- If mask data were generated with only one emission color, the mask data corresponding to the side surface 54 of the object 50 could be incorrect.
- By causing the light-emitting panel 20 to emit light in at least two different colors and generating mask data for each color, errors in the mask data for the side surface 54 of the object 50 are less likely to occur.
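The per-pixel logical OR of the two per-color masks (FIG. 12C) can be sketched as follows; the NumPy representation is an assumption made for illustration.

```python
import numpy as np

def combine_color_masks(first_mask: np.ndarray, second_mask: np.ndarray) -> np.ndarray:
    """Combine the first mask image 70A and the second mask image 70B by a
    per-pixel logical OR, so that a pixel counts as the object (transparent
    portion, 1) if either emission color classified it as the object."""
    return np.logical_or(first_mask, second_mask).astype(np.uint8)

# mask_70 = combine_color_masks(mask_70A, mask_70B)
```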
- The data acquisition device 10 may execute a data acquisition method including a procedure for lighting the light-emitting panel 20 in multiple colors, as shown in the flowchart of FIG. 13.
- The data acquisition method may be realized as a data acquisition program executed by the processor constituting the control unit 12 of the data acquisition device 10.
- The data acquisition program may be stored on a non-transitory computer-readable medium.
- The control unit 12 photographs the light-emitting panel 20 using the photographing device 30 (step S11). Specifically, the control unit 12 may light up the light-emitting panel 20 so that it emits light in the first color and in the second color, and photograph the light-emitting panel 20 with the photographing device 30 while the object 50 is not placed on the light-emitting panel 20. The control unit 12 may obtain images of the light-emitting panel 20 lit to emit light in the first color and in the second color.
- The control unit 12 causes the photographing device 30 to photograph the light-emitting panel 20 while the object 50 is placed on the light-emitting panel 20 and the light-emitting panel 20 is lit and emitting the first color (step S12).
- The control unit 12 may acquire the image photographed by the photographing device 30 as a first light-on image.
- The control unit 12 generates the first mask image 70A based on the first light-on image (step S13).
- The control unit 12 causes the photographing device 30 to photograph the light-emitting panel 20 while the object 50 is placed on the light-emitting panel 20 and the light-emitting panel 20 is lit and emitting the second color (step S14).
- The control unit 12 may acquire the image photographed by the photographing device 30 as a second light-on image.
- The control unit 12 generates the second mask image 70B based on the second light-on image (step S15).
- The control unit 12 calculates the logical OR of the first mask image 70A and the second mask image 70B to generate the mask image 70 (step S16). Specifically, the control unit 12 may calculate the logical OR of each pixel of the first mask image 70A and the corresponding pixel of the second mask image 70B and generate, as the mask image 70, an image in which the results of these calculations are arranged. After executing the procedure of step S16, the control unit 12 ends the execution of the procedure of the flowchart of FIG. 13.
- The data acquisition system 1 may include a data acquisition stand for acquiring data.
- The data acquisition stand may include the light-emitting panel 20 and a plate for placing the object 50 on the light-emitting surface of the light-emitting panel 20.
- The plate on which the object 50 is placed is configured to transmit the light emitted from the light-emitting panel 20 and is also referred to as a light-transmitting member.
- The light-transmitting member may be configured so that the object 50 does not directly touch the light-emitting surface.
- The light-transmitting member may be arranged at a distance from the light-emitting surface, or may be arranged in contact with the light-emitting surface.
- The data acquisition stand may further include a darkroom that accommodates the light-emitting panel 20 and the light-transmitting member. The data acquisition stand may also further include an illumination device 40 configured to be able to illuminate the object 50.
- As shown in FIG. 14, a robot control system 100 according to an embodiment includes a robot 2 and a robot control device 110.
- In the present embodiment, the robot 2 moves a work object 8 from a work start point 6 to a work target point 7. That is, the robot control device 110 controls the robot 2 so that the work object 8 moves from the work start point 6 to the work target point 7.
- The work object 8 is also referred to as a workpiece.
- The robot control device 110 controls the robot 2 based on information regarding the space in which the robot 2 performs work. The information regarding the space is also referred to as spatial information.
- The robot control device 110 acquires a trained model based on learning using the teacher data generated by the data acquisition device 10.
- Based on the image taken by the camera 4 and the trained model, the robot control device 110 recognizes the work object 8, the work start point 6, the work target point 7, and the like that exist in the space where the robot 2 performs its work. In other words, the robot control device 110 acquires a trained model generated to recognize the work object 8 and the like based on the image taken by the camera 4.
- The robot control device 110 may include at least one processor in order to provide the control and processing capabilities for performing its various functions. Each component of the robot control device 110 may include at least one processor. A plurality of the components of the robot control device 110 may be realized by one processor. The entire robot control device 110 may be realized by one processor. The processor can execute programs that implement the various functions of the robot control device 110.
- The processor may be implemented as a single integrated circuit. An integrated circuit is also called an IC (Integrated Circuit).
- The processor may be implemented as a plurality of communicatively connected integrated circuits and discrete circuits. The processor may be implemented based on various other known technologies.
- The robot control device 110 may include a storage unit.
- The storage unit may include an electromagnetic storage medium such as a magnetic disk, or may include a memory such as a semiconductor memory or a magnetic memory.
- The storage unit stores various kinds of information, programs executed by the robot control device 110, and the like.
- The storage unit may be configured as a non-transitory readable medium.
- The storage unit may function as a work memory for the robot control device 110. At least a part of the storage unit may be configured separately from the robot control device 110.
- The robot 2 includes an arm 2A and an end effector 2B.
- The arm 2A may be configured, for example, as a 6-axis or 7-axis vertically articulated robot.
- The arm 2A may be configured as a 3-axis or 4-axis horizontally articulated robot or a SCARA robot.
- The arm 2A may be configured as a 2-axis or 3-axis Cartesian robot.
- The arm 2A may be configured as a parallel link robot or the like.
- The number of axes constituting the arm 2A is not limited to those illustrated.
- In other words, the robot 2 has the arm 2A connected by a plurality of joints and operates by driving the joints.
- The end effector 2B may include, for example, a gripping hand configured to be able to grip the workpiece 8.
- The gripping hand may have multiple fingers. The number of fingers of the gripping hand may be two or more. Each finger of the gripping hand may have one or more joints.
- The end effector 2B may include a suction hand configured to be able to suction the workpiece 8.
- The end effector 2B may include a scooping hand configured to be able to scoop up the workpiece 8.
- The end effector 2B may include a tool such as a drill and may be configured to perform various kinds of machining, such as drilling a hole in the workpiece 8.
- The end effector 2B is not limited to these examples and may be configured to perform various other operations. In the configuration illustrated in FIG. 14, the end effector 2B is assumed to include a gripping hand.
- The robot control device 110 can control the position of the end effector 2B by operating the arm 2A of the robot 2.
- The end effector 2B may have an axis that serves as a reference for the direction in which it acts on the workpiece 8.
- If the end effector 2B has such an axis, the robot control device 110 can control the direction of the axis of the end effector 2B by operating the arm 2A of the robot 2.
- The robot control device 110 controls the start and end of the operation in which the end effector 2B acts on the workpiece 8.
- The robot control device 110 can move or machine the workpiece 8 by controlling the position of the end effector 2B or the direction of the axis of the end effector 2B and controlling the operation of the end effector 2B.
- In the configuration illustrated in FIG. 14, the robot control device 110 causes the end effector 2B to grip the work object 8 at the work start point 6 and moves the end effector 2B to the work target point 7.
- The robot control device 110 causes the end effector 2B to release the work object 8 at the work target point 7. By doing so, the robot control device 110 can cause the robot 2 to move the work object 8 from the work start point 6 to the work target point 7.
- The robot control system 100 further includes a sensor 3.
- The sensor 3 detects physical information about the robot 2.
- The physical information about the robot 2 may include information regarding the actual position or posture of each component of the robot 2, or the speed or acceleration of each component of the robot 2.
- The physical information about the robot 2 may include information regarding forces acting on each component of the robot 2.
- The physical information about the robot 2 may include information regarding the current flowing through the motors that drive each component of the robot 2, or the torque of those motors.
- The physical information about the robot 2 represents the result of the actual movements of the robot 2. That is, by acquiring the physical information about the robot 2, the robot control system 100 can grasp the result of the actual operation of the robot 2.
- The sensor 3 may include a force sensor or a tactile sensor that detects, as physical information about the robot 2, a force acting on the robot 2, a pressure distribution, slip, or the like.
- The sensor 3 may include a motion sensor that detects, as physical information about the robot 2, the position, posture, speed, or acceleration of the robot 2.
- The sensor 3 may include a current sensor that detects, as physical information about the robot 2, the current flowing through a motor that drives the robot 2.
- The sensor 3 may include a torque sensor that detects, as physical information about the robot 2, the torque of a motor that drives the robot 2.
- The sensor 3 may be installed in a joint of the robot 2 or in a joint drive unit that drives the joint.
- The sensor 3 may be installed on the arm 2A or the end effector 2B of the robot 2.
- The sensor 3 outputs the detected physical information about the robot 2 to the robot control device 110.
- The sensor 3 detects and outputs the physical information about the robot 2 at predetermined timings.
- The sensor 3 outputs the physical information about the robot 2 as time-series data.
- In the configuration example shown in FIG. 14, the robot control system 100 includes two cameras 4.
- The cameras 4 photograph objects, people, and the like located in an influence range 5 that may affect the operation of the robot 2.
- The images taken by the cameras 4 may include monochrome luminance information or may include luminance information of each color expressed in RGB or the like.
- The influence range 5 includes the operation range of the robot 2. The influence range 5 is assumed to be a range obtained by extending the operation range of the robot 2 further outward.
- The influence range 5 may be set so that the robot 2 can be stopped before a person or the like moving from outside the operation range of the robot 2 toward the inside enters the operation range of the robot 2.
- The influence range 5 may be set, for example, to a range extending outward by a predetermined distance from the boundary of the operation range of the robot 2.
- The cameras 4 may be installed so as to be able to take a bird's-eye view of the influence range 5 or the operation range of the robot 2, or of the area around them.
- The number of cameras 4 is not limited to two and may be one, or three or more.
- The robot control device 110 acquires the trained model in advance.
- The robot control device 110 may store the trained model in the storage unit.
- The robot control device 110 acquires an image of the work object 8 from the camera 4.
- The robot control device 110 inputs the captured image of the work object 8 to the trained model as input information.
- The robot control device 110 acquires the output information output from the trained model in response to the input information.
- The robot control device 110 recognizes the work object 8 based on the output information and executes the work of gripping and moving the work object 8.
- As described above, the robot control system 100 can acquire a trained model based on learning using the teacher data generated by the data acquisition system 1 and can recognize the workpiece 8 using the trained model.
- Although embodiments of the data acquisition system 1 and the robot control system 100 have been described above, embodiments of the present disclosure may also take the form of a method or program for implementing the system or device, or of a storage medium on which the program is recorded (for example, an optical disk, a magneto-optical disk, a CD-ROM, a CD-R, a CD-RW, a magnetic tape, a hard disk, or a memory card).
- The implementation form of the program is not limited to an application program such as object code compiled by a compiler or program code executed by an interpreter; it may also take the form of a program module incorporated into an operating system.
- The program may or may not be configured so that all processing is performed only in the CPU on the control board.
- The program may be configured so that part or all of it is executed by an expansion board attached to the board or by another processing unit mounted in an expansion unit, as necessary.
- Embodiments according to the present disclosure are not limited to any of the specific configurations of the embodiments described above. Embodiments of the present disclosure can be extended to all novel features described in the present disclosure, or combinations thereof, or to all novel methods or process steps described, or combinations thereof.
- In the present disclosure, descriptions such as "first" and "second" are identifiers for distinguishing configurations.
- Configurations distinguished by descriptions such as "first" and "second" can have their numbers exchanged.
- For example, the first mask image 70A can exchange the identifiers "first" and "second" with the second mask image 70B.
- The exchange of identifiers takes place simultaneously. The configurations remain distinguished after the exchange of identifiers.
- Identifiers may be removed. Configurations whose identifiers have been removed are distinguished by reference signs.
- The descriptions of identifiers such as "first" and "second" in the present disclosure alone should not be used to interpret the order of the configurations or as grounds for the existence of an identifier with a smaller number.
- (1) A data acquisition device according to one embodiment of the present disclosure includes a control unit configured to be able to control a light-emitting panel and to be able to acquire at least one captured image of a light-emitting surface of the light-emitting panel. The control unit generates mask data of an object based on, among the at least one captured image, a captured image of the light-emitting panel and the object located in front of the light-emitting panel taken while the light-emitting panel is emitting light.
- (2) The control unit may determine the emission color of the light-emitting panel based on the color of the object.
- (3) The control unit may cause the light-emitting panel to emit light in a plurality of emission colors, and generate the mask data of the object using a plurality of mask data corresponding to the respective emission colors, each based on a captured image taken when the light-emitting panel emits light in that color.
- (4) The control unit may generate the mask data of the object based on, among the at least one captured image, a captured image of the light-emitting panel taken while the object is not located in front of it.
- (5) The control unit may generate the mask data of the object based on a difference image between a captured image taken when the light-emitting panel is emitting light and a captured image taken when the light-emitting panel is not emitting light.
- (6) The control unit may acquire a captured image in a state in which the object and the light-emitting panel are not exposed to ambient light.
- (7) The control unit may set the emission luminance of the light-emitting panel so that the luminance of the light-emitting panel is greater than the luminance of the object in the captured image.
- (8) The control unit may extract image data of the object from an image of the object taken at the same position as when the captured image was taken, based on the mask data of the object.
- (9) The control unit may control illumination light that illuminates the object.
- (10) A data acquisition method according to one embodiment of the present disclosure includes causing a light-emitting panel to emit light, and generating mask data of an object based on a captured image of the light-emitting panel and the object located in front of the light-emitting panel.
- (11) The data acquisition method of (10) above may further include extracting image data of the object from an image of the object taken at the same position as when the captured image was taken, based on the mask data of the object.
- (12) A data acquisition stand according to one embodiment of the present disclosure includes a light-emitting panel that emits light in a predetermined color, and a light-transmitting member located between the light-emitting panel and an object placed in front of the light-emitting panel.
- (13) The data acquisition stand of (12) above may further include a dark room that accommodates the light-emitting panel and the light-transmitting member.
- (14) The data acquisition stand of (12) or (13) above may further include an illumination device configured to be able to illuminate the object.
- (15) The light-emitting panel may emit light in one of predetermined colors.
- 1 Data acquisition system; 10 Data acquisition device (12: control unit, 14: storage unit, 16: interface); 20 Light-emitting panel (22: light-off image, 24: light-on image); 30 Photographing device; 40 Illumination device (42: illumination light); 50 Object (52: top surface, 54: side surface); 60 Captured image (62: object image, 64: extracted image of object); 70 Mask image (70A: first mask image, 70B: second mask image, 72: mask portion, 74: transparent portion); 80 Composite image (82: background image); 100 Robot control system (2: robot, 2A: arm, 2B: end effector, 3: sensor, 4: camera, 5: influence range, 6: work start point, 7: work target point, 8: work object, 110: robot control device)
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Description
A data acquisition system 1 according to an embodiment of the present disclosure acquires teacher data for generating a trained model that outputs a recognition result of a recognition target included in input information. The trained model may include a CNN (Convolutional Neural Network) having multiple layers. In each layer of the CNN, convolution based on predetermined weighting coefficients is applied to the information input to the trained model. In training the trained model, the weighting coefficients are updated. The trained model may include a fully connected layer. The trained model may be configured by VGG16 or ResNet50. The trained model may be configured as a transformer. The trained model is not limited to these examples and may be configured as various other models.
The data acquisition device 10 includes a control unit 12, a storage unit 14, and an interface 16.
The light-emitting panel 20 has a light-emitting surface. The light-emitting panel 20 may be configured as a diffuser plate that disperses the light emitted from a light source and emits it in a planar manner. The light-emitting panel 20 may be configured as a self-emitting panel. The light-emitting panel 20 may be configured to emit light in one of predetermined colors. The light-emitting panel 20 may be configured to emit a single color, such as white, as its emission color. The light-emitting panel 20 is not limited to white and may be configured to emit light in various colors. The light-emitting panel 20 may be configured to emit light in a predetermined color. The light-emitting panel 20 may be configured to emit light in at least two emission colors. The light-emitting panel 20 may be configured to control the spectrum of its emission color, for example, by combining the luminance values of the RGB (Red, Green, Blue) colors.
The photographing device 30 may include various image sensors, cameras, or the like. The photographing device 30 is arranged so as to be able to photograph the light-emitting surface of the light-emitting panel 20 or the object 50 placed on the light-emitting surface. That is, the photographing device 30 is configured to be able to photograph, together with the light-emitting panel 20, the object 50 located in front of the light-emitting panel 20 as seen from the photographing device 30. The photographing device 30 may be configured to photograph the light-emitting surface of the light-emitting panel 20 from various directions. The photographing device 30 may be arranged so that the normal direction of the light-emitting surface of the light-emitting panel 20 and the optical axis of the photographing device 30 coincide.
In the data acquisition system 1, the data acquisition device 10 acquires teacher data used in learning for generating a trained model that recognizes the object 50 from an image of the object 50. The image of the object 50 includes the background of the object 50. The control unit 12 of the data acquisition device 10 may acquire teacher data from a captured image 60 having 25 pixels arranged in a 5×5 grid, for example, as shown in FIG. 4A. The numerical value written in the cell corresponding to each pixel of the captured image 60 corresponds to the brightness of that pixel when its color is expressed in grayscale. The numerical values represent brightness in 256 levels from 0 to 255. The larger the value, the closer the pixel is to white. When the value is 0, the color of the pixel corresponding to that cell is assumed to be black. When the value is 255, the color of the pixel corresponding to that cell is assumed to be white.
The data acquisition device 10 may execute a data acquisition method including the procedure of the flowchart illustrated in FIG. 9. The data acquisition method may be realized as a data acquisition program executed by the processor constituting the control unit 12 of the data acquisition device 10. The data acquisition program may be stored on a non-transitory computer-readable medium.
As described above, according to the data acquisition system 1, the data acquisition device 10, and the data acquisition method of the present embodiment, the contrast between the object 50 and the background can be increased in the captured image 60 of the object 50. Because the contrast is increased, mask data for extracting the object 50 can be generated with high accuracy. Because the mask data is generated with high accuracy, manual correction of the image of the object 50 may become unnecessary. As a result, annotation can be simplified.
Other embodiments are described below.
The object 50 may include a top surface 52 and a side surface 54, as illustrated in FIG. 10. When the light-emitting panel 20 lights up and emits light, the light emitted from the light-emitting panel 20 may be reflected by the side surface 54 and enter the photographing device 30. When the light reflected by the side surface 54 enters the photographing device 30, the side surface 54 of the object 50 may appear to be emitting light in the captured image 60.
The data acquisition system 1 may include a data acquisition stand for acquiring data. The data acquisition stand may include the light-emitting panel 20 and a plate for placing the object 50 on the light-emitting surface of the light-emitting panel 20. The plate on which the object 50 is placed is configured to transmit the light emitted from the light-emitting panel 20 and is also referred to as a light-transmitting member. The light-transmitting member may be configured so that the object 50 does not directly touch the light-emitting surface. The light-transmitting member may be arranged at a distance from the light-emitting surface or may be arranged in contact with the light-emitting surface.
As shown in FIG. 14, a robot control system 100 according to an embodiment includes a robot 2 and a robot control device 110. In the present embodiment, the robot 2 moves a work object 8 from a work start point 6 to a work target point 7. That is, the robot control device 110 controls the robot 2 so that the work object 8 moves from the work start point 6 to the work target point 7. The work object 8 is also referred to as a workpiece. The robot control device 110 controls the robot 2 based on information regarding the space in which the robot 2 performs work. The information regarding the space is also referred to as spatial information.
The robot control device 110 acquires a trained model based on learning using the teacher data generated by the data acquisition device 10. Based on the image taken by the camera 4 and the trained model, the robot control device 110 recognizes the work object 8, the work start point 6, the work target point 7, and the like that exist in the space where the robot 2 performs its work. In other words, the robot control device 110 acquires a trained model generated to recognize the work object 8 and the like based on the image taken by the camera 4.
The robot 2 includes an arm 2A and an end effector 2B. The arm 2A may be configured, for example, as a 6-axis or 7-axis vertically articulated robot. The arm 2A may be configured as a 3-axis or 4-axis horizontally articulated robot or a SCARA robot. The arm 2A may be configured as a 2-axis or 3-axis Cartesian robot. The arm 2A may be configured as a parallel link robot or the like. The number of axes constituting the arm 2A is not limited to those illustrated. In other words, the robot 2 has the arm 2A connected by a plurality of joints and operates by driving the joints.
As shown in FIG. 14, the robot control system 100 further includes a sensor 3. The sensor 3 detects physical information about the robot 2. The physical information about the robot 2 may include information regarding the actual position or posture of each component of the robot 2, or the speed or acceleration of each component of the robot 2. The physical information about the robot 2 may include information regarding forces acting on each component of the robot 2. The physical information about the robot 2 may include information regarding the current flowing through the motors that drive each component of the robot 2, or the torque of those motors. The physical information about the robot 2 represents the result of the actual operation of the robot 2. That is, by acquiring the physical information about the robot 2, the robot control system 100 can grasp the result of the actual operation of the robot 2.
In the configuration example shown in FIG. 14, the robot control system 100 includes two cameras 4. The cameras 4 photograph objects, people, and the like located in an influence range 5 that may affect the operation of the robot 2. The images taken by the cameras 4 may include monochrome luminance information or may include luminance information of each color expressed in RGB or the like. The influence range 5 includes the operation range of the robot 2. The influence range 5 is assumed to be a range obtained by extending the operation range of the robot 2 further outward. The influence range 5 may be set so that the robot 2 can be stopped before a person or the like moving from outside the operation range of the robot 2 toward the inside enters the operation range of the robot 2. The influence range 5 may be set, for example, to a range extended outward by a predetermined distance from the boundary of the operation range of the robot 2. The cameras 4 may be installed so as to be able to take a bird's-eye view of the influence range 5 or the operation range of the robot 2, or of the area around them. The number of cameras 4 is not limited to two and may be one, or three or more.
The robot control device 110 acquires the trained model in advance. The robot control device 110 may store the trained model in the storage unit. The robot control device 110 acquires an image of the work object 8 from the camera 4. The robot control device 110 inputs the captured image of the work object 8 to the trained model as input information. The robot control device 110 acquires the output information output from the trained model in response to the input information. The robot control device 110 recognizes the work object 8 based on the output information and executes the work of gripping and moving the work object 8.
As described above, the robot control system 100 can acquire a trained model based on learning using the teacher data generated by the data acquisition system 1 and can recognize the work object 8 using the trained model.
10 Data acquisition device (12: control unit, 14: storage unit, 16: interface)
20 Light-emitting panel (22: light-off image, 24: light-on image)
30 Photographing device
40 Illumination device (42: illumination light)
50 Object (52: top surface, 54: side surface)
60 Captured image (62: object image, 64: extracted image of object)
70 Mask image (70A: first mask image, 70B: second mask image, 72: mask portion, 74: transparent portion)
80 Composite image (82: background image)
100 Robot control system (2: robot, 2A: arm, 2B: end effector, 3: sensor, 4: camera, 5: influence range, 6: work start point, 7: work target point, 8: work object, 110: robot control device)
Claims (15)
- 1. A data acquisition device comprising a control unit configured to be able to control a light-emitting panel and to be able to acquire at least one captured image of a light-emitting surface of the light-emitting panel, wherein the control unit generates mask data of an object based on, among the at least one captured image, a captured image of the light-emitting panel and the object located in front of the light-emitting panel taken while the light-emitting panel is emitting light.
- 2. The data acquisition device according to claim 1, wherein the control unit determines an emission color of the light-emitting panel based on a color of the object.
- 3. The data acquisition device according to claim 1 or 2, wherein the control unit causes the light-emitting panel to emit light in a plurality of emission colors and generates the mask data of the object using a plurality of mask data corresponding to the respective emission colors, each based on a captured image taken while the light-emitting panel emits light in that emission color.
- 4. The data acquisition device according to claim 1, wherein the control unit generates the mask data of the object based on, among the at least one captured image, a captured image of the light-emitting panel taken while the object is not located in front of it.
- 5. The data acquisition device according to claim 1 or 2, wherein the control unit generates the mask data of the object based on a difference image between a captured image taken while the light-emitting panel is emitting light and a captured image taken while the light-emitting panel is not emitting light.
- 6. The data acquisition device according to claim 1 or 2, wherein the control unit acquires a captured image in a state in which the object and the light-emitting panel are not exposed to ambient light.
- 7. The data acquisition device according to claim 1 or 2, wherein the control unit sets the emission luminance of the light-emitting panel so that the luminance of the light-emitting panel is greater than the luminance of the object in the captured image.
- 8. The data acquisition device according to claim 1 or 2, wherein the control unit extracts image data of the object from an image of the object taken at the same position as when the captured image was taken, based on the mask data of the object.
- 9. The data acquisition device according to claim 8, wherein the control unit controls illumination light that illuminates the object.
- 10. A data acquisition method comprising: causing a light-emitting panel to emit light; and generating mask data of an object based on a captured image of the light-emitting panel and the object located in front of the light-emitting panel.
- 11. The data acquisition method according to claim 10, further comprising extracting image data of the object from an image of the object taken at the same position as when the captured image was taken, based on the mask data of the object.
- 12. A data acquisition stand comprising: a light-emitting panel that emits light in a predetermined color; and a light-transmitting member located between the light-emitting panel and an object placed in front of the light-emitting panel.
- 13. The data acquisition stand according to claim 12, further comprising a dark room that accommodates the light-emitting panel and the light-transmitting member.
- 14. The data acquisition stand according to claim 12 or 13, further comprising an illumination device configured to be able to illuminate the object.
- 15. The data acquisition stand according to claim 12, wherein the light-emitting panel emits light in one of predetermined colors.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202380043035.0A CN119183585A (zh) | 2022-05-31 | 2023-05-18 | 数据取得装置、数据取得方法以及数据取得台 |
| US18/869,561 US20250351247A1 (en) | 2022-05-31 | 2023-05-18 | Data obtaining device, data obtaining method, and data obtaining stage |
| JP2024524337A JPWO2023234061A1 (ja) | 2022-05-31 | 2023-05-18 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2022-088690 | 2022-05-31 | ||
| JP2022088690 | 2022-05-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023234061A1 true WO2023234061A1 (ja) | 2023-12-07 |
Family
ID=89026603
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/018641 Ceased WO2023234061A1 (ja) | 2022-05-31 | 2023-05-18 | データ取得装置、データ取得方法、及びデータ取得台 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20250351247A1 (ja) |
| JP (1) | JPWO2023234061A1 (ja) |
| CN (1) | CN119183585A (ja) |
| WO (1) | WO2023234061A1 (ja) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2016181068A (ja) * | 2015-03-24 | 2016-10-13 | 株式会社明電舎 | 学習サンプル撮影装置 |
| JP2017162217A (ja) * | 2016-03-10 | 2017-09-14 | 株式会社ブレイン | 物品識別システム |
| WO2019167277A1 (ja) * | 2018-03-02 | 2019-09-06 | 日本電気株式会社 | 画像収集装置、画像収集システム、画像収集方法、画像生成装置、画像生成システム、画像生成方法、およびプログラム |
| WO2021182345A1 (ja) * | 2020-03-13 | 2021-09-16 | 富士フイルム富山化学株式会社 | 学習データ作成装置、方法、プログラム、学習データ及び機械学習装置 |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2021089168A (ja) * | 2019-12-03 | 2021-06-10 | リョーエイ株式会社 | ワーク検査用パネル照明システム及びワークの検査方法 |
-
2023
- 2023-05-18 WO PCT/JP2023/018641 patent/WO2023234061A1/ja not_active Ceased
- 2023-05-18 JP JP2024524337A patent/JPWO2023234061A1/ja active Pending
- 2023-05-18 CN CN202380043035.0A patent/CN119183585A/zh active Pending
- 2023-05-18 US US18/869,561 patent/US20250351247A1/en active Pending
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2016181068A (ja) * | 2015-03-24 | 2016-10-13 | 株式会社明電舎 | 学習サンプル撮影装置 |
| JP2017162217A (ja) * | 2016-03-10 | 2017-09-14 | 株式会社ブレイン | 物品識別システム |
| WO2019167277A1 (ja) * | 2018-03-02 | 2019-09-06 | 日本電気株式会社 | 画像収集装置、画像収集システム、画像収集方法、画像生成装置、画像生成システム、画像生成方法、およびプログラム |
| WO2021182345A1 (ja) * | 2020-03-13 | 2021-09-16 | 富士フイルム富山化学株式会社 | 学習データ作成装置、方法、プログラム、学習データ及び機械学習装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2023234061A1 (ja) | 2023-12-07 |
| CN119183585A (zh) | 2024-12-24 |
| US20250351247A1 (en) | 2025-11-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2020122632A1 (ko) | 로봇 장치 및 로봇의 작업 스킬을 학습하는 방법 | |
| US20230339118A1 (en) | Reliable robotic manipulation in a cluttered environment | |
| CN101733755A (zh) | 机械手系统、机械手控制装置以及机械手控制方法 | |
| JP2004216552A (ja) | 移動ロボットとその自律走行システム及び方法 | |
| CN107030692A (zh) | 一种基于感知增强的机械手遥操作方法及系统 | |
| JP2025147230A (ja) | ロボットの保持態様決定装置、保持態様決定方法、及びロボット制御システム | |
| CN110619630A (zh) | 一种基于机器人的移动设备可视化测试系统及测试方法 | |
| WO2023234061A1 (ja) | データ取得装置、データ取得方法、及びデータ取得台 | |
| WO2023234062A1 (ja) | データ取得装置、データ取得方法、及びデータ取得台 | |
| US20240265691A1 (en) | Trained model generating device, trained model generating method, and recognition device | |
| US20240351198A1 (en) | Trained model generation method, trained model generation device, trained model, and holding mode inference device | |
| CN114845845A (zh) | 把持装置、控制方法和程序 | |
| US20240342905A1 (en) | Holding parameter estimation device and holding parameter estimation method | |
| KR20200101300A (ko) | 증강현실과 연계 동작 가능한 스마트 코딩블록 시스템 | |
| US20240265669A1 (en) | Trained model generating device, trained model generating method, and recognition device | |
| US20250135663A1 (en) | Illumination control in robotic end effector manipulation | |
| JP7651691B2 (ja) | 保持位置決定装置、及び保持位置決定方法 | |
| WO2025013784A1 (ja) | 処理装置、プログラム、表示装置及び処理システム | |
| CN118234551A (zh) | 用于模块化玩具构造套件的传感器模块 | |
| CN119212835A (zh) | 机器人控制装置以及机器人控制方法 | |
| WO2025164228A1 (ja) | ティーチングのための装置、ティーチングのための装置の制御方法およびティーチングのための装置のプログラム | |
| WO2023054535A1 (ja) | 情報処理装置、ロボットコントローラ、ロボット制御システム、及び情報処理方法 | |
| CN120791798A (zh) | 一种基于视觉感知的机械臂抓取控制系统及方法 | |
| WO2024204048A1 (ja) | 処理装置、設定装置、多面体及びプログラム | |
| CN111844041A (zh) | 定位辅助装置、机器人和视觉定位系统 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23815820 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2024524337 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 18869561 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 23815820 Country of ref document: EP Kind code of ref document: A1 |
|
| WWP | Wipo information: published in national office |
Ref document number: 18869561 Country of ref document: US |