CN111476336A - Piece counting method, device and equipment for clothes - Google Patents
- Publication number: CN111476336A (application CN201910063798.7A)
- Authority
- CN
- China
- Prior art keywords
- image data
- moving object
- clothes
- working area
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06M7/00 — Counting of objects carried by a conveyor (G Physics › G06 Computing or calculating; counting › G06M Counting mechanisms; counting of objects not otherwise provided for)
- G06V20/10 — Terrestrial scenes (G Physics › G06 Computing or calculating; counting › G06V Image or video recognition or understanding › G06V20/00 Scenes; scene-specific elements)
- Y02P90/30 — Computing systems specially adapted for manufacturing (Y02 Technologies for mitigation or adaptation against climate change › Y02P Climate change mitigation technologies in the production or processing of goods › Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation)
Abstract
An embodiment of the invention provides a piece counting method, device, and equipment for garments. The method comprises: acquiring at least one frame of image data of a quality inspection operation performed on a garment; identifying a moving object in the image data and the working area where the moving object is located; and performing a piece counting operation on the quality-inspected garments according to the moving object and the working area. By acquiring the image data, identifying the moving object and its working area, and counting the quality-inspected garments accordingly, piece counting of quality-inspected garments is reliably achieved, the production management cost of piece counting is reduced, and the efficiency and accuracy of the count are ensured, which facilitates factory production management and improves management efficiency. In addition, users can access production process data at any time and track order production progress, ultimately enabling efficient production-sales coordination.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to a piece counting method, device, and equipment for garments.
Background
In garment production and processing, a factory's workflow mainly comprises three stages: fabric cutting, sewing (assembly line or whole-garment), and finishing. At present, piece counting is performed on garments in the finishing stage, and is implemented in one of two ways: using conventional intrusive barcode guns or radio frequency identification (RFID) technology, or assigning dedicated piece counting staff to enter the data manually.
However, counting garments with barcode guns or RFID requires additional worker operation and training time, which significantly increases the factory's garment processing cost; manual counting is inefficient, increases labor cost, and reduces the factory's efficiency in managing garments.
Disclosure of Invention
The embodiment of the invention provides a piece counting method, a piece counting device and piece counting equipment for clothes, which are used for reducing the piece counting cost of the clothes, ensuring the piece counting efficiency of the clothes and further improving the clothes management efficiency of a factory.
In a first aspect, an embodiment of the present invention provides a piece counting method for a garment, including:
acquiring at least one frame of image data for performing quality inspection operation on the clothes;
identifying a moving object in the image data and a working area where the moving object is located;
and performing a piece counting operation on the quality-inspected garments according to the moving object and the working area.
In a second aspect, an embodiment of the present invention provides a piece counting device for a garment, including:
the acquisition module is used for acquiring at least one frame of image data for performing quality inspection on the clothes;
the identification module is used for identifying the moving object in the image data and the working area where the moving object is located;
and the piece counting module is used for carrying out piece counting operation on the clothes subjected to quality inspection according to the moving object and the working area.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the piece count method for a garment of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to make a computer execute a piece counting method for a garment in the first aspect.
By acquiring at least one frame of image data of a quality inspection operation performed on the garment, identifying the moving object in the image data and the working area where the moving object is located, and performing the piece counting operation on the quality-inspected garments according to the moving object and the working area, piece counting of quality-inspected garments is reliably achieved, the production management cost of piece counting is reduced, and the efficiency and accuracy of the count are ensured, which facilitates factory production management and improves management efficiency. In addition, users can access production process data at any time and track order production progress, ultimately enabling efficient production-sales coordination.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a method for piece counting a garment according to an embodiment of the present invention;
FIG. 2 is a flow chart of identifying moving objects in the image data according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an embodiment of identifying a moving object in each frame of image data according to at least one frame of image data and a background model image;
fig. 4 is a flowchart for determining a moving object in the image data according to the first pixel value and the second pixel value according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a process of identifying a work area in which a moving object is located in the image data according to an embodiment of the present invention;
fig. 6 is a flowchart for establishing a statistical matrix for representing the motion change frequency of the moving object according to an embodiment of the present invention;
fig. 7 is a flowchart of determining a working area where the moving object is located according to the statistical matrix according to the embodiment of the present invention;
fig. 8 is a flowchart of updating the statistical matrix according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a piece counting operation performed on the quality-inspected garment according to the moving object and the work area according to an embodiment of the present invention;
FIG. 10 is a schematic view of a piece counting device for a garment according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device corresponding to the piece counting device of the garment provided in the embodiment shown in fig. 10.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a" and "an" generally include at least two, but do not exclude at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such an article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the article or system that comprises it.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
FIG. 1 is a flow chart of a piece counting method for a garment according to an embodiment of the present invention. Referring to fig. 1, this embodiment provides a piece counting method for a garment. The method is executed by a piece counting device; when the device executes the method, it can perform piece counting statistics on garments undergoing quality inspection in the finishing stage of garment production and processing. Specifically, the piece counting method may include:
s101: at least one frame of image data for performing quality inspection operation on the garment is acquired.
The image data in this embodiment may be acquired in real time. For example, when the quality inspection operation is performed on the garment, an image acquisition device (such as a camera) can be installed at a preset position, and the image data of the quality inspection operation can be acquired in real time through this device. Alternatively, the image data may be acquired offline: it may be captured by a preset image acquisition device and stored in a preset storage area, and obtained later by accessing that area; or the image data may be transmitted actively or passively by the image acquisition device, in which case it may be stored in a preset area of the device. Of course, a person skilled in the art may obtain image data of the quality inspection operation in other ways according to specific design requirements and application scenarios, as long as the accuracy and reliability of the acquired image data can be ensured, which is not described herein again.
Optionally, after acquiring the at least one frame of image data for quality inspection of the garment, in order to ensure quality and efficiency of processing the image data, the method in this embodiment may further include adjusting the resolution of the at least one frame of image data so that the resolution of the image data meets a preset standard.
The preset standard is configured in advance, and those skilled in the art can set different resolution standards according to specific application requirements. For example, the preset standard may be a resolution of 320 × 240 pixels, or 640 × 480 pixels, and so on. It should be noted that image data whose resolution meets the preset standard can be analyzed and identified accurately and reliably. Therefore, after the image data is obtained, its resolution can be checked first: when the resolution is greater than the preset standard, the image data is a high-resolution image; when the resolution is smaller than the preset standard, the image data is a low-resolution image, and its resolution can be increased before processing to ensure accurate identification of the moving object and the working area.
For example: the resolution of 320 × 240dpi is used as the resolution of the preset standard, and when the resolution of the acquired image data is 1280 × 720dpi, the resolution is greater than the preset standard, so that the current image data can be down-sampled, and the resolution of the image data is adjusted from 1280 × 720dpi to 320 × 240dpi, thereby reducing the amount of calculation for processing the image data, ensuring the real-time performance and reliability of processing the image data, and acquiring an accurate processing result. When the resolution of the acquired image data is 160 × 120dpi, the resolution is smaller than the preset standard, so that the current image data can be adjusted, the resolution of the image data is adjusted from 160 × 120dpi to 320 × 240dpi, the accuracy of image data identification can be effectively ensured, and an accurate processing result can be obtained.
Optionally, before identifying the moving object in the image data and the working area where the moving object is located, the method in this embodiment may further include: and carrying out filtering and denoising processing on at least one frame of image data.
When image data is acquired, due to the influence of the working environment of a moving object and other factors, the acquired image data may have more noise, and in order to improve the accuracy of image data processing, a gaussian model can be adopted to perform filtering and denoising processing on at least one frame of image data, so that noise mixed in an image is eliminated, and clearer image data is acquired.
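The Gaussian denoising step can be sketched as a separable Gaussian filter in plain NumPy. This is a stand-in under the assumption of single-channel input; in practice one would likely call a library routine such as `cv2.GaussianBlur`.

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """Normalised 1-D Gaussian kernel with radius about 3*sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum(), radius

def gaussian_denoise(gray, sigma=1.0):
    """Filter a single-channel frame with a separable Gaussian:
    convolve every row, then every column, with the 1-D kernel."""
    k, r = gaussian_kernel_1d(sigma)
    padded = np.pad(gray.astype(float), r, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, padded, k, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="valid")
```

Because the kernel is normalised and the borders are edge-padded, a noise-free constant region passes through unchanged while high-frequency noise is attenuated.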
S102: moving objects in the image data and a work area where the moving objects are located are identified.
After the image data is acquired, the image data may be analyzed to identify a moving object and a work area in which the moving object is located in the image data. The moving object refers to a worker who performs quality inspection operation on the clothes, and the working area where the moving object is located refers to an area where the worker performs quality inspection operation on the clothes.
In this embodiment, a specific implementation process for identifying the moving object and the working area is not limited, and a person skilled in the art may select different implementation manners according to specific design requirements and application scenarios, for example: when a moving object in image data and a working area where the moving object is located are identified, profile information of all objects in the image data can be obtained first, and the profile information of all the objects is analyzed and compared with preset standard profile information, wherein the standard profile information is profile information which is stored in advance and corresponds to a worker, and it can be understood that the standard profile information can be one or more; determining an object corresponding to the contour information matched with the at least one piece of standard contour information as a moving object; and then, acquiring time information of all areas of the moving object in the image data, and determining the area of which the time information is greater than or equal to a preset time threshold value as a working area of the moving object. Of course, those skilled in the art may also use other manners to identify the moving object and the working area, as long as the accuracy of obtaining the moving object and the working area can be ensured, which is not described herein again.
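The dwell-time part of the identification above (the area where the moving object stays at least a preset time threshold becomes the working area) can be sketched as a grid accumulator. The grid discretisation and all names here are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def estimate_work_area(cell_positions, grid_hw, seconds_per_frame, dwell_threshold):
    """Accumulate how long the matched moving object dwells in each
    cell of a coarse grid laid over the frame; cells whose accumulated
    time reaches the threshold are taken as the working area."""
    dwell = np.zeros(grid_hw, dtype=float)
    for row, col in cell_positions:
        dwell[row, col] += seconds_per_frame
    return dwell >= dwell_threshold
```

Feeding in one (row, col) cell per frame of the tracked worker produces a boolean mask of the working area.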
S103: and performing piece counting operation on the clothes subjected to quality inspection according to the moving object and the working area.
After the moving object and the working area are obtained, they can be analyzed, and whether to perform the piece counting operation on the garment is decided according to the analysis result. Specifically, referring to fig. 9, performing the piece counting operation on the quality-inspected garments according to the moving object and the working area may include:
s1031: it is detected whether a moving object is located within the working area.
One way to detect whether the moving object is located within the working area is by position: obtain the current position information of the moving object and judge whether that position lies within the working area. If it does, the moving object is determined to be within the working area; if it does not, the moving object is determined not to be within the working area.
Another way is by motion amplitude: obtain the amplitude of the moving object's motion and judge whether it extends beyond the working area. If it does not, the moving object is determined to be within the working area; if it does, the moving object is determined not to be within the working area.
Of course, those skilled in the art may also use other methods to detect whether the moving object is located in the working area according to the specific application scenario and design requirements, as long as the accuracy and reliability of the detection can be ensured, which is not described herein again.
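The position-based check of S1031 can be sketched in a few lines, assuming for illustration that the working area is a rectangular bounding box (the patent does not mandate a particular shape):

```python
def in_work_area(position, area):
    """Return True when the moving object's (x, y) position lies inside
    the working area, given as a bounding box
    (x_min, y_min, x_max, y_max)."""
    x, y = position
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max
```

A piece counting event would then be triggered on the transition from inside to outside this area, per S1032 below.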
S1032: and if the moving object is not in the working area, performing piece counting operation on the clothes subjected to quality inspection.
That is, by performing the piece counting operation on the quality-inspected garments, the number of garments that have completed the quality inspection operation can be obtained.
It is understood that the method in this embodiment may further include: and if the moving object is in the working area, the counting operation is not executed.
Further, after the moving object is not in the working area, the method in this embodiment may further include:
s1033: and if the moving object is positioned in the preset first area, counting the qualified clothes after quality inspection.
The preset first area is used for placing the clothes with qualified quality inspection results, and the first area can be located at a preset position of the working area, for example, the first area can be located on the left side or the right side of the working area; when the moving object is not in the working area any more and is located in the first area, the moving object is indicated to finish the quality inspection operation on the clothing, and the quality inspection result of the clothing is qualified, so that the moving object is performing the operation of placing the qualified clothing in the first area, and at the moment, the piece counting operation can be performed on the qualified clothing.
S1034: and if the moving object is positioned in the preset second area, counting the unqualified clothes after the quality inspection.
The preset second area is used for placing the clothes with unqualified quality inspection results, the second area can be located at the preset position of the working area, and the setting position of the second area is different from that of the first area; for example, while the first zone is located on the left side of the work area, the second zone may be located on the right side of the work area; when the first area is located at the right side of the working area, the second area may be located at the left side of the working area; when the moving object is not in the working area any more and is located in the second area, the moving object is indicated to finish the quality inspection operation on the clothes, and the quality inspection result of the clothes is unqualified, so that the moving object is performing the operation of placing the unqualified clothes in the second area, and at the moment, the piece counting operation can be performed on the unqualified clothes.
It should be noted that the piece counting operation in this embodiment may include three counts: (1) a count P of all garments subjected to the quality inspection operation; (2) a count P1 of garments that passed quality inspection; and (3) a count P2 of garments that failed quality inspection. It can be understood that, in general, P = P1 + P2.
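The three counts and the invariant P = P1 + P2 can be sketched as a small tally function. The event labels ('qualified' for the first area, 'unqualified' for the second) are illustrative assumptions:

```python
def tally_pieces(placements):
    """Given the sequence of placement events observed after the worker
    leaves the working area, return the three piece counts:
    P (total), P1 (passed quality inspection), P2 (failed)."""
    p1 = sum(1 for p in placements if p == "qualified")
    p2 = sum(1 for p in placements if p == "unqualified")
    return {"P": p1 + p2, "P1": p1, "P2": p2}
```

Since every placement is either qualified or unqualified, P = P1 + P2 holds by construction.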
When the detection result shows that the moving object is located in the working area, the moving object performs quality inspection operation on the clothes at the moment, so that piece counting operation on the clothes is not performed.
Optionally, the method in this embodiment may further include:
s104: and storing at least one frame of image data of the quality inspection operation of the moving object on the clothes.
When counting the quality-inspected garments, a frame of image data can be stored for each counted garment, so that the user can view or retrieve the records associated with each counted piece at any time.
It is understood that the method in this embodiment may further include:
s105: and storing the image data of at least one frame of the qualified clothes placed on the moving object.
S106: and storing at least one frame of image data of unqualified clothes placed by the moving object.
Similarly, when the quality-inspected garments are placed into their areas, garments with different quality inspection results can be counted separately, and the image data corresponding to each counted garment can be stored, so that the user can view or retrieve the piece counting records at any time.
According to the piece counting method for the clothes, by acquiring at least one frame of image data for performing quality inspection operation on the clothes, the moving object in the image data and the working area where the moving object is located are identified, and the piece counting operation is performed on the clothes subjected to quality inspection according to the moving object and the working area, piece counting of the clothes subjected to quality inspection operation is effectively guaranteed, production management cost for piece counting of the clothes is reduced, efficiency and accuracy of piece counting of the clothes are guaranteed, production management of factories is facilitated, and management efficiency of factories is improved; meanwhile, a user can acquire production process data at any time and know the order production progress, and efficient production-marketing cooperation is finally realized.
FIG. 2 is a flow chart of identifying a moving object in image data according to an embodiment of the present invention; FIG. 3 is a flowchart of identifying the moving object in each frame of image data according to at least one frame of image data and the background model image according to an embodiment of the present invention; fig. 4 is a flowchart of determining the moving object in the image data according to the first pixel value and the second pixel value according to an embodiment of the present invention. On the basis of the foregoing embodiment, and with reference to figs. 2 to 4, this embodiment does not limit the specific implementation of identifying the moving object in the image data; a person skilled in the art may design it according to specific requirements. Preferably, identifying the moving object in the image data may include:
s1021: and establishing a background model image based on at least one frame of image data.
A background model image can be established from the acquired image data using a Gaussian mixture background modeling method; the background model image captures the working environment of the moving object. Specifically, one background model image may be established based on all of the image data; alternatively, several background model images may be established, each based on a different portion of the at least one frame of image data. Preferably, a single background model image is established based on all of the image data.
In addition, the created background model image has the same size as the image data. For example, for 1000 frames of image data, a single background model image of the same size may be established from all 1000 frames; alternatively, a first background model image and a second background model image may be established from different portions of the 1000 frames, each having the same size as its corresponding image data.
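The background-model step can be sketched as follows. The patent specifies Gaussian-mixture background modeling (OpenCV's `cv2.createBackgroundSubtractorMOG2` is a common implementation); the per-pixel temporal median used here is a simpler stand-in that likewise yields a background image of the same size as each input frame:

```python
import numpy as np

def build_background_model(frames):
    """Estimate the static background as the per-pixel temporal median
    over a stack of equally sized frames; transient foreground pixels
    (the moving object) are rejected by the median."""
    stack = np.stack([f.astype(float) for f in frames])
    return np.median(stack, axis=0)
```

A pixel briefly occupied by the worker in a few frames still recovers its background value, because the median ignores the minority of outlier frames.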
S1022: and identifying the moving object in each frame of image data according to at least one frame of image data and the background model image.
After the background model image is created, each frame of image data may be analyzed and compared with the background model image to identify the moving object in each frame of image data. Specifically, identifying the moving object in each frame of image data according to at least one frame of image data and the background model image may include:
s10221: and acquiring a first pixel value of each pixel point in at least one frame of image data and a second pixel value of the same pixel point in the background model image.
Since the background model image and the image data have the same size, for each pixel point in the image data, a corresponding pixel point exists in the background model image. In order to identify the moving object, a first pixel value of each pixel point in the image data may be obtained, and a second pixel value of a corresponding pixel point in the background model image may be obtained.
S10222: a moving object in the image data is determined based on the first pixel value and the second pixel value.
After the first pixel value and the second pixel value are acquired, the first pixel value and the second pixel value may be subjected to analysis processing, so that a moving object in the image data is determined according to the analysis processing result. Specifically, determining the moving object in the image data according to the first pixel value and the second pixel value may include:
s102221: and acquiring a difference value between the first pixel value and the second pixel value.
The difference value is a difference degree between the first pixel value and the second pixel value, specifically, the difference value may be a difference value between the first pixel value and the second pixel value, or the difference value may also be a ratio of the first pixel value to the second pixel value; of course, those skilled in the art may also adopt other manners to represent the difference between the first pixel value and the second pixel value, which is not described herein again.
S102222: and searching all pixel points with the difference value larger than or equal to a preset pixel threshold value in the image data, wherein all the pixel points form a moving object in the image data.
Each frame of image data includes a dynamic region and a static region. The dynamic region is the region where pixel values change between frames, i.e., it is formed by dynamic pixels; the static region is the region where pixel values essentially do not change, i.e., it is formed by static pixels. The region where the moving object is located is therefore the dynamic region of the image data. When the difference value between a first pixel value and its second pixel value is greater than or equal to the pixel threshold, the pixel in the image data differs substantially from the corresponding pixel in the background model image and can be determined to be a dynamic pixel. All dynamic pixels in the image data can thus be found, and together they constitute the moving object in the image data.
In addition, the pixel threshold in this embodiment is preset, and a person skilled in the art may determine the specific numerical range of the pixel threshold according to specific design requirements and application scenarios. It can be understood that different kinds of difference values may correspond to different pixel thresholds. Example one: when the difference value is the difference between the first pixel value and the second pixel value, the corresponding pixel threshold may be TH1, and the moving object in the image data may be determined according to the following formula:
ForeGround(i, j) = 255, if |Current(i, j) − Background(i, j)| ≥ TH1; ForeGround(i, j) = 0, otherwise; where Current(i, j) is the first pixel value of the pixel point (i, j) in the current image data, Background(i, j) is the second pixel value of the corresponding pixel point in the background model image, and ForeGround(i, j) = 255 indicates that the pixel point (i, j) is a dynamic pixel point.
Example two: when the difference value is a ratio of the first pixel value to the second pixel value, the corresponding pixel threshold may be TH2, and then the moving object in the image data may be determined according to the following formula:
ForeGround(i, j) = 255, if Current(i, j)/Background(i, j) ≥ TH2; ForeGround(i, j) = 0, otherwise; where Current(i, j) is the first pixel value of the pixel point (i, j) in the current image data, Background(i, j) is the second pixel value of the corresponding pixel point in the background model image, and ForeGround(i, j) = 255 indicates that the pixel point (i, j) is a dynamic pixel point.
By identifying the moving object in the image data in the above way, the accuracy and reliability of identifying the moving object in each frame of image data are effectively ensured, and the accuracy degree of the piece counting operation on the clothes is ensured.
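As an illustrative sketch (not part of the patent), the per-pixel comparison between a frame and the background model can be written in a few lines of NumPy; the function name and the threshold value used here are assumptions chosen for the example:

```python
import numpy as np

def extract_moving_object(frame: np.ndarray, background: np.ndarray,
                          threshold: int = 30) -> np.ndarray:
    """Mark pixels whose difference from the background model reaches the
    pixel threshold as dynamic (255); all other pixels are static (0)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff >= threshold, 255, 0).astype(np.uint8)

# Toy 2x2 frame: only pixel (0, 0) differs strongly from the background.
frame = np.array([[200, 52], [50, 51]], dtype=np.uint8)
background = np.full((2, 2), 50, dtype=np.uint8)
mask = extract_moving_object(frame, background)
```

The dynamic pixels (value 255) of the returned mask together form the moving object of that frame.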
Fig. 5 is a flowchart of identifying a working area in which a moving object is located in image data according to an embodiment of the present invention; fig. 6 is a flowchart of establishing a statistical matrix for representing the motion change frequency of a moving object according to an embodiment of the present invention; fig. 7 is a flowchart of determining the working area where the moving object is located according to the statistical matrix according to an embodiment of the present invention. On the basis of the foregoing embodiments, with reference to fig. 5 to 7, the specific implementation process for identifying the working area where the moving object is located in the image data is not limited in this embodiment, and a person skilled in the art may set it according to specific design requirements. Preferably, identifying the working area where the moving object is located in the image data in this embodiment may include:
S1023: and establishing a statistical matrix for reflecting the motion change frequency of the moving object, wherein the size of the statistical matrix is the same as that of the image data.
As shown in fig. 6, the establishing of the statistical matrix for representing the motion change frequency of the moving object may include:
S10231: obtaining a statistical value corresponding to each pixel point in at least one frame of image data.
S10232: and establishing a statistical matrix based on the statistical values.
The statistical value can be determined based on the motion characteristic of each pixel point in each frame of image data. Since each frame of image data includes a dynamic region and a static region, the pixel points in the dynamic region are dynamic pixel points, and the pixel points in the static region are static pixel points; the motion characteristic of a pixel point is whether it is a dynamic pixel point or a static pixel point. When a pixel point is a dynamic pixel point, it can correspond to one preset statistical value; when it is a static pixel point, it can correspond to another preset statistical value. After the statistical values are obtained, a statistical matrix may be established based on them, and the size of the established statistical matrix is the same as the size of the image data.
For example, for 500 frames of image data, each frame having a size of 320 × 240 pixels, an initial statistical matrix of size 320 × 240 may first be established, in which every element is 0, that is, the preset initial statistical value of each element is 0. When the motion characteristic of a pixel point A in the first frame of image data is identified and pixel point A is determined to be a dynamic pixel point, the element corresponding to pixel point A in the initial statistical matrix is increased by 1, so that the statistical value corresponding to pixel point A becomes 1. When the motion characteristic of pixel point A in the second frame of image data is identified and pixel point A is found to be a static pixel point, the element corresponding to pixel point A remains unchanged, so that, building on the first frame, the statistical value corresponding to pixel point A is still 1. Likewise, when pixel point B is found to be a dynamic pixel point in the first frame of image data, the element corresponding to pixel point B in the initial statistical matrix is increased by 1, giving a statistical value of 1; when pixel point B is found to be a dynamic pixel point again in the second frame of image data, the statistical value corresponding to pixel point B becomes 2. Specifically, the statistical values in the statistical matrix satisfy the following relational expression:
MotionMat(i, j) = MotionMat(i, j) + 1, when ForeGround(i, j) = 255; MotionMat(i, j) remains unchanged, when ForeGround(i, j) = 0; where MotionMat(i, j) is the statistical value of the pixel point (i, j) in the statistical matrix, and ForeGround(i, j) is the motion characteristic of the pixel point (i, j) in the image data: when ForeGround(i, j) is 0, the pixel point is a static pixel point, and when ForeGround(i, j) is 255, the pixel point is a dynamic pixel point.
After analyzing and processing the pixel points in all the image data according to the relational expression, the statistical values corresponding to the pixel points in all the image data can be obtained, and the statistical matrix can be established based on the statistical values.
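A minimal sketch of this accumulation, assuming per-frame foreground masks with values 0/255 as described above (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def accumulate_motion(foreground_masks):
    """Build a statistical matrix of the same size as the image data: each
    element counts in how many frames the pixel point was dynamic (255)."""
    motion_mat = np.zeros(foreground_masks[0].shape, dtype=np.int64)
    for mask in foreground_masks:
        motion_mat += (mask == 255)  # add 1 where the pixel is dynamic
    return motion_mat

# Pixel (0, 0) is dynamic in one frame; pixel (0, 1) in both frames.
fg1 = np.array([[255, 255]], dtype=np.uint8)
fg2 = np.array([[0, 255]], dtype=np.uint8)
stats = accumulate_motion([fg1, fg2])
```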
S1024: and determining the working area where the moving object is located according to the statistical matrix.
After the statistical matrix is obtained, the working area may be determined by using the statistical matrix, specifically, determining the working area where the moving object is located according to the statistical matrix may include:
S10241: and carrying out normalization processing on the statistical matrix to obtain a pixel gray value corresponding to each statistical value in the statistical matrix.
S10242: and when the pixel gray value is greater than or equal to the preset gray threshold value, determining the pixel area corresponding to the pixel gray value as a working area where the moving object is located.
The gray threshold is a preset limit value; this embodiment does not limit its specific numerical range, and those skilled in the art can set it according to specific design requirements, for example, 20, 30 or 40. After the statistical matrix is normalized, it can be displayed as an image in which the gray value of each pixel point ranges from 0 to 255. When the gray value of a pixel is greater than or equal to the gray threshold, the pixel area corresponding to that gray value changes frequently, and the pixel area can therefore be determined as the working area where the moving object is located.
The working area where the moving object is located is thus identified through the statistical matrix. Specifically, the statistical matrix reveals which pixel areas of the image data change frequently and which remain essentially unchanged, and the range of the working area in which the moving object performs normal quality inspection can be estimated from this motion change frequency: the area with high motion change frequency is the working area of the quality inspection operation of the moving object. This effectively ensures the accuracy and reliability of the identification of the working area, and further improves the accuracy of the piece counting method.
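The normalization and thresholding steps can be sketched as follows (the gray threshold of 30 and the function name are assumptions made for illustration):

```python
import numpy as np

def work_area_mask(motion_mat, gray_threshold=30):
    """Normalize the statistical matrix to gray values in 0-255, then keep
    the pixel areas whose gray value reaches the gray threshold as the
    working area where the moving object is located."""
    peak = motion_mat.max()
    if peak == 0:
        return np.zeros(motion_mat.shape, dtype=bool)  # no motion observed
    gray = (motion_mat * 255.0 / peak).astype(np.uint8)
    return gray >= gray_threshold

stats = np.array([[100, 3], [0, 50]])
mask = work_area_mask(stats, gray_threshold=30)
```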
In a specific application, the values of the statistical matrix cannot be allowed to grow without limit; if the elements of the statistical matrix grow too large, the accuracy of processing the image data is affected. Moreover, when the moving object performs quality inspection on the clothes, the working area where the moving object is located is not constant and may change at any time as the moving object changes. Therefore, in order to ensure the accuracy of identifying the working area, the statistical matrix can be updated periodically. Specifically, the method in this embodiment may further include:
S201: and updating the statistical matrix.
As shown in fig. 8, the updating the statistical matrix may include:
S2011: and acquiring a preset update coefficient, wherein the update coefficient is a positive number smaller than 1.
The update coefficient is preset; this embodiment does not limit its specific numerical range, and a person skilled in the art can set it according to specific application requirements, as long as it is a positive number smaller than 1, for example, 0.1, 0.2, 0.3, 0.5 or 0.8. For convenience of explanation, the following description takes an update coefficient of 0.5 as an example.
S2012: and multiplying all the statistical values in the statistical matrix by the update coefficient to obtain updated values.
S2013: an updated statistical matrix is obtained based on the updated values.
After the update coefficient is obtained, the statistical values in the statistical matrix may be scaled by the update coefficient to reduce their magnitude, that is, MotionMat_after(i, j) = 0.5 × MotionMat_before(i, j); where MotionMat_after(i, j) is each statistical value after adjustment, MotionMat_before(i, j) is the corresponding statistical value before adjustment, and 0.5 is the update coefficient.
The statistical matrix can be updated according to a preset period or after every preset fixed number of frames, so that all values of the statistical matrix are scaled by the preset update coefficient. The updated statistical matrix can then be used to identify the working area, adaptively estimating the working area of the moving object and effectively ensuring the accuracy and reliability of the determination of the working area.
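The periodic update amounts to scaling every statistical value by the update coefficient; a sketch with an assumed coefficient of 0.5:

```python
import numpy as np

def decay_statistics(motion_mat, update_coeff=0.5):
    """Scale every statistical value by a coefficient in (0, 1) so that the
    values cannot grow without limit and the working area can adapt."""
    assert 0 < update_coeff < 1, "update coefficient must be a positive number < 1"
    return motion_mat * update_coeff

stats = np.array([[8.0, 4.0]])
updated = decay_statistics(stats, update_coeff=0.5)
```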
In a specific application, a camera can be installed at a preset position in a factory to acquire, in real time, image data of a moving object performing quality inspection on clothes. After the image data is acquired, it can be filtered and denoised by Gaussian filtering; then a background model image of the working environment of the moving object is established by Gaussian mixture background modeling, and the moving object in the image data is determined according to the established background model image. Then, according to the motion change frequency of the moving object, the region with higher motion change frequency is identified as the normal working region for quality inspection of the clothes. Because the action frequency of putting a garment down after its quality inspection is obviously lower than the action frequency of the moving object repeatedly checking the garment during inspection, the quality inspection state of the moving object can be judged based on this regularity; after the quality inspection of a garment is completed, the piece counting operation can be performed on it, and the image data of the quality inspection operation is stored.
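Putting the pieces together, the overall flow can be sketched end to end. The patent uses Gaussian mixture background modeling; the sketch below substitutes a simple running-average background purely for illustration, and all names and parameter values are assumptions:

```python
import numpy as np

def process_stream(frames, alpha=0.05, pixel_threshold=30):
    """Maintain a background model, extract the dynamic pixels of each
    frame, and accumulate the motion change frequency of every pixel."""
    background = frames[0].astype(np.float64)
    motion_mat = np.zeros(frames[0].shape, dtype=np.int64)
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.float64) - background)
        motion_mat += (diff >= pixel_threshold)  # count dynamic pixels
        background = (1 - alpha) * background + alpha * frame  # slow update
    return motion_mat

frames = [np.full((2, 2), 50, dtype=np.uint8) for _ in range(3)]
frames[1][0, 0] = 200  # pixel (0, 0) moves in the second frame only
motion = process_stream(frames)
```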
The method provided by this embodiment of the application can effectively reduce the cost and difficulty of the digital transformation of a factory, and features lightweight deployment and strong reproducibility. Without changing the original working mode of the workers, it can acquire the real-time progress of clothes processing in the factory and synchronize it to producers, platforms and consumers, thereby achieving efficient production-marketing coordination and helping to accurately match, optimize and improve the working conditions of the workers.
FIG. 10 is a schematic view of a piece counting device for a garment according to an embodiment of the present invention; referring to fig. 10, the present embodiment provides a piece counting device for a garment, which can perform the piece counting method for the garment, and specifically, the piece counting device can include:
the acquisition module 11 is configured to acquire at least one frame of image data for performing quality inspection on the garment;
the identification module 12 is configured to identify a moving object in the image data and a working area where the moving object is located;
and the piece counting module 13 is used for carrying out piece counting operation on the clothes after quality inspection according to the moving object and the working area.
Wherein, when the recognition module 12 recognizes a moving object in the image data, the recognition module 12 may be configured to perform: establishing a background model image based on at least one frame of image data; and identifying the moving object in each frame of image data according to at least one frame of image data and the background model image.
Alternatively, when the recognition module 12 recognizes a moving object in each frame of image data according to at least one frame of image data and the background model image, the recognition module 12 may be configured to perform: acquiring a first pixel value of each pixel point in at least one frame of image data and a second pixel value of the same pixel point in a background model image; a moving object in the image data is determined based on the first pixel value and the second pixel value.
Alternatively, when the recognition module 12 determines the moving object in the image data according to the first pixel value and the second pixel value, the recognition module 12 may be configured to perform: acquiring a difference value between the first pixel value and the second pixel value; and searching all pixel points with the difference value larger than or equal to a preset pixel threshold value in the image data, wherein all the pixel points form a moving object in the image data.
Alternatively, when the recognition module 12 recognizes a working area in which a moving object is located in the image data, the recognition module 12 may be configured to perform: establishing a statistical matrix for reflecting the action change frequency of the moving object, wherein the size of the statistical matrix is the same as that of the image data; and determining the working area where the moving object is located according to the statistical matrix.
Wherein, when the identification module 12 establishes the statistical matrix for representing the motion change frequency of the moving object, the identification module 12 may be configured to perform: obtaining a statistic corresponding to each pixel point in at least one frame of image data; and establishing a statistical matrix based on the statistical values.
In addition, when the identification module 12 determines the working area where the moving object is located according to the statistical matrix, the identification module 12 may be configured to perform: carrying out normalization processing on the statistical matrix to obtain a pixel gray value corresponding to each statistical value in the statistical matrix; and when the pixel gray value is greater than or equal to the preset gray threshold value, determining the pixel area corresponding to the pixel gray value as a working area where the moving object is located.
Optionally, the identification module 12 in this embodiment is further configured to perform: and updating the statistical matrix.
Specifically, when the identification module 12 updates the statistical matrix, the identification module 12 may be configured to perform: acquiring a preset updating coefficient, wherein the updating coefficient is a positive number smaller than 1; multiplying all the statistical values in the statistical matrix with the updating coefficient respectively to obtain updated values; an updated statistical matrix is obtained based on the updated values.
Optionally, when the piece counting module 13 performs the piece counting operation on the quality-checked clothes according to the moving object and the working area, the piece counting module 13 may be configured to perform: detecting whether a moving object is located within a work area; and if the moving object is not in the working area, performing piece counting operation on the clothes subjected to quality inspection.
Optionally, after the moving object is not in the working area, the piece counting module 13 in this embodiment may be further configured to perform: if the moving object is located in the preset first area, performing piece counting operation on qualified clothes after quality inspection; or if the moving object is located in the preset second area, performing piece counting operation on unqualified clothes after quality inspection.
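The region test described above can be sketched as follows; the rectangular regions and the use of a centroid to locate the moving object are assumptions made for the example:

```python
def count_piece(centroid, work_area, qualified_area, unqualified_area, counters):
    """If the moving object has left the working area, count the garment as
    qualified or unqualified depending on which preset region it entered.
    Regions are axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    def inside(point, box):
        x, y = point
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    if inside(centroid, work_area):
        return counters  # still within the working area: no count yet
    if inside(centroid, qualified_area):
        counters["qualified"] += 1
    elif inside(centroid, unqualified_area):
        counters["unqualified"] += 1
    return counters

counters = {"qualified": 0, "unqualified": 0}
# Working area on the left; qualified bin top-right, unqualified bottom-right.
counters = count_piece((90, 10), (0, 0, 50, 50), (60, 0, 100, 20),
                       (60, 30, 100, 50), counters)
```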
Optionally, the obtaining module 11 in this embodiment is further configured to, after obtaining at least one frame of image data for performing quality inspection on the garment, adjust the resolution of the at least one frame of image data so that the resolution of the image data meets a preset standard.
Optionally, the obtaining module 11 in this embodiment is further configured to perform filtering and denoising processing on at least one frame of image data before identifying a moving object in the image data and a working area where the moving object is located.
Optionally, the piece counting module 13 in this embodiment is further configured to perform: and storing at least one frame of image data of the quality inspection operation of the moving object on the clothes.
The apparatus shown in fig. 10 can perform the method of the embodiment shown in fig. 1-9, and the detailed description of this embodiment can refer to the related description of the embodiment shown in fig. 1-9. The implementation process and technical effect of the technical solution refer to the descriptions in the embodiments shown in fig. 1 to 9, and are not described herein again.
In one possible design, the piece counting device structure of the garment shown in fig. 10 may be implemented as an electronic device, which may be a mobile phone, a tablet computer, a server, or other devices. As shown in fig. 11, the electronic device may include: a processor 21 and a memory 22. Wherein the memory 22 is used for storing a program for supporting the electronic device to execute the piece counting method of the garment provided in the above-mentioned embodiments shown in fig. 1-9, and the processor 21 is configured for executing the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of performing the steps of:
acquiring at least one frame of image data for performing quality inspection operation on the clothes;
identifying a moving object in the image data and a working area where the moving object is located;
and performing piece counting operation on the clothes subjected to quality inspection according to the moving object and the working area.
Optionally, the processor 21 is further configured to perform all or part of the steps in the embodiments of fig. 1-9 described above.
The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
In addition, the embodiment of the present invention provides a computer storage medium for storing computer software instructions for an electronic device, which includes a program for executing the piece counting method for the garment according to the method embodiments shown in fig. 1 to 9.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding, the portions of the above technical solutions that in essence contribute beyond the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein, including, without limitation, disk storage, CD-ROM and optical storage.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (16)
1. A piece counting method for clothes, comprising:
acquiring at least one frame of image data for performing quality inspection operation on the clothes;
identifying a moving object in the image data and a working area where the moving object is located;
and counting the piece of the clothes after quality inspection according to the moving object and the working area.
2. The method of claim 1, wherein identifying moving objects in the image data comprises:
establishing a background model image based on at least one frame of the image data;
and identifying a moving object in each frame of the image data according to at least one frame of the image data and the background model image.
3. The method of claim 2, wherein identifying a moving object in each frame of the image data from at least one frame of the image data and the background model image comprises:
acquiring a first pixel value of each pixel point in at least one frame of image data and a second pixel value of the same pixel point in the background model image;
determining a moving object in the image data from the first pixel value and the second pixel value.
4. The method of claim 3, wherein determining a moving object in the image data from the first pixel value and the second pixel value comprises:
acquiring a difference value between the first pixel value and the second pixel value;
and searching all pixel points of which the difference values are greater than or equal to a preset pixel threshold value in the image data, wherein all the pixel points form a moving object in the image data.
5. The method of claim 1, wherein identifying a work area in the image data in which a moving object is located comprises:
establishing a statistical matrix for reflecting the motion change frequency of the moving object, wherein the size of the statistical matrix is the same as that of the image data;
and determining the working area where the moving object is located according to the statistical matrix.
6. The method of claim 5, wherein establishing a statistical matrix for embodying a frequency of motion change of the moving object comprises:
obtaining a statistic corresponding to each pixel point in at least one frame of the image data;
and establishing a statistical matrix based on the statistical values.
7. The method of claim 5, wherein determining the working area in which the moving object is located according to the statistical matrix comprises:
carrying out normalization processing on the statistical matrix to obtain a pixel gray value corresponding to each statistical value in the statistical matrix;
and when the pixel gray value is greater than or equal to a preset gray threshold value, determining a pixel area corresponding to the pixel gray value as a working area where the moving object is located.
8. The method of claim 5, further comprising:
and updating the statistical matrix.
9. The method of claim 8, wherein updating the statistical matrix comprises:
acquiring a preset updating coefficient, wherein the updating coefficient is a positive number smaller than 1;
multiplying all the statistical values included in the statistical matrix with the updating coefficient respectively to obtain updated values;
an updated statistical matrix is obtained based on the updated values.
10. The method according to any one of claims 1 to 9, wherein performing a piece-counting operation on the quality-inspected garment according to the moving object and the working area comprises:
detecting whether the moving object is located within the working area;
and if the moving object is not in the working area, counting the piece of the clothes after the quality inspection.
11. The method of claim 10, wherein after the moving object is not within the working area, the method further comprises:
if the moving object is located in a preset first area, performing piece counting operation on qualified clothes after quality inspection; or,
and if the moving object is located in a preset second area, counting the unqualified clothes after the quality inspection.
12. The method of any one of claims 1-9, wherein after acquiring at least one frame of image data for quality inspection of the garment, the method further comprises:
and adjusting the resolution of at least one frame of the image data so that the resolution of the image data meets a preset standard.
13. The method of any one of claims 1-9, wherein prior to identifying the moving object and the work area in which the moving object is located in the image data, the method further comprises:
and carrying out filtering and denoising processing on at least one frame of image data.
14. The method according to any one of claims 1-9, further comprising:
and storing at least one frame of image data of the quality inspection operation of the moving object on the clothes.
15. A piece counting device for clothes, comprising:
the acquisition module is used for acquiring at least one frame of image data for performing quality inspection on the clothes;
the identification module is used for identifying the moving object in the image data and the working area where the moving object is located;
and the piece counting module is used for carrying out piece counting operation on the clothes subjected to quality inspection according to the moving object and the working area.
16. An electronic device, comprising: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the piece counting method for clothes according to any one of claims 1 to 14.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910063798.7A CN111476336B (en) | 2019-01-23 | 2019-01-23 | Method, device and equipment for counting clothes |
| TW108142548A TW202030641A (en) | 2019-01-23 | 2019-11-22 | Method, apparatus and device for counting clothing by number of pieces |
| PCT/CN2020/071926 WO2020151530A1 (en) | 2019-01-23 | 2020-01-14 | Method, apparatus and device for counting clothing by number of pieces |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910063798.7A CN111476336B (en) | 2019-01-23 | 2019-01-23 | Method, device and equipment for counting clothes |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111476336A true CN111476336A (en) | 2020-07-31 |
| CN111476336B CN111476336B (en) | 2023-06-20 |
Family
ID=71736310
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910063798.7A Active CN111476336B (en) | 2019-01-23 | 2019-01-23 | Method, device and equipment for counting clothes |
Country Status (3)
| Country | Link |
|---|---|
| CN (1) | CN111476336B (en) |
| TW (1) | TW202030641A (en) |
| WO (1) | WO2020151530A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112991262B (en) * | 2021-02-05 | 2025-02-07 | 宁波方太厨具有限公司 | Method, system, electronic device and storage medium for detecting oil fume concentration |
| CN114640753B (en) * | 2022-04-01 | 2023-10-27 | 北京市疾病预防控制中心 | Nematode pharyngeal pump movement frequency automatic identification method based on experimental video processing |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101325690A (en) * | 2007-06-12 | 2008-12-17 | Shanghai Zhengdian Technology Development Co., Ltd. | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
| CN102609689A (en) * | 2012-02-03 | 2012-07-25 | Jiangsu Kehai Intelligent Systems Co., Ltd. | Video driveway background modeling method based on multi-frame counting |
| CN102930279A (en) * | 2012-09-29 | 2013-02-13 | Guangxi University of Technology | Image identification method for detecting product quantity |
| WO2013075295A1 (en) * | 2011-11-23 | 2013-05-30 | Zhejiang Chenying Technology Co., Ltd. | Clothing identification method and system for low-resolution video |
| JP2014157524A (en) * | 2013-02-18 | 2014-08-28 | Nippon Telegr & Teleph Corp <Ntt> | Image display system, server device, information processing terminal and control method |
| CN104616290A (en) * | 2015-01-14 | 2015-05-13 | Hefei University of Technology | Target detection algorithm combining a statistical matrix model and an adaptive threshold |
| CN104899557A (en) * | 2015-05-25 | 2015-09-09 | Zhejiang University of Technology | Intersection background image extraction method based on video |
| US20160367212A1 (en) * | 2015-06-18 | 2016-12-22 | Toshiba Medical Systems Corporation | Method and apparatus for iteratively reconstructing tomographic images from electrocardiographic-gated projection data |
| CN108345842A (en) * | 2018-01-24 | 2018-07-31 | Chengdu Dingzhihui Technology Co., Ltd. | A processing method based on big data |
| CN108453731A (en) * | 2017-02-17 | 2018-08-28 | Fanuc Corporation | Robot system |
2019
- 2019-01-23: CN application CN201910063798.7A, granted as CN111476336B (status: active)
- 2019-11-22: TW application TW108142548A, published as TW202030641A (status: unknown)

2020
- 2020-01-14: WO application PCT/CN2020/071926, published as WO2020151530A1 (status: ceased)
Non-Patent Citations (3)
| Title |
|---|
| Wei Zifu; Bi Duyan; Zhang Ming; He Linyuan: "Segmentation of Multiple Moving Objects Based on Background Reconstruction and Level Sets" * |
| Peng Changsheng; Zhan Zhicai; Zhang Songsong; Cheng Bichun: "A Lane Background Modeling Method Based on Multi-Frame Statistics" * |
| Wang Xiuyan; Cheng Tingting: "Research on Target Recognition for Industrial Robots Based on Monocular Vision" * |
Also Published As
| Publication number | Publication date |
|---|---|
| TW202030641A (en) | 2020-08-16 |
| CN111476336B (en) | 2023-06-20 |
| WO2020151530A1 (en) | 2020-07-30 |
Similar Documents
| Publication | Title |
|---|---|
| US10929644B2 (en) | Face detection training method and apparatus, and electronic device |
| CN106557778B (en) | General object detection method and device, data processing device and terminal equipment |
| CN110135514B (en) | A workpiece classification method, device, equipment and medium |
| CN111461101A (en) | Method, device and equipment for identifying work clothes mark and storage medium |
| CN110909712A (en) | Moving object detection method and device, electronic equipment and storage medium |
| CN107991309A (en) | Product quality detection method, device and electronic equipment |
| CN118503889B (en) | Financial risk prevention and control large model data processing method and electronic equipment |
| CN119399199B (en) | Tobacco leaf looseness detection method and system based on YOLOv8 |
| CN113643260A (en) | Method, apparatus, device, medium and product for detecting image quality |
| CN112633037A (en) | Object monitoring method and device, storage medium and electronic equipment |
| CN118230385A (en) | Face recognition method and device, electronic equipment and storage medium |
| CN111476336B (en) | Method, device and equipment for counting clothes |
| CN119811055A (en) | A forest fire prevention method based on smoke recognition |
| CN116560794A (en) | Exception handling method and device for virtual machine, medium and computer equipment |
| US20210081821A1 (en) | Information processing device and information processing method |
| US10789477B2 (en) | Method and apparatus for real-time detection of a scene |
| CN119251197A (en) | Image detection method and device, electronic device and storage medium |
| CN113065454A (en) | High-altitude parabolic target identification and comparison method and device |
| CN118211873A (en) | Performance evaluation method, device, medium and product of intelligent cloud bin |
| CN118181426A (en) | Intelligent control system and method for plywood production and processing |
| CN120031911B (en) | A convenience store inventory behavior recognition method, storage medium and device |
| CN121061897B (en) | A robot control method and system based on multimodal fusion |
| CN110765303A (en) | Method and system for updating database |
| CN119967089A (en) | Telephone number identification method and device, electronic device and storage medium |
| CN121061897A (en) | A robot control method and system based on multimodal fusion |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |