GB2634765A - Systems, apparatus and methods of condition monitoring
Systems, apparatus and methods of condition monitoring
- Publication number
- GB2634765A (application GB2316024.5A / GB202316024A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- station
- stun
- aquatic animal
- aquatic
- fish
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K61/00—Culture of aquatic animals
- A01K61/90—Sorting, grading, counting or marking live aquatic animals, e.g. sex determination
- A01K61/95—Sorting, grading, counting or marking live aquatic animals, e.g. sex determination specially adapted for fish
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K61/00—Culture of aquatic animals
- A01K61/10—Culture of aquatic animals of fish
-
- A—HUMAN NECESSITIES
- A22—BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
- A22B—SLAUGHTERING
- A22B3/00—Slaughtering or stunning
- A22B3/08—Slaughtering or stunning for poultry or fish, e.g. slaughtering pliers, slaughtering shears
- A22B3/083—Stunning devices specially adapted for fish
-
- A—HUMAN NECESSITIES
- A22—BUTCHERING; MEAT TREATMENT; PROCESSING POULTRY OR FISH
- A22C—PROCESSING MEAT, POULTRY, OR FISH
- A22C25/00—Processing fish ; Curing of fish; Stunning of fish by electric current; Investigating fish by optical means
Landscapes
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Zoology (AREA)
- Environmental Sciences (AREA)
- Marine Sciences & Fisheries (AREA)
- Animal Husbandry (AREA)
- Biodiversity & Conservation Biology (AREA)
- Food Science & Technology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Wood Science & Technology (AREA)
- Quality & Reliability (AREA)
- Farming Of Fish And Shellfish (AREA)
- Image Processing (AREA)
Abstract
The invention relates to systems, apparatus, and methods of condition monitoring of objects such as aquatic animals, e.g. fish, in a harvesting environment. There may be provided a system for monitoring aquatic animals (e.g. fish) at harvest in real time comprising an electric stun station; a dewatering station; optionally, a processing station, e.g. a post-stun processing station to elicit death, such as percussive stun and/or bleed, and/or ice treatment, and so on; and at least one imaging station. The at least one imaging station includes an image acquisition unit comprising at least one camera module for capturing images of aquatic animals post stun as these pass through the imaging station, and an image processing unit comprising at least one microprocessor and at least one storage medium comprising one or more programs. The one or more programs comprise instructions for: detecting at least one object in at least one image frame; determining at least one condition associated with the object in the at least one image frame, a first condition being that the object is determined to be an aquatic animal of interest, e.g. whether it is in a stunned state; and, in response to the determination of the at least one condition, taking an action associated with the harvesting of aquatic animals, e.g. adjusting the stun conditions at the stun station.
Description
Systems, Apparatus and Methods of Condition Monitoring Field of the Invention The invention relates to systems (e.g. apparatus) and methods of condition monitoring of objects such as aquatic animals (e.g. fish) in a harvesting environment.
Background
Biomass estimation of fish in real time in fish pens and cages remains challenging. Much focus has been placed on monitoring of live fish in such environments to assess live weight. Examples include the following.
FISHSCAN, EU FP7-SME Project No 262323 (2012), uses image recognition techniques to estimate the mass of fish in pens.
SHORTIS et al in 2013 in 'A review of techniques for the identification and measurement of fish in underwater stereo-video image sequences', SPIE 8791, Videometrics, Range Imaging, and Applications XII; and Automated Visual Inspection, 87910G (23 May 2013), doi: 10.1117/12.2020941, provide a wide-ranging review of the technology and techniques used.
Another example is P. C. Naval and L. T. David, 'FishDrop: Estimation of reef fish population density and biomass using stereo cameras', 2016 Techno-Ocean, Kobe, Japan, 2016, pp. 527-531, doi: 10.1109/Techno-Ocean.2016.7890710, a project which estimates biomass in the oceans.
WO2019/232247 SHANG AQUABYTE describes biomass estimation in an aquaculture environment using an immersed stereo camera system that captures stereo images of freely moving fish.
Other examples of art include GB2539495 PYNE CARTER and BEDDOW et al, 'Predicting Salmon Biomass remotely using a digital stereo imaging technique', Aquaculture 146 (1996) 189-203.
More recently, LI et al in 'Nonintrusive methods for biomass estimation in aquaculture with emphasis on fish: a review', Reviews in Aquaculture (2020) 12, 1390-1411, reviewed the various methods and technologies carried out over the years.
Use of clustering based on intensity or texture or colour has been applied in image recognition. Indeed, image recognition using clustering techniques in general is known, for example from 'A density-based algorithm for discovering clusters in large spatial databases with noise', ESTER et al, 1996, KDD-96 Proceedings, AAAI (www.aaai.org), known as DBSCAN (Density Based Spatial Clustering of Applications with Noise) clustering, in which, for each point of a cluster, the neighbourhood of a given radius has to contain at least a minimum number of points, i.e. the density in the neighbourhood has to exceed some threshold.
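By way of a non-limiting, illustrative sketch only (not taken from the cited paper), the density criterion described above can be exercised with the scikit-learn implementation of DBSCAN; the data and the eps/min_samples parameters below are arbitrary assumptions.

```python
# Illustrative sketch: DBSCAN keeps points whose eps-neighbourhood holds
# at least min_samples points (a density threshold); sparse points are noise.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal((0.0, 0.0), 0.1, size=(50, 2)),  # dense blob 1
    rng.normal((2.0, 2.0), 0.1, size=(50, 2)),  # dense blob 2
    rng.uniform(-1.0, 3.0, size=(10, 2)),       # sparse background noise
])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(points)
print("clusters:", len(set(labels) - {-1}))        # expect 2 dense clusters
print("noise points:", int((labels == -1).sum()))  # label -1 marks noise
```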
More recently, WO2022/129362 BRINGSDAL CREATEVIEW AS describes a system for monitoring of dead fish from the bottom of individual fish pens which uses an image detection system, a detection and tracking unit, a communication unit, a weight estimation unit, and a data uploading unit.
None of these documents address issues and challenges encountered at harvest. These include difficulties of counting fish at harvest, the difficulties of weighing fish at harvest, and the difficulties of managing the harvest process in general.
None of the known systems and/or methods have taken into account the unique challenges in engaging in observations at harvest. This lack of good and/or adequate observations at harvest may result in poor treatment of fish, inaccurate measurement of count and/or inaccurate measurement of biomass of harvested fish and so on. Knowing the condition of fish can improve the harvesting process and/or offer insights into the earlier fish farming process itself.
Challenges and problems remain in the art, some of which are outlined above and elsewhere in this document. The present invention seeks to alleviate or improve matters in respect of one or more of these challenges and/or problems.
Statements of the Invention
In a first aspect of the invention there is provided a system for monitoring aquatic animals (e.g. fish) at harvest (e.g. in real time) comprising (e.g. optionally in the order stun station, dewatering station and, where provided, a processing station): a stun station, preferably an electric stun station (e.g. comprising an inline electric stunner with one or more stun sections each comprising one or more electrodes); a dewatering station; optionally (e.g. preferably), a processing station (e.g. a post-stun processing station, for example, to elicit death, such as percussive stun and/or bleed, and/or ice treatment, or for other treatment e.g. for lice, and/or a packaging station and so on); at least one imaging station; and, at the at least one imaging station, an image acquisition unit comprising: at least one camera module (e.g. comprising at least one camera such as (e.g. 2D) (e.g. HD and/or RGB), 3D such as Time of Flight, stereo, lidar etc.) configured for capturing images of aquatic animals post stun (e.g. electric stun) as these pass through the imaging station; an image processing unit of, or operably coupled to, the image acquisition unit, the image processing unit comprising: at least one microprocessor; at least one storage medium comprising one or more programs, the one or more programs configured for execution on the at least one microprocessor; the one or more programs comprising instructions for: detecting at least one object in at least one image frame; determining at least one condition associated with the object in the at least one image frame; the at least one condition comprising a first condition of the object being determined to be an aquatic animal of interest; and, in response to the determination of the at least one condition, taking an action associated with the harvesting of aquatic animals (e.g. of the identified aquatic animal, and/or of the harvesting system and/or process itself).
In one or more embodiments, the stun station comprises an electric stun station. In one or 25 more embodiments, the stun station comprises an inline electric stun station. In one or more embodiments, the stun station comprises an inline wet electric stun station (e.g. one through which water flows).
In one or more embodiments, the stun station comprises a dry electric or percussive stun station (optionally with a bleed station). This may be used to increase efficacy of these types of stunning, e.g. by helping to adjust the stun station, and/or move fish automatically to a secondary stun area for additional stunning.
In a second aspect of the invention there is provided a method for monitoring aquatic animals (e.g. fish) at harvest (e.g. in real time) in a system comprising: a stun station (e.g. an electric stun station comprising an electric stunner, such as an inline (e.g. along a pipeline or channel) electric stunner with one or more stun sections each comprising one or more electrodes); a dewatering station; optionally (e.g. preferably), a processing station (e.g. a post-stun processing station to elicit death, such as percussive stun and/or bleed, and/or ice treatment, and/or a packaging station and so on); at least one imaging station; at the at least one imaging station, an image acquisition unit comprising: at least one camera module (e.g. comprising at least one camera such as (e.g. 2D) (e.g. HD and/or RGB), 3D such as Time of Flight, stereo, lidar etc.) configured for capturing images of aquatic animals post (e.g. electric) stun as these pass through the imaging station; an image processing unit of, or operably coupled to, the image acquisition unit, the image processing unit comprising: at least one microprocessor; and at least one storage medium; the method comprising: detecting at least one object in at least one image frame; determining at least one condition associated with the object in the at least one image frame; the at least one condition comprising a first condition of the object being determined to be an aquatic animal of interest; and, in response to the determination of the at least one condition, taking an action associated with the harvesting of aquatic animals (e.g. of the identified aquatic animal, and/or of the harvesting system and/or process itself).
In one or more embodiments, the action comprises increasing the count of aquatic animals of interest by one.
In one or more embodiments, the condition comprises a dimension of the aquatic animal being determined (e.g. being determined with sufficient accuracy); and the action comprises estimating a biomass of the aquatic animal.
In one or more embodiments, the condition comprises the aquatic animal being in a stunned (e.g. unconscious) state; and the action comprises storing an indication that the aquatic animal is in a stunned (e.g. unconscious) state.
In one or more embodiments, the condition comprises the aquatic animal being capable of movement (e.g. not in a fully stunned (e.g. unconscious) state); and the action comprises storing an indication that the aquatic animal is not in a stunned state.
In one or more embodiments, the action comprises adjusting the stun conditions of the stun station.
In one or more embodiments, where the stun station is an electric stun station, the action comprises adjusting the electric stun conditions of the electric stun station (e.g. adjusting the voltage and/or frequency (if varying voltages) and/or current at one or more electrodes).
The electric stun station may comprise a stun section and, following the stun section, a stun maintenance section. The stun maintenance section can be varied in length or number of electrodes or stun parameters to adjust the length of time the stun is maintained for.
The stun condition of the electric stun station can be varied in a number of ways, including increasing or decreasing the level of stun (e.g. voltage and/or frequency and/or current) in a stun section, or increasing or decreasing the level of stun (e.g. voltage and/or frequency and/or current) in a stun maintenance section, where provided.
Thus, by careful design of the harvesting system and/or by careful control of stun conditions, aquatic animals can be passed to a post-stun processing station for post-stun treatment, e.g. for treatment and/or to elicit death without recovering from the stun. The careful design typically comprises a time spent post-stun travelling to the processing station, which depends upon speed of travel of the aquatic animals, and length of post-stun line, e.g. the imaging station and the distance to the processing station, as well as that of the processing station itself.
In one or more embodiments, the imaging station is after the dewatering station and, where provided, before the processing station.
In one or more embodiments, the imaging station is after a or the processing station.
In one or more embodiments, the order of the stations may be varied as would be understood by someone skilled in the art e.g. the imaging station may be between percussive stun station and a bleed station.
In one or more embodiments where a wet e.g. inline electric, stun station is provided, the stun station is before the dewatering station, and the imaging station is provided after the dewatering station, e.g. so that imaging may take place in air.
In one or more embodiments where a dry stun station, and/or percussive stun station, is provided, a dewatering station is provided before the stun station, and the imaging station is provided after the stun station e.g. so that stunning and imaging may take place in air. In one or more embodiments, two imaging stations are provided, a first imaging station between the dewatering station and a or the processing station, and a second imaging station after the processing station. The first and second imaging stations may be the same (e.g. of identical construction) but may not be.
In one or more embodiments, the at least one camera module comprises a 2D camera (e.g. HD and/or RGB). In one or more embodiments, the at least one camera module comprises a 2D camera and a 3D camera.
In one or more embodiments, the 2D camera may be a high-definition (also known as high resolution) camera. Typically, these have upwards of 12 megapixels. It may be HD and RGB, or HD and monochrome. In one or more embodiments, the 2D camera may be a colour (e.g. RGB) camera.
In one or more embodiments, the 3D camera may be a stereo camera (e.g. comprising 2x2D cameras such as 2x2D low resolution, monochrome cameras of resolution of 12 megapixels or lower, typically 8 megapixels or lower).
In one or more embodiments, the 3D camera may be a time-of-flight camera, e.g. such as that described in GB2539495 PYNE CARTER.
In one or more embodiments, two or more cameras within the at least one (e.g. one, two or more, or each) camera module have the same, or substantially the same, focal length and/or the same, or substantially the same, field of view.
In one or more embodiments, at least two cameras are provided within the at least one (e.g. one, two or more, or each) camera module, each camera having a different focal length.
In one or more embodiments, the imaging station comprises a substantially planar platform on which aquatic animals travel. In one or more embodiments, the imaging station comprises a substantially horizontal platform on which aquatic animals travel. Thus, preferably, in one or more embodiments, a planar, horizontal platform for travel is provided. Travel may take place by conveyor belt, driven rollers, momentum of ingress etc. In one or more embodiments, the imaging station comprises a sloped platform on which aquatic animals travel. Thus, in one or more embodiments, a sloped, planar platform is provided. Alternatively, it may be curved.
In one or more embodiments, the distance of the platform, and so the animal, from the camera is predetermined. Preferably, the height and/or orientation of the image acquisition unit above the platform may be varied to one or more preset heights and/or orientations.
In one or more embodiments, the system comprises (e.g. within the image processing unit) a first trained deep learning network (e.g. a convolutional neural network) for classifying and/or uniquely identifying an object in an image frame as an aquatic animal of interest (e.g. a fish and/or a species of fish).
In one or more embodiments, the system comprises the same, or a second, trained deep learning network (e.g. a CNN such as a mask R-CNN, e.g. within the image processing unit) to determine a segmentation mask of an aquatic animal in one or more (2D or 3D) image frames.
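A minimal, hedged sketch of how such a segmentation mask might be obtained is given below; the pretrained COCO weights and the 0.5 thresholds are stand-in assumptions, as a deployed system would use a mask R-CNN fine-tuned on post-stun fish images.

```python
# Sketch: per-pixel segmentation masks from an off-the-shelf Mask R-CNN.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO weights as stand-in
model.eval()

frame = torch.rand(3, 480, 640)  # placeholder for a captured frame, values in [0, 1]
with torch.no_grad():
    output = model([frame])[0]

keep = output["scores"] > 0.5        # assumed confidence threshold
masks = output["masks"][keep] > 0.5  # boolean per-pixel segmentation masks
boxes = output["boxes"][keep]        # matching bounding boxes
```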
In one or more embodiments, the system (e.g. the instructions) comprise(s) determining a length dimension of an aquatic animal and/or determining a width of an aquatic animal.
In one or more embodiments, the system (e.g. the instructions) comprise uniquely identifying an aquatic animal in an image.
In one or more embodiments, the system (e.g. the instructions) comprise associating a condition with a uniquely identified aquatic animal.
In one or more embodiments, the method comprises an action comprising increasing the count of aquatic animals of interest by one (e.g. this is an example of an action).
In one or more embodiments, the method comprises determining a dimension of the aquatic animal; and estimating a biomass of the aquatic animal.
In one or more embodiments, the method comprises determining that the aquatic animal is in a stunned state; and storing an indication that the aquatic animal is in a stunned state (e.g. the action here is storing the indication).
In one or more embodiments, the method comprises determining that the aquatic animal of interest is capable of movement (e.g. not in a fully stunned (e.g. unconscious) state); and storing an indication that the aquatic animal is not in a stunned state (e.g. the action here is storing the indication).
In one or more embodiments, the method comprises adjusting the stun (e.g. electric stun) conditions of the stun station (e.g. of the electric stun station).
In some embodiments, the imaging station is between the dewatering station and a or the processing station. In some embodiments, the imaging station is after a or the processing station. In some embodiments, two imaging stations are provided, e.g. a first imaging station between the dewatering station and a or the processing station, and a second imaging station after the processing station.
In one or more embodiments, in which the system comprises (e.g. within the image processing unit) a first trained deep learning network (e.g. a convolutional neural network), the method may comprise classifying and/or uniquely identifying an object in an image frame as an aquatic animal of interest (e.g. a fish and/or a species of fish).
In one or more embodiments, a stereo camera may be provided and the method may further comprise: * i) obtaining and storing a pair of digital images captured by, or derived from digital images captured by, the stereo camera; * ii) matching pixels between pairs of digital images; * iii) producing a disparity map; * iv) using the disparity map to determine a depth map (e.g. a point cloud); * v) using a trained machine learning tool (e.g. convolutional neural network) to classify a region of the depth map (e.g. as containing an image of a fish; optionally, cropping the digital image to the image region to generate a cropped image).
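A hedged sketch of steps i) to iv) follows, using OpenCV's semi-global block matcher; the matcher parameters are illustrative and the reprojection matrix Q is assumed to come from a prior stereo calibration of the camera pair.

```python
# Sketch: stereo pair -> disparity map -> depth map (point cloud).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # image of the pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # image of the pair

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # must be a multiple of 16; illustrative value
    blockSize=7,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

Q = np.load("stereo_calibration_Q.npy")  # assumed 4x4 reprojection matrix
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (x, y, z)
depth_map = points_3d[:, :, 2]
```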
Where a stereo camera is provided, the instructions in the system or the method may comprise the following: * based on obtaining and storing a pair of digital images captured by, or derived from digital images captured by, the stereo camera, using a trained (e.g. convolutional) neural network to classify an image region of a digital image, of the pair of digital images, as containing an image of an aquatic animal; * cropping the digital image to the image region to generate a cropped image; * identifying a plurality of landmark points on the aquatic animal within the cropped image; * determining a plurality of disparity map values corresponding to at least the plurality of landmark points; * based on the plurality of disparity map values, calculating one or more dimensions of the aquatic animal; and * estimating a mass of the aquatic animal based on the one or more dimensions.
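Continuing the sketch, disparity values at two hypothetical landmark points (e.g. snout and tail fork) can be converted into a physical length and then a mass estimate; the focal length, baseline, principal point and allometric coefficients below are all assumed calibration values, not values from the invention.

```python
# Sketch: landmark disparities -> 3D coordinates -> length -> mass estimate.
import numpy as np

f = 1400.0             # focal length in pixels (assumed calibration value)
b = 0.06               # stereo baseline in metres (assumed)
cx, cy = 640.0, 360.0  # principal point (assumed)

def pixel_to_3d(u, v, d):
    """Back-project a pixel with disparity d in a rectified stereo pair."""
    z = f * b / d
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

snout = pixel_to_3d(412, 300, d=84.0)  # hypothetical landmark detections
tail = pixel_to_3d(980, 315, d=83.0)
length_m = float(np.linalg.norm(tail - snout))

# Illustrative allometric model W = a * L^n; coefficients are species-specific.
a, n = 11.5, 3.0
mass_kg = a * length_m ** n
```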
In one or more embodiments, the step of detecting objects in 2D may comprise tagging and/or segmenting objects in 2D images from a 2D camera (e.g. using intensity or texture or colour etc), optionally from a 2D camera separate from the 3D camera.
In one or more embodiments, the step of detecting objects in 3D may comprise tagging and/or segmenting objects in 3D images (e.g. using depth or indeed using depth and/or using intensity and/or texture and/or colour).
In one or more embodiments, the method may comprise using machine learning to identify relevant pixel groupings representing the object in a 2D and/or 3D image.
In one or more embodiments, in which the system comprises the same, or a second, trained deep learning network (e.g. a CNN such as a mask R-CNN, e.g. within the image processing unit), the method may comprise determining a segmentation mask of an aquatic animal in one or more image frames.
In one or more embodiments, the method comprises determining a length dimension of an aquatic animal and/or determining a width of an aquatic animal.
In one or more embodiments, the method comprises uniquely identifying an aquatic animal in an image. In one or more embodiments, the method comprises associating a condition with a uniquely identified aquatic animal.
In one or more embodiments, the method may comprise: identifying a plurality of (i.e. two or more) landmark points and/or one or more body surface lines within the image of the fish on the depth map; determining one or more body dimensions of the fish using the plurality of landmark points and/or one or more surface lines; and, based on the one or more body dimensions, estimating a mass of the fish.
Several embodiments of the invention are described and any one or more features of any one or more embodiments may be used in any one or more aspects of the invention as described above and elsewhere herein.
Brief Description of the Invention
The present invention will now be described, by way of example only, with reference to the following figures in which like numerals refer to like features.
Figure 1 shows a schematic side view of a first harvesting system in one example embodiment of the invention.
Figure 2 shows a schematic side view of a second system in one example embodiment of the invention.
Figure 3 shows overview steps of a method of determining one or more conditions (e.g. count and/or consciousness and/or mass) of objects such as fish at harvest.
Figure 4 shows a schematic system overview of a camera system and method according to one or more example embodiments of the invention.
Detailed Description of the Invention
It will be understood by those skilled in the art that any dimensions and relative orientations such as lower and higher, above and below, and any directions, such as vertical, horizontal, upper, lower, axial, radial, longitudinal, tangential, etc., and any angles etc. referred to in this application, are within expected structural tolerances and limits for the technical field, and for the apparatus and methods described, and these should be interpreted with this in mind.
In this document reference is made to fish but it will be understood that the invention applies to other freely movable objects such as other marine or aquatic animals.
A stun station may be designed and/or operated to render an aquatic animal unconscious and/or to kill the aquatic animal, as would be understood by someone skilled in the art.
Examples of electric stunners which may be used in one or more embodiments of the present invention include those described in GB2417408, WO2017/006072, GB2557245, GB2540154, GB2556564, all to LINES and PYNE CARTER. Examples of commercially available stunners which may be used include the HSU stunner from Ace Aquatec Ltd., Dundee, UK. Other stunners, non-electric and electric, may be used, although electric stunners are preferred, and in particular in-line electric stunners with continuous flow of fish through a pipeline or channel rather than batch electric stunners.
As described above, much effort has been focused on biomass estimation of live fish in water. WO2022/129362 BRINGSDAL looks at dead fish pumped from the bottom of fish pens. Nevertheless, neither approach is suitable at harvest. The present inventor(s) has/have appreciated that this gap presents problems in taking good care of fish welfare during harvest.
Figure 1 shows a schematic side view elevation of a harvesting system 100, according to one or more embodiments of the invention. Harvesting system 100 comprises, here, an inline electric stunner 10, a dewatering station 20, a first imaging station 30, a processing station 40, a second imaging station 50, and a further processing station 60. An alternative system may be used, comprising a different kind of electric stunner 10, and only one imaging station 30 and one processing station 40.
The side walls of the stations of Figure 1 are typically omitted or indicated by dashed lines to aid clarity. These may be quite widely separated or of a predetermined narrow width to allow only one fish through at a time. A funnel entrance to these narrowly spaced side walls may be provided.
Inline electric stunner 10 comprises a pipe 11 along which electrodes 12 are distributed. The arrangement of pipe 11 and electrodes 12 may take various forms, with electrodes distributed along and/or opposite and/or across and/or above and below a flow path within a pipe 11 or, alternatively, a channel.
A suitable power supply 14 powers electrodes 12. Stunner pipe 11 has an exit 26 which delivers aquatic animals to a dewatering station 20. Dewatering station 20 typically comprises a grid or bars through which water can fall out via an exit 24, retaining fish on top of the grid or bars. There is typically a slight drop from pipe 11 to dewatering station 20 which gives momentum to fish over dewatering station 20 so these continue to travel and pass on to a connected imaging station 30. Alternatively, the dewatering station may have a powered conveying means such as rotating rollers for delivering fish to imaging station 30.
Imaging station 30 comprises an upwardly facing conveying platform 32 on which fish travel before being delivered to a processing station 40. Conveying platform 32 may be sloped, as shown in Figure 1, or may be horizontal. In either case, this is preferably planar. Conveying platform 32 (and 52) may be continuous e.g. a continuous stationary surface or a continuous moving surface, or of discrete parts e.g. formed of rollers, which may be freely rotating, or the like. In either case, it is configured to convey fish from one side to another, via the fish's own momentum or via movement of the platform, or both. Conveying platform 32 may be powered e.g. by comprising numerous powered rollers and/or a conveyor belt for transporting fish from one side of imaging station 30 to another.
Imaging station 30 here comprises a camera module 80, here a first camera module 80A. Camera module 80, 80A comprises a HD 2D camera 82 which has a field of view F1 of platform 32 of imaging station 30. Camera module 80A also comprises a 3D camera 84, here arranged as 2x2D cameras 84A and 84B, one either side of HD 2D camera 82. It will be understood that cameras 82, 84A and 84B may be colour (RGB) or monochrome.
Together, cameras 84A and 84B form a stereo camera. Cameras 84A and 84B have respective fields of view F2 and F3 of platform 32 of imaging station 30.
Each camera module 80 may be an integral unit demountable and re-mountable in its own right from a housing. Each camera module typically comprises at least one 2D camera 82 and preferably also at least one 3D camera 84 (84A, 84B), preferably a separate 3D camera as shown here, extending outwards from a longitudinal axis of a main housing of camera module 80. For example, the camera module 80 may be angled to correspond to the slope of imaging station 30. In this way, camera module 80, and cameras 82, 84 within it, may be substantially parallel to the sloped platform of station 30.
Preferably, the cameras 82, 84A, 84B in each respective camera module are provided in line with one another along a respective substantially horizontal (or sloped) axis so that each camera within a given camera module faces outwardly in the same vertical or angled direction. Thus, the two stereo cameras 84A and 84B are in line with one another in use. This enables a view from one side and the other side of a plane extending downwards from the camera module, and so to one side and the other of fish when these travel in or through that plane. In some embodiments, each of the cameras within a camera module is chosen so that the width of its field of view is generally, or substantially, the same as the other cameras within that camera module. At least, the two cameras within a pair of stereo cameras have the same field of view preferably. These are also preferably identical cameras.
In Figure 1, it can be seen that 2D camera 82 has a first field of view F1, a first 2D camera 84A of a 3D (2x2D) stereo camera has a second field of view F2, and a second 2D camera 84B of a 3D stereo camera has a third field of view F3. The widths of these fields of view as shown in Figure 1 overlap so that images from one camera can be compared with images from another camera. Typically, their focal lengths are the same. These are preferably adapted to the slope, or absence of slope, of the upper platform of transport of first imaging station 30.
It can be seen that the fields of view F1, F2, F3 extend vertically downwards from respective cameras and overlap and occupy a substantial proportion, typically a majority, or most, or all, of the surface of imaging station 30 when seen from above.
The overlap of the 2x2D (pair) of the fields of view of stereo cameras 84A, 84B with each other and with corresponding high-definition camera 82 (HD or RGB) is shown. As these cameras all face in the same direction and are closely spaced one next to the other, preferably in a line, this means that they view more or less the same region of imaging station 30.
Where upper platform 32 is sloped, for example sloped up to 15° from the horizontal, a front optical plane of camera module 80 (and so of cameras 82 and 84) may be commensurately sloped, so these are generally or substantially parallel to one another. Alternatively, as shown in Figure 1, platform 32 may be slightly sloped with respect to the horizontal, and camera module 80 may be arranged such that a front optical plane of camera module 80 (and so of cameras 82 and 84) is generally or substantially horizontal. Corrections can be made in depth calculations to account for this difference in slope, and so distance.
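A minimal sketch of one way such a slope correction could be applied is given below, assuming the tilt angle of the platform relative to the camera's optical plane is known; the 15 degree figure echoes the example slope mentioned above.

```python
# Sketch: correct a vertically measured depth to a platform-normal distance.
import numpy as np

def platform_distance(depth_vertical_m, slope_deg=15.0):
    """Distance perpendicular to a platform inclined at slope_deg
    from the horizontal, given a vertically measured depth."""
    return depth_vertical_m * np.cos(np.radians(slope_deg))

print(platform_distance(1.20))  # ~1.159 m for a 15 degree slope
```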
The field of view F1 of the high-definition camera 82 typically extends along and across the majority, or most of, platform 32 so that fish can be tracked in high definition 2D across the platform. Similarly, cameras 84A and 84B may have fields of view F2 and F3 which extend along (and across) the platform or these may be somewhat shorter, and/or narrower, so that a very specific area, typically a central area 34, of platform 32 is designated as a 3D viewing zone. Indeed, 2D camera 82 may also be so arranged.
Such an arrangement assists in knowing the expected position, distance from the camera, and lighting conditions which assists in reducing variation of images received within 3D camera 84 (and preferably also 2D camera 82) and so improves speed and accuracy in processing images. Furthermore, the processing software may be trained to the local conditions within a specified area 34 of platform 32 for detecting objects e.g. fish. The time of arrival of objects in area 34 and their orientation can be predicted with some degree of certainty by design of the harvesting system.
When sloped, imaging station 30 may also function as a delivery chute to a processing station 40 as well as an image processing station. Fish are delivered from imaging station 30 to processing station 40. Indeed, one or more processing stations may be provided.
Here, processing station 40 is a percussive stun and/or bleed station. It may, alternatively, be a treatment station for lice and the like.
Following first processing station 40, a second imaging station 50 is provided. In this example embodiment, second imaging station 50 is provided with, here, a substantially horizontal upper platform 52 and a conveyor belt 56 beneath platform 52 for conveying fish through imaging station 50.
Imaging station 50 is provided with a camera module, which is, in this example embodiment, a second camera module 80B. Camera module 80B may be identical to camera module 80A or may, as shown here, be slightly different. Here, camera module 80B comprises a HD 2D camera 82 upstream from a stereo camera 84 comprising 2x2D cameras 84A and 84B. Preferably, the field of view of HD camera 82 encompasses a majority, or most, or all, of platform 52, or at least a viewing region 54. The 2x2D cameras 84A and 84B of stereo camera 84 have respective fields of view F2 and F3 which overlap one another to view, or indeed define, a viewing region 54.
The first and second camera modules 80A and 80B provide input to an image processing system 200 (see Figure 4) comprising storage media 210 used for storing images as frames, and for storing programs of instructions for carrying out the methods on the apparatus of the invention.
Thus, a designated viewing area 54 is provided for stereo camera 84, typically encompassing a central or later region of upper platform 52. In this way, images from 2D camera 82 may be taken and processed first, with the results available for feeding into image processing of image frames from 3D camera 84.
Here, platform 52 is substantially horizontal and an optical plane of camera module 80 is also substantially horizontal. These are, therefore, substantially parallel to one another.
A second or further processing station 60 is provided downstream from second imaging station 50. Processing station 60 may comprise filleting and/or packaging, or indeed ice treatment to elicit death etc.
Turning to Figure 2, further detail is shown of an alternative harvesting system 100, here comprising an inline electric stunner 10, a dewatering station 20, one imaging station 30, and one processing station 40. Different numbers and arrangements of imaging stations and processing stations may be provided, but it is particularly preferable if at least one imaging station is provided before (e.g. immediately before) a processing station which can then take an action. Alternatively, an action may be taken by an image processing unit or a control unit (not shown) to control stun parameters in the stun station.
Inline electric stunner 10 in Figure 1 or Figure 2 may be fed by a fish pump 70 from a fish inlet 16, which delivers fish to outlet 26. Fish and water travel over dewatering station 20 and water is removed via exit 24, with fish being delivered to imaging station 30.
In Figure 2, only a single imaging station is provided. Also, a single camera module 80 is provided here perpendicular to the direction of travel of the fish (in contrast to Figure 1 in which both camera modules were in line with the direction of travel of fish). Thus, the fields of view F1, F2, and F3 of cameras 82 and 84 in Figure 2 along the direction of travel of fish are more or less identical. A single processing station, e.g. a bleed or filleting or packaging station, may be provided.
Referring now to Figure 3, a process 102 for object detection and optional segmentation is shown. In step 230-1, a deep learning network, such as a convolutional neural network, is used to determine that an object is an aquatic animal of interest, e.g. a fish or specific species of fish, in a 2D image and, optionally, to assign boundary boxes and/or key points. Once a first condition is satisfied, namely that an object is determined as a fish, one or more actions are taken. Typically, the action is or includes increasing the fish count by one. This is a valuable improvement over existing systems, adding a level of intelligence, in the form of supervised machine learning, to an otherwise somewhat erratic manual process of counting, increasing confidence in the count and providing opportunities to determine further conditions and take further actions.
The action taken upon determining that the object is a fish may be to increase the fish count by one and/or to determine a boundary box and/or key points of the fish in the 2D image, preferably both.
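A hedged sketch of this first-condition/action logic follows; the detector object, its attributes and the 0.8 confidence threshold are hypothetical placeholders for a trained network.

```python
# Sketch: apply the 'is it a fish?' first condition and take the count action.
def process_frame(frame, detector, confidence_threshold=0.8):
    """detector is assumed to return objects with label, score, box, keypoints."""
    accepted = []
    for det in detector(frame):
        if det.label == "fish" and det.score >= confidence_threshold:
            accepted.append((det.box, det.keypoints))  # for downstream steps
    count_increment = len(accepted)  # action: increase the fish count
    return count_increment, accepted
```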
The nature of an inline process such as this can orient the fish in a suitable way so that, as it arrives in the designated area, e.g. the platforms 32 or 52, or a designated area 34 or 54 of platforms 32 or 52, the dimensional variation within the recognition problem is much reduced. Typically, the fish will approach head first and lie along the direction of travel so that the longitudinal axis of the fish lies along the direction of travel. This greatly simplifies the recognition problem (the 'is it a fish?' question) and so the counting problem. Furthermore, this also simplifies the challenge of assigning a correctly-sized boundary box to the fish and of determining the location of key points within a boundary box.
Nevertheless, as shown in Figure 3, optionally, a segmentation mask may be determined from a 2D image in step 230-2 e.g. using machine learning, such as a deep learning network such as a neural network. Again, given the simplified orientation of the fish and the limitation of variation of conditions in air, segmentation processing is much easier and faster.
Only one camera need be provided in the camera module and this may be a 2D or 3D camera but, preferably, both are provided, in which case optional steps 260-1, 260-2, 260-3a, 260-3b may be used. In steps 260-1 and 260-2, a 3D image is used to create a depth map (a point cloud) comprising x, y, z co-ordinates of a fish on the viewing platform 32 (or 52). In step 260-3a, machine learning, such as deep learning using a neural network e.g. a CNN, and/or a corresponding 2D image such as a segmentation mask from a 2D image, may be used to segment the depth map, to identify fish in the depth map. In step 260-3b, optionally, unsupervised clustering such as using a DBSCAN cluster algorithm may be used to segment the depth map.
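A hedged sketch of step 260-3b follows; the eps, min_samples, background distance and size filter are illustrative assumptions rather than tuned values.

```python
# Sketch: unsupervised segmentation of a depth map (point cloud) with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def segment_depth_map(points_3d, platform_z=1.25):
    """points_3d: (H, W, 3) array of x, y, z coordinates in metres."""
    pts = points_3d.reshape(-1, 3)
    # Discard pixels at roughly the known platform distance (background).
    foreground = pts[pts[:, 2] < platform_z - 0.02]
    labels = DBSCAN(eps=0.03, min_samples=40).fit_predict(foreground)
    clusters = [foreground[labels == k] for k in set(labels) - {-1}]
    return [c for c in clusters if len(c) > 500]  # keep fish-sized blobs
```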
Whether or not steps 230-2 to 260-3b are carried out, preferably in step 270 a dimension of the object is determined. Preferably, the dimension is determined in at least two frames in step 270, in 2D and/or 3D. Regarding the 2D images, these may be from a high-definition camera or from a base image from one of the 2D cameras of the stereo camera. Object detection is carried out to provide images of fish. Objects are tracked, e.g. images of the fish or the boundary boxes across frames and/or cameras, and the most likely frames may be selected. A series of example frames f=1 to 3 at respective times t=1 to 3 are shown. Using the high-resolution 2D images facilitates the choice of frames which may be taken from the 3D images and/or can be used to check dimensions taken from the 3D images.
In step 280, two frames each spaced by a time period T3 may each be used to compare a corresponding dimension of the fish. A dimension in at least one frame is compared to a dimension in at least one other frame, the two frames being spaced in time anywhere in the range T3 = 5 µs to 0.5 s, more preferably 100 µs to 0.5 s. Indeed, the system may be trained using supervised learning to take images based upon the accuracy of the output of the comparison of dimensions.
The dimension may be determined in two sets of frames. For example, a first set of frames may be a set of two to five frames taken very rapidly one after the other; a dimension may be determined for each one and averaged. The set of frames are typically spaced apart in time from one another by a time period T1.
A second set of preferably the same number of frames may be similarly determined. Each of these second set of frames is separated by a time period T2, which is typically of the same order as, or the same as, T1. However, the first set of frames and the second set of frames may be separated from each other in time by a longer time period T3, which is typically at least 5 or 10 x T1 or T2. So, for example, if the camera takes images at 40 frames per second, each frame is separated by 25 ms, whereas one set of frames may be separated by, for example, T3 = 0.25 s (10 x 25 ms) or more, say 0.5 s and so on, from a second set of frames. Thus, the average of dimensions from closely spaced frames separated by T1 or T2 can be used to improve accuracy, whilst at the same time providing the ability to compare the dimension over a differing time period T3 (e.g. 5 µs to 0.5 s, more preferably 100 µs to 0.5 s).
By comparing in step 280 at least one dimension (e.g. length, height, width, thickness) in at least two frames, or in two or more sets of frames, an indication of the stun state (e.g. consciousness and/or unconsciousness) may be determined. If the fish is not stunned, it can move and its apparent dimensions can therefore change. In preferred embodiments, this can be captured by the system and method of the invention, and various actions taken.
For example, if the dimension is a length and a comparison is made of the length in one frame to another frame, or from one set of frames to another set of frames, and the length has varied more than a predetermined amount, this indicates the fish is not in a stunned state, as it has moved, and vice versa. This information can be used to control the harvesting machine in a number of ways, for example to: direct non-stunned fish to a different processing line; or change, e.g. increase and/or decrease, the stun parameters on the electrodes in the electric stun station (e.g. adjust stun parameters in the stun section and/or adjust stun parameters in a stun maintenance section).
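A minimal sketch of this length-comparison check is given below; the 3% tolerance and the example length values are assumptions for illustration.

```python
# Sketch: compare averaged lengths from two sets of frames spaced by T3.
import numpy as np

def stun_state(lengths_set1, lengths_set2, tolerance=0.03):
    """Each argument holds length estimates (metres) from one closely
    spaced set of frames; the two sets are separated by period T3."""
    l1 = np.mean(lengths_set1)  # averaging within a set improves accuracy
    l2 = np.mean(lengths_set2)
    return "stunned" if abs(l1 - l2) / l1 <= tolerance else "moving"

state = stun_state([0.612, 0.609, 0.611], [0.640, 0.637, 0.644])
if state == "moving":
    # Example actions from the text: divert the fish and/or adjust the stun.
    print("divert to secondary stun line; adjust stun section parameters")
```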
Optionally, machine learning tools e.g. deep learning network e.g. neural networks may be used to determine boundary boxes of fish and/or of key points of fish and/or to segment a depth map. Indeed, once boundary boxes of fish and/or of key points and/or segmentation of fish have been carried out on a 2D image, the corresponding 3D image (from the corresponding 3D camera in the same camera module) may be used to map onto the corresponding pixels in a 3D camera, e.g. on a pixel by pixel (pixel wise) basis, to determine the depth of the boundary boxes of the fish and/or the boundary boxes of the key points of the fish and/or of other elements of the fish and/or to determine a segmentation mask in the 3D image e.g. based on boundary boxes of fish and/or boundary boxes of key points of fish and/or more preferably the segmentation mask from the 2D image.
Fish are preferably tracked across at least two frames. As images are taken in air on a fixed platform of imaging station 30, object identification and, indeed, tracking is much simplified. Thus, when looking at Figure 1, it can be seen that each fish may be tracked along a significant proportion of its journey through imaging station 30. This means that images of fish at different locations with respect to the camera on the travel platform of image station 30 can be gathered. Whilst global trackers, such as the KALMAN tracker machine learning tool, may be used to identify the trajectory of an object from one frame to the next and estimate the probability that an object in the next frame is the same as the one seen before, these are typically not needed. A knowledge of velocity of the object over the image station platform and time between frames is enough to give a prediction of position of that fish on platforms 32 and 52.
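A small sketch of this prediction-based association is shown below; the belt velocity, frame interval and tolerance are assumed values (the 25 ms interval corresponds to the 40 frames per second example used elsewhere in this document).

```python
# Sketch: associate detections across frames from known velocity alone.
def predict_position(x_m, velocity_mps=0.8, frame_interval_s=0.025):
    """Expected along-platform position of a fish in the next frame."""
    return x_m + velocity_mps * frame_interval_s

def same_fish(x_observed, x_predicted, tolerance_m=0.05):
    """Associate a detection with a track if it lies near the prediction."""
    return abs(x_observed - x_predicted) <= tolerance_m
```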
By collecting two or more image frames, confidence that these are the same fish can be determined. Indeed, the more images of one fish that are collected, the better the estimate of a particular dimension such as a length (e.g. longest) and/or a height (e.g. average or highest), in order to give a better estimate of the weight.
Thus, measuring at harvest and in air along a known trajectory simplifies imaging and calculations, reducing errors and improving estimation. Further, these images can be collected and used in training modules for enhancing the accuracy of the machine learning algorithms that detect fish in this environment.
Indeed, images collected from the apparatus and methods of the invention may be used to train machine learning algorithms in a number of ways e.g. directly and/or following manipulations (inversion, size changes, brightness changes, aspect ratio changes, tilt etc). The images collected may therefore be referred to as digital twins of real and pseudo-real images.
Referring now to Figure 4, a system overview is shown comprising two camera modules 80A and 80B and an image processing system 200 for use with the camera modules. Only one camera module may be used and, indeed, only one camera within each camera module. Nevertheless, it will be understood that providing two spaced-apart camera modules, one at a first imaging station 30 and one at a second imaging station 50, provides two opportunities to assess a condition of a fish such as its length and/or its stun status and/or its weight etc. This improves accuracy of the whole system.
Image processing system 200 comprises a storage media 210 for storing images and/or programs of instructions. A control unit, typically comprising a microprocessor, runs the programs for undertaking various activities including detecting objects e.g. fish and determining dimensions as well as, optionally, filtering and/or tracking and/or selection within images. In step 230, object detection e.g. using boundary boxes and/or segmentation takes place using intensity and/or texture and/or colours etc. on 2D images.
In step 240, fish are counted.
In step 250 fish are tracked across frames and/or across cameras and/or across camera modules and suitable frames are selected. A frame number f and a time t are allocated to each frame.
Preferably, a 3D camera is provided, such as a stereo camera. In which case, stereo matching takes place between images in step 260-1 (for images A and B) to develop a disparity map and a depth calculation is carried out in step 260-2 to produce a depth map. Optionally, image detection takes place within the depth map 260 e.g. using segmentation based on clustering to produce a DBSCAN result.
In step 270, dimensions are estimated for tracked objects, optionally using depth, e.g. in selected frames and/or at selected times. The number of frames and/or the time separation between frames selected for comparison may be varied or used as inputs into a further machine learning tool, such as a deep learning network such as a neural network, to determine if, from the image data and the input information relating to time and/or frame selection, improved accuracy of result for the dimension and/or state can be determined.
Thus, storage media 210 may be provided with up to three machine learning tools: a first one for detecting objects in step 230 (e.g. is it a fish in 2D), a second one for detecting objects in step 260 (e.g. an outline of a fish in 3D images), and/or a third one for selecting, and preferably also processing, frames (or sets of frames) in step 270.
In step 280, a determination is made if the estimation of the dimension is changing more than a predetermined limit, e.g. if the length is changing more than a predetermined limit, by comparing that dimension e.g. length in two or more images, or sets of images. If the dimension is unchanging compared to the limit in step 290, a stun status of unconscious may be indicated and, if the dimension is changing compared to the predetermined limit, a stun state of conscious may be indicated in step 290.
In step 300, an action is taken, e.g. to adjust the stun condition or to direct the fish in question to a particular processing line downstream. It will be understood that various adjustments may be made to improve the accuracy of the estimated dimension. For example, in optional step 310, the pixel choice may be adjusted e.g. adding pixels at the end of a line for a length and/or height estimation.
In step 320, optionally, a selected frame with a representative dimension from one or more selected frames may be used. For example, where a set of frames are taken, closely spaced together, the longest dimension from the set may be chosen for comparison with a similarly longest dimension from another set of frames.
A centre line of an object is determined from a depth map, e.g. from a DBSCAN result, to take account of object thickness. In step 310, optionally, extra pixels are added to a centre line, e.g. at one or both ends of the object, for example if the difference in depth values outside an outline, e.g. a segmented outline of the fish, is less than a predetermined amount such as ≤0.5 to ≤1.0 cm, or 1 to 5 cm, or 1 to 2 cm.
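A hedged sketch of this centre-line extension follows; the marching direction, the 1 cm depth threshold and the cap on added pixels are illustrative assumptions, and image-boundary checks are omitted for brevity.

```python
# Sketch: extend a centre line while depth past the segmented end stays
# close to the object depth (i.e. the outline cut the fish slightly short).
import numpy as np

def extend_centre_line(line_px, depth_map, step=(0, 1), max_extra=10,
                       thresh_m=0.01):
    """line_px: ordered (row, col) pixels along the centre line."""
    end = np.array(line_px[-1])
    end_depth = depth_map[tuple(end)]
    extended = list(line_px)
    for _ in range(max_extra):
        nxt = end + step  # march one pixel beyond the current end point
        if abs(depth_map[tuple(nxt)] - end_depth) >= thresh_m:
            break  # depth jumps to background: true end of the object
        extended.append(tuple(nxt))
        end = nxt
    return extended
```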
In step 320, optionally, the most representative frame with the likely best dimension, e.g. length, is selected. In step 330, an estimate or a determination of a dimension for each object is made. In this example embodiment, in step 330, the weight is estimated. For example, in step 330, the dimension that has been found, e.g. length and/or height and/or area and/or volume is used to determine an estimate of weight. A distribution of dimensions and/or weights may be established.
Where the images taken are fed back into the machine learning tools (e.g. neural networks) used within the image processing system, this enables the image processing system to improve itself (e.g. by further training and/or adjustment of the trained algorithms) as it goes along and, further, can also provide training images for training purposes.
Machine learning algorithms typically require at least 200 images and, preferably, many thousands of images to identify or improve features of interest. However, by limiting the distance and angles at which fish can be seen by cameras 80 (82, 84), a valid identification 30 of that fish can be made with far higher probability.
In one or more embodiments, trained machine learning based classifier algorithms such as a Haar classifier cascade can be used to identify fish in one of the left-hand side and right-hand side images in stereo image processing. Optionally, images that do not have at least a predetermined number of features (e.g. fins and/or eyes and/or tail) in the left-hand side image and/or in the right-hand side image may be discarded.
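A hedged sketch follows; OpenCV's CascadeClassifier machinery is real, but the cascade files named below are hypothetical and would have to be trained on labelled fish images, and the minimum feature count is an assumed value.

```python
# Sketch: Haar-cascade detection in the left-hand image of a stereo pair,
# discarding detections with too few supporting features.
import cv2

fish_cascade = cv2.CascadeClassifier("fish_cascade.xml")             # hypothetical
feature_cascade = cv2.CascadeClassifier("fin_eye_tail_cascade.xml")  # hypothetical

left_gray = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
fish = fish_cascade.detectMultiScale(left_gray, scaleFactor=1.1, minNeighbors=5)

MIN_FEATURES = 3  # assumed predetermined number of features (fins/eyes/tail)
for (x, y, w, h) in fish:
    roi = left_gray[y:y + h, x:x + w]
    features = feature_cascade.detectMultiScale(roi, 1.1, 3)
    if len(features) < MIN_FEATURES:
        continue  # discard this detection, as suggested above
```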
In one or more preferred embodiments, the machine learning employs segmentation, preferably alongside boundary box recognition. Such segmentation may be based on image features such as intensity, colour, texture, brightness etc., but may be based, at least in part, on depth. Thus, segmentation in general takes a pixel-by-pixel approach and, where depth is involved, this enables full 3D rotation of the image surveyed and accurate object identification, and provides a clearer path to rapid training from the increased amounts of usable images.
One or more embodiments may be configured as follows:
- A high resolution 2D camera may be provided and optimally used to run A.I. detection algorithms (otherwise the algorithms can be run with a sole stereo camera). The high resolution 2D camera may be used to detect objects trained in the software, which may be one or more of different species of fish, lice, indicators of maturation, disease (e.g. fin rot), health etc.
- Two stereo cameras (or optionally a ToF camera, or optionally stereo cameras and a ToF camera) may be provided. Preferably, these are aligned to provide the same focal view and/or length as the high resolution RGB camera.
The depth map may be used to create a centre line for each fish and for each selected frame.
Tracking of fish across multiple frames becomes easier in air, particularly when mechanical guides (e.g. side walls) channel fish to a predetermined viewing portion of an imaging station (e.g. 34, 54) associated with the 2D and/or 3D camera (82, 84).
Filters may be applied to choose the most reliable image (e.g. the curvature of the line is preferably no more than 0.1; see TILLETT et al, 'Estimating Dimensions of Free-Swimming Fish Using 3D Point Distribution Models', Computer Vision and Image Understanding 79, 123-141 (2000)).
Length and height may then be used to calculate weight. Alternatively, standard algorithms based upon area of the fish determined from point cloud data can be used to reach a better estimate of weight. Alternatively, segmentation can be used in combination with tagging, to provide more reliable distances of tagged portions of a fish (e.g. a segmented fin or head) to reach a better estimate of weight.
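Two hedged, illustrative weight models are sketched below; the functional forms and coefficients are placeholders for species-specific calibration and are not taken from the invention.

```python
# Sketch: weight estimation from measured dimensions or projected area.
def weight_from_length_height(length_m, height_m, k=250.0):
    """Assumed proportional model: mass scales with L * H^2 (k in kg/m^3)."""
    return k * length_m * height_m ** 2

def weight_from_area(area_m2, c=170.0, p=1.5):
    """Assumed alternative model from point-cloud projected area: W = c * A^p."""
    return c * area_m2 ** p

print(weight_from_length_height(0.60, 0.13))  # ~2.5 kg for an assumed salmon
```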
Whilst electric stunning in water is preferred, the stun station may be automatic percussive, or dry electric, or indeed in-water electric and so on. The system may be used to develop instructions regarding how to stun better, and/or to move an unstunned animal to a further station where it can be stunned again with a secondary system.
Percussive: The system can be used to check if the stun station is knocking the fish unconscious. In percussive stun systems, the system is set up according to the head size of the fish, so if a bigger fish comes through, the blow may hit its nose and lead to recovery. In this instance the camera module in the harvesting system of the present invention would detect movement. A sorting arm, or lever, or the like moves the fish into a separate line for secondary stunning with percussion or people.
Dry electric: If movement is detected, the system may specify changes to the applied voltages or a slowing of the conveyor speed. Indeed, if movement is detected, the same as above may be used, e.g. if the harvesting system detects movement, optionally a sorting arm, or lever, or the like moves the fish into a separate line for secondary stunning with percussion or people.
Where dry stunners and percussive stunners are used, a dewatering station, then a stun station, then an imaging station are typically provided in that order.
Wet Electric: If movement is detected, movement in the fish can inform the stunner to increase the electric field strength and/or movement in the fish can inform the stunner to reduce the flow speed of the pump so that fish pass through the stunner more slowly increasing the immersion times. Indeed, if movement is detected, the same as above may be used, e.g. if the harvesting system detects movement, optionally a sorting arm, or lever, or the like moves the fish into a separate line for secondary stunning with percussion or people.
Here a stun station, then a dewatering station, then an imaging station are typically provided in that order.
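As an illustration of the adjustments described for the dry and wet electric cases, the feedback can be summarised as a simple control rule. In the sketch below every numeric limit and step size is a placeholder, since the text specifies only the direction of adjustment.

```python
def adjust_wet_stunner(movement_detected: bool,
                       field_strength: float, pump_flow: float,
                       field_step: float = 5.0, flow_step: float = 10.0,
                       field_max: float = 250.0, flow_min: float = 50.0):
    """On detected movement, raise the electric field strength and/or
    slow the pump so fish pass more slowly and immersion time increases."""
    if movement_detected:
        field_strength = min(field_strength + field_step, field_max)
        pump_flow = max(pump_flow - flow_step, flow_min)
    return field_strength, pump_flow
```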
Reference numerals:
- harvesting system
- 10 inline electric stunner harvesting system pipe
- 11 pipe or channel
- 12 electrodes
- 14 power supply
- 16 inlet
- dewatering station
- 24 water outlet
- 26 fish outlet
- first (e.g. pre-processing) imaging station (e.g. delivery chute)
- 34 predetermined viewing area
- processing station (e.g. percussive stun and bleed)
- second (e.g. post-processing) imaging station
- 52 upwardly facing platform
- 54 predetermined viewing area
- 56 conveyor belt
- further processing station(s) (e.g. filleting and/or packaging)
- fish pump
- camera module
- 82 2D (e.g. HD and/or RGB) camera
- 84 (84A, 84B) stereo camera (3D = 2x2D cameras)
- image processing system
- 210 storage media
- 220 control unit
- F1 field of view of 2D camera
- F2, F3 fields of view of the respective 2D cameras of the stereo camera
Claims (34)
- 1. A system for monitoring aquatic animals at harvest comprising: a stun station; a dewatering station; at least one imaging station; and, at the at least one imaging station, an image acquisition unit comprising: at least one camera module configured for capturing images of aquatic animals post stun as these pass through the imaging station; an image processing unit of, or operably coupled to, the image acquisition unit comprising: at least one microprocessor; at least one storage medium comprising one or more programs, the one or more programs configured for execution on the at least one microprocessor; the one or more programs comprising instructions for: - detecting at least one object in at least one image frame; - determining at least one condition associated with the object in the at least one image frame; - the at least one condition comprising a first condition of the object being determined to be an aquatic animal of interest; - and, in response to the determination of the at least one condition, taking an action associated with the harvesting of aquatic animals.
- 2. A system according to claim 1 in which: the action comprises increasing the count of aquatic animals of interest by one.
- 3. A system according to claim 1 or 2 in which: the condition comprises a dimension of the aquatic animal being determined; and the action comprises estimating a biomass of the aquatic animal.
- 4. A system according to any preceding claim in which: the condition comprises the aquatic animal being in a stunned state; and, the action comprises storing an indication that the aquatic animal is in a stunned state.
- 5. A system according to any preceding claim in which: the condition comprises the aquatic animal being capable of movement; and, the action comprises storing an indication that the aquatic animal is not in a stunned state.
- 6. A system according to any preceding claim in which: the action comprises adjusting the electric stun conditions of the electric stun station.
- 7. A system according to any preceding claim in which a processing station is provided and in which the imaging station is between the dewatering station and the processing station.
- 8. A system according to any of claims 1 to 6 in which a processing station is provided and in which the imaging station is after the processing station.
- 9. A system according to any preceding claim in which a processing station is provided and in which two imaging stations are provided, a first imaging station between the dewatering station and the processing station, and a second imaging station after the processing station.
- 10. A system according to any preceding claim in which the at least one camera module comprises a 2D camera.
- 11. A system according to any preceding claim in which the at least one camera module comprises a 2D camera and a 3D camera.
- 12. A system according to any preceding claim in which two or more cameras within the at least one camera module have the same, or substantially the same, focal length and/or same, or substantially same, field of view.
- 13. A system according to any preceding claim in which at least two cameras are provided within the at least one camera module, each camera having a different focal length.
- 14. A system according to any preceding claim in which the imaging station comprises a substantially planar platform on which aquatic animals travel.
- 15. A system according to any preceding claim in which the imaging station comprises a substantially horizontal platform on which aquatic animals travel.
- 16. A system according to any of claims 1 to 14 in which the imaging station comprises a sloped platform on which aquatic animals travel.
- 17. A system according to any preceding claim comprising a first trained deep learning network for classifying and/or uniquely identifying an object in an image frame as an aquatic animal of interest.
- 18. A system according to claim 17 comprising: the same, or a second, trained deep learning network to determine a segmentation mask of an aquatic animal in one or more image frames.
- 19. A system according to any preceding claim comprising determining a length dimension of an aquatic animal and/or determining a width of an aquatic animal.
- 20. A system according to any preceding claim comprising uniquely identifying an aquatic animal in an image.
- 21. A system according to claim 20 comprising associating a condition with a uniquely identified aquatic animal.
- 22. A system according to any preceding claim in which the stun station comprises an electric stun station.
- 23. A system according to claim 22 in which the stun station comprises an inline wet electric stun station.
- 24. A method for monitoring aquatic animals at harvest in a system comprising: a stun station; a dewatering station; at least one imaging station; at the at least one imaging station, an image acquisition unit comprising: at least one camera module configured for capturing images of aquatic animals post stun as these pass through the imaging station; an image processing unit of, or operably coupled to, the image acquisition unit comprising: at least one microprocessor; and at least one storage medium; the method comprising: - detecting at least one object in at least one image frame; - determining at least one condition associated with the object in the at least one image frame; - the at least one condition comprising a first condition of the object being determined to be an aquatic animal of interest; - and, in response to the determination of the at least one condition, taking an action associated with the harvesting of aquatic animals.
- 25. A method according to claim 24 comprising: the action comprises increasing the count of aquatic animals of interest by one.
- 26. A method according to claim 24 or 25 comprising: determining a dimension of the aquatic animal; and estimating a biomass of the aquatic animal.
- 27. A method according to any of claims 24 to 26 comprising: determining that the aquatic animal is in a stunned state; and, storing an indication that the aquatic animal is in a stunned state.
- 28. A method according to any of claims 24 to 27 comprising: determining that the aquatic animal of interest is capable of movement; and, storing an indication that the aquatic animal is not in a stunned state.
- 29. A method according to any of claims 24 to 28 in which the stun station comprises an electric stun station, and the method comprises: adjusting the electric stun conditions of the electric stun station.
- 30. A method according to any of claims 24 to 29 in which the system comprises a first trained deep learning network and the method comprises classifying and/or uniquely identifying an object in an image frame as an aquatic animal of interest.
- 31. A method according to any of claims 24 to 30 in which the system comprises: the same or a second trained deep learning network; and, the method comprises determining a segmentation mask of an aquatic animal in one or more image frames.
- 32. A method according to any of claims 24 to 31 comprising determining a length dimension of an aquatic animal and/or determining a width dimension of an aquatic animal.
- 33. A method according to any of claims 24 to 32 comprising uniquely identifying an aquatic animal in an image.
- 34. A method according to claim 33 comprising associating a condition with a uniquely identified aquatic animal.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2316024.5A GB2634765A (en) | 2023-10-19 | 2023-10-19 | Systems, apparatus and methods of condition monitoring |
| PCT/EP2024/077150 WO2025082719A1 (en) | 2023-10-19 | 2024-09-26 | System and method for monitoring and/or treating aquatic animals |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2316024.5A GB2634765A (en) | 2023-10-19 | 2023-10-19 | Systems, apparatus and methods of condition monitoring |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202316024D0 (en) | 2023-12-06 |
| GB2634765A (en) | 2025-04-23 |
Family
ID=88970208
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2316024.5A Pending GB2634765A (en) | 2023-10-19 | 2023-10-19 | Systems, apparatus and methods of condition monitoring |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2634765A (en) |
| WO (1) | WO2025082719A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014128230A1 (en) * | 2013-02-20 | 2014-08-28 | Nordischer Maschinenbau Rud. Baader Gmbh + Co. Kg | A fish processing device |
| US20220071180A1 (en) * | 2020-05-28 | 2022-03-10 | X Development Llc | Analysis and sorting in aquaculture |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2417408B (en) | 2004-08-27 | 2008-05-28 | John Ace-Hopkins | Fish processing |
| GB2539495B (en) | 2015-06-19 | 2017-08-23 | Ace Aquatec Ltd | Improvements relating to time-of-flight cameras |
| GB2540154B (en) | 2015-07-07 | 2019-07-31 | Ace Aquatec Ltd | Improvements related to fish stunning |
| GB2557245B (en) | 2016-12-01 | 2019-03-13 | Ace Aquatec Ltd | Improvements relating to stunning aquatic animals in water |
| SG11202001085QA (en) * | 2017-08-07 | 2020-03-30 | Pharmaq As | Live fish processing system, and associated methods |
| WO2019232247A1 (en) | 2018-06-01 | 2019-12-05 | Aquabyte, Inc. | Biomass estimation in an aquaculture environment |
| NO348007B1 (en) | 2020-12-16 | 2024-06-17 | Createview As | A system for monitoring of dead fish |
- 2023-10-19 GB GB2316024.5A patent/GB2634765A/en active Pending
- 2024-09-26 WO PCT/EP2024/077150 patent/WO2025082719A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| GB202316024D0 (en) | 2023-12-06 |
| WO2025082719A1 (en) | 2025-04-24 |
Similar Documents
| Publication | Title |
|---|---|
| US12406521B2 | Entity identification using machine learning |
| Yang et al. | An automatic classifier for monitoring applied behaviors of cage-free laying hens with deep learning |
| Costa et al. | Extracting fish size using dual underwater cameras |
| CN112598713A | Offshore submarine fish detection and tracking statistical method based on deep learning |
| US11864537B2 | AI based feeding system and method for land-based fish farms |
| Atienza-Vanacloig et al. | Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model |
| Kuningas et al. | Population size, survival and reproductive rates of northern Norwegian killer whales (Orcinus orca) in 1986–2003 |
| CN105913082B | Method and system for classifying targets in image |
| CN115100512A | Monitoring, identifying and catching method and system for marine economic species and storage medium |
| CN114612397B | Fish fry sorting method, system, electronic equipment and storage medium |
| Kounalakis et al. | A robotic system employing deep learning for visual recognition and detection of weeds in grasslands |
| RU2019114132A | Forecasting the yield of the grain field |
| Xu et al. | Detection of bluefin tuna by cascade classifier and deep learning for monitoring fish resources |
| Sokolova et al. | An integrated end-to-end deep neural network for automated detection of discarded fish species and their weight estimation |
| GB2634765A | Systems, apparatus and methods of condition monitoring |
| WO2022171267A1 | System, method, and computer executable code for organism quantification |
| US12396442B2 | Monocular underwater camera biomass estimation |
| Westling et al. | A modular learning approach for fish counting and measurement using stereo baited remote underwater video |
| Mazzei et al. | Automated video imaging system for counting deep-sea bioluminescence organisms events |
| CN115526880B | A method for discriminating the leftovers in feed troughs for caged meat pigeons |
| EP4583069A1 | System and method for identifying individual birds |
| US20250221385A1 | System and method for identifying individual birds |
| GB2633400A | Apparatus and methods of biomass estimation |
| Lin et al. | Novel Cannibalism Indices of Grouper Juvenile in High-Density Aquaculture using Density Map and Optical Flow |
| CN119302243B | Automatic determination device and method for individual egg laying performance of family small-group cage-raised geese |