WO2017042677A1 - Method and system for determining the position of an individual in a determined working area - Google Patents
- Publication number
- WO2017042677A1 (PCT/IB2016/055279)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- individual
- industrial vehicle
- video
- cameras
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q1/00—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
- B60Q1/26—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
- B60Q1/50—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
- B60Q1/525—Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking automatically indicating risk of collision between vehicles in traffic or with pedestrians, e.g. after risk assessment using the vehicle sensor data
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q9/00—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
- B60Q9/008—Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Q—ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
- B60Q2800/00—Features related to particular types of vehicles not otherwise provided for
- B60Q2800/20—Utility vehicles, e.g. for agriculture, construction work
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
Definitions
- the present invention relates to the field of safety in working areas, in particular port terminals, within which industrial vehicles such as "reach stackers" and "forklifts" operate for handling containers.
- the present invention relates to a system and a method for determining the position of an individual within a working area where an industrial vehicle operates.
- the image-processing method appears to require a long time to complete. This can lead to late detection of the individual or even to an erroneous evaluation, since several seconds elapse between the start of the processing and the availability of the result, during which the situation can change completely.
- Another important drawback of the above disclosed method is that it provides the automatic stop of the vehicle whenever an individual is detected within a threshold distance from the vehicle.
- However, on the basis of the distance between the individual and the vehicle alone, it is not possible to determine whether there is a real risk of an accident: even though the individual is close to the vehicle, he/she could be in a sheltered place, for example behind structural elements, e.g. industrial warehouses, or in an area that cannot be reached by the vehicle because of physical obstacles. Therefore, the above disclosed method implies the risk of stopping the vehicle even when the safety of the individual near it is not really in danger, with a consequent loss of productivity without any justifiable reason.
- an object of the present invention to provide a system for determining the position of an individual within a determined working area, in particular a port area, where an industrial vehicle operates for handling containers.
- system for determining the position of an individual within a determined working area, said system comprising:
- each video-camera of said plurality being arranged to acquire a sequence of images at a respective angle of view αi;
- a GPS unit arranged to instantaneously determine the position of the industrial vehicle within said working area and to generate corresponding position data p(ti);
- a processing unit arranged to process said image data acquired by said plurality of video-cameras by applying at least a predetermined object recognition algorithm, said processing unit being arranged to determine, through said processing, the presence of an individual and his/her distance from said industrial vehicle;
- said processing unit being, furthermore, arranged to process said position data acquired by said GPS unit for determining the spatial position of said industrial vehicle and, therefore, the spatial position of said individual.
- an inertial measurement unit is, furthermore, provided that is mounted on said industrial vehicle, and that is equipped with at least an inertial sensor.
- the inertial measurement unit is arranged to generate inertial data i(ti), on the basis of which, the spatial orientation of said industrial vehicle, instant by instant, is determined.
- the processing unit is arranged to process also said inertial data, and to combine them with said position data, for determining the trajectory of the movement of the industrial vehicle and the risk of colliding with the individual.
- the system is arranged to verify, not only that the detected individual is close to the industrial vehicle, on which the video-cameras are mounted, but also that the industrial vehicle is moving along a trajectory, which can be dangerous for the safety of the individual.
- the inertial sensor can be selected from among:
- the processing unit comprises at least a vehicular processing unit mounted on the vehicle, said vehicular processing unit being arranged to process said data acquired by said video-cameras, which are mounted on said industrial vehicle, for determining the presence, or not, of an individual within the working area, and for determining the distance between the individual and the industrial vehicle.
- a plurality of industrial vehicles is provided, each industrial vehicle of said plurality being provided with a respective vehicular processing unit, which is arranged to process the data acquired by the video-cameras mounted on the same industrial vehicle for determining the presence of one, or more, individuals within the working area and their spatial position within the working area.
- a map of the relative positions of the industrial vehicles with respect to the individual, or individuals, present within the working area is obtained.
- a plurality of fixed video-cameras is, furthermore, provided, adapted to acquire a plurality of additional images of said working area.
- the processing unit comprises a central processing unit arranged to process the plurality of additional images and to supplement the results of data processing with the results of said, or each, vehicular processing unit, in such a way to generate a geo-referenced map of the industrial vehicles and of the individuals that are present within the working area.
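The map-building step described above can be sketched as follows; treating the geo-referenced detections as planar east/north coordinates in metres, and merging reports closer than a fixed deduplication radius, are illustrative assumptions for this example, not details given in the text:

```python
import math

# Sketch of generating a geo-referenced map: detections reported by the
# vehicular processing units and by the fixed cameras (each already
# expressed as east/north coordinates) are merged, and reports closer
# than a deduplication radius are treated as the same individual.

def build_map(detections, dedup_radius_m=1.0):
    """detections: list of (east, north); returns deduplicated positions."""
    merged = []
    for e, n in detections:
        # keep a report only if it is not within the radius of a kept one
        if all(math.hypot(e - me, n - mn) >= dedup_radius_m
               for me, mn in merged):
            merged.append((e, n))
    return merged

# Two units report the same person at nearly the same spot; a fixed
# camera reports a second person elsewhere:
site_map = build_map([(10.0, 5.0), (10.3, 5.2), (40.0, 7.0)])
# site_map keeps two entries: (10.0, 5.0) and (40.0, 7.0)
```

In practice the central processing unit would also attach vehicle identifiers and timestamps to each map entry; this sketch shows only the deduplication step.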
- said plurality of video-cameras can be configured in such a way as to obtain, in all, a total angle of view αtot set between 270° and 360° about the industrial vehicle.
- for example, the total angle of view αtot is 360°.
- a method is provided for determining the position of an individual within a determined working area in which an industrial vehicle operates, said industrial vehicle being free to move in a plurality of directions within the working area, the method comprising the steps of:
- a determining step is, furthermore, provided of the spatial orientation of the industrial vehicle according to a plurality of inertial data i(ti) detected by an inertial unit, which is mounted on said industrial vehicle.
- the video-cameras mounted on the industrial vehicle can be arranged to cover a total angle of view set between 270° and 360°.
- the determining step of the spatial orientation can be carried out by means of an inertial measurement unit.
- the output of the inertial measurement unit is a spatial orientation datum of the industrial vehicle on which it is installed.
- the data detected by said, or each, inertial sensor are sent to a remote processing unit, which, on the basis of such data, determines the spatial orientation of the industrial vehicle.
- a verifying step is, furthermore, provided for verifying whether the detected individual is positioned in a predetermined dangerous position, and an emitting step is provided for emitting an alarm signal if said spatial position of said detected individual coincides with said predetermined dangerous position.
- an emitting step can be, furthermore, provided for emitting an alarm signal if the detected individual is in a predetermined dangerous position.
- the alarm signal can be a visual and/or audio signal.
- a stop step can be provided to completely stop the industrial vehicle.
- the images acquired by the video- cameras mounted on the vehicle can be displayed on a display positioned in the cockpit of the industrial vehicle.
- the worker on board the industrial vehicle is able to directly monitor the situation and, if necessary, to carry out the appropriate safety manoeuvre in order to avoid colliding with the individual situated near the industrial vehicle.
- the images acquired by the fixed video-cameras are displayed on a display, which is placed inside the cockpit of the industrial vehicle.
- the processing step of the acquired images comprises the following steps:
- the processing step of the acquired images, furthermore, provides for applying a Histogram of oriented gradients, or HOG, algorithm to said areas of interest of the above disclosed plurality of images, which have been processed by the algorithm based on Haar-cascade classifiers.
- a tracking step is provided to "follow" an individual who has already been detected in a previous detecting step. More in detail, the tracking step is useful for quickly identifying an individual who has already been identified as such in a determined position during a previous detection, and who has temporarily escaped, for some reason, from the video-cameras. This can happen, in particular, because an obstacle is interposed between the video-cameras and the individual, e.g. a container, a building, or a machine, which temporarily "hides" the individual from the system, or because the individual has temporarily left the working area.
- the tracking step allows speeding up the processing steps carried out by applying the second recognition algorithm for determining the individual's position, even if he/she has been temporarily hidden from the video-cameras.
- Examples of techniques that can be used for tracking the individual are: the SIFT ("Scale Invariant Feature Transform") method (see [Lowe, David G., Object recognition from local scale-invariant features, Proceedings of the International Conference on Computer Vision, vol. 2, 1999, pp. 1150-1157]), the optical-flow tracking method (see [S. S. Beauchemin, J. L. Barron (1995), The computation of optical flow, ACM, New York, USA] and [David J. Fleet and Yair Weiss (2006), "Optical Flow Estimation", in Paragios et al., Handbook of Mathematical Models in Computer Vision, Springer, ISBN 0-381-26311-3]), and the Kalman filter method.
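As an illustration of how such a tracking step bridges a temporary occlusion, the following is a minimal sketch in the spirit of the Kalman-filter method cited above, reduced to a constant-velocity predictor; the class name and the one-metre-per-frame scenario are invented for the example:

```python
# Minimal constant-velocity tracker: when the detector loses the
# individual (e.g. hidden behind a container), the track "coasts" on
# the last estimated velocity instead of being dropped. This is a
# deliberate simplification of a full Kalman filter (no noise model).

class CoastingTracker:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

    def update(self, detection):
        """detection is an (x, y) tuple, or None if the person is hidden."""
        if detection is None:
            # no measurement: propagate with the last velocity
            self.x += self.vx
            self.y += self.vy
        else:
            # crude velocity estimate from successive detections
            self.vx = detection[0] - self.x
            self.vy = detection[1] - self.y
            self.x, self.y = detection
        return self.x, self.y

t = CoastingTracker(0.0, 0.0)
t.update((1.0, 0.0))        # person walking right, 1 m per frame
predicted = t.update(None)  # occluded frame: predicted == (2.0, 0.0)
```

When the individual reappears, the next real detection simply overwrites the coasted estimate, which is what makes re-identification fast.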
- Figure 1 diagrammatically shows, in a plan view, a system, according to the invention, for determining the position of an individual within a working area in which an industrial vehicle operates for handling containers in operating conditions;
- Figure 2 diagrammatically shows, in an elevational side view, a possible embodiment of the system of figure 1;
- Figures 3 and 4 diagrammatically show, in a plan view, the system of figure 2;
- Figure 5 diagrammatically shows, in a plan view, an exemplary embodiment of the system of figures 2 to 4;
- Figure 6 shows a flow diagram of a possible succession of steps provided by the method for detecting the presence of an individual in proximity to an industrial vehicle;
- Figure 7 diagrammatically shows a possible succession of steps provided by the method, according to the invention, for processing the images acquired by the video-cameras in order to detect the presence of an individual close to the industrial vehicle;
- Figure 8 diagrammatically shows in a plan view an exemplary embodiment of the system of figure 1;
- FIG. 9 diagrammatically shows in a plan view another exemplary embodiment of the system of figure 1.
- a first embodiment is diagrammatically shown of a system 1 for determining the position of an individual 25 within a working area 200, in which an industrial vehicle 10 operates, in particular a "forklift" or a "reach stacker", that is free to move in a plurality of directions.
- the system 1 provides, in particular, a predetermined number of video-cameras 50 mounted on the industrial vehicle 10.
- Each video-camera 50 is arranged to acquire a series of image data 55 at a respective angle of view αi.
- the video-cameras 50 can be analog video-cameras.
- the images acquired by the analog video-cameras, before being sent to the processing unit 150, are converted into digital form, in particular by a converter device, not shown in the figure for reasons of simplicity.
- a part of the video-cameras 50 of system 1 can be digital video-cameras and the others analog video-cameras.
- As diagrammatically shown in figure 5, four video-cameras 50a-50d are provided, each of which is installed at one of the 4 sides of the vehicle 10.
- the images acquired by the video-cameras 50 are transmitted to a processing unit 150, which processes the images received from the video-cameras 50 through at least one object recognition algorithm. In this way, it is possible to determine the presence of an individual. Once the presence of the individual has been detected, the distance d of the individual from the industrial vehicle 10 is then determined, or more precisely from the video-camera 50 that has detected him/her.
- An identifying step is, then, provided for identifying in the above disclosed plurality, the video-camera 50, which has detected the individual 25.
- the distance of this video-camera from the barycentre of the vehicle 10 is known. Therefore, on the basis of the spatial position of vehicle 10, which has been determined by the GPS unit, as disclosed above, it is possible to accurately know the spatial position of the identified video-camera 50.
- the above disclosed recognition algorithms are adapted to define an area of interest 101, for example rectangular-shaped, comprising the detected figure, of which it is possible to determine the height in pixels.
- given the focal length of video-camera 50 and the height value, expressed in pixels, of said area of interest, and by using known geometric relations, it is possible to determine the distance d of the individual from the video-camera 50 that has detected him/her.
- the distance d is calculated along the optical axis 51 of video-camera 50, assuming that it is positioned at ground level, and is therefore also known as the "ground distance".
- the distance d is indicated as the distance calculated along the optical axis 51 of video-camera 50 up to the area of interest 101 of individual 25.
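The "known geometric relations" mentioned above reduce, in the simplest pinhole-camera model, to a similar-triangles proportion between the person's real height and the pixel height of the area of interest. A minimal sketch, where the average person height and the focal length in pixels are assumed illustrative values, not figures from the text:

```python
# Estimate the ground distance d of a detected individual from the
# camera with the pinhole relation:
#   h_px / f = H / d   =>   d = f * H / h_px
# f is the focal length expressed in pixels, H an assumed average
# person height, h_px the height of the detected area of interest.

def ground_distance(box_height_px: float,
                    focal_length_px: float = 1000.0,
                    person_height_m: float = 1.75) -> float:
    """Distance along the optical axis, in metres."""
    if box_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return focal_length_px * person_height_m / box_height_px

# A person whose bounding box is 350 px tall, with f = 1000 px,
# stands about 5 m from the camera.
d = ground_distance(350.0)
```

The estimate degrades for people partially outside the frame or not standing upright, which is one reason the method combines it with the GPS and inertial data rather than relying on it alone.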
- System 1 comprises, furthermore, a GPS unit 400 arranged to instantaneously determine the position of industrial vehicle 10, in particular within the working area 200, and to generate corresponding position data p(ti) .
- the position data are, then, processed, in particular by the processing unit 150, for determining the spatial coordinates of industrial vehicle 10.
- the position data of the industrial vehicle 10 and the distance d of the detected individual from the industrial vehicle 10 itself therefore allow "geo-referencing" the individual, i.e. determining his/her spatial position; in other words, his/her spatial coordinates are determined.
- the processing unit 150 can carry out an additional processing of the images acquired by the video-cameras 50, in order to determine the angle ⁇ formed between the optical axis 51 of video-camera 50 and the position of the individual 25.
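Combining the distance d, the angle β and the vehicle's position can be sketched as follows; the planar east/north coordinates in metres and the pixel-to-angle relation β = atan((x − cx)/f) are illustrative assumptions for the example, not details given in the text:

```python
import math

# Sketch of geo-referencing a detection: the angle beta between the
# optical axis and the individual follows from the horizontal pixel
# offset of the detection, and the world position follows from the
# vehicle position, the camera heading, beta and the distance d.

def bearing_angle(pixel_x: float, center_x: float, focal_px: float) -> float:
    """Angle beta (radians) between the optical axis and the detection."""
    return math.atan2(pixel_x - center_x, focal_px)

def individual_position(vehicle_en, camera_heading_rad, d, beta):
    """East/north coordinates (metres) of the detected individual."""
    e, n = vehicle_en
    theta = camera_heading_rad + beta   # absolute bearing of the person
    return (e + d * math.sin(theta), n + d * math.cos(theta))

# Detection on the optical axis (beta = 0), camera facing north, 5 m away:
pos = individual_position((100.0, 200.0), 0.0, 5.0,
                          bearing_angle(320.0, 320.0, 1000.0))
# pos == (100.0, 205.0)
```

A full implementation would also add the known offset of the camera from the vehicle barycentre and convert the planar result back to GPS coordinates.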
- This can be, for example, an accelerometer, a gyroscope, or a magnetometer, but the inertial measurement unit 300 can also be equipped with two, or more, of any of the above disclosed sensors.
- the inertial measurement unit 300 is arranged to instantaneously determine the spatial orientation of the industrial vehicle 10 and to generate corresponding inertial data i(ti) .
- the data detected by the inertial sensors 310 can be sent to a remote processing unit, which determines the spatial orientation of vehicle 10, according to the detected inertial data.
- From the inertial data i(ti) it is possible to determine the trajectory of the movement of the industrial vehicle 10 and, therefore, the risk of collision between vehicle 10 and the individual. In this way it is possible, in particular, to establish whether the safety of individual 25 is really endangered by the closeness of industrial vehicle 10 or whether, on the contrary, the industrial vehicle 10 will not create unsafe conditions for the individual in spite of its closeness. As a consequence, it is possible to avoid stopping the industrial vehicle 10 even if an individual 25 is near it, thus optimizing productivity.
- the inertial unit 300 is arranged to detect the inertial data in at least two successive instants t1 and t2.
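The trajectory-based risk check described above can be sketched as follows: from two successive position fixes p(t1), p(t2) and the yaw rate from the inertial unit, the vehicle's path is projected a few seconds ahead and the individual is tested against a safety corridor around it. The corridor width, the time horizon and the constant-speed assumption are illustrative choices, not figures from the text:

```python
import math

# Project the vehicle path by dead reckoning (integrating the inertial
# yaw rate) and flag a risk only if the individual lies close to the
# projected path, not merely close to the vehicle.

def collision_risk(p1, p2, yaw_rate, individual, dt=1.0,
                   horizon_s=5.0, corridor_m=2.0):
    vx, vy = (p2[0] - p1[0]) / dt, (p2[1] - p1[1]) / dt
    x, y = p2
    heading = math.atan2(vy, vx)
    speed = math.hypot(vx, vy)
    for _ in range(int(horizon_s / dt)):
        heading += yaw_rate * dt          # inertial yaw rate, rad/s
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        if math.hypot(individual[0] - x, individual[1] - y) < corridor_m:
            return True                   # path passes close to the person
    return False

# Vehicle moving east at 2 m/s, not turning, person 6 m ahead on its path:
risky = collision_risk((0, 0), (2, 0), 0.0, (8, 0))
# risky is True; the same person 10 m off the path would give False.
```

This is what lets the system keep the vehicle moving when a nearby individual is, for example, behind a warehouse wall and off every reachable trajectory.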
- the processing unit 150 is mounted on the vehicle 10, whilst in the case of figure 5, the same is arranged in a remote position.
- the solution shown in figure 4 can be advantageous since the processing unit 150 processes the data flows received from the video-cameras 50 and infers from these the information of interest, which is then transmitted, for example, to an operations centre. In this way an excessively heavy flow of data towards the outside is avoided.
- the video-cameras 50 can be installed substantially anywhere and in any number on the industrial vehicle 10.
- six video-cameras 50 can be provided, in order to cover the desired total angle of view.
- the total angle of view covered by the video-cameras 50, which are mounted on a vehicle 10, is set between 270° and 360°, for example 360°.
- one, or more, fixed video-cameras 60 can be also provided configured in such a way to acquire a plurality of additional images.
- the additional images can be sent to a central processing unit 250, which provides to process the same obtaining additional processed images.
- the central processing unit 250 is, preferably, arranged to process the above disclosed plurality of additional images and to supplement the results of this processing step with the data obtained by the vehicular processing unit 150. In this way, the central processing unit 250 is arranged to generate a geo-referenced map of industrial vehicle 10 and each individual 25 that are present within the working area 200.
- a plurality of industrial vehicles is provided, for example 2 vehicles 10a and 10b, each of which is provided with a respective vehicular processing unit 150a and 150b, respectively, arranged to process the data acquired by the video-cameras 50 mounted on the same industrial vehicle.
- the data processed by each vehicular processing unit 150a and 150b, are sent to the central processing unit 250, which provides, as above disclosed, to supplement these data with those acquired by the fixed video-cameras and to generate a geo-referenced map of all the vehicles 10a, 10b and of all the individuals 25a, 25b that are within the working area 200.
- an emitting device can be provided for emitting an alarm signal, for example a visual and/or audio alarm signal, if the detected individual is at a distance from the industrial vehicle 10 less than a predetermined threshold distance d*.
- a stop procedure of the industrial vehicle 10 can be immediately started.
- d2* can be set between 4 and 6 m, for example 5 m, whilst d* can be set between 5 and 10 m.
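The two-threshold logic above (an alarm at the outer distance d*, an automatic stop at the inner distance d2*) can be sketched as follows; the concrete values 5 m and 8 m are illustrative picks inside the ranges given in the text:

```python
# Two-level safety reaction driven by the estimated distance of the
# individual from the industrial vehicle.

D2_STOP_M = 5.0    # d2*: stop threshold, between 4 and 6 m
D_ALARM_M = 8.0    # d*:  alarm threshold, between 5 and 10 m

def safety_action(distance_m: float) -> str:
    if distance_m < D2_STOP_M:
        return "stop"       # immediately start the stop procedure
    if distance_m < D_ALARM_M:
        return "alarm"      # emit a visual and/or audio alarm signal
    return "none"

actions = [safety_action(d) for d in (3.0, 6.0, 12.0)]
# actions == ["stop", "alarm", "none"]
```

In the invention this check would be gated by the trajectory analysis, so that neither action fires for an individual who cannot be reached by the vehicle.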
- the block diagram 500 of figure 6 illustrates the main steps of the method for determining the presence of an individual close to the industrial vehicle 10.
- the method provides an acquiring step for acquiring a plurality of images by said plurality of video-cameras 50 mounted on the industrial vehicle 10, block 501.
- a determining step is carried out for determining the position of the industrial vehicle 10 within the working area 200, generating corresponding position data p(ti), block 502.
- the method, furthermore, provides a generation step of inertial data i(ti) and, consequently, the determination of the spatial orientation of the industrial vehicle 10, block 503.
- a processing step is, then, provided for processing the acquired images, the position data and the inertial data by the processing unit, for determining the presence, or not, of an individual within the above disclosed working area and, in that case, his/her position, block 504. More in detail, the above described algorithm is arranged to verify the presence of the silhouettes of individuals in the above disclosed plurality of images acquired by the video-cameras 50, once it has been trained to recognize silhouettes of individuals in a plurality of reference images 50' resident in a database of images of individuals.
- the training step provides for associating parts, or complete silhouettes, of individuals with predetermined detected shapes.
- the processing step of the images comprises the succession of steps diagrammatically shown in figure 7.
- each image 100 that is acquired by the video-cameras 50a-50d is, at first, converted into grey levels, and, then, a first recognition algorithm based on Haar-cascade classifiers is applied.
- determined areas of image 100 are identified, for example 3 areas 101, 102 and 103, where one, or more, individuals 25 could be present.
- These areas 101, 102 and 103, and only these, are then subject to a second recognition algorithm, preferably an algorithm based on Histogram of oriented gradient, or HOG.
- This specific technical solution allows optimizing the method for determining the presence of individuals 25, both in terms of processing duration and reliability.
- the use of classifiers based on Haar-like features allows selecting very quickly the areas 101-103, i.e. the first algorithm carries out, very quickly, a first screening of the acquired images, obtaining the areas of interest.
- the application of the second, much more accurate, algorithm only to the areas selected by the first algorithm allows identifying, highly accurately, the presence of one or more individuals within the images without slowing down the whole processing step, because the most complicated and elaborate calculation is carried out only on the portions 101-103 of the starting image 100, and not on the whole image 100.
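The two-stage screening can be expressed as a small piece of control flow: a fast, coarse detector (standing in for the Haar-cascade stage) selects candidate regions, and an accurate but slower classifier (standing in for the HOG stage) runs only on those regions, never on the full image. The detector functions below are placeholders invented for the sketch; in practice OpenCV's cascade classifier and HOG people detector would fill these roles:

```python
# Control-flow sketch of cascade screening: the expensive classifier is
# applied only to regions that survive the cheap first filter.

def two_stage_detect(image_regions, fast_filter, accurate_classifier):
    """Return regions confirmed by the accurate classifier, trying it
    only on regions the fast filter lets through."""
    candidates = [r for r in image_regions if fast_filter(r)]
    return [r for r in candidates if accurate_classifier(r)]

# Toy example: regions are labelled dicts; the fast stage passes anything
# vaguely person-shaped, the accurate stage confirms true persons only.
regions = [{"id": 1, "tall": True,  "person": True},
           {"id": 2, "tall": True,  "person": False},
           {"id": 3, "tall": False, "person": False}]
hits = two_stage_detect(regions,
                        fast_filter=lambda r: r["tall"],
                        accurate_classifier=lambda r: r["person"])
# hits contains only region 1
```

The speed gain is exactly the text's point: the accurate classifier ran on two regions instead of three here, and on a real image on a handful of windows instead of every window position.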
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
Abstract
A system for determining the position of an individual (25) within a working area, in particular a working area where an industrial vehicle (10) operates. The system comprises an industrial vehicle (10) that is free to move in a plurality of directions within the working area (200) and a plurality of video-cameras (50) mounted on the same industrial vehicle (10). A GPS unit (400) is, then, provided for determining the position of the industrial vehicle (10) and for generating corresponding position data p(ti). The data and the images are, then, processed by a processing unit (150), by applying at least a predetermined object recognition algorithm, in such a way as to determine the presence of an individual (25) within the working area and his/her spatial position.
Description
TITLE
METHOD AND SYSTEM FOR DETERMINING THE POSITION OF AN INDIVIDUAL IN A DETERMINED WORKING AREA
DESCRIPTION
Field of the invention
The present invention relates to the field of safety in working areas, in particular port terminals, within which industrial vehicles such as "reach stackers" and "forklifts" operate for handling containers.
More in detail, the present invention relates to a system and a method for determining the position of an individual within a working area where an industrial vehicle operates.
Background of the invention
As known, within port areas, vehicles such as "reach stackers" and "forklifts" are used for handling containers, which can be very big and can weigh up to 70 tons.
Unfortunately, during load and unload operations, it is possible that an accident involving the human workers situated near the industrial vehicle can occur.
An attempt at solving the above disclosed problem is described in US 2007/229238. In this document a system is described that is mounted on an industrial vehicle for determining the presence of obstacles within a scene around the vehicle. In particular, the system provides a series of video-cameras, each of which is operatively connected to an image processor for processing the images acquired by the relative video-camera. When an individual is present within the angle of view of a video-camera, the image processor connected to it identifies the individual within a portion of the detected image.
Nevertheless, this solution has many drawbacks.
Firstly, the image-processing method appears to require a long time to complete. This can lead to late detection of the individual or even to an erroneous evaluation, since several seconds elapse between the start of the processing and the availability of the result, during which the situation can change completely.
Another important drawback of the above disclosed method is that it provides the automatic stop of the vehicle whenever an individual is detected within a threshold distance from the vehicle. However, on the basis of the distance between the individual and the vehicle alone, it is not possible to determine whether there is a real risk of an accident: even though the individual is close to the vehicle, he/she could be in a sheltered place, for example behind structural elements, e.g. industrial warehouses, or in an area that cannot be reached by the vehicle because of physical obstacles. Therefore, the above disclosed method implies the risk of stopping the vehicle even when the safety of the individual near it is not really in danger, with a consequent loss of productivity without any justifiable reason.
Another solution with similar drawbacks is also disclosed in US2015/151725.
Summary of the invention
It is, therefore, an object of the present invention to provide a system for determining the position of an individual within a determined working area, in particular a port area, where an industrial vehicle operates for handling containers.
It is a further object of the invention to provide such a system, which is able to determine whether the individual is positioned in a dangerous place, because the area is used for handling containers by means of vehicles or machinery of different kinds, and, where appropriate, to communicate the danger to the operator on board the vehicle and/or to the detected individual.
It is, in particular, an object of the invention to provide a system for determining, in a rapid and, at the same time, reliable way, the position of an individual within a determined working area.
These and other objects are achieved by the system, according to the invention, for determining the position of an individual within a determined working area, said system comprising:
- an industrial vehicle arranged to be free to move in a plurality of directions within the working area;
- a plurality of video-cameras mounted on said industrial vehicle, each video-camera of said plurality being arranged to acquire a sequence of images at a respective angle of view αi;
whose main characteristic is to provide, furthermore:
- a GPS unit arranged to instantaneously determine the position of the industrial vehicle within said working area and to generate corresponding position data p(ti);
- a processing unit arranged to process said image data acquired by said plurality of video-cameras by applying at least a predetermined object recognition algorithm, said processing unit arranged to determine, through said processing, the presence of an individual and his/her distance from said industrial vehicle;
said processing unit being, furthermore, arranged to process said position data acquired by said GPS unit for determining the spatial position of said industrial vehicle and, therefore, the spatial position of said individual.
Advantageously, an inertial measurement unit is furthermore provided, mounted on said industrial vehicle and equipped with at least one inertial sensor. In particular, the inertial measurement unit is arranged to generate inertial data i(ti), on the basis of which the spatial orientation of said industrial vehicle is determined, instant by instant. In this case, the processing unit is arranged also to process said inertial data, and to combine them with said position data, in order to determine the trajectory of the movement of the industrial vehicle and the risk of colliding with the individual.
In particular, the system is arranged to verify not only that the detected individual is close to the industrial vehicle on which the video-cameras are mounted, but also that the industrial vehicle is moving along a trajectory which can be dangerous for the safety of the individual.
In particular, the inertial sensor can be selected among:
- at least one accelerometer;
- at least one gyroscope;
- at least one magnetometer;
- a combination thereof.
In particular, the processing unit comprises at least a vehicular processing unit mounted on the vehicle, said vehicular processing unit being arranged to process said data acquired by said video-cameras, which are mounted on said industrial vehicle, for determining the presence, or not, of an individual within the working area, and for determining the distance between the individual and the industrial vehicle.
In an exemplary embodiment of the invention, a plurality of industrial vehicles is provided, each industrial vehicle of said plurality being provided with a respective vehicular processing unit, which is arranged to process the data acquired by the video-cameras mounted on the same industrial vehicle for determining the presence of one, or more, individuals within the working area and their spatial position within the working area. In practice, by combining all the results obtained by processing all the data, a map is obtained of the relative positions of the industrial vehicles with respect to the individual, or individuals, present within the working area.
Advantageously, a plurality of fixed video-cameras is furthermore provided, adapted to acquire a plurality of additional images of said working area. In particular, the processing unit comprises a central processing unit arranged to process the plurality of additional images and to supplement the results of that processing with the results of said, or each, vehicular processing unit, in such a way as to generate a geo-referenced map of the industrial vehicles and of the individuals that are present within the working area.
In particular, said plurality of video-cameras can be configured in such a way as to obtain, in all, a total angle of view αtot set between 270° and 360° about the industrial vehicle.
Preferably, the total angle of view αtot is 360°.
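As a rough illustration of this coverage requirement, the minimum number of evenly spaced video-cameras needed for a given total angle of view can be computed as follows. This is only a sketch: it assumes non-overlapping fields of view, and the function name is ours, not taken from the description.

```python
import math

def cameras_needed(per_camera_fov_deg: float, total_fov_deg: float = 360.0) -> int:
    """Minimum number of cameras whose individual fields of view
    (assumed non-overlapping and evenly spaced) cover the total angle."""
    return math.ceil(total_fov_deg / per_camera_fov_deg)

# e.g. four cameras with a 90-degree field of view cover the full 360 degrees,
# matching the four-camera arrangement of figures 3 to 5
print(cameras_needed(90.0))         # 4
print(cameras_needed(70.0, 270.0))  # 4
```

In practice overlapping fields of view would be used to avoid blind spots at the seams, which only increases the camera count.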
According to another aspect of the invention, a method for determining the position of an individual within a determined working area, in which an industrial vehicle is free to move in a plurality of directions, comprises the steps of:
- acquiring a plurality of images by means of a plurality of video-cameras mounted on the industrial vehicle;
- determining the position of the industrial vehicle within said working area, and generating corresponding position data p(ti), said determining step of the position being carried out by means of a GPS unit;
- processing said acquired images and said position data comprising the steps of:
- applying at least one predetermined object recognition algorithm to said acquired images for
determining the presence of an individual and the distance of said detected individual from said industrial vehicle;
- determining the spatial position of said detected individual by combining said determined distance and said position data of said industrial vehicle.
Advantageously, a determining step is, furthermore, provided of the spatial orientation of the industrial vehicle according to a plurality of inertial data i(ti) detected by an inertial unit, which is mounted on said industrial vehicle.
In particular, the video-cameras mounted on the industrial vehicle can be arranged to cover a total angle of view set between 270° and 360°.
Advantageously, the determining step of the spatial orientation can be carried out by means of an inertial measurement unit. In this case, the output of the inertial measurement unit is a spatial orientation datum of the industrial vehicle on which it is installed. Alternatively, the data detected by said, or each, inertial sensor are sent to a remote processing unit, which determines the spatial orientation of the industrial vehicle on the basis of such data.
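One simple way an inertial measurement unit's samples can yield the vehicle's orientation, instant by instant, is by integrating successive gyroscope yaw-rate readings. The sketch below is illustrative only; the sample rate, function name and starting heading are assumptions, and a real unit would also fuse accelerometer and magnetometer data to limit drift.

```python
def integrate_yaw(initial_heading_deg, yaw_rates_deg_s, dt):
    """Dead-reckon the vehicle heading by accumulating gyroscope yaw-rate
    samples taken every `dt` seconds; heading is kept in [0, 360)."""
    heading = initial_heading_deg
    for rate in yaw_rates_deg_s:
        heading = (heading + rate * dt) % 360.0
    return heading

# 2 s of a steady 45 deg/s right turn starting from north
print(integrate_yaw(0.0, [45.0] * 20, 0.1))  # 90.0
```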
Advantageously, a verifying step is furthermore provided for verifying whether the detected individual is positioned in a predetermined dangerous position, and an emitting step is furthermore provided for emitting an alarm signal if said spatial position of said detected individual coincides with said predetermined dangerous position.
In particular, an emitting step can furthermore be provided for emitting an alarm signal if the detected individual is in a predetermined dangerous position. For example, the alarm signal can be a visual and/or audio signal.
In particular, a plurality of dangerous positions is provided, whose spatial coordinates are known. Therefore, if it is verified that the individual is placed in one of the dangerous positions of said plurality, said alarm signal is emitted.
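The check against a set of known dangerous positions can be sketched as follows. The coordinates, the tolerance radius and the function names are illustrative assumptions, not values from the description; a real deployment might instead test membership in polygonal off-limits areas.

```python
def is_in_dangerous_position(position, dangerous_positions, tolerance_m=2.0):
    """Return True when the detected individual's coordinates fall within
    `tolerance_m` metres of any known dangerous position.
    `position` and the entries of `dangerous_positions` are (x, y) tuples
    in a common metric reference frame."""
    px, py = position
    for dx, dy in dangerous_positions:
        if ((px - dx) ** 2 + (py - dy) ** 2) ** 0.5 <= tolerance_m:
            return True
    return False

danger_zones = [(10.0, 5.0), (40.0, 12.0)]  # hypothetical coordinates
print(is_in_dangerous_position((11.0, 5.5), danger_zones))   # True
print(is_in_dangerous_position((25.0, 25.0), danger_zones))  # False
```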
Advantageously, if the detected individual is at a distance from the industrial vehicle less than a second predetermined threshold distance d2* < d*, a stop step can be provided to completely stop the industrial vehicle.
In particular, the images acquired by the video-cameras mounted on the vehicle can be displayed on a display positioned in the cockpit of the industrial vehicle. In this way, the worker on board the industrial vehicle is able to directly monitor the situation and, if necessary, to carry out the appropriate safety manoeuvre, in order to avoid colliding with the individual situated near the industrial vehicle.
In an exemplary embodiment of the invention, also the images acquired by the fixed video-cameras, are displayed on a display, which is placed inside the cockpit of the industrial vehicle.
In particular, the processing step of the acquired images comprises the following steps:
- converting the acquired images into grey-level images;
- applying a recognition algorithm based on Haar-cascade classifiers to the grey-level images, obtaining areas of interest on said acquired images.
Concerning Haar-cascade classifiers see, for example, [Viola and Jones, "Rapid object detection using a boosted cascade of simple features", Computer Vision and Pattern Recognition, 2001].
Preferably, the processing step of the acquired images furthermore provides for applying a Histogram of Oriented Gradients, or HOG, algorithm to said areas of interest of the above disclosed plurality of images, which have been processed by the algorithm based on Haar-cascade classifiers.
Concerning HOG algorithms see, for example, [N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Comput. Soc. Conf. CVPR, 2005, vol. 1, pp. 886-893].
Advantageously, a tracking step is provided to "follow" an individual who has already been detected in a previous detecting step. More in detail, the tracking step is useful to quickly identify an individual who has already been identified as such in a determined position during a previous detection, and who has temporarily escaped, for some reason, from the view of the video-cameras. This can happen, in particular, because an obstacle is interposed between the video-cameras and the individual, e.g. a container, a building, or a machine, which temporarily "hides" the individual from the system, or because the individual has temporarily left the working area. In other words, the tracking step allows the processing steps carried out by applying the second recognition algorithm to be sped up when determining the individual's position, even if he/she has been temporarily hidden from the video-cameras.
In fact, once "marked" as an individual, it will be easier and faster for the system to identify the same individual again by applying the second recognition algorithm.
Examples of techniques that can be used for tracking the individual are: the SIFT ("Scale Invariant Feature Transform") method (see [Lowe, David G., "Object recognition from local scale-invariant features", Proceedings of the International Conference on Computer Vision, vol. 2, 1999, pp. 1150-1157]), the optical flow tracking method (see [S. S. Beauchemin, J. L. Barron (1995), "The computation of optical flow", ACM, New York, USA] and [David J. Fleet and Yair Weiss (2006), "Optical Flow Estimation", in Paragios et al., Handbook of Mathematical Models in Computer Vision, Springer, ISBN 0-381-26311-3]) and the Kalman filter method.
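Of the named techniques, the Kalman filter is the simplest to sketch: a constant-velocity model keeps predicting the individual's position through short occlusions, so that when he/she reappears the detector can be pointed at the predicted area. All noise values and the class name below are illustrative assumptions, not taken from the description.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman tracker for one individual's (x, y)
    position on the ground plane."""
    def __init__(self, x, y, dt=0.1):
        self.state = np.array([x, y, 0.0, 0.0])   # [x, y, vx, vy]
        self.P = np.eye(4)                         # state covariance
        self.F = np.eye(4)                         # constant-velocity motion model
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.eye(2, 4)                      # we observe x, y only
        self.Q = 0.01 * np.eye(4)                  # process noise (assumed)
        self.R = 0.5 * np.eye(2)                   # measurement noise (assumed)

    def predict(self):
        """Advance one time step; call this even when the individual is hidden."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        """Correct the prediction with a new detection (zx, zy)."""
        z = np.array([zx, zy])
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

While the individual is occluded by a container or a building, only `predict` is called; the growing covariance `P` then expresses the increasing uncertainty about his/her position.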
Brief description of the drawings
The invention will now be illustrated with the following description of an exemplary embodiment thereof, exemplifying but not limitative, with reference to the attached drawings, wherein:
Figure 1 diagrammatically shows, in a plan view, a system, according to the invention, for determining the position of an individual within a working area in which an industrial vehicle operates for handling containers in operating conditions;
Figure 2 diagrammatically shows, in an elevational side view, a possible embodiment of the system of figure 1;
Figures 3 and 4 diagrammatically show, in a plan view, the system of figure 2;
Figure 5 diagrammatically shows, in a plan view, an exemplary embodiment of the system of figures 2 to 4;
Figure 6 shows a flow diagram of a possible succession of steps provided by the method for detecting the presence of an individual in proximity to an industrial vehicle;
Figure 7 diagrammatically shows a possible succession of steps provided by the method, according to the invention, for processing the images acquired by the video-cameras in order to detect the presence of an individual close to the industrial vehicle;
Figure 8 diagrammatically shows in a plan view an exemplary embodiment of the system of figure 1;
Figure 9 diagrammatically shows, in a plan view, another exemplary embodiment of the system of figure 1.
Description of preferred exemplary embodiments
In figure 1, a first embodiment is diagrammatically shown of a system 1 for determining the position of an individual 25 within a working area 200, in which an industrial vehicle 10 operates, in particular a "forklift" or a "reach stacker", that is free to move in a plurality of directions. The system 1 provides, in particular, a predetermined number of video-cameras 50 mounted on the industrial vehicle 10. Each video-camera 50 is arranged to acquire a series of image data 55 at a respective angle of view αi. According to the invention, and as diagrammatically shown in figure 4, it is possible to install, for example, a first video-camera 50a at the upper part of vehicle 10, and a second video-camera 50b at the bottom part of the same vehicle 10. More in detail, each video-camera 50 is a digital video-camera. Therefore, the acquired images are digital images constituted, as known, of a matrix of points, or pixels.
Alternatively, the video-cameras 50 can be analog video-cameras. In this case, the images acquired by the analog video-cameras, before being sent to the processing unit 150, are converted into digital form, in particular by a converter device, not shown in the figure for reasons of simplicity. It is also provided that some of the video-cameras 50 of system 1 can be digital video-cameras and the others analog video-cameras.
In an exemplary embodiment of the invention, diagrammatically shown in figure 5, four video-cameras 50a-50d are provided, each of which is installed at one of the four sides of the vehicle 10.
As shown in figures 4 and 5, the images acquired by the video-cameras 50 are transmitted to a processing unit 150, which processes the images received from the video-cameras 50 through at least one object recognition algorithm. In this way, it is possible to determine the presence of an individual. Once the presence of the individual has been detected, the distance d of the same individual from the industrial vehicle 10 is then determined, or more precisely from the video-camera 50 that has detected the same individual.
An identifying step is then provided for identifying, in the above disclosed plurality, the video-camera 50 which has detected the individual 25. The distance of this video-camera from the barycentre of the vehicle 10 is known. Therefore, on the basis of the spatial position of vehicle 10, which has been determined by the GPS unit as above disclosed, it is possible to accurately know the spatial position of the identified video-camera 50.
More in detail, as diagrammatically shown in figure 2, the above disclosed recognition algorithms are adapted to define an area of interest 101, for example rectangular-shaped, comprising the detected figure, of which it is possible to determine the height in pixels. As a person skilled in the art would understand, by knowing the focal length of video-camera 50 and the height value, expressed in pixels, of said area of interest, and by using known geometric relations, it is possible to determine the distance d of the individual from the video-camera 50 that has detected him/her.
In particular, as diagrammatically shown in figure 2, the distance d is calculated along the optical axis 51 of video-camera 50, assuming that the individual is positioned on the ground, and is therefore also known as the "ground distance". In figure 2, in fact, the distance d is indicated as the distance calculated along the optical axis 51 of video-camera 50, extended to the area of interest 101 of individual 25.
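The "known geometric relations" referred to above reduce, in the simplest pinhole-camera reading, to similar triangles: an object of real height H that appears h pixels tall at focal length f (in pixels) lies at roughly d = f·H/h. The assumed average person height is an illustrative parameter of ours, not a value from the description.

```python
def ground_distance(focal_length_px, person_height_m, bbox_height_px):
    """Pinhole-camera estimate of the distance along the optical axis,
    given the pixel height of the detected area of interest."""
    return focal_length_px * person_height_m / bbox_height_px

# A 1.75 m person appearing 350 px tall with a 1400 px focal length:
print(ground_distance(1400.0, 1.75, 350.0))  # 7.0 (metres)
```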
System 1 furthermore comprises a GPS unit 400 arranged to instantaneously determine the position of industrial vehicle 10, in particular within the working area 200, and to generate corresponding position data p(ti). The position data are then processed, in particular by the processing unit 150, for determining the spatial coordinates of industrial vehicle 10.
The position data of the industrial vehicle 10, and the distance d of the detected individual from the industrial vehicle 10 itself, therefore allow "geo-referencing" the individual, i.e. determining his/her spatial position; in other words, his/her spatial coordinates are determined. In particular, by processing the digital images, it is possible to determine, as above described, the distance d of individual 25 from video-camera 50.
It is appropriate to note that the information deriving from this processing alone leaves a degree of uncertainty in determining the relative position between individual 25 and vehicle 10. In fact, from the distance d only, it is possible to identify the arc of circle with radius d on which the individual 25 is placed, but not his/her exact position on that arc (figure 1).
Even though this degree of uncertainty can be acceptable, in an exemplary embodiment of the invention it is, however, provided that the processing unit 150 can carry out an additional processing of the images acquired by the video-cameras 50, in order to determine the angle β formed between the optical axis 51 of video-camera 50 and the position of the individual 25. Using known geometric relations, it is then possible to univocally determine the position of individual 25 with respect to vehicle 10, i.e. to determine at which point of the arc of circle 125 the individual 25 is situated, thus eliminating the above described uncertainty. As a consequence, by combining the exact position of the individual 25 with respect to vehicle 10 with the position data obtained by GPS unit 400, it is possible to determine, with a high level of precision, the spatial position of the individual 25.
In this way, it is possible, in particular, to determine very accurately whether the individual is positioned within an off-limits area, i.e. within an area that is considered dangerous for his/her safety because it is, for example, used for handling loads by the industrial vehicles 10 themselves, or by apparatuses and machinery of a different kind, for example elevators.
In an exemplary embodiment, the system 1 furthermore comprises an inertial measurement unit 300 equipped with at least one inertial sensor 310. This can be, for example, an accelerometer, a gyroscope, or a magnetometer, but the possibility is also foreseen that the inertial measurement unit 300 is equipped with two, or more, of any of the above disclosed sensors. More precisely, the inertial measurement unit 300 is arranged to instantaneously determine the spatial orientation of the industrial vehicle 10 and to generate corresponding inertial data i(ti). Alternatively, the data detected by the inertial sensors 310 can be sent to a remote processing unit, which determines the spatial orientation of vehicle 10 according to the detected inertial data.
In particular, according to the inertial data i(ti), it is possible to determine the trajectory of the movement of the industrial vehicle 10 and, therefore, the risk of collision between vehicle 10 and the individual. In this way, it is possible, in particular, to establish whether the safety of individual 25 is really in danger due to the closeness of industrial vehicle 10 or, on the contrary, whether the industrial vehicle 10 will not create unsafe conditions for the individual, in spite of its closeness. As a consequence, it is possible to avoid stopping the industrial vehicle 10 even if there is an individual 25 near it, thus optimizing productivity.
More in particular, in order to determine the industrial vehicle trajectory, the inertial unit 300 is arranged to detect the inertial data in at least two successive instants t1 and t2.
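A deliberately simple way to turn two successive position samples into a trajectory and a collision-risk decision is constant-velocity extrapolation: estimate the velocity from p(t1) and p(t2), project the path forward, and check whether it passes near the individual. The time horizon, danger radius and function names are illustrative assumptions.

```python
def closing_on_individual(p1, p2, individual_xy, dt,
                          horizon_s=5.0, danger_radius_m=5.0):
    """Estimate the vehicle's straight-line trajectory from two successive
    positions p1 = p(t1) and p2 = p(t2) sampled dt seconds apart, and check
    whether within `horizon_s` seconds it passes within `danger_radius_m`
    of the individual."""
    vx = (p2[0] - p1[0]) / dt
    vy = (p2[1] - p1[1]) / dt
    steps = 20
    for k in range(steps + 1):
        t = horizon_s * k / steps
        x = p2[0] + vx * t
        y = p2[1] + vy * t
        if ((x - individual_xy[0]) ** 2 + (y - individual_xy[1]) ** 2) ** 0.5 \
                <= danger_radius_m:
            return True
    return False

# vehicle moving east at 2 m/s towards an individual 8 m ahead: risk
print(closing_on_individual((0, 0), (2, 0), (10, 0), dt=1.0))   # True
# individual well off the projected path: no risk, no need to stop
print(closing_on_individual((0, 0), (2, 0), (10, 40), dt=1.0))  # False
```

The second case is exactly the situation the description wants to handle: an individual near the vehicle but outside its trajectory, for whom a stop would cost productivity without improving safety.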
In the case shown in figure 4, the processing unit 150 is mounted on the vehicle 10, whilst in the case of figure 5 it is arranged in a remote position. The solution shown in figure 4 can be advantageous, since the processing unit 150 processes the flows of data received from the video-cameras 50 and infers from these the information of interest, which is then transmitted, for example, to an operations centre. In this way, transmitting an excessively heavy flow of data towards the outside is avoided.
As an example, in figures 3 and 4, four video-cameras 50a-50d are provided, mounted on the cockpit roof of vehicle 10, whilst in figure 5 the four video-cameras 50a-50d are positioned at different parts of the bodywork of vehicle 10. However, the video-cameras 50 can be installed substantially anywhere, and in any number, on the industrial vehicle 10. For example, in the case of big vehicles, such as reach stackers, six video-cameras 50 can be provided, in order to cover the desired total angle of view.
In an exemplary embodiment, the total angle of view covered by the video-cameras 50 mounted on a vehicle 10 is set between 270° and 360°, for example 360°. As diagrammatically shown in figure 8, in addition to the video-cameras 50 mounted on the industrial vehicle, one, or more, fixed video-cameras 60 can also be provided, configured in such a way as to acquire a plurality of additional images. The additional images can be sent to a central processing unit 250, which processes them, obtaining additional processed images. The central processing unit 250 is preferably arranged to process the above disclosed plurality of additional images and to supplement the results of this processing step with the data obtained by the vehicular processing unit 150. In this way, the central processing unit 250 is arranged to generate a geo-referenced map of the industrial vehicle 10 and of each individual 25 present within the working area 200.
In the example of figure 9, a plurality of industrial vehicles is provided, for example two vehicles 10a and 10b, each of which is provided with a respective vehicular processing unit 150a and 150b, respectively, arranged to process the data acquired by the video-cameras 50 mounted on the same industrial vehicle. The data processed by each vehicular processing unit 150a and 150b are sent to the central processing unit 250, which, as above disclosed, supplements these data with those acquired by the fixed video-cameras and generates a geo-referenced map of all the vehicles 10a, 10b and of all the individuals 25a, 25b that are within the working area 200.
According to another aspect of the invention, an emitting device can be provided for emitting an alarm signal, for example a visual and/or audio alarm signal, if the detected individual is at a distance from the industrial vehicle 10 less than a predetermined threshold distance d*.
It is also provided that, if the detected individual is at a distance less than the threshold distance d*, or than a second threshold distance d2* less than d*, a stop procedure of the industrial vehicle 10 can be immediately started. In particular, d2* can be set between 4 and 6 m, for example 5 m, whilst d* can be set between 5 and 10 m.
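The graded response implied by the two thresholds can be sketched as follows. The specific values 8 m and 5 m are our picks from within the ranges the description gives (d* between 5 and 10 m, d2* between 4 and 6 m), and the return labels are illustrative.

```python
def safety_action(distance_m, d_star=8.0, d2_star=5.0):
    """Graded response: alarm inside the first threshold d*, full stop
    inside the stricter threshold d2* < d*."""
    if distance_m <= d2_star:
        return "stop"
    if distance_m <= d_star:
        return "alarm"
    return "none"

print(safety_action(12.0))  # none
print(safety_action(7.0))   # alarm
print(safety_action(4.0))   # stop
```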
The block diagram 500 of figure 6 illustrates the main steps of the method for determining the presence of an individual close to the industrial vehicle 10. In particular, the method provides an acquiring step for acquiring a plurality of images by said plurality of video-cameras 50 mounted on the industrial vehicle 10, block 501. Then, a determining step is carried out for determining the position of the industrial vehicle 10 within the working area 200, generating corresponding position data p(ti), block 502. The method furthermore provides a generation step of inertial data i(ti) and, consequently, the determination of the spatial orientation of the industrial vehicle 10, block 503. A processing step is then provided for processing the acquired images, the position data and the inertial data, by the processing unit, for determining the presence, or not, of an individual within the above disclosed working area and, if so, his/her position, block 504. More in detail, the above described algorithms are arranged to verify the presence of the silhouettes of individuals in the above disclosed plurality of images acquired by the video-cameras 50, once they have been trained to recognize silhouettes of individuals in a plurality of reference images 50' resident in a database of images of individuals. In particular, the training step provides for associating parts, or complete silhouettes, of individuals with predetermined detected shapes.
Preferably, the processing step of the images comprises the succession of steps diagrammatically shown in figure 7. More in detail, each image 100 acquired by the video-cameras 50a-50d is at first converted into grey levels, and then a first recognition algorithm based on Haar-cascade classifiers is applied. During this processing step, determined areas of image 100 are identified, for example three areas 101, 102 and 103, where one, or more, individuals 25 could be present. These areas 101, 102 and 103, and only these, are then subjected to a second recognition algorithm, preferably an algorithm based on the Histogram of Oriented Gradients, or HOG. This specific technical solution allows optimizing the method for determining the presence of individuals 25, both in terms of processing duration and reliability. In fact, the use of classifiers based on Haar-like features allows selecting very quickly the areas 101-103, i.e. the areas where it is "suspected" that individuals 25 can be present. The use of the algorithm based on Histogram of Oriented Gradients features, moreover, makes the whole processing step highly reliable. In fact, the first algorithm allows carrying out, very quickly, a first screening of the acquired images, obtaining areas of interest. The application of the second, much more accurate, algorithm only to these areas selected by the first algorithm allows identifying, highly accurately, the presence of one, or more, individuals within the images without slowing down the whole processing step, because the most complicated and elaborate calculation is carried out only on the portions 101-103 of the starting image 100, and not on the whole image 100.
The foregoing description of a specific embodiment will so fully reveal the invention from the conceptual point of view that others, by applying current knowledge, will be able to modify and/or adapt such an embodiment for various applications without further research and without departing from the invention; it is therefore to be understood that such adaptations and modifications will have to be considered as equivalent to the specific embodiment. The means and the materials for realising the different functions described herein could have a different nature without, for this reason, departing from the field of the invention. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.
Claims
1. System for determining the position of an individual (25) within a determined working area (200), said system comprising:
- an industrial vehicle (10) configured to be free to move in a plurality of directions within said working area (200) ;
- a plurality of video-cameras mounted on said industrial vehicle (10), each video-camera of said plurality being arranged to acquire a series of image data at a respective angle of view αi;
said system characterised in that it comprises, furthermore:
- a GPS unit (400) arranged to instantaneously determine the position of the industrial vehicle (10) within said working area (200) and to generate corresponding position data p(ti) ;
- a processing unit (150,250) arranged to process said image data acquired by said plurality of video- cameras (50) by applying at least a predetermined object recognition algorithm, said processing unit
(150,250) arranged to determine, through said processing step, the presence of an individual (25) and his/her distance (d) from said industrial vehicle
(10) ;
and in that said processing unit (150,250) is, furthermore, arranged to process said position data acquired by said GPS unit in such a way to determine the spatial position of said industrial vehicle (10)
and, therefore, the spatial position of said individual (25) .
2. System, according to claim 1, wherein an inertial measurement unit (300) is, furthermore, provided mounted on said industrial vehicle (10), said inertial measurement unit (300) being equipped with at least one inertial sensor (310) and arranged to generate inertial data i(ti), on the basis of which the spatial orientation of said industrial vehicle (10) is determined, instant by instant, said processing unit (150,250) being arranged to process said inertial data and to combine them with said position data in order to determine the movement trajectory of said industrial vehicle (10) and the risk of colliding with said individual (25).
3. System, according to claim 1, wherein said inertial sensor (310) is selected from the group consisting of:
- at least one accelerometer ;
- at least one gyroscope;
- at least one magnetometer;
- a combination thereof.
4. System, according to any of the previous claims, wherein said processing unit (150,250) comprises:
- a vehicular processing unit (150) mounted on said vehicle (10) and arranged to process said data acquired by said video-cameras (50) mounted on said industrial vehicle (10) for determining the presence, or not, of an individual (25) within said working area (200), and the distance of said individual (25) from said industrial vehicle (10).
5. System, according to any of the previous claims, wherein a plurality of industrial vehicles (10a, 10b) is provided, each industrial vehicle (10a, 10b) of said plurality being provided with a respective vehicular processing unit (150a, 150b) arranged to process the data acquired by the video-cameras (50) mounted on the same industrial vehicle (10a, 10b), for determining the spatial position of at least an individual (25a, 25b) within said working area (200), in such a way as to obtain a map of relative positions between said industrial vehicles (10) and said, or each, individual (25).
6. System, according to any of the previous claims, wherein a plurality of fixed video-cameras (60) is, furthermore, provided arranged to acquire a plurality of additional images of said working area (200), said processing unit (150,250) comprising a central processing unit (250) arranged to process said plurality of additional images and to supplement the results of said processing step with the results of said, or each, vehicular processing unit (150), in such a way as to generate a geo-referenced map of said industrial vehicles (10a, 10b) and of said individuals (25a, 25b) that are present in said working area (200).
7. System, according to any of the previous claims, wherein said plurality of video-cameras (50a, 50b) is configured in such a way as to obtain a total angle of view αtot set between 270° and 360° about said industrial vehicle (10).
8. System, according to any of the previous claims, wherein said plurality of video-cameras (50a, 50b) is configured in such a way as to obtain a total angle of view αtot of 360°.
9. Method for determining the position of an individual within a determined working area (200), an industrial vehicle (10) being free to move in a plurality of directions within said working area (200), said method comprising the steps of:
- acquiring a plurality of images by a plurality of video-cameras (50) mounted on said industrial vehicle (10) ;
- determining the position of the industrial vehicle (10) within said working area (200) and generating corresponding position data p(ti), said determining step of the position being carried out by means of a GPS unit;
- processing of said acquired images and said position data comprising the steps of:
- applying to said acquired images at least one predetermined object recognition algorithm in order to determine the presence of an individual and the distance of said detected individual from said industrial vehicle;
- determining the spatial position of said detected individual by combining said determined distance and said position data of said industrial vehicle.
10. Method, according to claim 9, wherein a determining step is, furthermore, provided for determining the spatial orientation of the industrial vehicle (10)
according to a plurality of inertial data i(ti) detected by an inertial unit (300) mounted on said industrial vehicle (10) .
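Claim 10's spatial orientation from the inertial data i(ti) can be approximated by integrating the gyroscope's yaw rate between samples. A sketch under the assumption of single-axis integration at a fixed sample period; a real inertial unit (300) would typically also fuse accelerometer and magnetometer data:

```python
def integrate_heading(initial_heading_deg, yaw_rates_deg_s, dt):
    """Dead-reckon the vehicle heading from successive yaw-rate samples."""
    heading = initial_heading_deg
    for rate in yaw_rates_deg_s:
        heading = (heading + rate * dt) % 360.0
    return heading

# Ten samples of a constant 10 deg/s turn over one second: +10 deg total.
heading = integrate_heading(0.0, [10.0] * 10, 0.1)
```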
11. Method, according to claim 9, wherein a verifying step is, furthermore, provided for verifying if the detected individual (25) is in a predetermined dangerous position and being, furthermore, provided a step for emitting an alarm signal, if said spatial position of said detected individual coincides with said predetermined dangerous position.
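The verifying step of claim 11 reduces to a point-in-zone test followed by the alarm emission. A minimal sketch, with the dangerous position modelled as an axis-aligned rectangle (a modelling assumption; the claim does not fix the zone's shape):

```python
def in_dangerous_zone(position, zone):
    """zone = (xmin, ymin, xmax, ymax) in the working-area frame."""
    x, y = position
    xmin, ymin, xmax, ymax = zone
    return xmin <= x <= xmax and ymin <= y <= ymax

def check_and_alarm(individual_pos, zone):
    """Return the alarm signal if the detected individual (25) is
    inside the predetermined dangerous position, else None."""
    if in_dangerous_zone(individual_pos, zone):
        return "ALARM"
    return None

danger = (0.0, 0.0, 3.0, 3.0)   # e.g. the strip just behind the vehicle
signal_inside = check_and_alarm((1.5, 2.0), danger)
signal_outside = check_and_alarm((5.0, 5.0), danger)
```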
12. Method, according to claim 9, wherein said processing step of said images comprises the following steps:
- converting said acquired images in grey-level images ;
- applying a recognition algorithm based on Haar-cascade classifiers to said grey-level images, obtaining areas of interest (101-103) on said acquired images (100).
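The first conversion step of claim 12 is a standard luma-weighted sum of the colour channels; the Haar-cascade stage itself is usually delegated to a library detector (for example OpenCV's `CascadeClassifier`, an assumption here, not part of the claim). A sketch of the grey-level conversion alone:

```python
def to_grey_level(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples, 0-255) to
    grey levels using the ITU-R BT.601 luma weights."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

# A 1x3 test strip: white, black, and pure red pixels.
grey = to_grey_level([[(255, 255, 255), (0, 0, 0), (255, 0, 0)]])
# The grey image would then be passed to the Haar-cascade classifier,
# which returns the areas of interest (101-103) as bounding boxes.
```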
13. Method, according to claim 10, wherein said processing step of said images comprises, furthermore, an applying step of a Histogram of Oriented Gradients, or HOG, algorithm, only to said areas of interest (101-103) of said plurality of images, which have been processed through said algorithm based on said Haar-cascade classifiers, in such a way to speed up the processing step.
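A minimal sketch of claim 13's idea: computing an orientation histogram only inside an area of interest, so the cost scales with the ROI area rather than the full image. The 9-bin, unsigned-gradient layout follows the usual HOG convention, but this toy version skips the cell and block normalisation of a full descriptor:

```python
import math

def hog_histogram(image, roi, bins=9):
    """Unsigned-gradient orientation histogram over a region of interest.

    image: 2D list of grey levels; roi: (x0, y0, x1, y1), exclusive end.
    Only pixels inside the ROI are visited, which is the speed-up.
    """
    x0, y0, x1, y1 = roi
    hist = [0.0] * bins
    for y in range(max(y0, 1), min(y1, len(image) - 1)):
        for x in range(max(x0, 1), min(x1, len(image[0]) - 1)):
            gx = image[y][x + 1] - image[y][x - 1]   # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]   # vertical gradient
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle // (180.0 / bins)) % bins] += magnitude
    return hist

# A flat image has no gradients, so the histogram stays empty; only
# the 8x8 ROI is scanned, not the whole 16x16 image.
flat = [[128] * 16 for _ in range(16)]
hist = hog_histogram(flat, (4, 4, 12, 12))
```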
14. Method, according to any claim from 9 to 13, wherein a tracking step is, furthermore, provided for "following" an individual who has already been
detected in a determined position during a previous detecting step, in such a way to speed up the successive processing steps through said second recognition algorithm, in order to determine the position of the individual even though he/she is temporarily not detected by said video-cameras.
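The tracking step of claim 14 can be sketched with a constant-velocity predictor: when the video-cameras temporarily lose the individual, the last observed motion extrapolates the position, and later detections only need to be searched near the prediction. The two-sample history and the linear motion model are simplifying assumptions:

```python
def predict_next(history):
    """Extrapolate the next (x, y) position from the last two
    detections, assuming roughly constant velocity between frames."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def track(history, detection=None):
    """Append the new detection, or the prediction when the
    individual is temporarily not detected by the cameras."""
    history.append(detection if detection is not None else predict_next(history))
    return history[-1]

history = [(0.0, 0.0), (1.0, 0.0)]      # individual walking along +x
occluded = track(history)               # no detection this frame
```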
15. Method, according to any claim from 9 to 13, wherein an additional processing step is, furthermore, provided for processing said images acquired by said video-cameras (50) for determining the angle β formed between the optical axis (51) of the video-camera (50) of said plurality that has detected said individual (25) and the position of said individual (25).
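Claim 15's angle β can be recovered from the pinhole camera model: the horizontal offset of the individual's image from the principal point, divided by the focal length in pixels, gives tan β. The focal length and image width below are illustrative assumptions:

```python
import math

def bearing_from_pixel(u, cx, focal_px):
    """Angle (degrees) between the camera's optical axis (51) and the
    ray towards a detection at horizontal pixel coordinate u."""
    return math.degrees(math.atan2(u - cx, focal_px))

# 640-px-wide image, principal point at the centre, 400-px focal length.
on_axis = bearing_from_pixel(320, 320, 400)    # individual straight ahead
off_axis = bearing_from_pixel(720, 320, 400)   # 400 px to the right
```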
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IT102015000049708 | 2015-09-08 | ||
| ITUB2015A003491A ITUB20153491A1 (en) | 2015-09-08 | 2015-09-08 | METHOD AND SYSTEM TO DETECT THE PRESENCE OF AN INDIVIDUAL IN PROXIMITY OF AN INDUSTRIAL VEHICLE |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017042677A1 true WO2017042677A1 (en) | 2017-03-16 |
Family
ID=55069962
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2016/055279 Ceased WO2017042677A1 (en) | 2015-09-08 | 2016-09-02 | Method and system for determining the position of an individual in a determined working area |
Country Status (2)
| Country | Link |
|---|---|
| IT (1) | ITUB20153491A1 (en) |
| WO (1) | WO2017042677A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070229238A1 (en) * | 2006-03-14 | 2007-10-04 | Mobileye Technologies Ltd. | Systems And Methods For Detecting Pedestrians In The Vicinity Of A Powered Industrial Vehicle |
| US20130243259A1 (en) * | 2010-12-09 | 2013-09-19 | Panasonic Corporation | Object detection device and object detection method |
| US20130301911A1 (en) * | 2012-05-08 | 2013-11-14 | Samsung Electronics Co., Ltd | Apparatus and method for detecting body parts |
| US20150151725A1 (en) * | 2013-12-04 | 2015-06-04 | Mobileye Vision Technologies Ltd. | Systems and methods for implementing a multi-segment braking profile for a vehicle |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9230419B2 (en) * | 2010-07-27 | 2016-01-05 | Rite-Hite Holding Corporation | Methods and apparatus to detect and warn proximate entities of interest |
| GB2484133B (en) * | 2010-09-30 | 2013-08-14 | Toshiba Res Europ Ltd | A video analysis method and system |
- 2015-09-08: IT ITUB2015A003491A patent/ITUB20153491A1/en (status unknown)
- 2016-09-02: WO PCT/IB2016/055279 patent/WO2017042677A1/en (not_active Ceased)
Non-Patent Citations (1)
| Title |
|---|
| DALAL N ET AL: "Histograms of oriented gradients for human detection", Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20-25 June 2005, San Diego, CA, IEEE, Piscataway, NJ, USA, pages 886-893, vol. 1, XP031330347, ISBN: 978-0-7695-2372-9 * |
Cited By (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11971247B2 (en) | 2014-11-12 | 2024-04-30 | Helmerich & Payne Technologies, Llc | Oil rig drill pipe and tubing tally system |
| US12395610B2 (en) | 2014-11-12 | 2025-08-19 | Helmerich & Payne Technologies, Llc | Systems and methods for personnel location at a drilling site |
| US12338723B2 (en) | 2014-11-12 | 2025-06-24 | Helmerich & Payne Technologies, Llc | System and method for measuring characteristics of cuttings from drilling operations with computer vision |
| US10982950B2 (en) | 2014-11-12 | 2021-04-20 | Helmerich & Payne Technologies, Llc | Oil rig drill pipe and tubing tally system |
| US10958877B2 (en) | 2014-11-12 | 2021-03-23 | Helmerich & Payne Technologies, Llc | System and method for inhibiting or causing automated actions based on person locations estimated from multiple video sources |
| US12529288B2 (en) | 2014-11-12 | 2026-01-20 | Helmerich & Payne Technologies, Llc | Systems and methods for estimating rig state using computer vision |
| US12049812B2 (en) | 2014-11-12 | 2024-07-30 | Helmerich & Payne Technologies, Llc | System and method for measuring characteristics of cuttings from drilling operations with computer vision |
| US11378387B2 (en) | 2014-11-12 | 2022-07-05 | Helmerich & Payne Technologies, Llc | System and method for locating, measuring, counting, and aiding in the handling of drill pipes |
| US11408266B2 (en) | 2014-11-12 | 2022-08-09 | Helmerich & Payne Technologies, Llc | System and method for measuring characteristics of cuttings from drilling operations with computer vision |
| US11592282B2 (en) | 2014-11-12 | 2023-02-28 | Helmerich & Payne Technologies, Llc | Oil rig drill pipe and tubing tally system |
| US11859468B2 (en) | 2014-11-12 | 2024-01-02 | Helmerich & Payne Technologies, Llc | Systems and methods for estimating rig state using computer vision |
| US11906283B2 (en) | 2014-11-12 | 2024-02-20 | Helmerich & Payne Technologies, Llc | System and method for locating, measuring, counting, and aiding in the handling of drill pipes |
| US10997412B2 (en) | 2014-11-12 | 2021-05-04 | Helmerich & Payne Technologies, Llc | System and method for estimating rig state using computer vision for time and motion studies |
| US11917333B2 (en) | 2014-11-12 | 2024-02-27 | Helmerich & Payne Technologies, Llc | Systems and methods for personnel location at a drilling site |
| US10954729B2 (en) | 2015-08-31 | 2021-03-23 | Helmerich & Payne Technologies, Llc | System and method for estimating cutting volumes on shale shakers |
| US11948322B2 (en) | 2015-08-31 | 2024-04-02 | Helmerich & Payne Technologies, Llc | Systems for monitoring drilling cuttings |
| US12444040B2 (en) | 2018-10-22 | 2025-10-14 | Motive Drilling Technologies, Inc. | Systems and methods for oilfield drilling operations using computer vision |
| US11361646B2 (en) | 2018-10-22 | 2022-06-14 | Motive Drilling Technologies, Inc. | Systems and methods for oilfield drilling operations using computer vision |
| US12014482B2 (en) | 2018-10-22 | 2024-06-18 | Motive Drilling Technologies, Inc. | Systems and methods for oilfield drilling operations using computer vision |
| US10957177B2 (en) | 2018-10-22 | 2021-03-23 | Motive Drilling Technologies, Inc. | Systems and methods for oilfield drilling operations using computer vision |
| US12049822B2 (en) | 2018-10-22 | 2024-07-30 | Motive Drilling Technologies, Inc. | Systems and methods for oilfield drilling operations using computer vision |
| US12359551B2 (en) | 2019-02-05 | 2025-07-15 | Magnetic Variation Services, Llc | Geosteering methods and systems for improved drilling performance |
| US12006818B2 (en) | 2019-02-05 | 2024-06-11 | Motive Drilling Technologies, Inc. | Downhole display |
| US11162356B2 (en) | 2019-02-05 | 2021-11-02 | Motive Drilling Technologies, Inc. | Downhole display |
| US12012809B2 (en) | 2019-10-16 | 2024-06-18 | Magnetic Variation Services LLC | Drill pipe tally system |
| US12454868B2 (en) | 2019-10-16 | 2025-10-28 | Magnetic Variation Services LLC | Drill pipe tally system |
| US20240192700A1 (en) * | 2022-12-09 | 2024-06-13 | Blooloc Nv | Person detection method and system for collision avoidance |
| EP4383217A1 (en) * | 2022-12-09 | 2024-06-12 | BlooLoc NV | Person detection method and system for collision avoidance |
Also Published As
| Publication number | Publication date |
|---|---|
| ITUB20153491A1 (en) | 2017-03-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2017042677A1 (en) | Method and system for determining the position of an individual in a determined working area | |
| KR102098140B1 (en) | Method for monotoring blind spot of vehicle and blind spot monitor using the same | |
| US10551854B2 (en) | Method for detecting target object, detection apparatus and robot | |
| JP7025912B2 (en) | In-vehicle environment recognition device | |
| EP3293669B1 (en) | Enhanced camera object detection for automated vehicles | |
| EP3196668B1 (en) | Object tracking system with radar/vision fusion for automated vehicles | |
| US10347005B2 (en) | Object state identification method, object state identification apparatus, and carrier | |
| US8965050B2 (en) | Behavior analysis device | |
| US10163225B2 (en) | Object state identification method, object state identification apparatus, and carrier | |
| JP2021165080A (en) | Vehicle control device, vehicle control method and computer program for vehicle control | |
| JP4561346B2 (en) | Vehicle motion estimation device and moving object detection device | |
| KR101903127B1 (en) | Gaze estimation method and apparatus | |
| CN107144839A (en) | Pass through the long object of sensor fusion detection | |
| JP2007310705A (en) | Vehicle periphery monitoring device | |
| US20060115119A1 (en) | Vehicle surroundings monitoring apparatus | |
| CN112818816B (en) | A temperature detection method, device and equipment | |
| JP2004145660A (en) | Obstacle detection device | |
| Perdoch et al. | Leader tracking for a walking logistics robot | |
| JP2005311691A (en) | Object detection apparatus and method | |
| JP2018048949A (en) | Object identification device | |
| US7969466B2 (en) | Vehicle surroundings monitoring apparatus | |
| CN115240471B (en) | A method and system for collision avoidance and early warning in smart factories based on image collection | |
| WO2019073024A1 (en) | Lane sensing method | |
| JP2007310706A (en) | Vehicle periphery monitoring device | |
| JP6263453B2 (en) | Momentum estimation device and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16787559; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16787559; Country of ref document: EP; Kind code of ref document: A1 |