
WO2021230049A1 - Information notification system - Google Patents


Info

Publication number
WO2021230049A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
user
danger
degree
contact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2021/016518
Other languages
French (fr)
Japanese (ja)
Inventor
Takanori Nomura (野村 貴則)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NTT Docomo Inc
Original Assignee
NTT Docomo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NTT Docomo Inc filed Critical NTT Docomo Inc
Priority to JP2022521807A (patent JP7504201B2)
Publication of WO2021230049A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B25/01 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
    • G08B25/04 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using a single signalling line, e.g. in a closed loop
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/005 Traffic control systems for road vehicles including pedestrian guidance indicator
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • One aspect of the present invention relates to an information notification system.
  • Patent Document 1 describes a device that, when an object image enters the wearer's side across an alert line in a captured image, alerts the wearer through an earphone worn by the wearer and through an image.
  • One aspect of the present invention is made in view of the above circumstances, and relates to an information notification system capable of appropriately notifying a user of necessary information according to the degree of danger of the user.
  • The information notification system includes an acquisition unit that acquires one or a plurality of captured images of the area around the user, captured by a terminal attached to the user; a detection unit that detects objects in the area around the user based on the captured images; and a determination unit that determines, based on the result detected by the detection unit, the degree of danger that an object detected by the detection unit will come into contact with the user.
  • It further includes a generation unit that generates notification information indicating that the user is in danger based on the degree of danger, and an output unit that outputs the notification information to the terminal. The generation unit generates the notification information so that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object.
  • In this information notification system, an object approaching the user is recognized based on one or a plurality of captured images of the area around the user, and the degree of danger that the object will come into contact with the user is determined. Then, in this information notification system, notification information is generated so that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object, and the notification information is output to the terminal. With such a configuration, the user can be notified in a manner that matches the risk of contact with the object. Specifically, for example, when the risk of the object coming into contact with the user is high, simply outputting notification information that merely informs the user of the approach of the object is not suited to the determined degree of danger.
  • In that case, notification information emphasizing that the danger is imminent is output to the user, so that contact between the object and the user can be appropriately avoided.
  • According to the information notification system of one aspect of the present invention, necessary information can be appropriately notified to the user according to the degree of danger of the user.
  • FIG. 1 is a diagram illustrating an outline of an information notification system according to the present embodiment.
  • FIG. 2 is a block diagram showing a functional configuration of the information notification system according to the present embodiment.
  • FIG. 3 is a diagram illustrating notification information.
  • FIG. 4 is a diagram illustrating a degree of danger that an object comes into contact with a user.
  • FIG. 5 is a diagram showing an example of an image displayed on a communication terminal.
  • FIG. 6 is a diagram showing an example of a danger image displayed on a communication terminal.
  • FIG. 7 is a diagram showing an example of a danger image displayed on a communication terminal.
  • FIG. 8 is a flowchart showing a process performed by the information notification system according to the present embodiment.
  • FIG. 9 is a flowchart showing a process performed by the information notification system according to the present embodiment.
  • FIG. 10 is a diagram showing a hardware configuration of a communication terminal, an object detection server, and a determination server included in the information notification system according to the present embodiment.
  • FIG. 1 is a diagram illustrating an outline of the information notification system 1 according to the present embodiment.
  • FIG. 2 is a block diagram showing a functional configuration of the information notification system 1 according to the present embodiment.
  • The information notification system 1 shown in FIGS. 1 and 2 is a system for notifying the user of various information, including notification information indicating that the user is in danger.
  • the information notification system 1 includes a communication terminal (terminal) 10 attached to the user, an object detection server 30, and a determination server 50.
  • FIG. 1 illustrates an image P1 which is one of a plurality of temporally continuous captured images captured by the communication terminal 10.
  • the object detection server 30 detects an object in the captured image for a plurality of captured images captured by the communication terminal 10, and transmits information about the detected object to the determination server 50.
  • the object detection server 30 detects the person H1 as an object based on a plurality of captured images including the image P1.
  • the determination server 50 determines the degree of risk that the detected object will come into contact with the user based on the detection result detected by the object detection server 30. Then, the determination server 50 generates notification information based on the determined risk level.
  • The notification information is information indicating that the user is in danger.
  • the determination server 50 generates notification information so that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with an object.
  • the notification information includes sound information and image information.
  • the sound information is information notified by sound in the speaker of the communication terminal 10, and is information on dangerous sounds that make it easier for the user to recognize the danger of contact with an object as the degree of danger increases.
  • the image information is information to be notified (displayed) on the screen of the communication terminal 10, and is information related to a dangerous image in which the higher the degree of danger, the easier it is for the user to recognize the danger of contact with an object.
  • the details of the image information and the sound information will be described later. Then, when the determination server 50 outputs the notification information to the communication terminal 10, a dangerous sound is emitted from the speaker of the communication terminal 10, and a dangerous image is displayed on the screen of the communication terminal 10.
  • In the example shown in FIG. 1, the person (object) H1 detected as an object is running from the front toward the user, and a predetermined condition for determining that the person H1 has a high risk of contacting the user is satisfied.
  • In this case, the determination server 50 determines that the risk of the person H1 contacting the user is "high" (details will be described later), and generates notification information so that the user can easily recognize the danger of contact with the person H1. Then, when the determination server 50 outputs the notification information to the communication terminal 10, as shown in FIG. 3, a dangerous sound M1 is emitted from the speaker of the communication terminal 10, and an image P2, which is a dangerous image, is displayed on the screen of the communication terminal 10.
  • The mode in which both the dangerous sound and the dangerous image are output is only an example; only one of them may be output, or other notification information may be output.
  • In this way, the information notification system 1 notifies the user of notification information indicating that the user is in danger.
  • Although the number of communication terminals 10 shown in FIGS. 1 and 2 is one, there may be a plurality of communication terminals 10.
  • the communication terminal 10 is, for example, a terminal configured to perform wireless communication.
  • the communication terminal 10 is a terminal worn by a user, and is, for example, a goggle-type wearable device.
  • In the communication terminal 10, a plurality of temporally continuous captured images are captured by the mounted camera.
  • the communication terminal 10 transmits the plurality of captured images captured to the object detection server 30.
  • the acquired plurality of captured images are used for detecting an object by the object detection server 30 and the like.
  • the communication terminal 10 has a storage unit 11, a transmission unit 12, and an output unit 13.
  • the storage unit 11 stores various information such as a plurality of captured images and notification information acquired from the determination server 50.
  • the transmission unit 12 transmits a plurality of captured images to the object detection server 30 and the determination server 50.
  • the output unit 13 outputs a specific output based on the notification information stored in the storage unit 11. Specifically, the output unit 13 may emit a dangerous sound from the speaker included in the communication terminal 10. Further, the output unit 13 may display a dangerous image on the screen of the communication terminal 10.
  • the object detection server 30 is a server that detects an object in the area around the user based on a plurality of captured images acquired from the communication terminal 10.
  • the object detection server 30 detects an object for each of the plurality of captured images.
  • the object detection server 30 has a storage unit 31, an acquisition unit 32, and a detection unit 33 as functional components.
  • the storage unit 31 stores the data 300.
  • The data 300 is data in which a template of each object listed in advance as an object that can come into contact with the user is associated with the name of that object. The data 300 may be limited to objects preselected as capable of contacting the user, or may cover a wide variety of objects without such preselection.
  • The storage unit 31 may be configured outside the object detection server 30. That is, the data 300 may be stored in a server external to the object detection server 30.
  • the acquisition unit 32 acquires a plurality of captured images that are continuous in time from the communication terminal 10.
  • Each captured image is an image captured by a communication terminal 10 worn by the user and relating to an area around the user.
  • the plurality of captured images may be a sufficient number of captured images for the determination unit 53 to estimate the movement of the object, the distance between the user and the object, the approach speed to the user, and the like, which will be described later.
  • the detection unit 33 detects an object in the area around the user based on the plurality of captured images captured by the communication terminal 10 and the data 300 stored in the storage unit 31. Specifically, the detection unit 33 detects an object captured in each captured image by a known image recognition process using the data of the template of the data 300. Then, the detection unit 33 transmits the object information, which is the information about the detected object, to the determination server 50 as the detection result.
  • the object information includes the name of the detected object, the position information of the object in each captured image, the time information when the captured image was captured, and the like.
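The object information described above can be sketched as a simple record. The field names and coordinate convention below are assumptions for illustration; the text only lists the object's name, its position in each captured image, and the capture times.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectInfo:
    """Detection result the detection unit 33 sends to the determination
    server 50 (field names and coordinate convention are assumptions)."""
    name: str                             # e.g. "person"
    positions: List[Tuple[float, float]]  # object position per captured image
    timestamps: List[float]               # capture time of each image

# Hypothetical record for person H1 across three consecutive images.
info = ObjectInfo(name="person",
                  positions=[(0.2, 6.0), (0.1, 5.5), (0.0, 5.0)],
                  timestamps=[0.0, 0.1, 0.2])
print(info.name, len(info.positions))
```

The determination server can estimate the movement vector, distance, and approach speed from the `positions` and `timestamps` of such a record.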
  • the determination server 50 determines the degree of danger that the object detected by the detection unit 33 comes into contact with the user based on the result detected by the detection unit 33. Then, the determination server 50 generates notification information indicating that it is dangerous to the user based on the degree of danger, and outputs the generated notification information to the communication terminal 10.
  • the determination server 50 includes a storage unit 51, an acquisition unit 52, a determination unit 53, a generation unit 54, and an output unit 55.
  • the storage unit 51 stores information used for various processes performed by the determination server 50, such as determination of the degree of danger. Specifically, the storage unit 51 stores a plurality of captured images acquired from the communication terminal 10, object information acquired from the object detection server 30, and the like.
  • the acquisition unit 52 acquires object information from the object detection server 30. Further, the acquisition unit 52 acquires a plurality of captured images from the communication terminal 10. The plurality of captured images are used to generate notification information by the generation unit 54, which will be described later.
  • the determination unit 53 determines the degree of risk that the object detected by the detection unit 33 will come into contact with the user based on the result (object information) detected by the detection unit 33.
  • the method of determining the degree of danger by the determination unit 53 will be described with reference to an example in which a plurality of captured images including the image P1 shown in FIG. 1 are acquired.
  • the determination unit 53 determines the degree of danger based on the positions of the objects in the plurality of captured images detected by the detection unit 33. As shown in FIG. 4, in the present embodiment, the degree of danger is classified into four ranks of "high”, “medium”, "low”, and "none".
  • Specifically, the determination unit 53 makes a determination on each of the contact possibility, the distance between the user and the object, and the approach speed of the object toward the user, and based on these determination results, determines the most suitable degree of danger from among "high", "medium", "low", and "none".
  • The contact possibility is the possibility that an object will come into contact with the user, and is classified as, for example, either "contact" or "non-contact".
  • the determination unit 53 determines the possibility of contact by performing the frontal determination and the movement vector determination.
  • the frontal determination is the determination of the position of the object with respect to the user.
  • the movement vector determination is a determination of a region where an object is predicted to move.
  • In the frontal determination, the determination unit 53 determines whether or not the object is located in the region in front of the user based on the positions of the object in the plurality of captured images. In the present embodiment, the determination unit 53 determines whether or not the object is located within a region of a predetermined angle extending from the user toward the front.
  • The predetermined angle is, for example, ±15°. That is, in the present embodiment, the "frontal region" means the region within ±15° of the user's forward direction when viewed from the vertical direction.
  • the determination unit 53 determines whether or not the object is located in the front area based on the position of the object included in the object information.
  • When the determination unit 53 determines in the frontal determination that the object is located within the above-mentioned ±15° region, the result is "front"; when it determines that the object is not located within that region, the result is "non-front".
  • The frontal region is not limited to the region within ±15° of the user's forward direction, and may be a region of another angle.
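The frontal determination with the ±15° region can be sketched as follows. The coordinate system (user at the origin, facing the +y axis) is an assumption for illustration; the patent does not specify how positions are represented.

```python
import math

def frontal_determination(x, y, half_angle_deg=15.0):
    """Return "front" when the object at (x, y) lies within
    ±half_angle_deg of the user's forward direction (+y axis).
    The ±15° default follows the embodiment described above."""
    if y <= 0:
        return "non-front"                   # behind or beside the user
    angle = math.degrees(math.atan2(x, y))   # 0° means straight ahead
    return "front" if abs(angle) <= half_angle_deg else "non-front"

print(frontal_determination(0.5, 5.0))  # small lateral offset, inside ±15°
print(frontal_determination(3.0, 3.0))  # 45° off-axis, outside the region
```

Changing `half_angle_deg` corresponds to using "a region of another angle" as the frontal region.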
  • the determination unit 53 determines the movement vector of the object based on the position of the object included in the object information. Specifically, the determination unit 53 calculates the movement vector of the object by taking the difference in the position of the object between the continuous captured images.
  • For example, the determination unit 53 takes the difference between the position of the object in one of the plurality of captured images (hereinafter, the "first image") and the position of the object in the second image, which is captured following the first image. Then, the determination unit 53 takes the difference between the position of the object in the second image and the position of the object in the third image, which is captured after the second image.
  • The determination unit 53 then determines the range in which the object moves based on the calculated movement vector. Specifically, based on the movement vector, the determination unit 53 determines, for an object in the frontal region, whether the object moves while staying in the frontal region or moves toward a region different from the frontal region. For an object in a region different from the frontal region, the determination unit 53 determines whether the object moves while staying in that region or moves toward the frontal region. In this way, the determination unit 53 determines the region where the object is predicted to move based on the movement vector of the object derived from its positions in the plurality of captured images.
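The movement vector determination can be sketched as follows. Averaging consecutive displacements and extrapolating one step ahead is an assumed simplification of the estimation, which the patent leaves unspecified; the coordinate convention matches the earlier sketch.

```python
import math

def movement_vector(positions):
    """Average displacement of the object between temporally
    consecutive captured images."""
    steps = list(zip(positions, positions[1:]))
    vx = sum(b[0] - a[0] for a, b in steps) / len(steps)
    vy = sum(b[1] - a[1] for a, b in steps) / len(steps)
    return vx, vy

def in_frontal_region(x, y, half_angle_deg=15.0):
    """±15° frontal-region test, as in the frontal determination."""
    return y > 0 and abs(math.degrees(math.atan2(x, y))) <= half_angle_deg

def movement_vector_determination(positions):
    """Classify the region the object is predicted to move toward by
    extrapolating one step along the movement vector."""
    vx, vy = movement_vector(positions)
    last = positions[-1]
    predicted = in_frontal_region(last[0] + vx, last[1] + vy)
    if in_frontal_region(*last):
        return "stays in front" if predicted else "moves away from front"
    return "moves toward front" if predicted else "stays outside front"

# Person H1 in FIG. 1: drifting straight toward the user inside the region.
print(movement_vector_determination([(0.2, 6.0), (0.1, 5.5), (0.0, 5.0)]))
```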
  • The determination unit 53 performs the contact determination based on the respective results of the frontal determination and the movement vector determination. Specifically, when the result of the frontal determination is "front" and the result of the movement vector determination is "moves while staying in the frontal region", the determination unit 53 determines "contact" in the contact determination. This is because an object that is located in the region in front of the user and moves within that region is likely to come into contact with the user.
  • On the other hand, even when the result of the frontal determination is "front", the determination unit 53 determines "non-contact" when the result of the movement vector determination is "moves toward a region different from the frontal region". This is because, even if the object is located in the frontal region, the risk of the object contacting the user is low if the object moves toward a different region. That is, when the determination unit 53 estimates, based on the movement vector, that an object located in the region in front of the user will move to a different region, it judges the possibility of contact to be lower than when it estimates that the object will move while staying in the frontal region.
  • Further, when the result of the frontal determination is "non-front" and the result of the movement vector determination is "moves while staying in a region different from the frontal region", the determination unit 53 determines "non-contact". This is because an object moving in a region different from the frontal region is unlikely to come into contact with the user.
  • On the other hand, even when the result of the frontal determination is "non-front", the determination unit 53 determines "contact" when the result of the movement vector determination is "moves toward the frontal region". This is because, when an object moving in a region different from the frontal region moves toward the frontal region, the risk of the object contacting the user is higher than when the object stays in the different region. That is, when the determination unit 53 estimates, based on the movement vector, that an object located in a region different from the region in front of the user will move toward the frontal region, it judges the possibility of contact to be higher than when it estimates that the object will move while staying outside the frontal region.
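The four cases of the contact determination above can be collected into one function; the string labels are assumed shorthand for the determination results named in the text.

```python
def contact_determination(frontal, vector):
    """Combine the frontal determination ("front"/"non-front") and the
    movement vector determination into "contact"/"non-contact",
    following the four cases described in the text."""
    if frontal == "front":
        # Moving while staying in the frontal region -> likely to contact.
        return "contact" if vector == "stays in front" else "non-contact"
    # Outside the frontal region: contact only if moving toward it.
    return "contact" if vector == "moves toward front" else "non-contact"

print(contact_determination("front", "stays in front"))
print(contact_determination("non-front", "moves toward front"))
```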
  • In the determination of the distance between the user and the object, the determination unit 53 determines the distance based on the positions of the object in the plurality of captured images detected by the detection unit 33. As an example, the determination unit 53 estimates the distance between the user and the object based on the position information and the time information of the object included in the object information. Then, when the determination unit 53 determines that the estimated distance is smaller than a threshold value (second threshold value), it determines that the distance between the user and the object is "close (near)". On the other hand, when it determines that the estimated distance is not smaller than the threshold value, it determines that the distance is "far".
  • The threshold value is a predetermined value: the limit allowable value of the distance between the user and the object at which the risk of the object contacting the user is generally considered low.
  • the determination unit 53 determines the approach speed based on the positions of the objects in the plurality of captured images detected by the detection unit 33. As an example, the determination unit 53 estimates the approach speed based on, for example, the position information and the time information of the object included in the object information. Then, when the determination unit 53 determines that the estimated approach speed is larger than the threshold value (first threshold value), the determination unit 53 determines that the approach speed is "fast (fast)". On the other hand, when the determination unit 53 determines that the estimated approach speed is not larger than the threshold value (first threshold value), the determination unit 53 determines that the approach speed is "slow (slow)".
  • The threshold value is a predetermined value: the limit allowable value of the approach speed at which the risk of the object contacting the user is generally considered low.
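The two threshold comparisons can be sketched as follows. The concrete values (3.0 m, 2.0 m/s) are assumptions, since the patent only calls them predetermined limit allowable values.

```python
def classify_distance(distance_m, second_threshold=3.0):
    """"close" when the estimated user-object distance is smaller than
    the second threshold value, else "far" (3.0 m is assumed)."""
    return "close" if distance_m < second_threshold else "far"

def classify_speed(speed_mps, first_threshold=2.0):
    """"fast" when the estimated approach speed is larger than the
    first threshold value, else "slow" (2.0 m/s is assumed)."""
    return "fast" if speed_mps > first_threshold else "slow"

# The FIG. 1 example: person H1 is 5 m away and approaching quickly.
print(classify_distance(5.0), classify_speed(3.0))
```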
  • The determination unit 53 determines the degree of danger based on the results of the contact possibility determination, the distance determination, and the approach speed determination. First, when the determination unit 53 determines "contact" in the contact determination and "close" in the distance determination, it determines that the degree of danger is "high" regardless of the approach speed determination result.
  • When the determination unit 53 determines "contact" in the contact determination and "far" in the distance determination, it determines the degree of danger in consideration of the approach speed determination. Specifically, when the determination unit 53 determines "contact", "far", and "fast" in the respective determinations, it determines that the degree of danger is "medium". On the other hand, when it determines "contact", "far", and "slow", it determines that the degree of danger is "low". Further, when the determination unit 53 determines "non-contact" in the contact determination, it determines that the degree of danger is "none" regardless of the results of the distance and approach speed determinations.
  • In other words, when the determination unit 53 determines that the approach speed is larger than the first threshold value, it determines the degree of danger to be higher than when it determines that the approach speed is not larger than the first threshold value. Further, when the determination unit 53 determines that the distance between the user and the object is smaller than the second threshold value, it determines the degree of danger to be higher than when it determines that the distance is not smaller than the second threshold value.
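The determinations above amount to a small lookup over the four ranks of FIG. 4; a minimal sketch, assuming string labels for each determination result:

```python
def degree_of_danger(contact, distance, speed):
    """Degree-of-danger lookup following the four ranks of FIG. 4
    as described in the text (string labels are assumed shorthand)."""
    if contact == "non-contact":
        return "none"        # regardless of distance and approach speed
    if distance == "close":
        return "high"        # regardless of approach speed
    return "medium" if speed == "fast" else "low"

print(degree_of_danger("contact", "far", "fast"))  # the person H1 case
```

With the FIG. 1 inputs (contact, far, fast), this reproduces the "medium" determination of the worked example.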
  • In the example shown in FIG. 1, the acquisition unit 32 acquires a plurality of captured images including the image P1, and the detection unit 33 transmits information about the detected person H1 (the object name (person), and the position information and time information of the person H1 in each captured image) to the determination server 50 as the detection result.
  • the acquisition unit 52 acquires information about the person H1, and the determination unit 53 determines the degree of danger.
  • First, the determination unit 53 determines the contact possibility (that is, performs the contact determination). Specifically, based on the positions of the person H1 in the plurality of captured images including the image P1, the determination unit 53 determines "front" in the frontal determination (that is, that the person H1 is located in the frontal region) when, for example, the person H1 (object) is located within the ±15° region toward the front. Further, in the example shown in FIG. 1, the person H1 moves while staying in the frontal region.
  • the determination unit 53 determines "contact" in the contact determination based on the results of the frontal determination and the movement vector determination.
  • the determination unit 53 determines the distance between the user and the person H1 and determines the approach speed.
  • In this example, the distance between the user and the person H1 is not smaller than the threshold value, and the person H1 approaches the user at a speed larger than the threshold value.
  • the determination unit 53 determines that the distance between the user and the object is "far” and the approach speed is "fast” based on the position of the person H1 in the plurality of captured images including the image P1.
  • The determination unit 53 determines, based on the results of each determination described above (contact possibility: "contact", distance between the user and the person H1: "far", and approach speed: "fast"), that the degree of danger of the person H1 contacting the user is "medium" (see FIG. 4). As described above, the degree of danger in the example shown in FIG. 1 is determined.
  • the generation unit 54 generates notification information indicating that the user is dangerous based on the degree of danger determined by the determination unit 53.
  • the generation unit 54 generates notification information so that the higher the risk, the easier it is for the user to recognize the danger of contact with an object.
  • the notification information includes sound information and image information.
  • the sound information is information for emitting a dangerous sound in the speaker of the communication terminal 10.
  • the generation unit 54 generates sound information (notification information) so that a dangerous sound (sound according to the degree of danger) is emitted from a place corresponding to the position of the object in the speaker.
  • The "place corresponding to the position of the object in the speaker" is, for example, the place in the speaker of the communication terminal 10 worn by the user (hereinafter, simply referred to as the "speaker") that reflects the position of the object as seen by the user.
  • For example, when the position of the object as seen by the user is in front of the user and on the left side (hereinafter, simply referred to as the "front left side"), the place on the left side of the user's body in the speaker is the place corresponding to the position of the object.
  • The generation unit 54 generates the notification information based on the position information of the object included in the object information so that the speaker emits a dangerous sound notifying the position of the object in the user's left-right direction. Specifically, for example, when the position of the object as seen by the user is on the front left side, the generation unit 54 generates the notification information so that the dangerous sound is emitted from a place close to the user's left ear in the speaker.
  • The generation unit 54 may also generate the notification information so that the speaker emits a dangerous sound indicating the position of the object in three dimensions (that is, in the user's left-right and front-back directions). Specifically, for example, when the position of the object as seen by the user is on the front left side, the generation unit 54 generates the notification information so that the dangerous sound is emitted from a place close to the user's left ear in the speaker and the user hears the dangerous sound from the direction of the front left side.
  • The danger sound is a sound according to the degree of danger.
  • the volume, content, and the like of the "sound according to the degree of danger" change with the level of the degree of danger: for example, the higher the degree of danger, the louder the sound, and the higher the degree of danger, the more strongly the sound calls for the user's attention.
  • the generation unit 54 generates, as the danger sound, a sound that conveys the position of the object, the name of the object, the fact that the object is approaching, and the manner in which the object is approaching.
  • the sound is generated based on, for example, the name of the object included in the object information, the distance between the user and the object, and the approach speed.
  • Examples of the manner in which the object is approaching include, when the object is a person, modes such as running and walking. In the example shown in FIG. 3, the person H1, which is an object, runs toward the user from 5 m ahead in front of the user. The generation unit 54 therefore generates a danger sound M1 conveying that the person H1 is running toward the user from 5 m ahead in front.
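The content of such a danger sound can be assembled mechanically from the fields of the object information (name, distance, approach speed). The sketch below is illustrative only: the function name, the wording, and the speed threshold separating "running" from "walking" are assumptions, not values given in the embodiment.

```python
def danger_message(name: str, distance_m: float, speed_mps: float) -> str:
    """Compose the text of a danger sound from the object information:
    the object's name, its distance, and its approach speed."""
    # Classify the manner of approach from the approach speed
    # (the thresholds are illustrative, not fixed by the embodiment).
    if speed_mps > 2.5:
        manner = "running"
    elif speed_mps > 0.0:
        manner = "walking"
    else:
        manner = "standing still"
    return f"A {name} is {manner} toward you from {distance_m:.0f} m ahead."
```

For the person H1 in FIG. 3, a call such as `danger_message("person", 5.0, 3.0)` would produce a message of the kind conveyed by the danger sound M1.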
  • the generation unit 54 generates the image information (notification information) so that a danger image is displayed on the screen of the communication terminal 10 (hereinafter simply referred to as "screen").
  • the danger image is an image in which the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object.
  • Information about the object is displayed in the danger image.
  • the information about the object is, for example, the captured image acquired by the acquisition unit 32; that is, the danger image is an image based on the captured image. Further, the danger image may include (superimpose) other information different from the object, depending on the degree of danger (specifically, the lower the degree of danger, the more such other information may be included).
  • the other information is information different from the object, such as information related to a message application (for example, e-mail to the user) and information related to a map application, and is, for example, information acquired from another server (not shown).
  • the generation unit 54 generates the image information so that temporally continuous danger images corresponding to the respective captured images are displayed on the screen.
  • when there is no object in the area in front of the user, or when the determination unit 53 determines that the degree of danger is "none", the generation unit 54 generates a normal image in which the other information is superimposed on the captured image.
  • the generation unit 54 generates the image information so that temporally continuous normal images corresponding to the respective captured images are displayed on the screen.
  • the image P3 shown in FIG. 5 is an example of the normal image, and is an image in which other information Z (information different from the information about the object), namely message information, weather information, and map information, is superimposed on one image Pn included in the plurality of captured images. In the image P3, the other information Z is superimposed on the image Pn in a non-transparent display state.
  • when the determination unit 53 determines that the degree of danger is "low", the generation unit 54 generates a danger image in which the other information is superimposed on the captured image.
  • the image (danger image) P4 shown in FIG. 6 is an example of the danger image when the degree of danger is "low", and is an image in which the other information Z is superimposed on one image Pd included in the plurality of captured images. In the image P4, however, the image Pd (the information about the object) is displayed conspicuously.
  • the generation unit 54 generates the danger image by setting the display of the other information to a predetermined transparency.
  • the other information Z included in the image P3 is in a non-transparent display state, while the other information Z included in the image P4 is displayed semi-transparently.
  • the other information in the image information when the degree of danger is determined to be "low" is generated so as to be less conspicuous than the other information Z contained in the normal image (in other words, the information about the object, the image Pd, is generated so as to be more conspicuous).
  • when the determination unit 53 determines that the degree of danger is "medium" or "high", the generation unit 54 generates only the captured image as the danger image. That is, in the danger image generated when the degree of danger is determined to be "medium" or "high", the other information is not superimposed on the captured image, and this image differs from the danger image generated when the degree of danger is determined to be "low".
  • the image (danger image) P2 shown in FIG. 7 is an example of the danger image when the determination unit 53 determines that the degree of danger is "medium" or "high". In the image P2, the information about the object is displayed conspicuously.
  • the generation unit 54 generates the danger image with the other information hidden.
  • while the other information Z is displayed semi-transparently in the image P4 generated when the degree of danger is "low", the other information Z is not included in the image P2.
  • in the image P2, a frame surrounding the person H1, which is an object, is displayed.
  • the image information when the degree of danger is determined to be "medium" or "high" is generated so that the other information is less conspicuous than when the degree of danger is determined to be "low" (in other words, so that the information about the object is even more conspicuous).
  • in this way, the generation unit 54 generates the image information (notification information) so that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object.
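The relationship between the degree of danger and the visibility of the other information Z can be summarized as a lookup from degree of danger to overlay opacity. This is a sketch under stated assumptions: the embodiment fixes only the qualitative behaviour (non-transparent display in the normal image, semi-transparent display for "low", hidden for "medium"/"high"); the concrete alpha values and the names below are illustrative.

```python
# Opacity of the "other information" overlay for each degree of danger.
# Only the qualitative behaviour comes from the embodiment; the concrete
# alpha values are illustrative.
OVERLAY_ALPHA = {
    "none": 1.0,    # normal image: other information Z fully opaque
    "low": 0.5,     # danger image: other information Z semi-transparent
    "medium": 0.0,  # danger image: other information Z hidden
    "high": 0.0,    # danger image: only the captured image and the object frame
}

def compose_frame(danger: str) -> dict:
    """Describe how one output frame is composed for a given degree of danger."""
    alpha = OVERLAY_ALPHA[danger]
    return {
        "overlay_alpha": alpha,
        "overlay_visible": alpha > 0.0,
        # A frame surrounding the object is drawn for "medium"/"high" (cf. image P2).
        "object_frame": danger in ("medium", "high"),
    }
```

Under this sketch, `compose_frame("none")` corresponds to the normal image P3, `compose_frame("low")` to the danger image P4, and `compose_frame("high")` to the danger image P2.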
  • the output unit 55 outputs the notification information generated by the generation unit 54.
  • the output notification information is acquired by the communication terminal 10.
  • a danger sound is emitted from the speaker of the communication terminal 10, and a danger image is displayed on the screen of the communication terminal 10.
  • for example, when the degree of danger is determined to be "high", the notification information (the sound information and image information corresponding to the degree of danger "high") is acquired by the communication terminal 10, the danger sound M1 is emitted from the speaker of the communication terminal 10, and the image P2, which is a danger image, is displayed on the screen of the communication terminal 10 (see FIG. 3).
  • the danger sound M1 is emitted from the place in the speaker corresponding to the person H1 (a central place in the user's left-right direction).
  • the danger sound M1 is emitted as a sound corresponding to the degree of danger "high", at a louder volume than when the degree of danger is determined to be "low" or "medium".
  • the image P2 is one of the temporally continuous danger images corresponding to the plurality of captured images, and is an image in which a frame surrounding the person H1 is superimposed on the image P1. As described above, in the information notification system 1, the user is notified of the notification information indicating that the user is in danger.
  • FIG. 8 is a flowchart showing a process performed by the information notification system 1.
  • the object detection server 30 acquires a plurality of temporally continuous captured images from the communication terminal 10 attached to the user (step S11). Specifically, the object detection server 30 acquires a plurality of captured images of the area around the user, imaged by the communication terminal 10.
  • the object detection server 30 detects an object in the area around the user based on the plurality of captured images acquired in step S11 and the data 300 stored in the object detection server 30 (step S12). Then, the object information detected by the object detection server 30 is transmitted to the determination server 50.
  • the determination server 50 determines the degree of danger that the detected object comes into contact with the user, based on the result (object information) detected by the object detection server 30 (step S13).
  • the process of step S13 will be described in detail with reference to the flowchart of FIG.
  • the determination server 50 determines whether or not the object is located in the area in front of the user based on the object information (step S21). Specifically, the determination unit 53 determines whether or not the object is located in a region within ±15° of the direction directly in front of the user.
  • when it is determined that the object is located in the front area (step S21: YES), the determination server 50 determines, based on the movement vector of the object, whether or not the object moves toward an area different from the front area (step S22). Specifically, the determination server 50 calculates the movement vector based on the positions of the object included in the object information, and determines the area to which the object is predicted to move based on the calculated movement vector. When it is determined that the object moves toward an area different from the front area (step S22: YES), the determination server 50 determines that the degree of danger is "none" (step S23), and the process shown in FIG. 9 ends. On the other hand, when it is determined that the object does not move toward an area different from the front area (step S22: NO), the process proceeds to step S25.
  • when it is determined that the object is not located in the front area (step S21: NO), the determination server 50 determines, based on the movement vector of the object, whether or not the object moves toward the front area (step S24). When it is determined that the object moves toward the front area (step S24: YES), the process proceeds to step S25. On the other hand, when it is determined that the object does not move toward the front area (step S24: NO), the process proceeds to step S23.
  • the determination server 50 determines whether or not the distance between the user and the object, estimated based on the position information and the time information of the object included in the object information, is smaller than a threshold value (second threshold value) (step S25).
  • when it is determined that the estimated distance is smaller than the threshold value (step S25: YES), the determination server 50 determines that the degree of danger is "high" (step S26), and the process shown in FIG. 9 ends.
  • when it is determined that the estimated distance is not smaller than the threshold value (step S25: NO), the determination server 50 determines whether or not the approach speed, estimated based on the position information and the time information of the object included in the object information, is larger than a threshold value (first threshold value) (step S27).
  • when it is determined that the estimated approach speed is larger than the threshold value (step S27: YES), the determination server 50 determines that the degree of danger is "medium" (step S28), and the process shown in FIG. 9 ends. On the other hand, when it is determined that the estimated approach speed is not larger than the threshold value (step S27: NO), the determination server 50 determines that the degree of danger is "low" (step S29), and the process shown in FIG. 9 ends.
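The branch structure of steps S21 to S29 can be sketched as a single function that maps two successive object positions to a degree of danger. This is a hedged reconstruction, not the claimed implementation: the ±15° front area follows step S21, but the function name, the coordinate convention, and the concrete first/second threshold values are illustrative assumptions.

```python
import math

FRONT_HALF_ANGLE_DEG = 15.0   # front area: within ±15° of straight ahead (step S21)
SPEED_THRESHOLD = 1.5         # first threshold (approach speed, m/s) - illustrative
DISTANCE_THRESHOLD = 3.0      # second threshold (distance, m) - illustrative

def degree_of_danger(positions, dt: float) -> str:
    """Replicate the flow of steps S21-S29 (FIG. 9) for one object.

    positions: two (x, z) positions of the object in user coordinates
    (x: to the user's right, z: ahead of the user), taken dt seconds apart.
    """
    (x0, z0), (x1, z1) = positions
    # Step S21: is the object currently in the front area?
    in_front = abs(math.degrees(math.atan2(x1, z1))) <= FRONT_HALF_ANGLE_DEG
    # Steps S22/S24: movement vector and the area the object is heading for.
    vx, vz = (x1 - x0) / dt, (z1 - z0) / dt
    px, pz = x1 + vx * dt, z1 + vz * dt  # predicted position one step ahead
    heading_front = abs(math.degrees(math.atan2(px, pz))) <= FRONT_HALF_ANGLE_DEG
    if in_front and not heading_front:      # S22: leaving the front area
        return "none"                       # S23
    if not in_front and not heading_front:  # S24: not entering the front area
        return "none"                       # S23
    distance = math.hypot(x1, z1)
    if distance < DISTANCE_THRESHOLD:       # S25
        return "high"                       # S26
    approach_speed = (math.hypot(x0, z0) - distance) / dt
    if approach_speed > SPEED_THRESHOLD:    # S27
        return "medium"                     # S28
    return "low"                            # S29
```

For example, a person running straight at the user from 8 m to 6 m within one second would fall through steps S21, S22, and S25 and be classified at step S28 as degree of danger "medium" under these illustrative thresholds.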
  • the determination server 50 generates notification information indicating that the user is in danger, based on the degree of danger (step S14). Specifically, the determination server 50 generates the sound information so that the danger sound is emitted from the place in the speaker corresponding to the position of the object, and generates the image information so that the danger image is displayed on the screen of the communication terminal 10.
  • the determination server 50 outputs the generated notification information to the communication terminal 10 (step S15).
  • when the notification information is acquired by the communication terminal 10, the danger sound is emitted from the speaker of the communication terminal 10, and the danger image is displayed on the screen of the communication terminal 10.
  • as described above, the information notification system 1 includes: the acquisition unit 32 that acquires one or a plurality of captured images of the area around the user, imaged by the communication terminal 10 attached to the user; the detection unit 33 that detects an object in the area around the user based on the one or plurality of captured images; the determination unit 53 that determines, based on the result detected by the detection unit 33, the degree of danger that the object detected by the detection unit 33 comes into contact with the user; the generation unit 54 that generates, based on the degree of danger, notification information indicating that the user is in danger; and the output unit 55 that outputs the notification information to the communication terminal 10. The generation unit 54 generates the notification information so that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object.
  • in the information notification system 1, an object approaching the user is recognized based on the plurality of captured images of the area around the user, and the degree of danger that the object comes into contact with the user is determined. Then, in the information notification system 1, the notification information is generated so that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object, and the notification information is output to the communication terminal 10. According to such a configuration, the user can be notified in accordance with the danger of contact with the object. Specifically, for example, when the danger of the object coming into contact with the user is low, only notification information that merely informs the user of the approach of the object is output, so that excessive notification unsuited to the determined degree of danger can be prevented and the user is not annoyed.
  • on the other hand, when the danger is high, notification information emphasizing that the danger is imminent is output to the user, so that contact between the object and the user can be appropriately avoided.
  • in this way, in the information notification system 1, it is possible to appropriately notify the user of necessary information according to the user's degree of danger.
  • further, since the information notification system 1 suppresses excessive notifications unsuited to the determined degree of danger, it also has the technical effect of reducing the processing load.
  • the acquisition unit 32 acquires a plurality of temporally continuous captured images, and the determination unit 53 determines the degree of danger that the object comes into contact with the user based on the positions of the object in the plurality of captured images detected by the detection unit 33. This makes it possible to determine the degree of danger with higher accuracy by taking into account the position of the object across the temporally continuous captured images (for example, the change in the position of the object between the plurality of captured images).
  • the determination unit 53 determines the contact possibility that the object may come into contact with the user based on the position of the object in the plurality of captured images, and determines the degree of danger based on the determination result of the contact possibility.
  • the possibility of contact is determined based on whether or not the object is located in the area in front of the user and on the movement vector of the object derived from the positions of the object in the plurality of captured images.
  • that is, the contact possibility of the object is determined based on the position of the object with respect to the user and the moving direction of the object. By determining the degree of danger from the determination result of the contact possibility determined in this way, the degree of danger can be raised when, for example, the possibility of contact with the object is high, and the degree of danger can thus be determined with higher accuracy.
  • when the determination unit 53 estimates that the object is located in an area different from the front area with respect to the user and estimates, based on the movement vector, that the object moves toward the front area, the determination unit 53 determines that the possibility of contact is higher than when it estimates that the object stays and moves within an area different from the front area.
  • in general, an object moving in an area different from the front area is unlikely to come into contact with the user.
  • however, when such an object moves toward the front area, the danger of the object coming into contact with the user is considered to be higher than when the object stays and moves within an area different from the front area.
  • if the determination unit 53 determined the degree of danger based only on the position of the object with respect to the user, the possibility of contact would be determined to be low even when the object moves toward the front area, and as a result the object might come into contact with the user.
  • in the information notification system 1, the possibility of contact is determined in consideration of whether or not an object located in an area different from the front area moves toward the front area, so that the degree of danger can be determined with higher accuracy.
  • when the determination unit 53 estimates that the object is located in the front area with respect to the user and estimates, based on the movement vector, that the object moves toward an area different from the front area, the determination unit 53 determines that the possibility of contact is lower than when it estimates that the object stays and moves within the front area.
  • in general, an object moving in the front area is likely to come into contact with the user.
  • however, if the determination unit 53 determined the degree of danger based only on the position of the object with respect to the user, the possibility of contact would be determined to be high whenever the object is located in the front area, even when the object merely crosses in front of the user, and as a result excessive notification unsuited to the determined degree of danger might be given.
  • in the information notification system 1, the possibility of contact is determined in consideration of whether or not an object located in the front area moves toward an area different from the front area, so that the degree of danger can be determined with higher accuracy.
  • when the determination unit 53 determines that the speed at which the object moves in the direction approaching the user is larger than the first threshold value, the determination unit 53 determines that the degree of danger is higher than when it determines that the speed is not larger than the first threshold value.
  • the danger of an object coming into contact with the user depends on the speed at which the object approaches the user. Specifically, for example, even when the object moves in the front area, if the object approaches the user at a low speed, the danger of contact between the user and the object is considered to be low.
  • in the information notification system 1, the degree of danger is determined in consideration of the speed at which the object approaches the user, so that the degree of danger can be determined with higher accuracy.
  • when the determination unit 53 determines that the distance between the user and the object is smaller than the second threshold value, the determination unit 53 determines that the degree of danger is higher than when it determines that the distance is not smaller than the second threshold value.
  • the danger of an object coming into contact with the user also depends on the distance between the user and the object. Specifically, for example, even when the object is moving in the area in front of the user, if the object is located far away from the user, the danger of the object coming into contact with the user can be said to be low.
  • in the information notification system 1, the degree of danger is determined in consideration of the distance between the object and the user, so that the degree of danger can be determined with higher accuracy.
  • in the information notification system 1, the notification information includes information notified by sound from the speaker of the communication terminal 10, and the generation unit 54 generates the notification information so that the sound according to the degree of danger is emitted from the place in the speaker corresponding to the position of the object. As a result, even a visually impaired user, for example, can be notified appropriately according to the user's degree of danger, and the user can more easily grasp the location of an object with a high possibility of contact.
  • the notification information also includes information notified by a danger image displayed on the screen of the communication terminal 10 in a mode according to the degree of danger, and the generation unit 54 generates the notification information so that the higher the degree of danger, the more conspicuous the information about the object is in the danger image. In particular, in the information notification system 1, the generation unit 54 generates the notification information so that the information about the object is made conspicuous by making the other information less conspicuous as the degree of danger increases.
  • in this way, in the information notification system 1, the necessary information can be appropriately notified to the user by adjusting the balance between the display of the information about the object and the display of the other information according to the user's degree of danger.
  • the communication terminal 10, the object detection server 30, and the determination server 50 may each be physically configured as a computer device including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.
  • the word “device” can be read as a circuit, device, unit, etc.
  • the hardware configuration of the communication terminal 10, the object detection server 30, and the determination server 50 may include one or more of the devices shown in FIG. 10, or may omit some of the devices.
  • each function in the communication terminal 10, the object detection server 30, and the determination server 50 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, so that the processor 1001 performs operations and controls communication by the communication device 1004 and reading and/or writing of data in the memory 1002 and the storage 1003.
  • the processor 1001 operates, for example, an operating system to control the entire computer.
  • the processor 1001 may be configured by a central processing unit (CPU: Central Processing Unit) including an interface with a peripheral device, a control device, an arithmetic unit, a register, and the like.
  • the control function of the detection unit 33 of the object detection server 30 may be realized by the processor 1001.
  • the processor 1001 reads a program (program code), a software module and data from the storage 1003 and / or the communication device 1004 into the memory 1002, and executes various processes according to these.
  • as the program, a program that causes a computer to execute at least a part of the operations described in the above-described embodiment is used.
  • control function of the detection unit 33 of the object detection server 30 may be realized by a control program stored in the memory 1002 and operated by the processor 1001, and other functional blocks may be similarly realized.
  • the various processes described above may be executed by one processor 1001, or may be executed by two or more processors 1001 simultaneously or sequentially.
  • Processor 1001 may be mounted on one or more chips.
  • the program may be transmitted from the network via a telecommunication line.
  • the memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory).
  • the memory 1002 may be referred to as a register, a cache, a main memory (main storage device), or the like.
  • the memory 1002 can store a program (program code), a software module, and the like that can be executed to implement the wireless communication method according to the embodiment of the present invention.
  • the storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disc (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, a magnetic strip, and the like.
  • the storage 1003 may be referred to as an auxiliary storage device.
  • the storage medium described above may be, for example, a database, server or other suitable medium containing memory 1002 and / or storage 1003.
  • the communication device 1004 is hardware (transmission / reception device) for communicating between computers via a wired and / or wireless network, and is also referred to as, for example, a network device, a network controller, a network card, a communication module, or the like.
  • the input device 1005 is an input device (for example, a keyboard, a mouse, a microphone, a switch, a button, a sensor, etc.) that accepts an input from the outside.
  • the output device 1006 is an output device (for example, a display, a speaker, an LED lamp, etc.) that outputs to the outside.
  • the input device 1005 and the output device 1006 may have an integrated configuration (for example, a touch panel).
  • each device such as the processor 1001 and the memory 1002 is connected by the bus 1007 for communicating information.
  • the bus 1007 may be composed of a single bus or may be composed of different buses between the devices.
  • the communication terminal 10, the object detection server 30, and the determination server 50 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), and an FPGA (Field Programmable Gate Array), and some or all of the functional blocks may be realized by such hardware.
  • for example, the processor 1001 may be implemented by at least one of these pieces of hardware.
  • in the above embodiment, the information notification system 1 has been described as including the communication terminal 10, the object detection server 30, and the determination server 50; however, the present invention is not limited to this, and each function of the information notification system 1 may be realized by the communication terminal 10 alone.
  • in the above embodiment, the object detection server 30 has the acquisition unit 32 and the detection unit 33, and the determination server 50 has the determination unit 53, the generation unit 54, and the output unit 55; however, another server may include some or all of these functional components, and the communication terminal 10 may include some of them.
  • the degree of danger may also be determined for an object located in an area behind, to the right of, or to the left of the user, and notification information may be generated based on that degree of danger.
  • in the above embodiment, the notification information is sound information and image information; however, the notification information may be only sound information or only image information, and may be other information such as optical information for lighting a lamp of the communication terminal 10.
  • the acquisition unit 32 may acquire one captured image of the area around the user imaged by the communication terminal 10 attached to the user, and the detection unit 33 may detect an object in the area around the user based on the one captured image.
  • the determination unit 53 may determine the possibility of contact based on whether or not the object is located in the area in front of the user and on the movement vector of the object derived from the positions of the object in the plurality of captured images.
  • the determination unit 53 may also determine the degree of danger in other ways based on the result detected by the detection unit 33. As an example, the determination unit 53 may determine the degree of danger based only on the possibility of contact, or may determine the degree of danger using a method different from the method of comprehensively considering the possibility of contact, the distance between the user and the object, and the approach speed.
  • the generation unit 54 may generate the notification information in any way such that the higher the degree of danger, the more conspicuous the information about the object is in the danger image. As an example, the generation unit 54 may generate the notification information so that the higher the degree of danger, the smaller the other information is displayed.
  • information other than the notification information may be notified to the user.
  • information related to sound other than the notification information may be emitted from the speaker included in the communication terminal 10.
  • the detection unit 33 may detect, in addition to the person H1, the object H2, which is a signboard arranged on the road, and the generation unit 54 may generate, based on the information about the object H2, information for emitting a sound that conveys the position of the object and the name of the object.
  • in the example shown in FIG. 3, the object H2 is arranged 5 m in front of the user and 1 m to the left of the user. Therefore, the generation unit 54 generates information for emitting the sound M2 indicating that there is a signboard 5 m ahead in front and 1 m to the left. Further, for example, as shown in FIG. 3, the detection unit 33 detects the object H3, which is a signboard displaying a store name on the street, and the generation unit 54 may generate, based on the information about the object H3, information for emitting the sound M3 that conveys the position of the object H3, the type of store indicated by the object H3, and the name of the store.
  • the type of store and the name of the store are stored in, for example, the data 300 of the object detection server 30.
  • when the output unit 55 outputs the information for emitting these sounds, the sounds M2 and M3 described above are emitted from the place in the speaker corresponding to the object H2 (the left side in the left-right direction of the speaker with the communication terminal 10 attached to the user) and from the place corresponding to the object H3 (the center in the left-right direction of the speaker with the communication terminal 10 attached to the user), respectively.
  • Each aspect/embodiment described in the present specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-WideBand), and other suitable systems, and/or to next-generation systems extended based on them.
  • The input/output information and the like may be saved in a specific place (for example, a memory) or may be managed using a management table. The input/output information and the like may be overwritten, updated, or added to. The output information and the like may be deleted. The input information and the like may be transmitted to another device.
  • The determination may be made by a value represented by 1 bit (0 or 1), by a Boolean value (true or false), or by comparison of numerical values (for example, comparison with a predetermined value).
  • The notification of predetermined information (for example, the notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not performing the notification of the predetermined information).
  • Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or by any other name, should be broadly interpreted to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, execution threads, procedures, functions, and the like.
  • Software, instructions, and the like may be transmitted and received via a transmission medium. For example, when the software is transmitted from a website, server, or other remote source using wired technology such as coaxial cable, fiber optic cable, twisted pair, and digital subscriber line (DSL) and/or wireless technology such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.
  • The information, signals, and the like described herein may be represented using any of a variety of different techniques.
  • Data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, light fields or photons, or any combination of these.
  • The information, parameters, and the like described in the present specification may be represented by an absolute value, by a relative value from a predetermined value, or by another corresponding piece of information.
  • The communication terminal 10 may also be referred to by those skilled in the art as a mobile communication terminal, subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, handset, user agent, mobile client, client, or some other suitable term.
  • Any reference to elements using designations such as "first" and "second" as used herein does not generally limit the quantity or order of those elements. These designations can be used herein as a convenient way to distinguish between two or more elements. Thus, references to the first and second elements do not mean that only two elements can be adopted there, or that the first element must in some way precede the second element.
  • 1 ... Information notification system, 10 ... Communication terminal (terminal), 32 ... Acquisition unit, 33 ... Detection unit, 53 ... Determination unit, 54 ... Generation unit, 55 ... Output unit, H1 ... Person (object), M1 ... Danger sound (sound), P1 ... Image (captured image), P2, P4 ... Images (danger images), Pd ... Image (information about an object), Z ... Other information (information different from the information about the object).
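The signboard announcements described in the bullets above (sounds M2 and M3, emitted from the speaker position corresponding to the object) can be sketched as follows; the function name, the message wording, and the simple left/center/right mapping are illustrative assumptions, not part of the specification:

```python
def announce(name, ahead_m, lateral_m):
    """Build a spoken message and a speaker position for an object.

    ahead_m   -- distance in front of the user, in meters
    lateral_m -- lateral offset in meters; negative = left, positive = right
    """
    if lateral_m < 0:
        side, pan = f"{abs(lateral_m):g} m to the left", "left"
    elif lateral_m > 0:
        side, pan = f"{lateral_m:g} m to the right", "right"
    else:
        side, pan = "directly in front", "center"
    return f"There is a {name} {ahead_m:g} m ahead, {side}.", pan

# The signboard H2 (5 m ahead, 1 m to the left) is announced from the left
# side of the speaker, matching the sound M2 described above:
print(announce("signboard", 5, -1))
# → ('There is a signboard 5 m ahead, 1 m to the left.', 'left')
```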

Landscapes

  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Rehabilitation Therapy (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pain & Pain Management (AREA)
  • Epidemiology (AREA)
  • Traffic Control Systems (AREA)
  • Alarm Systems (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Rehabilitation Tools (AREA)
  • Navigation (AREA)

Abstract

This information notification system is provided with: an acquiring unit for acquiring one or a plurality of captured images relating to the area around a user, captured by a communication terminal fitted to the user; a detecting unit for detecting an object in the area around the user on the basis of the one or plurality of captured images; a determining unit for determining, on the basis of the results detected by the detecting unit, the degree of danger that the object detected by the detecting unit may come into contact with the user; a generating unit for generating notification information indicating that the user is in danger, on the basis of the degree of danger; and an output unit for outputting the notification information to the communication terminal. The generating unit generates the notification information in such a way that the higher the degree of danger, the more readily the user perceives the danger of contact with the object.

Description

Information notification system

 One aspect of the present invention relates to an information notification system.

 A technique is known in which a terminal worn by a user notifies the user of an object that is in danger of coming into contact with the user. Patent Document 1 describes a pedestrian surroundings monitoring device in which, when an object image in a captured image crosses an alert line toward the wearer's side, the wearer is alerted by an earphone worn by the wearer and an image on the worn device.

Japanese Unexamined Patent Publication No. 2013-8307

 In the device described in Patent Document 1, when an object image crosses the alert line toward the wearer's side, the user is alerted uniformly, and the user may find the alert annoying. On the other hand, if the alert is weakened so as not to annoy the user, it becomes difficult for the user to recognize an object that has a high risk of coming into contact with the user.

 One aspect of the present invention has been made in view of the above circumstances, and relates to an information notification system capable of appropriately notifying the user of necessary information according to the user's degree of danger.

 An information notification system according to one aspect of the present invention includes: an acquisition unit that acquires one or a plurality of captured images of the area around a user, captured by a terminal worn by the user; a detection unit that detects an object in the area around the user based on the one or plurality of captured images; a determination unit that determines, based on the result detected by the detection unit, the degree of danger that the object detected by the detection unit will come into contact with the user; a generation unit that generates, based on the degree of danger, notification information indicating to the user that there is danger; and an output unit that outputs the notification information to the terminal. The generation unit generates the notification information such that the higher the degree of danger, the more easily the user recognizes the danger of contact with the object.

 In the information notification system according to one aspect of the present invention, an object approaching the user is recognized based on one or a plurality of captured images of the area around the user, and the degree of danger that the object will come into contact with the user is determined. Then, in this information notification system, the notification information is generated such that the higher the degree of danger, the more easily the user recognizes the danger of contact with the object, and the notification information is output to the terminal. With such a configuration, the user can be notified in accordance with the risk of contact with the object. Specifically, for example, when the degree of danger that an object will come into contact with the user is low, only notification information that merely informs the user of the approach of the object is output, which prevents excessive notification unsuited to the determined degree of danger from annoying the user. Further, for example, when the degree of danger that the object will come into contact with the user is high, notification information emphasizing that danger is imminent is output to the user, so that contact between the object and the user can be appropriately avoided. As described above, according to the information notification system according to one aspect of the present invention, necessary information can be appropriately notified to the user according to the user's degree of danger.

 According to one aspect of the present invention, necessary information can be appropriately notified to the user according to the user's degree of danger.

FIG. 1 is a diagram illustrating an outline of the information notification system according to the present embodiment.
FIG. 2 is a block diagram showing the functional configuration of the information notification system according to the present embodiment.
FIG. 3 is a diagram illustrating notification information.
FIG. 4 is a diagram illustrating the degree of danger that an object will come into contact with the user.
FIG. 5 is a diagram showing an example of an image displayed on the communication terminal.
FIG. 6 is a diagram showing an example of a danger image displayed on the communication terminal.
FIG. 7 is a diagram showing an example of a danger image displayed on the communication terminal.
FIG. 8 is a flowchart showing processing performed by the information notification system according to the present embodiment.
FIG. 9 is a flowchart showing processing performed by the information notification system according to the present embodiment.
FIG. 10 is a diagram showing the hardware configuration of the communication terminal, the object detection server, and the determination server included in the information notification system according to the present embodiment.

 Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the description of the drawings, the same reference numerals are used for the same or equivalent elements, and duplicate descriptions are omitted.

 FIG. 1 is a diagram illustrating an outline of the information notification system 1 according to the present embodiment. FIG. 2 is a block diagram showing the functional configuration of the information notification system 1 according to the present embodiment. The information notification system 1 shown in FIGS. 1 and 2 is a system for notifying the user of various information, including notification information indicating that there is danger. The information notification system 1 includes a communication terminal (terminal) 10 worn by the user, an object detection server 30, and a determination server 50.

 In the information notification system 1, a plurality of captured images captured by the communication terminal 10 are transmitted to the object detection server 30. The plurality of captured images are temporally continuous (that is, they are images captured at a plurality of consecutive times). Hereinafter, a plurality of temporally continuous captured images may be referred to simply as "a plurality of captured images". FIG. 1 illustrates an image P1, which is one of a plurality of temporally continuous captured images captured by the communication terminal 10. In the plurality of images including the image P1, a person H1 running on the road from the front toward the user is captured as an object. The object detection server 30 detects objects in the plurality of captured images captured by the communication terminal 10, and transmits information about the detected objects to the determination server 50. In the example shown in FIG. 1, the object detection server 30 detects the person H1 as an object based on the plurality of captured images including the image P1.

 The determination server 50 determines, based on the detection result from the object detection server 30, the degree of danger that the detected object will come into contact with the user. The determination server 50 then generates notification information based on the determined degree of danger. The notification information is information indicating to the user that there is danger. The determination server 50 generates the notification information such that the higher the degree of danger, the more easily the user recognizes the danger of contact with the object. In the present embodiment, the notification information includes sound information and image information. The sound information is information notified by sound from the speaker of the communication terminal 10, and relates to a danger sound that makes it easier for the user to recognize the danger of contact with an object as the degree of danger increases. The image information is information notified (displayed) on the screen of the communication terminal 10, and relates to a danger image that makes it easier for the user to recognize the danger of contact with an object as the degree of danger increases. Details of the image information and the sound information will be described later. When the determination server 50 outputs the notification information to the communication terminal 10, a danger sound is emitted from the speaker of the communication terminal 10, and a danger image is displayed on the screen of the communication terminal 10.

 In the example shown in FIG. 1, the person (object) H1 detected as an object is running from the front toward the user, and a predetermined condition for determining that the person H1 is highly likely to come into contact with the user is satisfied. The determination server 50 therefore determines that the degree of danger that the person H1 will come into contact with the user is "high" (details will be described later), and generates the notification information such that the user can easily recognize the danger of contact with the person H1. When the determination server 50 outputs the notification information to the communication terminal 10, as shown in FIG. 3, a danger sound M1 is emitted from the speaker of the communication terminal 10, and an image P2, which is a danger image, is displayed on the screen of the communication terminal 10. Note that outputting both a danger sound and a danger image is merely an example; only one of them may be output, or other notification information may be output.
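As a rough sketch of how notification content might be selected once the degree of danger is determined: the rank names follow the embodiment, while the medium/low sounds and images below are placeholders, since the specification leaves the concrete content open.

```python
def notification_for(degree_of_danger):
    """Select notification content by degree of danger.

    "high" maps to the danger sound M1 and danger image P2 from the
    example in the text; the other entries are placeholder assumptions.
    """
    table = {
        "high":   {"sound": "danger sound M1", "image": "danger image P2"},
        "medium": {"sound": "alert sound", "image": "highlighted image"},
        "low":    {"sound": "soft notice sound", "image": "simple marker"},
        "none":   None,  # no notification is output
    }
    return table[degree_of_danger]

# For the person H1 judged "high", both a danger sound and a danger
# image are output:
print(notification_for("high"))
```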

 By performing the above processing, the information notification system 1 notifies the user of notification information indicating that there is danger. Although one communication terminal 10 is shown in FIGS. 1 and 2, there may be a plurality of communication terminals 10.

 Here, with reference to FIG. 2, the functional components of the communication terminal 10, the object detection server 30, and the determination server 50 will be described.

 The communication terminal 10 is, for example, a terminal configured to perform wireless communication. The communication terminal 10 is a terminal worn by the user, for example, a goggle-type wearable device. In the communication terminal 10, for example, when an application is executed, a plurality of temporally continuous captured images are captured by the mounted camera. The communication terminal 10 then transmits the plurality of captured images to the object detection server 30. The acquired captured images are used by the object detection server 30 for object detection and the like. The communication terminal 10 has a storage unit 11, a transmission unit 12, and an output unit 13.

 The storage unit 11 stores various information, such as the plurality of captured images and the notification information acquired from the determination server 50. The transmission unit 12 transmits the plurality of captured images to the object detection server 30 and the determination server 50. The output unit 13 performs a specific output based on the notification information stored in the storage unit 11. Specifically, the output unit 13 may emit a danger sound from the speaker of the communication terminal 10. The output unit 13 may also display a danger image on the screen of the communication terminal 10.

 The object detection server 30 is a server that detects objects in the area around the user based on the plurality of captured images acquired from the communication terminal 10. The object detection server 30 performs object detection on each of the plurality of captured images. The object detection server 30 has, as functional components, a storage unit 31, an acquisition unit 32, and a detection unit 33. The storage unit 31 stores data 300. The data 300 is data in which a template of each object listed in advance as an object that may come into contact with the user is associated with the name of that object. The data 300 may be limited data selected as objects that may come into contact with the user, or may be data on various objects without any such selection. The storage unit 31 may also be a component external to the object detection server 30. That is, the data 300 may be data stored in a server outside the object detection server 30.

 The acquisition unit 32 acquires a plurality of temporally continuous captured images from the communication terminal 10. Each captured image is captured by the communication terminal 10 worn by the user and relates to the area around the user. The plurality of captured images may be a number of captured images sufficient for the determination unit 53 to estimate the movement of the object, the distance between the user and the object, the approach speed toward the user, and the like, as described later.

 The detection unit 33 detects objects in the area around the user based on the plurality of captured images captured by the communication terminal 10 and the data 300 stored in the storage unit 31. Specifically, the detection unit 33 detects the objects captured in each captured image by known image recognition processing using the template data of the data 300. The detection unit 33 then transmits, as the detection result, object information, which is information about the detected objects, to the determination server 50. The object information includes the name of the detected object, the position of the object in each captured image, the time at which each captured image was captured, and the like.
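The shape of the object information that the detection unit 33 sends to the determination server 50 can be sketched as below; the dictionary layout is an assumption, and the template-matching step itself is stubbed out, since the embodiment only says that known image recognition is used against the templates in data 300:

```python
# Hypothetical stand-in for data 300: object names with their templates.
DATA_300 = {"person": "<person template>", "signboard": "<signboard template>"}

def detect_objects(timestamp, matches):
    """Build the object information for one captured image.

    `matches` stands in for the output of a template-matching step
    (object name -> (x, y) position in the image); the image
    recognition itself is outside the scope of this sketch.
    """
    return [
        {"name": name, "position": pos, "time": timestamp}
        for name, pos in matches.items()
        if name in DATA_300  # only objects listed in data 300 are reported
    ]

# A person detected at pixel position (320, 180) in the frame taken at t=0.0:
print(detect_objects(0.0, {"person": (320, 180)}))
# → [{'name': 'person', 'position': (320, 180), 'time': 0.0}]
```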

 The determination server 50 determines, based on the result detected by the detection unit 33, the degree of danger that the object detected by the detection unit 33 will come into contact with the user. The determination server 50 then generates, based on the degree of danger, notification information indicating to the user that there is danger, and outputs the generated notification information to the communication terminal 10.

 The determination server 50 includes a storage unit 51, an acquisition unit 52, a determination unit 53, a generation unit 54, and an output unit 55.

 The storage unit 51 stores information used in various processes performed by the determination server 50, such as determination of the degree of danger. Specifically, the storage unit 51 stores the plurality of captured images acquired from the communication terminal 10, the object information acquired from the object detection server 30, and the like.

 The acquisition unit 52 acquires the object information from the object detection server 30. The acquisition unit 52 also acquires the plurality of captured images from the communication terminal 10. The plurality of captured images are used by the generation unit 54, described later, to generate the notification information.

 The determination unit 53 determines, based on the result (object information) detected by the detection unit 33, the degree of danger that the object detected by the detection unit 33 will come into contact with the user. Hereinafter, the method by which the determination unit 53 determines the degree of danger will be described using an example in which a plurality of captured images including the image P1 shown in FIG. 1 have been acquired. The determination unit 53 determines the degree of danger based on the positions of the object in the plurality of captured images detected by the detection unit 33. As shown in FIG. 4, in the present embodiment, the degree of danger is classified into four ranks: "high", "medium", "low", and "none". In the present embodiment, the determination unit 53 determines each of the contact possibility, the distance between the user and the object, and the approach speed of the object toward the user, and, based on these results, determines the most appropriate of "high", "medium", "low", and "none". The contact possibility is the possibility that the object will come into contact with the user, and is classified as, for example, either "contact" or "non-contact".
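One possible mapping from the three judged quantities (contact possibility, distance, approach speed) to the four ranks is sketched below; every threshold is an illustrative assumption, since the specification does not fix concrete values:

```python
def danger_rank(contact, distance_m, approach_speed_mps):
    """Map the three judged quantities to one of the four ranks.

    contact            -- contact possibility ("contact" -> True)
    distance_m         -- distance between the user and the object
    approach_speed_mps -- approach speed of the object toward the user

    All numeric thresholds below are assumptions for illustration only.
    """
    if not contact:
        return "none"  # a non-contact object carries no danger rank
    if distance_m < 3 or approach_speed_mps > 2:
        return "high"
    if distance_m < 10:
        return "medium"
    return "low"

# A contact-course object 2 m away is ranked "high":
print(danger_rank(True, 2, 1))
```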

 Here, the method by which the determination unit 53 determines the contact possibility will be described in detail. The determination unit 53 determines the contact possibility by performing a frontal determination and a movement vector determination. The frontal determination is a determination of the position of the object with respect to the user. The movement vector determination is a determination of the region into which the object is predicted to move.

 As the frontal determination, the determination unit 53 determines, based on the positions of the object in the plurality of captured images, whether the object is located in the frontal region with respect to the user. In the present embodiment, the determination unit 53 determines whether the object is located within a region of a predetermined angle extending forward from the user. The predetermined angle is, for example, ±15°. That is, in the present embodiment, the "frontal region" means a region of ±15° extending forward from the user when viewed from the vertical direction. The determination unit 53 determines whether the object is located in the frontal region based on the position of the object included in the object information. In the frontal determination, when the determination unit 53 determines that the object is located in the above-mentioned ±15° region, the result is "front"; when it determines that the object is not located in that region, the result is "non-front". The frontal region is not limited to the region of ±15° extending forward from the user, and may be a region of another angle.
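Assuming the object's position has already been converted to ground-plane coordinates (distances ahead of and lateral to the user), the ±15° frontal determination can be sketched as follows; the patent derives the position from the captured images, so this coordinate form is an assumption:

```python
import math

FRONT_HALF_ANGLE_DEG = 15.0  # the ±15° frontal region from the text

def is_front(ahead_m, lateral_m):
    """Frontal determination: is the object within ±15° of the user's
    forward direction, seen from the vertical direction (top view)?"""
    if ahead_m <= 0:
        return False  # behind (or beside) the user: never "front"
    angle = math.degrees(math.atan2(abs(lateral_m), ahead_m))
    return angle <= FRONT_HALF_ANGLE_DEG

# The signboard H2 (5 m ahead, 1 m to the side) is about 11.3° off-axis,
# so it falls inside the frontal region:
print(is_front(5.0, 1.0))  # → True
```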

 The determination unit 53 also performs the movement vector determination based on the positions of the object included in the object information. Specifically, the determination unit 53 calculates the movement vector of the object by taking the differences between the positions of the object in consecutive captured images. As an example, the determination unit 53 takes the time difference between the position of the object in one of the plurality of images (hereinafter, the "first image") and the position of the object in the second image, which is the image captured next after the first image. The determination unit 53 then takes the time difference between the position of the object in the second image and the position of the object in the third image, which is the image captured next after the second image, and by repeating such processing calculates the movement vector of the object. The determination unit 53 then determines the range in which the object will move based on the calculated movement vector. Specifically, based on the movement vector, for an object in the frontal region, the determination unit 53 determines whether the object will remain and move within the frontal region, or will move toward a region different from the frontal region. For an object in a region different from the frontal region, the determination unit 53 determines whether the object will remain and move within that different region, or will move toward the frontal region. In this way, the determination unit 53 determines the region into which the object is predicted to move, based on the movement vector of the object derived from the positions of the object in the plurality of captured images.
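The frame-to-frame time-difference computation of the movement vector can be sketched as follows, assuming the object's position in each consecutive captured image is available as (x, y) coordinates; averaging the differences is one simple way to combine them:

```python
def movement_vector(positions):
    """Estimate an object's movement vector from its positions in
    temporally consecutive captured images, by averaging the
    frame-to-frame position differences."""
    if len(positions) < 2:
        return (0.0, 0.0)  # not enough frames to take a difference
    dxs = [b[0] - a[0] for a, b in zip(positions, positions[1:])]
    dys = [b[1] - a[1] for a, b in zip(positions, positions[1:])]
    n = len(positions) - 1
    return (sum(dxs) / n, sum(dys) / n)

# An object drifting steadily to the right and downward over three frames:
print(movement_vector([(100, 50), (104, 52), (108, 54)]))  # → (4.0, 2.0)
```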

 The determination unit 53 then performs the contact determination based on the results of the frontal determination and the movement vector determination. Specifically, when the result of the frontal determination is "front" and the result of the movement vector determination is "remains and moves within the frontal region", the determination unit 53 determines the contact determination to be "contact". This is because an object that is located in the frontal region with respect to the user and moves within the frontal region can be said to have a high possibility of coming into contact with the user.

 On the other hand, even when the frontal determination result is "frontal", if the movement-vector determination result is "moves toward a region other than the frontal region", the determination unit 53 sets the contact determination to "non-contact". This is because, even when an object is located in the frontal region, the risk of the object contacting the user is low when the object is moving toward a different region. In other words, when the determination unit 53 estimates that the object is located in the region in front of the user but, based on the movement vector, estimates that the object will move to a region other than the frontal region, it judges the contact possibility to be lower than when it estimates that the object will remain within the frontal region while moving.

 When the frontal determination result is "non-frontal" and the movement-vector determination result is "remains within a region other than the frontal region while moving", the determination unit 53 sets the contact determination to "non-contact". This is because an object moving in a region other than the frontal region is unlikely to come into contact with the user.

 On the other hand, even when the frontal determination result is "non-frontal", if the movement-vector determination result is "moves toward the frontal region", the determination unit 53 sets the contact determination to "contact". This is because an object that moves from a region other than the frontal region toward the frontal region poses a higher risk of contacting the user than an object that remains within a region other than the frontal region while moving. In other words, when the determination unit 53 estimates that the object is located in a region other than the region in front of the user but, based on the movement vector, estimates that the object will move toward the frontal region, it judges the contact possibility to be higher than when it estimates that the object will remain outside the frontal region while moving.
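The four contact-determination cases above reduce to a small decision rule: the result is "contact" exactly when the movement vector predicts the object will occupy the frontal region. A sketch (the function name and boolean encoding are illustrative assumptions, not part of the disclosure):

```python
def judge_contact(frontal_now: bool, predicted_frontal: bool) -> bool:
    """Contact determination from the frontal determination and the
    movement-vector determination.

    frontal_now:       True if the object is currently in the frontal region
    predicted_frontal: True if the movement vector predicts the object will
                       stay in / head toward the frontal region
    Returns True for "contact", False for "non-contact".
    """
    if frontal_now and predicted_frontal:      # stays within the frontal region
        return True
    if frontal_now and not predicted_frontal:  # moves away from the frontal region
        return False
    if not frontal_now and predicted_frontal:  # heads toward the frontal region
        return True
    return False                               # stays outside the frontal region
```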

 Next, the determination of the distance between the user and the object will be described. First, the determination unit 53 determines the distance between the user and the object based on the object positions in the plurality of captured images detected by the detection unit 33. As an example, the determination unit 53 estimates the distance between the user and the object based on the position information and time information of the object included in the object information. When the determination unit 53 determines that the estimated distance is smaller than a threshold (second threshold), it determines that the distance between the user and the object is "near"; otherwise, it determines that the distance is "far". The threshold is a predetermined value, namely the limit of the user-object distance at which the risk of the object contacting the user is generally considered low.

 Next, the determination of the speed at which the object approaches the user (hereinafter simply the "approach speed") will be described. First, the determination unit 53 determines the approach speed based on the object positions in the plurality of captured images detected by the detection unit 33. As an example, the determination unit 53 estimates the approach speed based on the position information and time information of the object included in the object information. When the determination unit 53 determines that the estimated approach speed is greater than a threshold (first threshold), it determines that the approach speed is "fast"; otherwise, it determines that the approach speed is "slow". The threshold is a predetermined value, namely the limit of the approach speed at which the risk of the object contacting the user is generally considered low.
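The two threshold tests above can be sketched as follows. The concrete threshold values (3 m, 2 m/s) are illustrative assumptions only; the patent leaves the first and second thresholds as predetermined values.

```python
def judge_distance(distance_m: float, second_threshold_m: float = 3.0) -> str:
    """'near' when the estimated user-object distance is smaller than the
    second threshold, 'far' otherwise (default value is an assumption)."""
    return "near" if distance_m < second_threshold_m else "far"

def judge_speed(approach_speed_mps: float, first_threshold_mps: float = 2.0) -> str:
    """'fast' when the estimated approach speed is greater than the first
    threshold, 'slow' otherwise (default value is an assumption)."""
    return "fast" if approach_speed_mps > first_threshold_mps else "slow"
```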

 The determination unit 53 then determines the degree of danger based on the results of the contact-possibility, distance, and approach-speed determinations. First, when the contact determination is "contact" and the distance determination is "near", the determination unit 53 determines the degree of danger to be "high" regardless of the approach-speed result.

 When the contact determination is "contact" and the distance determination is "far", the determination unit 53 also takes the approach-speed determination into account. Specifically, when the contact determination is "contact", the distance determination is "far", and the approach-speed determination is "fast", the determination unit 53 determines the degree of danger to be "medium". On the other hand, when the contact determination is "contact", the distance determination is "far", and the approach-speed determination is "slow", the determination unit 53 determines the degree of danger to be "low". Further, when the contact determination is "non-contact", the determination unit 53 determines the degree of danger to be "none" regardless of the distance and approach-speed results.

 As described above, when the determination unit 53 determines that the approach speed is greater than the first threshold, it judges the degree of danger to be higher than when the approach speed is not greater than the first threshold. Likewise, when the determination unit 53 determines that the distance between the user and the object is smaller than the second threshold, it judges the degree of danger to be higher than when the distance is not smaller than the second threshold.
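The degree-of-danger rules stated above form a small decision table (cf. FIG. 4), which can be written directly. The string labels are illustrative encodings of the four levels:

```python
def judge_danger(contact: bool, distance: str, speed: str) -> str:
    """Degree-of-danger decision table:
    non-contact                    -> 'none'   (regardless of distance/speed)
    contact + near                 -> 'high'   (regardless of speed)
    contact + far + fast           -> 'medium'
    contact + far + slow           -> 'low'
    """
    if not contact:
        return "none"
    if distance == "near":
        return "high"
    return "medium" if speed == "fast" else "low"
```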

 Here, using the example of image P1 shown in FIG. 1, a series of steps in the information notification system 1, from the acquisition of the plurality of captured images to the determination of the degree of danger, will be described. In the example of the plurality of captured images including image P1 shown in FIG. 1, a person H1 (object) appears in the captured images. In this case, in the object detection server 30, the acquisition unit 32 acquires the plurality of images including image P1, and the detection unit 33 transmits information about the detected person H1 (the object name ("person") and the position information and time information of person H1 in each captured image) to the determination server 50 as the detection result. In the determination server 50, the acquisition unit 52 acquires the information about person H1, and the determination unit 53 determines the degree of danger. First, the determination unit 53 performs the contact-possibility determination (that is, the contact determination). Specifically, based on the positions of person H1 in the plurality of captured images including image P1, the determination unit 53 determines "frontal" in the frontal determination (that is, person H1 is located in the frontal region) when, for example, person H1 (the object) is located within the ±15° region extending forward from the front of the user. Further, in the example shown in FIG. 1, person H1 remains within the frontal region while moving. Therefore, based on the movement vector of person H1 derived from the positions of person H1 in the plurality of captured images including image P1, the determination unit 53 determines that person H1, which is in the frontal region, will remain within the frontal region while moving. As a result, the determination unit 53 determines "contact" in the contact determination based on the results of the frontal determination and the movement-vector determination.

 The determination unit 53 then determines the distance between the user and person H1 and the approach speed. In the example shown in FIG. 1, at the times when the plurality of captured images including image P1 were captured, the distance between the user and person H1 is not smaller than the threshold, and person H1 is approaching the user at a speed greater than the threshold. In this case, based on the positions of person H1 in the plurality of captured images including image P1, the determination unit 53 determines that the distance between the user and the object is "far" and that the approach speed is "fast".

 Based on the results of the above determinations (contact possibility: "contact", distance between the user and person H1: "far", and approach speed: "fast"), the determination unit 53 determines that the degree of danger of person H1 contacting the user is "medium" (see FIG. 4). The degree of danger in the example shown in FIG. 1 is determined as described above.

 The generation unit 54 generates notification information indicating danger to the user, based on the degree of danger determined by the determination unit 53. The generation unit 54 generates the notification information such that the higher the degree of danger, the more easily the user can recognize the risk of contact with the object. In the present embodiment, the notification information includes sound information and image information.

 The sound information is information for causing the speaker of the communication terminal 10 to emit a danger sound. The generation unit 54 generates the sound information (notification information) such that the danger sound (a sound corresponding to the degree of danger) is emitted from the location on the speaker corresponding to the position of the object.

 The "location on the speaker corresponding to the position of the object" is, for example, a location on the speaker of the communication terminal 10 worn by the user (hereinafter simply the "speaker") that reflects the position of the object as seen from the user. As an example, when the object as seen from the user is located in front of and to the left of the user (hereinafter simply "front left"), the location on the speaker on the left side of the user's body is the location corresponding to the position of the object. The generation unit 54 generates the notification information based on, for example, the position information of the object included in the object information, such that the speaker emits a danger sound indicating the position of the object in the user's left-right direction. Specifically, for example, when the object as seen from the user is located at the front left, the generation unit 54 generates the notification information such that the danger sound is emitted from a location on the speaker close to the user's left ear.

 Further, when the speaker of the communication terminal 10 supports 3D audio, the generation unit 54 may generate the notification information such that the speaker emits a danger sound indicating the position of the object in three dimensions (that is, in the user's left-right and front-back directions). Specifically, for example, when the object as seen from the user is located at the front left, the generation unit 54 generates the notification information such that the danger sound is emitted from a location on the speaker close to the user's left ear and is heard by the user as coming from the front-left direction.
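One simple way to realize the left-right localization described above is to map the object's horizontal direction to per-channel speaker gains with an equal-power pan law. This pan law is a common audio technique used here as an assumption for illustration; the patent does not specify how the localization is implemented.

```python
import math

def stereo_gains(azimuth_deg: float) -> tuple:
    """Map an object's horizontal direction (0 deg = straight ahead,
    negative = left, positive = right, clamped to +/-90 deg) to
    (left, right) speaker gains using an equal-power pan law, so an
    object at the front left sounds near the user's left ear."""
    az = max(-90.0, min(90.0, azimuth_deg))
    # Pan angle in [0, 90 deg]: -90 deg -> full left, +90 deg -> full right
    theta = math.radians((az + 90.0) / 2.0)
    return (math.cos(theta), math.sin(theta))
```

For example, an object directly to the left yields gains of roughly (1.0, 0.0), while an object straight ahead yields equal gains on both channels.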

 The danger sound is a sound corresponding to the degree of danger. A "sound corresponding to the degree of danger" is a sound whose volume, content, and so on change with the level of danger, for example, a sound that becomes louder as the degree of danger increases, or a sound that calls for stronger attention as the degree of danger increases.

 In the present embodiment, the generation unit 54 generates, as the danger sound, a sound that conveys the position of the object, the name of the object, the fact that the object is approaching, and the manner in which the object is approaching. The sound is generated based on, for example, the name of the object included in the object information, the distance between the user and the object, and the approach speed. When the object is a person, examples of the manner of approach include running and walking. In the example shown in FIG. 3, person H1, the object, is running toward the user from 5 m ahead. Therefore, the generation unit 54 generates a danger sound M1 conveying that person H1 is running toward the user from 5 m ahead.

 The generation unit 54 generates the image information (notification information) such that a danger image is displayed on the screen of the communication terminal 10 (hereinafter simply the "screen"). The danger image is an image that makes it easier for the user to recognize the risk of contact with the object as the degree of danger increases. Information about the object is displayed in the danger image. The information about the object is, for example, a captured image acquired by the acquisition unit 32; that is, the danger image is an image based on the captured images. Depending on the degree of danger (specifically, the lower the degree of danger), the danger image may also include (have superimposed on it) other information different from the object. The other information is information different from the object, such as information for a message application (e.g., e-mail to the user) or information for a map application, acquired from, for example, another server (not shown). The generation unit 54 generates the image information such that temporally continuous danger images corresponding to the plurality of captured images are displayed on the screen.

 Here, the generation of the image information will be described using the examples shown in FIGS. 5, 6, and 7. When there is no object in the region in front of the user, or when the determination unit 53 determines that the degree of danger is "none", the generation unit 54 generates a normal image in which the other information is superimposed on the captured image. The generation unit 54 generates the image information such that temporally continuous normal images corresponding to the plurality of captured images are displayed on the screen. The image P3 shown in FIG. 5 is an example of the normal image, in which other information (information different from the information about the object) Z, namely message information, weather information, and map information, is superimposed on one image Pn included in the plurality of captured images. In image P3, the other information Z is superimposed on image Pn in an opaque display state.

 When the determination unit 53 determines that the degree of danger is "low", the generation unit 54 generates a danger image in which the other information is superimposed on the captured image. The image (danger image) P4 shown in FIG. 6 is an example of the danger image when the degree of danger is "low", in which the other information Z is superimposed on one image Pd included in the plurality of captured images. In image P4, however, the image Pd (the information about the object) is displayed conspicuously.

 Specifically, the generation unit 54 generates the danger image with the other information displayed at a predetermined transparency. While the other information Z included in image P3 is displayed opaquely, the other information Z included in image P4 is displayed semi-transparently. As a result, when the degree of danger is determined to be "low", the other information in the image information is generated so as to be less conspicuous than the other information Z included in the normal image (in other words, so that the image Pd, the information about the object, stands out).

 When the determination unit 53 determines that the degree of danger is "medium" or "high", the generation unit 54 generates only the captured image as the danger image. That is, the danger image generated when the degree of danger is determined to be "medium" or "high" differs from the danger image generated when the degree of danger is determined to be "low" in that no other information is superimposed on the captured image. The image (danger image) P2 shown in FIG. 7 is an example of the danger image when the determination unit 53 determines that the degree of danger is "medium" or "high". In image P2, the information about the object is displayed conspicuously.

 Specifically, the generation unit 54 generates the danger image with the other information hidden. While the other information Z is displayed semi-transparently in image P4, which is generated when the degree of danger is "low", image P2 contains no other information Z. Furthermore, in image P2, a frame surrounding person H1, the object, is displayed. As a result, when the degree of danger is determined to be "medium" or "high", the image information is generated so that the other information is even less conspicuous (in other words, so that the information about the object stands out even more) than when the degree of danger is determined to be "low". As described above, the generation unit 54 generates the image information (notification information) such that the higher the degree of danger, the more easily the user can recognize the risk of contact with the object.
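The graduated display behavior described above (opaque overlay in the normal image, semi-transparent at "low", hidden at "medium"/"high") can be captured as a lookup of overlay opacity per degree of danger. The concrete opacity values 1.0/0.5/0.0 are assumptions for illustration; the patent only specifies "opaque", "a predetermined transparency", and "hidden".

```python
def overlay_alpha(danger: str) -> float:
    """Opacity of the 'other information' overlay for each degree of
    danger: opaque in the normal image ('none'), semi-transparent at
    'low', and hidden at 'medium' and 'high' so that the information
    about the object stands out."""
    return {"none": 1.0, "low": 0.5, "medium": 0.0, "high": 0.0}[danger]
```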

 The output unit 55 outputs the notification information generated by the generation unit 54. The output notification information is acquired by the communication terminal 10. When the communication terminal 10 acquires the notification information, the danger sound is emitted from the speaker of the communication terminal 10, and the danger image is displayed on the screen of the communication terminal 10.

 In the example in which the plurality of images including image P1 shown in FIG. 1 are acquired, the degree of danger is determined to be "medium", and notification information (sound information and image information) corresponding to the degree of danger "medium" is generated. Therefore, the danger sound M1 is emitted from the speaker of the communication terminal 10, and image P2, a danger image, is displayed on the screen of the communication terminal 10 (see FIG. 3). Specifically, the danger sound M1 is emitted from the location on the speaker corresponding to person H1 (the center in the user's left-right direction). The danger sound M1 is also emitted, as a sound corresponding to the degree of danger "medium", at a louder volume than when the degree of danger is determined to be "low". Image P2 is one of the temporally continuous danger images corresponding to the plurality of captured images, and is an image in which a frame surrounding person H1 is superimposed on image P1. As described above, the information notification system 1 notifies the user of notification information indicating danger.

 Next, the processing performed by the information notification system 1 according to the present embodiment will be described with reference to FIG. 8. FIG. 8 is a flowchart showing the processing performed by the information notification system 1.

 As shown in FIG. 8, in the information notification system 1, the object detection server 30 acquires a plurality of temporally continuous captured images from the communication terminal 10 worn by the user (step S11). Specifically, the object detection server 30 acquires a plurality of captured images of the area around the user, captured by the communication terminal 10.

 Subsequently, the object detection server 30 detects objects in the area around the user based on the plurality of captured images acquired in step S11 and the data 300 stored in the object detection server 30 (step S12). The object detection server 30 then transmits the detected object information to the determination server 50.

 Subsequently, the determination server 50 determines the degree of danger of the detected object contacting the user, based on the detection result (object information) from the object detection server 30 (step S13). The processing of step S13 will be described in detail with reference to the flowchart of FIG. 9.

 First, the determination server 50 determines, based on the object information, whether the object is located in the region in front of the user (step S21). Specifically, the determination unit 53 determines whether the object is located within the ±15° region extending forward from the front of the user.

 When it is determined that the object is located in the region in front of the user (step S21: YES), the determination server 50 determines, based on the movement vector of the object, whether the object will move toward a region other than the frontal region (step S22). Specifically, the determination server 50 calculates the movement vector based on the object positions included in the object information, and determines the region into which the object is predicted to move based on the calculated movement vector. When it is determined that the object will move toward a region other than the frontal region (step S22: YES), the determination server 50 determines that the degree of danger is "none" (step S23), and the processing shown in FIG. 9 ends. On the other hand, when it is determined that the object will not move toward a region other than the frontal region (step S22: NO), the processing proceeds to step S25.

 When it is determined that the object is not located in the region in front of the user (step S21: NO), the determination server 50 determines, based on the movement vector of the object, whether the object will move toward the frontal region (step S24). When it is determined that the object will move toward the frontal region (step S24: YES), the processing proceeds to step S25. On the other hand, when it is determined that the object will not move toward the frontal region (step S24: NO), the processing proceeds to step S23.

 Subsequently, the determination server 50 determines whether the distance between the user and the object, estimated based on the position information and time information of the object included in the object information, is smaller than the threshold (second threshold) (step S25).

 When it is determined that the estimated distance is smaller than the threshold (step S25: YES), the determination server 50 determines that the degree of danger is "high" (step S26), and the processing shown in FIG. 9 ends. On the other hand, when it is determined that the estimated distance is not smaller than the threshold (step S25: NO), the determination server 50 determines whether the approach speed, estimated based on the position information and time information of the object included in the object information, is greater than the threshold (first threshold) (step S27).

 When it is determined that the estimated approach speed is larger than the threshold value (step S27: YES), the determination server 50 determines that the degree of danger is "medium" (step S28), and the process shown in FIG. 9 ends. On the other hand, when it is determined that the estimated approach speed is not larger than the threshold value (step S27: NO), the determination server 50 determines that the degree of danger is "low" (step S29), and the process shown in FIG. 9 ends.
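The decision flow of steps S21 through S29 can be sketched as follows. This is an illustrative reconstruction, not the disclosed implementation; the function name, flag names, and threshold values are hypothetical.

```python
def determine_danger_level(in_front, leaving_front, entering_front,
                           distance_m, approach_speed_mps,
                           first_threshold=1.5, second_threshold=3.0):
    """Illustrative sketch of steps S21-S29 (names and thresholds are assumed)."""
    if in_front:
        # Step S22: an object in the front area that is heading elsewhere is harmless.
        if leaving_front:
            return "none"            # step S23
    else:
        # Step S24: an object outside the front area matters only if it is heading there.
        if not entering_front:
            return "none"            # step S23
    # Step S25: an object closer than the second threshold is always high danger.
    if distance_m < second_threshold:
        return "high"                # step S26
    # Step S27: otherwise the approach speed decides between medium and low.
    if approach_speed_mps > first_threshold:
        return "medium"              # step S28
    return "low"                     # step S29
```

Note that distance takes priority over speed in this flow: a nearby object is rated "high" regardless of how fast it is approaching.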

 Returning to the flowchart of the process shown in FIG. 8, the determination server 50 generates, based on the degree of danger, notification information indicating the danger to the user (step S14). Specifically, the determination server 50 generates sound information such that a danger sound is emitted from a location on the speaker corresponding to the position of the object, and generates image information such that a danger image is displayed on the screen of the communication terminal 10.

 Subsequently, the determination server 50 outputs the generated notification information to the communication terminal 10 (step S15). As a result, when the notification information is acquired by the communication terminal 10, the danger sound is emitted from the speaker of the communication terminal 10, and the danger image is displayed on the screen of the communication terminal 10.

 Next, the operation and effects of the information notification system 1 according to the present embodiment will be described.

 The information notification system 1 according to the present embodiment includes: an acquisition unit 32 that acquires one or a plurality of captured images of the area around the user, captured by the communication terminal 10 worn by the user; a detection unit 33 that detects an object in the area around the user based on the one or more captured images; a determination unit 53 that determines, based on the result detected by the detection unit 33, the degree of danger that the object detected by the detection unit 33 will come into contact with the user; a generation unit 54 that generates, based on the degree of danger, notification information indicating the danger to the user; and an output unit 55 that outputs the notification information to the communication terminal 10. The generation unit 54 generates the notification information such that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object.

 In the information notification system 1 according to the present embodiment, an object approaching the user is recognized based on a plurality of captured images of the area around the user, and the degree of danger that the object will come into contact with the user is determined. The information notification system 1 then generates notification information such that the higher the degree of danger, the easier it is for the user to recognize the danger of contact with the object, and outputs the notification information to the communication terminal 10. With such a configuration, the user can be notified in accordance with the danger of contact with the object. Specifically, for example, when the degree of danger that the object will come into contact with the user is low, notification information that merely informs the user of the approach of the object is output, which prevents excessive notifications unsuited to the determined degree of danger from annoying the user. Further, for example, when the degree of danger that the object will come into contact with the user is high, notification information emphasizing that danger is imminent is output to the user, so that contact between the object and the user can be appropriately avoided. As described above, according to the information notification system 1, necessary information can be appropriately notified to the user in accordance with the user's degree of danger. In addition, since the information notification system 1 suppresses excessive notifications unsuited to the determined degree of danger, it has the technical effect of reducing the processing load.

 The acquisition unit 32 acquires a plurality of temporally consecutive captured images, and the determination unit 53 determines the degree of danger that the object will come into contact with the user based on the positions of the object in the plurality of captured images detected by the detection unit 33. By taking into account the positions of the object across the plurality of temporally consecutive captured images (for example, the change in the position of the object between the plurality of captured images), the degree of danger that the object will come into contact with the user can be determined with higher accuracy.
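One way to derive a movement vector and an approach speed from the object positions detected in temporally consecutive captured images is sketched below. The 2-D user-centered coordinate representation and the function names are assumptions made for illustration, not part of the disclosed embodiment.

```python
import math

def movement_vector(positions):
    """Displacement between the first and last observed (x, y) positions.

    `positions` holds the object's coordinates in user-centered space,
    one entry per captured image, oldest first.
    """
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    return (x1 - x0, y1 - y0)

def approach_speed(positions, timestamps):
    """Rate at which the object's distance to the user (the origin)
    shrinks, in units per second. Positive values mean the object
    is getting closer; negative values mean it is receding.
    """
    d0 = math.hypot(*positions[0])
    d1 = math.hypot(*positions[-1])
    dt = timestamps[-1] - timestamps[0]
    return (d0 - d1) / dt
```

The resulting vector can be compared against the front area to decide whether the object is entering or leaving it, and the speed can be checked against the first threshold value of step S27.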

 The determination unit 53 determines the contact possibility, that is, the possibility that the object will come into contact with the user, based on the positions of the object in the plurality of captured images, and determines the degree of danger based on the determination result of the contact possibility. The contact possibility is determined based on whether or not the object is located in the area in front of the user and on the movement vector of the object derived from the positions of the object in the plurality of captured images. As a result, the contact possibility of the object is determined based on the position of the object relative to the user and the moving direction of the object. By determining the degree of danger from the determination result of the contact possibility obtained in this way, the degree of danger can be set high when, for example, the possibility of contact with the object is high, and the degree of danger can be determined with higher accuracy.

 When the determination unit 53 estimates that the object is located in an area different from the area in front of the user and, based on the movement vector, estimates that the object is moving toward the front area, it determines the contact possibility to be higher than when it estimates that the object will remain and move within the area different from the front area.

 In general, an object moving in an area different from the front area is unlikely to come into contact with the user. However, for example, when such an object is moving toward the front area, the risk of the object coming into contact with the user is considered to be higher than when the object remains and moves within the area different from the front area. If the determination unit 53 were to determine the degree of danger based only on the position of the object relative to the user, the contact possibility would be determined to be low even when the object is moving toward the front area, and as a result, the object might come into contact with the user. In the information notification system 1, the contact possibility is determined in consideration of whether or not an object located in an area different from the front area is moving toward the front area, so the degree of danger can be determined with higher accuracy.

 When the determination unit 53 estimates that the object is located in the area in front of the user and, based on the movement vector, estimates that the object is moving to an area different from the front area, it determines the contact possibility to be lower than when it estimates that the object will remain and move within the front area.

 In general, an object moving in the front area is likely to come into contact with the user. However, for example, even if an object approaching the user moves through the front area, the risk of the object coming into contact with the user is considered to be low when the object merely crosses in front of the user. If the determination unit 53 were to determine the degree of danger based only on the position of the object relative to the user, the contact possibility would be determined to be high whenever the object is located in the front area, even when the object merely crosses in front of the user, and as a result, excessive notifications unsuited to the determined degree of danger might be issued. In the information notification system 1, the contact possibility is determined in consideration of whether or not an object located in the front area is moving toward an area different from the front area, so the degree of danger can be determined with higher accuracy.

 When the determination unit 53 determines that the speed at which the object moves in the direction approaching the user is greater than the first threshold value, it determines the degree of danger to be higher than when it determines that the speed is not greater than the first threshold value.

 The risk of an object coming into contact with the user varies according to the speed at which the object approaches the user. Specifically, for example, even when an object is moving within the front area, if the object approaches the user at a low speed, the risk of contact between the user and the object is considered to be low. In the information notification system 1, the degree of danger is determined in consideration of the speed at which the object approaches the user, so the degree of danger can be determined with higher accuracy.

 When the determination unit 53 determines that the distance between the user and the object is smaller than the second threshold value, it determines the degree of danger to be higher than when it determines that the distance is not smaller than the second threshold value.

 The risk of an object coming into contact with the user varies according to the distance between the user and the object. Specifically, for example, even when an object is moving in the area in front of the user, if the object is located far from the user, the risk of the object coming into contact with the user can be said to be low. In the information notification system 1, the degree of danger is determined in consideration of the distance of the object from the user, so the degree of danger can be determined with higher accuracy.

 In the information notification system 1, the notification information includes information notified by sound through the speaker of the communication terminal 10, and the generation unit 54 generates the notification information such that a sound corresponding to the degree of danger is emitted from a location on the speaker corresponding to the position of the object. As a result, even a visually impaired user, for example, can be appropriately notified in accordance with the user's degree of danger, and such a user can more easily grasp the location of an object with a high degree of danger of contact.
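As a minimal sketch of how a sound could be emitted from "a location on the speaker corresponding to the position of the object", the horizontal bearing of the object can be mapped to a stereo pan, with the overall volume scaled by the degree of danger. The panning scheme, the coordinate convention, and the volume table below are illustrative assumptions only.

```python
import math

def stereo_gains(object_x, object_y, danger_level):
    """Left/right channel gains for a danger sound.

    (object_x, object_y): object position in user-centered coordinates,
    x to the user's right, y straight ahead. Higher danger plays louder.
    """
    # Horizontal bearing: -pi/2 (fully left) .. +pi/2 (fully right).
    bearing = math.atan2(object_x, object_y)
    pan = max(-1.0, min(1.0, bearing / (math.pi / 2)))
    volume = {"low": 0.3, "medium": 0.6, "high": 1.0}[danger_level]
    # Constant-power panning keeps perceived loudness steady across positions.
    left = volume * math.cos((pan + 1) * math.pi / 4)
    right = volume * math.sin((pan + 1) * math.pi / 4)
    return left, right
```

For example, an object directly to the user's left drives only the left channel, while an object straight ahead drives both channels equally.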

 In the information notification system 1, the notification information includes information notified by a danger image displayed on the screen of the communication terminal 10 in a mode corresponding to the degree of danger, and the generation unit 54 generates the notification information such that the higher the degree of danger, the more conspicuous the information about the object in the danger image. In particular, in the information notification system 1, the generation unit 54 generates the notification information such that, in the danger image, the higher the degree of danger, the less conspicuous the other information is made, thereby making the information about the object stand out.

 As a result, when the degree of danger of contact between the user and the object is low, the user can easily check other information such as received messages and weather information, and when the degree of danger is high, the user can easily recognize the object that may come into contact. That is, in the information notification system 1, necessary information can be appropriately notified to the user by adjusting the balance between the display of information about the object and the display of other information according to the user's degree of danger.
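The balance between the object warning and other on-screen information described above can be sketched as an opacity table driven by the degree of danger. The specific opacity values and names are illustrative assumptions, not values from the disclosed embodiment.

```python
def display_opacities(danger_level):
    """Opacity (0.0-1.0) of the object warning versus other screen content.

    Higher danger makes the warning more prominent and dims other
    information such as received messages and weather.
    """
    table = {
        "none":   {"warning": 0.0, "other": 1.0},
        "low":    {"warning": 0.4, "other": 1.0},
        "medium": {"warning": 0.8, "other": 0.5},
        "high":   {"warning": 1.0, "other": 0.1},
    }
    return table[danger_level]
```

An alternative, also consistent with the text, would be to shrink the other information rather than dim it; either way the adjustment is monotonic in the degree of danger.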

 Next, the hardware configurations of the communication terminal 10, the object detection server 30, and the determination server 50 included in the information notification system 1 will be described with reference to FIG. 10. The communication terminal 10, the object detection server 30, and the determination server 50 described above may be physically configured as computer devices including a processor 1001, a memory 1002, a storage 1003, a communication device 1004, an input device 1005, an output device 1006, a bus 1007, and the like.

 In the following description, the word "device" can be read as a circuit, a device, a unit, or the like. The hardware configurations of the communication terminal 10, the object detection server 30, and the determination server 50 may be configured to include one or more of the devices shown in FIG. 10, or may be configured without including some of the devices.

 Each function of the communication terminal 10, the object detection server 30, and the determination server 50 is realized by loading predetermined software (a program) onto hardware such as the processor 1001 and the memory 1002, causing the processor 1001 to perform computations, and controlling communication by the communication device 1004 and the reading and/or writing of data in the memory 1002 and the storage 1003.

 The processor 1001 controls the entire computer by, for example, operating an operating system. The processor 1001 may be configured as a central processing unit (CPU) including an interface with peripheral devices, a control device, an arithmetic device, registers, and the like. For example, the control functions of the detection unit 33 and the like of the object detection server 30 may be realized by the processor 1001.

 Further, the processor 1001 reads a program (program code), software modules, and data from the storage 1003 and/or the communication device 1004 into the memory 1002, and executes various processes according to them. As the program, a program that causes a computer to execute at least part of the operations described in the above embodiment is used.

 For example, the control functions of the detection unit 33 and the like of the object detection server 30 may be realized by a control program stored in the memory 1002 and operated by the processor 1001, and the other functional blocks may be realized in the same way. Although the various processes described above have been explained as being executed by one processor 1001, they may be executed simultaneously or sequentially by two or more processors 1001. The processor 1001 may be implemented with one or more chips. Note that the program may be transmitted from a network via a telecommunication line.

 The memory 1002 is a computer-readable recording medium, and may be composed of at least one of, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), an EEPROM (Electrically Erasable Programmable ROM), and a RAM (Random Access Memory). The memory 1002 may be referred to as a register, a cache, a main memory (main storage device), or the like. The memory 1002 can store a program (program code), software modules, and the like that are executable to implement the wireless communication method according to an embodiment of the present invention.

 The storage 1003 is a computer-readable recording medium, and may be composed of at least one of, for example, an optical disc such as a CD-ROM (Compact Disc ROM), a hard disk drive, a flexible disk, a magneto-optical disk (for example, a compact disc, a digital versatile disc, or a Blu-ray (registered trademark) disc), a smart card, a flash memory (for example, a card, a stick, or a key drive), a floppy (registered trademark) disk, and a magnetic strip. The storage 1003 may be referred to as an auxiliary storage device. The storage medium described above may be, for example, a database, a server, or another suitable medium including the memory 1002 and/or the storage 1003.

 The communication device 1004 is hardware (a transmission/reception device) for performing communication between computers via a wired and/or wireless network, and is also referred to as, for example, a network device, a network controller, a network card, or a communication module.

 The input device 1005 is an input device that accepts input from the outside (for example, a keyboard, a mouse, a microphone, a switch, a button, or a sensor). The output device 1006 is an output device that performs output to the outside (for example, a display, a speaker, or an LED lamp). Note that the input device 1005 and the output device 1006 may be integrated (for example, as a touch panel).

 Further, the devices such as the processor 1001 and the memory 1002 are connected by a bus 1007 for communicating information. The bus 1007 may be composed of a single bus, or may be composed of different buses between the devices.

 Further, the communication terminal 10, the object detection server 30, and the determination server 50 may be configured to include hardware such as a microprocessor, a digital signal processor (DSP), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array), and part or all of each functional block may be realized by such hardware. For example, the processor 1001 may be implemented with at least one of these pieces of hardware.

 Although the present embodiment has been described in detail above, it is obvious to those skilled in the art that the present embodiment is not limited to the embodiments described in the present specification. The present embodiment can be implemented in modified and altered forms without departing from the spirit and scope of the present invention as defined by the claims. Therefore, the description in the present specification is for illustrative purposes only and has no restrictive meaning with respect to the present embodiment.

 For example, the information notification system 1 has been described as including the communication terminal 10, the object detection server 30, and the determination server 50, but the present invention is not limited to this, and each function of the information notification system 1 may be realized by the communication terminal 10 alone.

 Further, in the above embodiment, a configuration has been exemplified in which the object detection server 30 has the acquisition unit 32 and the detection unit 33, and the determination server 50 has the determination unit 53, the generation unit 54, and the output unit 55. However, in the information notification system 1, for example, another server may include some or all of the functional components, and, for example, the communication terminal 10 may include some of the functional components.

 In the information notification system 1, the degree of danger may be determined for objects located in the areas behind, to the right of, and to the left of the user, and the notification information may be generated based on that degree of danger.

 Further, in the above embodiment, the notification information is sound information and image information, but the notification information may be only sound information or only image information, or may be other information such as, for example, light information for lighting a lamp of the communication terminal 10.

 The acquisition unit 32 may acquire a single captured image of the area around the user, captured by the communication terminal 10 worn by the user, and the detection unit 33 may detect an object in the area around the user based on the single captured image.

 The determination unit 53 may determine the contact possibility based on only one of: whether or not the object is located in the area in front of the user; and the movement vector of the object derived from the positions of the object in the plurality of captured images.

 The determination unit 53 need only determine the degree of danger based on the result detected by the detection unit 33. As an example, the determination unit 53 may determine the degree of danger based only on the contact possibility, or may determine the degree of danger using a method different from the method that comprehensively considers the contact possibility, the distance between the user and the object, and the approach speed.

 The generation unit 54 need only generate the notification information such that, in the danger image, the higher the degree of danger, the more conspicuous the information about the object. As an example, the generation unit 54 may generate the notification information such that the higher the degree of danger, the smaller the other information is displayed.

 In the information notification system 1, information other than the notification information may be notified to the user. As an example, sound information other than the notification information may be emitted from the speaker of the communication terminal 10.

 For example, as shown in FIG. 3, the detection unit 33 may detect, as objects, an object H2, which is a signboard placed on the road, in addition to the person H1, and the generation unit 54 may generate, based on the information about the object H2, information for emitting a sound that conveys the position and name of the object. The object H2 is located 5 m ahead of the user and 1 m to the left. Therefore, the generation unit 54 generates information for emitting a sound M2 indicating that there is a signboard 5 m ahead and 1 m to the left. Further, for example, as shown in FIG. 3, the detection unit 33 may detect, as an object, an object H3, which is a signboard on the street bearing a store name, and the generation unit 54 may generate, based on the information about the object H3, information for emitting a sound M3 that conveys the position of the object H3 and the type and name of the store indicated by the object H3. In this case, the type and name of the store are stored, for example, in the data 300 of the object detection server 30.

 Then, when the output unit 55 outputs the information for emitting these sounds, the above-described sounds M2 and M3 are emitted, respectively, from the location on the speaker corresponding to the object H2 (the left side in the left-right direction of the speaker, with the communication terminal 10 worn by the user) and from the location corresponding to the object H3 (the center in the left-right direction of the speaker, with the communication terminal 10 worn by the user).

 Each aspect/embodiment described in the present specification may be applied to systems using LTE (Long Term Evolution), LTE-A (LTE-Advanced), SUPER 3G, IMT-Advanced, 4G, 5G, FRA (Future Radio Access), W-CDMA (registered trademark), GSM (registered trademark), CDMA2000, UMB (Ultra Mobile Broadband), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, UWB (Ultra-Wide Band), Bluetooth (registered trademark), or other appropriate systems, and/or to next-generation systems extended based on these.

 The processing procedures, sequences, flowcharts, and the like of each aspect/embodiment described in the present specification may be reordered as long as there is no contradiction. For example, the methods described in the present specification present the elements of the various steps in an exemplary order and are not limited to the specific order presented.

 Input/output information and the like may be stored in a specific location (for example, a memory) or may be managed in a management table. Input/output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.

 A determination may be made by a value represented by one bit (0 or 1), by a Boolean value (true or false), or by a numerical comparison (for example, comparison with a predetermined value).

 Each aspect/embodiment described in this specification may be used alone, in combination, or switched between during execution. Furthermore, notification of predetermined information (for example, notification of "being X") is not limited to being performed explicitly, and may be performed implicitly (for example, by not performing notification of the predetermined information).

 Software should be interpreted broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, and the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or by another name.

 Software, instructions, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, server, or other remote source using wired technologies such as coaxial cable, optical fiber cable, twisted pair, and digital subscriber line (DSL) and/or wireless technologies such as infrared, radio, and microwave, these wired and/or wireless technologies are included within the definition of a transmission medium.

 The information, signals, and the like described in this specification may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.

 The terms described in this specification and/or the terms necessary for understanding this specification may be replaced with terms having the same or similar meanings.

 The information, parameters, and the like described in this specification may be represented by absolute values, by values relative to a predetermined value, or by other corresponding information.

 The communication terminal 10 may also be referred to by those skilled in the art as a mobile communication terminal, subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or by some other suitable term.

 As used in this specification, the phrase "based on" does not mean "based only on" unless expressly stated otherwise. In other words, the phrase "based on" means both "based only on" and "based at least on".

 Where designations such as "first" and "second" are used in this specification, any reference to those elements does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient way of distinguishing between two or more elements. Accordingly, a reference to first and second elements does not mean that only two elements may be employed, or that the first element must in some way precede the second element.

 As long as "include", "including", and variations thereof are used in this specification or the claims, these terms, like the term "comprising", are intended to be inclusive. Furthermore, the term "or" as used in this specification or the claims is intended not to be an exclusive OR.

 In this specification, a plurality of devices is also included, unless it is clear from the context or from a technical standpoint that only a single device is present.

 Throughout this disclosure, the plural is included unless the singular is clearly indicated by the context.

 1 ... Information notification system, 10 ... Communication terminal (terminal), 32 ... Acquisition unit, 33 ... Detection unit, 53 ... Determination unit, 54 ... Generation unit, 55 ... Output unit, H1 ... Person (object), M1 ... Danger sound (sound), P1 ... Image (captured image), P2, P4 ... Images (danger images), Pd ... Image (information about an object), Z ... Other information (information different from the information about the object).

Claims (10)

 1. An information notification system comprising:
 an acquisition unit that acquires one or more captured images of an area around a user, the images being captured by a terminal worn by the user;
 a detection unit that detects an object in the area around the user based on the one or more captured images;
 a determination unit that determines, based on a result of the detection by the detection unit, a degree of danger that the object detected by the detection unit will come into contact with the user;
 a generation unit that generates, based on the degree of danger, notification information indicating danger to the user; and
 an output unit that outputs the notification information to the terminal,
 wherein the generation unit generates the notification information such that the higher the degree of danger, the more easily the user recognizes the danger of contact with the object.

 2. The information notification system according to claim 1, wherein the acquisition unit acquires a plurality of the captured images that are temporally consecutive, and the determination unit determines the degree of danger that the object will come into contact with the user based on positions of the object in the plurality of captured images detected by the detection unit.

 3. The information notification system according to claim 2, wherein the determination unit determines, based on the positions of the object in the plurality of captured images, a contact possibility, which is a possibility that the object will come into contact with the user, determines the degree of danger based on a result of the determination of the contact possibility, and determines the contact possibility based on at least one of whether the object is located in a region in front of the user and a movement vector of the object derived from the positions of the object in the plurality of captured images.

 4. The information notification system according to claim 3, wherein, when the determination unit estimates that the object is located in a region different from the region in front of the user and estimates, based on the movement vector, that the object will move toward the region in front, the determination unit determines the contact possibility to be higher than when it estimates that the object will remain and move within the region different from the region in front.

 5. The information notification system according to claim 3 or 4, wherein, when the determination unit estimates that the object is located in the region in front of the user and estimates, based on the movement vector, that the object will move to a region different from the region in front, the determination unit determines the contact possibility to be lower than when it estimates that the object will remain and move within the region in front.

 6. The information notification system according to any one of claims 2 to 5, wherein, when the determination unit determines that a speed at which the object moves in a direction approaching the user is greater than a first threshold, the determination unit determines the degree of danger to be higher than when it determines that the speed is not greater than the first threshold.

 7. The information notification system according to any one of claims 1 to 6, wherein, when the determination unit determines that a distance between the user and the object is smaller than a second threshold, the determination unit determines the degree of danger to be higher than when it determines that the distance is not smaller than the second threshold.

 8. The information notification system according to any one of claims 1 to 7, wherein the notification information includes information notified by sound through a speaker of the terminal, and the generation unit generates the notification information such that the sound corresponding to the degree of danger is emitted from a place on the speaker corresponding to the position of the object.

 9. The information notification system according to any one of claims 1 to 8, wherein the notification information includes information notified by a danger image displayed on a screen of the terminal in a mode corresponding to the degree of danger, and the generation unit generates the notification information such that the higher the degree of danger, the more conspicuous the information about the object is in the danger image.

 10. The information notification system according to claim 9, wherein the generation unit generates the notification information such that the information about the object is made conspicuous by making information different from the information about the object less conspicuous in the danger image as the degree of danger increases.
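For illustration only, the determinations recited in claims 3, 6, and 7 can be combined into a single score. The function below is a hypothetical sketch, not the claimed method; the thresholds, the additive weighting, and all names are assumptions:

```python
def degree_of_danger(in_front, moving_toward_front,
                     approach_speed_mps, distance_m,
                     first_threshold_mps=1.5, second_threshold_m=3.0):
    """Hypothetical scoring: the contact possibility is high when the object
    is in, or heading toward, the region in front of the user (claims 3-5);
    the degree of danger is then raised when the approach speed exceeds a
    first threshold (claim 6) and when the distance to the user falls below
    a second threshold (claim 7)."""
    contact_possibility = 1 if (in_front or moving_toward_front) else 0
    score = contact_possibility
    if approach_speed_mps > first_threshold_mps:  # claim 6: first threshold
        score += 1
    if distance_m < second_threshold_m:           # claim 7: second threshold
        score += 1
    return score  # 0 (low) .. 3 (high)
```

A notification generator could then, for example, raise the volume of the danger sound or make the danger image more conspicuous in proportion to the returned score, in the manner of claims 8 to 10.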
PCT/JP2021/016518 2020-05-12 2021-04-23 Information notification system Ceased WO2021230049A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022521807A JP7504201B2 (en) 2020-05-12 2021-04-23 Information Notification System

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-083731 2020-05-12
JP2020083731 2020-05-12

Publications (1)

Publication Number Publication Date
WO2021230049A1 (en) 2021-11-18

Family

ID=78525674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/016518 Ceased WO2021230049A1 (en) 2020-05-12 2021-04-23 Information notification system

Country Status (2)

Country Link
JP (1) JP7504201B2 (en)
WO (1) WO2021230049A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007148835A (en) * 2005-11-28 2007-06-14 Fujitsu Ten Ltd Object distinction device, notification controller, object distinction method and object distinction program
JP2009110065A (en) * 2007-10-26 2009-05-21 Toyota Central R&D Labs Inc Driving assistance device
JP2015118667A (en) * 2013-12-20 2015-06-25 Taisei Kaken Co., Ltd. Approach notification device
US20160253560A1 (en) * 2015-02-27 2016-09-01 Sony Corporation Visibility enhancement devices, systems, and methods
JP2016186786A (en) * 2015-01-21 2016-10-27 Toyota Motor Engineering & Manufacturing North America, Inc. Wearable smart device for hazard detection and warning based on image and audio data
JP2017536595A (en) * 2014-09-26 2017-12-07 Harman International Industries, Inc. Pedestrian information system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014096661A (en) 2012-11-08 2014-05-22 International Business Machines Corporation Method for realtime diminishing of moving object in moving image during photographing of moving image, moving image photographing apparatus for the same, and program for mentioned moving image photographing apparatus
JPWO2014188565A1 (en) * 2013-05-23 2017-02-23 Pioneer Corporation Display control device
JP2019066564A (en) 2017-09-28 2019-04-25 Nippon Seiki Co., Ltd. Display, display control method, and program

Also Published As

Publication number Publication date
JPWO2021230049A1 (en) 2021-11-18
JP7504201B2 (en) 2024-06-21

Similar Documents

Publication Publication Date Title
KR101668165B1 (en) Displaying sound indications on a wearable computing system
KR102244856B1 (en) Method for providing user interaction with wearable device and wearable device implenenting thereof
US10096301B2 (en) Method for controlling function and electronic device thereof
US9390607B2 (en) Smart device safety mechanism
US20150094118A1 (en) Mobile device edge view display insert
US10848606B2 (en) Divided display of multiple cameras
JP6705656B2 (en) Visual aids and object classification detection methods
KR20150129423A (en) Electronic Device And Method For Recognizing Gestures Of The Same
US20150158426A1 (en) Apparatus, control method thereof and computer-readable storage medium
KR20150099650A (en) Method and apparatus for displaying biometric information
US9826303B2 (en) Portable terminal and portable terminal system
JP2024506809A (en) Methods and devices for identifying dangerous acts, electronic devices, and storage media
WO2021090080A1 (en) Proximity detecting headphone devices
US10085107B2 (en) Sound signal reproduction device, sound signal reproduction method, program, and recording medium
CN107219920A (en) The recognition methods of AR glasses, device and AR glasses based on scene
US12444142B2 (en) Positioning system to position a terminal carried by a user in a vehicle
CN111274043B (en) Near field communication method, near field communication device, near field communication system, storage medium and electronic equipment
WO2021230049A1 (en) Information notification system
WO2023084945A1 (en) Output control device
KR20160022082A (en) Wearable device and method for controlling the same
JP7246255B2 (en) Information processing device and program
WO2021235147A1 (en) Information processing system
US20230091669A1 (en) Information processing apparatus, information processing system, and non-transitory computer readable medium storing program
WO2020230892A1 (en) Processing device
WO2021172137A1 (en) Content sharing system and terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21803162

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022521807

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21803162

Country of ref document: EP

Kind code of ref document: A1