WO2007037185A1 - Information presentation device, information presentation method, information presentation program, and recording medium - Google Patents
- Publication number
- WO2007037185A1 (PCT/JP2006/318915, JP2006318915W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- unit
- package
- destination
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
Definitions
- Information presentation device, information presentation method, information presentation program, and recording medium
- The present invention relates to an information presentation device, an information presentation method, an information presentation program, and a recording medium that are mounted on a moving body such as a vehicle. However, the use of the present invention is not limited to the information presentation device, the information presentation method, the information presentation program, and the recording medium described above.
- Background Art
- A conventional navigation device includes, for example, vehicle position detection means for detecting the current position of the host vehicle, route search means for searching for the optimum route from the current position to the destination according to map information, determination means for determining, on the optimum route, the branch road in the traveling direction of the vehicle and the route to be taken at that branch road, and guidance means for announcing the determination result by a voice message when the vehicle reaches the branch road. With this configuration, the device provides voice guidance for the route to be taken at a branch road (see, for example, Patent Document 1).
- Patent Document 1 Japanese Patent Application Laid-Open No. 09-329456
- However, the above-described conventional navigation device has no function for notifying the user, on arrival at a destination or waypoint, that baggage and other belongings brought into the vehicle at boarding are still in the vehicle. As a result, there is a problem that the user may get out of the vehicle leaving behind luggage or belongings placed on the seats, in the trunk, or in other parts of the vehicle.
- The problems to be solved by the present invention include the above-described problem as an example.
- The information presentation device according to the invention of claim 1 includes position acquisition means for acquiring a current position; setting means for setting a destination or a waypoint; load detection means for detecting that a load has been loaded; input means for receiving input of additional information for the load detected to be loaded by the load detection means; storage means for storing the additional information; and presentation means for presenting the additional information stored in the storage means when the current position approaches the destination or the waypoint.
- The information presentation method according to the invention of claim 5 includes a setting step of setting a destination or a waypoint, a load detection step of detecting that a load has been loaded, an input step of receiving input of additional information for the load detected to be loaded, a storage step of storing the additional information, and a presentation step of presenting the stored additional information when the current position approaches the destination or the waypoint.
- An information presentation program according to the invention of claim 6 causes a computer to execute the information presentation method according to claim 5.
- A recording medium according to the invention of claim 7 is characterized in that the information presentation program according to claim 6 is recorded thereon.
- FIG. 1 is a block diagram illustrating an example of a functional configuration of an information presentation device according to an embodiment.
- FIG. 2 is a flowchart for explaining an example of an information presentation processing procedure of the information presentation apparatus according to the embodiment.
- FIG. 3 is a block diagram of an example of a hardware configuration of the navigation device according to the first embodiment.
- FIG. 4 is a flowchart of an example of a voice memo registration process procedure in the information presentation process of the navigation device according to the first embodiment.
- FIG. 5 is a diagram showing a screen display example when registering a voice memo.
- FIG. 6 is a flowchart of an example of a voice memo reproduction process procedure in the information presentation process of the navigation device according to the first embodiment.
- FIG. 7 is a diagram showing a screen display example when a voice memo is reproduced.
- FIG. 8 is a block diagram of an example of a hardware configuration of the navigation device according to the second embodiment.
- FIG. 9 is a flowchart of an example of a voice memo registration process procedure in the information presentation process of the navigation device according to the second embodiment.
- FIG. 10 is a flowchart of an example of a voice memo reproduction process procedure in the information presentation process of the navigation device according to the second embodiment.
- FIG. 11 is a flowchart of an example of a video memo registration process procedure in the information presentation process of the navigation device according to the third embodiment.
- FIG. 1 is a block diagram showing an example of a functional configuration of an information presentation device according to an embodiment of the present invention.
- the information presentation device 100 includes a package detection unit 101, an input unit 102, a setting unit 103, a storage unit 104, a position acquisition unit 105, and a presentation unit 106. Further, the information presentation apparatus 100 may include an image information acquisition unit 107 and an identification information acquisition unit 108.
- the information presentation device 100 is mounted on a moving body such as a vehicle (including a four-wheeled vehicle and a two-wheeled vehicle).
- the load detection unit 101 detects that a load has been loaded in the moving body.
- a section for placing luggage is provided in the moving body, and when the luggage is placed in the compartment, the luggage detection unit 101 detects that the luggage has been placed.
- compartments for storing luggage include a so-called drink holder, a cup holder, a pocket (storage part) provided inside the door, a glove box, a seat, and a trunk.
- a large area such as a seat or a trunk may be subdivided into a plurality of areas.
- the input unit 102 accepts input of additional information for a package detected to be loaded by the package detection unit 101.
- The additional information of the package is information that can identify the package, such as the name of the owner, the appearance and size of the package, and the contents of the package; one or more of these items may be specified.
- the setting unit 103 sets the destination and waypoint based on information input by the user.
- The storage unit 104 stores the additional information of the package received by the input unit 102 in association with the destination or waypoint, among the destinations and waypoints set by the setting unit 103, at which the package should be unloaded.
- the storage unit 104 can be written and read at any time, and holds the stored additional information in a nonvolatile manner.
- The storage unit 104 is composed of, for example, a rewritable nonvolatile semiconductor memory such as an EEPROM (Electrically Erasable and Programmable Read Only Memory) or FRAM (ferroelectric memory), or a storage device such as an HD (Hard Disk).
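- As a concrete illustration of how the storage unit 104 might associate additional information with the destination or waypoint at which a package should be unloaded, the following is a minimal Python sketch; the class and field names (PackageMemo, MemoStore, and so on) are hypothetical and are not part of the patent disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class PackageMemo:
    """Additional information for one package (hypothetical structure)."""
    tray_id: int                           # compartment / storage tray where the package was placed
    description: str                       # e.g. owner name, appearance, size, contents
    drop_off: Optional[str] = None         # destination or waypoint where the package should be unloaded
    identification: Optional[str] = None   # optional tag information or weight used to identify the package

class MemoStore:
    """Sketch of the storage unit 104: memos keyed by drop-off point."""
    def __init__(self) -> None:
        self._memos: Dict[str, List[PackageMemo]] = {}

    def add(self, memo: PackageMemo) -> None:
        # Memos not tied to a specific point are presented at every stop.
        key = memo.drop_off or "__any__"
        self._memos.setdefault(key, []).append(memo)

    def memos_for(self, point: str) -> List[PackageMemo]:
        return self._memos.get(point, []) + self._memos.get("__any__", [])
```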
- the position acquisition unit 105 acquires the current position information of the moving object.
- the current position information is information indicating the current position of the moving object based on information such as latitude / longitude and altitude obtained by GPS.
- When the current position acquired by the position acquisition unit 105 arrives at or approaches the destination or waypoint set by the setting unit 103, the presentation unit 106 reads from the storage unit 104 the additional information of the package associated with that destination or waypoint and presents it.
- the image information acquisition unit 107 acquires image information of a load detected to be loaded by the package detection unit 101.
- the image information acquired by the image information acquisition unit 107 is received by the input unit 102 as additional information on the package.
- the input unit 102 has a function of receiving the image information acquired by the image information acquisition unit 107 as additional information on the package.
- In the case where image information is not used as additional information, the image information acquisition unit 107 and the image information receiving function of the input unit 102 are unnecessary.
- the identification information acquisition unit 108 acquires identification information for identifying individual packages.
- the identification information is, for example, information on a tag attached to the package or the weight of the package.
- the tag information is different for each package, so it is preferable as identification information.
- the weight of the package usually varies from package to package. Therefore, the weight of the package can be used as identification information.
- When the tag information is used as identification information, the identification information acquisition unit 108 is configured by a device that reads the tag information.
- When the weight of the package is used as identification information, the identification information acquisition unit 108 is configured by a device capable of detecting the package weight, such as a weight sensor or a pressure sensor.
- the package identification information acquired by the identification information acquisition unit 108 is stored in the storage unit 104 in association with the additional information of the package received by the input unit 102.
- the storage unit 104 has a function of storing the additional information of the package in association with the identification information of the package.
- When presenting the additional information of the package, the presentation unit 106 has the identification information acquisition unit 108 newly acquire the identification information of the package, and presents the additional information only when the newly acquired identification information matches the identification information stored in the storage unit 104. In the case where the package confirmation operation using the package identification information is not performed, the identification information acquisition unit 108, the identification information storage function of the storage unit 104, and the package confirmation function of the presentation unit 106 are unnecessary.
- FIG. 2 is a flowchart for explaining an example of the information presentation processing procedure of the information presentation apparatus according to the embodiment of the present invention.
- First, the setting unit 103 sets a destination or waypoint (step S201).
- the package detection unit 101 detects the package placed in a predetermined section in the moving body (step S202).
- Next, the input unit 102 receives, as additional information for the package placed in each compartment, information input by the user of the information presentation device 100 or image information acquired by the image information acquisition unit 107, and the additional information is stored in the storage unit 104 (step S203).
- the additional information of each package should be stored in association with the destination or waypoint where the package should be unloaded.
- When a destination or waypoint is already set before starting the series of information presentation processing according to this flowchart, or when the additional information of each package is not to be associated with a destination or waypoint, step S201 may be omitted and the processing may start from the detection of the package by the package detection unit 101 in step S202.
- In the case where the identification information acquisition unit 108 acquires the identification information of each package, the identification information and the additional information are stored in association with each other for each package.
- Next, the position acquisition unit 105 acquires the current position of the host vehicle, and it is determined whether or not the current position is in the vicinity of the destination or waypoint (step S204). If the current position is not near the destination or waypoint (step S204: No), the process returns to step S204 and the current position is checked again. If the current position is in the vicinity of the destination or waypoint (step S204: Yes), the presentation unit 106 reads the additional information from the storage unit 104 and presents it (step S205). At this time, if the additional information of the package is stored in association with the destination or waypoint where the package should be unloaded, the presentation unit 106 reads from the storage unit 104 only the additional information associated with the destination or waypoint at or near the current position and presents it.
- In the case where the identification information of the package is used, the identification information acquisition unit 108 acquires the package identification information again. The newly acquired identification information is then compared by the presentation unit 106 with the identification information stored in the storage unit 104, and the additional information is presented only for packages whose identification information matches. If the new and old identification information do not match, the package for which the presentation unit 106 is about to present additional information differs from the package actually placed in the compartment, so no additional information is presented. When the additional information has been presented by the presentation unit 106, the series of information presentation processing according to this flowchart ends.
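- The flow of FIG. 2 (steps S201 to S205) could be sketched as follows. This is an illustrative outline only: the device object and its methods (set_destination_and_waypoints, detect_loaded_trays, nearby_point, present, and so on) are assumptions for the sketch and not part of the disclosed implementation.

```python
def information_presentation_flow(device):
    # Step S201: set destination or waypoint
    points = device.set_destination_and_waypoints()

    # Step S202: detect packages placed in the compartments
    loaded_trays = device.detect_loaded_trays()

    # Step S203: receive additional information (user input or image) and store it
    for tray in loaded_trays:
        memo = device.input_additional_info(tray)   # text, voice, or image information
        device.store.add(memo)                      # associated with a drop-off point if given

    # Step S204: wait until the current position is near a destination or waypoint
    while True:
        position = device.acquire_current_position()
        point = device.nearby_point(position, points)   # returns the nearby point, or None
        if point is not None:
            break

    # Step S205: present the additional information associated with that point
    for memo in device.store.memos_for(point):
        if device.identification_matches(memo):     # optional check using identification info
            device.present(memo)
```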
- In this way, the presentation unit 106 reads the additional information from the storage unit 104 and presents it to the user. Therefore, when getting off the moving body, the user can know that there is a load loaded in the moving body, and can get off without leaving the luggage behind. In other words, items being left behind can be prevented.
- In addition, the destination or waypoint is set by the setting unit 103, the additional information of the package is stored in the storage unit 104 in association with that destination or waypoint, and when the moving body arrives at or near the destination or waypoint, the presentation unit 106 presents the additional information of the packages associated with it. Thus, only the additional information of the luggage to be unloaded at that destination or waypoint can be presented. Therefore, it is possible to prevent baggage that should be unloaded at another waypoint or destination from being unloaded by mistake.
- Furthermore, the identification information acquisition unit 108 acquires the package identification information, and the presentation unit 106 collates the new and old identification information when presenting the additional information, so that the additional information is presented only for packages whose identification information matches; this prevents the wrong package from being unloaded by mistake. In addition, when the image information acquired by the image information acquisition unit 107 is used as additional information of the package, the user can look at the image information presented as additional information, so that unloading the wrong package can be prevented even more reliably.
- Furthermore, when the information presentation device 100 is turned on, or when the destination or waypoint is set by the setting unit 103, and the baggage detection unit 101 detects that baggage has been placed, the input unit 102 automatically shifts to input of additional information for the package. Therefore, the operation for inputting additional information is simplified.
- In the first embodiment, the additional information on a package is called a voice memo.
- FIG. 3 is a block diagram illustrating an example of a hardware configuration of the navigation device according to the first embodiment of the present invention.
- The navigation device 300 includes a navigation control unit 301, a user operation unit 302, a display unit 303, a position acquisition unit 304, a recording medium 305, a recording medium decoding unit 306, an audio output unit 307, a communication unit 308, a route search unit 309, a route guidance unit 310, an audio generation unit 311, a speaker 312, a storage tray switch unit 313, an audio input unit 314, and a microphone 315.
- the navigation control unit 301 controls the entire navigation device 300.
- The navigation control unit 301 can be realized by, for example, a microcomputer including a CPU (Central Processing Unit) that executes predetermined arithmetic processing, a ROM (Read Only Memory) that stores various control programs, and a RAM (Random Access Memory) that functions as a work area for the CPU.
- the navigation control unit 301 includes a nonvolatile semiconductor memory such as an EEPROM or FRAM for storing a voice memo of a package.
- When performing route guidance, the navigation control unit 301 exchanges information related to route guidance with the route search unit 309, the route guidance unit 310, and the voice generation unit 311, and outputs the information obtained as a result to the display unit 303 and the audio output unit 307. Further, when registering a voice memo, the navigation control unit 301 stores the voice memo input via the microphone 315 and the voice input unit 314 in the nonvolatile semiconductor memory in the navigation control unit 301. Further, when presenting a voice memo, the navigation control unit 301 reads the voice memo from the nonvolatile semiconductor memory in the navigation control unit 301 and outputs it to the voice output unit 307.
- the user operation unit 302 outputs information input by the user, such as characters, numerical values, and various instructions, to the navigation control unit 301.
- the user operation unit 302 is, for example, a push button switch, a touch panel, a remote control, or the like. Further, the user operation unit 302 may be configured to perform an input operation by voice using the voice input unit 314 and the microphone 315.
- the display unit 303 includes, for example, a CRT (Cathode Ray Tube), a TFT (Thin Film Transistor) liquid crystal display, an organic EL (Electroluminescence) display, a plasma display, and the like.
- The display unit 303 can be configured by, for example, a video I/F (interface) and a video display device connected to the video I/F.
- The video I/F includes, for example, a graphic controller that controls the entire display device, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a control IC that controls the display of the display device based on the image information output from the graphic controller.
- This display section 303 displays map data, route guidance information, voice memo information, and various other information.
- The position acquisition unit 304 includes a GPS receiver and various sensors, and acquires the position of the moving body.
- The GPS receiver receives radio waves from GPS satellites and determines the geometric position relative to the GPS satellites.
- GPS is an abbreviation for Global Positioning System, and is a system for accurately obtaining a position on the ground by receiving radio waves from four or more satellites.
- the GPS receiver consists of an antenna for receiving radio waves from GPS satellites, a tuner that demodulates the received radio waves, and an arithmetic circuit that calculates the current position based on the demodulated information.
- The various sensors are sensors such as a speed sensor, an angular velocity sensor, and an acceleration sensor mounted on the moving body or the navigation device 300. From the information output from these sensors, the displacement, speed, moving direction, inclination angle, and the like of the moving body are calculated. By using the output information of these sensors together with the information obtained from the radio waves received by the GPS receiver, the position of the moving body can be recognized with higher accuracy.
- the recording medium 305 can be realized by, for example, an HD, a DVD (Digital Versatile Disk), a CD (Compact Disk), a memory card, or the like. Note that the recording medium 305 may accept writing of information by the recording medium decoding unit 306 and record the written information in a nonvolatile manner.
- the recording medium 305 stores map data used for route search and route guidance.
- The map data recorded on the recording medium 305 includes background data representing features such as buildings, rivers, and the ground surface, and road shape data representing the shape of roads, and is drawn two-dimensionally or three-dimensionally on the display screen of the display unit 303.
- When the navigation device 300 is guiding a route, the map data read from the recording medium 305 by the recording medium decoding unit 306 and a mark indicating the position of the moving body acquired by the position acquisition unit 304 are displayed by the navigation control unit 301 so as to overlap each other on the display screen of the display unit 303.
- The background data includes background shape data representing the shape of the background and background type data representing the type of the background.
- The background shape data includes, for example, the coordinates of representative points, polylines, polygons, and the like of the features.
- The background type data includes, for example, text data representing the names, addresses, and telephone numbers of the features, and type data of the features such as building, river, and ground surface.
- the road shape data is data relating to a road network having a plurality of nodes and links.
- A node indicates an intersection where a plurality of roads intersect, such as a three-way junction, a crossroads, or a five-way junction.
- the link indicates a road connecting the nodes.
- the road shape data further includes traffic condition data.
- The traffic condition data includes, for example, the presence or absence of traffic lights and pedestrian crossings, the presence or absence of expressway entrances, exits, and junctions, the length (distance) of each link, the road width, the direction of travel, traffic prohibitions, and the road type (expressway, toll road, general road, and the like).
- The traffic condition data also stores past congestion information obtained by statistically processing past traffic information according to season, day of the week, major holidays, time of day, and the like.
- In the present embodiment, the map data is recorded on the recording medium 305, but the map data is not limited to this.
- The map data is not limited to data provided integrally with the hardware of the navigation device 300, and may be provided outside the navigation device 300.
- In that case, the navigation device 300 acquires the map data via a network, for example, through the communication unit 308. The acquired map data is stored in the RAM or the like.
- the recording medium decoding unit 306 controls read / write of information with respect to the recording medium 305.
- the sound output unit 307 reproduces sound such as guidance sound by controlling output to the connected speaker 312.
- The audio output unit 307 can be constituted by, for example, a D/A converter that performs D/A conversion of digital audio information and an amplifier that amplifies the analog audio signal output from the D/A converter.
- The communication unit 308 includes, for example, an FM multiplex tuner, a VICS (registered trademark)/beacon receiver, a wireless communication device, and other communication devices, and communicates with other communication devices.
- the communication unit 308 may be configured to perform communication via a communication medium such as a mobile phone, PHS, communication card, and wireless LAN.
- The communication unit 308 receives road traffic information, such as congestion and traffic regulations, distributed from the VICS (Vehicle Information and Communication System) center in real time.
- The route search unit 309 searches for the optimum route from the departure point to the destination using the map data acquired from the recording medium 305 via the recording medium decoding unit 306, the VICS information acquired via the communication unit 308, and the like. Here, the optimum route is the route that best matches the conditions specified by the user. In general, there are countless routes from the departure point to the destination. For this reason, the items to be considered in the route search are set, and a route matching the conditions is searched for.
- As the departure point of the route searched by the route search unit 309, the current position of the moving body (the current position of the navigation device 300) acquired by the position acquisition unit 304 or a departure point designated by the user via the user operation unit 302 is set.
- As the destination and waypoints, points entered by the user as well as facilities found by searching the map data by genre or the like may be set.
- the route guidance unit 310 obtains the optimum route information searched by the route search unit 309, the position information of the moving body acquired by the position acquisition unit 304, and the recording medium 305 via the recording medium decoding unit 306. Based on the obtained map data, route guidance information for guiding the user to the destination is generated.
- the route guidance information generated by the route guidance unit 310 may be information that considers the traffic jam information received by the communication unit 308.
- the route guidance information generated by the route guidance unit 310 is output to the display unit 303 via the navigation control unit 301.
- The voice generation unit 311 generates various types of voice information such as guidance tones and voices. That is, based on the route guidance information generated by the route guidance unit 310, it sets a virtual sound source corresponding to the guidance point and generates voice guidance information, which is output to the voice output unit 307 via the navigation control unit 301.
- The speaker 312 reproduces (outputs) the navigation guidance sound output from the audio output unit 307 and the voice output from the navigation control unit 301 via the audio output unit 307. In addition, the speaker 312 may be, for example, headphones, and the output form of the guidance sound and voice may be changed as appropriate so that the guidance sound and voice do not form a sound field throughout the entire vehicle interior.
- the storage tray switch unit 313 includes a tray for placing a load and a switch for detecting that the load has been placed on the tray.
- The tray includes, for example, storage parts originally installed in the moving body, such as a door pocket provided on the inside of the door, a center console, a glove box, and a trunk, and retrofitted articles such as so-called drink holders, cup holders, and capsule-shaped storage cases attached to the roof.
- In this way, the storage tray includes both the originally installed storage parts and the retrofitted articles as members for storing luggage. Note that a configuration may also be adopted in which the placement of a load is detected using only the switch, without providing a storage tray.
- As the switch, a sensor capable of detecting the weight of the load, such as a weight sensor or a pressure sensor, can be used. Such a sensor is attached to, for example, the bottom of the storage tray. The weight sensor or pressure sensor is actuated by the weight of the load, and a detection signal is output from the sensor to the navigation control unit 301.
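- A minimal sketch of how a weight or pressure sensor reading could be turned into the on/off state of a storage tray switch is shown below; the threshold value and the read_sensor_grams interface are assumptions made only for illustration.

```python
LOAD_THRESHOLD_GRAMS = 50.0  # assumed minimum weight regarded as "a load is placed"

def tray_switch_is_on(tray_id: int, read_sensor_grams) -> bool:
    """Return True when the weight/pressure sensor at the bottom of tray `tray_id`
    reports a load above the threshold, i.e. when the detection signal to the
    navigation control unit 301 would be asserted."""
    weight = read_sensor_grams(tray_id)
    return weight >= LOAD_THRESHOLD_GRAMS
```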
- the voice input unit 314 converts the voice analog information input from the connected microphone 315 into voice digital information and outputs the voice digital information to the navigation control unit 301.
- The voice input unit 314 includes an A/D converter that performs A/D conversion of analog voice information.
- the microphone 315 is used for voice input to the navigation device 300.
- The luggage detection unit 101 in FIG. 1, which is part of the functional configuration of the information presentation device 100 according to the embodiment, realizes its function by, for example, the navigation control unit 301 and the storage tray switch unit 313.
- the input unit 102 realizes its function by, for example, the navigation control unit 301, the voice input unit 314, the microphone 315, and the like.
- the setting unit 103 realizes its function by, for example, the navigation control unit 301 and the user operation unit 302.
- the storage unit 104 realizes its function by, for example, the navigation control unit 301.
- the position acquisition unit 105 realizes its function by, for example, the navigation control unit 301 and the position acquisition unit 304.
- the presentation unit 106 realizes its function by, for example, a navigation control unit 301, a display unit 303, an audio output unit 307, an audio generation unit 311 and a speaker 312.
- In the first embodiment, the image information acquisition unit 107 and the identification information acquisition unit 108 of FIG. 1 are not provided.
- FIG. 4 is a flowchart showing an example of a voice memo registration processing procedure.
- FIG. 5 is a diagram showing an example of a screen displayed when registering a voice memo.
- FIG. 6 is a flowchart showing an example of a voice memo reproduction processing procedure.
- FIG. 7 is a diagram showing an example of a screen displayed when a voice memo is played.
- The processing shown in FIGS. 4 and 6 is realized, for example, by the CPU of the navigation control unit 301 executing a program stored (recorded) in the RAM, the ROM, or the recording medium 305 shown in FIG. 3.
- the description will be made mainly with reference to FIG. 3. However, the same reference numerals are given to the portions overlapping with those already described, and the description will be omitted.
- the voice memo registration processing procedure will be described.
- First, the navigation control unit 301 obtains information input from, for example, the user operation unit 302, and sets a destination and waypoints (step S401).
- Next, the navigation control unit 301 sets [n ← 1], that is, sets the value of n, which represents the storage tray number, to 1 (step S402).
- Next, the navigation control unit 301 determines whether the storage tray switch-n of the storage tray switch unit 313 is on (step S403).
- the storage tray switch—n is a switch of the nth storage tray.
- If the storage tray switch-n is on (step S403: Yes), the navigation control unit 301 determines that luggage is placed on the storage tray-n, and notifies the user that a voice memo for this luggage can be input (step S404).
- the storage tray-n is the nth storage tray. If the storage tray switch—n is OFF (step S403: No), step S404 to step S410 are skipped and the process proceeds to step S411.
- For the notification, for example, the voice generation unit 311, the voice output unit 307, and the speaker 312 may be used to output guidance prompting input by synthesized voice. A message prompting input may also be displayed on the screen of the display unit 303, or a buzzer may simply be sounded. A recorded voice may be used instead of the synthesized voice.
- FIG. 5 shows a display example when a message prompting input is displayed.
- In FIG. 5, 500 is the display screen, 501 is a message display area that displays a message prompting input, such as "Please register a voice memo", 502 is a memo information display area, and 503 and 504 are selection button display areas.
- Following step S404, the navigation control unit 301 determines whether or not a voice memo associated with the storage tray-n has already been stored (step S405). If one has been stored (step S405: Yes), the navigation control unit 301 displays, for example, in the message display area 501 (see FIG. 5) of the display screen 500 that there is a stored voice memo (step S406); for example, the message "A voice memo is registered" is displayed. Then, the process proceeds to step S407.
- If no voice memo has been stored (step S405: No), step S406 is skipped and the process proceeds to step S407.
- the navigation control unit 301 determines whether or not the voice memo skip button has been pressed via the user operation unit 302 (step S407).
- If the voice memo skip button has been pressed (step S407: Yes), steps S408 to S410 are skipped and the process proceeds to step S411.
- If the voice memo skip button has not been pressed (step S407: No), the navigation control unit 301 determines whether or not the voice memo start button has been pressed via the user operation unit 302 (step S408).
- the voice memo skip button is a button that is pressed when the voice memo is not stored.
- the voice memo start button is a button that is pressed when storing a voice memo.
- These buttons may be provided as push button switches on the casing of the navigation device 300 or the remote control. Alternatively, these buttons may be displayed on the screen of the display unit 303 as button images.
- For example, a "Start" button image corresponding to the voice memo start button may be displayed in one selection button display area 503, a "Skip" button image corresponding to the voice memo skip button may be displayed in the other selection button display area 504, and one of the button images may be selected by operating the remote controller so as to press the button in a simulated manner.
- the display screen 500 of the display unit 303 may be configured with a touch panel, and either button may be pressed by touching a “start” button image or a “skip” button image.
- When the voice memo start button is pressed (step S408: Yes), information about the luggage placed in the storage tray-n is input by voice via the navigation control unit 301, the voice input unit 314, and the microphone 315, and the voice memo associated with the storage tray-n is recorded (step S409). Then, the process proceeds to step S411.
- The voice memo is information that can identify the package, such as the name of the owner, the appearance and size of the package, and the contents of the package, and is stored in association with the storage tray in which the package was placed.
- On the other hand, if the voice memo start button has not been pressed (step S408: No), the navigation control unit 301 determines whether or not a certain time has elapsed since the voice-memo-input-enabled display was started (step S410). If the certain time has elapsed (step S410: Yes), the process proceeds to step S411. If the certain time has not elapsed (step S410: No), the process returns to step S407 and waits for the voice memo skip button or the voice memo start button to be pressed until the certain time has elapsed since the voice-memo-input-enabled display was started.
- When the storage tray switch-n is off (step S403: No), when the voice memo skip button is pressed (step S407: Yes), after the voice memo is recorded (step S409), or when the certain time has elapsed (step S410: Yes), the navigation control unit 301 increments the value of n by 1, that is, [n ← n + 1] (step S411). Next, the navigation control unit 301 determines whether or not [n > Nmax] (step S412). If [n ≤ Nmax] (step S412: No), the process returns to step S403 and repeats steps S403 to S412, so that the above-described series of processing is performed for all the storage trays. If [n > Nmax] (step S412: Yes), the series of voice memo registration processing according to this flowchart is terminated.
- In the above, the destination and waypoints are set in step S401. However, when the destination and waypoints are already set before starting the series of voice memo registration processing according to this flowchart, the processing after step S402 may be performed with the power-on of the navigation device 300 as a trigger, or with the starting of the engine of the moving body as a trigger.
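- The registration loop of FIG. 4 (steps S402 to S412) can be summarized in the following Python sketch; the nav object, its helper methods (tray_switch_is_on, record_voice_memo, and so on), and the n_max parameter are illustrative assumptions, not part of the disclosed implementation.

```python
def register_voice_memos(nav, n_max):
    """Sketch of the voice memo registration loop of FIG. 4 (steps S402-S412)."""
    n = 1                                              # step S402
    while n <= n_max:                                  # step S412
        if nav.tray_switch_is_on(n):                   # step S403
            nav.notify_input_possible(n)               # step S404, e.g. "Please register a voice memo"
            if nav.has_voice_memo(n):                  # step S405
                nav.show_existing_memo_notice(n)       # step S406
            choice = nav.wait_for_button(timeout_s=10) # steps S407 / S408 / S410
            if choice == "start":
                memo = nav.record_voice_memo()         # step S409, via microphone 315 / voice input 314
                nav.store_voice_memo(n, memo)          # associated with storage tray n
            # "skip" or timeout: nothing is stored for this tray
        n += 1                                         # step S411
```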
- Next, the voice memo playback processing procedure will be described.
- First, the navigation control unit 301 and the position acquisition unit 304 acquire the current position of the moving body, and determine whether or not the current position is in the vicinity of the destination or waypoint (step S601). If the current position is not in the vicinity of the destination or waypoint (step S601: No), the process returns to step S601 to check the current position again.
- When the current position is in the vicinity of the destination or waypoint (step S601: Yes), the navigation control unit 301 sets [n ← 1], that is, sets n to 1 (step S602).
- The timing for performing step S602 is, for example, when the moving body stops or its engine is stopped near the destination or waypoint, or when the distance to the destination or waypoint becomes less than a certain distance.
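- One common way to decide that the current position is "within a certain distance" of the destination or waypoint is a great-circle distance check on the GPS latitude/longitude. The sketch below uses the haversine formula with an assumed 300 m threshold; the patent does not specify a particular method or distance, so both are illustrative assumptions.

```python
import math

ARRIVAL_RADIUS_M = 300.0  # assumed threshold for "near the destination or waypoint"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_point(current, point, radius_m=ARRIVAL_RADIUS_M):
    """True when the current GPS position (lat, lon) is within radius_m of the given point."""
    return haversine_m(current[0], current[1], point[0], point[1]) <= radius_m
```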
- Next, it is determined whether or not the storage tray switch-n of the storage tray switch unit 313 is on (step S603). If the storage tray switch-n is off (step S603: No), steps S604 to S609 are skipped and the process proceeds to step S610. If the storage tray switch-n is on (step S603: Yes), it is determined whether or not a voice memo associated with the storage tray-n has already been registered (step S604).
- If no voice memo has been registered (step S604: No), steps S605 to S609 are skipped and the process proceeds to step S610. If a voice memo has been registered (step S604: Yes), the navigation control unit 301 notifies the user that a voice memo is registered (step S605).
- For the notification, the voice generation unit 311, the voice output unit 307, and the speaker 312 may be used to report by synthesized voice, a message may be displayed on the screen of the display unit 303, or a buzzer may simply be sounded.
- FIG. 7 shows a display example of a message reporting that a voice memo is registered. As shown in FIG. 7, for example, the navigation control unit 301 displays a message such as "There is a voice memo" in the message display area 501 of the display screen 500.
- In addition, the navigation control unit 301 may display the storage tray number in the memo information display area 502 of the display screen 500, or the number of the storage tray may be reported by synthesized voice using the voice generation unit 311, the voice output unit 307, and the speaker 312. The registered memo information associated with the storage tray-n may also be displayed. In that case, the registered voice memo needs to be converted into character information using a well-known voice recognition engine or the like, and the navigation control unit 301 has this function.
- Following step S605, the navigation control unit 301 determines whether or not the voice memo skip button has been pressed via the user operation unit 302 (step S606). If the voice memo skip button has been pressed (step S606: Yes), steps S607 to S609 are skipped and the process proceeds to step S610. If the voice memo skip button has not been pressed (step S606: No), the navigation control unit 301 determines whether or not the voice memo playback button has been pressed via the user operation unit 302 (step S607).
- the voice memo skip button is a button that is pressed when the voice memo is not reproduced.
- the voice memo playback button is a button that is pressed when the voice memo is played back.
- These buttons may be provided as push button switches on the casing of the navigation device 300 or the remote control. Alternatively, these buttons may be displayed on the screen of the display unit 303 as button images.
- For example, a "Play" button image corresponding to the voice memo playback button may be displayed in one selection button display area 503, a "Skip" button image corresponding to the voice memo skip button may be displayed in the other selection button display area 504, and one of the button images may be selected by operating the remote controller so as to press the button in a simulated manner.
- the display screen 500 of the display unit 303 may be configured by a touch panel, and either button may be pressed by touching the “play” button image or the “skip” button image.
- When the voice memo playback button is pressed (step S607: Yes), the voice memo associated with the storage tray-n is played back by the navigation control unit 301, the voice generation unit 311, the voice output unit 307, and the speaker 312 (step S608). Then, the process proceeds to step S610.
- On the other hand, if the voice memo playback button is not pressed (step S607: No), the navigation control unit 301 determines whether or not a certain time has elapsed since the voice-memo-registered display was started (step S609). If the certain time has elapsed (step S609: Yes), the process proceeds to step S610. If the certain time has not elapsed (step S609: No), the process returns to step S606 and waits for the voice memo skip button or the voice memo playback button to be pressed until the certain time has elapsed since the voice-memo-registered display was started.
- When the storage tray switch-n is off (step S603: No), when the voice memo associated with the storage tray-n is not registered (step S604: No), when the voice memo skip button is pressed (step S606: Yes), or when a certain time has elapsed since the voice-memo-registered display was started without the voice memo playback button being pressed (step S609: Yes), the navigation control unit 301 increments the value of n by 1, that is, [n ← n + 1] (step S610). Next, the navigation control unit 301 determines whether or not [n > Nmax] (step S611). If [n ≤ Nmax] (step S611: No), the process returns to step S603 and repeats steps S603 to S611, so that the above-described series of processing is performed for all the storage trays. If [n > Nmax] (step S611: Yes), the series of voice memo playback processing according to this flowchart is terminated.
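- The playback loop of FIG. 6 (steps S601 to S611) follows the same tray-by-tray pattern as registration. The sketch below is illustrative only; the nav object and its helper names are assumptions, not the disclosed implementation.

```python
def play_voice_memos(nav, n_max):
    """Sketch of the voice memo playback loop of FIG. 6 (steps S601-S611)."""
    while not nav.near_destination_or_waypoint():      # step S601
        pass                                           # keep checking the current position
    n = 1                                              # step S602
    while n <= n_max:                                  # step S611
        if nav.tray_switch_is_on(n) and nav.has_voice_memo(n):   # steps S603 / S604
            nav.show_memo_registered_notice(n)         # step S605, e.g. "There is a voice memo"
            choice = nav.wait_for_button(timeout_s=10) # steps S606 / S607 / S609
            if choice == "play":
                nav.play_voice_memo(n)                 # step S608, via speaker 312
        n += 1                                         # step S610
```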
- As described above, according to the navigation device 300 of the first embodiment, when the navigation control unit 301 and the position acquisition unit 304 detect that the moving body has reached or come close to the destination or waypoint, the voice memo is reproduced by the navigation control unit 301, the display unit 303, the voice output unit 307, the voice generation unit 311, and the speaker 312 to inform the user of its contents. Therefore, when getting off the moving body, the user can know that there is a load loaded in the moving body and can get off without leaving the luggage behind. In other words, items being left behind can be prevented.
- In addition, the navigation control unit 301 and the user operation unit 302 set the destination and waypoints, the voice memo of the package is stored in association with the destination or waypoint, and when the moving body arrives at or comes close to the destination or waypoint, the navigation control unit 301, the display unit 303, the audio output unit 307, the audio generation unit 311, and the speaker 312 reproduce the voice memo associated with that destination or waypoint. Thus, only the voice memos of packages to be unloaded at that destination or waypoint are played. Therefore, it is possible to prevent baggage that should be unloaded at another waypoint or destination from being unloaded by mistake.
- Furthermore, when the navigation control unit 301 and the storage tray switch unit 313 detect that baggage has been placed, the navigation control unit 301, the voice input unit 314, and the microphone 315 automatically shift to voice memo input. Therefore, the operation of inputting a voice memo becomes simple.
- In the first embodiment, no particular reference has been made to setting and guiding a route that reaches the destination via the waypoints, but such a route may be set and guided. In that case, the route search unit 309 searches for a route to the destination via the waypoints, and the route guidance unit 310 guides the route. During the route guidance, when the current position of the host vehicle comes close to the destination or a waypoint, the voice memo playback processing of steps S601 to S611 described above is performed.
- In the second embodiment, identification information for identifying individual packages is stored together with the voice memo. Using this identification information, it is confirmed that the package at the time of voice memo registration and the package actually placed in the storage tray whose voice memo is to be played back are the same, and then the voice memo is played.
- In the second embodiment, information from a tag attached to the package is used as identification information; therefore, the identification information is hereinafter referred to as tag information.
- Configurations similar to those of the first embodiment are denoted by the same reference numerals, and their description is omitted.
- In the second embodiment as well, the additional information on the package is called a voice memo.
- FIG. 8 is a block diagram of an example of a hardware configuration of the navigation device according to the second embodiment of the present invention.
- a navigation apparatus 800 according to the second embodiment includes an RF tag reader unit 816 in addition to the configuration of the navigation apparatus 300 according to the first embodiment shown in FIG.
- the RF tag reader unit 816 reads tag information included in the RF tag attached to the package.
- The tag information includes information such as a unique number of the package, the name of the package, the weight of the package, and the location where the package is to be unloaded. Among these pieces of information, one or more pieces suitable for identifying the package, for example the unique number of the package, are used as the package identification information.
- the tag information read by the RF tag reader unit 816 is associated with the voice memo by the navigation control unit 301 and stored in the nonvolatile semiconductor memory in the navigation control unit 301.
- the navigation control unit 301 receives tag information from the RF tag reader unit 816 and stores the tag information in a nonvolatile semiconductor memory. Further, the navigation control unit 301 reads tag information from the semiconductor memory, and determines whether the tag information read from the semiconductor memory and the tag information newly acquired by the RF tag reader unit 816 match. To do.
- The identification information acquisition unit 108 in FIG. 1, which is part of the functional configuration of the information presentation device 100 according to the embodiment, realizes its function by, for example, the navigation control unit 301 and the RF tag reader unit 816.
- In the second embodiment, the image information acquisition unit 107 in FIG. 1 is not provided.
- Other configurations are the same as those in the first embodiment.
- FIG. 9 is a flowchart showing an example of a voice memo registration processing procedure.
- FIG. 10 is a flowchart showing an example of a voice memo reproduction processing procedure.
- The processing shown in FIGS. 9 and 10 is realized, for example, by the CPU of the navigation control unit 301 executing a program stored (recorded) in the RAM, the ROM, or the recording medium 305 shown in FIG. 8.
- description will be made mainly with reference to FIG. 8. However, portions overlapping with those already described are denoted by the same reference numerals and description thereof is omitted.
- the voice memo registration processing procedure will be described.
- The voice memo registration processing procedure of the second embodiment differs from that of the first embodiment in that, following step S409, the RF tag information is read by the navigation control unit 301 and the RF tag reader unit 816 and stored in association with the storage tray-n (step S913). After the tag information is stored, the process proceeds to step S411.
- the other procedures are the same as in Example 1.
- As in the first embodiment, the processing after step S402 may be performed with the starting of the engine of the moving body or the power-on of the navigation device 800 as a trigger.
- the voice memo playback processing procedure will be described.
- the voice memo playback processing procedure of the second embodiment is different from that of the first embodiment in the following three points.
- The first point is that, instead of step S601, the current position of the moving body is acquired by the navigation control unit 301 and the position acquisition unit 304, and it is determined whether or not the current position is in the vicinity of the destination, a waypoint, or a registered place (step S1001).
- the registered place is a point registered by the user by the operation of the user operation unit 302, although it is not the destination or the waypoint.
- The second point is that, following step S604, the tag information is read again from the RF tag of the package in the storage tray-n by the navigation control unit 301 and the RF tag reader unit 816 (step S1002).
- The third point is that the newly read tag information is compared with the tag information stored at the time of voice memo registration (step S1003). If the two pieces of tag information match (step S1003: Yes), the process proceeds to step S605. If they do not match (step S1003: No), the process proceeds to step S610.
- Also in the second embodiment, the route search unit 309 may search for a route to the destination via the waypoints, and the route guidance unit 310 may guide the route.
- the same effect as the first embodiment can be obtained.
- Furthermore, the tag information is acquired by the navigation control unit 301 and the RF tag reader unit 816, and when the voice memo is reproduced by the navigation control unit 301, the display unit 303, the audio output unit 307, the audio generation unit 311, and the speaker 312, the new and old tag information are collated and the voice memo is played only for the package whose tag information matches, so it is possible to prevent the wrong package from being unloaded by mistake.
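- The package confirmation of the second embodiment amounts to comparing the tag information read at registration time with the tag information read again at playback time. Below is a minimal illustration under the assumption that the unique package number serves as the identifier; the function and key names are hypothetical.

```python
def should_play_memo(stored_tag: dict, current_tag: dict) -> bool:
    """Step S1003 in FIG. 10: play the voice memo only when the tag read at
    registration time and the tag read now identify the same package.
    Assumes the unique package number is used as the identification information."""
    return stored_tag.get("unique_number") == current_tag.get("unique_number")

# Usage sketch: the memo for tray n is played only on a match.
# if should_play_memo(nav.stored_tag(n), nav.read_rf_tag(n)):
#     nav.play_voice_memo(n)       # proceed to step S605 and following
# else:
#     pass                         # a different package is in the tray, so no memo is presented
```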
- Alternatively, a sensor capable of detecting the weight of the load, such as a weight sensor or a pressure sensor, may be attached to the bottom of the storage tray, and the weight of the load detected by this sensor may be used as the identification information of the load.
- In the third embodiment, the additional information on the package is called a video memo.
- The hardware configuration of the navigation device according to the third embodiment of the present invention is the same as the hardware configuration of the navigation device 300 according to the first embodiment shown in FIG. 3.
- However, in the third embodiment, the communication unit 308 communicates with a digital camera or the camera of a camera-equipped mobile phone and receives electronic data of images captured by the camera (hereinafter referred to as "video data").
- For this communication, a general communication standard between electronic devices, such as Bluetooth (registered trademark), can be used.
- the image information acquisition unit 107 in FIG. 1 which is a functional configuration of the information presentation device 100 according to the embodiment, realizes its function by the navigation control unit 301 and the communication unit 308, for example.
- the identification information acquisition unit 108 of FIG. 1 is not provided.
- FIG. 11 is a flowchart showing an example of the video memo registration processing procedure.
- The processing shown in FIG. 11 is realized, for example, by the CPU of the navigation control unit 301 executing a program stored (recorded) in the RAM, the ROM, or the recording medium 305 shown in FIG. 3.
- the description will be made mainly with reference to FIG. 3, but the same parts as those already described are denoted by the same reference numerals and the description thereof is omitted.
- the video memo registration process procedure will be described.
- First, a photograph of the load placed on the storage tray is taken with a digital camera or a mobile phone camera (step S1101).
- Next, the navigation control unit 301 obtains information input from, for example, the user operation unit 302, and sets a destination and waypoints (step S1102).
- Next, the navigation control unit 301 sets [n ← 1], that is, sets the value of n to 1 (step S1103).
- Next, the navigation control unit 301 determines whether or not the storage tray switch-n of the storage tray switch unit 313 is on (step S1104).
- If the storage tray switch-n is on (step S1104: Yes), the navigation control unit 301 determines that a load is placed on the storage tray-n, and notifies the user that a video memo for this load can be set (step S1105). If the storage tray switch-n is off (step S1104: No), steps S1105 to S1113 are skipped and the process proceeds to step S1114.
- For the notification, for example, the voice generation unit 311, the voice output unit 307, and the speaker 312 may be used to output guidance prompting the setting operation by synthesized voice, a message prompting the setting operation may be displayed on the screen of the display unit 303, or a buzzer may simply be sounded. A recorded voice may also be used instead of the synthesized voice.
- Next, the navigation control unit 301 and the communication unit 308 capture the video data from the digital camera or the mobile phone (step S1106).
- the navigation control unit 301 and the communication unit 308 can capture video data using, for example, a connection cable, Bluetooth, infrared communication, or the like.
- the navigation control unit 301 and the display unit 303 display the captured video data as a list on the screen (step S 1107).
- Next, the navigation control unit 301 determines whether or not a video memo associated with the storage tray-n has already been stored (step S1108). If one has been stored (step S1108: Yes), the navigation control unit 301 displays in the message display area 501 (see FIG. 5) of the display screen 500 that there is a stored video memo (step S1109); for example, the message "A video memo is registered" is displayed. Then, the process proceeds to step S1110.
- a stored video memo associated with the storage tray n may be displayed in the memo information display area 502 of the display screen 500.
- If no video memo has been stored (step S1108: No), step S1109 is skipped and the process proceeds to step S1110.
- Next, the navigation control unit 301 determines whether or not the video memo skip button has been pressed via the user operation unit 302 (step S1110). If the video memo skip button has been pressed (step S1110: Yes), steps S1111 to S1113 are skipped and the process proceeds to step S1114. If the video memo skip button has not been pressed (step S1110: No), the navigation control unit 301 determines whether or not an image has been selected from the list display via the user operation unit 302 and the video memo registration button has been pressed (step S1111).
- the video memo skip button is a button that is pressed when the video memo is not stored.
- the video memo registration button is a button that is pressed when storing a video memo.
- These buttons may be provided as push button switches on the casing of the navigation device 300 or the remote control. Alternatively, these buttons may be displayed on the screen of the display unit 303 as button images.
- For example, a "Register" button image corresponding to the video memo registration button may be displayed in one selection button display area 503, a "Skip" button image corresponding to the video memo skip button may be displayed in the other selection button display area 504, and one of the button images may be selected by operating the remote controller so as to press the button in a simulated manner.
- the display screen 500 of the display unit 303 may be configured with a touch panel, and either button may be pressed by touching a “register” button image or a “skip” button image.
- If the video memo registration button has been pressed (step S1111: Yes), the navigation control unit 301 stores the selected image as the video memo associated with the storage tray-n (step S1112). Then, the process proceeds to step S1114. On the other hand, if the video memo registration button has not been pressed (step S1111: No), the navigation control unit 301 determines whether or not a certain time has elapsed since the video-memo-input-enabled display was started (step S1113).
- If the fixed time has elapsed (step S1113: Yes), the process proceeds to step S1114. If the fixed time has not yet elapsed (step S1113: No), the process returns to step S1110 and waits for the video memo skip button or the video memo registration button to be pressed until the fixed time has elapsed since the video-memo-input-enabled display started.
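Steps S1110 to S1113 together form a short wait loop: poll for the skip or registration button until one of them is pressed or the fixed time runs out. The sketch below approximates that loop with a timeout; the two button-polling callables are hypothetical stand-ins for the user operation unit 302, and the 30-second default is an arbitrary example value.

```python
# Sketch of the S1110-S1113 wait loop (hypothetical polling callables).
import time

def wait_for_memo_choice(skip_pressed, register_pressed, timeout_s=30.0, poll_s=0.1):
    """Poll until 'skip' or 'register' is chosen, or the fixed time elapses.

    Returns "skip", "register", or "timeout".
    skip_pressed / register_pressed are zero-argument callables returning bool.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if skip_pressed():
            return "skip"            # S1110: Yes -> go to S1114
        if register_pressed():
            return "register"        # S1111: Yes -> store memo (S1112)
        time.sleep(poll_s)
    return "timeout"                 # S1113: Yes -> go to S1114
```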
- When the storage tray switch-n is off in step S1104 (step S1104: No), when the video memo skip button is pressed in step S1110 (step S1110: Yes), or when the fixed time has elapsed in step S1113 (step S1113: Yes), the navigation control unit 301 increments the value of n by 1, that is, sets [n ← n + 1] (step S1114).
- In step S1115, the navigation control unit 301 determines whether [n > Nmax] (step S1115). If [n > Nmax] does not hold (step S1115: No), the process returns to step S1104 and steps S1104 to S1115 are repeated, so that the series of operations described above is carried out for all the storage trays. If [n > Nmax] holds (step S1115: Yes), the series of video memo registration processing according to this flowchart ends.
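Taken as a whole, the registration pass of this flowchart is a loop over tray numbers 1 to Nmax that skips empty trays and stores a memo for occupied ones when the user asks for it. The following simplified sketch assumes tray occupancy, the user's choice, image selection, and storage are supplied as callables; it is an outline of the control flow, not the device's actual firmware.

```python
# Simplified sketch of the overall video memo registration loop (steps S1104-S1115).

def register_video_memos(n_max, tray_occupied, ask_user, pick_image, store):
    """Walk every storage tray and register a memo where the user wants one.

    tray_occupied(n) -> bool                           # storage tray switch-n (S1104)
    ask_user(n)      -> "skip"|"register"|"timeout"    # S1105-S1113
    pick_image(n)    -> bytes                          # image selected from the list display
    store(n, image)                                    # persist the memo for tray n (S1112)
    """
    for n in range(1, n_max + 1):           # S1103/S1114/S1115 loop control
        if not tray_occupied(n):            # S1104: No -> next tray
            continue
        choice = ask_user(n)                # S1105-S1113
        if choice == "register":
            store(n, pick_image(n))         # S1112
        # "skip" or "timeout" simply falls through to the next tray (S1114)
```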
- When the video memo is not associated with a destination or waypoint, the processing from step S1103 onward may be performed, as in the first embodiment, using the start of the moving body's engine and the powering on of the navigation device 300 as a trigger.
- Next, the video memo playback processing procedure will be described. This procedure is the same as the voice memo playback processing procedure of the first embodiment; in the flowchart shown in FIG. 6 and in the description of that procedure, "voice memo" may simply be read as "video memo".
- As described above, the navigation control unit 301 and the communication unit 308 receive and store video data from a digital camera, a camera-equipped mobile phone, or the like, so even a user who does not like voice memos, or who does not know how to create one, can register a video memo simply by selecting an appropriate image from among the already-captured images displayed on the display unit 303. Video memos can therefore be registered easily. Furthermore, by reproducing the video memo with the navigation control unit 301 and the display unit 303, it is possible to more reliably prevent the wrong baggage from being mistakenly unloaded.
- In addition, the video memo may be played back after confirming that the baggage present when the video memo was registered and the baggage actually placed in the storage tray for which the video memo is to be played back are the same.
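One way such a confirmation could be realized is to compare an identifier recorded at registration time (for example, a measured weight from the tray sensor, or a tag ID) with a fresh reading taken just before playback. The sketch below is only a guess at such a check; the weight-based identifier and the tolerance value are assumptions, not values given in the specification.

```python
# Hedged sketch: confirm the baggage is unchanged before playing its video memo.

def same_baggage(registered_weight_g, current_weight_g, tolerance_g=50):
    """Treat the load as 'the same' if its weight is within a small tolerance."""
    return abs(registered_weight_g - current_weight_g) <= tolerance_g

def maybe_play_memo(registered_weight_g, current_weight_g, play):
    """Play the memo only when the identity check passes."""
    if same_baggage(registered_weight_g, current_weight_g):
        play()
```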
- Alternatively, the route search unit 309 may search for a route to the destination via the waypoints, the route guidance unit 310 may guide the route, and the video memo reproduction processing may be performed when, during the route guidance, the current position of the host vehicle comes near the destination or a waypoint.
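The trigger described here reduces to a distance test against the destination or the next waypoint while route guidance is running. The following sketch uses a plain haversine distance and a fixed radius; both the 200 m radius and the coordinate handling are illustrative assumptions rather than values from the specification.

```python
# Sketch of the proximity trigger for video memo playback during route guidance.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_stop(current, stop, radius_m=200.0):
    """True when the vehicle is within radius_m of a destination or waypoint."""
    return distance_m(current[0], current[1], stop[0], stop[1]) <= radius_m

# Example: near_stop((35.6812, 139.7671), (35.6813, 139.7670))  -> True
```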
- the information presentation method described in the present embodiment can be realized by executing a program prepared in advance on a computer such as a personal computer or a workstation.
- This program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, or a DVD, and is executed by being read from the recording medium by a computer.
- the program may be a transmission medium that can be distributed via a network such as the Internet.
[0006] また、請求項 5の発明にかかる情報提示方法は、目的地または経由地を設定する 設定工程と、荷物が積み込まれたことを検出する荷物検出工程と、積み込まれたこと が検出された荷物に対する付加情報の入力を受け付け、該付加情報を記憶する記 憶工程と、現在位置を取得し、該現在位置が前記目的地または前記経由地に近づ いたときに、記憶されている付加情報を提示する提示工程と、を含むことを特徴とす る。 [0006] In addition, the information presentation method according to the invention of claim 5 includes a setting step of setting a destination or a transit point, a baggage detection step of detecting that a baggage is loaded, and detection of loading. A process for receiving additional information for the package and storing the additional information, and acquiring the current position and storing the additional information stored when the current position approaches the destination or the waypoint And a presentation process for presenting information.
[0007] また、請求項 6の発明に力かる情報提示プログラムは、請求項 5に記載の情報提示 方法をコンピュータに実行させることを特徴とする。 [0007] Further, an information presentation program according to claim 6 causes a computer to execute the information presentation method according to claim 5.
[0008] また、請求項 7の発明に力かる記録媒体は、請求項 6に記載の情報提示プログラム を記録したことを特徴とする。 [0008] Further, a recording medium according to claim 7 is characterized in that the information presentation program according to claim 6 is recorded.
図面の簡単な説明 Brief Description of Drawings
[0009] [図 1]図 1は、実施の形態にかかる情報提示装置の機能的構成の一例を示すブロック 図である。 FIG. 1 is a block diagram illustrating an example of a functional configuration of an information presentation device according to an embodiment.
[図 2]図 2は、実施の形態にかかる情報提示装置の情報提示処理手順の一例を説明 するフローチャートである。 FIG. 2 is a flowchart for explaining an example of an information presentation processing procedure of the information presentation apparatus according to the embodiment.
[図 3]図 3は、実施例 1にかかるナビゲーシヨン装置のハードウェア構成の一例を示す ブロック図である。 FIG. 3 is a block diagram of an example of a hardware configuration of the navigation device according to the first embodiment.
[図 4]図 4は、実施例 1にかかるナビゲーシヨン装置の情報提示処理における音声メ モ登録処理手順の一例を示すフローチャートである。 FIG. 4 is a flowchart of an example of a voice memo registration process procedure in the information presentation process of the navigation device according to the first embodiment.
[図 5]図 5は、音声メモを登録する際の画面表示例を示す図である。 FIG. 5 is a diagram showing a screen display example when registering a voice memo.
[図 6]図 6は、実施例 1にかかるナビゲーシヨン装置の情報提示処理における音声メ モ再生処理手順の一例を示すフローチャートである。 FIG. 6 is a flowchart of an example of a voice memo reproduction process procedure in the information presentation process of the navigation device according to the first embodiment.
[図 7]図 7は、音声メモを再生する際の画面表示例を示す図である。 [FIG. 7] FIG. 7 is a diagram showing a screen display example when a voice memo is reproduced.
[図 8]図 8は、実施例 2にかかるナビゲーシヨン装置のハードウェア構成の一例を示す ブロック図である。 FIG. 8 shows an example of a hardware configuration of the navigation device according to the second embodiment. It is a block diagram.
[図 9]図 9は、実施例 2にかかるナビゲーシヨン装置の情報提示処理における音声メ モ登録処理手順の一例を示すフローチャートである。 FIG. 9 is a flowchart of an example of a voice memo registration process procedure in the information presentation process of the navigation device according to the second embodiment.
[図 10]図 10は、実施例 2にかかるナビゲーシヨン装置の情報提示処理における音声 メモ再生処理手順の一例を示すフローチャートである。 FIG. 10 is a flowchart of an example of a voice memo reproduction process procedure in the information presentation process of the navigation device according to the second embodiment.
[図 11]図 11は、実施例 3にかかるナビゲーシヨン装置の情報提示処理における映像 メモ登録処理手順の一例を示すフローチャートである。 FIG. 11 is a flowchart of an example of a video memo registration process procedure in the information presentation process of the navigation device according to the third embodiment.
符号の説明 Explanation of symbols
100 情報提示装置 100 Information presentation device
101 荷物検出部 101 Luggage detector
102 入力部 102 Input section
103 設定部 103 Setting section
104 記憶部 104 Memory
105, 304 位置取得部 105, 304 Position acquisition unit
106 提示部 106 Presentation section
107 画像情報取得部 107 Image information acquisition unit
108 識別情報取得部 108 Identification information acquisition unit
300, 800 ナビゲーシヨン装置 300, 800 navigation equipment
301 ナビゲーシヨン制御部 301 Navigation control unit
302 ユーザ操作部 302 User control
303 表示部 303 Display
307 音声出力部 307 Audio output section
308 通信部 308 Communication Department
311 音声生成部 311 Speech generator
312 スピーカ 312 Speaker
313 物置トレイスイッチ部 313 Storage tray switch
314 音声入力部 314 Voice input section
315 マイク 816 RFタグリーダ部 315 microphone 816 RF tag reader
発明を実施するための最良の形態 BEST MODE FOR CARRYING OUT THE INVENTION
[0011] 以下に添付図面を参照して、この発明にかかる情報提示装置、情報提示方法、情 報提示プログラムおよび記録媒体の好適な実施の形態を詳細に説明する。 Hereinafter, preferred embodiments of an information presentation device, an information presentation method, an information presentation program, and a recording medium according to the present invention will be described in detail with reference to the accompanying drawings.
[0012] (実施の形態) [0012] (Embodiment)
(情報提示装置の機能的構成) (Functional configuration of information presentation device)
まず、この発明の実施の形態に力かる情報提示装置の内容について説明する。図 1は、この発明の実施の形態に力かる情報提示装置の機能的構成の一例を示すブ ロック図である。図 1に示すように、情報提示装置 100は、荷物検出部 101、入力部 1 02、設定部 103、記憶部 104、位置取得部 105および提示部 106を備えている。ま た、情報提示装置 100は、画像情報取得部 107および識別情報取得部 108を備え ていてもよい。情報提示装置 100は、例えば車両(四輪車、二輪車を含む)などの移 動体に搭載される。 First, the content of the information presentation apparatus which is useful for the embodiment of the present invention will be described. FIG. 1 is a block diagram showing an example of a functional configuration of an information presentation device that is useful for an embodiment of the present invention. As shown in FIG. 1, the information presentation device 100 includes a package detection unit 101, an input unit 102, a setting unit 103, a storage unit 104, a position acquisition unit 105, and a presentation unit 106. Further, the information presentation apparatus 100 may include an image information acquisition unit 107 and an identification information acquisition unit 108. The information presentation device 100 is mounted on a moving body such as a vehicle (including a four-wheeled vehicle and a two-wheeled vehicle).
[0013] 荷物検出部 101は、移動体の中に荷物が積み込まれたことを検出する。例えば移 動体の中には、荷物を置くための区画が設けられており、その区画に荷物が置かれ ると、荷物検出部 101により荷物が置かれたことを検出する。荷物を置くための区画と しては、例えばいわゆるドリンクホルダーと呼ばれるコップ置き、ドアの内側に設けら れたポケット (収納部)、グローブボックス、座席およびトランクなどである。座席やトラ ンクなどのように面積が広い区画は、複数の区画に細分化されて 、てもよ 、。 [0013] The load detection unit 101 detects that a load has been loaded in the moving body. For example, a section for placing luggage is provided in the moving body, and when the luggage is placed in the compartment, the luggage detection unit 101 detects that the luggage has been placed. Examples of compartments for storing luggage include a so-called drink holder, a cup holder, a pocket (storage part) provided inside the door, a glove box, a seat, and a trunk. A large area such as a seat or a trunk may be subdivided into a plurality of areas.
[0014] 入力部 102は、荷物検出部 101により積み込まれたことが検出された荷物に対する 付加情報の入力を受け付ける。荷物の付加情報とは、例えば所有者の名前、荷物の 外観や大きさ、および荷物の中身など、荷物を特定することができるような情報であり 、それらのうちの 1つまたは 2つ以上の情報である。設定部 103は、ユーザによって入 力された情報に基づいて、目的地や経由地を設定する。 The input unit 102 accepts input of additional information for a package detected to be loaded by the package detection unit 101. The additional information of the package is information that can identify the package, such as the name of the owner, the appearance and size of the package, and the contents of the package, and one or more of them can be specified. Information. The setting unit 103 sets the destination and waypoint based on information input by the user.
[0015] 記憶部 104は、入力部 102により受け付けられた荷物の付加情報を、設定部 103 により設定された目的地や経由地のうち、その荷物を降ろすべき目的地や経由地と 関連付けて記憶する。記憶部 104は、随時、書き込みおよび読み出しが可能で、か つ記憶した付加情報を不揮発に保持する。記憶部 104は、具体的には、例えば EE PROM (Electronically Erasable and Programmable Read Only Memo ry)や FRAM (強誘電体メモリ)などの書き換えが可能な不揮発性の半導体メモリや 、例えば HD (Hard Disk)などの記憶装置によって構成される。 [0015] Storage unit 104 stores the additional information of the package received by input unit 102 in association with the destination or waypoint where the package should be unloaded, among the destinations and waypoints set by setting unit 103. To do. The storage unit 104 can be written and read at any time, and holds the stored additional information in a nonvolatile manner. Specifically, the storage unit 104 is, for example, an EE It is composed of a rewritable nonvolatile semiconductor memory such as PROM (Electronically Erasable and Programmable Read Only Memory) and FRAM (ferroelectric memory), and a storage device such as HD (Hard Disk).
[0016] 位置取得部 105は、移動体の現在位置情報を取得する。現在位置情報とは、 GPS など力 取得した緯度'経度 ·高度などの情報に基づく移動体の現在位置を示す情 報のことをいう。提示部 106は、位置取得部 105から取得した現在位置力 設定部 1 03により設定された目的地または経由地に着いたとき、または近づいたときに、記憶 部 104から、その目的地または経由地に関連付けられた荷物の付加情報を読み出し て、提示する。 [0016] The position acquisition unit 105 acquires the current position information of the moving object. The current position information is information indicating the current position of the moving object based on information such as latitude / longitude and altitude obtained by GPS. When the presenting unit 106 arrives at or approaches the destination or waypoint set by the current position setting unit 103 acquired from the position obtaining unit 105, the presenting unit 106 reads the destination or waypoint from the storage unit 104. The additional information of the package associated with is read and presented.
[0017] 画像情報取得部 107は、荷物検出部 101により積み込まれたことが検出された荷 物の画像情報を取得する。画像情報取得部 107により取得された画像情報は、入力 部 102により荷物の付加情報として受け付けられる。この構成の場合には、入力部 1 02は、画像情報取得部 107により取得された画像情報を荷物の付加情報として受け 付ける機能を備えている。荷物の付加情報として画像情報を用いない場合には、画 像情報取得部 107と入力部 102の画像情報の受け付け機能は不要である。 The image information acquisition unit 107 acquires image information of a load detected to be loaded by the package detection unit 101. The image information acquired by the image information acquisition unit 107 is received by the input unit 102 as additional information on the package. In the case of this configuration, the input unit 102 has a function of receiving the image information acquired by the image information acquisition unit 107 as additional information on the package. When image information is not used as the additional information of the package, the image information receiving function of the image information acquisition unit 107 and the input unit 102 is unnecessary.
[0018] 識別情報取得部 108は、個々の荷物を識別するための識別情報を取得する。識別 情報とは、例えば荷物に付けられたタグの情報や、荷物の重量などである。一般に、 タグの情報は荷物ごとに異なるので、識別情報として好ましい。また、中身や外装が 全く同一の荷物を複数個、運ぶという特殊な場合を除いて、通常、荷物の重量は荷 物ごとに異なるので、荷物の重量を識別情報として用いても、差し支えない。 [0018] The identification information acquisition unit 108 acquires identification information for identifying individual packages. The identification information is, for example, information on a tag attached to the package or the weight of the package. In general, the tag information is different for each package, so it is preferable as identification information. In addition, except for the special case of carrying a plurality of packages with the same contents and exterior, the weight of the package usually varies from package to package. Therefore, the weight of the package can be used as identification information.
[0019] タグの情報を荷物の識別情報とする場合には、識別情報取得部 108は、タグの情 報を読み取る装置により構成される。荷物の重量を荷物の識別情報とする場合には 、識別情報取得部 108は、重量センサや圧力センサなど荷物の重量を検出可能な 装置により構成される。識別情報取得部 108により取得された荷物の識別情報は、 入力部 102により受け付けられたその荷物の付加情報と関連付けられて記憶部 104 に さ 4 る。 When the tag information is used as the package identification information, the identification information acquisition unit 108 is configured by a device that reads the tag information. When the package weight is used as package identification information, the identification information acquisition unit 108 is configured by a device capable of detecting the package weight, such as a weight sensor or a pressure sensor. The package identification information acquired by the identification information acquisition unit 108 is stored in the storage unit 104 in association with the additional information of the package received by the input unit 102.
[0020] この構成の場合には、記憶部 104は、荷物の付加情報を、その荷物の識別情報と 関連付けて記憶する機能を備えている。また、提示部 106は、荷物の付加情報を提 示する際に、新たに識別情報取得部 108により荷物の識別情報を取得し、その新た に取得された識別情報と記憶部 104に記憶されている識別情報とがー致する荷物に ついてのみ、付加情報を提示する。荷物の識別情報を利用した荷物の確認作業を 行わない場合には、識別情報取得部 108と記憶部 104の識別情報の記憶機能と提 示部 106の荷物確認機能は不要である。 [0020] In the case of this configuration, the storage unit 104 has a function of storing the additional information of the package in association with the identification information of the package. The presentation unit 106 also provides additional information on the package. Only when a new identification information acquisition unit 108 acquires the identification information of the package, and the newly acquired identification information matches the identification information stored in the storage unit 104. Present additional information. In the case where the baggage confirmation operation using the package identification information is not performed, the identification information storage function of the identification information acquisition unit 108 and the storage unit 104 and the package confirmation function of the presentation unit 106 are unnecessary.
[0021] (情報提示装置の情報提示処理手順) (Information presentation processing procedure of information presentation device)
次に、この発明の実施の形態に力かる情報提示装置の情報提示処理手順につい て説明する。図 2は、この発明の実施の形態にかかる情報提示装置の情報提示処理 手順の一例を説明するフローチャートである。 Next, an information presentation processing procedure of the information presentation apparatus that is relevant to the embodiment of the present invention will be described. FIG. 2 is a flowchart for explaining an example of the information presentation processing procedure of the information presentation apparatus according to the embodiment of the present invention.
[0022] 図 2に示すように、まず、情報提示装置 100のユーザによる入力操作などに伴い入 力された、図示しない操作部力もの入力情報などに基づいて、設定部 103によって、 目的地'経由地を設定する (ステップ S201)。次いで、荷物検出部 101によって、移 動体内の所定の区画に置かれた荷物を検出する (ステップ S202)。 As shown in FIG. 2, first, based on input information of an operation unit force (not shown) input in accordance with an input operation by a user of the information presentation device 100, the setting unit 103 causes the destination “ A waypoint is set (step S201). Next, the package detection unit 101 detects the package placed in a predetermined section in the moving body (step S202).
[0023] 荷物が置かれていることを検出すると、入力部 102によって、各区画に置かれた荷 物について、情報提示装置 100のユーザによる入力操作などに伴い入力された情 報や、画像情報取得部 107によって取得された画像情報を付加情報として受け付け 、その付加情報を記憶部 104に記憶させる (ステップ S203)。その際、各荷物の付加 情報を、荷物を降ろすべき目的地や経由地と関連付けて記憶させる構成とするとよ い。 [0023] When it is detected that a package is placed, the information input by the user of the information presentation device 100 or the image information about the package placed in each section by the input unit 102 is displayed. The image information acquired by the acquisition unit 107 is received as additional information, and the additional information is stored in the storage unit 104 (step S203). At this time, the additional information of each package should be stored in association with the destination or waypoint where the package should be unloaded.
[0024] 本フローチャートによる一連の情報提示処理を開始する前に既に目的地や経由地 が設定されている場合や、各荷物の付加情報を目的地や経由地と関連付けない構 成の場合には、上記ステップ S201はなくてもよい。その代わり、情報提示装置 100 の電源がオンになったときに、ステップ S202で、荷物検出部 101によって荷物を検 出する。また、識別情報取得部 108によって各荷物の識別情報を取得する構成の場 合には、各荷物ごとにその識別情報と付加情報を関連付けて記憶させる。 [0024] When a destination or waypoint is already set before starting a series of information presentation processing according to this flowchart, or when additional information of each package is not associated with the destination or waypoint The step S201 is not necessary. Instead, when the information presentation apparatus 100 is turned on, the package detection unit 101 detects the package in step S202. In the case where the identification information acquisition unit 108 acquires the identification information of each package, the identification information and the additional information are stored in association with each package.
[0025] 次いで、位置取得部 105によって、自車の現在位置を取得し、その現在位置が目 的地 ·経由地の近傍であるカゝ否かを判断する (ステップ S204)。現在位置が目的地' 経由地の近傍でない場合 (ステップ S204 :No)には、ステップ S204に戻り、現在位 置のチェックを行う。現在位置が目的地 '経由地の近傍にある場合 (ステップ S204 : Yes)には、提示部 106によって、記憶部 104から付加情報を読み出し、その付加情 報を提示する (ステップ S205)。その際、荷物の付加情報がその荷物を降ろすべき 目的地や経由地と関連付けて記憶されている場合には、提示部 106によって、記憶 部 104から、現在位置またはその近傍の目的地または経由地に関連付けられた付 加情報を読み出し、提示する。 Next, the position acquisition unit 105 acquires the current position of the host vehicle, and determines whether or not the current position is in the vicinity of the destination / route via (step S204). If the current position is not near the destination's destination (Step S204: No), the process returns to Step S204 and the current position Check the position. If the current position is in the vicinity of the destination point (step S204: Yes), the presentation unit 106 reads the additional information from the storage unit 104 and presents the additional information (step S205). At this time, if the additional information of the package is stored in association with the destination or waypoint where the package should be unloaded, the presenting unit 106 makes the destination or waypoint near the current position or its vicinity from the storage unit 104. The additional information associated with is read and presented.
[0026] また、情報提示装置 100が荷物の識別情報を利用する構成となっている場合には 、識別情報取得部 108によって、再度、荷物の識別情報を取得する。そして、提示部 106によって、新たに取得された識別情報と、記憶部 104に記憶されている識別情 報とを比較し、両識別情報が一致する場合にのみ、その一致した荷物の付加情報を 提示する。新旧二つの識別情報が一致しない場合には、提示部 106によって付加情 報を提示しょうとして 、る荷物と、その荷物が置かれて 、るとされて 、る区画に実際に 置かれている荷物とが異なるので、付加情報を提示しない。提示部 106によって付 加情報を提示したら、本フローチャートによる一連の情報提示処理を終了する。 [0026] If the information presentation device 100 is configured to use the package identification information, the identification information acquisition unit 108 acquires the package identification information again. Then, the newly obtained identification information is compared with the identification information stored in the storage unit 104 by the presenting unit 106, and only when the identification information matches, the additional information of the matched package is obtained. Present. If the new and old identification information do not match, the presentation unit 106 tries to present additional information and the package is said to have been placed, and the package actually placed in the compartment. Is not different, so no additional information is presented. When the additional information is presented by the presentation unit 106, the series of information presentation processing according to this flowchart ends.
[0027] 以上説明したように、この発明の実施の形態に力かる情報提示装置 100によれば、 位置取得部 105によって目的地や経由地に到着、またはその近くまで来たことを認 識すると、提示部 106によって、記憶部 104から付加情報を読み出し、ユーザにその 付加情報を提示する。従って、ユーザは、移動体力も降りる際に、移動体内に積み 込まれた荷物があることを知ることができるので、移動体内に荷物を置き忘れることな ぐ移動体力 降りることができる。つまり、忘れ物を防ぐことができる。 [0027] As described above, according to the information presentation device 100 according to the embodiment of the present invention, when the position acquisition unit 105 recognizes that it has arrived at or near the destination or waypoint. Then, the presentation unit 106 reads the additional information from the storage unit 104 and presents the additional information to the user. Therefore, the user can know that there is a load loaded in the moving body when the moving strength is also lowered, so that the user can get off the moving strength without leaving the luggage in the moving body. In other words, things left behind can be prevented.
[0028] また、設定部 103によって目的地や経由地を設定し、荷物の付加情報をその目的 地や経由地に関連付けて記憶部 104に記憶させ、目的地や経由地に到着、または その近くまで来たときに、提示部 106によって、その目的地や経由地に関連付けられ た荷物の付加情報を提示するので、その目的地や経由地で降ろすべき荷物の付カロ 情報だけを提示することができる。従って、別の経由地や目的地で降ろすべき荷物を 誤って降ろしてしまうのを防ぐことができる。 [0028] In addition, the destination or waypoint is set by the setting unit 103, and the additional information of the package is stored in the storage unit 104 in association with the destination or waypoint, and arrives at or near the destination or waypoint When present, the presentation unit 106 presents additional information on the packages associated with the destination or waypoint, so it is possible to present only the calorie information with the luggage to be dropped at the destination or waypoint. it can. Therefore, it is possible to prevent the baggage to be unloaded at another waypoint or destination from being unintentionally dropped.
[0029] さらに、識別情報取得部 108によって荷物の識別情報を取得し、提示部 106によつ て付加情報を提示する際に新旧二つの識別情報の照合を行い、両識別情報が一致 した荷物についてのみ付加情報を提示するので、誤って違う荷物を降ろしてしまうの を防ぐことができる。また、荷物の付加情報として、画像情報取得部 107によって取 得した画像情報を利用する場合には、ユーザが付加情報として提示された画像情報 を見ることによって、誤って違う荷物を降ろしてしまうのをより確実に防ぐことができる。 [0029] Further, the identification information acquisition unit 108 acquires the package identification information, and the presentation unit 106 collates the new and old identification information when presenting the additional information. Since the additional information is presented only for the package that has been removed, it is possible to prevent the wrong package from being unintentionally dropped. In addition, when using the image information acquired by the image information acquisition unit 107 as additional information of the package, the user may unload the wrong package by looking at the image information presented as the additional information. Can be prevented more reliably.
[0030] また、情報提示装置 100の電源がオンになったとき、または設定部 103によって目 的地や経由地が設定されたときに、荷物検出部 101によって荷物が置かれているこ とを検出すると、入力部 102によって、自動的に荷物の付加情報を入力する状態とな る。従って、付加情報を入力する操作が簡便になる。 [0030] Further, when the information presentation apparatus 100 is turned on, or when the destination or waypoint is set by the setting unit 103, it is confirmed that the baggage is placed by the baggage detection unit 101. When detected, the input unit 102 automatically enters additional package information. Therefore, an operation for inputting additional information is simplified.
[0031] 次に、この発明の実施の形態に力かる実施例について説明する。ここでは、この実 施の形態にかかる情報提示装置 100を、例えば移動体としての自動車などの車両に 搭載されるナビゲーシヨン装置に適用した場合を例示して説明する。 [0031] Next, examples that are useful for the embodiment of the present invention will be described. Here, the case where the information presentation device 100 according to this embodiment is applied to a navigation device mounted on a vehicle such as an automobile as a moving body will be described as an example.
実施例 Example
[0032] (実施例 1) [Example 1]
実施例 1では、荷物の付加情報を音声で入力する場合について説明する。従って 、実施例 1では、荷物の付加情報を音声メモと呼ぶことにする。 In the first embodiment, a case where additional information on a package is input by voice will be described. Therefore, in the first embodiment, the additional information on the package is called a voice memo.
[0033] (ナビゲーシヨン装置のハードウェア構成) [0033] (Hardware configuration of navigation device)
まず、この発明の実施例 1にかかるナビゲーシヨン装置のハードウェア構成につい て説明する。図 3は、この発明の実施例 1にかかるナビゲーシヨン装置のハードウェア 構成の一例を示すブロック図である。 First, the hardware configuration of the navigation device according to the first embodiment of the present invention will be described. FIG. 3 is a block diagram illustrating an example of a hardware configuration of the navigation device according to the first embodiment of the present invention.
[0034] 図 3に示すように、ナビゲーシヨン装置 300は、ナビゲーシヨン制御部 301、ユーザ 操作部 302、表示部 303、位置取得部 304、記録媒体 305、記録媒体デコード部 30 6、音声出力部 307、通信部 308、経路探索部 309、経路誘導部 310、音声生成部 311、スピーカ 312、物置トレイスイッチ部 313、音声入力部 314およびマイク 315を 備えている。 As shown in FIG. 3, the navigation device 300 includes a navigation control unit 301, a user operation unit 302, a display unit 303, a position acquisition unit 304, a recording medium 305, a recording medium decoding unit 306, and an audio output unit. 307, a communication unit 308, a route search unit 309, a route guidance unit 310, an audio generation unit 311, a speaker 312, a storage tray switch unit 313, an audio input unit 314, and a microphone 315.
[0035] ナビゲーシヨン制御部 301は、ナビゲーシヨン装置 300全体を制御する。ナビゲー シヨン制御部 301は、例えば所定の演算処理を実行する CPU (Central Process! ng Unit)や各種制御プログラムを格納する ROM (Read Only Memory)、およ び CPUのワークエリアとして機能する RAM (Random Access Memory)などによ つて構成されるマイクロコンピュータなどによって実現することができる。また、ナビゲ ーシヨン制御部 301は、荷物の音声メモを記憶する EEPROMや FRAMなどの不揮 発性の半導体メモリを有する。 The navigation control unit 301 controls the entire navigation device 300. The navigation control unit 301 includes, for example, a CPU (Central Process Unit) that executes predetermined arithmetic processing, a ROM (Read Only Memory) that stores various control programs, and a RAM (Random) that functions as a work area for the CPU. Access Memory) It can be realized by a microcomputer configured as described above. The navigation control unit 301 includes a nonvolatile semiconductor memory such as an EEPROM or FRAM for storing a voice memo of a package.
[0036] また、ナビゲーシヨン制御部 301は、経路誘導に際し、経路探索部 309、経路誘導 部 310および音声生成部 311との間で経路誘導に関する情報の入出力を行い、そ の結果得られる情報を表示部 303および音声出力部 307へ出力する。さらに、ナビ ゲーシヨン制御部 301は、音声メモの登録に際し、マイク 315および音声入力部 314 を介して入力された音声メモを、ナビゲーシヨン制御部 301内の不揮発性の半導体メ モリに記憶させる。また、ナビゲーシヨン制御部 301は、音声メモの提示に際し、ナビ ゲーシヨン制御部 301内の不揮発性の半導体メモリから音声メモを読み出して、音声 出力部 307へ出力する。 [0036] In addition, the navigation control unit 301 inputs and outputs information related to route guidance between the route search unit 309, the route guidance unit 310, and the voice generation unit 311 when performing route guidance, and information obtained as a result thereof. Is output to the display unit 303 and the audio output unit 307. Further, the navigation control unit 301 stores the voice memo input via the microphone 315 and the voice input unit 314 in the nonvolatile semiconductor memory in the navigation control unit 301 when registering the voice memo. Further, the navigation control unit 301 reads the voice memo from the non-volatile semiconductor memory in the navigation control unit 301 and outputs it to the voice output unit 307 when presenting the voice memo.
[0037] ユーザ操作部 302は、文字、数値、各種指示など、ユーザによって入力操作された 情報をナビゲーシヨン制御部 301に対して出力する。このユーザ操作部 302は、例え ば、押しボタン式スィッチ、タツチパネル、リモコンなどである。また、ユーザ操作部 30 2は、音声入力部 314およびマイク 315を用いて、音声によって入力操作を行う形態 としてちよい。 [0037] The user operation unit 302 outputs information input by the user, such as characters, numerical values, and various instructions, to the navigation control unit 301. The user operation unit 302 is, for example, a push button switch, a touch panel, a remote control, or the like. Further, the user operation unit 302 may be configured to perform an input operation by voice using the voice input unit 314 and the microphone 315.
[0038] 表示部 303は、例えば、 CRT (Cathode Ray Tube)、 TFT (Thin Film Tran sistor)液晶ディスプレイ、有機 EL (Electroluminescence)ディスプレイ、プラズマ ディスプレイなどを含む。表示部 303は、具体的には、例えば、映像 IZFや映像 IZ Fに接続された映像表示用のディスプレイ装置によって構成することができる。 The display unit 303 includes, for example, a CRT (Cathode Ray Tube), a TFT (Thin Film Transistor) liquid crystal display, an organic EL (Electroluminescence) display, a plasma display, and the like. Specifically, the display unit 303 can be configured by, for example, a video IZF or a video display device connected to the video IZF.
[0039] 映像 IZFは、具体的には、例えば、ディスプレイ装置全体の制御を行うグラフィック コントローラと、即時表示可能な画像情報を一時的に記憶する VRAM (Video RA M)などのバッファメモリと、グラフィックコントローラから出力される画像情報に基づい て、ディスプレイ装置を表示制御する制御 ICなどによって構成される。この表示部 30 3には、地図データや経路誘導に関する情報、音声メモに関する情報、その他各種 情報が表示される。 [0039] Specifically, the video IZF includes, for example, a graphic controller that controls the entire display device, a buffer memory such as a VRAM (Video RAM) that temporarily stores image information that can be displayed immediately, and a graphic Based on image information output from the controller, it is configured by a control IC that controls display of the display device. This display section 303 displays map data, route guidance information, voice memo information, and various other information.
[0040] 位置取得部 304は、 GPSレシーバおよび各種センサ力 構成され、移動体の位置 [0040] The position acquisition unit 304 includes a GPS receiver and various sensor forces, and the position of the moving object
(ナビゲーシヨン装置 300の位置)の情報を取得する。そして、取得した位置情報を ナビゲーシヨン制御部 301に出力する。 GPSレシーバは、 GPS衛星からの電波を受 信し、 GPS衛星との幾何学的位置を求める。 Information on (position of navigation device 300) is acquired. And the acquired location information The data is output to the navigation control unit 301. The GPS receiver receives the radio wave from the GPS satellite and determines the geometric position with the GPS satellite.
[0041] なお、 GPSとは、 Global Positioning Systemの略称であり、 4つ以上の衛星か らの電波を受信することによって地上での位置を正確に求めるシステムである。 GPS レシーバは、 GPS衛星からの電波を受信するためのアンテナ、受信した電波を復調 するチューナおよび復調した情報に基づいて現在位置を算出する演算回路などによ つて構成される。 [0041] GPS is an abbreviation for Global Positioning System, and is a system for accurately obtaining a position on the ground by receiving radio waves from four or more satellites. The GPS receiver consists of an antenna for receiving radio waves from GPS satellites, a tuner that demodulates the received radio waves, and an arithmetic circuit that calculates the current position based on the demodulated information.
[0042] 各種センサは、速度センサや角速度センサ、加速度センサなど移動体またはナビ ゲーシヨン装置 300に搭載されたセンサであり、これらのセンサから出力される情報 から、移動体などの移動変位、移動速度、移動方向、傾斜角などを算出する。このよ うに、 GPSレシーバの受信電波から得られた情報と合わせて、上記各種センサの出 力情報を用いることによって、より高い精度で移動体の位置の認識を行うことができる [0042] The various sensors are sensors such as a speed sensor, an angular speed sensor, an acceleration sensor, and the like mounted on the mobile body or the navigation device 300. From the information output from these sensors, the mobile body displacement, the mobile speed, etc. Calculate the moving direction, inclination angle, etc. In this way, by using the output information of the various sensors described above together with the information obtained from the radio wave received by the GPS receiver, the position of the moving object can be recognized with higher accuracy.
[0043] 記録媒体 305には、各種制御プログラムや各種情報がコンピュータに読み取り可 能な状態で記録されている。この記録媒体 305は、例えば、 HDや DVD (Digital V ersatile Disk)、 CD (Compact Disk)、メモリカードなどによって実現することが できる。なお、記録媒体 305は、記録媒体デコード部 306による情報の書き込みを受 け付けるとともに、書き込まれた情報を不揮発に記録するようにしてもょ 、。 [0043] Various control programs and various information are recorded on the recording medium 305 in a state readable by a computer. The recording medium 305 can be realized by, for example, an HD, a DVD (Digital Versatile Disk), a CD (Compact Disk), a memory card, or the like. Note that the recording medium 305 may accept writing of information by the recording medium decoding unit 306 and record the written information in a nonvolatile manner.
[0044] また、記録媒体 305には、経路探索および経路誘導に用いられる地図データが記 録されている。記録媒体 305に記録されている地図データは、建物、河川、地表面な どの地物 (フィーチャ)をあらわす背景データと、道路の形状をあらわす道路形状デ 一タとを有しており、表示部 303の表示画面において 2次元または 3次元に描画され る。 [0044] Further, the recording medium 305 stores map data used for route search and route guidance. The map data recorded in the recording medium 305 has background data that represents features (features) such as buildings, rivers, and the ground surface, and road shape data that represents the shape of the road. It is drawn in 2D or 3D on the 303 display screen.
[0045] ナビゲーシヨン装置 300が経路誘導中の場合は、記録媒体デコード部 306によつ て記録媒体 305から読み取られた地図データと、位置取得部 304によって取得され た移動体の位置を示すマークと力 ナビゲーシヨン制御部 301によって表示部 303 の表示画面に重ねて表示されることとなる。 [0045] When the navigation apparatus 300 is guiding a route, the map data read from the recording medium 305 by the recording medium decoding unit 306 and the mark indicating the position of the moving body acquired by the position acquisition unit 304 The force is displayed by the navigation control unit 301 so as to overlap the display screen of the display unit 303.
[0046] 背景データは、背景の形状をあらわす背景形状データと、背景の種類をあらわす 背景種別データとを有する。背景形状データは、例えば地物の代表点'ポリライン'ポ リゴン '地物の座標などを含んでいる。また、背景種別データは、例えば地物の名称 や住所 ·電話番号をあらわすテキストデータ、建物 '河川'地表面などの地物の種別 データを含んでいる。 [0046] The background data represents the background shape data representing the shape of the background and the background type. Background type data. The background shape data includes, for example, the representative point of the feature 'polyline' polygon 'and the coordinates of the feature. The background type data includes, for example, text data representing the name, address and telephone number of the feature, and type data of the feature such as the building 'river' ground surface.
[0047] 道路形状データは、複数のノードおよびリンクを有する道路ネットワークに関するデ ータである。ノードは、三叉路 '十字路'五叉路など複数の道路が交差する交差点を 示している。リンクは、ノード間を連結する道路を示している。リンクには、形状補間点 を有するものもあり、この形状補間点によって曲線道路の表現が可能となる。 [0047] The road shape data is data relating to a road network having a plurality of nodes and links. The node indicates an intersection where a plurality of roads intersect such as a three-way crossroad 'crossroad' and a five-way crossing. The link indicates a road connecting the nodes. Some links have shape interpolation points, and curved roads can be expressed by these shape interpolation points.
[0048] なお、道路形状データは、さらに交通条件データを有する。交通条件データには、 例えば各ノードについて、信号や横断歩道などの有無、高速道路の出入口ゃジヤン クシヨンの有無、各リンクの長さ(距離)、車幅、進行方向、通行禁止、道路種別(高速 道路、有料道路、一般道路など)などの情報が含まれている。この交通条件データは 、過去の渋滞情報を季節 ·曜日 ·大型連休 ·時刻などを基準に統計処理した過去渋 滞情報を記憶している。 [0048] The road shape data further includes traffic condition data. The traffic condition data includes, for example, the presence or absence of traffic lights, crosswalks, etc., the presence or absence of highway exits and traffic, the length (distance) of each link, vehicle width, direction of travel, traffic prohibition, road type ( Such as expressways, toll roads, and general roads). This traffic condition data stores past traffic information obtained by statistically processing past traffic information based on seasons, days of the week, large holidays, and time.
[0049] なお、本実施例 1では地図データを記録媒体 305に記録するようにした力 これに 限るものではない。地図データは、ナビゲーシヨン装置 300のハードウェアと一体に 設けられているものに限って記録されているものではなぐナビゲーシヨン装置 300外 部に設けられていてもよい。その場合、ナビゲーシヨン装置 300は、例えば通信部 30 8を通じて、ネットワークを介して地図データを取得する。取得された地図データは R AMなどに記 '慮される。 [0049] In the first embodiment, the force for recording the map data on the recording medium 305 is not limited to this. The map data may be provided outside the navigation device 300, not limited to the one that is provided integrally with the hardware of the navigation device 300. In that case, the navigation apparatus 300 acquires map data via a network, for example, through the communication unit 308. Acquired map data is recorded in RAM.
[0050] 記録媒体デコード部 306は、記録媒体 305に対する情報のリード Zライトの制御を 行う。音声出力部 307は、接続されたスピーカ 312への出力を制御することによって 、案内音などの音声を再生する。スピーカ 312は、一つであってもよいし、複数であつ てもよい。具体的には、音声出力部 307は、例えば音声ディジタル情報の DZA変換 を行う DZAコンバータと、 DZAコンバータから出力される音声アナログ信号を増幅 する増幅器とから構成することができる。 The recording medium decoding unit 306 controls read / write of information with respect to the recording medium 305. The sound output unit 307 reproduces sound such as guidance sound by controlling output to the connected speaker 312. There may be one speaker 312 or a plurality of speakers 312. Specifically, the audio output unit 307 can be constituted by, for example, a DZA converter that performs DZA conversion of audio digital information and an amplifier that amplifies an audio analog signal output from the DZA converter.
[0051] 通信部 308は、例えば FM多重チューナ、 VICS (登録商標) Zビーコンレシーバ、 無線通信機器およびその他の通信機器によって構成され、他の通信機器との通信を 行う。また、通信部 308は、携帯電話機、 PHS、通信カードおよび無線 LANなどの 通信媒体を介して通信を行う構成としてもょ ヽ。 [0051] The communication unit 308 includes, for example, an FM multiplex tuner, a VICS (registered trademark) Z beacon receiver, a wireless communication device, and other communication devices, and performs communication with other communication devices. Do. The communication unit 308 may be configured to perform communication via a communication medium such as a mobile phone, PHS, communication card, and wireless LAN.
[0052] 本実施例 1において、通信部 308によって取得される情報として、例えば VICS (V ehicle Information and Communication System)センター力ら目 Sf れる 渋滞や交通規制などの道路交通情報が挙げられる。また、全国の道路交通情報を 蓄積しているサーバに対しネットワークを介して所望の地域の道路交通情報を要求し 、要求した道路交通情報を取得するようにしてもよい。 In the first embodiment, as information acquired by the communication unit 308, for example, road traffic information such as traffic congestion and traffic regulation, which is recognized as VICS (Vehicle Information and Communication System) center force, can be cited. Further, it is also possible to request the road traffic information of a desired area via a network to a server that accumulates the road traffic information of the whole country, and acquire the requested road traffic information.
[0053] 経路探索部 309は、記録媒体 305から記録媒体デコード部 306を介して取得され る地図データや、通信部 308を介して取得する VICS情報などを利用して、出発地か ら目的地までの最適な経路を探索する。ここで、最適な経路とは、ユーザが指定した 条件にもっとも合致する経路である。一般に、出発地から目的地までの経路は無数 存在する。このため、経路探索に当たって考慮される事項を設定し、条件に合致する 経路を探索するようにして 、る。 [0053] The route search unit 309 uses the map data acquired from the recording medium 305 via the recording medium decoding unit 306, the VICS information acquired via the communication unit 308, and the like from the departure point to the destination. Find the best route to Here, the optimal route is the route that best meets the conditions specified by the user. In general, there are an infinite number of routes from the starting point to the destination. For this reason, items to be considered in the route search are set, and a route that matches the conditions is searched.
[0054] 例えば、経路探索部 309が探索する経路の出発地には、位置取得部 304によって 取得される移動体の現在位置(ナビゲーシヨン装置 300の現在位置)、またはユーザ 操作部 302からユーザによって指定される出発地などが設定される。また、目的地や 経由地には、ユーザによって入力された目的地や経由地の他、ジャンル検索などに 基づいて、地図データ力 検索された施設などを目的地や経由地として設定してもよ い。 [0054] For example, the starting point of the route searched by the route search unit 309 is the current position of the moving body (the current position of the navigation device 300) acquired by the position acquisition unit 304 or the user from the user operation unit 302 by the user. The designated departure place is set. For destinations and waypoints, destinations and waypoints entered by the user, as well as facilities that have been searched for map data based on genre search, etc. may be set as destinations and waypoints. Yes.
[0055] 経路誘導部 310は、経路探索部 309によって探索された最適経路情報、位置取得 部 304によって取得された移動体の位置情報および記録媒体 305から記録媒体デ コード部 306を経由して得られた地図データに基づいて、ユーザを目的地まで誘導 するための経路誘導情報の生成を行う。 The route guidance unit 310 obtains the optimum route information searched by the route search unit 309, the position information of the moving body acquired by the position acquisition unit 304, and the recording medium 305 via the recording medium decoding unit 306. Based on the obtained map data, route guidance information for guiding the user to the destination is generated.
[0056] このとき、経路誘導部 310によって生成される経路誘導情報は、通信部 308によつ て受信した渋滞情報を考慮したものであってもよい。経路誘導部 310で生成された 経路誘導情報は、ナビゲーシヨン制御部 301を介して表示部 303へ出力される。 At this time, the route guidance information generated by the route guidance unit 310 may be information that considers the traffic jam information received by the communication unit 308. The route guidance information generated by the route guidance unit 310 is output to the display unit 303 via the navigation control unit 301.
[0057] 音声生成部 311は、案内音などの各種音声の情報を生成する。すなわち、経路誘 導部 310で生成された経路誘導情報に基づいて、案内ポイントに対応した仮想音源 の設定と音声ガイダンス情報の生成を行 、、これをナビゲーシヨン制御部 301を介し て音声出力部 307へ出力する。 [0057] The voice generation unit 311 generates various types of voice information such as guidance sounds. That is, based on the route guidance information generated by the route guidance unit 310, the virtual sound source corresponding to the guidance point Are generated and voice guidance information is generated, and this is output to the voice output unit 307 via the navigation control unit 301.
[0058] スピーカ 312は、音声出力部 307から出力されるナビゲーシヨンの案内音やナビゲ ーシヨン制御部 301から音声出力部 307を介して出力される音声を再生(出力)する 。また、例えばこのスピーカ 312にヘッドフォンなどを設け、車両内部全体が出力され る案内音や音声の音場とならないように、案内音や音声の出力形態を適宜変更する ように構成してもよい。 The speaker 312 reproduces (outputs) the navigation guidance sound output from the audio output unit 307 and the audio output from the navigation control unit 301 via the audio output unit 307. Further, for example, the speaker 312 may be provided with headphones or the like so that the output form of the guidance sound and voice is appropriately changed so that the sound field of the guidance sound and voice output from the entire interior of the vehicle is not generated.
[0059] 物置トレイスイッチ部 313は、荷物を置くトレイと、そのトレィ上に荷物が置かれたこ とを検出するスィッチとから構成されている。このトレイとしては、例えばドアの内側に 設けられた 、わゆるドアポケットや、センターコンソールやグローブボックスやトランク などのように、最初力 移動体に備えつけられている収納部や、いわゆるドリンクホル ダ一と呼ばれるコップ置きや、屋根に取り付けるカプセル状の収納ケースなどの後付 けの用品が挙げられる。 [0059] The storage tray switch unit 313 includes a tray for placing a load and a switch for detecting that the load has been placed on the tray. As this tray, for example, a loose door pocket provided on the inside of the door, a storage part that is initially installed in a moving body such as a center console, a glove box, or a trunk, or a so-called drink holder. Examples include back-end items such as cup holders and capsule-shaped storage cases attached to the roof.
[0060] また、座席に荷物を置くことができるので、座席やトランクなどのように面積が広い区 画に皿状の荷物置き部材を複数個並べて、荷物を置く場所としてもよい。本明細書 では、上述した収納部や後付けの用品や荷物置き部材を含めて物置トレイとする。な お、物置トレィを設けずに、スィッチのみで荷物が置かれたことを検出する構成として ちょい。 [0060] Further, since the luggage can be placed on the seat, a plurality of dish-like luggage placing members may be arranged in a large area such as a seat or a trunk to place the luggage. In this specification, the storage tray includes the storage unit, the retrofitting articles, and the luggage storage member. It should be noted that this is a configuration that detects that a load has been placed using only the switch without providing a storage tray.
[0061] スィッチとしては、重量センサや圧力センサなど、荷物の重量を検出可能なセンサ を用いることができる。この物置トレイの底には、このようなセンサが取り付けられてお り、物置トレイの上に荷物が置かれると、その荷物の重量により重量センサや圧力セ ンサが作動し、センサ力もナビゲーシヨン制御部 301に検出信号を出力する。 [0061] As the switch, a sensor capable of detecting the weight of the load, such as a weight sensor or a pressure sensor, can be used. Such a sensor is attached to the bottom of the storage tray. When a load is placed on the storage tray, the weight sensor and pressure sensor are activated by the weight of the load, and the sensor force is also controlled by navigation. A detection signal is output to the unit 301.
[0062] 音声入力部 314は、接続されたマイク 315から入力された音声アナログ情報を音声 ディジタル情報に変換してナビゲーシヨン制御部 301に出力する。音声入力部 314 は、音声アナログ情報の AZD変換を行う AZDコンバータを備えている。マイク 315 は、ナビゲーシヨン装置 300に対する音声入力に使用される。 The voice input unit 314 converts the voice analog information input from the connected microphone 315 into voice digital information and outputs the voice digital information to the navigation control unit 301. The voice input unit 314 includes an AZD converter that performs AZD conversion of voice analog information. The microphone 315 is used for voice input to the navigation device 300.
[0063] なお、具体的には、実施の形態に力かる情報提示装置 100の機能的構成である図 1における荷物検出部 101は、例えばナビゲーシヨン制御部 301および物置トレイス イッチ部 313などによってその機能を実現する。入力部 102は、例えばナビゲーショ ン制御部 301、音声入力部 314およびマイク 315などによってその機能を実現する。 設定部 103は、例えばナビゲーシヨン制御部 301およびユーザ操作部 302などによ つてその機能を実現する。 [0063] Note that, specifically, the luggage detection unit 101 in FIG. 1 which is a functional configuration of the information presentation device 100 that is relevant to the embodiment includes, for example, a navigation control unit 301 and a storage trace. The function is realized by the switch 313 or the like. The input unit 102 realizes its function by, for example, the navigation control unit 301, the voice input unit 314, the microphone 315, and the like. The setting unit 103 realizes its function by, for example, the navigation control unit 301 and the user operation unit 302.
[0064] 記憶部 104は、例えばナビゲーシヨン制御部 301などによってその機能を実現する 。位置取得部 105は、例えばナビゲーシヨン制御部 301および位置取得部 304など によってその機能を実現する。提示部 106は、例えばナビゲーシヨン制御部 301、表 示部 303、音声出力部 307、音声生成部 311およびスピーカ 312などによってその 機能を実現する。実施例 1では、図 1の画像情報取得部 107と識別情報取得部 108 は設けられていない。 [0064] The storage unit 104 realizes its function by, for example, the navigation control unit 301. The position acquisition unit 105 realizes its function by, for example, the navigation control unit 301 and the position acquisition unit 304. The presentation unit 106 realizes its function by, for example, a navigation control unit 301, a display unit 303, an audio output unit 307, an audio generation unit 311 and a speaker 312. In the first embodiment, the image information acquisition unit 107 and the identification information acquisition unit 108 of FIG. 1 are not provided.
[0065] (ナビゲーシヨン装置の情報提示処理手順) [0065] (Information presentation processing procedure of navigation device)
次に、この発明の実施例 1にかかるナビゲーシヨン装置の情報提示処理手順につ いて説明する。情報提示処理は、音声メモ登録処理と音声メモ再生処理に分けられ る。図 4は、音声メモ登録処理手順の一例を示すフローチャートである。図 5は、音声 メモを登録する際に表示される画面表示例を示す図である。また、図 6は、音声メモ 再生処理手順の一例を示すフローチャートである。図 7は、音声メモを再生する際に 表示される画面表示例を示す図である。 Next, an information presentation process procedure of the navigation device according to the first embodiment of the present invention will be described. The information presentation process is divided into a voice memo registration process and a voice memo playback process. FIG. 4 is a flowchart showing an example of a voice memo registration processing procedure. FIG. 5 is a diagram showing an example of a screen displayed when registering a voice memo. FIG. 6 is a flowchart showing an example of a voice memo reproduction processing procedure. FIG. 7 is a diagram showing an example of a screen displayed when a voice memo is played.
[0066] 具体的に、図 4および図 6に示す処理は、例えば図 3に示したナビゲーシヨン制御 部 301の RAM、 ROM、あるいは記録媒体 305などに記憶(記録)されたプログラム を、ナビゲーシヨン制御部 301の CPUが実行することによって実現する。なお、以降 において、主に図 3を参照しながら説明を行うが、既に説明した部分と重複する箇所 は同一の符号を附して説明を省略する。まず、音声メモ登録処理手順について説明 する。 [0066] Specifically, the processing shown in FIGS. 4 and 6 includes, for example, a program stored in (recorded in) the RAM, ROM, or recording medium 305 of the navigation control unit 301 shown in FIG. This is realized by the CPU of the control unit 301 executing. In the following, the description will be made mainly with reference to FIG. 3. However, the same reference numerals are given to the portions overlapping with those already described, and the description will be omitted. First, the voice memo registration processing procedure will be described.
[0067] (音声メモ登録処理手順) [0067] (Voice memo registration processing procedure)
図 4において、まず、ナビゲーシヨン制御部 301によって、例えばユーザ操作部 30 2から入力された情報を取得して、目的地 ·経由地を設定する (ステップ S401)。次い で、ナビゲーシヨン制御部 301によって、 [n^l]、すなわち物置トレイの番号を表す nの値に 1を設定する(ステップ S402)。そして、ナビゲーシヨン制御部 301によって、 物置トレイスイッチ部 313の、物置トレイスイッチ—nがオンであるか否かを判断する( ステップ S403)。ここで、物置トレイスイッチ— nは、 n番目の物置トレイのスィッチであ る。 In FIG. 4, first, the navigation control unit 301 obtains information input from, for example, the user operation unit 302, and sets a destination / route point (step S401). Next, the navigation control unit 301 sets 1 to [n ^ l], that is, the value of n representing the storage tray number (step S402). And by the navigation control unit 301, It is determined whether the storage tray switch-n of the storage tray switch section 313 is on (step S403). Here, the storage tray switch—n is a switch of the nth storage tray.
[0068] その結果、物置トレイスイッチ nがオンであれば (ステップ S403: Yes)、ナビゲー シヨン制御部 301によって、物置トレイ— nに荷物が置かれていると判断し、この荷物 に対する音声メモの入力が可能なことをユーザに報せる (ステップ S404)。ここで、物 置トレイ— nは、 n番目の物置トレイである。物置トレイスイッチ— nがオフであれば (ス テツプ S403 :No)、ステップ S404〜ステップ S410を飛ばして、ステップ S411に進 む。 [0068] As a result, if the storage tray switch n is on (step S403: Yes), the navigation control unit 301 determines that the luggage is placed on the storage tray n, and the voice memo for this luggage is stored. The user is notified that input is possible (step S404). Here, the storage tray-n is the nth storage tray. If the storage tray switch—n is OFF (step S403: No), step S404 to step S410 are skipped and the process proceeds to step S411.
[0069] 音声メモの入力が可能なことをユーザに報せる手段としては、音声生成部 311、音 声出力部 307およびスピーカ 312によって、入力を促す案内を合成音声にて出力す るようにしてもょ 、し、表示部 303の画面に入力を促すメッセージを表示させてもょ ヽ 。あるいは、ブザーを鳴らすだけでもよい。また、合成音声に代えて、録音音声を用 いてもよい。 [0069] As a means for reporting to the user that the voice memo can be input, the voice generation unit 311, the voice output unit 307 and the speaker 312 may be used to output a guidance prompting input using synthesized speech. You can also display a message prompting you for input on the display 303 screen. Or you may just sound a buzzer. In addition, a recorded voice may be used instead of the synthesized voice.
[0070] 図 5には、入力を促すメッセージを表示させる場合の表示例が示されている力 図 5 において、 500は表示画面であり、 501は入力を促すメッセージ、例えば「音声メモを 登録してくださ 、」 t 、うメッセージなどを表示するメッセージ表示領域である。図 5に おいて、 502はメモ情報表示領域であり、 503および 504は選択ボタン表示領域で ある。 FIG. 5 shows a display example when a message prompting input is displayed. In FIG. 5, 500 is a display screen, and 501 is a message prompting input, for example, “Register voice memo”. This is a message display area that displays messages such as “t”. In FIG. 5, 502 is a memo information display area, and 503 and 504 are selection button display areas.
[0071] 図 4に戻り、ステップ S404に続いて、ナビゲーシヨン制御部 301によって、物置トレ ィ—nに関連付けられた音声メモは記憶済みである力否かを判断する (ステップ S40 5)。その結果、記憶済みであれば (ステップ S405 : Yes)、ナビゲーシヨン制御部 30 1によって、例えば表示画面 500のメッセージ表示領域 501 (図 5参照)に、記憶済み の音声メモがあることを表示する (ステップ S406)。例えば「音声メモが登録されてい ます」というメッセージを表示する。そして、ステップ S407に進む。 Returning to FIG. 4, following step S404, the navigation control unit 301 determines whether or not the voice memo associated with the storage tray-n has been stored (step S405). As a result, if stored (step S405: Yes), the navigation control unit 301 displays, for example, that there is a stored voice memo in the message display area 501 (see FIG. 5) of the display screen 500. (Step S406). For example, the message “Voice memo is registered” is displayed. Then, the process proceeds to step S407.
[0072] このとき、表示画面 500のメモ情報表示領域 502に、物置トレイ— nに関連付けられ た記憶済みのメモ情報を表示させてもよい。この場合には、記憶済みの音声メモを周 知の音声認識エンジンなどを利用して文字情報に変換する必要があり、ナビゲーショ ン制御部 301はその機能を有している。物置トレイ— nに関連付けられた音声メモが 記憶されていない場合 (ステップ S405 : No)には、ステップ S406を飛ばして、ステツ プ S407に進む。 At this time, stored memo information associated with the storage tray n may be displayed in the memo information display area 502 of the display screen 500. In this case, it is necessary to convert the stored voice memo into text information using a known voice recognition engine, etc. The control unit 301 has this function. When the voice memo associated with the storage tray n is not stored (step S405: No), step S406 is skipped and the process proceeds to step S407.
[0073] The navigation control unit 301 then determines whether the voice memo skip button has been pressed via the user operation unit 302 (step S407). If the voice memo skip button has been pressed (step S407: Yes), steps S408 to S410 are skipped and the process proceeds to step S411. If the voice memo skip button has not been pressed (step S407: No), the navigation control unit 301 determines whether the voice memo start button has been pressed via the user operation unit 302 (step S408).
[0074] Here, the voice memo skip button is pressed when no voice memo is to be stored, and the voice memo start button is pressed when a voice memo is to be stored. These buttons may be provided as push-button switches on the housing of the navigation device 300 or on a remote control, or they may be displayed as button images on the screen of the display unit 303.
[0075] In this case, for example as shown in FIG. 5, a "Start" button image corresponding to the voice memo start button is displayed in one selection button display area 503, a "Skip" button image corresponding to the voice memo skip button is displayed in the other selection button display area 504, and a button is pressed in a pseudo manner by selecting either button image with the remote control. Alternatively, the display screen 500 of the display unit 303 may be configured as a touch panel, and either button may be pressed by touching the "Start" or "Skip" button image.
[0076] Returning to FIG. 4, if the voice memo start button has been pressed (step S408: Yes), the navigation control unit 301, the voice input unit 314, and the microphone 315 accept spoken input of information about the luggage placed on storage tray-n and record a voice memo associated with storage tray-n (step S409). The process then proceeds to step S411. The voice memo is information that allows the luggage to be identified, such as the name of its owner, its appearance and size, and its contents; this information is associated with the storage tray on which the luggage was placed.

[0077] On the other hand, if the voice memo start button has not been pressed (step S408: No), the navigation control unit 301 determines whether a fixed time has elapsed since the voice-memo-input-available indication was started (step S410). If the fixed time has elapsed (step S410: Yes), the process proceeds to step S411. If the fixed time has not elapsed (step S410: No), the process returns to step S407 and waits for the voice memo skip button or the voice memo start button to be pressed until the fixed time has elapsed since the indication was started.
[0078] In the processing described above, if storage tray switch-n is off (step S403: No), if the voice memo skip button has been pressed (step S407: Yes), or if the fixed time has elapsed since the voice-memo-input-available indication was started without the voice memo start button being pressed (step S410: Yes), the navigation control unit 301 sets [n←n+1], that is, increments the value of n by 1 (step S411). Next, with Nmax denoting the maximum value of n, the navigation control unit 301 determines whether [n>Nmax] (step S412).
[0079] If [n≤Nmax] (step S412: No), the process returns to step S403, and steps S403 to S412 are repeated so that the series of processes described above is performed for every storage tray. If [n>Nmax] (step S412: Yes), the series of voice memo registration processes in this flowchart ends.
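The registration flow of steps S401 to S412 amounts to a loop over the storage trays. The following is a minimal sketch of that loop, assuming a hypothetical `device` object whose helpers (`tray_switch_is_on`, `prompt_user`, `has_stored_memo`, `show_message`, `poll_button`, `record_voice_memo`, `store_memo`) stand in for the storage tray switch unit 313, the display and voice prompts, and the microphone input; it illustrates the flowchart, not the device's actual firmware, and the tray count and timeout are assumptions.

```python
import time

N_MAX = 4          # number of storage trays (assumed)
TIMEOUT_S = 10.0   # how long to wait for a button press (assumed)

def register_voice_memos(device):
    """Loop over every storage tray and offer voice-memo registration (S402-S412)."""
    for n in range(1, N_MAX + 1):                                 # S402, S411, S412
        if not device.tray_switch_is_on(n):                       # S403: no luggage on tray n
            continue
        device.prompt_user(f"Please register a voice memo for tray {n}")   # S404
        if device.has_stored_memo(n):                              # S405
            device.show_message("A voice memo is registered")      # S406
        deadline = time.monotonic() + TIMEOUT_S
        while time.monotonic() < deadline:                         # S410: fixed-time wait
            button = device.poll_button()                          # 'skip', 'start', or None
            if button == "skip":                                   # S407
                break
            if button == "start":                                  # S408
                memo = device.record_voice_memo()                  # S409: record via microphone 315
                device.store_memo(n, memo)                         # associate the memo with tray n
                break
```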
[0080] In the example described above, the destination and waypoints are set in step S401. However, if a destination or waypoint has already been set before the series of voice memo registration processes in this flowchart starts, or if the configuration does not associate voice memos with a destination or waypoint, the processing from step S402 onward may be triggered by powering on the navigation device 300. In this case, since the power supply of the navigation device 300 is generally turned on when the engine of the moving body is started, the processing from step S402 onward is triggered by starting the engine of the moving body. Next, the voice memo playback processing procedure will be described.
[0081] (Voice memo playback processing procedure)
In FIG. 6, first, the navigation control unit 301 and the position acquisition unit 304 acquire the current position of the moving body and determine whether the current position is in the vicinity of the destination or a waypoint (step S601). If the current position is not in the vicinity of the destination or a waypoint (step S601: No), the process returns to step S601 and the current position is checked again.
[0082] If the current position is in the vicinity of the destination or a waypoint (step S601: Yes), the navigation control unit 301 sets [n←1], that is, sets the value of n to 1 (step S602). Step S602 is performed when the moving body is stopped or its engine is stopped in the vicinity of the destination or waypoint, or when the distance to the destination or waypoint falls below a certain distance.
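Whether the current position is "in the vicinity of" the destination or a waypoint is essentially a distance-threshold test against the position reported by the position acquisition unit 304. A minimal sketch of such a test follows; the great-circle (haversine) formula and the 200 m threshold are illustrative assumptions, not values taken from the embodiment.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points (haversine)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_destination(current, targets, threshold_m=200.0):
    """True if the current position is within threshold_m of any destination or waypoint."""
    return any(distance_m(*current, *t) <= threshold_m for t in targets)

# Example: roughly 150 m away from the only waypoint -> playback may start
print(near_destination((35.6586, 139.7454), [(35.6599, 139.7450)]))
```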
[0083] Next, it is determined whether storage tray switch-n of the storage tray switch unit 313 is on (step S603). If storage tray switch-n is off (step S603: No), steps S604 to S609 are skipped and the process proceeds to step S610. If storage tray switch-n is on (step S603: Yes), it is determined whether a voice memo associated with storage tray-n has been registered (step S604).
[0084] If no voice memo has been registered (step S604: No), steps S605 to S609 are skipped and the process proceeds to step S610. If a voice memo has been registered (step S604: Yes), the navigation control unit 301 notifies the user that a voice memo is registered (step S605).
[0085] As a means of notifying the user that a voice memo is registered, the voice generation unit 311, the voice output unit 307, and the speaker 312 may announce it with synthesized speech, a message may be displayed on the screen of the display unit 303, or a buzzer may simply be sounded. FIG. 7 shows a display example in which a message reporting that a voice memo is registered is displayed; as shown in the figure, for example, the navigation control unit 301 displays the message "There is a voice memo" in the message display area 501 of the display screen 500.
[0086] At this time, the navigation control unit 301 may display the storage tray number in the memo information display area 502 of the display screen 500, or the voice generation unit 311, the voice output unit 307, and the speaker 312 may announce the storage tray number with synthesized speech. Registered memo information associated with storage tray-n may also be displayed in the memo information display area 502. In this case, the registered voice memo must be converted into text using a known speech recognition engine or the like, and the navigation control unit 301 has this function.
[0087] Returning to FIG. 6, following step S605, the navigation control unit 301 determines whether the voice memo skip button has been pressed via the user operation unit 302 (step S606). If the voice memo skip button has been pressed (step S606: Yes), steps S607 to S609 are skipped and the process proceeds to step S610. If the voice memo skip button has not been pressed (step S606: No), the navigation control unit 301 determines whether the voice memo playback button has been pressed via the user operation unit 302 (step S607).
[0088] Here, the voice memo skip button is pressed when the voice memo is not to be played back, and the voice memo playback button is pressed when the voice memo is to be played back. These buttons may be provided as push-button switches on the housing of the navigation device 300 or on a remote control, or they may be displayed as button images on the screen of the display unit 303.
[0089] In this case, for example as shown in FIG. 7, a "Play" button image corresponding to the voice memo playback button is displayed in one selection button display area 503, a "Skip" button image corresponding to the voice memo skip button is displayed in the other selection button display area 504, and a button is pressed in a pseudo manner by selecting either button image with the remote control. Alternatively, the display screen 500 of the display unit 303 may be configured as a touch panel, and either button may be pressed by touching the "Play" or "Skip" button image.
[0090] Returning to FIG. 6, if the voice memo playback button has been pressed (step S607: Yes), the navigation control unit 301, the voice generation unit 311, the voice output unit 307, and the speaker 312 play back the voice memo associated with storage tray-n (step S608). The process then proceeds to step S610.
[0091] On the other hand, if the voice memo playback button has not been pressed (step S607: No), the navigation control unit 301 determines whether a fixed time has elapsed since the voice-memo-registered indication was displayed (step S609). If the fixed time has elapsed (step S609: Yes), the process proceeds to step S610. If the fixed time has not elapsed (step S609: No), the process returns to step S606 and waits for the voice memo skip button or the voice memo playback button to be pressed until the fixed time has elapsed since the indication was displayed.
[0092] In the processing described above, if storage tray switch-n is off (step S603: No), if no voice memo associated with storage tray-n has been registered (step S604: No), if the voice memo skip button has been pressed (step S606: Yes), or if the fixed time has elapsed since the voice-memo-registered indication was displayed without the voice memo playback button being pressed (step S609: Yes), the navigation control unit 301 sets [n←n+1], that is, increments the value of n by 1 (step S610). Next, the navigation control unit 301 determines whether [n>Nmax] (step S611).
[0093] If [n≤Nmax] (step S611: No), the process returns to step S603, and steps S603 to S611 are repeated so that the series of processes described above is performed for every storage tray. If [n>Nmax] (step S611: Yes), the series of voice memo playback processes in this flowchart ends.
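Steps S601 to S611 mirror the registration loop: once the vehicle is near the destination or a waypoint, each occupied tray with a registered memo is announced and the memo is played on request. A compact sketch follows, reusing the hypothetical `device` helpers from the registration sketch above; it illustrates the flowchart rather than the actual implementation.

```python
import time

N_MAX = 4          # number of storage trays (assumed)
TIMEOUT_S = 10.0   # how long to wait for a button press (assumed)

def play_voice_memos(device):
    """Announce and play registered memos for every occupied tray (S602-S611)."""
    for n in range(1, N_MAX + 1):                                  # S602, S610, S611
        if not device.tray_switch_is_on(n):                        # S603: tray n is empty
            continue
        if not device.has_stored_memo(n):                          # S604: nothing registered
            continue
        device.show_message(f"There is a voice memo for tray {n}") # S605
        deadline = time.monotonic() + TIMEOUT_S
        while time.monotonic() < deadline:                         # S609: fixed-time wait
            button = device.poll_button()                          # 'skip', 'play', or None
            if button == "skip":                                   # S606
                break
            if button == "play":                                   # S607
                device.play_memo(n)                                # S608: output via speaker 312
                break
```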
[0094] As described above, according to the navigation device 300 of Example 1, when the navigation control unit 301 and the position acquisition unit 304 recognize that the vehicle has arrived at, or come close to, the destination or a waypoint, the navigation control unit 301, the display unit 303, the voice output unit 307, the voice generation unit 311, and the speaker 312 play back the voice memo and inform the user of its contents. Therefore, when getting out of the moving body, the user knows that luggage has been loaded into it and can leave without forgetting the luggage inside. In other words, items left behind can be prevented.
[0095] In addition, a destination or waypoint is set by the navigation control unit 301 and the user operation unit 302, the voice memo for the luggage is stored in association with that destination or waypoint, and when the vehicle arrives at, or comes close to, that destination or waypoint, the navigation control unit 301, the display unit 303, the voice output unit 307, the voice generation unit 311, and the speaker 312 play back the voice memos associated with it, so only the voice memos for the luggage to be unloaded there are played. This prevents luggage that should be unloaded at a different waypoint or destination from being unloaded by mistake.

[0096] Furthermore, when the navigation device 300 is powered on, or when a destination or waypoint is set by the navigation control unit 301 and the user operation unit 302, and the navigation control unit 301 and the storage tray switch unit 313 detect that luggage has been placed, the navigation control unit 301, the voice input unit 314, and the microphone 315 automatically enter a state in which a voice memo can be input. This simplifies the operation of inputting a voice memo.
[0097] Although the above description does not specifically mention setting a route to the destination via the waypoints and guiding the vehicle along it, such a route may be set and guidance may be performed. In that case, the route search unit 309 searches for a route to the destination via the waypoints, and the route guidance unit 310 guides the vehicle along that route. During route guidance, when the current position of the vehicle comes close to the destination or a waypoint, the voice memo playback processing of steps S601 to S611 described above is performed.
[0098] (Example 2)
In Example 2, when a voice memo is registered, identification information for identifying the individual piece of luggage is stored together with the voice memo. Using this identification information, it is confirmed that the luggage present when the voice memo was registered and the luggage actually placed on the storage tray whose voice memo is about to be played back are the same, and only then is the voice memo played back. In Example 2, as one example, tag information attached to the luggage is used as the identification information, so the identification information is hereinafter referred to as tag information. Components identical to those in Example 1 are given the same reference numerals, and duplicate descriptions are omitted. In Example 2 as well, the additional information about the luggage is called a voice memo.
[0099] (Hardware configuration of the navigation device)
First, the hardware configuration of the navigation device according to Example 2 of the present invention will be described. FIG. 8 is a block diagram showing an example of the hardware configuration of the navigation device according to Example 2 of the present invention. As shown in FIG. 8, the navigation device 800 of Example 2 includes an RF tag reader unit 816 in addition to the configuration of the navigation device 300 of Example 1 shown in FIG. 3.
[0100] The RF tag reader unit 816 reads tag information contained in an RF tag attached to the luggage. The tag information includes information such as a number unique to the luggage, the name of the luggage, the weight of the luggage, and the place where the luggage is to be unloaded. Of this information, one or more items suitable for identifying the luggage, for example the unique number of the luggage, are used as the luggage identification information. The tag information read by the RF tag reader unit 816 is associated with the voice memo by the navigation control unit 301 and stored in a nonvolatile semiconductor memory within the navigation control unit 301.
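The tag information described in [0100] can be thought of as a small record whose unique number serves as the identification key. The sketch below shows one possible in-memory representation; the field names and the choice of a Python dataclass are assumptions made for illustration, not the format actually stored in the nonvolatile memory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TagInfo:
    unique_number: str   # used as the luggage identification information
    name: str            # name of the luggage
    weight_kg: float     # weight of the luggage
    drop_off: str        # place where the luggage is to be unloaded

# Tag read when the memo is registered, keyed by storage tray number
stored_tags = {1: TagInfo("A-0001", "camera bag", 1.2, "office")}
```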
[0101] The navigation control unit 301 receives the tag information from the RF tag reader unit 816 and stores it in the nonvolatile semiconductor memory. The navigation control unit 301 also reads the tag information from the semiconductor memory and determines whether the tag information read from the semiconductor memory matches tag information newly acquired by the RF tag reader unit 816.
[0102] Specifically, the identification information acquisition unit 108 in FIG. 1, which shows the functional configuration of the information presentation device 100 according to the embodiment, realizes its function by means of, for example, the navigation control unit 301 and the RF tag reader unit 816. In Example 2, the image information acquisition unit 107 of FIG. 1 is not provided. The rest of the configuration is the same as in Example 1.
[0103] (Information presentation processing procedure of the navigation device)
Next, the information presentation processing procedure of the navigation device according to Example 2 of the present invention will be described. The information presentation processing is divided into voice memo registration processing and voice memo playback processing. FIG. 9 is a flowchart showing an example of the voice memo registration processing procedure, and FIG. 10 is a flowchart showing an example of the voice memo playback processing procedure.
[0104] Specifically, the processing shown in FIGS. 9 and 10 is realized by the CPU of the navigation control unit 301 executing a program stored (recorded), for example, in the RAM or ROM of the navigation control unit 301 shown in FIG. 8 or on the recording medium 305. In the following, the description refers mainly to FIG. 8; parts overlapping with those already described are given the same reference numerals and their description is omitted. First, the voice memo registration processing procedure will be described.
[0105] (Voice memo registration processing procedure)
As shown in FIG. 9, the voice memo registration processing procedure of Example 2 differs from that of Example 1 in that, following step S409, the navigation control unit 301 and the RF tag reader unit 816 read the RF tag information and store it in association with storage tray-n (step S913). Once the tag information has been stored, the process proceeds to step S411. The other steps are the same as in Example 1.
[0106] In Example 2 as well, if the configuration does not associate voice memos with a destination or waypoint, the processing from step S402 onward may be triggered by starting the engine of the moving body or by powering on the navigation device 800, as in Example 1. Next, the voice memo playback processing procedure will be described.
[0107] (Voice memo playback processing procedure)
As shown in FIG. 10, the voice memo playback processing procedure of Example 2 differs from that of Example 1 in the following three points. The first point is that, instead of step S601, the navigation control unit 301 and the position acquisition unit 304 acquire the current position of the moving body and determine whether the current position is in the vicinity of the destination, a waypoint, or a registered point (step S1001). A registered point is a point that is neither the destination nor a waypoint but has been registered by the user through operation of the user operation unit 302.
[0108] The second point is that, following step S604, the navigation control unit 301 and the RF tag reader unit 816 read RF tag information from the RF tag reader of storage tray-n (step S1002). The third point is that, following step S1002, the navigation control unit 301 determines whether the tag information read in step S1002 matches the tag information stored in step S913 of FIG. 9, that is, whether [tag information stored in association with storage tray-n = read tag information] (step S1003).
[0109] If the two pieces of tag information match (step S1003: Yes), the process proceeds to step S605. If they do not match (step S1003: No), the process proceeds to step S610. In Example 2 as well, the route search unit 309 may search for a route to the destination via the waypoints, and the route guidance unit 310 may guide the vehicle along that route.
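The comparison in step S1003 simply checks that the tag read now matches the tag stored at registration time for the same tray. Below is a minimal sketch, assuming the `TagInfo` record and `stored_tags` mapping from the earlier sketch and a hypothetical `read_tag(n)` call standing in for the tray's RF tag reader.

```python
def memo_should_play(n, stored_tags, read_tag):
    """True only if the tag on tray n matches the one stored at registration (S1002-S1003)."""
    current = read_tag(n)                 # S1002: read from the tray's RF tag reader
    registered = stored_tags.get(n)
    if current is None or registered is None:
        return False                      # no tag or no registration -> skip to S610
    # Compare by the unique number used as the identification information
    return current.unique_number == registered.unique_number
```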
[0110] As described above, the navigation device 800 of Example 2 provides the same effects as Example 1. In addition, the navigation control unit 301 and the RF tag reader unit 816 acquire the tag information, and when the navigation control unit 301, the display unit 303, the voice output unit 307, the voice generation unit 311, and the speaker 312 play back a voice memo, the previously stored and newly read tag information are compared and the voice memo is played back only for luggage whose tag information matches, which prevents the wrong luggage from being unloaded by mistake. Instead of RF tag information, a sensor capable of detecting the weight of the luggage, such as a weight sensor or a pressure sensor, may be attached to the bottom of the storage tray; this sensor detects the weight of the luggage, which may then be used as the luggage identification information.
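Using the measured weight as identification information, as suggested above, reduces to comparing the stored and current readings within some tolerance. A sketch of such a check follows; the 0.1 kg tolerance is chosen purely for illustration and is not specified in the embodiment.

```python
def same_luggage_by_weight(registered_kg, measured_kg, tolerance_kg=0.1):
    """Treat the luggage as unchanged if the measured weight matches the stored one."""
    return abs(registered_kg - measured_kg) <= tolerance_kg

# The memo is played only when the tray still carries (approximately) the same load
print(same_luggage_by_weight(1.2, 1.18))   # True
print(same_luggage_by_weight(1.2, 0.0))    # False: tray emptied or different luggage
```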
[0111] (Example 3)
Example 3 describes a case in which an image of the luggage taken with a camera that can output captured images as electronic data, such as a digital camera or the camera of a camera-equipped mobile phone, is used as the additional information about the luggage. In Example 3, therefore, the additional information about the luggage is called a video memo.
[0112] (Hardware configuration of the navigation device)
The hardware configuration of the navigation device according to Example 3 of the present invention is the same as that of the navigation device 300 according to Example 1 shown in FIG. 3. However, the communication unit 308 communicates with a digital camera, the camera of a camera-equipped mobile phone, or the like, and receives the electronic data of images taken by those cameras (hereinafter referred to as video data). A general communication standard between electronic devices, such as Bluetooth (registered trademark), can be used as the communication standard here.
[0113] Accordingly, the image information acquisition unit 107 in FIG. 1, which shows the functional configuration of the information presentation device 100 according to the embodiment, realizes its function by means of, for example, the navigation control unit 301 and the communication unit 308. In Example 3, the identification information acquisition unit 108 of FIG. 1 is not provided.
[0114] (Information presentation processing procedure of the navigation device)
Next, the information presentation processing procedure of the navigation device according to Example 3 of the present invention will be described. The information presentation processing is divided into video memo registration processing and video memo playback processing. FIG. 11 is a flowchart showing an example of the video memo registration processing procedure.
[0115] Specifically, the processing shown in FIG. 11 is realized by the CPU of the navigation control unit 301 executing a program stored (recorded), for example, in the RAM or ROM of the navigation control unit 301 shown in FIG. 3 or on the recording medium 305. In the following, the description refers mainly to FIG. 3; parts overlapping with those already described are given the same reference numerals and their description is omitted. First, the video memo registration processing procedure will be described.
[0116] (Video memo registration processing procedure)
In FIG. 11, first, a photograph of the luggage on the storage tray is taken with a digital camera or a mobile phone camera (step S1101). Next, the navigation control unit 301 acquires information input from, for example, the user operation unit 302 and sets the destination and waypoints (step S1102). Next, the navigation control unit 301 sets [n←1], that is, sets the value of n to 1 (step S1103). The navigation control unit 301 then determines whether storage tray switch-n of the storage tray switch unit 313 is on (step S1104).
[0117] As a result, if storage tray switch-n is on (step S1104: Yes), the navigation control unit 301 determines that luggage has been placed on storage tray-n and notifies the user that a video memo can be set for this luggage (step S1105). If storage tray switch-n is off (step S1104: No), steps S1105 to S1113 are skipped and the process proceeds to step S1114.
[0118] As a means of notifying the user that a video memo can be set, the voice generation unit 311, the voice output unit 307, and the speaker 312 may output guidance prompting the setting operation as synthesized speech, or a message prompting the setting operation may be displayed on the screen of the display unit 303. Alternatively, a buzzer may simply be sounded. Recorded speech may also be used instead of synthesized speech.
[0119] Following step S1105, the navigation control unit 301 and the communication unit 308 capture the video data from the digital camera or mobile phone (step S1106). The navigation control unit 301 and the communication unit 308 can capture the video data using, for example, a connection cable, Bluetooth, or infrared communication. Then, the navigation control unit 301 and the display unit 303 display a list of the captured video data on the screen (step S1107).
[0120] Next, the navigation control unit 301 determines whether a video memo associated with storage tray-n has already been stored (step S1108). If one has been stored (step S1108: Yes), the navigation control unit 301 indicates that there is a stored video memo, for example in the message display area 501 of the display screen 500 (see FIG. 5) (step S1109). For example, the message "A video memo is registered" is displayed. The process then proceeds to step S1110.

[0121] At this time, the stored video memo associated with storage tray-n may be displayed in the memo information display area 502 of the display screen 500. If no video memo associated with storage tray-n has been stored (step S1108: No), step S1109 is skipped and the process proceeds to step S1110.
[0122] The navigation control unit 301 then determines whether the video memo skip button has been pressed via the user operation unit 302 (step S1110). If the video memo skip button has been pressed (step S1110: Yes), steps S1111 to S1113 are skipped and the process proceeds to step S1114. If the video memo skip button has not been pressed (step S1110: No), the navigation control unit 301 determines whether an image has been selected from the displayed list via the user operation unit 302 and the video memo registration button has been pressed (step S1111).
[0123] Here, the video memo skip button is pressed when no video memo is to be stored, and the video memo registration button is pressed when a video memo is to be stored. These buttons may be provided as push-button switches on the housing of the navigation device 300 or on a remote control, or they may be displayed as button images on the screen of the display unit 303.
[0124] In this case, for example in the display example shown in FIG. 5, a "Register" button image corresponding to the video memo registration button is displayed instead of the "Start" button image, a "Skip" button image corresponding to the video memo skip button is displayed in the other selection button display area 504, and a button is pressed in a pseudo manner by selecting either button image with the remote control. Alternatively, the display screen 500 of the display unit 303 may be configured as a touch panel, and either button may be pressed by touching the "Register" or "Skip" button image.
[0125] Returning to FIG. 11, if the video memo registration button has been pressed (step S1111: Yes), the navigation control unit 301 stores the video memo in association with storage tray-n (step S1112). The process then proceeds to step S1114. On the other hand, if the video memo registration button has not been pressed (step S1111: No), the navigation control unit 301 determines whether a fixed time has elapsed since the video-memo-input-available indication was started (step S1113).
[0126] If the fixed time has elapsed (step S1113: Yes), the process proceeds to step S1114. If the fixed time has not elapsed (step S1113: No), the process returns to step S1110 and waits for the video memo skip button or the video memo registration button to be pressed until the fixed time has elapsed since the indication was started.
[0127] In the processing described above, if storage tray switch-n is off (step S1104: No), if the video memo skip button has been pressed (step S1110: Yes), or if the fixed time has elapsed since the video-memo-input-available indication was started without the video memo registration button being pressed (step S1113: Yes), the navigation control unit 301 sets [n←n+1], that is, increments the value of n by 1 (step S1114). Next, the navigation control unit 301 determines whether [n>Nmax] (step S1115).
[0128] If [n≤Nmax] (step S1115: No), the process returns to step S1104, and steps S1104 to S1115 are repeated so that the series of processes described above is performed for every storage tray. If [n>Nmax] (step S1115: Yes), the series of video memo registration processes in this flowchart ends.
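The video memo registration of FIG. 11 follows the same per-tray loop as FIG. 4, except that the user picks an image from a list of transferred photos instead of recording audio. Below is a sketch under the same assumptions as the earlier loops (the hypothetical `device` helpers), with the image transfer reduced to a list of photos; the helper names are illustrative only.

```python
import time

N_MAX = 4          # number of storage trays (assumed)
TIMEOUT_S = 10.0   # how long to wait for a button press (assumed)

def register_video_memos(device):
    """Offer image selection as a video memo for every occupied tray (S1103-S1115)."""
    for n in range(1, N_MAX + 1):                               # S1103, S1114, S1115
        if not device.tray_switch_is_on(n):                     # S1104
            continue
        device.prompt_user(f"Set a video memo for tray {n}")    # S1105
        photos = device.fetch_photos()                          # S1106: e.g. over Bluetooth
        device.show_thumbnails(photos)                          # S1107: list on screen
        deadline = time.monotonic() + TIMEOUT_S
        while time.monotonic() < deadline:                      # S1113: fixed-time wait
            button = device.poll_button()                       # 'skip', 'register', or None
            if button == "skip":                                # S1110
                break
            if button == "register":                            # S1111
                device.store_memo(n, device.selected_photo())   # S1112
                break
```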
[0129] In Example 3 as well, if the configuration does not associate video memos with a destination or waypoint, the processing from step S1103 onward may be triggered by starting the engine of the moving body or by powering on the navigation device 300, as in Example 1. Next, the video memo playback processing procedure will be described.
[0130] (Video memo playback processing procedure)
For the video memo playback processing procedure, "voice memo" in the flowchart of FIG. 6 and in the description of the voice memo playback processing procedure of Example 1 may simply be read as "video memo." However, when a registered video memo associated with storage tray-n is displayed in the memo information display area 502 of the display screen 500, there is no need to convert a voice memo into text, so a speech recognition engine or the like is unnecessary.
[0131] As described above, the navigation device 300 of Example 3 provides the same effects as Example 1. In addition, since the navigation control unit 301 and the communication unit 308 receive and store video data from a digital camera, the camera of a camera-equipped mobile phone, or the like, even a user who does not like creating voice memos, or who does not know how to create one, can register a video memo simply by selecting a suitable image from the captured images listed on the display unit 303. Video memos can therefore be registered easily. Furthermore, by playing back the video memo with the navigation control unit 301 and the display unit 303, unloading the wrong luggage by mistake can be prevented even more reliably.
[0132] As in Example 2, RF tag information, the weight of the luggage, or the like may be used to confirm that the luggage present when the video memo was registered and the luggage actually placed on the storage tray whose video memo is about to be played back are the same before the video memo is played back. The route search unit 309 may also search for a route to the destination via the waypoints, the route guidance unit 310 may guide the vehicle along that route, and the video memo playback processing may be performed when the current position of the vehicle comes close to the destination or a waypoint during route guidance.
[0133] The information presentation method described in the present embodiment can be realized by executing a prepared program on a computer such as a personal computer or a workstation. The program is recorded on a computer-readable recording medium such as a hard disk, flexible disk, CD-ROM, MO, or DVD, and is executed by being read from the recording medium by the computer. The program may also be a transmission medium that can be distributed via a network such as the Internet.
Claims
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2005-282869 | 2005-09-28 | | |
| JP2005282869 | 2005-09-28 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2007037185A1 (en) | 2007-04-05 |
Family
ID=37899612
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2006/318915 (WO2007037185A1, Ceased) | Information presentation device, information presentation method, information presentation program, and recording medium | 2005-09-28 | 2006-09-25 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2007037185A1 (en) |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH06111197A (en) * | 1992-09-28 | 1994-04-22 | Nippondenso Co Ltd | Home delivery navigation system |
| JPH10281788A (en) * | 1997-04-10 | 1998-10-23 | Hitachi Ltd | Collection and delivery navigation system |
| JPH1131294A (en) * | 1997-07-14 | 1999-02-02 | Toshiba Corp | Collection and delivery management system and collection and delivery management terminal device |
| JP2002265063A (en) * | 2001-03-09 | 2002-09-18 | Sharp Corp | Parcel transportation system |
| JP2004010201A (en) * | 2002-06-04 | 2004-01-15 | Seibu Electric & Mach Co Ltd | Shelf inventory information management method in automatic warehouse system |
| JP2005173682A (en) * | 2003-12-08 | 2005-06-30 | Olympus Corp | Object discrimination device, system, method, and program |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009182867A (en) * | 2008-01-31 | 2009-08-13 | Sogo Keibi Hosho Co Ltd | Terminal for mounting on cargo, and simple positioning method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP0827126B1 (en) | Land vehicle navigation apparatus with local route guidance selectivity and storage medium therefor | |
| US8423281B2 (en) | Apparatus and method for providing travel route in navigation system | |
| TW201009762A (en) | Navigation device & method | |
| WO2014073028A1 (en) | Navigation device | |
| WO2005093373A1 (en) | Navigation device, route searching method, route searching program, and computer-readable recording medium | |
| US6374183B1 (en) | Vehicle guidance method for navigation system | |
| CN102410839B (en) | The course guiding method of guider and guider | |
| JP4400775B2 (en) | Navigation device, facility search method, program, and recording medium for recording program | |
| US7693655B2 (en) | Navigation system including database for storing feature information for points of interest | |
| WO2005038404A1 (en) | Navigation apparatus and method, and navigation program | |
| EP1873491A1 (en) | Navigation device | |
| JP5280186B2 (en) | Car navigation system | |
| WO2007037185A1 (en) | Information presentation device, information presentation method, information presentation program, and recording medium | |
| JP2007071665A (en) | Navigation system | |
| JP3585720B2 (en) | Car navigation system | |
| JP2002286478A (en) | Guiding route re-searching method for on-vehicle navigators | |
| WO2007105540A1 (en) | Navigation device and navigation method | |
| JP3482917B2 (en) | Car navigation system | |
| WO2007077829A1 (en) | Navigation device and guidance map display method | |
| WO2007037186A1 (en) | Waypoint setting device, information presentation device, waypoint setting method, waypoint setting program, and recording medium | |
| JP2008122340A (en) | Navigation system | |
| JP4799695B2 (en) | Navigation device, route search method, and route search program | |
| JP2005234991A (en) | Information retrieval apparatus, information retrieval method, and information retrieval program | |
| JPWO2008026377A1 (en) | Information registration device and information registration method, etc. | |
| KR100255186B1 (en) | Navigation device for moving object |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 06810483; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: JP |