WO2018198314A1 - Voice icon arrangement system for wearable terminal, and method and program - Google Patents
- Publication number
- WO2018198314A1 (PCT/JP2017/016936)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- voice
- wearable terminal
- icon
- uttered
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01G—HORTICULTURE; CULTIVATION OF VEGETABLES, FLOWERS, RICE, FRUIT, VINES, HOPS OR SEAWEED; FORESTRY; WATERING
- A01G7/00—Botany in general
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/02—Agriculture; Fishing; Forestry; Mining
Definitions
- The present invention relates to a voice icon arrangement system, method, and program for a wearable terminal.
- Conventionally, a technique for converting the content of recorded voice into text has been proposed (see Patent Document 1).
- The present invention has been made in view of such a demand, and its purpose is to provide a system that, by recording a work situation or the like by voice, links the recorded content with positional information so that the user can grasp it more intuitively.
- the present invention provides the following solutions.
- The invention according to the first feature provides a voice icon arrangement system for a wearable terminal, comprising:
- voice acquisition means for acquiring voice uttered by a user of the wearable terminal;
- position acquisition means for acquiring the position where the voice is uttered;
- voice recognition means for recognizing the voice;
- classification means for classifying the voice into a predetermined category according to the voice-recognized content; and
- display means for arranging and displaying, on a map shown on the display unit of the wearable terminal, icons corresponding to the classified categories at the acquired positions.
- According to the invention of the first feature, the voice acquisition means acquires the voice uttered by the user of the wearable terminal,
- the voice recognition means recognizes the voice,
- the classification means classifies the voice into a predetermined category,
- and the display means arranges and displays, on the map shown on the display unit of the wearable terminal, icons corresponding to the classified categories at the positions acquired by the position acquisition means.
- The invention according to the second feature is the invention according to the first feature, wherein the voice acquisition means acquires the uttered voice from a microphone of the wearable terminal.
- According to the invention of the second feature, the user can have the voice acquisition means acquire his or her voice without holding the wearable terminal by hand. Therefore, it is possible to provide a system that is even more convenient for users whose hands tend to be occupied with work tools, as in farm work.
- The invention according to the third feature is the invention according to the first or second feature, wherein the position acquisition means acquires the position where the voice is uttered from position information of the wearable terminal.
- According to the invention of the third feature, the position acquisition means can acquire the position where the voice is uttered even if the user does not hold the wearable terminal by hand and does not explicitly state the position. Therefore, it is possible to provide a system that is even more convenient for users whose hands tend to be occupied with work tools, as in farm work.
- The invention according to the fourth feature is the invention according to any one of the first to third features, wherein the classification means classifies the voice according to whether the voice-recognized content is positive or negative.
- According to the invention of the fourth feature, since the classification means classifies the voice-recognized content according to whether it is positive or negative, the display means can display an icon indicating positive content and an icon indicating negative content on the map so as to be distinguishable from each other. Therefore, it is possible to provide a system in which positions indicating positive situations and positions indicating negative situations can be grasped more intuitively through the wearable terminal.
- The invention according to the fifth feature is the invention according to any one of the first to fourth features, wherein the classification means classifies the voice according to whether a specific keyword is included in the voice-recognized content.
- According to the invention of the fifth feature, since the classification means classifies the content according to whether a specific keyword is included, the display means can display an icon indicating that a specific keyword is included and an icon indicating that no specific keyword is included on the map so as to be distinguishable from each other. Therefore, it is possible to provide a system in which positions involving a specific keyword and positions not involving one can be grasped more intuitively through the wearable terminal.
- The invention according to the sixth feature is the invention according to any one of the first to fifth features, further comprising switching means for switching the displayed icons ON/OFF under a predetermined condition.
- The size of the display means of a wearable terminal is limited, and if too much information is displayed on it at once, the display becomes difficult for the user to understand.
- According to the invention of the sixth feature, since icons to be shown on the display means can be turned ON and icons to be hidden can be turned OFF, it is possible to provide a system that the user can use without difficulty even with the size-limited display means of the wearable terminal.
- According to the present invention, by having the voice acquisition means acquire voice, it is possible to provide a system in which the content recorded by voice can be grasped intuitively through the wearable terminal.
- Furthermore, since a wearable terminal is used, there is no need to carry a terminal in hand.
- FIG. 1 is a block diagram showing a hardware configuration and software functions of a wearable terminal voice icon arrangement system 1 according to the present embodiment.
- FIG. 2 is a flowchart showing a voice icon arrangement method according to this embodiment.
- FIG. 3 is an example for explaining the contents of the voice acquisition module 11.
- FIG. 4 is an example following FIG.
- FIG. 5 is an example following FIG.
- FIG. 6 is an example of the voice database 31 in the present embodiment.
- FIG. 7 is an example of the dictionary database 32 in the present embodiment.
- FIG. 8 is an example of the Web content database 33 in the present embodiment.
- FIG. 9 is an example of the classification database 34 in the present embodiment.
- FIG. 10 is an example when all icons are displayed in the image display unit 70 of the present embodiment.
- FIG. 11 is an example when some icons are displayed on the image display unit 70 of the present embodiment.
- FIG. 1 is a block diagram for explaining the hardware configuration and software functions of a wearable terminal voice icon arrangement system 1 according to this embodiment.
- The voice icon arrangement system 1 includes a control unit 10 that controls data, a communication unit 20 that communicates with other devices, a storage unit 30 that stores data, an input unit 40 that receives user operations, a sound collection unit 50 that collects the user's voice, a position detection unit 60 that detects the position, and an image display unit 70 that displays images.
- The voice icon arrangement system 1 is a wearable terminal such as smart glasses or a smart watch. Since a user such as a farmer thus does not need to carry a terminal in hand, it is possible to provide a voice icon arrangement system 1 that is highly convenient for users whose hands tend to be occupied with work tools.
- The voice icon arrangement system 1 may also be a smartphone. In this case, it is assumed that the smartphone is worn on the body so that both hands remain free.
- The voice icon arrangement system 1 may be a stand-alone system provided integrally with the wearable terminal, or a cloud-type system comprising the wearable terminal and a server connected to it via a network. In this embodiment, for simplicity, the voice icon arrangement system 1 is described as a stand-alone system.
- the control unit 10 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like.
- a CPU Central Processing Unit
- RAM Random Access Memory
- ROM Read Only Memory
- the communication unit 20 includes a device for enabling communication with other devices, for example, a Wi-Fi (Wireless Fidelity) compatible device compliant with IEEE 802.11.
- Wi-Fi Wireless Fidelity
- The control unit 10 reads a predetermined program and, cooperating with the communication unit 20 as necessary, realizes a voice acquisition module 11, a position acquisition module 12, a voice recognition module 13, a specifying module 14, a classification module 15, a display module 16, and a switching module 17.
- the storage unit 30 is a device that stores data and files, and includes a data storage unit such as a hard disk, a semiconductor memory, a recording medium, and a memory card.
- the storage unit 30 stores an audio database 31, a dictionary database 32, a web content database 33, a classification database 34, and a map database 35, which will be described later.
- the storage unit 30 also stores image data to be displayed on the image display unit 70.
- the type of the input unit 40 is not particularly limited. Examples of the input unit 40 include a keyboard, a mouse, and a touch panel.
- the type of the sound collecting unit 50 is not particularly limited. Examples of the sound collecting unit 50 include a microphone.
- The position detection unit 60 is not particularly limited as long as it is a device that can detect the latitude and longitude at which the voice icon arrangement system 1 is located. Examples of the position detection unit 60 include a GPS (Global Positioning System) receiver.
- the type of the image display unit 70 is not particularly limited. Examples of the image display unit 70 include a monitor and a touch panel.
- FIG. 2 is a flowchart showing a voice icon placement method using the voice icon placement system 1. The processing executed by each hardware and the software module described above will be described.
- Step S10 Acquisition of voice
- First, the control unit 10 of the voice icon arrangement system 1 executes the voice acquisition module 11 and acquires the voice uttered by the user (step S10).
- Step S11 Acquisition of a position where sound is generated
- The control unit 10 then executes the position acquisition module 12 and acquires the position where the voice was uttered (step S11).
- the control unit 10 refers to a calendar (not shown) stored in the storage unit 30 and further acquires the date on which the voice was uttered.
- FIGS. 3 to 5 are examples for explaining the processing of step S10 and step S11.
- Suppose that a farmer who operates Yamada Farm observes the state of the leek field cultivated at Yamada Farm A.
- The farmer utters: “It was rainy in the weather forecast, but it was clear. The stem grew to 30 cm. The soil is good. It seems to be about a week before harvesting.”
- the sound collection unit 50 of the sound icon arrangement system 1 collects the sound. Then, the control unit 10 A / D converts the sound collected by the sound collection unit 50 and sets the A / D converted information in a predetermined area of the storage unit 30.
- the position detector 60 of the voice icon placement system 1 detects the latitude and longitude where the voice icon placement system 1 is located.
- the position detection unit 60 detects that the latitude is 35 degrees 52 minutes 7 seconds north latitude and the longitude is 139 degrees 46 minutes 56 seconds east longitude.
- the information regarding the position is also set in a predetermined area of the storage unit 30 together with the A / D converted information.
- Next, the farmer moves to a point at latitude 35 degrees 52 minutes 2 seconds north, longitude 139 degrees 47 minutes 52 seconds east, and utters, “There was a pest A here.”
- the sound collection unit 50 of the sound icon arrangement system 1 collects the sound. Then, the control unit 10 A / D converts the sound collected by the sound collection unit 50 and sets the A / D converted information in a predetermined area of the storage unit 30.
- the position detection unit 60 of the voice icon placement system 1 detects the latitude and longitude where the voice icon placement system 1 is located, and the position information is also stored in a predetermined area of the storage unit 30 together with the A / D converted information. Set.
- Further, the farmer moves to a point at latitude 35 degrees 51 minutes 57 seconds north, longitude 139 degrees 47 minutes 1 second east, and utters, “Locusts have occurred in large numbers.”
- the sound collection unit 50 of the sound icon arrangement system 1 collects the sound. Then, the control unit 10 A / D converts the sound collected by the sound collection unit 50 and sets the A / D converted information in a predetermined area of the storage unit 30.
- the position detection unit 60 of the voice icon placement system 1 detects the latitude and longitude where the voice icon placement system 1 is located, and the position information is also stored in a predetermined area of the storage unit 30 together with the A / D converted information. Set.
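The acquisition flow of steps S10 and S11 — collecting the voice, A/D converting it, and setting it in storage together with the detected latitude/longitude and the date — can be sketched as follows. This is a minimal illustration only; the `VoiceRecord` fields and the `dms_to_decimal` helper are assumptions made for the example, not part of the patent.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VoiceRecord:
    samples: list      # A/D-converted audio samples (placeholder for the stored waveform)
    lat: float         # latitude where the voice was uttered, in decimal degrees
    lon: float         # longitude where the voice was uttered, in decimal degrees
    uttered_on: date   # date of utterance, read from the calendar in the storage unit

def dms_to_decimal(deg: int, minutes: int, seconds: int) -> float:
    """Convert degrees/minutes/seconds (the notation used in the embodiment) to decimal degrees."""
    return deg + minutes / 60 + seconds / 3600

# Example: the first utterance, detected at 35 deg 52 min 7 sec N, 139 deg 46 min 56 sec E
record = VoiceRecord(
    samples=[],  # would be filled from the sound collection unit 50 after A/D conversion
    lat=dms_to_decimal(35, 52, 7),
    lon=dms_to_decimal(139, 46, 56),
    uttered_on=date(2017, 2, 14),
)
```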
- Step S12 Speech recognition
- the control unit 10 transcribes the voice collected by the sound collection unit 50 from the waveform of the sound wave included in the A / D converted information.
- The information A/D-converted at the stages shown in FIGS. 3 and 4 is transcribed into phonetic character strings, and the information A/D-converted at the stage shown in FIG. 5 is transcribed as “Inagoga Taiyo Hassei.”
- the control unit 10 refers to the dictionary database 32 shown in FIG. 7, replaces the transcribed information with a language, and converts it into a sentence.
- For example, the information A/D-converted at the stage shown in FIG. 3 is converted into the sentence “It was rainy in the weather forecast, but it was clear. The stem grew to 30 cm. The soil is good. It seems to be about a week before harvesting.”
- the information A / D converted at the stage shown in FIG. 4 is “There was a pest A here”.
- Likewise, the information A/D-converted at the stage shown in FIG. 5 is converted into the sentence “Locusts have occurred in large numbers.”
- All of the sentence-converted information is set in a predetermined area of the storage unit 30 in association with the A/D-converted information and the position information.
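The conversion of step S12 — replacing the transcribed phonetic strings with dictionary entries to form sentences — might be sketched like this. The token dictionary and function names are illustrative assumptions; the actual contents of the dictionary database 32 (FIG. 7) are not reproduced here.

```python
# Toy dictionary database in the spirit of FIG. 7: phonetic tokens are replaced
# by dictionary entries; tokens with no entry are kept as-is.
DICTIONARY_DB = {
    "inago": "locust",
    "tairyo": "in large numbers",
    "hassei": "occurred",
}

def to_sentence(phonetic_tokens):
    """Replace each transcribed token with its dictionary entry to form a sentence."""
    return " ".join(DICTIONARY_DB.get(t, t) for t in phonetic_tokens)

# A rough equivalent of converting the FIG. 5 transcription into language
sentence = to_sentence(["inago", "tairyo", "hassei"])
```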
- Step S13 Identification of Web Content
- the control unit 10 refers to the Web content database 33.
- FIG. 8 is an example of the Web content database 33.
- In the Web content database 33, information on each field and the range of the field is stored in advance in association with an identification number.
- For example, the area bounded by latitude 35 degrees 51 minutes 55 seconds to 35 degrees 52 minutes 10 seconds north and longitude 139 degrees 46 minutes 55 seconds to 139 degrees 47 minutes 5 seconds east is the area of Yamada Farm A.
- the area of Yamada Farm A is associated with the identification number “1”.
- The area bounded by latitude 35 degrees 52 minutes 10 seconds to 35 degrees 52 minutes 20 seconds north and longitude 139 degrees 46 minutes 55 seconds to 139 degrees 47 minutes 5 seconds east is the area of Yamada Farm B.
- the area of Yamada Farm B is associated with the identification number “2”.
- The position information set in the predetermined area of the storage unit 30 through the steps of FIGS. 3 to 5 consists of (1) latitude 35 degrees 52 minutes 7 seconds north, longitude 139 degrees 46 minutes 56 seconds east; (2) latitude 35 degrees 52 minutes 2 seconds north, longitude 139 degrees 47 minutes 52 seconds east; and (3) latitude 35 degrees 51 minutes 57 seconds north, longitude 139 degrees 47 minutes 1 second east.
- Therefore, the control unit 10 can specify that the Web content associated with the position information acquired in the process of step S11 is the Web content of Yamada Farm A, with identification number “1.”
- In this way, the control unit 10 determines whether the position acquired in the process of step S11 falls inside a specific range defined in the Web content database 33, and specifies the Web content associated with that range. For occupations such as agriculture that involve working across a wide area, if the positions where voice is uttered were managed too precisely, the amount of data would become too large and the system could become difficult to use. According to the invention described in this embodiment, Web content is managed in association with a specific range, which prevents the amount of data from becoming excessive and unwieldy.
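The range check of step S13 can be illustrated as a simple bounding-box lookup over the field ranges of FIG. 8. Positions are compared in total seconds of arc; the function and table names here are assumptions made for this sketch, not the patent's implementation.

```python
def dms(d, m, s):
    """Degrees/minutes/seconds to total seconds of arc."""
    return d * 3600 + m * 60 + s

# Field ranges after FIG. 8: id -> (name, lat_min, lat_max, lon_min, lon_max)
FIELDS = {
    1: ("Yamada Farm A", dms(35, 51, 55), dms(35, 52, 10), dms(139, 46, 55), dms(139, 47, 5)),
    2: ("Yamada Farm B", dms(35, 52, 10), dms(35, 52, 20), dms(139, 46, 55), dms(139, 47, 5)),
}

def specify_web_content(lat, lon):
    """Return the identification number of the field whose range contains the position, else None."""
    for ident, (_name, lat0, lat1, lon0, lon1) in FIELDS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return ident
    return None

# The first utterance position (35d52m7s N, 139d46m56s E) falls inside Yamada Farm A.
assert specify_web_content(dms(35, 52, 7), dms(139, 46, 56)) == 1
```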
- Note that the control unit 10 reads a calendar (not shown) stored in the storage unit 30, so that today's date, “February 14,” is recorded in advance in the “date” field of the Web content database 33. Further, the control unit 10 reads weather information from an external weather forecast website via the communication unit 20, so that the current weather, “sunny,” is recorded in advance in the “weather” field of the Web content database 33.
- In addition, using past information, the control unit 10 records information such as “Yamada Farm A” and “leek” in advance.
- In the processes of steps S10 and S11, the control unit 10 acquires the voice, the position where the voice was uttered, and the date when the voice was uttered; in the process of step S13, the control unit 10 specifies the Web content associated with the acquired position and date. Since the date is thus associated with the Web content, it is possible to provide a voice icon arrangement system 1 that is more convenient for the user.
- Step S14 Classify speech into predetermined categories
- Next, the control unit 10 of the voice icon arrangement system 1 executes the classification module 15 to classify the content voice-recognized in the process of step S12 into predetermined categories in the Web content specified in the process of step S13, and records it (step S14).
- Specifically, the control unit 10 reads out the voice-recognized content from the predetermined area of the storage unit 30, where the pieces of information “It was rainy in the weather forecast, but it was clear. The stem grew to 30 cm. The soil is good. It seems to be about a week before harvesting,” “There was a pest A here,” and “Locusts have occurred in large numbers” are stored in order.
- the control unit 10 refers to the classification database 34.
- FIG. 9 is an example of the classification database 34.
- In the classification database 34, the relationships between words included in the sentence-converted content, the items listed in the Web content database 33, whether each word is positive or negative, and a flag identifying whether the word is a specific keyword are recorded in advance.
- In the Web content database 33 (FIG. 8), items such as “date,” “weather,” “field,” “crop,” “stem,” “soil,” “harvest,” “pest,” and “withered” are listed.
- In the classification database 34, word groups related to these items are recorded.
- The control unit 10 refers to the classification database 34 and associates “30 cm” included in this information with the item “stem.” Further, it associates “good” with the item “soil” and “one week” with the item “harvest.” Therefore, in the Web content database 33 (FIG. 8), at identification number “1,” “2. Crop growth state,” date “February 14, 2017,” the control unit 10 sets the information “30 cm” in the item “stem,” the information “good” in the item “soil,” and the information “about one week” in the item “harvest.”
- The control unit 10 also refers to the classification database 34 and associates “pest” included in this information with the item “pest.” Therefore, in the Web content database 33 (FIG. 8), at identification number “1,” “2. Crop growth state,” date “February 14, 2017,” the control unit 10 sets, in the item “pest,” the position information “latitude 35 degrees 52 minutes 2 seconds north, longitude 139 degrees 47 minutes 52 seconds east” at which the information “There was a pest A here” was uttered, together with information on the type “pest A.”
- Similarly, the control unit 10 refers to the classification database 34 and associates “locust” included in this information with the item “pest.” Therefore, in the Web content database 33 (FIG. 8), at identification number “1,” “2. Crop growth state,” date “February 14, 2017,” the control unit 10 sets, in the item “pest,” the position information “latitude 35 degrees 51 minutes 57 seconds north, longitude 139 degrees 47 minutes 1 second east” at which the information “Locusts have occurred in large numbers” was uttered.
- “Locust” included in the above information corresponds to a specific word set in advance. Therefore, a flag indicating a specific word is set for the information “Locusts have occurred in large numbers.”
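The classification of step S14 — matching words in the sentence against the classification database 34 and attaching an item and a flag (positive, negative, or specific keyword) — could look roughly like this. The toy keyword table is an assumption; FIG. 9's actual contents are not reproduced here.

```python
# Toy classification table in the spirit of FIG. 9:
# keyword -> (Web content item, flag), where flag is "positive", "negative", or "specific".
CLASSIFICATION_DB = {
    "30 cm": ("stem", "positive"),
    "good": ("soil", "positive"),
    "one week": ("harvest", "positive"),
    "pest": ("pest", "negative"),
    "locust": ("pest", "specific"),
}

def classify(sentence):
    """Return (item, flag) pairs for every registered keyword found in the sentence."""
    s = sentence.lower()
    return [(item, flag) for kw, (item, flag) in CLASSIFICATION_DB.items() if kw in s]

# The "locust" utterance receives the specific-word flag under the item "pest".
assert ("pest", "specific") in classify("Locusts have occurred in large numbers.")
```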
- When information already exists in the Web content specified in the process of step S13, the control unit 10 records the content voice-recognized in the process of step S12 by overwriting. This makes it possible to manage the records of farm work in time series.
- In this way, based on the content voice-recognized in the process of step S12, the control unit 10 records the related voice-recognized content in specific items (for example, date, weather, field, crop, stem, soil, harvest, pest, and withering) in the Web content specified in the process of step S13.
- When one piece of information contains a plurality of flags, the type of flag contained most frequently in that piece of information may be set.
- Alternatively, the flags given to words and the like may be weighted, and the flag with the highest total weight may be set.
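The two tie-breaking policies just described — set the most frequent flag, or weight the flags and set the heaviest — can be sketched as follows; the concrete weight values are hypothetical.

```python
from collections import Counter

# Hypothetical weights; the embodiment only says that weighting "may" be used.
FLAG_WEIGHTS = {"specific": 3.0, "negative": 2.0, "positive": 1.0}

def select_flag(flags, weights=None):
    """Pick one flag for a piece of information: most frequent, or heaviest total weight."""
    if weights is None:
        # Majority rule: the flag that occurs most often wins.
        return Counter(flags).most_common(1)[0][0]
    totals = {}
    for f in flags:
        totals[f] = totals.get(f, 0.0) + weights[f]
    return max(totals, key=totals.get)

# Majority rule: "positive" occurs most often.
assert select_flag(["positive", "positive", "negative"]) == "positive"
# Weighted rule: one "specific" flag outweighs two "positive" flags.
assert select_flag(["positive", "positive", "specific"], FLAG_WEIGHTS) == "specific"
```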
- Step S15 Web Content Image Display
- FIG. 10 shows a display example of the image display unit 70 of the wearable terminal at that time.
- Information recorded in the Web content database 33 is displayed on the image display unit 70 of the wearable terminal. Specifically, “2017/2/14” which is today's date is displayed on the upper right, and “sunny” which is today's weather is displayed next to it.
- The control unit 10 refers to the map database 35 and causes the image display unit 70 of the wearable terminal to display a map of the area corresponding to identification number “1” of the Web content database 33. Then, in accordance with the position detected by the position detection unit 60 in the process of step S11, the control unit 10 arranges and displays on that map icons corresponding to the flags classified in the process of step S14.
- An index (legend) of the marks on the map is displayed at the right of the image display unit 70 of the wearable terminal.
- White circles on the map indicate positions that contain positive information.
- Boxes with halftone dots on the map indicate positions that contain negative information.
- Hatched boxes on the map indicate positions that contain information including a specific word, for example “locust.”
- In FIG. 10, the indexes are all “ON,” which indicates that icons of all types are displayed on the map.
- Step S16 Switch Icon
- FIG. 11 shows a display example of the image display unit 70 of the wearable terminal at that time.
- the size of the image display unit 70 of the wearable terminal is limited, and if too much information is displayed on the image display unit 70 at one time, it is difficult for the user to understand.
- In FIG. 11, some of the indexes are switched “OFF,” and the corresponding icons are hidden. In this way, even on the size-limited image display unit 70 of the wearable terminal, the user can view only the icons that are needed.
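The switching of step S16 amounts to filtering the placed icons by the ON/OFF state of each index entry, as in this sketch (the data and names are illustrative assumptions):

```python
# Icons placed in step S15, keyed by the flag assigned in step S14.
# Positions are (latitude, longitude) in total seconds of arc; the values are illustrative.
ICONS = [
    {"flag": "positive", "pos": (129127, 503216)},
    {"flag": "negative", "pos": (129122, 503272)},
    {"flag": "specific", "pos": (129117, 503221)},
]

def visible_icons(icons, index_state):
    """Step S16: keep only icons whose flag is switched ON in the index (legend)."""
    return [ic for ic in icons if index_state.get(ic["flag"], False)]

# FIG. 10: all indexes ON -> every icon is displayed on the map.
assert len(visible_icons(ICONS, {"positive": True, "negative": True, "specific": True})) == 3
# FIG. 11: only the specific-word index ON -> a single icon remains.
assert len(visible_icons(ICONS, {"positive": False, "negative": False, "specific": True})) == 1
```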
- As described above, according to this embodiment, when the control unit 10 executes the voice acquisition module 11 and acquires the voice uttered by the user of the wearable terminal, the voice is recognized by executing the voice recognition module 13,
- the voice is classified into a predetermined category (for each predetermined flag) by executing the classification module 15, and by executing the display module 16, an icon corresponding to the classified category (flag) is arranged on a map and displayed on the image display unit 70 of the wearable terminal in accordance with the position acquired by executing the position acquisition module 12.
- This makes it possible to provide a voice icon arrangement system 1 in which the content recorded by voice can be grasped intuitively together with positional information through the wearable terminal.
- In addition, since a wearable terminal is used, there is no need to carry a terminal in hand. As a result, it is possible to provide a voice icon arrangement system 1 that is particularly convenient for users whose hands tend to be occupied with work tools, as in farm work.
- The control unit 10 executes the voice acquisition module 11 to acquire the voice uttered by the user from the sound collection unit 50 of the wearable terminal.
- The control unit 10 can also execute the position acquisition module 12 to acquire, from the position information of the wearable terminal, the position where the voice was uttered. Even if the user does not hold the wearable terminal by hand and does not explicitly state the position, the content of the voice and the information on the position where it was uttered can be acquired. Therefore, it is possible to provide a voice icon arrangement system 1 that is even more convenient for users whose hands tend to be occupied with work tools, as in farm work.
- The control unit 10 can execute the classification module 15 to classify the voice-recognized content according to whether it is positive or negative.
- As a result, on the image display unit 70 of the wearable terminal, an icon indicating positive content and an icon indicating negative content can be displayed on the map so as to be distinguishable from each other. Therefore, it is possible to provide a voice icon arrangement system 1 in which positions indicating positive situations and positions indicating negative situations can be grasped more intuitively through the wearable terminal.
- the control unit 10 can classify according to whether a specific keyword is included in the speech-recognized content.
- As a result, an icon indicating that a specific keyword is included and an icon indicating that no specific keyword is included can be displayed on the map so as to be distinguishable from each other. Therefore, it is possible to provide a voice icon arrangement system 1 in which positions involving a specific keyword and positions not involving one can be grasped more intuitively through the wearable terminal.
- the displayed icon can be switched on / off under a predetermined condition.
- the size of the image display unit 70 of the wearable terminal is limited, and if too much information is displayed on the image display unit 70 at one time, it is difficult for the user to understand.
- Since icons that the user wants to display on the image display unit 70 can be turned ON and icons that the user wants to hide can be turned OFF, it is possible to provide a voice icon arrangement system 1 that the user can use without difficulty even on the size-limited image display unit 70 of the wearable terminal.
- When the control unit 10 acquires voice in the process of step S10, it recognizes the voice in the process of step S12, and in the process of step S13 it specifies the Web content associated with the position where the voice was acquired.
- the control unit 10 records the speech-recognized content in the specified web content.
- This makes it possible to provide a voice icon arrangement system 1 that links the voice-recognized content to the Web content.
- The Web content displayed on the image display unit 70 includes a map covering the position where the voice was acquired, and the control unit 10 displays the content voice-recognized in the process of step S12 superimposed on the map of that Web content.
- Thereby, simply by the control unit 10 acquiring voice in the process of step S10, the content of the voice is recorded in the Web content in association with the position where the voice was uttered.
- Further, the voice-recognized content is superimposed and displayed on the map of the Web content. Therefore, it is possible to provide a voice icon arrangement system 1 that is even more convenient for the user.
- the means and functions described above are realized by a computer (including a CPU, an information processing apparatus, and various terminals) reading and executing a predetermined program.
- the program is provided in a form recorded on a computer-readable recording medium such as a flexible disk, CD (CD-ROM, etc.), DVD (DVD-ROM, DVD-RAM, etc.).
- the computer reads the program from the recording medium, transfers it to the internal storage device or the external storage device, stores it, and executes it.
- the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to a computer via a communication line.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Tourism & Hospitality (AREA)
- Animal Husbandry (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- General Business, Economics & Management (AREA)
- Mining & Mineral Resources (AREA)
- Marine Sciences & Fisheries (AREA)
- Marketing (AREA)
- Agronomy & Crop Science (AREA)
- Biodiversity & Conservation Biology (AREA)
- Botany (AREA)
- Ecology (AREA)
- Forests & Forestry (AREA)
- Environmental Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The problem addressed by the present invention is to provide a system that, by creating an audio record of working conditions and the like, links the recorded content to position information so that the user can grasp the recorded content more intuitively. The solution is a sound icon distribution system 1 in which a control unit 10 executes a sound acquisition module 11; when a sound uttered by the user of the portable terminal is acquired, the sound is recognized by executing a sound recognition module 13, classified into prescribed categories (prescribed flags) by executing a classification module 15, and, by executing a display module 16, an icon corresponding to the classified category (flag) is displayed on an image display unit 70 of the portable terminal, arranged on a map at the position acquired by executing a position acquisition module 12.
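The classify-then-display pipeline in the abstract (recognize the uttered sound, assign it one of the prescribed flags, look up the icon for that flag) can be sketched as below. The flag names, keyword lists, and icon file names are invented examples, not categories defined by the patent.

```python
# Hypothetical sketch of the classification module (flags) and icon lookup.
# Categories, keywords, and icon names are illustrative assumptions.

ICON_BY_FLAG = {
    "watering": "droplet.png",
    "pest": "bug.png",
    "harvest": "basket.png",
    "other": "note.png",
}

KEYWORDS = {
    "watering": ["water", "irrigat"],
    "pest": ["pest", "insect", "bug"],
    "harvest": ["harvest", "pick"],
}

def classify(recognized_text):
    """Assign a prescribed flag to speech-recognized text by keyword match."""
    lowered = recognized_text.lower()
    for flag, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return flag
    return "other"

def icon_for(recognized_text):
    """Return the icon to arrange on the map for the classified flag."""
    return ICON_BY_FLAG[classify(recognized_text)]

print(icon_for("Irrigated the north field"))    # droplet.png
print(icon_for("Found insects on the leaves"))  # bug.png
```

In a deployed system the keyword matcher would likely be replaced by a trained classifier, but the mapping from recognized text to flag to icon is the structure the abstract describes.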
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2017/016936 WO2018198314A1 (fr) | 2017-04-28 | 2017-04-28 | Sound icon distribution system for portable terminal, and method and program |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2017/016936 WO2018198314A1 (fr) | 2017-04-28 | 2017-04-28 | Sound icon distribution system for portable terminal, and method and program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018198314A1 true WO2018198314A1 (fr) | 2018-11-01 |
Family
ID=63920356
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2017/016936 Ceased WO2018198314A1 (fr) | 2017-04-28 | 2017-04-28 | Sound icon distribution system for portable terminal, and method and program |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018198314A1 (fr) |
Worldwide Applications (1)
- 2017-04-28: WO PCT/JP2017/016936, published as WO2018198314A1 (fr), status: not active (Ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004220149A (ja) * | 2003-01-10 | 2004-08-05 | National Agriculture & Bio-Oriented Research Organization | Field planting status confirmation system |
| JP2012216135A (ja) * | 2011-04-01 | 2012-11-08 | Olympus Corp | Image generation system, program, and information storage medium |
| JP2013254356A (ja) * | 2012-06-07 | 2013-12-19 | Topcon Corp | Farming support system |
| WO2015059764A1 (fr) * | 2013-10-22 | 2015-04-30 | Mitsubishi Electric Corporation | Navigation server, navigation system, and navigation method |
| JP2015084226A (ja) * | 2014-10-24 | 2015-04-30 | Pioneer Corporation | Terminal device, display method, display program, system, and server |
Non-Patent Citations (1)
| Title |
|---|
| SHIN'YA HIRUTA ET AL.: "Detection and Visualization of Place-triggered Geotagged Tweets", INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 54, no. 2, 15 February 2013 (2013-02-15), pages 710 - 720, XP055526748 * |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022179440A1 (fr) * | 2021-02-28 | 2022-09-01 | International Business Machines Corporation | Recording a separated sound from a sound stream mixture on a personal device |
| GB2619229A (en) * | 2021-02-28 | 2023-11-29 | Ibm | Recording a separated sound from a sound stream mixture on a personal device |
| JP2024512178A (ja) * | 2021-02-28 | 2024-03-19 | International Business Machines Corporation | Recording a sound separated from a mixed sound stream on a personal device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20210312930A1 (en) | Computer system, speech recognition method, and program | |
| US11881209B2 (en) | Electronic device and control method | |
| EP3591577A1 (fr) | Appareil de traitement d'informations, procédé de traitement d'informations et programme | |
| CN109885810A (zh) | 基于语义解析的人机问答方法、装置、设备和存储介质 | |
| WO2016137797A1 (fr) | Procédés, systèmes et interface utilisateur empathique pour l'interfaçage avec un dispositif informatique empathique | |
| WO2021147528A1 (fr) | Procédé exécutable par ordinateur relatif aux mauvaises herbes et système informatique | |
| JP2019053126A (ja) | 成長型対話装置 | |
| CN109902158A (zh) | 语音交互方法、装置、计算机设备及存储介质 | |
| US20220172047A1 (en) | Information processing system and information processing method | |
| WO2019133638A1 (fr) | Étiquetage vocal de vidéo pendant un enregistrement | |
| Ganchev | Computational bioacoustics: biodiversity monitoring and assessment | |
| JP2022053521A (ja) | 伐採時期判別プログラム | |
| KR20160072489A (ko) | 사용자 단말 장치 및 그의 대상 인식 방법 | |
| WO2022262586A1 (fr) | Procédé d'identification de plante, système informatique et support de stockage lisible par ordinateur | |
| WO2023018908A1 (fr) | Système d'intelligence artificielle conversationnelle dans un espace de réalité virtuelle | |
| Darapaneni et al. | Farmer-bot: An interactive bot for farmers | |
| WO2023065989A1 (fr) | Procédé et système de diagnostic de maladie de plante et d'insecte nuisible, et support de stockage lisible | |
| JP2015104078A (ja) | 撮像装置、撮像システム、サーバ、撮像方法、及び撮像プログラム | |
| WO2018198314A1 (fr) | Système de distribution d'icone sonore pour terminal portable, et procédé et programme | |
| KR20170086233A (ko) | 라이프 음성 로그 및 라이프 영상 로그를 이용한 점증적 음향 모델 및 언어 모델 학습 방법 | |
| JP7273563B2 (ja) | 情報処理装置、情報処理方法、および、プログラム | |
| JP6845446B2 (ja) | 音声内容記録システム、方法及びプログラム | |
| JP2014048924A (ja) | 情報処理装置、情報処理方法およびプログラム | |
| Ortenzi et al. | Italian speech commands for forestry applications | |
| US20220238109A1 (en) | Information processor and information processing method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17907613 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17907613 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: JP |