JPH09265457A - Online conversation system - Google Patents
- Publication number: JPH09265457A (JP8075840A, JP7584096A)
- Authority: JP (Japan)
- Legal status: Pending (an assumption, not a legal conclusion)
Landscapes
- Computer And Data Communications (AREA)
- User Interface Of Digital Computer (AREA)
- Digital Computer Display Output (AREA)
- Information Transfer Between Computers (AREA)
Abstract
(57) [Abstract]
[Problem] In conventional online conversation systems, the display screen of a communication terminal merely shows a running list of the utterances from each terminal. The object is to make the operation screen of a communication terminal in an online conversation system visually easy to understand, and to further heighten the sense of presence in the conversation.
[Solution] A communication terminal 101, connected to a host computer 103 via a communication network 102, is provided with connected-terminal display means 109, which displays a symbol image for each communication terminal currently taking part in the conversation, and sender emphasis means 110, which highlights the symbol image of the communication terminal corresponding to the most recent speaker. The communication terminal 101 receives from the host computer 103 the character data of each utterance together with identification information for its speaker (the sender terminal), displays the utterance, and highlights the symbol image corresponding to the sender.
Description
[0001]
[Field of the Invention] The present invention relates to data-processing techniques using computers and to online communication techniques using communication networks.
[0002]
[Description of the Related Art] An online conversation system based on text can be built by connecting communication terminals, for example personal computers, each equipped with input means such as a keyboard, display means such as a CRT, and communication means such as a MODEM, over a communication network such as a public line. Character data entered at one communication terminal is transmitted to the partner terminal, and the partner terminal displays the received character data on its display means, so the two sides can exchange utterances. When the conversation partner cannot be uniquely identified, that is, when a similar online conversation system is built among three or more communication terminals, data is not sent and received directly between terminals; instead, an indirect method via a common data storage area provided in a host computer or the like is normally used. Each communication terminal writes its utterance into the common data storage area by sending character data to the host computer, and the host computer conveys each utterance to all participants by transmitting the contents written in the common data storage area to every communication terminal taking part in the conversation system.
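The host-mediated exchange described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class, method, and variable names are assumptions.

```python
class HostComputer:
    """Sketch of the host-mediated chat described above (names are assumed)."""

    def __init__(self):
        self.store = []        # common data storage area: (sender_id, text)
        self.terminals = {}    # terminal_id -> callback that delivers an utterance

    def connect(self, terminal_id, deliver):
        self.terminals[terminal_id] = deliver

    def receive(self, sender_id, text):
        # Write the utterance into the common data storage area...
        self.store.append((sender_id, text))
        # ...then relay it, with its sender identification, to every terminal.
        for deliver in self.terminals.values():
            deliver(sender_id, text)


# Three terminals A, B, C each log what they are shown.
screens = {tid: [] for tid in "ABC"}
host = HostComputer()
for tid in "ABC":
    host.connect(tid, lambda s, t, tid=tid: screens[tid].append(f"{s}> {t}"))

host.receive("A", "I am Taro")
print(screens["B"])  # prints ['A> I am Taro']
```

Because every utterance passes through the host, each terminal sees the same ordered list of utterances, which is the property the indirect method provides over direct terminal-to-terminal transmission.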
[0003] FIG. 2 illustrates an example of the operation screen of an online conversation system, here a conversation among three communication terminals with the identification names A, B, and C. On the display screen 201 of communication terminal A, a sender identification display 202 indicating the speaker and the utterances 203 to 206 are shown. Later utterances appear lower on the display screen 201. In FIG. 2(a), utterance 206 is the most recent. When communication terminal A then transmits the character data "I am Taro", it is appended to the display as utterance 207 in FIG. 2(b).
[0004] In addition to online conversation systems based mainly on character data as described above, the prior art includes examples that display the conversation scene visually in order to heighten the sense of presence. FIG. 3 illustrates an example of the operation screen of an online conversation system from Worlds Inc. called "Worlds Chat". The display screen 301 of a communication terminal consists of an utterance display area 302 and a participant display area 303. The utterance display area 302 is basically the same in configuration as the display screen described with reference to FIG. 2. The participant display area 303 is made up of symbol images corresponding to the individual communication terminals taking part in the online conversation system. FIG. 3 shows an example of a conversation among three communication terminals, with symbol images 304 to 306 displayed for the respective terminals.
[0005]
[Problems to be Solved by the Invention] In the prior art shown in FIG. 3, the benefits of the participant display area 303, which shows the participants visually, are that the participants can be seen at a glance and that the sense of presence in the conversation is heightened. After the online conversation has started, however, only the utterance display area 302 is needed; the participant display area 303 serves no operational purpose. During the conversation, only a running list of utterances such as the one in FIG. 2 is displayed, so the sense of presence can hardly be called high. Moreover, because the region occupied by the participant display area 303 goes unused, the display method can hardly be called efficient. One problem the present invention solves is therefore to merge the participant display area 303 and the utterance display area 302 of the prior art into an operation screen that keeps a high sense of presence even during the conversation. Furthermore, in the prior art the utterances consist of character data, but a more realistic online conversation system can be built by transmitting and receiving voice data instead. In that case the utterances are output as audio, so the utterance display area 302 becomes unnecessary and the visual participant display area 303 alone would suffice. If the participant display area 303 is removed, however, the sender identification display also disappears from the operation screen, and it becomes impossible to tell which communication terminal an utterance came from. A second problem the present invention solves is therefore to indicate the sender of each utterance clearly on an operation screen without a separate participant display area.
[0006]
[Means for Solving the Problems] To solve the above problems, the online conversation system of the present invention is configured with the following means.
[0007] The system consists of a communication terminal comprising: character input means, such as a keyboard or tablet device; display means, such as a CRT or liquid-crystal panel; first communication means, made up of a telephone, MODEM, or the like, for connecting to a communication network and transmitting and receiving data; symbol image setting means, which creates display data for symbol images based on connected-terminal list information received from the host computer; symbol image storage means, made up of a memory, hard disk, or the like, which stores the symbol image display data; connected-terminal display means, which outputs the symbol images stored in the symbol image storage means to the display means; and sender emphasis means, which searches the symbol image data stored in the symbol image storage means for the symbol image corresponding to the communication terminal identified by sender identification information received from the host computer and changes the display data of that symbol image. The system further consists of a communication network made up of public lines, dedicated lines, or the like, and a host computer comprising: second communication means, such as a telephone or MODEM, for connecting to the communication network and transmitting and receiving data; received-data storage means, such as a memory or hard disk, which stores the character data received by the second communication means together with sender identification information identifying the communication terminal that sent the data; and connected-terminal storage means, such as a memory or hard disk, which stores connected-terminal list information for identifying the currently connected communication terminals.
[0008] The communication terminal described above preferably also includes voice input means, such as a microphone, for inputting voice data, and preferably includes voice output means, such as a speaker, for outputting voice data.
[0009]
[Description of the Preferred Embodiments] The present invention is described below with reference to the accompanying drawings.
[0010] FIG. 1 is a block diagram showing the configuration of an online conversation system according to an embodiment of the present invention. When a conversation using character data or voice data is held among a plurality of communication terminals connected to a host computer via a communication network, the online conversation system of this embodiment notifies each terminal of the content of each utterance together with the speaker identification information.
[0011] As shown in FIG. 1, the online conversation system of the present invention consists overall of a communication terminal 101, a communication network 102, and a host computer 103. Although a plurality of communication terminals 101 are connected to the communication network 102, only a single communication terminal 101 is shown in FIG. 1 for convenience of explanation. Each block is described below.
[0012] First, the configuration of the communication terminal 101 is described. It consists of: character input means 104, made up of a keyboard, tablet device, or the like, through which the user of the communication terminal 101 enters character data, namely utterances addressed to the online conference; display means 105, made up of a CRT, liquid-crystal panel, or the like, which displays character data received from the host computer 103 and symbol images representing the communication terminals simultaneously connected to the host computer; first communication means 106, made up of a line terminal device, MODEM, or the like, which connects to the communication network 102 and exchanges data with the host computer 103; symbol image storage means 107, a storage device such as a memory or hard disk, which stores the display data of a plurality of symbol images; and three data processing means, each made up of a data processing unit such as a CPU and a storage unit such as a memory: symbol image setting means 108, connected-terminal display means 109, and sender emphasis means 110. It may also include voice input means 114, such as a microphone, for entering voice data such as the user's voice, and voice output means 115, such as a speaker, for outputting voice data received from the host computer 103 via the first communication means 106.
[0013] The first communication means 106 transmits the character data entered through the character input means 104 and the voice data entered through the voice input means 114 to the host computer 103 via the communication network 102. It also receives utterance data, such as character data and voice data, from the host computer 103 via the communication network 102 and outputs it to the display means 105 or the voice output means 115. It further receives sender identification information from the host computer 103 and passes it to the sender emphasis means 110, and receives connected-terminal list information from the host computer 103 and notifies the symbol image setting means 108. The connected-terminal list information preferably contains not only the identification information of each communication terminal but also information about the display data of the symbol image corresponding to each terminal, for example the image data of the symbol image, a symbol image pattern number, or display attributes such as the color of the symbol image.
[0014] The symbol image setting means 108 creates the display data of the symbol image corresponding to each communication terminal identified by the connected-terminal list information that the first communication means 106 received from the host computer. In creating the symbol data, it is preferable to copy and use symbol image display data stored in the communication terminal in advance. It is also preferable to receive symbol image display data from the host computer as part of the connected-terminal list information and use that.
[0015] The connected-terminal display means 109 outputs the symbol images onto the display screen of the display means 105 based on the display data stored in the symbol image storage means 107.
[0016] The sender emphasis means 110 searches the symbol images stored in the symbol image storage means 107 for the one corresponding to the communication terminal identified by the sender identification information received from the first communication means 106, and changes the display data of that symbol image.
[0017] For the communication network 102, any conventional communication network can be used, whether wired or wireless and whether made up of public lines or dedicated lines, as long as it can connect a plurality of communication terminals 101 to the host computer 103 simultaneously and mediate the transmission and reception of data.
[0018] Next, the configuration of the host computer 103 is described. It consists of: second communication means 111, made up of a line terminal device, MODEM, or the like, which connects to the communication network 102 and exchanges data with the plurality of communication terminals 101; received-data storage means 112, made up of a memory or hard disk, which stores the data received by the second communication means 111 together with sender identification information identifying the communication terminal that sent the data; and connected-terminal storage means 113, which stores connected-terminal list information for identifying the currently connected communication terminals.
[0019] The received-data storage means 112 stores utterance data, such as character data and voice data, received from the second communication means, together with the sender identification information identifying the communication terminal that sent the data.
[0020] The connected-terminal storage means 113 stores connected-terminal list information for identifying the currently connected communication terminals.
[0021] Next, the data structures of the connected-terminal list information and the symbol image display data used in the embodiment of the present invention are described.
[0022] The structure of the connected-terminal list information is described with reference to FIG. 4. The data table 401 of connected-terminal list information normally consists of identification information for each communication terminal, for example an identification number 402, as shown in FIG. 4(a). This identification information corresponds one-to-one to each connected communication terminal, with no duplicates. The host computer 103 monitors the connected communication terminals so that only the identification information of currently connected terminals is stored in the data table 401. As shown in FIG. 4(b), the table may also store the symbol image display data 403 of each connected communication terminal.
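As a hypothetical in-memory rendering of data table 401, the two variants of FIG. 4 could look like this; the dictionary keys are illustrative assumptions, not names from the patent.

```python
# Sketch of data table 401 (key names are assumptions, not from the patent).
# FIG. 4(a): identification numbers only, one unique entry per connected terminal.
table_a = [{"id": 1}, {"id": 2}, {"id": 3}]

# FIG. 4(b): the same list, with symbol image display data 403 carried alongside.
table_b = [
    {"id": 1, "symbol_display_data": {"pattern": 0, "color": "blue"}},
    {"id": 2, "symbol_display_data": {"pattern": 1, "color": "green"}},
]

# The host keeps only currently connected terminals, with no duplicate ids.
ids = [row["id"] for row in table_b]
assert len(ids) == len(set(ids))
```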
[0023] The structure of the symbol image display data is described with reference to FIG. 5. The data table 501 of display data contains elements such as: terminal identification information 502, for example the identification number of the communication terminal; coordinate information 503, such as the display coordinates of the symbol image on the screen of the display device; size information 504, such as the size and magnification of the symbol image; display color information 505, such as the color and brightness of the symbol image; graphic data 506, such as the image data of the symbol image; and sender identification information 507, which identifies the most recent sender among the communication terminals. The most recent sender identified by the sender identification information 507 is the communication terminal that sent the character data or voice data most recently transmitted from the host computer 103 to each communication terminal 101. The graphic data 506 may store more than one symbol image, so that a single communication terminal can be represented by different symbol images switched according to its state.
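One row of data table 501 might be modeled as below. This is a sketch only; every field name is an assumption that simply mirrors elements 502 to 507 of FIG. 5.

```python
from dataclasses import dataclass, field

@dataclass
class SymbolDisplayData:
    """One row of data table 501 (FIG. 5); field names are illustrative."""
    terminal_id: int              # 502: terminal identification information
    xy: tuple = (0, 0)            # 503: display coordinates on the screen
    size: float = 1.0             # 504: size / magnification
    color: str = "gray"           # 505: display color information
    images: list = field(default_factory=lambda: ["normal"])  # 506: one or more images
    is_last_sender: bool = False  # 507: marks the most recent sender

    def current_image(self):
        # With two stored images, switch to the emphasized one while sender.
        if self.is_last_sender and len(self.images) > 1:
            return self.images[1]
        return self.images[0]

row = SymbolDisplayData(terminal_id=1, images=["normal", "speaking"])
row.is_last_sender = True
print(row.current_image())  # prints "speaking"
```

The `current_image` method illustrates the last sentence of the paragraph: when graphic data 506 holds several images, field 507 decides which one is shown.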
[0024] Next, the operation of the communication terminal 101 is described through the processing flows of the symbol image setting means 108, the connected-terminal display means 109, and the sender emphasis means 110.
[0025] FIG. 6 shows the processing flow of the symbol image setting means 108. In step 601, the data for one terminal is read from the connected-terminal list information received from the host computer 103. The data may be read in step 601 each time the first communication means 106 receives one terminal's worth of data, or the first communication means 106 may first receive the data for several terminals and store it in a storage device such as a memory, from which step 601 then reads it one terminal at a time. In step 602, it is detected whether the data read for the terminal includes symbol image display data; as explained with reference to FIG. 4, the connected-terminal list information may include, in addition to each terminal's identification information, display data for showing the symbol image corresponding to each terminal on the display device. If step 602 determines that no symbol image display data is included, then in step 603 symbol image display data stored in advance in a storage device such as a memory inside the communication terminal 101 is read and used as the display data for that terminal. In step 604, the identification information of the terminal and the symbol image display data are stored in the symbol image storage means 107, using the data table structure described with reference to FIG. 5. Step 605 determines whether the processing up to step 604 has been performed for all communication terminals identified in the connected-terminal list information received from the host computer 103, and the processing then ends. This processing of the symbol image setting means 108 is executed every time connected-terminal list information is received from the host computer 103, so the data in the symbol image storage means 107 is successively updated to the latest contents.
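Steps 601 to 605 could be sketched as the following function, assuming a simple dict-based store keyed by terminal id; the function and key names are hypothetical, not taken from the patent.

```python
DEFAULT_SYMBOL = {"pattern": 0, "color": "gray"}  # pre-stored in the terminal

def update_symbol_store(terminal_list, symbol_store):
    """FIG. 6 sketch: rebuild the symbol image store from one received
    connected-terminal list (names and data shapes are assumptions)."""
    symbol_store.clear()                           # keep only the latest contents
    for entry in terminal_list:                    # step 601: one terminal at a time
        display = entry.get("symbol_display_data")  # step 602: data included?
        if display is None:                        # step 603: fall back to local data
            display = dict(DEFAULT_SYMBOL)
        symbol_store[entry["id"]] = display        # step 604: store id + display data
    # step 605: the loop above ends once every listed terminal is processed

store = {}
update_symbol_store(
    [{"id": 1, "symbol_display_data": {"color": "blue"}}, {"id": 2}], store)
print(store[2]["color"])  # prints "gray"
```

Terminal 1 arrived with its own display data, while terminal 2 did not and therefore received the locally stored default, matching the branch at steps 602 and 603.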
[0026] FIG. 7 shows the processing flow of the connected-terminal display means 109. In step 701, the symbol image display data for one terminal is read from the symbol image storage means 107. In step 702, it is determined whether symbol images have been displayed for all communication terminals stored in the symbol image storage means 107. If an undisplayed terminal remains, then in step 703 the display data of the corresponding symbol image is read and output to the display means 105. When a symbol image is output, it is displayed according to the data set in fields 503 to 506 of FIG. 5.
[0027] FIG. 8 shows the processing flow of the sender emphasis means 110. In step 801, the sender identification information is read from the data that the first communication means 106 received from the host computer 103. In step 802, the symbol image corresponding to the communication terminal identified by that sender identification information is searched for in the data stored in the symbol image storage means 107. In the example of FIG. 5, the sender identification information is the identification number of the communication terminal, and data matching this identification number is located by comparing it against the identification information 502. In step 803, the contents of the display data of the retrieved symbol image are changed. What is changed is one or more of the coordinate information 503, the size information 504, the display color information 505, and the graphic data 506. The sender identification information 507 may also be changed. This field takes one of two values: normally it is set to the value indicating that the terminal is not the sender, and the value indicating the sender is set only for the symbol image found in step 802. In other words, the sender identification information 507 takes a value different from the others only for the communication terminal corresponding to the most recent speaker. When the graphic data 506 stores data for more than one image, the normal symbol image and the emphasized symbol image are switched according to this value.
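The sender emphasis flow of FIG. 8 might look like the function below, assuming a dict-based store keyed by terminal id; the highlight color and all names are illustrative assumptions.

```python
def emphasize_sender(sender_id, symbol_store, highlight="red", normal="gray"):
    """FIG. 8 sketch (steps 801-803): find the symbol image matching the
    received sender id and change its display data; reset all others."""
    for tid, display in symbol_store.items():      # step 802: search by id
        if tid == sender_id:
            display["is_last_sender"] = True       # 507: value meaning "sender"
            display["color"] = highlight           # step 803: change display data
        else:
            display["is_last_sender"] = False      # all others: "not sender"
            display["color"] = normal

store = {1: {"color": "gray"}, 2: {"color": "gray"}, 3: {"color": "gray"}}
emphasize_sender(2, store)   # terminal 2 spoke last
emphasize_sender(3, store)   # then terminal 3: 2 reverts, 3 is highlighted
print(store[2]["color"], store[3]["color"])  # prints "gray red"
```

Resetting the non-matching entries on every call reproduces the behavior shown later in FIG. 9, where the previous speaker's symbol image returns to the shared appearance as soon as a new utterance arrives.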
[0028] Finally, examples of the operation screen of the communication terminal 101 according to the embodiment described above are given. FIG. 9 shows the case where the sender emphasis means 110 changes the display color information 505 in the symbol image display data. In FIG. 9(a), symbol images 902 to 904 corresponding to the connected communication terminals are shown on the display screen 901 of the communication terminal 101. The utterances from the terminals corresponding to symbol images 902 to 904 appear in the utterance displays 905 to 907, respectively. Among the terminals corresponding to the three symbol images 902 to 904, the utterance from the terminal corresponding to symbol image 902 is the most recent, so only symbol image 902 is shown in a different display color. When the terminal corresponding to symbol image 903 then makes an utterance as shown at 903 in FIG. 9(b), the display color of symbol image 902 returns to the same color as the other symbol images, and instead symbol image 903 is displayed in a color different from the others. In the example of FIG. 9, the user of communication terminal 101 can identify the most recent speaker by the display color of the symbol images.
[0029] FIG. 10 shows the case in which the transmission source emphasizing means 110 changes the graphic data 506 within the display data of a symbol image. Here, the graphic data 506 stores two symbol images of different forms: one used while the terminal is speaking and one used otherwise. In FIG. 10(a), symbol images 1002 to 1004, corresponding to the connected communication terminals, are displayed on the display screen 1001 of the communication terminal 101. The utterances from the communication terminals corresponding to symbol images 1002 to 1004 are shown in the utterance displays 905 to 907, respectively. Among the three communication terminals corresponding to symbol images 1002 to 1004, the most recent utterance came from the terminal corresponding to symbol image 1002, so only symbol image 1002 is displayed in the different form that indicates the speaker. When the communication terminal corresponding to symbol image 1004 subsequently makes an utterance, as shown at 1007 in FIG. 10(b), symbol image 1002 returns to the same form as the other symbol images, and symbol image 1004 instead takes the different form indicating the speaker. In the example of FIG. 10, the user of the communication terminal 101 can identify the most recent speaker by the form of the symbol images.
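The two-form graphic data of FIG. 10 can be sketched in the same way: each symbol image stores a "speaking" graphic and an "idle" graphic, and the emphasizing step only selects which one is shown. All names and the file names are illustrative assumptions; the patent specifies only that graphic data 506 holds two images of different forms.

```python
class SymbolImage:
    """Symbol image whose graphic data (cf. 506) holds two forms."""
    def __init__(self, terminal_id, idle_gfx, speaking_gfx):
        self.terminal_id = terminal_id
        self.gfx = {"idle": idle_gfx, "speaking": speaking_gfx}
        self.current = "idle"

    def shown_graphic(self):
        return self.gfx[self.current]

def emphasize_sender(symbols, sender_id):
    """Switch the most recent sender to its speaking form; all others idle."""
    for s in symbols:
        s.current = "speaking" if s.terminal_id == sender_id else "idle"

symbols = [
    SymbolImage(1002, "face_closed.png", "face_open.png"),
    SymbolImage(1003, "face_closed.png", "face_open.png"),
    SymbolImage(1004, "face_closed.png", "face_open.png"),
]
emphasize_sender(symbols, 1002)  # 1002 shows its speaking form
emphasize_sender(symbols, 1004)  # 1002 reverts; 1004 shows its speaking form
```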
[0030] FIG. 11 shows the case in which the transmission source emphasizing means 110 changes the coordinate information 503 and the size information 504 within the display data of a symbol image. In FIG. 11(a), symbol images 1102 to 1106, corresponding to the connected communication terminals, are displayed on the display screen 1101 of the communication terminal 101. Of the utterances from the communication terminals corresponding to symbol images 1102 to 1106, the most recent one is displayed in the utterance frame 1107. Among the five communication terminals corresponding to symbol images 1102 to 1106, the most recent utterance came from the terminal corresponding to symbol image 1104; its content is therefore displayed in the utterance frame 1107, and only symbol image 1104 is moved toward the center of the screen and enlarged. When the communication terminal corresponding to symbol image 1106 subsequently makes an utterance, as shown at 1107 in FIG. 11(b), symbol image 1104 returns to the periphery of the screen and to the same size as the other symbol images; symbol image 1106 instead moves toward the center of the screen and is enlarged. In the example of FIG. 11, the user of the communication terminal 101 can identify the most recent speaker by the position and size of the symbol images.
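The position-and-size variant of FIG. 11 can be sketched as below. The layout constants (center position, the two sizes) are assumptions for illustration; the patent only says the speaker's symbol moves toward the screen center and grows, while the others keep their peripheral position and size.

```python
PERIPHERY_SIZE = 32        # assumed size of non-speaking symbol images
CENTER_SIZE = 64           # assumed enlarged size of the speaker's image
CENTER_POS = (160, 120)    # assumed position near the screen center

class SymbolImage:
    """Symbol image with coordinate (cf. 503) and size (cf. 504) data."""
    def __init__(self, terminal_id, home_pos):
        self.terminal_id = terminal_id
        self.home_pos = home_pos       # resting place at the screen edge
        self.pos = home_pos
        self.size = PERIPHERY_SIZE

def emphasize_sender(symbols, sender_id):
    """Move the most recent sender's image to the center and enlarge it;
    return every other image to its edge position and normal size."""
    for s in symbols:
        if s.terminal_id == sender_id:
            s.pos, s.size = CENTER_POS, CENTER_SIZE
        else:
            s.pos, s.size = s.home_pos, PERIPHERY_SIZE

# Terminals 1102..1106 lined up along the top edge of the screen.
symbols = [SymbolImage(1100 + i, (i * 60, 0)) for i in range(2, 7)]
emphasize_sender(symbols, 1104)  # 1104 moves to the center, enlarged
emphasize_sender(symbols, 1106)  # 1104 returns to the edge; 1106 takes its place
```

Storing `home_pos` separately from the current position is what lets a de-emphasized symbol snap back to exactly where it was.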
[0031] In the examples of FIGS. 9 to 11 above, the utterances from each communication terminal are handled as character data, but the same can be realized using voice data. In that case, instead of displaying the utterance content, it is output by the voice output means 115 provided in the communication terminal 101. The character data may also be displayed together with the voice output.
[0032] In addition, by storing moving-image data as display data, the symbol image of the transmission source can be displayed as a moving image with the same configuration as this embodiment.
[0033] [Effects of the Invention] With the online conversation system of the present invention, the operation screen, which in conventional online conversation systems merely displayed a list of the utterances from each terminal, is made visually easy to understand, further enhancing the sense of presence in the conversation.
[0034] First, the invention develops the participant display area seen on the operation screens of conventional communication terminals so that the utterances of the conversation themselves can be displayed visually. In particular, this provides an operating environment that beginners who are not accustomed to operating computers can easily become familiar with.
[0035] Second, when multiple utterances are displayed visually at the same time as described above, or when the conversation is carried out with voice data, the receiving communication terminals face the problem that the order of utterances becomes confused. In the present invention, the communication terminal that transmitted the last utterance is indicated visually by emphasizing its symbol image, which is easy even for beginners to understand.
FIG. 1 is a block diagram of one embodiment of the present invention.
FIG. 2 is an explanatory diagram of an online conversation system.
FIG. 3 is an explanatory diagram of the prior art.
FIG. 4 is an explanatory diagram of the connected-terminal list information.
FIG. 5 is an explanatory diagram of the display data of a symbol image.
FIG. 6 is a processing flowchart of the symbol image setting means.
FIG. 7 is a processing flowchart of the connected terminal display means.
FIG. 8 is a processing flowchart of the transmission source emphasizing means.
FIG. 9 is an explanatory diagram of the display data of a symbol image.
FIG. 10 is an explanatory diagram of the display data of a symbol image.
FIG. 11 is an explanatory diagram of the display data of a symbol image.
101: communication terminal, 102: communication network, 103: host computer, 104: character input means, 105: display means, 106: first communication means, 107: symbol image storage means, 108: symbol setting means, 109: connected terminal display means, 110: transmission source emphasizing means, 111: second communication means, 112: received data storage means, 113: connected terminal storage means, 114: voice input means, 115: voice output means.
Claims (6)
1. An online conversation system comprising: a communication terminal having character input means such as a keyboard, display means such as a CRT, and first communication means connected to a communication network, the communication terminal transmitting character data entered from the character input means via the first communication means and outputting character data received by the first communication means to the display means; the communication network; and a host computer having second communication means connected to the communication network, received data storage means for storing character data received from the second communication means together with transmission source identification information identifying the communication terminal that transmitted the character data, and connected terminal storage means for storing connected terminal list information for identifying the communication terminals currently connected, the host computer broadcasting the character data and the transmission source identification information stored in the received data storage means to the plurality of connected communication terminals; wherein the host computer transmits the connected terminal list information stored in the connected terminal storage means to each communication terminal, and each communication terminal comprises: symbol image setting means for creating display data of symbol images corresponding to the individual communication terminals identified by the received connected terminal list information; symbol image storage means for storing the symbol images; connected terminal display means for displaying the symbol images on the display means based on the display data stored in the symbol image storage means; and transmission source emphasizing means for searching the symbol images stored in the symbol image storage means for the symbol image corresponding to the communication terminal identified by the transmission source identification information and changing the display data of that symbol image.
2. The online conversation system according to claim 1, wherein the communication terminal further comprises voice input means such as a microphone and voice output means such as a speaker, transmits voice data entered from the voice input means via the first communication means, and outputs voice data received by the first communication means through the voice output means; and wherein the host computer stores voice data received from the second communication means and the transmission source identification information identifying the communication terminal that transmitted the voice data in the received data storage means, and broadcasts the voice data and the transmission source identification information stored in the received data storage means to the plurality of connected communication terminals.
3. The online conversation system according to claim 1 or 2, wherein the display data of a symbol image includes display attributes such as the color and brightness of the symbol image, and the transmission source emphasizing means changes the display attributes within the display data.
4. The online conversation system according to claim 1 or 2, wherein the display data of a symbol image includes size attributes such as the size and magnification ratio of the symbol image, and the transmission source emphasizing means changes the size attributes within the display data.
5. The online conversation system according to claim 1 or 2, wherein the display data of a symbol image includes coordinate data such as the display coordinates of the symbol image, and the transmission source emphasizing means changes the coordinate data within the display data.
6. The online conversation system according to claim 1, wherein a plurality of symbol images per communication terminal, together with display symbol identification data for specifying which of the plurality of symbol images should be displayed, are stored in the symbol image storage means, and the transmission source emphasizing means changes the display symbol identification data.
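The message flow claimed above — the host stores each utterance with its sender's identification, broadcasts both to every connected terminal, and each terminal re-emphasizes the sender's symbol — can be sketched as follows. This is a hedged illustration under assumed names; the claims define means, not classes, and none of these identifiers appear in the patent.

```python
class Terminal:
    """Communication terminal: shows utterances and emphasizes the sender."""
    def __init__(self, terminal_id):
        self.terminal_id = terminal_id
        self.log = []            # utterances received (cf. display means)
        self.emphasized = None   # terminal whose symbol image is highlighted

    def receive(self, text, sender_id):
        self.log.append((sender_id, text))
        self.emphasized = sender_id   # cf. transmission source emphasizing means

class Host:
    """Host computer: stores utterance + sender ID and broadcasts both."""
    def __init__(self):
        self.terminals = []      # cf. connected terminal list information
        self.received = []       # cf. received data storage means

    def connect(self, terminal):
        self.terminals.append(terminal)

    def post(self, sender_id, text):
        self.received.append((sender_id, text))
        for t in self.terminals:          # broadcast to all connected terminals
            t.receive(text, sender_id)

host = Host()
a, b, c = Terminal(101), Terminal(102), Terminal(103)
for t in (a, b, c):
    host.connect(t)
host.post(102, "hello")   # every terminal displays "hello" and highlights 102
```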
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP8075840A JPH09265457A (en) | 1996-03-29 | 1996-03-29 | Online conversation system |
Publications (1)
Publication Number | Publication Date |
---|---|
JPH09265457A true JPH09265457A (en) | 1997-10-07 |
Family
ID=13587808
Cited By (121)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010047259A (en) * | 1999-11-18 | 2001-06-15 | 박광순 | Cyber beauty contest selection method, chat and shopping mall operation method |
JP2002288102A (en) * | 2001-03-26 | 2002-10-04 | Sharp Corp | Communication terminal device, chat server device, method for specifying statement order, program for realizing method for specifying statement order, and chat system |
KR100438347B1 (en) * | 2000-12-11 | 2004-07-02 | 김부식 | System, method and medium for language study |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US9330381B2 (en) | 2008-01-06 | 2016-05-03 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
JP2016122460A (en) * | 2016-02-12 | 2016-07-07 | 株式会社スクウェア・エニックス | Communication program, communication terminal, and communication method |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9600174B2 (en) | 2006-09-06 | 2017-03-21 | Apple Inc. | Portable electronic device for instant messaging |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9954996B2 (en) | 2007-06-28 | 2018-04-24 | Apple Inc. | Portable electronic device with conversation management for incoming instant messages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10348654B2 (en) * | 2003-05-02 | 2019-07-09 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US10607141B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
JP2024170463A (en) * | 2018-05-07 | 2024-12-10 | アップル インコーポレイテッド | Multi-Participant Live Communication User Interface |
US12242702B2 (en) | 2021-05-15 | 2025-03-04 | Apple Inc. | Shared-content session user interfaces |
US12267622B2 (en) | 2021-09-24 | 2025-04-01 | Apple Inc. | Wide angle video conference |
US12265696B2 (en) | 2020-05-11 | 2025-04-01 | Apple Inc. | User interface for audio message |
US12301979B2 (en) | 2021-01-31 | 2025-05-13 | Apple Inc. | User interfaces for wide angle video conference |
US12302035B2 (en) | 2010-04-07 | 2025-05-13 | Apple Inc. | Establishing a video conference during a phone call |
US12368946B2 (en) | 2021-09-24 | 2025-07-22 | Apple Inc. | Wide angle video conference |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US12087308B2 (en) | 2010-01-18 | 2024-09-10 | Apple Inc. | Intelligent automated assistant |
US10984327B2 (en) | 2010-01-25 | 2021-04-20 | New Valuexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10607140B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10607141B2 (en) | 2010-01-25 | 2020-03-31 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US12307383B2 (en) | 2010-01-25 | 2025-05-20 | Newvaluexchange Global Ai Llp | Apparatuses, methods and systems for a digital conversation management platform |
US11410053B2 (en) | 2010-01-25 | 2022-08-09 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US10984326B2 (en) | 2010-01-25 | 2021-04-20 | Newvaluexchange Ltd. | Apparatuses, methods and systems for a digital conversation management platform |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US12302035B2 (en) | 2010-04-07 | 2025-05-13 | Apple Inc. | Establishing a video conference during a phone call |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
JP2016122460A (en) * | 2016-02-12 | 2016-07-07 | Square Enix Co., Ltd. | Communication program, communication terminal, and communication method |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
JP2024170463A (en) * | 2018-05-07 | 2024-12-10 | Apple Inc. | Multi-Participant Live Communication User Interface |
US12265696B2 (en) | 2020-05-11 | 2025-04-01 | Apple Inc. | User interface for audio message |
US12301979B2 (en) | 2021-01-31 | 2025-05-13 | Apple Inc. | User interfaces for wide angle video conference |
US12260059B2 (en) | 2021-05-15 | 2025-03-25 | Apple Inc. | Shared-content session user interfaces |
US12242702B2 (en) | 2021-05-15 | 2025-03-04 | Apple Inc. | Shared-content session user interfaces |
US12267622B2 (en) | 2021-09-24 | 2025-04-01 | Apple Inc. | Wide angle video conference |
US12368946B2 (en) | 2021-09-24 | 2025-07-22 | Apple Inc. | Wide angle video conference |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JPH09265457A (en) | | Online conversation system |
RU2488232C2 (en) | | Communication network and devices for text to speech and text to facial animation conversion |
JP3301983B2 (en) | | Interactive communication device and method using characters |
JP2005293280A (en) | | Chat system, communication apparatus, control method thereof, and program |
JP2003219047A (en) | | Communication device |
WO2018061173A1 (en) | | Tv conference system, tv conference method, and program |
JP2003067317A (en) | | Message exchange method, computer, management device and recording medium |
US20130016058A1 (en) | | Electronic device, display method and computer-readable recording medium storing display program |
JP3283506B2 (en) | | Multimedia telemeeting terminal device, terminal device system, and operation method thereof |
US20060099978A1 (en) | | Wireless communication terminal with function of confirming receiver's identity by displaying image corresponding to the receiver and method thereof |
JP2003234842A (en) | | Real-time handwritten communication system |
JP2001306467A (en) | | Information transmission method |
JPS63180261A (en) | | Telephone with data communication function |
JP2006245876A (en) | | Conference system using a projector having a network function |
JPH0677992A (en) | | Multimedia electronic mail system |
JP3598509B2 (en) | | Terminal device and control method thereof |
CN110601958A (en) | | Chat information display method and device in mobile phone game and server |
JPH06253118A (en) | | Multi-medium communication equipment |
JP3107893B2 (en) | | Teleconferencing system |
WO2025187385A1 (en) | | Information processing system |
JPH11187178A (en) | | Electronic blackboard conference system and communication method |
JP2003016020A (en) | | A plurality of mice, communication system using telephone, and communication method |
JP3168423B2 (en) | | Data processing method |
JP2023092307A (en) | | Terminal device, program, and operation method of terminal device |
JPH08221338A (en) | | Electronic conferencing equipment |