WO2024116529A1 - System and system control method - Google Patents
System and system control method
- Publication number
- WO2024116529A1 (PCT/JP2023/032738)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- avatar
- information
- communication
- cpu
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
Definitions
- the present invention relates to a system for controlling the display of a user's avatar and a method for controlling the system.
- In virtual reality (VR), users typically communicate in a virtual space using an avatar (a representation of the user in the system).
- Patent document 1 describes a technology that detects the movements, facial expressions, or five senses of a user who is a distributor, and if it is determined that the detection results satisfy certain conditions, changes the facial expression of an avatar to a specified facial expression and changes the pose of the avatar to a specified pose.
- the present invention therefore aims to provide technology that more appropriately controls how user information is reflected in an avatar.
- One aspect of the present invention is a system for realizing communication between a first user and a second user, comprising: an acquisition means for acquiring real-time information of the first user; and a control means for controlling, on a display device owned by the second user that displays a virtual space including a first avatar of the first user, the reflection of the real-time information of the first user in the first avatar, based on a purpose of the communication.
- Another aspect of the present invention is a method for controlling a system for realizing communication between a first user and a second user, comprising: an acquisition step of acquiring real-time information of the first user; and a control step of controlling, on a display device owned by the second user that displays a virtual space including a first avatar of the first user, the reflection of the real-time information of the first user in the first avatar, based on a purpose of the communication.
- the present invention makes it possible to more appropriately control how user information is reflected in an avatar.
- FIG. 1 is a configuration diagram of a communication system according to the first embodiment.
- FIG. 2 is a configuration diagram of a user terminal according to the first embodiment.
- FIG. 3 is a diagram illustrating the configuration of the server PC according to the first embodiment.
- FIG. 4 is a diagram showing a setting UI according to the first embodiment.
- FIGS. 5A to 5C are diagrams for explaining group counseling according to the first embodiment.
- FIG. 6A is a flowchart of a process using remote rendering according to the first embodiment.
- FIG. 6B is a flowchart of a process using local rendering according to the first embodiment.
- FIGS. 7A to 7C are diagrams showing a setting UI according to the second embodiment.
- FIGS. 8A to 8C are diagrams showing the appearance of avatars in a virtual space according to the second embodiment.
- FIG. 9 is a flowchart of a process according to the second embodiment.
- FIG. 10 is a configuration diagram of a communication system according to the third embodiment.
- FIG. 11 is a flowchart of a process according to the third embodiment.
- FIGS. 12A and 12B are diagrams showing the appearance of an avatar in a virtual space according to the fourth embodiment.
- FIG. 13 is a flowchart of a process according to the fourth embodiment.
- FIG. 1 is a diagram showing the overall configuration of a communication system according to the first embodiment.
- the communication system has a server PC 101 and multiple user terminals 102 (terminals connected to the server PC 101 via a network such as the Internet).
- the user terminal 102 is a display device such as a PC, smartphone, tablet, or HMD (head mounted display).
- the user terminal 102 may also be a controller (control device) capable of controlling these display devices.
- In the first embodiment, a case where the user terminal 102 is an HMD will be described. Although the first embodiment assumes that the HMD can be connected directly to a network, the HMD may also be connected to a network via another device (such as a PC or smartphone).
- FIG. 2 shows an example of a hardware configuration diagram of the user terminal 102 in embodiment 1.
- the user terminal 102 has a CPU 201, a display 202, a ROM 203, a RAM 204, a network I/F 205, and an internal bus 206.
- the user terminal 102 has a microphone 208, a sensor unit 209, a camera 210, a speaker 211, a storage device 212, and a short-range communication I/F 213.
- the CPU 201 is a control unit that performs overall control of various functions of the user terminal 102 via the internal bus 206 using programs stored in the ROM 203. The results of the execution of the programs by the CPU 201 are displayed on the display 202 so that the user can visually confirm them.
- ROM 203 is a flash memory or the like. ROM 203 stores various setting information and application programs as described above. RAM 204 functions as a memory and work area for CPU 201.
- the network I/F (interface) 205 is a module for connecting to a network.
- the microphone 208 picks up the voice uttered by the user.
- the sensor unit 209 includes one or more sensors. Specifically, the sensor unit 209 includes at least one of a GPS, a gyro sensor, an acceleration sensor, a proximity sensor, and a measurement sensor (a sensor that measures blood pressure, heart rate, or brain waves). The sensor unit 209 may also be equipped with a sensor for detecting physical information (information about the body; biometric information) to realize authentication (fingerprint authentication, vein authentication, iris authentication, etc.).
- Camera 210 is a fisheye camera (imaging unit) attached inside the HMD. Camera 210 can capture an image of the user's face. The captured image is stored in RAM 204 after the distortion of the fisheye lens is removed.
- the speaker 211 plays the voices of users participating in the communication system, sound effects, background music, etc.
- the storage device 212 is a storage medium.
- the storage device 212 is also a device that stores various data such as applications.
- the short-range communication I/F 213 is an interface used for communication with a controller held by the user.
- the user can input gestures to the user terminal 102 by moving the controller they are holding.
- the user can also give instructions to the user terminal 102 by operating buttons or a joystick provided on the controller.
- the controller may have sensors that measure the user's heart rate, pulse, sweat, and the like.
- the short-range communication I/F 213 may also communicate with a wearable device worn by the user to obtain the user's heart rate, pulse, sweat, and the like.
- the short-range communication I/F 213 may also communicate with a device (such as a camera or a group of sensors) installed in a room where the user is present.
- FIG. 3 shows an example of a hardware configuration diagram of the server PC 101 in embodiment 1.
- the server PC 101 has a display unit 301, a VRAM 302, a BMU 303, a keyboard 304, a PD 305, a CPU 306, a storage 307, a RAM 308, a ROM 309, and a flexible disk 310.
- the server PC 101 has a microphone 311, a speaker 312, a network I/F 313, and a bus 314.
- the display unit 301 displays, for example, live view video, icons, messages, menus, or other user interface information.
- moving images to be displayed on the display unit 301 are drawn in the VRAM 302.
- the moving image data generated in VRAM 302 is transferred to display unit 301 according to a predetermined rule, and is thereby displayed on display unit 301.
- BMU (bit move unit) 303 controls data transfer between multiple memories (for example, between VRAM 302 and other memories). BMU (bit move unit) 303 also controls data transfer between memories and each I/O device (for example, network I/F 313).
- the keyboard 304 has various keys that allow the user to input characters, etc.
- the PD (pointing device) 305 is used, for example, to point to content (such as icons or menus) displayed on the display unit 301, or to drag and drop objects.
- the CPU 306 is a control unit that controls each component based on the OS and programs (control programs) stored in the storage 307, the ROM 309, or the flexible disk 310.
- Storage 307 is a hard disk drive (HDD) or a solid state drive (SSD). Storage 307 stores each control program, various data to be temporarily stored, etc.
- RAM 308 includes a work area for CPU 306, an area for saving data during error processing, and an area for loading control programs.
- ROM 309 stores the control programs used in server PC 101, as well as data to be temporarily stored.
- the flexible disk 310 stores each control program and various data (data that needs to be stored temporarily).
- the microphone 311 picks up audio from around the server PC 101.
- the speaker 312 outputs audio contained in the video data.
- the network I/F 313 communicates with the user terminal 102 via the network.
- the bus 314 includes an address bus, a data bus, and a control bus.
- the control program can be provided to the CPU 306 from the storage 307, ROM 309, or flexible disk 310, or from another information processing device via the network I/F 313 over the network.
- FIG. 4 is a diagram showing a setting UI (user interface) 401 of the communication system according to the first embodiment.
- the setting UI 401 is displayed on the display 202 of the user terminal 102.
- the user uses the setting UI 401 to set (reflection setting) how physical information (information about the body; biometric information) including the user's symptoms is reflected in the user's avatar and communicated to other users.
- five settings with priorities of 1 to 5 are displayed.
- UI areas 402 to 404 each indicate a condition, and UI area 405 indicates the processing to be performed when those conditions are met.
- the user for whom the reflection settings are made will be referred to as the "first user," and other users (users other than the first user who participate in the same virtual space community as the first user) will be referred to as "other users."
- the avatar of the first user will be referred to as the "used avatar."
- UI area 402 is a UI area in which the purpose of communication (e.g., counseling or business negotiations) is set.
- the UI area 403 is a UI area for setting the role of the other user (e.g., counselor or patient). Note that if nothing is entered in the UI area 403, this indicates that the role of the other user can be any role.
- the UI area 404 is a UI area for setting the type of physical information (physical information type) to be reflected in the avatar.
- Physical information types include, for example, tics, smiles, or tension.
- a tic is a quick body movement or vocalization that occurs involuntarily.
- the UI area 405 is a UI area for setting the degree to which the physical information indicated by the physical information type is reflected in the avatar (reflection method).
- the user can select one of the following options: “reflect as is,” “reflect with emphasis,” “reflect with suppression,” or “do not reflect.”
- Group counseling is a counseling method in which multiple patients gather together.
- multiple users with different roles such as counselors and patients, participate in a community (group) in the virtual space.
- a first user who is a patient may be concerned about his or her tic disorder and may not want other patients to see the symptoms of tic disorder.
- the first user sets, for example, as in setting group 406, "When the purpose of communication is to provide counseling, tics are not reflected in the avatar used that is shown to the other user whose role is a patient" (a setting with a priority of 2).
- the first user sets, "When the purpose of communication is to provide counseling, tics are reflected in the avatar used that is shown to the other user whose role is a counselor" (a setting with a priority of 1).
- the first user when reflecting the first user's own symptoms in the avatar being used, the first user can set whether or not to show the symptoms (and to what extent) depending on the role of the other user viewing the avatar being used.
- the first user when conducting business negotiations in a virtual space, it is assumed that the first user wishes to emphasize and reflect a friendly expression (such as a smile) in the avatar to ease the other party's guard, and conversely, to suppress and reflect a nervous appearance in the avatar to be used.
- the first user can set, as in setting group 407, "When the purpose of communication is to conduct business negotiations, the avatar to be used shown to the other user should emphasize and reflect the first user's smile, and suppress and reflect the first user's nervousness."
- the type of physical information and the degree to which it is reflected in the avatar may be set in advance for each communication purpose.
- the communication system gives priority to and uses the setting with the lower assigned priority number among multiple settings entered in the setting UI 401.
- for example, suppose that the purpose of communication is counseling, the role of the other user is counselor, and the reflection of tics in the used avatar is being controlled. In this case, the settings with priorities 1 and 5 both apply, but the communication system uses the setting with the lower priority number (i.e., the setting with priority 1). As a result, the communication system performs control such that the symptoms of the tics occurring in the first user are reflected "as is" in the used avatar (a sketch of this rule selection follows below).
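- The following is a minimal, hypothetical sketch of this priority-based rule selection. The `ReflectionSetting` class, the `select_setting` helper, and the sample values are illustrative assumptions and do not appear in the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReflectionSetting:
    priority: int              # lower number = higher priority
    purpose: str               # e.g. "counseling", "business negotiation" (UI area 402)
    other_role: Optional[str]  # None means "any role" (blank UI area 403)
    info_type: str             # e.g. "tic", "smile", "tension" (UI area 404)
    degree: str                # "as is", "emphasize", "suppress", "do not reflect" (UI area 405)

def select_setting(settings, purpose, other_role, info_type):
    """Return the matching setting with the lowest priority number, or None."""
    matches = [s for s in settings
               if s.purpose == purpose
               and s.info_type == info_type
               and (s.other_role is None or s.other_role == other_role)]
    return min(matches, key=lambda s: s.priority, default=None)

# Rules in the style of setting groups 406/407 (values are illustrative):
settings = [
    ReflectionSetting(1, "counseling", "counselor", "tic", "as is"),
    ReflectionSetting(2, "counseling", "patient", "tic", "do not reflect"),
    ReflectionSetting(5, "counseling", None, "tic", "suppress"),
]
print(select_setting(settings, "counseling", "counselor", "tic").degree)  # -> "as is"
```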
- the communication system may also reflect in the avatar used not only visible information such as the first user's facial expression or movements, but also other information.
- the user terminal 102 may obtain the patient's body temperature or heart rate using the sensor unit 209, and obtain eye movements using the camera 210. The user terminal 102 may then estimate the degree of tension or calmness of the first user based on the obtained information, and reflect the estimated result in the avatar.
- the above setting UI is intended to be used before the first user joins a community (group) in the virtual space, but there may also be a UI that allows the first user to make settings while viewing the virtual space. Such a UI will be described later with reference to FIG. 5C.
- Figures 5A to 5C are diagrams showing group counseling in a virtual space.
- patient A has set "When the purpose of communication is to provide counseling, tics will not be reflected in the used avatar shown to other users whose role is patient."
- patient A has also set "When the purpose of communication is to provide counseling, tics will be reflected in the used avatar shown to other users whose role is counselor."
- four people are participating in the group counseling: a main counselor, a sub-counselor, patient A, and patient B.
- Avatar 501 is the avatar of the main counselor
- avatar 502 is the avatar of patient A.
- FIG. 5A shows the virtual space displayed on the sub-counselor's user terminal 102
- FIG. 5B shows the virtual space displayed on the patient B's user terminal 102.
- the virtual space displayed on the sub-counselor's user terminal 102 represents the space (field of view) visible from the sub-counselor's avatar.
- the virtual space displayed on the patient B's user terminal 102 represents the space visible from the patient B's avatar.
- the sub-counselor's avatar and Patient B's avatar are located in different locations in the virtual space. For this reason, the range of the avatars (main counselor's avatar 501 and Patient A's avatar 502) displayed on the sub-counselor's user terminal 102 and Patient B's user terminal 102 is different. Note that the display on the sub-counselor's user terminal 102 does not include Patient B's avatar. The display on Patient B's user terminal 102 does not include the sub-counselor's avatar.
- patient A's avatar 502 does not change its facial expression. This is because patient A has set "tics will not be reflected in the avatar shown to other users whose role is patient" as shown in setting group 406.
- FIG. 5C is another example of a UI that allows a first user to set "how the used avatar should be displayed to other users in the virtual space."
- patient B uses a controller or the like to select the main counselor's avatar 501, and then issues an instruction to set how his or her own avatar should be displayed.
- a setting screen 503 is displayed in the virtual space.
- Patient B uses this setting screen 503 to set how his or her avatar should be displayed to the main counselor. This allows the first user to easily change the settings even when the virtual space is displayed.
- FIGS. 6A and 6B show the processing of the communication system according to embodiment 1.
- the flowchart in Figure 6A shows processing using a method in which the server PC 101 renders the images to be displayed on each user terminal 102 (a method called remote rendering).
- Figure 6B shows processing using a method in which the user terminal 102 renders the images (a method called local rendering).
- the communication system according to embodiment 1 is capable of executing either of these two types of processing.
- Steps S601 to S603 are the process in which the first user sets how his or her physical information will be reflected in the avatar used. This process is executed between the user terminals 102 of all users participating in the virtual space community and the server PC 101.
- the user terminal 102 of the first user will be referred to as the "user terminal 102A”
- each component of the user terminal 102A will have the letter "A” added to the end.
- the display 202 of the user terminal 102A will be referred to as the "display 202A”
- the CPU 201 of the user terminal 102A will be referred to as the "CPU 201A”.
- In step S601, the CPU 201 (CPU 201A) of the user terminal 102A accepts a reflection setting (a setting for how the first user's physical information is reflected in the used avatar and communicated to the other user) from the first user. Specifically, the CPU 201A acquires the settings input by the user in the setting UI shown in FIG. 4 (settings that correlate the purpose of communication, the role of the other user, the type of physical information, and the degree of reflection) as the reflection setting.
- In step S602, the CPU 201A sends the accepted reflection settings to the server PC 101.
- In step S603, the CPU 306 of the server PC 101 records (stores) the received reflection settings in the storage 307 or the like.
- steps S604 to S606 are the process in which the first user participates in the community in the virtual space. Although omitted in FIG. 6A, this process is executed between the user terminals 102 of all users who will participate in the community in this virtual space and the server PC 101.
- In step S604, the CPU 201A receives an instruction from the first user to join a community in a virtual space. At this time, the CPU 201A obtains identification information for the virtual space of the community in which the first user wishes to participate (hereinafter referred to as the "desired space").
- In step S605, the CPU 201A sends the identification information of the desired space to the server PC 101, requesting participation in the community of the desired space.
- In step S606, the CPU 306 allows the first user to participate in the community of the desired space that corresponds to the acquired identification information.
- In step S607, the CPU 201A acquires real-time (current) physical information of the first user.
- CPU 201A acquires at least one of the following physical information, for example: voice, emotion, facial expression, blood pressure, heart rate, stress level, body temperature, amount of sweat, brain waves, pulse rate, posture, and movement (including eye movement).
- CPU 201A may acquire, for example, a photographed image of the first user as the physical information.
- CPU 201A controls microphone 208A to acquire the voice spoken by the first user.
- CPU 201A may further estimate the emotion of the first user from the acquired voice using existing voice emotion analysis technology.
- CPU 201A may control camera 210A to acquire a captured image of the user's face.
- CPU 201A may use facial expression analysis technology to analyze (acquire) the facial expression of the first user based on the captured image.
- CPU 201A may also analyze the eye movement of the first user based on the captured image to estimate the psychology of the first user.
- CPU 201A may also estimate the blood pressure, heart rate, and/or stress level of the first user using vital data analysis technology.
- the CPU 201A may also control the sensor unit 209A to measure at least one of the first user's blood pressure, heart rate, body temperature, sweat rate, and brain waves.
- CPU 201A may use short-range communication I/F 213 to communicate with a controller held by the first user or a wearable device worn by the first user.
- CPU 201A may acquire information acquired by the controller or wearable device (any of the first user's heart rate, body temperature, amount of sweat, brain waves, etc.).
- CPU 201A may use short-range communication I/F 213 to communicate with a camera installed indoors, etc., and acquire an image captured by the camera of the first user.
- CPU 201A may then acquire posture information or movement information of the first user based on the captured image.
- the CPU 201A may use the short-range communication I/F 213 to connect to a group of sensors installed in the room where the first user is located, and obtain vital information of the user.
- In step S608, the CPU 201A transmits the physical information of the first user acquired in step S607 to the server PC 101.
- for example, if the CPU 201A determines that the first user, who has a tic disorder, has grimaced, it may determine that the first user has exhibited a motor tic.
- CPU 201A may also determine whether the first user with a dizziness disorder has exhibited dizziness based on the eye movement of the first user. In other words, CPU 201A may refer to information indicating the disease that the first user has (disease information) to determine whether the first user has exhibited symptoms.
- CPU 201A may then control the reflection of physical information such as motor tics or dizziness to the used avatar based on the result of the determination of whether the first user has exhibited symptoms.
- if it is determined that the first user has exhibited symptoms, the CPU 201A may transmit physical information related to the first user's symptoms (for example, information on a grimacing motion) in step S608; otherwise, the CPU 201A may refrain from transmitting such physical information. In this way, when it is determined that the symptoms indicated by the disease information of the first user have appeared in the first user, the physical information related to the symptoms can be reflected in the used avatar in step S612 described below.
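- A minimal sketch of this symptom determination might look as follows; the disease names, dictionary keys, and helper are hypothetical illustrations, not terms from the patent:

```python
# Hypothetical mapping from a registered disease to a predicate over the
# acquired physical information (steps S607/S608).
DISEASE_SYMPTOM_CHECKS = {
    "motor tic disorder": lambda info: info.get("grimacing", False),
    "dizziness disorder": lambda info: info.get("irregular_eye_movement", False),
}

def symptoms_appeared(disease: str, physical_info: dict) -> bool:
    """Return True if the symptom indicated by the disease information appeared."""
    check = DISEASE_SYMPTOM_CHECKS.get(disease)
    return bool(check and check(physical_info))

# Only transmit symptom-related information when the symptom actually appeared:
physical_info = {"grimacing": True}
if symptoms_appeared("motor tic disorder", physical_info):
    payload = {"motor_tic": True, **physical_info}  # sent to the server in S608
```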
- In step S609, the CPU 306 of the server PC 101 receives the physical information of the first user.
- In step S610, the CPU 306 obtains (determines) the purpose of the communication to be performed in the desired space.
- the CPU 306 acquires the purpose of communication that a user (any user participating in the community of the desired space) inputs using the UI displayed on the user terminal 102.
- alternatively, if the information constituting the desired space includes information indicating the purpose of the communication, the CPU 306 may determine the purpose from that information.
- the CPU 306 may estimate (determine) the purpose of communication based on information on the account of at least one of multiple users participating in the community of the desired space. For example, if a user with a counselor account participates in the community of the desired space, the CPU 306 may estimate that the purpose of communication is "to provide counseling".
- the CPU 306 may analyze the appearance of each avatar in the desired space to estimate the purpose of the communication. For example, if an avatar wearing a white coat is present in the desired space, the CPU 306 may estimate that the purpose of the communication is "to provide medical examination or counseling.”
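- The fallback chain just described might be sketched as follows; the function name, account fields, and string labels are assumptions for illustration:

```python
def estimate_purpose(input_purpose, participant_accounts, avatar_appearances):
    """Hypothetical sketch of step S610: determine the communication purpose."""
    if input_purpose:                        # a participant entered the purpose via the UI
        return input_purpose
    if any(acc.get("role") == "counselor" for acc in participant_accounts):
        return "counseling"                  # estimated from account information
    if "white coat" in avatar_appearances:   # estimated from avatar appearance
        return "medical examination or counseling"
    return "unknown"
```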
- Steps S611 to S615 are a loop process, which is repeated for each user (users other than the first user) who has joined the community and is viewing the video of the desired space (virtual space).
- One of the users viewing the video of the desired space is hereinafter referred to as the "second user.”
- the user terminal 102 of the second user is hereinafter referred to as the "user terminal 102B”
- each component of the user terminal 102B is suffixed with the letter “B.”
- the display 202 of the user terminal 102B is hereinafter referred to as the "display 202B”
- the CPU 201 of the user terminal 102B is hereinafter referred to as the "CPU 201B.”
- In step S611, the CPU 306 determines (confirms) the role (position) of the second user.
- the CPU 306 acquires information on the role of the second user, for example, based on the account information of the second user. For example, if the purpose of communication is to provide counseling, the CPU 306 determines whether the role of the second user is a counselor, based on the account information of each user managed by the communication system.
- the CPU 306 may determine the role of the second user based on information obtained from an external system. For example, the CPU 306 queries an electronic medical record system in a hospital, and determines whether the role of the second user is a counselor (whether the second user is registered as a counselor) based on the results of the query.
- the CPU 306 may refer to the setting information to determine the role of the second user.
- the first user who is a patient may classify the second user who is a counselor as either a "trusted counselor” or an "untrusted counselor.”
- the CPU 306 may perform control so that the avatar used by the "trusted counselor” reflects the first user's tic disorder, and so that the avatar used by the "untrusted counselor” does not reflect the first user's tic disorder.
- In step S612, the CPU 306 controls the used avatar of the first user based on the physical information of the first user, the purpose of the communication, and the role of the second user.
- the processing of step S612 will be explained using an example in which the first user has a motor tic (a grimacing motor tic).
- in step S607, the CPU 201A controls the camera 210 to acquire a captured image of the face of the first user.
- the CPU 201A performs facial expression analysis on the captured image to determine whether or not the first user is grimacing. Furthermore, because the user has a tic disorder, the CPU 201A acquires information indicating that a motor tic has appeared as physical information of the first user. Then, in step S608, the CPU 201A transmits the physical information of the first user to the server PC 101 via the network I/F 205.
- in step S612, the CPU 306 refers to the reflection settings of the first user recorded in step S603.
- in this example, the first user has set "tics are reflected in the avatar seen by the user who is the counselor during counseling, but tics are not reflected in the avatar seen by the user who is the patient."
- information indicating that a motor tic has appeared has been acquired as physical information of the first user. Therefore, the CPU 306 controls the first user's avatar to grimace in accordance with the reflection settings when the purpose of communication is to provide counseling and the role of the second user is counselor.
- conversely, the CPU 306 controls the first user's avatar not to grimace when the purpose of communication is not to provide counseling or when the role of the second user is patient (a sketch of this per-viewer control follows below).
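- A sketch of this per-viewer control, reusing the hypothetical `select_setting` helper from the earlier sketch, might look as follows; the `Avatar` class and gain values are likewise illustrative assumptions:

```python
class Avatar:
    """Minimal stand-in for the avatar as shown to one particular viewer."""
    def apply(self, info_type, value, gain):
        print(f"reflect {info_type}={value!r} with gain {gain}")

def control_avatar(avatar, physical_info, purpose, viewer_role, settings):
    """Hypothetical sketch of step S612: apply the first user's reflection
    settings to the used avatar shown to one second user."""
    for info_type, value in physical_info.items():
        setting = select_setting(settings, purpose, viewer_role, info_type)
        degree = setting.degree if setting else "as is"
        if degree == "do not reflect":
            continue                          # e.g. hide tics from other patients
        gain = {"as is": 1.0, "emphasize": 1.5, "suppress": 0.5}[degree]
        avatar.apply(info_type, value, gain)  # e.g. make the avatar grimace
```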
- In step S613, the CPU 306 generates a 3D scene of the desired space including the avatar of the first user controlled in step S612.
- the CPU 306 generates the 3D scene in, for example, a data format (such as X3D) capable of describing three-dimensional computer graphics.
- In step S614, the CPU 306 renders the 3D scene of the desired space to generate video of the desired space as seen from the second user's avatar (the avatar's viewpoint).
- the CPU 306 generates the video in a data format such as MP4.
- In step S615, the CPU 306 transmits the video generated in step S614 to the user terminal 102B.
- In step S616, the CPU 201B receives the video and displays it on the display 202B.
- In the local rendering method shown in FIG. 6B, steps S614 and S615 of FIG. 6A are replaced with steps S631 and S632.
- the processing of steps S601 to S613 and S616 when using local rendering is the same as when using remote rendering. For this reason, only steps S631 and S632 will be described below.
- In step S631, the CPU 306 transmits the 3D scene generated in step S613 to the user terminal 102B.
- In step S632, the CPU 201B renders the received 3D scene of the desired space to generate frames of video of the desired space as seen from the second user's avatar.
- the process of generating the 3D scene of the virtual space in steps S612 to S613 is executed both when the second user is the main counselor and when the second user is the sub-counselor.
- the process can therefore be made more efficient by reusing the 3D scene of the virtual space generated in steps S612 to S613 for multiple other users with the same role.
- a first user sets the reflection degree of physical information in the avatar to be used by performing a reflection setting that associates the purpose of communication with the reflection degree of physical information.
- a user with a specific role may wish to determine the reflection degree of physical information of other users.
- for example, in group counseling in which a patient with a severe tic disorder and a patient with a mild tic disorder participate, the counselor may want to make the symptoms of the patient with the mild tic disorder more noticeable. Therefore, in the second embodiment, a communication system in which a certain user can set the degree to which physical information is reflected in another user's avatar will be described.
- the communication system increases the size (dimension) of the user's avatar if the user's motor tic reaction is strong (depending on the magnitude of the motor tic reaction).
- FIGS. 7A to 7C are diagrams showing an example of a setting UI according to embodiment 2.
- the setting UI 701 is displayed on the counselor's user terminal 102.
- the counselor uses the setting UI 701 to set how the patient's physical information (including the patient's symptoms) is to be displayed on his/her own user terminal 102 (performs reflection settings).
- the UI area 702 is an area that displays the type of physical information.
- motor tics and vocal tics are selected as the types of physical information.
- the UI area 703 is a UI area for setting the degree to which physical information is reflected in the avatar.
- a slide bar 704 is displayed in the UI area 703 as an example.
- the slide bar 704 is a UI area for setting whether to "suppress” or "emphasize” the user's tic reaction when reflecting the user's tic reaction in the avatar. Moving the pointer 706 on the slide bar 704 to the left “suppresses” the user's tic reaction. Moving the pointer 706 to the right “emphasizes” the user's tic reaction.
- Figure 7A shows an example where the degree to which Patient A's motor tic reactions are reflected in the avatar is set to neither "suppressed” nor “emphasized.”
- Figure 7B shows an example where the degree to which Patient A's motor tic reactions are reflected in the avatar is set to "suppressed.”
- Figure 7C shows an example where the degree to which Patient B's motor tic reactions are reflected in the avatar is set to "emphasized.”
- the CPU 306 may automatically set the degree of reflection according to the patient's disease information. For example, the milder the symptoms indicated by the patient's disease information, the greater the degree to which the CPU 306 reflects the physical information related to that disease information in the patient's avatar. This reduces the effort required for the user to set the degree of reflection.
- FIGS. 8A to 8C are diagrams showing the avatars of patient A and patient B displayed on the main counselor's user terminal 102.
- Avatar 801 is the avatar of patient A
- avatar 802 is the avatar of patient B.
- Figure 8A shows the appearance of the two avatars when neither Patient A nor Patient B is experiencing motor tics.
- Fig. 8B shows the appearance of the two avatars when motor tics occur simultaneously in Patient A and Patient B, when the degree to which reactions are reflected in the avatars of Patient A and Patient B is set to neither "suppress” nor "emphasize” as in Fig. 7A.
- Fig. 8B shows that Patient A's motor tic reaction is large, while Patient B's motor tic reaction is small.
- Fig. 8C shows the two avatars when the degree of reflection is set to "suppressed" for Patient A as in Fig. 7B, and to "emphasized" for Patient B as in Fig. 7C.
- in this case, the two avatars are displayed with the size of Patient A's avatar made smaller and the size of Patient B's avatar made larger. This makes it easier to observe the occurrence of motor tics in Patient A and Patient B (a sketch of this size control follows below).
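- The size control and the automatic degree setting described above might be sketched as follows; the formula, gain constant, and severity scale are illustrative assumptions:

```python
def displayed_avatar_size(base_size, tic_magnitude, emphasis):
    """Hypothetical sketch of FIGS. 8A-8C: the avatar grows with the magnitude
    of the motor-tic reaction, weighted by the viewer's slider setting.
    emphasis is in [-1.0, +1.0] (negative = suppress, positive = emphasize)."""
    k = 0.5  # illustrative gain
    return base_size * (1.0 + k * tic_magnitude * (1.0 + emphasis))

def auto_emphasis(severity):
    """Hypothetical automatic setting from disease information: the milder the
    symptoms (severity near 0.0), the stronger the emphasis."""
    return 1.0 - severity  # severity in [0.0 (mild), 1.0 (severe)]
```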
- In the flowchart of FIG. 9, steps S601 and S602 of the flowchart in FIG. 6A are replaced with steps S901 and S902; steps S603 onward are the same as in the flowchart in FIG. 6A. Therefore, only steps S901 and S902 will be described.
- In step S901, the CPU 201B of the user terminal 102B accepts, from the second user, a reflection setting indicating the degree to which the first user's physical information (including symptoms) is reflected in the first user's avatar. For example, when the second user inputs a setting using the setting UI 701, the CPU 201B accepts that setting as the reflection setting.
- In step S902, the CPU 201B sends the reflection setting information to the server PC 101.
- users who actually view the avatar can set the degree to which physical information is reflected in the avatar of other users. It is also possible for each user to set the degree to which the movements or sounds of other users are emphasized or suppressed when their movements or sounds are reflected in the avatar.
- a communication system has been described that is a client-server system in which a server PC 101 and multiple user terminals 102 are connected.
- the communication system can also be realized by a system that does not have a server PC 101. Therefore, in the third embodiment, a case will be described in which the communication system described in the first embodiment is constructed by a system that does not involve the server PC 101. It should be noted that the communication system described in the second embodiment can also be realized by a system that does not involve the server PC 101.
- FIG. 10 is a system configuration diagram of a communication system according to the third embodiment.
- the communication system has a plurality of user terminals 102 connected in a P2P (Peer to Peer) manner via a network such as the Internet.
- Each of the plurality of user terminals 102 shown in FIG. 10 has the same configuration as the user terminal 102 according to the first embodiment, and therefore a detailed description is omitted.
- the setting UI according to the third embodiment is the same as the setting UI 401 described in FIG. 4 of the first embodiment.
- FIG. 11 is a flowchart showing the processing of the communication system according to the third embodiment.
- In step S1101, the CPU 201A of the user terminal 102A of the first user accepts the reflection settings from the first user, similar to step S601.
- steps S1102 to S1106 are the process of the first user participating in the community of the desired space (virtual space). Although omitted in FIG. 11, these processes are executed between the user terminals 102 of all other users who will participate in the community of this desired space.
- In step S1102, similar to step S604, the CPU 201A accepts an instruction from the first user to join the community of the desired space, and obtains identification information for the desired space.
- In step S1103, the CPU 201A transmits the identification information of the desired space to the user terminal 102B (the user terminal 102 of the other user) in the same manner as in step S605. In this way, the CPU 201A notifies the user terminal 102B that the first user will be participating in the community of the desired space.
- In step S1104, the CPU 201B records that the first user has joined the community of the desired space.
- In step S1105, the CPU 201B transmits information about the second user to the user terminal 102A.
- the information about the second user includes information about the role of the second user.
- the role of the second user can be obtained in the same manner as in step S611.
- In step S1106, the CPU 201A receives the information about the second user.
- In step S1107, the CPU 201A obtains the purpose of the communication to be performed in the desired space, similar to step S610.
- Steps S1110 to S1117 are a loop process that is repeated until all users, including the first user, have left the virtual space.
- In step S1110, the CPU 201A acquires real-time (current) physical information of the first user, similar to step S607.
- steps S1112 to S1113 are repeated a number of times corresponding to the number of second users other than the first user who participate in the community of the desired space (the number of user terminals 102B communicating with the user terminal 102A).
- In step S1112, the CPU 201A controls the used avatar of the first user based on the physical information of the first user, the purpose of communication, and the information of the second user, similar to step S612.
- In step S1113, the CPU 201A generates a 3D model of the used avatar controlled in step S1112, and transmits the generated 3D model to the user terminal 102B.
- In step S1114, the CPU 201B receives the 3D model of the first user's avatar.
- In step S1115, the CPU 201B generates a 3D scene of the desired space including the first user's avatar.
- the CPU 201B generates the 3D scene in a data format capable of describing three-dimensional computer graphics, such as X3D.
- In step S1116, the CPU 201B renders the 3D scene of the desired space generated in step S1115 to generate frames of video of the desired space as seen from the viewpoint of the second user.
- In step S1117, the CPU 201B displays the generated video of the desired space on the display 202B (a sketch of this per-peer loop follows below).
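- The per-peer loop of steps S1112 and S1113 might be sketched as follows, reusing the hypothetical `control_avatar` helper from the earlier sketch; the peer object and the `to_3d_model` serialization helper are likewise assumptions:

```python
import copy

def p2p_update(peers, physical_info, purpose, settings, base_avatar):
    """Hypothetical sketch of steps S1112-S1113: with no server involved, the
    first user's terminal controls a role-specific copy of the used avatar for
    each peer and sends the resulting 3D model to that peer directly."""
    for peer in peers:  # one user terminal 102B per second user
        avatar = copy.deepcopy(base_avatar)
        control_avatar(avatar, physical_info, purpose, peer.role, settings)  # S1112
        peer.send(to_3d_model(avatar))  # S1113; the peer renders it locally (S1115-S1117)
```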
- the communication system changes (updates) information required for controlling an avatar according to the virtual space of the community in which the user has joined.
- the communication system has the configuration described with reference to Figs. 1 to 3, as in the first embodiment.
- the communication system estimates the user's emotion based on the acquired physical information, and reflects the estimated emotion in the facial expression of the user's avatar.
- "physical information" in the fourth embodiment is information other than emotion.
- FIGS. 12A and 12B are diagrams showing the avatars of participants taking part in a business negotiation held using the communication system.
- business partners A and B, and presenter user C, are participating in this business negotiation.
- the range of the virtual space displayed on the user terminals 102 of business partners A and B and user C is different for each user.
- FIGS. 12A and 12B show, for example, the virtual space displayed on the user terminal 102 of business partner B.
- avatar 1201 is the avatar of business partner A
- avatar 1202 is the avatar of user C.
- Figures 12A and 12B show an example in which the presenter, User C, is estimated to be feeling anxious and nervous through emotion estimation based on physical information.
- FIG. 12A is a diagram explaining the display of the user terminal 102 when the physical information change process according to embodiment 4 is not performed (when the acquired physical information is used as is to estimate emotions).
- in the virtual space displayed on the user terminal 102 of business partner B as shown in FIG. 12A, since user C is experiencing feelings of anxiety and tension in the real space, these feelings are also reflected in the facial expression of avatar 1202.
- FIG. 12B is a diagram explaining the display of the user terminal 102 when the physical information change process according to embodiment 4 is performed (when the acquired physical information is changed and then the changed physical information is used to estimate emotions).
- in this case, the facial expression of user C's avatar 1202 does not reflect (suppresses) user C's anxiety and tension.
- FIG. 13 is a flowchart showing the processing of the communication system according to the fourth embodiment.
- the processing of the flowchart in FIG. 13 is processing using a remote rendering method, similar to FIG. 6A.
- the communication system according to the fourth embodiment can also be realized using the local rendering method described in FIG. 6B.
- the processing of the flowchart in FIG. 13 is realized by the CPU 201 controlling each part of the user terminal 102 according to a program stored in the ROM 203.
- Steps S1301 to S1303 are the process in which a first user participates in a community in a virtual space. This process is executed between the user terminals 102 of all users participating in this virtual space and the server PC 101.
- In step S1301, the CPU 201A receives an instruction from the first user to participate in a virtual space. At this time, the CPU 201A obtains identification information for the virtual space in which the first user wishes to participate (the desired space).
- In step S1302, the CPU 201A requests the server PC 101 to join the virtual space community by sending the identification information of the desired space to the server PC 101.
- In step S1303, the CPU 306 of the server PC 101 allows the first user to participate in the community of the desired space that corresponds to the identification information.
- the subsequent steps S1305 to S1315 are loop processes that are executed repeatedly until all users, including the first user, have left the community in the virtual space.
- In step S1305, the CPU 306 transmits account information for all users participating in the desired space community to the user terminal 102A.
- In step S1306, the CPU 201A determines the purpose of the communication. For example, the CPU 201A determines the purpose of the communication based on information about the tool that the first user is using in the desired space. For example, if the first user is using a presenter tool, the CPU 201A can determine that the purpose of the communication is to hold a meeting (presentation).
- similarly, the CPU 201A determines the role (position) of the first user based on information about the tool the first user is using in the desired space. If the first user is using a presenter tool, the CPU 201A can determine that the first user is a presenter.
- CPU 201A determines the type of conference (relationship between multiple users participating in the conference) based on the accounts of users participating in the community of the desired space.
- CPU 201A determines, for example, whether the type of conference to be held is a conference with a business partner, an internal conference, or a conference with friends.
- for this determination, identification information of the user account registered in the user terminal 102 or the server PC 101 is used. Note that in step S1308 described below, the CPU 201A changes the physical information of the first user acquired in step S1307 described below according to the type of conference (the relationship between the multiple users participating in the conference).
- the CPU 201A determines the nationality of the participants based on the accounts of the users who are participating in the community of the desired space. For this determination, the identification information of the user account registered in the user terminal 102 or the server PC 101 (virtual space) is used. In step S1308 described below, the CPU 201A changes the physical information of the first user acquired in step S1307 according to the nationality of the participants.
- the CPU 201A determines the content of the event based on the information of the event to be held in the desired space (within the platform of the desired space). If the content of the event is a speech meeting or a business negotiation event, the CPU 201A changes the physical information of the first user acquired in step S1307 in step S1308 described below.
- In step S1307, the CPU 201A acquires physical information of the first user.
- the CPU 201A then determines the emotional information of the first user based on the acquired physical information (emotion determination). For example, the correspondence between the physical information and emotions is verified in advance, and a table showing the correspondence is stored in the ROM 203 or the server PC 101. The CPU 201A determines the emotional information of the first user by determining whether the acquired physical information matches a specific emotional pattern described in the table (a sketch of this table lookup follows below).
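- Such a table lookup might be sketched as follows; the patterns, thresholds, and emotion labels are illustrative assumptions rather than values from the patent:

```python
# Hypothetical correspondence table between physical information and emotions,
# verified in advance and stored in the ROM 203 or the server PC 101.
EMOTION_TABLE = [
    (lambda i: i.get("heart_rate", 0) > 100 and i.get("sweat", 0.0) > 0.7, "nervous"),
    (lambda i: i.get("stress_level", 0.0) > 0.8, "anxious"),
    (lambda i: i.get("heart_rate", 0) > 100, "excited"),
]

def determine_emotion(physical_info: dict) -> str:
    """Return the first emotion whose pattern matches the acquired information."""
    for pattern, emotion in EMOTION_TABLE:
        if pattern(physical_info):
            return emotion
    return "calm"
```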
- In step S1308, the CPU 201A changes the physical information of the first user acquired in step S1307 based on the purpose of the communication and the related information determined in step S1306.
- for example, suppose the CPU 201A determines that the purpose of communication is to hold a conference with the first user as the presenter. In this case, the CPU 201A changes the physical information of the first user so that facial expressions or gestures showing tension are suppressed in the emotion determination. For example, if the CPU 201A has acquired blood pressure or heart rate information as physical information, it reduces the blood pressure or heart rate by a predetermined value.
- similarly, suppose the CPU 201A determines that the purpose of communication is to hold a conference with a business partner. In this case, too, the CPU 201A changes the physical information of the first user so that facial expressions or gestures showing tension are suppressed in the emotion determination.
- on the other hand, suppose the CPU 201A determines that the purpose of communication is to hold a meeting with friends. In this case, the CPU 201A does not change the physical information of the first user.
- suppose the CPU 201A determines that the purpose of communication is to hold a conference in which users of multiple nationalities participate. In this case, the CPU 201A changes the physical information of the first user so that tense facial expressions or gestures are suppressed and smiles are emphasized in the emotion determination. For example, if the CPU 201A has acquired a stress level as the physical information, it lowers the stress level by a predetermined value so that smiles are emphasized in the emotion determination. Alternatively, if the CPU 201A has acquired movement information as the physical information, it emphasizes information on the movement of opening the mouth so that smiles are emphasized in the emotion determination.
- suppose the CPU 201A determines that the purpose of communication is to hold a lecture or business negotiations. In this case, the CPU 201A changes the physical information of the first user so that facial expressions or gestures showing tension are suppressed in the emotion determination.
- finally, suppose the CPU 201A determines that the purpose of communication is to play a game of a type that requires a poker face. In this case, the CPU 201A changes the physical information of the first user so that tense facial expressions or gestures are suppressed in the emotion determination (a sketch of these adjustments follows below).
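- A sketch of these purpose-dependent adjustments might look as follows; the offsets stand in for the "predetermined values" mentioned above, and all keys and labels are illustrative assumptions:

```python
# Illustrative stand-ins for the "predetermined values" of the embodiment.
PREDETERMINED_OFFSET = {"blood_pressure": 10, "heart_rate": 10}

def adjust_physical_info(info: dict, purpose: str, context: dict) -> dict:
    """Hypothetical sketch of step S1308: change the acquired physical
    information before emotion determination, based on the purpose of
    communication and the related information determined in step S1306."""
    out = dict(info)
    suppress_tension = (
        purpose == "conference" and (
            context.get("role") == "presenter"
            or context.get("type") == "business partner"
            or context.get("event") in ("lecture", "business negotiation"))
    ) or purpose == "poker-face game"
    if suppress_tension:  # a meeting with friends matches no rule: no change
        for key, offset in PREDETERMINED_OFFSET.items():
            if key in out:
                out[key] -= offset
    if purpose == "conference" and context.get("multinational"):
        if "stress_level" in out:  # lower the stress level so smiles are emphasized
            out["stress_level"] = max(0.0, out["stress_level"] - 0.2)
    return out
```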
- In step S1309, the CPU 201A determines the emotion of the first user based on the physical information of the first user changed in step S1308 (performs emotion determination again).
- the CPU 201A then transmits the information on the emotion of the first user to the server PC 101.
- In step S1310, the CPU 306 receives the information about the first user's emotion.
- Steps S1311 to S1314 are a loop process that is repeated as many times as the number of second users who participate in the desired space and view the video of the desired space.
- In step S1311, the CPU 306 controls the used avatar of the first user based on the information on the emotion of the first user received in step S1310. For example, the CPU 306 controls the facial expression or movements (gestures) of the used avatar to reflect the emotion of the first user.
- Steps S1312 to S1315 are similar to steps S613 to S616 in embodiment 1, so detailed explanations will be omitted.
- the facial expression of the avatar can be adjusted to match the purpose of communication.
- note that in step S1308, if the physical information acquired in step S1307 satisfies a specific condition, the CPU 201A may issue a warning (notification) to the first user. For example, the CPU 201A may issue a warning if it determines that a negative emotion value (an emotion value such as anxiety) indicated by the emotion corresponding to the physical information exceeds a threshold. Alternatively, the CPU 201A may issue a warning if it determines that a value indicated by the physical information acquired in step S1307 (e.g., blood pressure, heart rate, or body temperature) exceeds a threshold.
- the CPU 201A issues the warning to the first user before transmitting the information to the server PC 101 in step S1309, for example, to inquire whether the physical information may be transmitted or whether the physical information may be changed.
- the CPU 201A may display a display item indicating a warning on the display 202, or may output a sound indicating a warning from the speaker 211.
- in the above description, whether or not to issue a warning is determined based on the physical information before it is changed (controlled) in step S1308, but a warning may instead be issued based on the physical information after the change in step S1308 (a sketch of this threshold check follows below).
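- The warning condition might be sketched as follows; the threshold values and dictionary keys are illustrative assumptions:

```python
def should_warn(physical_info: dict, negative_emotion_value: float,
                emotion_threshold: float = 0.8, heart_rate_threshold: int = 120) -> bool:
    """Hypothetical sketch: warn the first user before transmission when a
    negative emotion value (e.g. anxiety) or a vital value exceeds a threshold."""
    return (negative_emotion_value > emotion_threshold
            or physical_info.get("heart_rate", 0) > heart_rate_threshold)
```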
- also, in step S1308, the user terminal 102A changes the user's physical information used for emotion determination based on the purpose of communication, etc., but the emotion information may instead be changed directly based on the purpose of communication and the related information.
- the user terminal 102A sends emotional information determined based on the user's physical information to the server PC 101.
- the server PC 101 may determine the user's emotional state based on the user's physical information sent by the user terminal 102A and control the facial expression of the avatar.
- emotional information is controlled based on the purpose of communication in the virtual space.
- the avatar 1202 of user C in the virtual space seen by business partner B does not express anxiety or tension, but instead appears calm.
- the communication system can display an avatar with a more appropriate facial expression for business negotiations.
- each functional unit in each of the above embodiments may or may not be separate hardware.
- the functions of two or more functional units may be realized by common hardware.
- Each of the multiple functions of one functional unit may be realized by separate hardware.
- Two or more functions of one functional unit may be realized by common hardware.
- each functional unit may or may not be realized by hardware such as an ASIC, FPGA, or DSP.
- the device may have a processor and a memory (storage medium) in which a control program is stored. Then, the functions of at least some of the functional units of the device may be realized by the processor reading and executing the control program from the memory.
- the present invention can also be realized by a process in which a program for implementing one or more of the functions of the above-described embodiments is supplied to a system or device via a network or a storage medium, and one or more processors in a computer of the system or device read and execute the program.
- the present invention can also be realized by a circuit (e.g., ASIC) that implements one or more of the functions.
- 101 Server PC
- 102 User terminal
- 201 CPU
- 306 CPU
Abstract
Description
A system for realizing communication between a first user and a second user, the system comprising:
acquisition means for acquiring real-time information of the first user; and
control means for controlling, based on a purpose of the communication, reflection of the real-time information of the first user in a first avatar of the first user on a display device that is possessed by the second user and that displays a virtual space including the first avatar.
A control method for a system that realizes communication between a first user and a second user, the method comprising:
an acquisition step of acquiring real-time information of the first user; and
a control step of controlling, based on the purpose of the communication, reflection of the real-time information of the first user in the first avatar on a display device that is possessed by the second user and that displays a virtual space including the first avatar.
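As a purely structural sketch, the claimed acquisition means and control means might map onto code as follows; the class and method names are hypothetical, and the reflection rule is a placeholder rather than the claimed logic:

```python
# Hypothetical structural sketch of the acquisition means and control means.
# Names and the degree rule below are illustrative assumptions.

class AcquisitionMeans:
    def acquire(self, first_user_id: str) -> dict:
        """Acquire real-time information (expression, vitals, etc.) of the first user."""
        return {"expression": "neutral", "heart_rate": 72}  # placeholder data

class ControlMeans:
    def reflect(self, realtime_info: dict, purpose: str) -> dict:
        """Decide how the real-time information is reflected in the first
        avatar shown on the second user's display device, based on the
        purpose of the communication."""
        degree = 0.5 if purpose == "business_negotiation" else 1.0  # assumed rule
        return {"info": realtime_info, "degree": degree}

# Usage: acquire the first user's information and compute its reflection.
avatar_update = ControlMeans().reflect(AcquisitionMeans().acquire("user_a"),
                                       "business_negotiation")
```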
Embodiment 1 describes a communication system constructed as a client-server system.
In Embodiment 1, an example was described in which the first user makes a reflection setting that associates the purpose of the communication with a degree of reflection of physical information, thereby setting the degree to which physical information is reflected in the avatar in use. Depending on the purpose of the communication, however, a user in a specific role (position) may wish to decide the degree to which other users' physical information is reflected. For example, in group counseling attended by a patient with severe tics and a patient with mild tics, the counselor may want to make the mild patient's symptoms easier to notice. Embodiment 2 therefore describes a communication system in which one user can set the degree to which physical information is reflected in another user's avatar, as sketched after this paragraph.
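A rough sketch of such role-restricted, per-user reflection settings follows; the role name, the permission rule, and the numeric degree scale (1.0 = reflect as-is, above 1.0 = emphasize, below 1.0 = suppress) are assumptions for illustration:

```python
# Hypothetical sketch of Embodiment 2's per-user reflection settings.
# The counselor-only rule and the degree scale are illustrative assumptions.

class ReflectionSettings:
    def __init__(self) -> None:
        # (purpose, target_user) -> degree of reflection
        self._degrees: dict[tuple[str, str], float] = {}

    def set_degree(self, setter_role: str, purpose: str,
                   target_user: str, degree: float) -> None:
        """Only a user in a specific role may set another user's degree."""
        if setter_role != "counselor":
            raise PermissionError("only the counselor may set others' degrees")
        self._degrees[(purpose, target_user)] = degree

    def degree_for(self, purpose: str, target_user: str) -> float:
        return self._degrees.get((purpose, target_user), 1.0)

# Usage: make a mild tic patient's symptoms easier to notice (emphasize),
# while leaving the severe patient's reflection unchanged.
settings = ReflectionSettings()
settings.set_degree("counselor", "group_counseling", "patient_mild", 1.5)
print(settings.degree_for("group_counseling", "patient_mild"))    # 1.5
print(settings.degree_for("group_counseling", "patient_severe"))  # 1.0
```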
Embodiments 1 and 2 described a communication system that is a client-server system connecting the server PC 101 and a plurality of user terminals 102. However, the communication system can also be realized by a system that does not include the server PC 101. Embodiment 3 therefore describes a case where the communication system of Embodiment 1 is constructed as a system that does not involve the server PC 101. The communication system of Embodiment 2 can likewise be realized by a system that does not involve the server PC 101.
In Embodiment 4, the communication system changes (updates) the information necessary for controlling the avatar in accordance with the virtual space of the community in which the user participates.
The present invention can also be realized by a process in which a program implementing one or more functions of the above embodiments is supplied to a system or device via a network or storage medium, and one or more processors in a computer of that system or device read and execute the program. It can also be realized by a circuit (e.g., an ASIC) that implements one or more of the functions.
201: CPU, 306: CPU
Claims (20)
- 1. A system for realizing communication between a first user and a second user, the system comprising: acquisition means for acquiring real-time information of the first user; and control means for controlling, based on a purpose of the communication, reflection of the real-time information of the first user in a first avatar of the first user on a display device that is possessed by the second user and that displays a virtual space including the first avatar.
- 2. The system according to claim 1, wherein the real-time information includes at least information relating to the body.
- 3. The system according to claim 2, wherein the real-time information includes at least one of voice, facial expression, blood pressure, heart rate, stress level, body temperature, perspiration, brain waves, pulse, posture, and movement.
- 4. The system according to any one of claims 1 to 3, wherein the control means emphasizes or suppresses the real-time information of the first user based on the purpose of the communication and reflects it in the first avatar.
- 5. The system according to any one of claims 1 to 4, wherein the control means generates an image of the virtual space including the first avatar and controls the display device to display the image of the virtual space.
- 6. The system according to any one of claims 1 to 5, wherein the control means controls the reflection of the real-time information of the first user in the first avatar based on the purpose of the communication and a reflection setting made by the first user or the second user, the reflection setting being a setting that associates the purpose of the communication with a degree of reflection of the real-time information of the first user in the first avatar.
- 7. The system according to any one of claims 1 to 6, wherein the control means controls the reflection of the real-time information of the first user in the first avatar based on disease information of the first user and the purpose of the communication.
- 8. The system according to claim 7, wherein the control means determines, based on the disease information of the first user and the real-time information of the first user, whether a symptom indicated by the disease information has occurred in the first user, and controls the reflection of the real-time information of the first user in the first avatar based on a result of that determination.
- 9. The system according to any one of claims 1 to 8, wherein the control means determines the purpose of the communication based on information constituting the virtual space.
- 10. The system according to any one of claims 1 to 9, wherein the control means determines the purpose of the communication based on account information or an avatar of at least one of a plurality of users participating in a community of the virtual space.
- 11. The system according to any one of claims 1 to 10, wherein the control means determines the purpose of the communication based on information on a tool used by the first user.
- 12. The system according to any one of claims 1 to 8, wherein the purpose of the communication is a purpose input by at least one of a plurality of users participating in a community of the virtual space.
- 13. The system according to any one of claims 1 to 12, wherein the control means controls the reflection of the real-time information of the first user in the first avatar based on the nationalities of a plurality of users participating in a community of the virtual space and the purpose of the communication.
- 14. The system according to any one of claims 1 to 13, wherein the control means controls the reflection of the real-time information of the first user in the first avatar based on relationships among a plurality of users participating in a community of the virtual space and the purpose of the communication.
- 15. The system according to any one of claims 1 to 14, wherein the control means controls the reflection of the real-time information of the first user in the first avatar based on the purpose of the communication and a role of the second user.
- 16. The system according to claim 15, wherein the control means acquires information on the role of the second user based on account information of the second user or an operation by the first user.
- 17. The system according to claim 15, wherein the control means acquires information on the role of the second user from an external system.
- 18. The system according to any one of claims 1 to 17, further comprising warning means for warning the first user, in a case where the real-time information of the first user satisfies a specific condition, before the control means reflects the real-time information of the first user in the first avatar.
- 19. A control method for a system that realizes communication between a first user and a second user, the method comprising: an acquisition step of acquiring real-time information of the first user; and a control step of controlling, based on a purpose of the communication, reflection of the real-time information of the first user in a first avatar of the first user on a display device that is possessed by the second user and that displays a virtual space including the first avatar.
- 20. A program for causing a computer to function as each means of the system according to any one of claims 1 to 18.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202380081818.8A CN120266087A (zh) | 2022-11-29 | 2023-09-07 | System, system control method |
| US19/219,918 US20250285354A1 (en) | 2022-11-29 | 2025-05-27 | System, and system control method for controlling display of avatar of user |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2022190101A JP2024077887A (ja) | 2022-11-29 | 2022-11-29 | System, system control method |
| JP2022-190101 | 2022-11-29 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US19/219,918 Continuation US20250285354A1 (en) | 2022-11-29 | 2025-05-27 | System, and system control method for controlling display of avatar of user |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024116529A1 (ja) | 2024-06-06 |
Family
ID=91323472
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/032738 Ceased WO2024116529A1 (ja) | 2022-11-29 | 2023-09-07 | システム、システムの制御方法 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20250285354A1 (ja) |
| JP (1) | JP2024077887A (ja) |
| CN (1) | CN120266087A (ja) |
| WO (1) | WO2024116529A1 (ja) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2025028637A (ja) * | 2023-08-18 | 2025-03-03 | Cyberdyne株式会社 | Augmented space construction system and augmented space construction method |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2009110276A1 (ja) * | 2008-03-05 | 2009-09-11 | 日本電気株式会社 | User information presentation system, user information presentation device, user information presentation method, and user information presentation program |
| JP2014225801A (ja) * | 2013-05-16 | 2014-12-04 | 株式会社ニコン | Conference system, conference method, and program |
| WO2021075288A1 (ja) * | 2019-10-15 | 2021-04-22 | ソニー株式会社 | Information processing device, information processing method |
| CN113840158A (zh) * | 2021-10-11 | 2021-12-24 | 深圳追一科技有限公司 | Virtual image generation method, device, server, and storage medium |
Application timeline:
- 2022-11-29: JP application JP2022190101A filed (published as JP2024077887A, active, pending)
- 2023-09-07: PCT application PCT/JP2023/032738 filed (published as WO2024116529A1, ceased)
- 2023-09-07: CN application CN202380081818.8A filed (published as CN120266087A, pending)
- 2025-05-27: US application US19/219,918 filed (published as US20250285354A1, pending)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2009110276A1 (ja) * | 2008-03-05 | 2009-09-11 | 日本電気株式会社 | User information presentation system, user information presentation device, user information presentation method, and user information presentation program |
| JP2014225801A (ja) * | 2013-05-16 | 2014-12-04 | 株式会社ニコン | Conference system, conference method, and program |
| WO2021075288A1 (ja) * | 2019-10-15 | 2021-04-22 | ソニー株式会社 | Information processing device, information processing method |
| CN113840158A (zh) * | 2021-10-11 | 2021-12-24 | 深圳追一科技有限公司 | Virtual image generation method, device, server, and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120266087A (zh) | 2025-07-04 |
| JP2024077887A (ja) | 2024-06-10 |
| US20250285354A1 (en) | 2025-09-11 |
Similar Documents
| Publication | Title |
|---|---|
| CN114222960B (zh) | Multimodal input for computer-generated reality |
| JP7697112B2 (ja) | Communication terminal device |
| KR102574874B1 (ko) | Improved method and system for video conferencing using a head mounted display (HMD) |
| JP5208810B2 (ja) | Information processing device, information processing method, information processing program, and network conference system |
| KR20230098089A (ko) | Avatar display device, avatar generation device, and program |
| WO2020204000A1 (ja) | Communication support system, communication support method, communication support program, and image control program |
| JP6882797B2 (ja) | Conference system |
| JP2014099854A (ja) | Social network service providing device and method |
| CN111583355A (zh) | Facial image generation method and device, electronic device, and readable storage medium |
| CN114207557B (zh) | Position synchronization of virtual and physical cameras |
| JP6969577B2 (ja) | Information processing device, information processing method, and program |
| US20230336689A1 (en) | Method and Device for Invoking Public or Private Interactions during a Multiuser Communication Session |
| CN113282163A (zh) | Head-mounted device with adjustable image sensing module and system thereof |
| JP2023067708A (ja) | Terminal, information processing method, program, and recording medium |
| WO2018158852A1 (ja) | Call system and communication system |
| JP2012175136A (ja) | Camera system and control method thereof |
| US20250285354A1 (en) | System, and system control method for controlling display of avatar of user |
| JP6901190B1 (ja) | Remote dialogue system, remote dialogue method, and remote dialogue program |
| US20240119619A1 (en) | Deep aperture |
| US20260011079A1 (en) | Information processing device for displaying avatar of user in virtual space, information processing method, and non-transitory computer readable medium |
| CN116700489A (zh) | Virtual reality system and method |
| TW202318865A (zh) | Avatar display in spatial configurations and at positions identified according to focus of attention |
| WO2023032172A1 (ja) | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
| WO2023032173A1 (ja) | Virtual space providing device, virtual space providing method, and computer-readable storage medium |
| CN121153246A (zh) | Hybrid sensor fusion for avatar generation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23897188; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: CN2023800818188; Country of ref document: CN; Ref document number: 202380081818.8; Country of ref document: CN |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWP | Wipo information: published in national office | Ref document number: 202380081818.8; Country of ref document: CN |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 23897188; Country of ref document: EP; Kind code of ref document: A1 |