
WO2025053089A1 - Virtual space management device - Google Patents

Virtual space management device

Info

Publication number
WO2025053089A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
city
virtual space
avatar
visited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/031363
Other languages
French (fr)
Japanese (ja)
Inventor
元一 吉澤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NTT Docomo Inc
Original Assignee
NTT Docomo Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by NTT Docomo Inc filed Critical NTT Docomo Inc
Publication of WO2025053089A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a virtual space management device.
  • Patent Document 1 discloses an information processing device that identifies detected user behavior, determines customization information corresponding to the identified behavior, determines how to customize the virtual space, and processes the spatial information that is the basis of the virtual space according to the determined customization information.
  • the present invention has been made to solve the above problems, and aims to generate a virtual space that reflects the user's unique worldview, providing the user with an impressive or attractive visiting experience.
  • a virtual space management device according to one aspect of the present invention provides a user with a virtual space in which multiple individual spaces that correspond one-to-one to multiple cities are integrated, and includes a determination unit that determines a visual effect for a city visited by the user from among the multiple cities in the virtual space based on attribute information indicating the attributes of the user, a generation unit that generates an individual space for the visited city by commonly applying the visual effect determined by the determination unit to multiple buildings belonging to the visited city, and a provision unit that provides the individual space generated by the generation unit to a terminal device used by the user.
  • the virtual space management device generates a virtual space that reflects the user's unique worldview, providing the user with an impressive or attractive visiting experience.
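As a concrete illustration of the three claimed units, the following Python sketch wires a determination unit, a generation unit, and a provision unit together. All names, types, and the attribute-to-effect rule are hypothetical; the patent does not prescribe an implementation.

```python
# Hypothetical sketch of the claimed pipeline: a determination unit picks a
# per-user visual effect, a generation unit applies it uniformly to the
# visited city's buildings, and a provision unit sends the result to the
# user's terminal device. All names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class VisualEffect:
    color: str
    pattern: str

@dataclass
class IndividualSpace:
    city_id: str
    buildings: dict[str, VisualEffect] = field(default_factory=dict)

def determine_effect(attributes: dict) -> VisualEffect:
    """Determination unit: map user attributes to one visual effect."""
    if "tennis" in attributes.get("hobbies", []):
        return VisualEffect(color="bright", pattern="stripes")
    return VisualEffect(color="calm", pattern="plain")

def generate_space(city_id: str, building_ids: list[str],
                   effect: VisualEffect) -> IndividualSpace:
    """Generation unit: apply the SAME effect to every building."""
    return IndividualSpace(city_id, {b: effect for b in building_ids})

def provide(space: IndividualSpace, terminal_id: str) -> None:
    """Provision unit: deliver the individual space to the terminal."""
    print(f"sending {space.city_id} space to terminal {terminal_id}")

effect = determine_effect({"hobbies": ["tennis", "travel"]})
provide(generate_space("J", ["101", "102", "103"], effect), "10-K")
```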
  • FIG. 1 is a diagram showing an overall configuration of an information processing system including a virtual space management device according to a first embodiment.
  • FIG. 2 is a schematic diagram showing an example of a network using city OSs.
  • FIG. 3 is a block diagram showing a configuration example of the terminal device in FIG. 1.
  • FIG. 4 is a block diagram showing a configuration example of an individual server in FIG. 1.
  • FIG. 5 is a block diagram showing a configuration example of the management server of FIG. 1.
  • FIG. 6 is a schematic diagram showing an example of a neural network model applied to a learning model according to the first embodiment.
  • FIG. 7 is a diagram showing an example of a landscape in a virtual space to which no visual effects are applied.
  • FIG. 8 is a diagram showing an example of a landscape in a virtual space to which a visual effect dedicated to a first user is applied.
  • FIG. 9 is a diagram showing an example of a landscape in a virtual space to which a visual effect dedicated to a second user is applied.
  • FIG. 10 is a diagram showing an example of avatars moving about in the virtual space of FIG. 7.
  • FIG. 11 is a diagram showing an example of avatars moving about in the virtual space of FIG. 8.
  • FIG. 12 is a diagram showing an example of avatars moving about in the virtual space of FIG. 9.
  • FIG. 13 is a diagram showing an example of a reception counter of a government office in an individual space.
  • FIG. 14 is a flowchart showing a first operation of the processing device of FIG. 5.
  • FIG. 15 is a flowchart showing a second operation of the processing device of FIG. 5.
  • FIG. 16 is a flowchart showing a third operation of the processing device of FIG. 5.
  • FIG. 17 is a diagram showing an example of a landscape in a virtual space to which an item included in the visual effect information is applied.
  • FIG. 1 is a diagram showing the overall configuration of an information processing system 1 including a virtual space management device according to a first embodiment.
  • the information processing system 1 includes a terminal device 10, an individual server 20, a management server 30, and a communication network NET.
  • the terminal devices 10 include terminal devices 10-1, 10-2, ..., 10-K, 10-L, ..., 10-N.
  • N is any natural number
  • K and L are any natural numbers smaller than N.
  • the configurations of the terminal devices 10-1 to 10-N are identical to each other.
  • the terminal devices 10 may include terminal devices that do not have the same configuration.
  • the first user U K is a user who uses the terminal device 10-K.
  • the second user U L is a user who uses the terminal device 10-L. Note that in FIG. 1, users who use terminal devices other than the terminal device 10-K and the terminal device 10-L are omitted from the illustration.
  • the individual servers 20 include individual servers 20-1, 20-2, ..., 20-J, and 20-M.
  • M is any natural number
  • J is any natural number smaller than M.
  • the individual servers 20-1 to 20-M have the same configuration.
  • the individual servers 20 may include individual servers that do not have the same configuration.
  • the individual servers 20 are servers that primarily provide administrative services for each city. Here, cities are divided into municipal units such as cities, towns, and villages, or regional units that include several cities, towns, and villages.
  • the individual servers 20 operate the City OS as a platform for providing administrative services.
  • the City OS is an open administrative management system that can be linked to existing administrative information systems by standardizing the platform and API (Application Programming Interface).
  • the City OS runs application software for administration, logistics, transportation, etc., for example.
  • FIG. 2 is a schematic diagram showing an example of a network using each city OS.
  • the city OS is operated on a municipality or regional basis.
  • Each city OS 80-1 to 80-4 is configured to be able to cooperate with each other.
  • City OS 80-1 in City A is connected to services #1, #2, #3, and #7 via a standard API.
  • City OS 80-1 in City A is also connected to data #a, #b, #c, and #f via a standard API.
  • the city OS 80-2 in Town B is connected to services #4, #5, and #6 via a standard API. Also, the city OS 80-2 in Town B is connected to data #d, #e, and #f via a standard API.
  • City OS 80-4 in City D and service #7 are connected via a standard API. Therefore, city OS 80-1 in City A and city OS 80-4 in City D share service #7. Also, city OS 80-1 in City A and city OS 80-2 in Town B share data #f.
  • City OS 80-1 in City A, city OS 80-2 in Town B, city OS 80-3 in Village C, and city OS 80-4 in City D are directly or indirectly connected, and each city OS is interoperable.
  • Each city OS can cooperate with each other to distribute various data within and outside of each city.
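The service and data sharing described above can be pictured with a small model in which each city OS exposes sets of services and data behind a standardized API, and two city OSs share whatever both can reach. The class and method names below are illustrative only.

```python
# Illustrative model of the city OS linkage: each city OS exposes services
# and data behind a standard API, and two city OSs "share" a service or
# datum when both are connected to it. Names are hypothetical.
class CityOS:
    def __init__(self, name: str, services: set[str], data: set[str]):
        self.name, self.services, self.data = name, services, data

    def shared_with(self, other: "CityOS") -> tuple[set[str], set[str]]:
        """Services and data reachable from both city OSs."""
        return self.services & other.services, self.data & other.data

a = CityOS("City A", {"#1", "#2", "#3", "#7"}, {"#a", "#b", "#c", "#f"})
b = CityOS("Town B", {"#4", "#5", "#6"}, {"#d", "#e", "#f"})
d = CityOS("City D", {"#7"}, set())

print(a.shared_with(d))  # ({'#7'}, set())  -> service #7 is shared
print(a.shared_with(b))  # (set(), {'#f'})  -> data #f is shared
```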
  • the terminal devices 10-1 to 10-N, the individual servers 20-1 to 20-M, and the management server 30 are connected to each other so that they can communicate with each other via a communication network NET.
  • the information processing system 1 is a system that provides a virtual space management service to each user who uses the terminal devices 10-1 to 10-N.
  • the management server 30 is a server that manages the virtual space.
  • the management server 30 acquires various information from the terminal device 10 via the communication network NET, and provides a virtual space management service to the terminal device 10.
  • the virtual space is a space that integrates multiple individual spaces that correspond one-to-one to multiple cities. In other words, the virtual space is a space that integrates multiple individual spaces that correspond one-to-one to the individual servers 20-1 to 20-M.
  • Configuration of the terminal device: FIG. 3 is a block diagram showing an example of the configuration of the terminal device 10-K in FIG. 1.
  • the terminal device 10-K includes a processing device 11, a storage device 12, a communication device 13, a display 14, an input device 15, an imaging device 16, a sound recording device 17, and a positioning device 18.
  • the elements included in the terminal device 10 are connected to each other by a single or multiple buses for communicating information.
  • the processing device 11 is a processor that controls the entire terminal device 10, and is configured, for example, using a single or multiple chips.
  • the processing device 11 is configured, for example, using a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, registers, etc.
  • Some or all of the functions of the processing device 11 may be realized by hardware such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array).
  • the processing device 11 executes various processes in parallel or sequentially.
  • the storage device 12 is a recording medium that can be read and written by the processing device 11.
  • the storage device 12 includes, for example, non-volatile memory and volatile memory.
  • the non-volatile memory is, for example, ROM (Read Only Memory), EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically Erasable Programmable Read Only Memory).
  • the volatile memory is, for example, RAM (Random Access Memory).
  • the storage device 12 stores a number of programs including the control program PR1 to be executed by the processing device 11.
  • the storage device 12 also functions as a work area for the processing device 11.
  • the communication device 13 is hardware that functions as a transmitting/receiving device for communicating with other devices.
  • the communication device 13 is also called, for example, a network device, a network controller, a network card, a communication module, etc.
  • the communication device 13 may have a connector for wired connection and an interface circuit corresponding to the connector.
  • the communication device 13 may also have a wireless communication interface. Examples of the connector and interface circuit for wired connection include products that comply with wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products that comply with wireless LAN and Bluetooth (registered trademark), etc.
  • the display 14 is a device that displays images and text information.
  • the display 14 displays various images based on the control of the processing device 11.
  • various display panels such as a liquid crystal panel and an organic EL (Electro Luminescence) panel are suitable for use as the display 14.
  • the input device 15 receives an operation from the first user U K.
  • the input device 15 includes a keyboard, a touch pad, a touch panel, and a pointing device such as a mouse.
  • the input device 15 may also function as the display 14.
  • the imaging device 16 outputs an image Gx obtained by capturing an image of the outside world.
  • the imaging device 16 includes, for example, a lens, an imaging element, an amplifier, and an AD converter. Light collected through the lens is converted by the imaging element into an analog imaging signal.
  • the amplifier amplifies the imaging signal and outputs it to the AD converter.
  • the AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal.
  • the converted imaging information is output to the processing device 11 as an image Gx.
  • the recording device 17 outputs audio information Ox obtained by recording surrounding sounds.
  • the recording device 17 includes, for example, a microphone, an amplifier, and an AD converter.
  • the surrounding sounds are converted into an analog audio signal by the microphone.
  • the amplifier amplifies the sound signal and outputs it to the AD converter.
  • the AD converter converts the amplified audio signal, which is an analog signal, into audio information Ox, which is a digital signal.
  • the converted audio information Ox is output to the processing device 11.
  • the positioning device 18 acquires position information of the terminal device 10.
  • the positioning device 18 may be, for example, a GNSS (Global Navigation Satellite System) receiver.
  • the GNSS receiver receives radio signals transmitted from one or more GNSS satellites.
  • GNSS is a positioning system that uses positioning satellites from countries around the world, including GPS (Global Positioning System) satellites.
  • the radio signals include information such as the position information of the satellite that transmitted the radio signals and the transmission time of the radio signals.
  • the GNSS receiver performs positioning based on the one or more received radio signals, and outputs the position information of the GNSS receiver to the processing device 11.
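For intuition about this positioning step, the following simplified sketch shows how a receiver position can be recovered from the satellite positions and transmission times carried in the radio signals: each travel time multiplied by the speed of light gives a range, and a least-squares (Gauss-Newton) fit over several ranges yields the position. A real GNSS solution also estimates the receiver clock bias as a fourth unknown; that term is omitted here for brevity.

```python
# Simplified sketch: travel time times the speed of light gives a range to
# each satellite, and a Gauss-Newton least-squares fit over several ranges
# recovers the receiver position. Receiver clock bias is omitted.
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def estimate_position(sat_pos, t_tx, t_rx, guess, iters=10):
    """Fit the receiver position to ranges measured from signal travel times."""
    sat_pos = np.asarray(sat_pos, dtype=float)
    ranges = C * (np.asarray(t_rx) - np.asarray(t_tx))  # measured ranges [m]
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diff = x - sat_pos                   # vectors from each satellite to x
        dist = np.linalg.norm(diff, axis=1)  # predicted ranges at the estimate
        J = diff / dist[:, None]             # Jacobian of the range model
        x -= np.linalg.lstsq(J, dist - ranges, rcond=None)[0]  # Gauss-Newton step
    return x

# Four satellites at known positions; receive times derived from a true position.
sats = [[2e7, 0, 0], [0, 2e7, 0], [0, 0, 2e7], [1.2e7, 1.2e7, 1.2e7]]
true = np.array([1e6, 2e6, 3e6])
t_rx = [np.linalg.norm(true - np.array(s)) / C for s in sats]
print(estimate_position(sats, t_tx=np.zeros(4), t_rx=t_rx, guess=[0, 0, 0]))
```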
  • the positioning device 18 may be, for example, a VPS (Visual Positioning Service) device.
  • the VPS device acquires image information indicating an image of a scene in front of the first user U K from an imaging device provided in the XR glasses worn by the first user U K.
  • the VPS device outputs the image information acquired from the imaging device to a position information server (not shown) via the communication device 13.
  • the VPS device acquires VPS information as position information from the position information server via the communication device 13.
  • the position information includes the position of the first user U K in real space and the direction in which the first user U K views the real space.
  • the terminal device 10-K displays virtual objects arranged in a virtual space on the display 14 or XR glasses worn on the head of the first user U K.
  • XR glasses is a general term for VR (Virtual Reality) glasses, AR (Augmented Reality) glasses, and MR (Mixed Reality) glasses.
  • examples of the virtual object include virtual objects representing data such as still images, videos, three-dimensional CG models, HTML files, and text files, and virtual objects representing applications.
  • examples of text files include memos and source code.
  • applications include a browser, an application for using an SNS, and an application for generating document files.
  • the terminal device 10-K is, for example, a personal computer, a tablet terminal, a smartphone, or a smartwatch.
  • the terminal device 10-K is preferably a mobile terminal device such as a tablet or a smartphone.
  • the XR glasses may be connected to the communication network NET without going through the terminal device 10.
  • the processing device 11 functions as an acquisition unit 111, an output unit 112, and a display control unit 113, for example, by reading and executing the control program PR1 from the storage device 12.
  • the acquisition unit 111 acquires various information transmitted from the individual servers 20-1 to 20-M and the management server 30, the captured image Gx output from the imaging device 16, the audio information Ox output from the recording device 17, and the location information output from the positioning device 18.
  • the output unit 112 outputs the captured image Gx, the audio information Ox, and the position information to the management server 30 via the communication device 13.
  • the output unit 112 outputs information input by the first user U K using the input device 15 to the management server 30 via the communication device 13.
  • the information input by the first user U K includes, for example, text information input on a chat tool.
  • the display control unit 113 causes the display 14 to display various pieces of information based on the various pieces of information acquired by the acquisition unit 111. For example, the display control unit 113 causes the display 14 to display an image showing a virtual object.
  • the display control unit 113 causes the display of the XR glasses to display an image showing a virtual object in accordance with the posture of the first user U K, i.e., the posture of the XR glasses.
  • FIG. 4 is a block diagram showing a configuration example of the individual server 20-J in Fig. 1.
  • the individual server 20-J includes a processing device 21, a storage device 22, a communication device 23, a display 24, and an input device 25.
  • the elements included in the individual server 20 are connected to each other by a single or multiple buses for communicating information.
  • the processing device 21 is a processor that controls the entire individual server 20, and is configured, for example, using one or more chips.
  • the processing device 21 is configured, for example, using a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, and a register. Some or all of the functions of the processing device 21 may be realized by hardware such as a DSP, ASIC, PLD, or FPGA.
  • the processing device 21 executes various processes in parallel or sequentially.
  • the storage device 22 is a recording medium that can be read and written by the processing device 21.
  • the storage device 22 includes, for example, a non-volatile memory and a volatile memory.
  • the non-volatile memory is, for example, a ROM, an EPROM, and an EEPROM.
  • the volatile memory is, for example, a RAM.
  • the storage device 22 stores a number of programs including a control program PR2 to be executed by the processing device 21, as well as a city OS COS, a three-dimensional CG model CGM, and visual effect information VEI.
  • the three-dimensional CG model CGM is a model of a number of buildings placed in the public space of the city.
  • the visual effect information VEI is information about the visual effects on the virtual space that are determined for each user who visits the city in the virtual space.
  • the storage device 22 also functions as a work area for the processing device 21.
  • the communication device 23 is hardware that serves as a transmitting/receiving device for communicating with other devices.
  • the communication device 23 is also called, for example, a network device, a network controller, a network card, a communication module, etc.
  • the communication device 23 may have a connector for wired connection and an interface circuit corresponding to the connector.
  • the communication device 23 may also have a wireless communication interface. Examples of the connector and interface circuit for wired connection include products that comply with wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products that comply with wireless LAN and Bluetooth (registered trademark), etc.
  • the display 24 is a device that displays images and text information.
  • the display 24 displays various images based on the control of the processing device 21.
  • various display panels such as a liquid crystal panel and an organic EL panel are suitable for use as the display 24.
  • the input device 25 is a device that accepts operations by the administrator of the information processing system 1.
  • the input device 25 includes a keyboard, a touchpad, a touch panel, and a pointing device such as a mouse.
  • the input device 25 may also function as the display 24.
  • the administrator of the information processing system 1 can modify the control program PR2 by using the input device 25.
  • the processing device 21 functions as an acquisition unit 211, an output unit 212, and a display control unit 213, for example, by reading and executing the control program PR2 from the storage device 22.
  • the acquisition unit 211 acquires various information sent from other individual servers and the management server 30.
  • the output unit 212 outputs the visual effect information VEI stored in the storage device 22 to the management server 30 via the communication device 23.
  • the display control unit 213 causes the display 24 to display various pieces of information based on the various pieces of information acquired by the acquisition unit 211.
  • the display control unit 213 can output information on the three-dimensional CG models of buildings in a city to the display 24, causing the display 24 to display the buildings in the virtual space.
  • FIG. 5 is a block diagram showing an example of the configuration of the management server 30 in Fig. 1.
  • the management server 30 includes a processing device 31 as a virtual space management device, a storage device 32, a communication device 33, a display 34, and an input device 35.
  • the elements included in the management server 30 are connected to each other by a single or multiple buses for communicating information.
  • the management server 30 is an example of a virtual space management device.
  • the processing device 31 is a processor that controls the entire management server 30, and is configured, for example, using one or more chips.
  • the processing device 31 is configured, for example, using a central processing unit (CPU) that includes an interface with peripheral devices, an arithmetic unit, and a register. Some or all of the functions of the processing device 31 may be realized by hardware such as a DSP, ASIC, PLD, or FPGA.
  • the processing device 31 executes various processes in parallel or sequentially.
  • the storage device 32 is a recording medium that can be read and written by the processing device 31.
  • the storage device 32 includes, for example, a non-volatile memory and a volatile memory.
  • the non-volatile memory is, for example, a ROM, an EPROM, and an EEPROM.
  • the volatile memory is, for example, a RAM.
  • the storage device 32 stores a plurality of programs including the control program PR3 to be executed by the processing device 31, the teacher data TD1, the learning model LM1, the large-scale language model LLM, and the ID database IDB.
  • the storage device 32 also functions as a work area for the processing device 31.
  • the communication device 33 is hardware that functions as a transmitting/receiving device for communicating with other devices.
  • the communication device 33 is also called, for example, a network device, a network controller, a network card, a communication module, etc.
  • the communication device 33 may have a connector for wired connection and an interface circuit corresponding to the connector.
  • the communication device 33 may also have a wireless communication interface. Examples of the connector and interface circuit for wired connection include products that comply with wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products that comply with wireless LAN and Bluetooth (registered trademark), etc.
  • the display 34 is a device that displays images and text information.
  • the display 34 displays various images based on the control of the processing device 31.
  • various display panels such as a liquid crystal panel and an organic EL panel are suitable for use as the display 34.
  • the input device 35 is a device that accepts operations by the administrator of the information processing system 1.
  • the input device 35 includes a keyboard, a touchpad, a touch panel, and a pointing device such as a mouse.
  • the input device 35 may also function as the display 34.
  • the administrator of the information processing system 1 can modify the control program PR3 by using the input device 35.
  • the processing device 31 functions as a judgment unit 311, an acquisition unit 312, a determination unit 313, a generation unit 314, a provision unit 315, an avatar control unit 316, and a learning unit 317, for example, by reading and executing the control program PR3 from the storage device 32.
  • the processing device 31 provides a virtual space VS that integrates a plurality of individual spaces to a first user U K.
  • the plurality of individual spaces correspond one-to-one to a plurality of cities.
  • a city OS is operated by an individual server 20 as a platform for providing administrative services.
  • the judgment unit 311 determines whether or not the first user U K visits city J in the virtual space for the first time.
  • the processing device 31 determines a visual effect dedicated to the first user U K for buildings in the virtual space, as described below.
  • City J is an example of a visited city.
  • a visited city is a city visited by the first user U K from among the plurality of cities in the virtual space VS.
  • the processing device 31 uses, as the visual effect dedicated to the first user U K for the buildings in the virtual space, the visual effect determined when the first user U K first visited city J.
  • the visual effect determined when the first user U K first visits city J is stored as visual effect information VEI in the storage device 22 of the individual server 20-J corresponding to city J.
  • the acquisition unit 312 acquires the behavior history of the first user U K from the terminal device 10-K and the individual servers 20-1 to 20-M.
  • the behavior history includes at least one of the following: the speech of the first user U K in the virtual space including the individual space of each city, the posts of the first user U K to a social networking service (SNS), and the spatial movement history of the first user U K in the virtual space.
  • the determination unit 313 extracts attribute information indicating attributes of the first user U K from the behavior history of the first user U K.
  • the attribute information is, for example, information regarding the age, sex, occupation, hobbies, preferences, etc. of the first user U K.
  • the behavior history of the first user U K may include the speech of the first user U K in the real space, the posts of the first user U K to SNS in the real space, the movement history of the first user U K in the real space, and the like.
  • the determination unit 313 determines a visual effect for each city in the virtual space visited by the first user U K, based on attribute information indicating the attributes of the first user U K.
  • the determination unit 313 analyzes the emotion of the first user U K from the attribute information of the first user U K by using, for example, a known value understanding technology.
  • the value understanding technology is a technology that analyzes information related to the spatial movement history of the first user U K seamlessly acquired in the real space and the virtual space, and understands the emotion of the first user U K (see Non-Patent Document 1).
  • the analysis of information based on the speech content of the first user U K and the analysis of information related to the spatial movement history can be realized by a well-known large-scale language model LLM.
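A hedged sketch of this analysis step is shown below. `call_llm` is a stand-in for whichever large-scale language model is actually used; its interface, the prompt wording, and the returned JSON keys are all assumptions made for illustration.

```python
# Hedged sketch of attribute extraction with a large-scale language model.
# `call_llm` stands in for the actual LLM; its interface, the prompt, and
# the JSON keys are assumptions, not details from the patent.
import json

def extract_attributes(utterances, sns_posts, movements, call_llm):
    """Prompt an LLM to infer hobbies, preferences, and emotion from history."""
    prompt = (
        "From the following user behavior history, infer the user's "
        "attributes and dominant emotion. Answer as JSON with keys "
        "hobbies, preferences, emotion.\n"
        f"Utterances: {utterances}\nSNS posts: {sns_posts}\n"
        f"Movement history: {movements}"
    )
    return json.loads(call_llm(prompt))

# Canned stand-in for the model, for demonstration only:
fake_llm = lambda p: '{"hobbies": ["surfing"], "preferences": ["ocean"], "emotion": "upbeat"}'
print(extract_attributes(["went surfing"], ["love Kamakura"], ["beach"], fake_llm))
```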
  • visual effects refer to at least one of the colors, tones, textures, patterns, etc., that are applied to the surface of a building constructed using a 3D CG model.
  • the generation unit 314 generates the individual space of the visited city by commonly applying the visual effect determined by the determination unit 313 to a plurality of buildings belonging to the city visited by the first user U K. In other words, the generation unit 314 applies the same visual effect to all of the buildings belonging to the visited city (see the sketch below).
  • examples of the buildings include office buildings, houses, commercial facilities, public facilities, hospitals, factories, warehouses, etc.
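The phrase "commonly applying" can be made concrete with the sketch below, which overwrites the surface attributes (color, pattern) of every mesh of every building with one shared effect while leaving geometry untouched. The data structures are invented; the patent does not define a CG format.

```python
# Per the definition above, a visual effect is a set of surface attributes
# (color, tone, texture, pattern). This sketch applies one effect uniformly
# across every mesh of every building in a city; structures are invented.
from dataclasses import dataclass

@dataclass
class Mesh:
    vertices: list           # geometry is carried through unchanged
    color: str = "as-built"  # surface attributes replaced by the effect
    pattern: str = "none"

def apply_effect_commonly(buildings: dict[str, list[Mesh]],
                          color: str, pattern: str) -> None:
    """Overwrite surface attributes of all meshes with one shared effect."""
    for meshes in buildings.values():
        for mesh in meshes:
            mesh.color, mesh.pattern = color, pattern

city_j = {"101": [Mesh([0, 1, 2])], "102": [Mesh([3, 4, 5])]}
apply_effect_commonly(city_j, color="bright", pattern="yacht")
print(city_j["101"][0])  # Mesh(vertices=[0, 1, 2], color='bright', pattern='yacht')
```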
  • the providing unit 315 provides the individual space generated by the generation unit 314 to the terminal device 10-K used by the first user U K.
  • the providing unit 315 provides information about the individual space to the terminal device 10-K based on information about the position of the first user U K in the virtual space and information about the direction in which the first user U K is facing.
  • the avatar control unit 316 controls the movement of the avatar of the first user U K in the virtual space. More specifically, the avatar control unit 316 acquires position information of the avatar of the first user U K and operation information of the input device 15 by the first user U K from the terminal device 10-K. The avatar control unit 316 controls the movement of the avatar of the first user U K in the virtual space displayed on the display 14 or the display of the XR glasses in accordance with the acquired position information of the avatar and operation information of the input device 15.
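A minimal sketch of that control loop follows: the server receives the avatar position and the input-device operation from the terminal and returns the updated avatar state for display. The movement table and speed parameter are illustrative assumptions.

```python
# Minimal sketch of the avatar control loop: one input-device operation
# received from the terminal updates the avatar position in the virtual
# space. The movement table and speed are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AvatarState:
    x: float
    y: float

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def control_avatar(state: AvatarState, operation: str,
                   speed: float = 1.0) -> AvatarState:
    """Apply one operation from the input device 15 to the avatar position."""
    dx, dy = MOVES.get(operation, (0, 0))
    return AvatarState(state.x + dx * speed, state.y + dy * speed)

state = AvatarState(0.0, 0.0)
for op in ["up", "up", "right"]:  # operation info received from the terminal
    state = control_avatar(state, op)
print(state)  # AvatarState(x=1.0, y=2.0)
```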
  • the learning unit 317 has a teacher data acquisition unit 317a and a model generation unit 317b.
  • the teacher data acquisition unit 317a prepares multiple pieces of teacher data TD1 and stores the multiple pieces of teacher data TD1 in the storage device 32.
  • the multiple pieces of teacher data TD1 are configured by associating input data with output data.
  • the teacher data acquisition unit 317a prepares a learning model LM1 before learning.
  • the teacher data acquisition unit 317a randomly acquires a set of teacher data from the multiple pieces of teacher data TD1, for example, from the storage device 32.
  • the model generation unit 317b generates a trained learning model LM1 by having the learning model LM1 learn multiple pieces of teacher data TD1 by machine learning. More specifically, the model generation unit 317b performs machine learning using multiple pieces of teacher data TD1 stored in the storage device 32. That is, the model generation unit 317b inputs multiple pieces of teacher data TD1 to the learning model LM1, and has the learning model LM1 learn the correlation between the input data and output data that constitute the multiple pieces of teacher data TD1 by machine learning, thereby generating a trained learning model LM1.
  • the multiple teacher data TD1 include multiple sets of data sets.
  • one data set is composed of input data including an "image of the city" and output data including "clothing worn by a third-party avatar."
  • the output data is, for example, called a correct answer label. For each keyword included in the input data, colors and patterns associated with that keyword are associated as output data.
  • the neural network model 90 includes an input layer 91, an intermediate layer 92, and an output layer 93.
  • the input layer 91 has neurons whose number corresponds to the number of words or character strings as input data, and each word or character string is input to each neuron.
  • the intermediate layer 92 is composed of, for example, a convolutional neural network.
  • the intermediate layer 92 converts the features extracted from the "city image" input via the input layer 91 using an activation function, and outputs the feature vector as a one-dimensional array.
  • the output layer 93 outputs output data including "clothes worn by the third-party avatar" based on the feature vectors output from the intermediate layer 92.
  • the model generation unit 317b inputs multiple pieces of teacher data TD1 to the neural network model 90, and causes the neural network model 90 to perform machine learning of the correlation between the input data "image of the city" and the output data "clothing worn by a third-party avatar." More specifically, the model generation unit 317b first selects a set of data from the multiple pieces of teacher data TD1, and inputs the "image of the city" constituting the set of data into the input layer 91 of the neural network model 90 as input data.
  • the model generation unit 317b uses an evaluation function that compares the output data output from the output layer 93 as an inference result, i.e., "clothes worn by a third-party avatar," with the output data that constitutes the set of data, i.e., the correct label of "clothes worn by a third-party avatar,” and adjusts the weights associated with each synapse so that the value of the evaluation function becomes smaller.
  • adjusting the weights associated with each synapse is called backpropagation.
  • the model generation unit 317b sequentially inputs the "city image" constituting each of the multiple sets of data sets in the multiple sets of teacher data TD1 as input data to the input layer 91 of the neural network model 90.
  • the model generation unit 317b compares each piece of input data with the output data corresponding to each piece of input data, and iteratively adjusts the weights associated with each synapse so as to reduce the value of the evaluation function.
  • when the model generation unit 317b determines that a predetermined learning end condition is satisfied, it ends the machine learning and stores the neural network model 90 at that point in time in the storage device 32 as a trained learning model LM1.
  • the predetermined learning end condition is, for example, that the number of iterations of the series of learning processes described above reaches a predetermined number, or that the value of the evaluation function becomes smaller than an allowable value.
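The training procedure described above can be sketched in PyTorch as follows. The convolutional intermediate layer, flattening to a one-dimensional feature vector, backpropagation, and the two end conditions mirror the description; the image size, the five clothing classes, and the random stand-in data are assumptions, not details from the patent.

```python
# Illustrative PyTorch sketch of the described training: a small conv net
# maps a city image to an avatar-clothing class, weights are adjusted by
# backpropagation, and training ends at an iteration cap or when the
# evaluation function (loss) drops below a tolerance. Shapes are invented.
import torch
import torch.nn as nn

model = nn.Sequential(                        # cf. neural network model 90
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), # intermediate (conv) layer
    nn.Flatten(),                             # 1-D feature vector
    nn.Linear(8 * 32 * 32, 5),                # 5 hypothetical clothing classes
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()               # evaluation function

MAX_ITERS, TOL = 1000, 0.05                   # predetermined end conditions
for step in range(MAX_ITERS):
    # teacher data TD1: randomly drawn (city image, clothing label) pairs;
    # random tensors stand in for a real dataset here.
    images = torch.randn(16, 3, 32, 32)
    labels = torch.randint(0, 5, (16,))
    loss = loss_fn(model(images), labels)
    opt.zero_grad()
    loss.backward()                           # backpropagation
    opt.step()                                # adjust synapse weights
    if loss.item() < TOL:                     # stop once loss is small enough
        break

torch.save(model.state_dict(), "lm1.pt")      # store trained learning model LM1
```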
  • FIG. 7 is a diagram showing an example of a landscape in the virtual space VS to which no visual effects are applied.
  • FIG. 7 shows a landscape of J city corresponding to the individual server 20-J.
  • buildings 101 to 105 are shown by three-dimensional CG models CGM.
  • the surface colors, patterns, etc. of the buildings 101 to 105 in the virtual space VS are different for each building based on the colors, patterns, etc. in the real space.
  • the three-dimensional CG models CGM in the virtual space VS are created based on photographic data of the real space, such as satellite photographs and aerial photographs.
  • the appearances of the buildings 101 to 105 in the virtual space VS, which are virtual objects in a public space, often lack uniformity. If the appearances of the buildings 101 to 105 are not uniform, the cityscape will not be distinctive. Therefore, no matter which city a visitor visits, the cityscape will look similar, and the visitor may not have an impressive experience.
  • FIG. 7 is an auxiliary diagram for explaining the first embodiment, and the space shown in FIG. 7 is not provided to any terminal device.
  • Fig. 8 is a diagram showing an example of a landscape in the virtual space VS to which a visual effect dedicated to the first user U K is applied.
  • Fig. 8 shows a landscape in the same field of view as Fig. 7.
  • the buildings 101 K to 105 K correspond to the buildings 101 to 105 in Fig. 7, respectively.
  • a common color and pattern are applied to the buildings 101 K to 105 K as a visual effect, so that the colors and patterns of the buildings 101 K to 105 K are unified.
  • when the first user U K visits city J in the virtual space VS, the first user U K sees a landscape with colors and patterns as shown in FIG. 8 as an individual space DS-J K on the display of the terminal device 10-K or the XR glasses. Therefore, the first user U K forms an impression of city J due to the unified landscape of city J.
  • Information relating to the visual effects dedicated to the first user U K in City J in the virtual space VS is stored in the storage device 22 of the individual server 20-J corresponding to City J in the virtual space VS.
  • Fig. 9 is a diagram showing an example of a landscape in the virtual space VS to which a visual effect dedicated to a second user U L is applied.
  • the second user U L is a user who uses the terminal device 10-L, and is a different user from the first user U K.
  • Fig. 9 shows a landscape in the same field of view as Fig. 7 and Fig. 8.
  • the buildings 101 L to 105 L correspond to the buildings 101 to 105 in Fig. 7, respectively.
  • the buildings 101 L to 105 L are given a common color and pattern as a visual effect, so that the color and pattern of the surfaces of the buildings 101 L to 105 L are unified.
  • the surface colors and patterns of the buildings 101 L to 105 L are unified, but are different from the surface colors and patterns of the buildings 101 K to 105 K in FIG. 8.
  • when the second user U L visits city J in the virtual space VS, the second user U L sees a landscape with colors and patterns as shown in FIG. 9 as an individual space DS-J L on the display of the terminal device 10-L or on the XR glasses. Therefore, the second user U L forms an impression of city J due to the unified scenery of city J.
  • Information relating to the visual effects dedicated to the second user U L in City J in the virtual space VS is stored in the storage device 22 of the individual server 20-J corresponding to City J in the virtual space VS.
  • if the hobbies of the first user U K are extracted as tennis and travel, the determination unit 313 determines, from the keywords "tennis" and "travel", bright colors as the colors of the buildings 101 K to 105 K. If the hobby of the second user U L is extracted as reading, the determination unit 313 determines, from the keyword "reading", calm colors as the colors of the buildings 101 L to 105 L.
  • Another index of attribute information is a user's impression of each city. For example, when the name of a city among a plurality of cities appears as a keyword in the history of past utterances or posts by the first user U K , the determination unit 313 extracts at least one term related to the impression of the city that was picked up within a predetermined period including the time when the name of the city appeared.
  • when the determination unit 313 extracts the keyword "Kamakura" as a city name, it extracts terms related to the impression of "Kamakura" within a specified period that includes the time when the keyword appeared.
  • the determination unit 313 determines the image of "Kamakura” by inputting the extracted terms as prompts into a well-known large-scale language model.
  • large-scale language models include ChatGPT (https://openai.com/chatgpt) and StableLM (https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models).
  • when the determination unit 313 extracts terms such as "ocean" and "surfing" from the utterances and posts of the first user U K within a predetermined period of time, the image that "Kamakura is a surfing town" is obtained.
  • the image of "Kamakura” is, for example, a bright image bathed in plenty of sunlight.
  • when the determination unit 313 extracts terms such as "temples", "famous places", "historical sites", and "historical figures" from the utterances and posts of the second user U L within a predetermined period of time, the image that "Kamakura is a historic town" is obtained.
  • the image of "Kamakura” is, for example, an image of calm colors.
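The windowed term extraction behind these examples might look like the following sketch: find each dated utterance or post mentioning the city name, gather terms from entries within a period around those mentions, and pass them to an LLM as a prompt. The window length and the sample posts are invented for illustration.

```python
# Sketch of the impression-extraction step: locate mentions of the city name
# in dated utterances/posts, collect terms from entries within a window
# around those mentions, then hand the terms to an LLM as a prompt.
# The 7-day window and the sample data are illustrative assumptions.
from datetime import datetime, timedelta

def impression_terms(posts: list[tuple[datetime, str]], city: str,
                     window: timedelta = timedelta(days=7)) -> list[str]:
    """Terms co-occurring with the city name within the given period."""
    hits = [t for t, text in posts if city in text]   # when the name appeared
    terms = []
    for t, text in posts:
        if any(abs(t - h) <= window for h in hits) and city not in text:
            terms.extend(text.split())
    return terms

posts = [
    (datetime(2024, 7, 1), "Kamakura next weekend!"),
    (datetime(2024, 7, 3), "ocean surfing sunshine"),
    (datetime(2024, 9, 1), "reading at home"),
]
terms = impression_terms(posts, "Kamakura")
print(terms)  # ['ocean', 'surfing', 'sunshine']
# prompt = f"Describe the image of Kamakura given: {terms}"  -> LLM
```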
  • FIG. 10 is a diagram showing an example of avatars moving around in the virtual space VS of FIG. 7.
  • FIG. 10 is a diagram showing an example of a landscape in the virtual space VS to which no visual effects have been applied, and an example of an avatar to which no visual effects have been applied.
  • the avatar of the first user U K is the avatar 201, and the avatars of other users, i.e., third parties, are the avatars 202 to 204.
  • the appearance of each of the avatars 201 to 204, including their clothes, is set in advance by each user. Therefore, there is often no uniformity in the appearance of each avatar.
  • Fig. 10 is an auxiliary diagram for explaining the first embodiment, and the space shown in Fig. 10 is not provided to any terminal device.
  • Fig. 11 is a diagram showing an example of avatars moving about in the virtual space VS of Fig. 8.
  • Fig. 11 is a diagram showing an example of scenery and avatars in the virtual space VS to which a visual effect dedicated to the first user U K is applied.
  • the visual effect dedicated to the first user U K is not applied to the appearance of the avatar 201 of the first user U K , but is applied commonly to the appearances of the avatars 202 K to 204 K other than the avatar 201 of the first user U K. That is, the generation unit 314 applies the same visual effect to the avatars 202 K to 204 K other than the avatar 201 of the first user U K.
  • the determination unit 313 determines the appearance of the third-party avatars 202 K to 204 K traveling through the city in the virtual space VS visited by the first user U K based on the attribute information of the first user U K. For example, if the determination unit 313 obtains the image that "Kamakura is a surfing town" from the speech, posts, etc. of the first user U K, the image of "Kamakura" will be, for example, a bright image bathed in plenty of sunlight. In this case, the determination unit 313 determines that the clothing to be worn by the third-party avatars 202 K to 204 K will be T-shirts with a yacht pattern reminiscent of the sea.
  • the correlation between the "image of the city” and the “clothing worn by a third-party avatar” is machine-learned in advance by the learning unit 317 using a large amount of data.
  • the providing unit 315 provides the individual space DS-J K generated by the generation unit 314 to the terminal device 10-K used by the first user U K. Therefore, the first user U K recognizes the appearance of the avatar 201 as the appearance previously set by the first user U K himself, and recognizes the third-party avatars 202 K to 204 K as wearing T-shirts with a yacht pattern. Therefore, the first user U K forms an impression of city J based on the uniform appearance of the third-party avatars 202 K to 204 K.
  • Data relating to the appearances of the avatars 202 K to 204 K when the visual effects of FIG. 11 are applied is stored in the storage device 22 of the individual server 20-J corresponding to city J.
  • Fig. 12 is a diagram showing an example of a state in which avatars are coming and going in the virtual space VS of Fig. 9.
  • FIG. 12 is a diagram showing an example of a landscape and avatars in the virtual space VS to which a visual effect dedicated to the second user U L is applied.
  • the visual effect dedicated to the second user U L is not applied to the appearance of the avatar 202 of the second user U L, but is commonly applied to the appearances of the avatars 201 L, 203 L, and 204 L other than the avatar 202 of the second user U L. That is, the generation unit 314 applies the same visual effect to the avatars 201 L, 203 L, and 204 L other than the avatar 202 of the second user U L.
  • the determination unit 313 determines the appearance of the third-party avatars 201 L, 203 L, and 204 L traveling through the city in the virtual space VS visited by the second user U L based on the attribute information of the second user U L. For example, if the determination unit 313 obtains the image that "Kanda is a town of reading" from the speech, posts, etc. of the second user U L, the image of "Kanda" becomes a calm and quiet image. In this case, the determination unit 313 determines that the clothes to be worn by the third-party avatars 201 L, 203 L, and 204 L will be plain monotone shirts.
  • the correlation between the "image of the city” and the “clothing worn by a third-party avatar” is machine-learned in advance by the learning unit 317 using a large amount of data.
  • the generation unit 314 generates an individual space DS-J L of the city visited by the second user U L by applying the appearance of the third-party avatar determined by the determination unit 313 to the third-party avatars belonging to the city visited by the second user U L.
  • the providing unit 315 provides the individual space DS-J L generated by the generation unit 314 to the terminal device 10-L used by the second user U L. Therefore, the second user U L recognizes the appearance of the avatar 202 as an appearance preset by the second user U L himself, and recognizes the third-party avatars 201 L, 203 L, and 204 L as wearing plain monotone shirts. Therefore, the second user U L forms an impression of city J based on the uniform appearance of the third-party avatars 201 L, 203 L, and 204 L.
  • Data relating to the appearances of the avatars 201 L, 203 L, and 204 L when the visual effects of FIG. 12 are applied is stored in the storage device 22 of the individual server 20-J corresponding to city J.
  • FIG. 13 is a diagram showing an example of a reception counter 61 of a government office 60 in an individual space DS-J K.
  • a first user U K can receive administrative services of city J corresponding to the individual space DS-J K by visiting the government office 60 in the individual space DS-J K.
  • the first user U K refers to a list of procedures 62 at a reception counter 61 of a government office 60, and selects a target procedure from the list 62. If the target procedure is to obtain a birth certificate, the first user U K selects "Birth Certificate" from the list 62.
  • the user ID of the first user U K for accessing the virtual space VS and the administrative personal number of the first user U K are linked to each other.
  • the relationship between the user ID of the first user U K and the administrative personal number of the first user U K is stored as an ID database IDB in the storage device 32 of the management server 30.
  • the avatar control unit 316, through the operation of the first user U K, causes the avatar 201 of the first user U K to carry out a procedure in the government office 60 in the virtual space VS using the user ID for accessing the virtual space VS. Through this procedure, the avatar control unit 316 provides the first user U K with services equivalent to the administrative services of city J in the real space.
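A minimal sketch of the ID database linkage is given below: the virtual-space user ID resolves to the administrative personal number, so a procedure selected at the virtual counter can be filed against the real administrative service. The table contents and function names are fictitious.

```python
# Minimal sketch of the ID database IDB: the user ID used to access the
# virtual space maps to the administrative personal number, so a procedure
# selected at the virtual counter can be forwarded to the real service.
# All identifiers and table contents are fictitious.
ID_DATABASE = {"user-UK": "1234-5678-9012"}   # user ID -> personal number

def request_procedure(user_id: str, procedure: str) -> str:
    """Resolve the personal number and file the selected procedure."""
    personal_number = ID_DATABASE[user_id]    # linkage stored in IDB
    # In a real system this would call city J's administrative service.
    return f"filed '{procedure}' for personal number {personal_number}"

print(request_procedure("user-UK", "Birth Certificate"))
```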
  • FIG. 14 is a flowchart showing the first operation of the processing device 31 of Fig. 5.
  • the routine of Fig. 14 is started, for example, when the processing device 31 is started, and is executed every time a certain period of time has elapsed.
  • in step S11, the processing device 31 determines whether or not the first user U K has visited city J in the virtual space VS for the first time by functioning as the judgment unit 311. Whether or not the first user U K has visited city J in the virtual space VS for the first time is determined, for example, by whether or not a visual effect corresponding to the first user U K is stored in the storage device 22 of the individual server 20-J corresponding to city J.
  • if the storage device 22 of the individual server 20-J does not store a visual effect corresponding to the first user U K, it is determined that the first user U K is visiting city J for the first time. On the other hand, if the storage device 22 of the individual server 20-J stores a visual effect corresponding to the first user U K, it is determined that the first user U K has already visited city J.
  • if it is determined in step S11 that the first user U K has visited city J for the first time, i.e., if the determination result in step S11 is positive, the processing device 31 functions as the acquisition unit 312 and acquires the behavior history of the first user U K from the individual servers 20-1 to 20-M and the terminal device 10-K in step S12.
  • in step S13, the processing device 31 functions as the determination unit 313 to extract attribute information of the first user U K from the acquired behavior history and the like.
  • in step S14, the processing device 31 functions as the determination unit 313 to determine the colors and patterns, i.e., the visual effects, of the multiple buildings belonging to city J based on the attribute information of the first user U K.
  • in step S15, the processing device 31 functions as the determination unit 313 to determine the appearance, i.e., the visual effect, of the third-party avatars in city J based on the attribute information of the first user U K.
  • in step S16, the processing device 31 functions as the determination unit 313 to store the determined visual effects corresponding to the first user U K for city J in the virtual space VS in the storage device 22 of the individual server 20-J corresponding to city J.
  • in step S17, the processing device 31 functions as the generation unit 314 to generate an individual space DS-J K by applying the determined visual effects to the multiple buildings belonging to city J and the third-party avatars.
  • in step S18, the processing device 31 functions as the providing unit 315 to provide the generated individual space DS-J K to the terminal device 10-K of the first user U K, and then ends this routine.
  • if it is determined in step S11 that this is not the first time that the first user U K has visited city J, i.e., if the determination result in step S11 is negative, the processing device 31 functions as the generation unit 314 and, in step S17, generates an individual space DS-J K by applying the visual effects stored in the storage device 22 of the individual server 20-J to the multiple buildings belonging to city J and the third-party avatars.
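Steps S11 to S18 can be summarized in one routine, sketched below with placeholder callables for the units described above; note how a repeat visit skips S12 to S16 and reuses the stored effect, preserving the earlier worldview.

```python
# Sketch of the first operation (steps S11-S18). The helper callables are
# placeholders for the judgment/acquisition/determination/generation/
# provision units; storage is modeled as a plain dict per individual server.
def first_operation(user_id, city, server_store, get_history, extract_attrs,
                    decide_effects, generate_space, provide):
    key = (city, user_id)
    if key not in server_store:                    # S11: first visit to city?
        history = get_history(user_id)             # S12: behavior history
        attrs = extract_attrs(history)             # S13: attribute information
        server_store[key] = decide_effects(attrs)  # S14-S16: decide and store
    space = generate_space(city, server_store[key])  # S17: apply the effects
    provide(space, user_id)                        # S18: send to the terminal

store = {}
units = dict(get_history=lambda u: ["went surfing"],
             extract_attrs=lambda h: {"hobbies": ["surfing"]},
             decide_effects=lambda a: {"color": "bright", "avatar": "yacht T-shirt"},
             generate_space=lambda c, e: f"space({c}, {e})",
             provide=lambda s, u: print("->", s))
first_operation("U_K", "J", store, **units)  # first visit: S12-S16 run
first_operation("U_K", "J", store, **units)  # revisit: stored effect reused
```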
  • Fig. 15 is a flowchart showing the second operation of the processing device 31 of Fig. 5.
  • the second operation of the processing device 31 will be described below with reference to Fig. 15.
  • the routine of Fig. 15 is started, for example, when the processing device 31 is started, and is executed every time a certain period of time has elapsed.
  • in step S21, the processing device 31 functions as the avatar control unit 316 to obtain avatar operation information and avatar position information.
  • in step S22, the processing device 31 functions as the avatar control unit 316 to determine whether the avatar 201 has arrived at the reception counter 61 of the government office 60.
  • if, in step S22, the avatar control unit 316 determines that the avatar 201 has not arrived at the reception counter 61 of the government office 60, i.e., if the determination result in step S22 is negative, the processing device 31 temporarily ends this routine.
  • if the avatar control unit 316 determines in step S22 that the avatar 201 has arrived at the reception counter 61 of the government office 60, i.e., if the determination result in step S22 is positive, the processing device 31 functions as the avatar control unit 316 to display the list of procedures 62 in step S23.
  • in step S24, the processing device 31 functions as the avatar control unit 316 to obtain avatar operation information.
  • in step S25, the processing device 31 functions as the avatar control unit 316 to determine whether any of the procedures in the displayed list of procedures 62 has been selected by the avatar 201.
  • if, in step S25, the avatar control unit 316 determines that none of the procedures in the displayed list of procedures 62 has been selected by the avatar 201, i.e., if the determination result in step S25 is negative, the processing device 31 functions as the avatar control unit 316 and reacquires the avatar operation information in step S24.
  • if the avatar control unit 316 determines in step S25 that one of the procedures in the displayed list of procedures 62 has been selected by the avatar 201, i.e., if the determination result in step S25 is positive, the processing device 31 functions as the avatar control unit 316, proceeds with the processing in step S26 according to the selected procedure, and then temporarily ends this routine.
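The second operation reduces to a small per-tick state check, sketched below; the counter position and the procedure list contents are illustrative assumptions.

```python
# Sketch of the second operation (steps S21-S26) as a per-tick state check:
# when the avatar reaches the reception counter, show the procedure list;
# once a procedure is selected, proceed with it. Positions and list
# contents are illustrative.
COUNTER_POS = (10, 5)
PROCEDURES = ["Birth Certificate", "Residence Certificate", "Seal Registration"]

def second_operation(avatar_pos, selection=None):
    if avatar_pos != COUNTER_POS:             # S22: not at the counter yet
        return "idle"
    if selection is None:                     # S23-S25: wait for a selection
        return f"show list: {PROCEDURES}"
    return f"proceed with '{selection}'"      # S26: run the selected procedure

print(second_operation((0, 0)))                          # idle
print(second_operation(COUNTER_POS))                     # show list: [...]
print(second_operation(COUNTER_POS, "Birth Certificate"))
```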
  • Third operation of the processing device 31: FIG. 16 is a flowchart showing the third operation of the processing device 31 of FIG. 5. Hereinafter, the third operation of the processing device 31 will be described with reference to FIG. 16.
  • the third operation is an operation related to a machine learning method by the learning unit 317.
  • the routine of FIG. 16 is started, for example, when the processing device 31 is started, and is executed every time a certain period of time has elapsed.
  • in step S31, the processing device 31 functions as the teacher data acquisition unit 317a to prepare multiple pieces of teacher data TD1 as a preliminary step for starting machine learning, and stores the prepared teacher data TD1 in the storage device 32.
  • the number of pieces of teacher data prepared here may be set taking into consideration the inference accuracy required for the learning model LM1 that is ultimately obtained.
  • in step S32, the processing device 31 functions as the teacher data acquisition unit 317a to prepare a pre-learning learning model LM1 in order to start machine learning.
  • the pre-learning learning model LM1 prepared here employs the neural network model 90 shown in FIG. 6, and the weights of each synapse are set to an initial value.
  • Each neuron in the input layer 91 is associated with an "image of a city" as input data constituting the multiple teacher data TD1.
  • Each neuron in the output layer 93 is associated with "clothing worn by a third-party avatar" as output data constituting the multiple teacher data TD1.
  • in step S33, the processing device 31 functions as the teacher data acquisition unit 317a to acquire, for example, one set of data randomly from the multiple pieces of teacher data TD1 stored in the storage device 32.
  • in step S34, the processing device 31 functions as the model generation unit 317b to input the input data of the acquired set into the input layer 91 of the learning model LM1 and obtain output data from the output layer 93 as an inference result.
  • in step S35, the processing device 31 functions as the model generation unit 317b to compare the output data included in the set of data acquired in step S33, i.e., the correct answer label, with the output data output from the output layer 93 as the inference result in step S34, and adjusts the weight of each synapse, thereby performing machine learning.
  • the model generation unit 317b causes the learning model LM1 to learn the correlation between the input data and the output data.
  • if, in step S36, the model generation unit 317b determines that the learning end condition is not satisfied and machine learning is to be continued, i.e., if the determination result in step S36 is negative, the processing device 31 functions as the model generation unit 317b and repeats the processes from step S33 to step S35 on the learning model LM1 being trained using unlearned data sets.
  • if the model generation unit 317b determines in step S36 that the learning end condition is satisfied, i.e., if the determination result in step S36 is positive, the processing device 31 functions as the model generation unit 317b and, in step S37, stores the machine-learned learning model LM1, i.e., the group of weight parameters adjusted for each synapse, in the storage device 32, and temporarily ends this routine.
  • in the above description, online learning is used as the method for adjusting the weights, but batch learning, mini-batch learning, etc. may also be used. Furthermore, whether or not the predetermined learning end condition has been met may be determined based on the misjudgment rate.
  • As described above, the processing device 31 as a virtual space management device includes a determination unit 313, a generation unit 314, and a provision unit 315.
  • The determination unit 313 determines a visual effect for the visited city visited by the first user U_K among the plurality of cities in the virtual space VS, based on attribute information indicating the attributes of the first user U_K.
  • The generation unit 314 generates an individual space of the visited city by commonly applying the visual effect determined by the determination unit 313 to a plurality of buildings belonging to the visited city.
  • The provision unit 315 provides the individual space generated by the generation unit 314 to the terminal device 10-K used by the first user U_K.
  • The determination unit 313 determines the visual effect when the first user U_K visits the visited city in the virtual space VS for the first time, and maintains that visual effect when the first user U_K visits the visited city again.
  • That is, the color and design of the virtual objects in the virtual space VS that were set at the time of the first visit are maintained for each city. Therefore, even if the first user U_K revisits a visited city, the previous worldview is maintained.
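  • Put differently, the visual effect behaves like a value that is computed once per (user, city) pair and cached thereafter. The following short Python sketch, with purely illustrative names, shows this first-visit rule; the dictionary stands in for the visual effect information VEI held in the storage device 22.

```python
# vei_store stands in for the visual effect information VEI.
vei_store: dict[tuple[str, str], dict] = {}

def visual_effect_for(user_id: str, city_id: str, decide) -> dict:
    key = (user_id, city_id)
    if key not in vei_store:                  # first visit: decide and persist
        vei_store[key] = decide(user_id, city_id)
    return vei_store[key]                     # revisit: previous worldview kept

effect = visual_effect_for(
    "U_K", "city_J",
    lambda u, c: {"color": "ocean blue", "pattern": "waves"})
```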
  • The determination unit 313 also determines, based on the attribute information, the appearance of the third-party avatars traveling to and from the visited city in the virtual space VS visited by the first user U_K.
  • The generation unit 314 applies the appearance of the third-party avatars determined by the determination unit 313 to the third-party avatars 202, 203, and 204 belonging to the visited city visited by the first user U_K, thereby generating an individual space DS-J_K of the visited city visited by the first user U_K.
  • The provision unit 315 provides the individual space DS-J_K generated by the generation unit 314 to the terminal device 10-K used by the first user U_K.
  • As a result, the first user U_K sees the appearance of the third-party avatars as changed based on the attribute information of the first user U_K, regardless of how the third parties have actually dressed their avatars, so that the worldview of the first user U_K with respect to the visited city is expressed more impressively.
  • The determination unit 313 determines the appearance of the third-party avatars when the first user U_K visits the visited city in the virtual space VS for the first time, and maintains that appearance when the first user U_K visits the visited city again.
  • In other words, the appearance of the third-party avatars set at the time of the first visit is maintained. Therefore, even when the first user U_K revisits a visited city, the previous worldview is maintained.
  • The determination unit 313 extracts at least one term related to the impression of a city from the utterances picked up within a specified period including the time at which the name of that city appeared.
  • As a result, the impression that the first user U_K has of a certain city can be extracted with higher accuracy.
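  • As one way to read this, the extraction can be pictured as a time-windowed search around each mention of the city name. The sketch below is an illustrative Python rendering under that assumption; the timestamps, window length, and impression-term list are all invented for the example.

```python
IMPRESSION_TERMS = {"lively", "quiet", "historic", "surfing"}

def impressions_near_mentions(utterances, city, window=60.0):
    """utterances: list of (timestamp_in_seconds, text) pairs."""
    mention_times = [t for t, text in utterances if city in text]
    found = set()
    for t, text in utterances:
        if any(abs(t - m) <= window for m in mention_times):
            found |= {w for w in text.lower().split() if w in IMPRESSION_TERMS}
    return found

talk = [(0.0, "We visited city J yesterday"),
        (30.0, "such a lively surfing town"),
        (500.0, "unrelated chatter")]
print(impressions_near_mentions(talk, "city J"))   # -> {'lively', 'surfing'}
```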
  • The processing device 31 also includes an avatar control unit 316.
  • The avatar control unit 316 provides the first user U_K with a service equivalent to an administrative service in real space by having the avatar 201 of the first user U_K carry out a procedure at the government office 60 in the virtual space VS using the user ID for accessing the virtual space VS. A sketch of this flow is given below.
  • As a result, the first user U_K can receive administrative services without having to go to a government office in the real world, which improves convenience for the first user U_K.
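  • The following rough Python sketch assumes nothing beyond the text above: the user ID used to access the virtual space VS doubles as the credential for the procedure at the government office 60. All names below are hypothetical.

```python
REGISTERED_USERS = {"user-123": "U_K"}   # user IDs for accessing the VS

def carry_out_procedure(user_id: str, procedure: str) -> str:
    if user_id not in REGISTERED_USERS:   # the access ID doubles as credentials
        raise PermissionError("unknown user ID")
    # In the embodiment this request would be handled by the city OS of the
    # individual server hosting the government office 60.
    return f"accepted: {procedure} for {REGISTERED_USERS[user_id]}"

print(carry_out_procedure("user-123", "certificate of residence"))
```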
  • In the embodiment described above, the visual effect information VEI is applied to the colors and patterns of the plurality of buildings belonging to city J in the virtual space VS and to the appearance of the third-party avatars traveling through city J in the virtual space VS.
  • The visual effect information VEI may also include items related to the "image of the city" that the user has.
  • Fig. 17 is a diagram showing an example of a scene in the virtual space VS to which an item included in the visual effect information VEI has been applied.
  • Fig. 17 shows a scene when the first user U_K visits city J in the virtual space VS. Since the first user U_K has an image of city J as a surfing town, the determination unit 313 places surfboards 301K and 302K on street corners as visual effects.
  • For a user with a different image of the city, lanterns, for example, may be placed on street corners instead.
  • In the embodiment described above, the three-dimensional CG models CGM of the plurality of buildings belonging to each city are stored in the storage devices 22 of the individual servers 20-1 to 20-M corresponding to the respective cities, but they may instead be stored collectively in the storage device 32 of the management server 30.
  • Likewise, the information regarding the visual effects specific to each user in the virtual space VS of each city is stored as visual effect information VEI in the storage devices 22 of the individual servers 20-1 to 20-M corresponding to the respective cities.
  • The visual effect information VEI stored in each storage device 22 may instead be stored collectively in the storage device 32 of the management server 30.
  • In the embodiment described above, the management server 30 is provided as an entity separate from the individual servers 20-1 to 20-M, but the functions of the management server 30 may be distributed among the individual servers 20-1 to 20-M.
  • In the embodiment described above, the determination unit 313 determines the color and pattern of the buildings using the large-scale language model LLM, but a learning model that has learned the correlation between the "image of the city" and the "color and pattern of the building" through supervised learning may also be used.
  • Also, in the embodiment described above, the determination unit 313 determines the clothes to be worn by the third-party avatars using the learning model LM1, which has learned the correlation between the "image of the town" and the "clothing worn by the third-party avatar" through supervised learning.
  • However, a learning model trained through unsupervised learning using a large amount of data including the "image of the town" and the "clothing worn by the third-party avatar" may also be used.
  • The determination unit 313 may also determine the clothing to be worn by the third-party avatars using a well-known large-scale language model. For example, the determination unit 313 may determine the clothing by inputting into the large-scale language model a prompt related to the extracted "city image" of city J together with a prompt instructing it to output clothing characteristics that match the "city image" of city J.
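  • As a sketch of this prompt-based variation (not a real LLM API; `call_llm` below is a placeholder, and the prompt wording is an assumption):

```python
def build_clothing_prompt(city_image_terms: list[str]) -> str:
    image = ", ".join(city_image_terms)
    return (f"The user's image of city J is: {image}. "
            "Output characteristics of clothing for third-party avatars "
            "that match this image of the city.")

def call_llm(prompt: str) -> str:
    # Placeholder standing in for a large-scale language model.
    return "wetsuits and boardshorts" if "surfing" in prompt else "casual wear"

print(call_llm(build_clothing_prompt(["surfing", "beach"])))
```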
  • In the embodiments described above, the storage device 12, the storage device 22, and the storage device 32 are exemplified by ROM and RAM, but the storage devices 12, 22, and 32 may be flexible disks, magneto-optical disks (e.g., compact discs, digital versatile discs, Blu-ray (registered trademark) discs), smart cards, flash memory devices (e.g., cards, sticks, key drives), CD-ROMs (Compact Disc-ROMs), registers, removable disks, hard disks, floppy (registered trademark) disks, magnetic strips, databases, servers, or other suitable storage media.
  • The programs may also be transmitted from a network, for example, the communication network NET, via telecommunication lines.
  • The information, signals, etc. described herein may be represented using any of a variety of different technologies.
  • For example, data, instructions, commands, information, signals, bits, symbols, chips, etc. that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.
  • The input/output information, etc. may be stored in a specific location (e.g., memory) or may be managed using a management table.
  • The input/output information, etc. may be overwritten, updated, or appended.
  • The output information, etc. may be deleted.
  • The input information, etc. may be transmitted to another device.
  • A determination may be made based on a value (0 or 1) represented using one bit, a Boolean value (true or false), or a comparison of numerical values (e.g., a comparison with a predetermined value).
  • Each function illustrated in FIG. 1 to FIG. 17 is realized by any combination of at least one of hardware and software. Furthermore, there are no particular limitations on the method of realizing each functional block. That is, each functional block may be realized using one device that is physically or logically coupled, or using two or more devices that are physically or logically separated and connected directly or indirectly (e.g., by wire, wirelessly, etc.). A functional block may also be realized by combining the one device or the multiple devices with software.
  • The programs exemplified in the above-described embodiments should be broadly construed to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executable files, threads of execution, procedures, functions, etc., regardless of whether they are called software, firmware, middleware, microcode, hardware description language, or by other names.
  • Software, instructions, information, etc. may be transmitted and received via a transmission medium. For example, when software is transmitted from a website, server, or other remote source using wired technologies (such as coaxial cable, fiber optic cable, twisted pair, or Digital Subscriber Line (DSL)) and/or wireless technologies (such as infrared or microwave), at least one of these wired and wireless technologies is included within the definition of a transmission medium.
  • The information, parameters, etc. described in this disclosure may be expressed using absolute values, may be expressed using relative values from a predetermined value, or may be expressed using other corresponding information.
  • The terminal device 10 may be a mobile station (MS). A mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable terminology.
  • In this disclosure, terms such as "mobile station," "user terminal," "user equipment (UE)," and "terminal" may be used interchangeably.
  • The terms "connected" and "coupled," and any variations thereof, refer to any direct or indirect connection or coupling between two or more elements, and may include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other.
  • The coupling or connection between elements may be a physical coupling or connection, a logical coupling or connection, or a combination thereof. For example, "connected" may be read as "access."
  • As used in this disclosure, two elements may be considered to be "connected" or "coupled" to each other using at least one of one or more wires, cables, and printed electrical connections, as well as, as some non-limiting and non-exhaustive examples, electromagnetic energy having wavelengths in the radio frequency range, the microwave range, and the light (both visible and invisible) range.
  • The term "based on" as used in this disclosure does not mean "based only on," unless otherwise specified. In other words, the phrase "based on" means both "based only on" and "based at least on."
  • The terms "judging" and "determining" as used in this disclosure may encompass a wide variety of actions. "Judging" and "determining" may include, for example, considering judging, calculating, computing, processing, deriving, investigating, looking up, searching, or inquiring (e.g., searching in a table, database, or other data structure), and considering ascertaining, to be "judging" or "determining." Also, "judging" and "determining" may include considering receiving (e.g., receiving information), transmitting (e.g., transmitting information), input, output, and accessing (e.g., accessing data in memory) to be "judging" or "determining."
  • Furthermore, "judging" and "determining" may include considering resolving, selecting, choosing, establishing, comparing, etc. to have been "judged" or "determined." In other words, "judging" and "determining" may include considering some action to have been "judged" or "determined." Additionally, "judging (determining)" may be read as "assuming," "expecting," "considering," etc.
  • The phrase "A and B are different" may mean "A and B are different from each other."
  • The phrase may also mean "A and B are each different from C."
  • Terms such as "separate" and "combined" may be interpreted in the same way as "different."
  • Notification of specific information is not limited to explicit notification, and may be performed implicitly (e.g., by not notifying the specific information).


Abstract

This virtual space management device provides, to a user, a virtual space in which a plurality of individual spaces corresponding one-to-one to a plurality of cities are integrated. The virtual space management device comprises: a determination unit that, on the basis of attribute information indicating attributes of the user, determines a visual effect for a visited city visited by the user from among the plurality of cities in the virtual space; a generation unit that applies the visual effect determined by the determination unit in common to a plurality of buildings belonging to the visited city, thereby generating an individual space of the visited city; and a provision unit that provides the individual space generated by the generation unit to a terminal device used by the user.

Description

Virtual space management device

The present invention relates to a virtual space management device.

Conventionally, attempts have been made to solve local issues and to provide new services by linking information between cities and users through virtual spaces managed on a city-by-city basis and by linking data between cities. Patent Document 1 discloses an information processing device that identifies detected user behavior, determines customization information corresponding to the identified behavior, determines how to customize the virtual space, and processes the spatial information that is the basis of the virtual space according to the determined customization information.

JP 2013-250897 A

"docomo Open House '23 Exhibit: Development of communication activation technology through ultra-large numbers of simultaneous connections, value understanding, and behavior change - Meta-communication realized by integrating network and services", [online], February 1, 2023, [retrieved August 10, 2023], Internet, <URL: https://www.docomo.ne.jp/info/news_release/2023/02/01_00.html>

However, in conventional devices, customization is performed on virtual objects placed in a user's private space, but customization of virtual objects placed in public spaces is not taken into consideration. As a result, when a user visits each city in a virtual space, the user does not obtain a sufficiently impressive visiting experience.

The present invention has been made to solve the above problems, and aims to generate a virtual space that reflects the user's unique worldview, providing the user with an impressive or attractive visiting experience.

A virtual space management device according to a preferred aspect of the present invention is a virtual space management device that provides a user with a virtual space in which a plurality of individual spaces corresponding one-to-one to a plurality of cities are integrated, and includes: a determination unit that determines, based on attribute information indicating the attributes of the user, a visual effect for a visited city visited by the user from among the plurality of cities in the virtual space; a generation unit that generates an individual space of the visited city by commonly applying the visual effect determined by the determination unit to a plurality of buildings belonging to the visited city; and a provision unit that provides the individual space generated by the generation unit to a terminal device used by the user.

According to the virtual space management device of the present invention, a virtual space reflecting the user's unique worldview is generated, and the user obtains an impressive or attractive visiting experience.

Fig. 1 is a diagram showing the overall configuration of an information processing system including the virtual space management device according to the first embodiment. Fig. 2 is a schematic configuration diagram showing an example of a network of city OSs. Fig. 3 is a block diagram showing a configuration example of the terminal device of Fig. 1. Fig. 4 is a block diagram showing a configuration example of an individual server of Fig. 1. Fig. 5 is a block diagram showing a configuration example of the management server of Fig. 1. Fig. 6 is a schematic diagram showing an example of a neural network model applied to the learning model according to the first embodiment. Fig. 7 is a diagram showing an example of a landscape in the virtual space to which no visual effects are applied. Fig. 8 is a diagram showing an example of a landscape in the virtual space to which visual effects dedicated to the first user are applied. Fig. 9 is a diagram showing an example of a landscape in the virtual space to which visual effects dedicated to the second user are applied. Fig. 10 is a diagram showing an example of avatars moving about in the virtual space of Fig. 7. Fig. 11 is a diagram showing an example of avatars moving about in the virtual space of Fig. 8. Fig. 12 is a diagram showing an example of avatars moving about in the virtual space of Fig. 9. Fig. 13 is a diagram showing an example of a government office reception counter in an individual space. Fig. 14 is a flowchart showing the first operation of the processing device of Fig. 5. Fig. 15 is a flowchart showing the second operation of the processing device of Fig. 5. Fig. 16 is a flowchart showing the third operation of the processing device of Fig. 5. Fig. 17 is a diagram showing an example of a landscape in the virtual space to which an item included in the visual effect information is applied.

1. First Embodiment
Hereinafter, the configuration of the virtual space management device according to the first embodiment of the present invention will be described with reference to Figs. 1 to 16.

1.1. Configuration of the First Embodiment
1.1.1. Overall Configuration

Fig. 1 is a diagram showing the overall configuration of an information processing system 1 including the virtual space management device according to the first embodiment. The information processing system 1 includes a terminal device 10, an individual server 20, a management server 30, and a communication network NET.

The terminal devices 10 include terminal devices 10-1, 10-2, ..., 10-K, 10-L, ..., 10-N. Here, N is any natural number, and K and L are any natural numbers smaller than N. In this embodiment, the configurations of the terminal devices 10-1 to 10-N are identical to one another. Note that the terminal devices 10 may include terminal devices that do not have the same configuration.

The first user U_K is a user who uses the terminal device 10-K. The second user U_L is a user who uses the terminal device 10-L. Note that in Fig. 1, users who use terminal devices other than the terminal device 10-K and the terminal device 10-L are omitted from the illustration.

The individual servers 20 include individual servers 20-1, 20-2, ..., 20-J, ..., 20-M. Here, M is any natural number, and J is any natural number smaller than M. In this embodiment, the configurations of the individual servers 20-1 to 20-M are identical to one another. Note that the individual servers 20 may include individual servers that do not have the same configuration.

The individual servers 20 are servers that primarily provide the administrative services of the respective cities. Here, a city is delimited as a municipal unit such as a city, town, or village, or as a regional unit including several municipalities. Each individual server 20 operates a city OS as the platform for providing administrative services. The city OS is an open administrative management system that can cooperate with existing administrative information systems through a common platform and standardized APIs (Application Programming Interfaces). On the city OS, application software for administration, logistics, transportation, and the like is executed, for example.

Fig. 2 is a schematic configuration diagram showing an example of a network of city OSs. As shown in Fig. 2, a city OS is operated for each municipality or region. The city OSs 80-1 to 80-4 are configured to be able to cooperate with one another. Note that the city OSs and the individual servers 20-1 to 20-M correspond to each other on a one-to-one basis.

The city OS 80-1 of city A and services #1, #2, #3, and #7 are connected via standard APIs. The city OS 80-1 of city A and data #a, #b, #c, and #f are likewise connected via standard APIs.

The city OS 80-2 of town B and services #4, #5, and #6 are connected via standard APIs. The city OS 80-2 of town B and data #d, #e, and #f are likewise connected via standard APIs.

The city OS 80-4 of city D and service #7 are connected via a standard API. Accordingly, the city OS 80-1 of city A and the city OS 80-4 of city D jointly use service #7. Likewise, the city OS 80-1 of city A and the city OS 80-2 of town B jointly use data #f.

The city OS 80-1 of city A, the city OS 80-2 of town B, the city OS 80-3 of village C, and the city OS 80-4 of city D are directly or indirectly connected, and the city OSs are interoperable. The city OSs can cooperate with one another to distribute various data inside and outside each city.

In the information processing system 1, the terminal devices 10-1 to 10-N, the individual servers 20-1 to 20-M, and the management server 30 are communicably connected to one another via the communication network NET. The information processing system 1 is a system that provides a virtual space management service to each user who uses the terminal devices 10-1 to 10-N.

The management server 30 is a server that manages the virtual space. The management server 30 acquires various information from the terminal devices 10 via the communication network NET and provides the virtual space management service to the terminal devices 10. The virtual space is a space in which a plurality of individual spaces corresponding one-to-one to a plurality of cities are integrated. In other words, the virtual space is a space in which a plurality of individual spaces corresponding one-to-one to the individual servers 20-1 to 20-M are integrated.
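The one-to-one correspondence can be pictured as a simple mapping, as in the following illustrative Python sketch (the class and field names are assumptions, not terms from the specification):

```python
from dataclasses import dataclass, field

@dataclass
class IndividualSpace:
    city_id: str                 # e.g. "city_J", served by individual server 20-J
    buildings: list = field(default_factory=list)

@dataclass
class VirtualSpace:
    spaces: dict = field(default_factory=dict)   # city_id -> IndividualSpace

    def register(self, space: IndividualSpace):
        self.spaces[space.city_id] = space       # exactly one space per city

vs = VirtualSpace()
vs.register(IndividualSpace("city_J", buildings=["hall", "clinic"]))
```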

1.1.2. Terminal Device Configuration
Fig. 3 is a block diagram showing a configuration example of the terminal device 10-K in Fig. 1. As shown in Fig. 3, the terminal device 10-K includes a processing device 11, a storage device 12, a communication device 13, a display 14, an input device 15, an imaging device 16, a recording device 17, and a positioning device 18. The elements included in the terminal device 10 are connected to one another by one or more buses for communicating information.

The processing device 11 is a processor that controls the entire terminal device 10, and is configured using, for example, one or more chips. The processing device 11 is configured using, for example, a central processing unit (CPU) including an interface with peripheral devices, an arithmetic unit, registers, and the like. Some or all of the functions of the processing device 11 may be realized by hardware such as a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a PLD (Programmable Logic Device), or an FPGA (Field Programmable Gate Array). The processing device 11 executes various processes in parallel or sequentially.

The storage device 12 is a recording medium that can be read from and written to by the processing device 11. The storage device 12 includes, for example, a nonvolatile memory and a volatile memory. The nonvolatile memory is, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory). The volatile memory is, for example, a RAM (Random Access Memory).

The storage device 12 stores a plurality of programs including a control program PR1 to be executed by the processing device 11. The storage device 12 also functions as a work area for the processing device 11.

The communication device 13 is hardware serving as a transmitting/receiving device for communicating with other devices. The communication device 13 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 13 may include a connector for wired connection and an interface circuit corresponding to the connector. The communication device 13 may also include a wireless communication interface. Examples of the connector and interface circuit for wired connection include products conforming to wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.

The display 14 is a device that displays images and text information. The display 14 displays various images under the control of the processing device 11. For example, various display panels such as a liquid crystal panel and an organic EL (Electro Luminescence) panel are suitably used as the display 14.

The input device 15 receives operations from the first user U_K. For example, the input device 15 includes a keyboard, a touch pad, a touch panel, and a pointing device such as a mouse. When the input device 15 includes a touch panel, it may also serve as the display 14.

The imaging device 16 outputs a captured image Gx obtained by imaging the outside world. The imaging device 16 includes, for example, a lens, an imaging element, an amplifier, and an AD converter. Light collected through the lens is converted by the imaging element into an imaging signal, which is an analog signal. The amplifier amplifies the imaging signal and outputs it to the AD converter. The AD converter converts the amplified imaging signal, which is an analog signal, into imaging information, which is a digital signal. The converted imaging information is output to the processing device 11 as the captured image Gx.

The recording device 17 outputs audio information Ox obtained by recording surrounding sounds. The recording device 17 includes, for example, a microphone, an amplifier, and an AD converter. Surrounding sounds are converted by the microphone into an audio signal, which is an analog signal. The amplifier amplifies the audio signal and outputs it to the AD converter. The AD converter converts the amplified audio signal, which is an analog signal, into audio information Ox, which is a digital signal. The converted audio information Ox is output to the processing device 11.

The positioning device 18 acquires position information of the terminal device 10. The positioning device 18 may be, for example, a GNSS (Global Navigation Satellite System) receiver. The GNSS receiver receives radio signals transmitted from one or more GNSS satellites. GNSS is a positioning system that uses positioning satellites of countries around the world, including GPS (Global Positioning System) satellites. A radio signal includes information such as the position information of the satellite that transmitted the radio signal and the transmission time of the radio signal. The GNSS receiver performs positioning based on the one or more received radio signals and outputs the position information of the GNSS receiver to the processing device 11.

The positioning device 18 may also be, for example, a VPS (Visual Positioning Service) device. The VPS device acquires, from an imaging device provided in the XR glasses worn by the first user U_K, image information representing an image of the scene in front of the first user U_K. The VPS device outputs the image information acquired from the imaging device to a position information server (not shown) via the communication device 13. The VPS device acquires VPS information as position information from the position information server via the communication device 13. The position information includes the position of the first user U_K in real space and the direction in which the first user U_K views the real space.

The terminal device 10-K displays virtual objects arranged in the virtual space on the display 14 or on XR glasses worn on the head of the first user U_K. Here, XR glasses is a general term for VR (Virtual Reality) glasses, AR (Augmented Reality) glasses, and MR (Mixed Reality) glasses.

The virtual objects are, for example, virtual objects representing data such as still images, videos, three-dimensional CG models, HTML files, and text files, and virtual objects representing applications. Here, examples of text files include memos and source code. Examples of applications include a browser, an application for using an SNS, and an application for generating document files.

The terminal device 10-K includes a personal computer, a tablet terminal, a smartphone, a smartwatch, or the like. The terminal device 10-K is preferably a portable terminal device such as a tablet or a smartphone. Note that the XR glasses may be connected to the communication network NET without going through the terminal device 10.

The processing device 11 functions as an acquisition unit 111, an output unit 112, and a display control unit 113 by, for example, reading the control program PR1 from the storage device 12 and executing it.

The acquisition unit 111 acquires various information transmitted from the individual servers 20-1 to 20-M and the management server 30, the captured image Gx output from the imaging device 16, the audio information Ox output from the recording device 17, and the position information output from the positioning device 18.

The output unit 112 outputs the captured image Gx, the audio information Ox, and the position information to the management server 30 via the communication device 13. The output unit 112 also outputs information input by the first user U_K using the input device 15 to the management server 30 via the communication device 13. The information input by the first user U_K includes, for example, text information entered on a chat tool.

The display control unit 113 causes the display 14 to display various information based on the various information acquired by the acquisition unit 111. For example, the display control unit 113 causes the display 14 to display images representing virtual objects. When XR glasses are connected to the terminal device 10-K, the display control unit 113 causes the display of the XR glasses to display the images representing the virtual objects in accordance with the posture of the first user U_K, i.e., the posture of the XR glasses.

1.1.3. Configuration of the Individual Server
Fig. 4 is a block diagram showing a configuration example of the individual server 20-J in Fig. 1. As shown in Fig. 4, the individual server 20-J includes a processing device 21, a storage device 22, a communication device 23, a display 24, and an input device 25. The elements included in the individual server 20 are connected to one another by one or more buses for communicating information.

The processing device 21 is a processor that controls the entire individual server 20, and is configured using, for example, one or more chips. The processing device 21 is configured using, for example, a central processing unit (CPU) including an interface with peripheral devices, an arithmetic unit, and registers. Some or all of the functions of the processing device 21 may be realized by hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 21 executes various processes in parallel or sequentially.

The storage device 22 is a recording medium that can be read from and written to by the processing device 21. The storage device 22 includes, for example, a nonvolatile memory and a volatile memory. The nonvolatile memory is, for example, a ROM, an EPROM, or an EEPROM. The volatile memory is, for example, a RAM.

The storage device 22 stores a plurality of programs including a control program PR2 to be executed by the processing device 21, as well as a city OS COS, three-dimensional CG models CGM, and visual effect information VEI. The three-dimensional CG models CGM are models of a plurality of buildings placed in the public space of the city. The visual effect information VEI is information on the visual effects on the virtual space determined for each user who has visited the city in the virtual space. The storage device 22 also functions as a work area for the processing device 21.

The communication device 23 is hardware serving as a transmitting/receiving device for communicating with other devices. The communication device 23 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 23 may include a connector for wired connection and an interface circuit corresponding to the connector. The communication device 23 may also include a wireless communication interface. Examples of the connector and interface circuit for wired connection include products conforming to wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.

The display 24 is a device that displays images and text information. The display 24 displays various images under the control of the processing device 21. For example, various display panels such as a liquid crystal panel and an organic EL panel are suitably used as the display 24.

The input device 25 is a device that receives operations by the administrator of the information processing system 1. For example, the input device 25 includes a keyboard, a touch pad, a touch panel, and a pointing device such as a mouse. When the input device 25 includes a touch panel, it may also serve as the display 24. In particular, the administrator of the information processing system 1 can modify the control program PR2 by using the input device 25.

The processing device 21 functions as an acquisition unit 211, an output unit 212, and a display control unit 213 by, for example, reading the control program PR2 from the storage device 22 and executing it.

The acquisition unit 211 acquires various information transmitted from the other individual servers and the management server 30.

The output unit 212 outputs the visual effect information VEI stored in the storage device 22 to the management server 30 via the communication device 23.

The display control unit 213 causes the display 24 to display various information based on the various information acquired by the acquisition unit 211. For example, the display control unit 213 can transmit information on the three-dimensional CG models of the buildings of the city to the display 24 and cause the display 24 to display the buildings in the virtual space.

1.1.4. Configuration of the Management Server
Fig. 5 is a block diagram showing a configuration example of the management server 30 in Fig. 1. As shown in Fig. 5, the management server 30 includes a processing device 31 as a virtual space management device, a storage device 32, a communication device 33, a display 34, and an input device 35. The elements included in the management server 30 are connected to one another by one or more buses for communicating information. The management server 30 is an example of a virtual space management device.

The processing device 31 is a processor that controls the entire management server 30, and is configured using, for example, one or more chips. The processing device 31 is configured using, for example, a central processing unit (CPU) including an interface with peripheral devices, an arithmetic unit, and registers. Some or all of the functions of the processing device 31 may be realized by hardware such as a DSP, an ASIC, a PLD, or an FPGA. The processing device 31 executes various processes in parallel or sequentially.

The storage device 32 is a recording medium that can be read from and written to by the processing device 31. The storage device 32 includes, for example, a nonvolatile memory and a volatile memory. The nonvolatile memory is, for example, a ROM, an EPROM, or an EEPROM. The volatile memory is, for example, a RAM.

The storage device 32 stores a plurality of programs including a control program PR3 to be executed by the processing device 31, teacher data TD1, a learning model LM1, a large-scale language model LLM, and an ID database IDB. The storage device 32 also functions as a work area for the processing device 31.

The communication device 33 is hardware serving as a transmitting/receiving device for communicating with other devices. The communication device 33 is also called, for example, a network device, a network controller, a network card, or a communication module. The communication device 33 may include a connector for wired connection and an interface circuit corresponding to the connector. The communication device 33 may also include a wireless communication interface. Examples of the connector and interface circuit for wired connection include products conforming to wired LAN, IEEE 1394, and USB. Examples of the wireless communication interface include products conforming to wireless LAN, Bluetooth (registered trademark), and the like.

The display 34 is a device that displays images and text information. The display 34 displays various images under the control of the processing device 31. For example, various display panels such as a liquid crystal panel and an organic EL panel are suitably used as the display 34.

The input device 35 is a device that receives operations by the administrator of the information processing system 1. For example, the input device 35 includes a keyboard, a touch pad, a touch panel, and a pointing device such as a mouse. When the input device 35 includes a touch panel, it may also serve as the display 34. In particular, the administrator of the information processing system 1 can modify the control program PR3 by using the input device 35.

The processing device 31 functions as a judgment unit 311, an acquisition unit 312, a determination unit 313, a generation unit 314, a provision unit 315, an avatar control unit 316, and a learning unit 317 by, for example, reading the control program PR3 from the storage device 32 and executing it.

The processing device 31 provides the first user U_K with a virtual space VS in which a plurality of individual spaces are integrated. The plurality of individual spaces correspond one-to-one to a plurality of cities. As described above, in each of the plurality of cities, a city OS is operated by the corresponding individual server 20 as a platform for providing administrative services.

When the first user U_K is about to visit city J in the virtual space, the judgment unit 311 judges whether or not this is the first user U_K's first visit to city J in the virtual space. If the first user U_K visits city J in the virtual space for the first time, the processing device 31 determines, as described later, a visual effect dedicated to the first user U_K for the buildings in the virtual space. City J is an example of a visited city. A visited city is a city that the first user U_K visits among the plurality of cities in the virtual space VS.

On the other hand, if it is not the first time that the first user U_K visits city J in the virtual space, the processing device 31 determines, as the visual effect dedicated to the first user U_K for the buildings in the virtual space, the visual effect that was determined when the first user U_K first visited city J. The visual effect determined when the first user U_K first visited city J is stored as visual effect information VEI in the storage device 22 of the individual server 20-J corresponding to city J.

The acquisition unit 312 acquires the behavior history of the first user U_K from the terminal device 10-K and the individual servers 20-1 to 20-M. The behavior history includes at least one of the utterances of the first user U_K in the virtual space including the individual space of each city, the posts of the first user U_K to SNSs (Social Networking Services), and the spatial movement history of the first user U_K in that virtual space.

The determination unit 313 extracts attribute information indicating the attributes of the first user U_K from the behavior history and the like of the first user U_K. The attribute information is, for example, information on the age, sex, occupation, hobbies, preferences, and the like of the first user U_K.
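As a purely illustrative Python sketch of this extraction (the keyword tables and the derived attributes are invented for the example and are not from the specification):

```python
HOBBY_KEYWORDS = {"surfing": ["wave", "board", "beach"],
                  "festivals": ["lantern", "parade", "fireworks"]}

def extract_attributes(history: list[str]) -> dict:
    """history: utterances, SNS posts, and the like, as plain text."""
    text = " ".join(history).lower()
    hobbies = [h for h, kws in HOBBY_KEYWORDS.items()
               if any(k in text for k in kws)]
    return {"hobbies": hobbies}

print(extract_attributes(["Caught a great wave today", "new board arrived"]))
# -> {'hobbies': ['surfing']}
```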

The behavior history of the first user U_K may also include the utterances of the first user U_K in real space, the posts of the first user U_K to SNSs in real space, the movement history of the first user U_K in real space, and the like. It is desirable to obtain the consent of the first user U_K in advance, for example through terms of service, for the recording of utterance content and of information on the spatial movement history and the like.

The determination unit 313 determines a visual effect for each city in the virtual space that the first user U_K visits, based on the attribute information indicating the attributes of the first user U_K.

More specifically, the determination unit 313 analyzes the emotions of the first user U_K from the attribute information of the first user U_K using, for example, a well-known value-understanding technology. The value-understanding technology is a technology that analyzes information on the spatial movement history and the like of the first user U_K acquired seamlessly in real space and virtual space, and understands the emotions of the first user U_K (see Non-Patent Document 1).

The analysis of information based on the utterance content of the first user U_K and the analysis of information on the spatial movement history can be realized by the well-known large-scale language model LLM.

Here, the visual effect means at least one of a color, tone, texture, pattern, and the like applied to the surfaces of buildings constructed as three-dimensional CG models.

The generation unit 314 generates the individual space of the visited city visited by the first user U_K by commonly applying the visual effect determined by the determination unit 313 to the plurality of buildings belonging to the visited city. In other words, the generation unit 314 generates the individual space of the visited city by applying the same visual effect to the plurality of buildings belonging to the visited city visited by the first user U_K. The buildings include office buildings, houses, commercial facilities, public facilities, hospitals, factories, warehouses, and the like.
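The defining point is that the same effect is applied to every building. A minimal Python sketch under that reading (the dictionary fields are illustrative stand-ins for the three-dimensional CG model data):

```python
def generate_individual_space(buildings: list[dict], effect: dict) -> list[dict]:
    # Apply the decided color/pattern uniformly to all buildings.
    return [{**b, **effect} for b in buildings]

city_j = [{"name": "hall"}, {"name": "clinic"}, {"name": "warehouse"}]
effect = {"color": "ocean blue", "pattern": "waves"}
print(generate_individual_space(city_j, effect))
```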

 提供部315は、生成部314によって生成された個別空間を、第1のユーザUが使用する端末装置10-Kに提供する。提供部315は、第1のユーザUの仮想空間上の位置及び第1のユーザUが向いている方向に関する情報に基づいて、個別空間に関する情報を端末装置10-Kに提供する。 The providing unit 315 provides the individual space generated by the generating unit 314 to the terminal device 10-K used by the first user U.K. The providing unit 315 provides information about the individual space to the terminal device 10- K based on information about the position of the first user U.K. in the virtual space and information about the direction in which the first user U.K. is facing.

 アバター制御部316は、仮想空間における第1のユーザUのアバターの動作を制御する。より具体的に述べると、アバター制御部316は、端末装置10-Kから第1のユーザUのアバターの位置情報及び第1のユーザUによる入力装置15の操作情報を取得する。アバター制御部316は、取得したアバターの位置情報及び入力装置15の操作情報に従って、ディスプレイ14又はXRグラスのディスプレイに表示される仮想空間における第1のユーザUのアバターの動作を制御する。 The avatar control unit 316 controls the movement of the avatar of the first user U.K. in the virtual space. More specifically, the avatar control unit 316 acquires position information of the avatar of the first user U.K. and operation information of the input device 15 by the first user U.K. from the terminal device 10-K. The avatar control unit 316 controls the movement of the avatar of the first user U.K. in the virtual space displayed on the display 14 or the display of the XR glasses in accordance with the acquired position information of the avatar and operation information of the input device 15.

 The learning unit 317 has a teacher data acquisition unit 317a and a model generation unit 317b. The teacher data acquisition unit 317a prepares multiple pieces of teacher data TD1 and stores them in the storage device 32. Each piece of teacher data TD1 is configured by associating input data with output data. The teacher data acquisition unit 317a also prepares a learning model LM1 before training. During machine learning, the teacher data acquisition unit 317a acquires one set of teacher data from the storage device 32, for example at random, out of the multiple pieces of teacher data TD1.

 The model generation unit 317b generates a trained learning model LM1 by having the learning model LM1 machine-learn the multiple pieces of teacher data TD1. More specifically, the model generation unit 317b performs machine learning using the multiple pieces of teacher data TD1 stored in the storage device 32. That is, the model generation unit 317b inputs the multiple pieces of teacher data TD1 to the learning model LM1, causes the learning model LM1 to machine-learn the correlation between the input data and the output data constituting the teacher data TD1, and thereby generates the trained learning model LM1.

 The multiple pieces of teacher data TD1 include multiple data sets. One data set is composed of input data including a "city image" and output data including "clothing worn by a third-party avatar." In supervised learning, the output data is what is called, for example, a correct label. In this way, each keyword included in the input data is associated, as output data, with the colors and patterns evoked by that keyword.
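 As a concrete, non-limiting illustration, one data set of the teacher data TD1 can be pictured as a pair of a keyword-based "city image" and a clothing label. The sketch below assumes a hypothetical Python representation with made-up field names; the embodiment does not prescribe any particular data format.

    # Minimal sketch of teacher data TD1: each data set pairs input data
    # (a "city image" given as keywords) with output data (the correct
    # label, "clothing worn by a third-party avatar"). Hypothetical format.
    teacher_data_td1 = [
        {"city_image": ["sea", "surfing", "bright"],
         "clothing": "T-shirt with a yacht pattern"},
        {"city_image": ["temples", "historic", "calm"],
         "clothing": "plain monotone shirt"},
    ]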

1.1.5. Configuration of the learning model
 FIG. 6 is a schematic diagram showing an example of a neural network model 90 applied to the learning model LM1 according to the first embodiment. The neural network model 90 includes an input layer 91, an intermediate layer 92, and an output layer 93.

 The input layer 91 has a number of neurons corresponding to the words or character strings serving as input data, and each word or character string is input to a respective neuron.

 The intermediate layer 92 is configured by, for example, a convolutional neural network. The intermediate layer 92 transforms the features extracted from the "city image" input via the input layer 91 using an activation function and outputs them as a one-dimensional feature vector.

 The output layer 93 outputs output data including "clothing worn by a third-party avatar" based on the feature vector output from the intermediate layer 92.

 Synapses connecting the neurons of adjacent layers are provided between the layers of the neural network model 90.

 The model generation unit 317b inputs the multiple pieces of teacher data TD1 to the neural network model 90 and causes the neural network model 90 to machine-learn the correlation between the input data, the "city image," and the output data, the "clothing worn by a third-party avatar." More specifically, the model generation unit 317b first selects one data set from the multiple pieces of teacher data TD1 and inputs the "city image" constituting that data set to the input layer 91 of the neural network model 90 as input data.

 The model generation unit 317b uses an evaluation function that compares the output data output from the output layer 93 as an inference result, i.e., the inferred "clothing worn by a third-party avatar," with the output data constituting that data set, i.e., the correct label for the "clothing worn by a third-party avatar," and adjusts the weight associated with each synapse so that the value of the evaluation function becomes smaller. Adjusting the weights associated with the synapses in this way is called backpropagation.

 For each of the multiple data sets in the teacher data TD1, the model generation unit 317b sequentially inputs the "city image" constituting that data set to the input layer 91 of the neural network model 90 as input data. The model generation unit 317b compares each piece of input data with the output data corresponding to it and repeatedly adjusts the weights associated with the synapses so as to reduce the value of the evaluation function.

 When the model generation unit 317b determines that a predetermined learning end condition is satisfied, it ends the machine learning and stores the neural network model 90 at that point in time in the storage device 32 as the trained learning model LM1. The predetermined learning end condition is, for example, that the number of iterations of the above series of learning processes reaches a predetermined number, and that the value of the evaluation function becomes smaller than an allowable value.
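 As one possible reading of FIG. 6 and the weight adjustment described above, the following PyTorch sketch models the input layer 91 as an embedding over keyword IDs, the intermediate layer 92 as a one-dimensional convolution with an activation function, and the output layer 93 as a linear classifier with a cross-entropy evaluation function. All of these concrete choices (layer sizes, optimizer, loss) are assumptions made for illustration and are not prescribed by the embodiment.

    import torch
    import torch.nn as nn

    # Illustrative stand-in for neural network model 90 of FIG. 6:
    # input layer 91 -> intermediate layer 92 -> output layer 93.
    class NeuralNetworkModel90(nn.Module):
        def __init__(self, vocab_size, num_clothing_classes, emb_dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)   # input layer 91
            self.conv = nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1)  # intermediate layer 92
            self.act = nn.ReLU()                             # activation function
            self.out = nn.Linear(64, num_clothing_classes)   # output layer 93

        def forward(self, keyword_ids):                      # shape: (batch, seq_len)
            x = self.embed(keyword_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
            x = self.act(self.conv(x))
            feature_vector = x.mean(dim=2)                   # one-dimensional feature vector
            return self.out(feature_vector)

    model = NeuralNetworkModel90(vocab_size=1000, num_clothing_classes=10)
    criterion = nn.CrossEntropyLoss()                        # evaluation function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # One weight adjustment for a single data set, mirroring the text:
    city_image = torch.randint(0, 1000, (1, 5))              # input data: "city image"
    correct_label = torch.tensor([3])                        # output data: correct label
    loss = criterion(model(city_image), correct_label)       # compare inference with label
    optimizer.zero_grad()
    loss.backward()                                          # backpropagation
    optimizer.step()                                         # adjust synapse weights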

1.1.6. Overview of the operation of the processing device 31
 FIG. 7 is a diagram showing an example of a landscape in the virtual space VS to which no visual effect is applied. For example, FIG. 7 shows a landscape of city J corresponding to the individual server 20-J. In FIG. 7, buildings 101 to 105 are shown as three-dimensional CG models CGM. As shown in FIG. 7, when no visual effect is applied, the surface colors, patterns, and the like of the buildings 101 to 105 in the virtual space VS differ from building to building, reflecting their colors, patterns, and the like in the real space. The three-dimensional CG models CGM in the virtual space VS with no visual effect applied are created based on photographic data of the real space, such as satellite photographs and aerial photographs.

 In this way, the appearances of the buildings 101 to 105 in the virtual space VS, which are virtual objects in a public space, often lack uniformity. When the appearances of the buildings 101 to 105 lack uniformity, the cityscape is not distinctive. Consequently, whichever city a visitor visits, the cityscape looks much the same, and the visitor may not have an impressive visiting experience.

 The three-dimensional CG models CGM with no visual effect applied are stored in the storage device 22 of the individual server 20-J. Note that FIG. 7 is an auxiliary diagram for explaining the first embodiment, and the space shown in FIG. 7 is not provided to any terminal device.

 FIG. 8 is a diagram showing an example of a landscape in the virtual space VS to which a visual effect dedicated to the first user U_K is applied. FIG. 8 shows the same field of view as FIG. 7. As shown in FIG. 8, buildings 101_K to 105_K correspond to the buildings 101 to 105 in FIG. 7, respectively. A common color and pattern are applied to the buildings 101_K to 105_K as the visual effect, so the colors and patterns of the buildings 101_K to 105_K are unified.

 When the first user U_K visits city J in the virtual space VS, the first user U_K sees, on the display of the terminal device 10-K or on the XR glasses, a landscape with the colors and patterns shown in FIG. 8 as the individual space DS-J_K. The first user U_K therefore forms an impression of city J from this unified cityscape of city J.

 The information on the visual effect dedicated to the first user U_K in city J in the virtual space VS is stored in the storage device 22 of the individual server 20-J corresponding to city J in the virtual space VS.

 FIG. 9 is a diagram showing an example of a landscape in the virtual space VS to which a visual effect dedicated to a second user U_L is applied. The second user U_L is a user who uses a terminal device 10-L and is a different user from the first user U_K. FIG. 9 shows the same field of view as FIG. 7 and FIG. 8. As shown in FIG. 9, buildings 101_L to 105_L correspond to the buildings 101 to 105 in FIG. 7, respectively. A common color and pattern are applied to the buildings 101_L to 105_L as the visual effect, so the surface colors and patterns of the buildings 101_L to 105_L are unified.

 The surface colors and patterns of the buildings 101_L to 105_L are unified, but they differ from the surface colors and patterns of the buildings 101_K to 105_K in FIG. 8. When the second user U_L visits city J in the virtual space VS, the second user U_L sees, on the display of the terminal device 10-L or on the XR glasses, a landscape with the colors and patterns shown in FIG. 9 as the individual space DS-J_L. The second user U_L therefore forms an impression of city J from this unified cityscape of city J.

 The information on the visual effect dedicated to the second user U_L in city J in the virtual space VS is stored in the storage device 22 of the individual server 20-J corresponding to city J in the virtual space VS.

 In this way, even when the first user U_K and the second user U_L visit the same city in the virtual space VS, the landscape appears differently to each of them.

 For example, when tennis and travel are extracted as the hobbies in the attribute information of the first user U_K, the determination unit 313 determines, from the keywords tennis and travel, colors with a bright impression as the colors of the buildings 101_K to 105_K. When reading is extracted as the hobby in the attribute information of the second user U_L, the determination unit 313 determines, from the keyword reading, colors with a calm impression as the colors of the buildings 101_L to 105_L.
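 As a rough illustration of this step, the determination unit 313 can be pictured as mapping extracted hobby keywords to a color impression and taking a majority vote. The rule table below is entirely hypothetical; the embodiment leaves the concrete keyword-to-color correspondence open (e.g., it may instead be produced by a language model).

    # Hypothetical keyword-to-color rule used only for illustration.
    IMPRESSION_COLORS = {
        "tennis": "bright", "travel": "bright",   # lively hobbies -> bright colors
        "reading": "calm",                        # quiet hobbies  -> calm colors
    }

    def decide_building_color(hobby_keywords):
        """Return a common color impression for the visited city's buildings."""
        impressions = [IMPRESSION_COLORS.get(k) for k in hobby_keywords]
        impressions = [i for i in impressions if i is not None]
        # Majority vote; fall back to a neutral impression if nothing matched.
        return max(set(impressions), key=impressions.count) if impressions else "neutral"

    print(decide_building_color(["tennis", "travel"]))  # -> "bright"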

 Another index usable as attribute information is the user's impression of each city. For example, when the name of one of the plurality of cities appears as a keyword in the history of past utterances or posts by the first user U_K, the determination unit 313 extracts at least one term related to the impression of that city picked up within a predetermined period including the time when the city name appeared.

 For example, when the determination unit 313 extracts the keyword "Kamakura" as a city, the determination unit 313 extracts terms related to the impression of "Kamakura" within a predetermined period including the time when that keyword appeared.

 The determination unit 313 determines the image of "Kamakura" by inputting the extracted terms as a prompt into a well-known large language model. Known examples of large language models include ChatGPT (https://openai.com/chatgpt) and StableLM (https://stability.ai/blog/stability-ai-launches-the-first-of-its-stablelm-suite-of-language-models).

 For example, when the determination unit 313 extracts terms such as sea and surfing from the utterances and posts of the first user U_K within the predetermined period, the image "Kamakura is a surfing town" is obtained. In this case, the image of "Kamakura" is, for example, a bright image bathed in plenty of sunlight.

 For example, when the determination unit 313 extracts terms such as temples, famous historic sites, and historical figures from the utterances and posts of the second user U_L within the predetermined period, the image "Kamakura is a historic town" is obtained. In this case, the image of "Kamakura" is, for example, an image of calm colors.
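 The windowed extraction around a city-name mention might be sketched as follows. The time-stamped post format, the window length, and the list of impression terms are assumptions made only for this illustration.

    from datetime import datetime, timedelta

    # Posts are assumed to be (timestamp, text) pairs; this format is hypothetical.
    posts = [
        (datetime(2024, 5, 1, 10, 0), "Went to Kamakura last weekend"),
        (datetime(2024, 5, 1, 10, 5), "The sea was great, tried surfing too"),
        (datetime(2024, 5, 3, 9, 0), "Back to work..."),
    ]

    IMPRESSION_TERMS = {"sea", "surfing", "temples", "historic"}  # illustrative list

    def extract_city_impressions(posts, city="Kamakura", window=timedelta(days=1)):
        """Collect impression terms posted within `window` of a mention of `city`."""
        mention_times = [t for t, text in posts if city in text]
        terms = set()
        for t, text in posts:
            if any(abs(t - m) <= window for m in mention_times):
                terms |= {w for w in text.lower().split() if w in IMPRESSION_TERMS}
        return terms

    print(extract_city_impressions(posts))  # -> {'sea', 'surfing'}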

 FIG. 10 is a diagram showing an example of avatars coming and going in the virtual space VS of FIG. 7. In other words, FIG. 10 shows an example of a landscape in the virtual space VS with no visual effect applied and of avatars with no visual effect applied.

 In FIG. 10, the avatar of the first user U_K is an avatar 201, and the avatars of other users, i.e., third parties, are avatars 202, 203, and 204. The appearance of each of the avatars 201 to 204, including its clothing, is set in advance by the corresponding user. The appearances of the avatars therefore often lack uniformity. Note that FIG. 10 is an auxiliary diagram for explaining the first embodiment, and the space shown in FIG. 10 is not provided to any terminal device.

 FIG. 11 is a diagram showing an example of avatars coming and going in the virtual space VS of FIG. 8. In other words, FIG. 11 shows an example of a landscape and avatars in the virtual space VS to which the visual effect dedicated to the first user U_K is applied. As shown in FIG. 11, the visual effect dedicated to the first user U_K is not applied to the appearance of the avatar 201 of the first user U_K but is commonly applied to the appearances of the avatars 202_K to 204_K other than the avatar 201. That is, the generation unit 314 applies the same visual effect to the avatars 202_K to 204_K other than the avatar 201 of the first user U_K.

 The determination unit 313 determines, based on the attribute information of the first user U_K, the appearance of the third-party avatars 202_K to 204_K coming and going in the city in the virtual space VS visited by the first user U_K. For example, when the determination unit 313 obtains the image "Kamakura is a surfing town" from the utterances, posts, and the like of the first user U_K, the image of "Kamakura" is, for example, a bright image bathed in plenty of sunlight. In this case, the determination unit 313 determines T-shirts with a yacht pattern, reminiscent of the sea, as the clothing worn by the third-party avatars 202_K to 204_K.

 Note that the correlation between the "city image" and the "clothing worn by a third-party avatar" is machine-learned in advance by the learning unit 317 using a large amount of data.

 The generation unit 314 generates the individual space DS-J_K of city J visited by the first user U_K by applying the third-party avatar appearance determined by the determination unit 313 to the third-party avatars 202_K to 204_K belonging to the city visited by the first user U_K.

 The providing unit 315 provides the individual space DS-J_K generated by the generation unit 314 to the terminal device 10-K used by the first user U_K. The first user U_K therefore perceives the avatar 201 of the first user U_K with the appearance that the first user U_K set in advance, and perceives the third-party avatars 202_K to 204_K as wearing T-shirts with a yacht pattern. The first user U_K thus forms an impression of city J from the unified appearance of the third-party avatars 202_K to 204_K.

 The data on the appearances of the avatars 202_K to 204_K with the visual effect of FIG. 11 applied is stored in the storage device 22 of the individual server 20-J corresponding to city J.

 FIG. 12 is a diagram showing an example of avatars coming and going in the virtual space VS of FIG. 9. In other words, FIG. 12 shows an example of a landscape and avatars in the virtual space VS to which the visual effect dedicated to the second user U_L is applied. As shown in FIG. 12, the visual effect dedicated to the second user U_L is not applied to the appearance of the avatar 202 of the second user U_L but is commonly applied to the appearances of the avatars 201_L, 203_L, and 204_L other than the avatar 202. That is, the generation unit 314 applies the same visual effect to the avatars 201_L, 203_L, and 204_L other than the avatar 202 of the second user U_L.

 The determination unit 313 determines, based on the attribute information of the second user U_L, the appearance of the third-party avatars 201_L, 203_L, and 204_L coming and going in the city in the virtual space VS visited by the second user U_L. For example, when the determination unit 313 obtains the image "Kanda is a town of reading" from the utterances, posts, and the like of the second user U_L, the image of "Kanda" is a calm and quiet image. In this case, the determination unit 313 determines plain monotone shirts as the clothing worn by the third-party avatars 201_L, 203_L, and 204_L.

 Note that the correlation between the "city image" and the "clothing worn by a third-party avatar" is machine-learned in advance by the learning unit 317 using a large amount of data.

 The generation unit 314 generates the individual space DS-J_L of the city visited by the second user U_L by applying the third-party avatar appearance determined by the determination unit 313 to the third-party avatars belonging to that city.

 The providing unit 315 provides the individual space DS-J_L generated by the generation unit 314 to the terminal device 10-L used by the second user U_L. The second user U_L therefore perceives the avatar 202 of the second user U_L with the appearance that the second user U_L set in advance, and perceives the third-party avatars 201_L, 203_L, and 204_L as wearing plain monotone shirts. The second user U_L thus forms an impression of city J from the unified appearance of the third-party avatars 201_L, 203_L, and 204_L.

 The data on the appearances of the avatars 201_L, 203_L, and 204_L with the visual effect of FIG. 12 applied is stored in the storage device 22 of the individual server 20-J corresponding to city J.

 FIG. 13 is a diagram showing an example of a reception counter 61 of a government office 60 in the individual space DS-J_K. By visiting the government office 60 in the individual space DS-J_K, the first user U_K can receive the administrative services of city J corresponding to the individual space DS-J_K.

 As shown in FIG. 13, the first user U_K refers to a list 62 of procedures at the reception counter 61 of the government office 60 and selects the desired procedure from the list 62. When the desired procedure is obtaining a birth certificate, the first user U_K selects "birth certificate" from the list 62.

 The user ID with which the first user U_K accesses the virtual space VS and the administrative personal number of the first user U_K are linked to each other. The relationship between the user ID of the first user U_K and the administrative personal number of the first user U_K is stored in the storage device 32 of the management server 30 as an ID database IDB.

 That is, in response to an operation by the first user U_K, the avatar control unit 316 causes the avatar 201 of the first user U_K to carry out a procedure at the government office 60 in the virtual space VS using the user ID for accessing the virtual space VS. Through this procedure, the avatar control unit 316 provides the first user U_K with a service equivalent to an administrative service of city J in the real space.
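 A minimal sketch of the link held in the ID database IDB is shown below; the table layout, the dummy values, and the procedure-handler interface are all assumptions made for illustration.

    # Hypothetical in-memory stand-in for the ID database IDB, which links a
    # virtual-space user ID to an administrative personal number.
    ID_DATABASE_IDB = {
        "user-K": "1234-5678-9012",   # first user U_K (values are dummies)
    }

    def request_procedure(user_id, procedure="birth certificate"):
        """Resolve the linked personal number and forward the selected procedure."""
        personal_number = ID_DATABASE_IDB.get(user_id)
        if personal_number is None:
            raise KeyError(f"no personal number linked to {user_id}")
        return {"procedure": procedure, "personal_number": personal_number}

    print(request_procedure("user-K"))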

1.2. Operation of the virtual space management device according to the first embodiment
1.2.1. First operation of the processing device 31
 FIG. 14 is a flowchart showing the first operation of the processing device 31 of FIG. 5. The first operation of the processing device 31 will be described below with reference to FIG. 14. The routine of FIG. 14 is started, for example, when the processing device 31 is started, and is executed each time a certain period of time elapses.

 In step S11, the processing device 31 functions as the judgment unit 311 to determine whether the first user U_K is visiting city J in the virtual space VS for the first time. Whether the first user U_K is visiting city J in the virtual space VS for the first time is determined, for example, by whether a visual effect corresponding to the first user U_K is stored in the storage device 22 of the individual server 20-J corresponding to city J.

 That is, when no visual effect corresponding to the first user U_K is stored in the storage device 22 of the individual server 20-J, it is determined that the first user U_K is visiting city J for the first time. On the other hand, when a visual effect corresponding to the first user U_K is stored in the storage device 22 of the individual server 20-J, it is determined that the first user U_K has already visited city J.

 When it is determined in step S11 that the first user U_K is visiting city J for the first time, i.e., when the determination result in step S11 is affirmative, the processing device 31 functions as the acquisition unit 312 to acquire, in step S12, the action history of the first user U_K from the individual servers 20-1 to 20-M and the terminal device 10-K.

 Next, in step S13, the processing device 31 functions as the determination unit 313 to extract the attribute information of the first user U_K from the acquired action history and the like.

 Next, in step S14, the processing device 31 functions as the determination unit 313 to determine, based on the attribute information of the first user U_K, the colors and patterns of the plurality of buildings belonging to city J, i.e., the visual effect.

 Next, in step S15, the processing device 31 functions as the determination unit 313 to determine, based on the attribute information of the first user U_K, the appearance of the third-party avatars in city J, i.e., the visual effect.

 Next, in step S16, the processing device 31 functions as the determination unit 313 to store the determined visual effect corresponding to the first user U_K in city J in the virtual space VS in the storage device 22 of the individual server 20-J corresponding to city J.

 Next, in step S17, the processing device 31 functions as the generation unit 314 to generate the individual space DS-J_K by applying the determined visual effect to the plurality of buildings belonging to city J and to the third-party avatars.

 Next, in step S18, the processing device 31 functions as the providing unit 315 to provide the generated individual space DS-J_K to the terminal device 10-K of the first user U_K, and then ends this routine for the time being.

 When it is determined in step S11 that this is not the first time the first user U_K has visited city J, i.e., when the determination result in step S11 is negative, the processing device 31 functions as the generation unit 314 to generate, in step S17, the individual space DS-J_K by applying the visual effect stored in the storage device 22 of the individual server 20-J to the plurality of buildings belonging to city J and to the third-party avatars.

 Although the above routine illustrates the case where the first user U_K visits city J in the virtual space VS, the processing device 31 executes the same processing as in the above routine when any user visits any city.
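 Read as code, the flow of steps S11 to S18 in FIG. 14 might look like the sketch below. The helper functions stand in for the units of the processing device 31 and are hypothetical; only the branching structure mirrors the flowchart.

    # Hypothetical stand-ins for the units of the processing device 31.
    def acquire_action_history(user): return {"posts": []}             # S12
    def extract_attributes(history): return {"hobby": "surfing"}       # S13
    def decide_building_effect(attrs): return "bright colors"          # S14
    def decide_avatar_appearance(attrs): return "yacht-pattern tee"    # S15
    def generate_individual_space(city, effect): return (city, effect) # S17
    def provide_to_terminal(user, space): print(user, "sees", space)   # S18

    def first_operation(user, city, store):
        """Sketch of steps S11-S18 of FIG. 14; `store` stands in for the
        storage device 22 of the individual server corresponding to `city`."""
        effect = store.get((user, city))
        if effect is None:                                   # S11: first visit?
            attrs = extract_attributes(acquire_action_history(user))  # S12-S13
            effect = {"buildings": decide_building_effect(attrs),     # S14
                      "avatars": decide_avatar_appearance(attrs)}     # S15
            store[(user, city)] = effect                     # S16: kept for revisits
        provide_to_terminal(user, generate_individual_space(city, effect))  # S17-S18

    store = {}
    first_operation("user-K", "city-J", store)   # first visit: effects decided
    first_operation("user-K", "city-J", store)   # revisit: stored effects reused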

1.2.2. Second operation of the processing device 31
 FIG. 15 is a flowchart showing the second operation of the processing device 31 of FIG. 5. The second operation of the processing device 31 will be described below with reference to FIG. 15. The routine of FIG. 15 is started, for example, when the processing device 31 is started, and is executed each time a certain period of time elapses.

 In step S21, the processing device 31 functions as the avatar control unit 316 to acquire avatar operation information and avatar position information.

 Next, in step S22, the processing device 31 functions as the avatar control unit 316 to determine whether the avatar 201 has arrived at the reception counter 61 of the government office 60.

 When the avatar control unit 316 determines in step S22 that the avatar 201 has not arrived at the reception counter 61 of the government office 60, i.e., when the determination result in step S22 is negative, the processing device 31 ends this routine for the time being.

 On the other hand, when the avatar control unit 316 determines in step S22 that the avatar 201 has arrived at the reception counter 61 of the government office 60, i.e., when the determination result in step S22 is affirmative, the processing device 31 functions as the avatar control unit 316 to display the list 62 of procedures in step S23.

 Next, in step S24, the processing device 31 functions as the avatar control unit 316 to acquire avatar operation information.

 Next, in step S25, the processing device 31 functions as the avatar control unit 316 to determine whether any procedure in the displayed list 62 has been selected by the avatar 201.

 When the avatar control unit 316 determines in step S25 that no procedure in the displayed list 62 has been selected by the avatar 201, i.e., when the determination result in step S25 is negative, the processing device 31 functions as the avatar control unit 316 to acquire the avatar operation information again in step S24.

 On the other hand, when the avatar control unit 316 determines in step S25 that one of the procedures in the displayed list 62 has been selected by the avatar 201, i.e., when the determination result in step S25 is affirmative, the processing device 31 functions as the avatar control unit 316 to proceed with the processing in step S26 in accordance with the selected procedure, and then ends this routine for the time being.

 Although the above routine illustrates the case where the first user U_K visits city J in the virtual space VS, the processing device 31 executes the same processing as in the above routine when any user visits any city.
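 The counter interaction of steps S21 to S26 in FIG. 15 reduces to a small event loop, sketched below under the assumption of a synchronous input helper that the embodiment itself does not define.

    PROCEDURE_LIST_62 = ["birth certificate", "residence certificate", "seal registration"]

    def reception_counter(avatar_at_counter, get_avatar_input):
        """Sketch of steps S21-S26 of FIG. 15."""
        if not avatar_at_counter:                  # S22: avatar not at counter 61
            return None
        print("Procedures:", PROCEDURE_LIST_62)    # S23: display list 62
        while True:
            choice = get_avatar_input()            # S24: avatar operation info
            if choice in PROCEDURE_LIST_62:        # S25: a procedure was selected
                return f"processing '{choice}'"    # S26: proceed accordingly

    inputs = iter(["(walking)", "birth certificate"])
    print(reception_counter(True, lambda: next(inputs)))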

1.2.3. Third operation of the processing device 31
 FIG. 16 is a flowchart showing the third operation of the processing device 31 of FIG. 5. The third operation of the processing device 31 will be described below with reference to FIG. 16. The third operation relates to the machine learning method performed by the learning unit 317. The routine of FIG. 16 is started, for example, when the processing device 31 is started, and is executed each time a certain period of time elapses.

 In step S31, the processing device 31 functions as the teacher data acquisition unit 317a to prepare, as advance preparation for starting machine learning, multiple pieces of teacher data TD1, and stores the prepared teacher data TD1 in the storage device 32. The number of pieces of teacher data prepared here may be set in consideration of the inference accuracy required of the learning model LM1 that is ultimately obtained.

 Next, in step S32, the processing device 31 functions as the teacher data acquisition unit 317a to prepare the untrained learning model LM1 in order to start the machine learning. The untrained learning model LM1 prepared here adopts the neural network model 90 shown in FIG. 6, with the weight of each synapse set to an initial value. Each neuron of the input layer 91 is associated with a "city image" serving as the input data constituting the teacher data TD1. Each neuron of the output layer 93 is associated with "clothing worn by a third-party avatar" serving as the output data constituting the teacher data TD1.

 Next, in step S33, the processing device 31 functions as the teacher data acquisition unit 317a to acquire one data set, for example at random, from the multiple pieces of teacher data TD1 stored in the storage device 32.

 Next, in step S34, the processing device 31 functions as the model generation unit 317b to input the input data contained in the data set to the input layer 91 of the prepared untrained or in-training learning model LM1. As a result, output data is output from the output layer 93 of the learning model LM1 as an inference result. However, this output data is generated by the untrained or in-training learning model LM1. In the untrained or in-training state, therefore, the data output as the inference result indicates information different from the output data contained in the data set, i.e., the correct label.

 Next, in step S35, the processing device 31 functions as the model generation unit 317b to compare the output data contained in the data set acquired in step S33, i.e., the correct label, with the output data output as the inference result from the output layer 93 in step S34, and performs machine learning by adjusting the weight of each synapse. In this way, the model generation unit 317b causes the learning model LM1 to learn the correlation between the input data and the output data.

 Next, in step S36, the processing device 31 functions as the model generation unit 317b to determine whether a predetermined learning end condition is satisfied based on the value of the evaluation function, which is based on the inference result and the output data contained in the data set, i.e., the correct label. The model generation unit 317b may instead determine whether the predetermined learning end condition is satisfied based on the inference result and the remaining number of unlearned data sets stored in the storage device 32.

 When the model generation unit 317b determines in step S36 that the learning end condition is not satisfied and the machine learning is to be continued, i.e., when the determination result in step S36 is negative, the processing device 31 functions as the model generation unit 317b to perform the processing from step S33 to step S35 on the in-training learning model LM1 multiple times using unlearned data sets.

 On the other hand, when the model generation unit 317b determines in step S36 that the learning end condition is satisfied, i.e., when the determination result in step S36 is affirmative, the processing device 31 functions as the model generation unit 317b to store, in step S37, the trained learning model LM1 obtained by the machine learning that adjusted the weights associated with the synapses, i.e., the adjusted group of weight parameters, in the storage device 32, and then ends this routine for the time being.
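 Steps S33 to S37 amount to an online-learning loop with random sampling and a two-part end condition. The schematic below makes that loop explicit; the iteration cap, the tolerance, and the `train_step`/`save_model` callables are placeholders chosen for illustration.

    import random

    MAX_ITERATIONS = 10_000   # arbitrary iteration cap (part of the end condition)
    TOLERANCE = 0.05          # arbitrary allowable evaluation-function value

    def train_lm1(teacher_data_td1, train_step, save_model):
        """Sketch of steps S33-S37 of FIG. 16. `train_step` performs one
        weight adjustment and returns the evaluation-function value."""
        for iteration in range(MAX_ITERATIONS):           # S36: iteration cap
            sample = random.choice(teacher_data_td1)      # S33: random data set
            loss = train_step(sample)                     # S34-S35: infer and adjust
            if loss < TOLERANCE:                          # S36: small enough?
                break
        save_model()                                      # S37: store trained LM1

    # Toy invocation with a fake loss sequence that falls below the tolerance.
    losses = iter([0.9, 0.4, 0.04])
    train_lm1([{"x": 0}], train_step=lambda s: next(losses),
              save_model=lambda: print("saved"))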

 In the machine learning method of FIG. 16, the case where online learning is adopted as the method of adjusting the weights has been described, but batch learning, mini-batch learning, or the like may be adopted. Furthermore, whether the predetermined learning end condition is satisfied may be determined based on a misjudgment rate.

1.3. Effects of the first embodiment
 According to the above description, the processing device 31 as the virtual space management device according to the first embodiment includes the determination unit 313, the generation unit 314, and the providing unit 315. The determination unit 313 determines, based on the attribute information indicating the attributes of the first user U_K, the visual effect for the visited city that the first user U_K visits among the plurality of cities in the virtual space VS. The generation unit 314 generates the individual space of the visited city by commonly applying the visual effect determined by the determination unit 313 to the plurality of buildings belonging to the visited city. The providing unit 315 provides the individual space generated by the generation unit 314 to the terminal device 10-K used by the first user U_K.

 According to this aspect, for each visited city, a common visual effect based on the attribute information of the first user U_K is set for the plurality of virtual objects arranged in the public space of the virtual space VS. A unified cityscape is thereby formed. Consequently, for each city, a virtual space VS reflecting the worldview unique to the first user U_K is generated, giving the first user U_K an impressive or attractive visiting experience.

 The determination unit 313 also determines the visual effect when the first user U_K visits the visited city in the virtual space VS for the first time, and maintains the visual effect when the first user U_K visits the visited city again.

 According to this aspect, the colors and designs of the virtual objects in the virtual space VS set at the time of the first visit are maintained for each city. Consequently, even when the first user U_K revisits a visited city, the previous worldview is maintained.

 The determination unit 313 also determines, based on the attribute information, the appearance of the third-party avatars coming and going in the visited city in the virtual space VS visited by the first user U_K. The generation unit 314 generates the individual space DS-J_K of the visited city by applying the third-party avatar appearance determined by the determination unit 313 to the third-party avatars 202, 203, and 204 belonging to the visited city. The providing unit 315 provides the individual space DS-J_K generated by the generation unit 314 to the terminal device 10-K used by the first user U_K.

 According to this aspect, whatever clothing the third-party avatars are actually wearing, the first user U_K sees appearances changed based on the attribute information of the first user U_K, so the worldview of the first user U_K with respect to the visited city is expressed more impressively.

 The determination unit 313 also determines the appearance of the third-party avatars when the first user U_K visits the visited city in the virtual space VS for the first time, and maintains the appearance of the third-party avatars when the first user U_K visits the visited city again.

 According to this aspect, the appearance of the third-party avatars is maintained when the first user U_K revisits the visited city. Consequently, even when the first user U_K revisits the visited city, the previous worldview is maintained.

 Furthermore, when the name of one of the plurality of cities appears as a keyword in the history of past utterances or posts by the first user U_K, the determination unit 313 extracts at least one term related to the impression of that city picked up within a predetermined period including the time when the city name appeared.

 According to this aspect, the impression that the first user U_K has of a city is extracted with higher accuracy.

 The processing device 31 also includes the avatar control unit 316. The avatar control unit 316 provides the first user U_K with a service equivalent to an administrative service in the real space by causing the avatar 201 of the first user U_K to carry out a procedure at the government office 60 in the virtual space VS using the user ID for accessing the virtual space VS.

 According to this aspect, administrative services can be received without going to a government office existing in the real space, which improves the convenience of the first user U_K.

2. Modifications
 The present disclosure is not limited to the embodiment exemplified above. Specific modifications are exemplified below. Two or more aspects arbitrarily selected from the following examples may be combined. The aspects of the above embodiment and the following modifications may also be combined in any way as long as they do not contradict one another.

2.1. Modification 1
 In the first embodiment, the visual effect information VEI is applied to the colors and patterns of the plurality of buildings belonging to city J in the virtual space VS and to the visuals of the third-party avatars coming and going in city J in the virtual space VS. However, the visual effect information VEI may also include items related to the "city image" held by the user.

 FIG. 17 is a diagram showing an example of a landscape in the virtual space VS to which items included in the visual effect information VEI are applied. FIG. 17 shows the landscape when the first user U_K visits city J in the virtual space VS. Since the first user U_K holds the image of city J as a surfing town, the determination unit 313 places surfboards 301_K and 302_K on street corners as a visual effect.

 Also, for example, for a city of which the user holds an image of "reading," bronze statues of literary masters are placed on street corners, and for a city of which the user holds an image of "festivals," lanterns are placed on street corners.

2.2. Modification 2
 In the first embodiment, the three-dimensional CG models CGM of the plurality of buildings belonging to each city are stored in the storage devices 22 of the individual servers 20-1 to 20-M corresponding to the respective cities, but they may instead be stored collectively in the storage device 32 of the management server 30.

 Also, in the first embodiment, the information on the visual effect unique to each user in the virtual space VS of each city is placed, as visual effect information VEI, in the storage devices 22 of the individual servers 20-1 to 20-M corresponding to the respective cities. However, the visual effect information VEI stored in each storage device 22 may instead be stored collectively in the storage device 32 of the management server 30.

2.3. Modification 3
 In the first embodiment, the management server 30 is provided separately from the individual servers 20-1 to 20-M, but the functions of the management server 30 may be distributed among the individual servers 20-1 to 20-M.

2.4. Modification 4
 In the first embodiment, the determination unit 313 determines the colors and patterns of the buildings using the large language model LLM, but a learning model trained by supervised learning on the correlation between the "city image" and the "colors and patterns of buildings" may be used instead.

2.5. Modification 5
 In the first embodiment, the determination unit 313 determines the clothing worn by the third-party avatars using the learning model LM1 trained by supervised learning on the correlation between the "city image" and the "clothing worn by a third-party avatar." However, a learning model trained by unsupervised learning using a large amount of data including "city images" and "clothing worn by third-party avatars" may be used to determine the clothing worn by the third-party avatars.

 The determination unit 313 may also determine the clothing worn by the third-party avatars using a well-known large language model. For example, the determination unit 313 may make the determination by inputting into the large language model a prompt concerning the extracted "city image" of city J and a prompt requesting the output of clothing features matching the "city image" of city J.

3. Others
 (1) In the above-described embodiment, the storage device 12, the storage device 22, and the storage device 32 are exemplified by a ROM, a RAM, and the like, but they may be a flexible disk, a magneto-optical disk (e.g., a compact disc, a digital versatile disc, a Blu-ray (registered trademark) disc), a smart card, a flash memory device (e.g., a card, a stick, a key drive), a CD-ROM (Compact Disc-ROM), a register, a removable disk, a hard disk, a floppy (registered trademark) disk, a magnetic strip, a database, a server, or another suitable storage medium. The program may also be transmitted from a network, such as the communication network NET, via telecommunication lines.

 (2) The information, signals, and the like described in the above embodiment may be represented using any of a variety of different technologies. For example, data, instructions, commands, information, signals, bits, symbols, chips, and the like that may be referred to throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or magnetic particles, optical fields or photons, or any combination thereof.

 (3) In the above embodiment, input and output information and the like may be stored in a specific location (e.g., a memory) or may be managed using a management table. Input and output information and the like may be overwritten, updated, or appended. Output information and the like may be deleted. Input information and the like may be transmitted to another device.

 (4) In the above embodiment, a determination may be made based on a value represented by one bit (0 or 1), based on a Boolean value (true or false), or based on a comparison of numerical values (e.g., a comparison with a predetermined value).

 (5) The processing procedures, sequences, flowcharts, and the like exemplified in the above embodiment may be reordered as long as no contradiction arises. For example, the methods described in this disclosure present the elements of the various steps in an exemplary order and are not limited to the particular order presented.

 (6) Each function exemplified in FIGS. 1 to 17 is realized by any combination of at least one of hardware and software. The method of realizing each functional block is not particularly limited. That is, each functional block may be realized using one physically or logically coupled device, or may be realized using two or more physically or logically separated devices connected directly or indirectly (e.g., by wire, wirelessly, or the like). A functional block may be realized by combining the one device or the plurality of devices with software.

 (7) The program exemplified in the above embodiment should be broadly construed to mean an instruction, an instruction set, code, a code segment, program code, a program, a subprogram, a software module, an application, a software application, a software package, a routine, a subroutine, an object, an executable file, a thread of execution, a procedure, a function, or the like, regardless of whether it is called software, firmware, middleware, microcode, a hardware description language, or another name.

 Software, instructions, information, and the like may also be transmitted and received via a transmission medium. For example, when software is transmitted from a website, a server, or another remote source using at least one of wired technology (coaxial cable, optical fiber cable, twisted pair, digital subscriber line (DSL), etc.) and wireless technology (infrared, microwave, etc.), at least one of these wired and wireless technologies is included within the definition of a transmission medium.

(8) In each of the embodiments described above, the terms "system" and "network" are used interchangeably.

(9) The information, parameters, and the like described in this disclosure may be expressed using absolute values, using values relative to a predetermined value, or using other corresponding information.

(10) In the embodiments described above, the terminal device 10 may be a mobile station (MS). A mobile station may also be referred to by those skilled in the art as a subscriber station, mobile unit, subscriber unit, wireless unit, remote unit, mobile device, wireless device, wireless communication device, remote device, mobile subscriber station, access terminal, mobile terminal, wireless terminal, remote terminal, handset, user agent, mobile client, client, or some other suitable term. In this disclosure, the terms "mobile station," "user terminal," "user equipment (UE)," and "terminal" may be used interchangeably.

(11) In the embodiments described above, the terms "connected" and "coupled," and any variations thereof, mean any direct or indirect connection or coupling between two or more elements, and may include the presence of one or more intermediate elements between two elements that are "connected" or "coupled" to each other. The coupling or connection between elements may be physical, logical, or a combination thereof. For example, "connected" may be read as "accessed." As used in this disclosure, two elements may be considered to be "connected" or "coupled" to each other by using at least one of one or more electrical wires, cables, and printed electrical connections, and, as some non-limiting and non-exhaustive examples, by using electromagnetic energy having wavelengths in the radio frequency region, the microwave region, and the optical (both visible and invisible) region.

(12) In the embodiments described above, the phrase "based on" does not mean "based only on" unless otherwise specified. In other words, "based on" means both "based only on" and "based at least on."

(13) As used in this disclosure, the term "determining" may encompass a wide variety of actions. "Determining" may include, for example, regarding judging, calculating, computing, processing, deriving, investigating, looking up (e.g., searching in a table, a database, or another data structure), or ascertaining as "determining." "Determining" may also include regarding receiving (e.g., receiving information), transmitting (e.g., transmitting information), input, output, or accessing (e.g., accessing data in a memory) as "determining." "Determining" may further include regarding resolving, selecting, choosing, establishing, comparing, and the like as "determining." In other words, "determining" may include regarding some action as "determining." "Determining" may also be read as "assuming," "expecting," "considering," and the like.

(14) In the embodiments described above, where "include," "including," and variations thereof are used, these terms are intended to be inclusive, in the same manner as the term "comprising." Furthermore, the term "or" as used in this disclosure is not intended to be an exclusive or.

(15) In this disclosure, where articles such as a, an, and the in English are added by translation, this disclosure may include the case where a noun following such an article is plural.

(16) In this disclosure, the phrase "A and B are different" may mean "A and B are different from each other." The phrase may also mean "A and B are each different from C." Terms such as "separated" and "coupled" may be interpreted in the same manner as "different."

(17) Each aspect/embodiment described in this disclosure may be used alone, may be used in combination, or may be switched as it is carried out. Notification of predetermined information (e.g., notification of "being X") is not limited to an explicit notification and may be performed implicitly (e.g., by not notifying the predetermined information).

 Although the present disclosure has been described in detail above, it is apparent to those skilled in the art that the present disclosure is not limited to the embodiments described herein. The present disclosure can be implemented in modified and altered forms without departing from the spirit and scope of the present disclosure as defined by the claims. Accordingly, the description in this disclosure is for illustrative purposes and has no restrictive meaning with respect to the present disclosure.

 REFERENCE SIGNS LIST: 1...information processing system; 10, 10-1, 10-2, 10-K, 10-L, 10-N...terminal device; 30...management server; 101 to 105, 101_K to 105_K, 101_L to 105_L...building; 201, 201_L, 202, 202_K, 203, 203_K, 203_L, 204, 204_K, 204_L...avatar; 313...determination unit; 314...generation unit; 315...provision unit; 316...avatar control unit; DS-J_K, DS-J_L...individual space; LLM...large language model; U_K...first user; U_L...second user; VS...virtual space

Claims (6)

1. A virtual space management device that provides a user with a virtual space in which a plurality of individual spaces corresponding one-to-one to a plurality of cities are integrated, the device comprising:
a determination unit that determines, based on attribute information indicating an attribute of the user, a visual effect for a visited city that the user visits among the plurality of cities in the virtual space;
a generation unit that generates an individual space of the visited city by applying the visual effect determined by the determination unit in common to a plurality of buildings belonging to the visited city; and
a provision unit that provides the individual space generated by the generation unit to a terminal device used by the user.
2. The virtual space management device according to claim 1, wherein the determination unit determines the visual effect when the user visits the visited city for the first time, and maintains the visual effect when the user visits the visited city again.
3. The virtual space management device according to claim 1, wherein the determination unit determines, based on the attribute information, an appearance of a third-party avatar traveling in the visited city, and the individual space of the visited city is generated by the generation unit by applying the appearance determined by the determination unit to the third-party avatar belonging to the visited city.
4. The virtual space management device according to claim 3, wherein the determination unit determines the appearance when the user visits the visited city for the first time, and maintains the appearance when the user visits the visited city again.
5. The virtual space management device according to claim 1, wherein, when a name of a city among the plurality of cities appears as a keyword in a history of past utterances or posts by the user, the determination unit extracts at least one term relating to an impression of the city picked up within a predetermined period including the point in time at which the name of the city appeared.
6. The virtual space management device according to claim 1, further comprising an avatar control unit, wherein the avatar control unit provides the user with a service equivalent to an administrative service in real space by causing the user's avatar to carry out a procedure at a government office in the virtual space using a user ID for accessing the virtual space.
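
For orientation, the sketch below shows one way the units recited in claims 1, 2, and 5 might fit together. It is a minimal illustration under stated assumptions, not the claimed implementation: the class and method names are hypothetical, the mapping from extracted impression terms to a visual effect is reduced to picking the first term, and a one-day window stands in for the "predetermined period"; claims 3, 4, and 6 would follow the same decide-once-and-cache pattern for avatar appearance and administrative procedures.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List, Tuple

@dataclass
class Post:
    text: str            # an utterance or post by the user
    timestamp: datetime

class VirtualSpaceManager:
    # Hypothetical sketch of the determination, generation, and provision units.

    WINDOW = timedelta(days=1)  # assumed "predetermined period" around a city mention

    def __init__(self, cities: Dict[str, List[str]]):
        self.cities = cities                             # city name -> building IDs
        self._effects: Dict[Tuple[str, str], dict] = {}  # (user_id, city) -> effect

    def extract_impression_terms(self, history: List[Post], city: str) -> List[str]:
        # Claim 5: collect terms posted within WINDOW of any mention of the city name.
        mention_times = [p.timestamp for p in history if city in p.text]
        terms: List[str] = []
        for p in history:
            if any(abs(p.timestamp - t) <= self.WINDOW for t in mention_times):
                terms.extend(w for w in p.text.split() if w != city)
        return terms

    def determine_effect(self, user_id: str, history: List[Post], city: str) -> dict:
        # Claim 1 determination unit; per claim 2, the effect is decided on the
        # first visit and reused unchanged on every later visit.
        key = (user_id, city)
        if key not in self._effects:
            terms = self.extract_impression_terms(history, city)
            self._effects[key] = {"style": terms[0] if terms else "default"}
        return self._effects[key]

    def generate_individual_space(self, user_id: str, history: List[Post], city: str) -> dict:
        # Claim 1 generation unit: one effect applied in common to every building.
        effect = self.determine_effect(user_id, history, city)
        return {building: effect for building in self.cities[city]}

    def provide(self, user_id: str, history: List[Post], city: str) -> dict:
        # Claim 1 provision unit: in a real system the result would be sent to
        # the terminal device used by the user.
        return self.generate_individual_space(user_id, history, city)

For example, VirtualSpaceManager({"Kyoto": ["temple-01", "tower-02"]}).provide("u1", history, "Kyoto") would return the same effect dictionary for both buildings, and calling it again would leave that effect unchanged, mirroring claims 1 and 2.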
PCT/JP2024/031363 2023-09-04 2024-08-30 Virtual space management device Pending WO2025053089A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-142975 2023-09-04
JP2023142975 2023-09-04

Publications (1)

Publication Number Publication Date
WO2025053089A1 (en)

Family

ID=94924021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/031363 Pending WO2025053089A1 (en) 2023-09-04 2024-08-30 Virtual space management device

Country Status (1)

Country Link
WO (1) WO2025053089A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08161398A (en) * 1994-12-02 1996-06-21 N T T Data Tsushin Kk Integrated service system
JP2003323520A (en) * 2002-04-30 2003-11-14 Omron Corp Image distribution system and method
JP2017529635A (en) * 2014-06-14 2017-10-05 マジック リープ, インコーポレイテッドMagic Leap,Inc. Methods and systems for creating virtual and augmented reality
WO2018216602A1 (en) * 2017-05-26 2018-11-29 株式会社ソニー・インタラクティブエンタテインメント Information processing device, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 24862741
Country of ref document: EP
Kind code of ref document: A1