
HK1158335A - Shared virtual area communication environment based apparatus and methods - Google Patents


Info

Publication number
HK1158335A
HK1158335A (application HK11112647.9A)
Authority
HK
Hong Kong
Prior art keywords
user
virtual
virtual area
communicants
real
Prior art date
Application number
HK11112647.9A
Other languages
Chinese (zh)
Inventor
David Van Wie
Paul J. Brody
Original Assignee
Social Communications Company
Application filed by Social Communications Company filed Critical Social Communications Company
Publication of HK1158335A publication Critical patent/HK1158335A/en

Description

Apparatus and Methods for a Shared Virtual Area Communication Environment
Cross Reference to Related Applications
Under 35 U.S.C. § 119(e), this application claims the benefit of U.S. Provisional Application No. 61/042,714, filed April 5, 2008, which is incorporated herein by reference in its entirety.
The present application also relates to the following co-pending patent applications, each of which is incorporated herein by reference in its entirety:
U.S. patent application No. 12/354,709, filed January 15, 2009;
U.S. patent application No. 11/923,629, filed October 24, 2007; and
U.S. patent application No. 11/923,634, filed October 24, 2007.
Background
When face-to-face communication is not possible, people often rely on one or more technical solutions to meet their communication needs. These solutions are typically designed to simulate one or more aspects of face-to-face communication. Conventional telephone systems enable voice communication between callers. Instant messaging (also known as "chat") communication systems enable users to exchange text messages in real time through instant messaging computer clients interconnected by an instant messaging server. Some instant messaging systems also allow a user to be represented in a virtual environment by a user-controlled graphical object (referred to as an "avatar"). Interactive virtual reality communication systems enable users at remote locations to communicate over multiple real-time channels and to interact with each other through their respective avatars in a three-dimensional virtual space. What is needed are improved systems and methods for interfacing with a virtual communication environment.
Disclosure of Invention
In one aspect, the invention features a method in accordance with which interaction options are determined from the results of querying at least one interaction database that contains interaction records describing respective interactions of a user in a virtual communication environment. The virtual communication environment includes virtual areas and supports real-time communication between the user and other communicants. Each interaction record contains a respective place attribute value that identifies the virtual area in which the respective interaction occurred and one or more communicant identifier attribute values that identify the respective communicants who participated in the interaction in that virtual area. On a display, a user interface is presented that includes a graphical presentation of the interaction options in association with a respective set of one or more user-selectable controls. In response to the user's selection of a respective one of the user-selectable controls, a respective interaction by the user in the virtual communication environment is initiated.
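The interaction records and database query described in this aspect are not tied to any particular storage format. As an illustrative sketch only, the records can be modeled as rows of a relational table keyed by place and communicant identifiers; the schema, table name, and sample data below are assumptions, not part of the disclosure:

```python
import sqlite3

# In-memory interaction database (illustrative schema; the disclosure does
# not prescribe a storage format).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE interactions (place_id TEXT, communicant_id TEXT)")
db.executemany(
    "INSERT INTO interactions VALUES (?, ?)",
    [("SococoMain", "alice"), ("SococoMain", "bob"), ("HomeSpace", "alice")],
)

def interaction_options(user_id):
    """Return (place, partner) pairs describing the user's past interactions,
    which a user interface could present as user-selectable controls."""
    rows = db.execute(
        """SELECT a.place_id, b.communicant_id
           FROM interactions a JOIN interactions b
             ON a.place_id = b.place_id
           WHERE a.communicant_id = ? AND b.communicant_id != ?""",
        (user_id, user_id),
    ).fetchall()
    return sorted(set(rows))

print(interaction_options("alice"))  # [('SococoMain', 'bob')]
```

Selecting the control associated with the `('SococoMain', 'bob')` option would then initiate a corresponding interaction (e.g., entering that place or opening a channel to that contact).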
In another aspect, the invention features a method in accordance with which a presentation of a virtual area in a virtual communication environment is displayed on a display. The virtual communication environment supports real-time communication between the user and other communicants. On the display, user-selectable controls are presented that enable the user to manage interactions with the virtual area and with ones of the other communicants. Responsive to input received from the user through the user-selectable controls, a respective presence of the user is established in the virtual area. On the display, a graphical representation of each of the communicants who has presence in the virtual area is depicted. In this process, the graphical representations are contained at respective positions in the virtual area, and each communicant's graphical representation is rendered using a three-dimensional sphere element that supports a directional graphical visual element whose variable orientation indicates a direction of attention in the virtual area.
In another aspect, the invention features a method in accordance with which a presentation of a virtual area in a virtual communication environment is displayed on a display. The virtual communication environment supports real-time communication between users and other communicants. Presenting, on the display, a user-selectable control that enables the user to manage interactions with the virtual area and a communicant of the other communicants. In this process, an immersion control interface is displayed. The immersion control interface enables the user to select a degree of interaction with the particular virtual area from a set of different levels of interaction. Responsive to input received from the user through the user-selectable control, establishing a respective presence of the user in the virtual area. On the display, a graphical representation of each of the communicants who has presence in the virtual area is depicted.
In another aspect, the invention features a method in accordance with which place attribute values are associated with real-time interactions of a user and other communicants operating on respective network nodes and sharing a virtual communication environment. The virtual communication environment comprises one or more virtual areas and supports real-time communication between the user and the other communicants. For each interaction involving a respective one of the communicants in a respective one of the one or more virtual areas, the process of associating place attribute values includes generating a respective interaction record that includes a respective place attribute value and one or more communicant identifier attribute values, where the place attribute value identifies the virtual area in which the interaction occurred and the communicant identifier attribute values identify the respective communicants who participated in the interaction. The user and the other communicants are interfaced to the virtual communication environment based on the associated place attribute values.
In another aspect, the invention features a method in accordance with which an invitation to join a meeting is presented on a display at a predetermined time. The meeting is scheduled to be conducted in a virtual area of a virtual communication environment that supports real-time communication between the user and other communicants operating on respective network nodes. A control for accepting the invitation is presented on the display. Responsive to the user's selection of the control, a respective presence of the user is established in the virtual area. On the display, a presentation of the virtual area and a respective graphical presentation of each of the communicants who has presence in the virtual area are depicted.
In another aspect, the invention features a method in accordance with which a presentation of a virtual area in a virtual communication environment is displayed on a display. The virtual communication environment supports real-time communication between the user and other communicants operating on respective network nodes. On the display, user-selectable controls are presented that enable the user to manage interactions with the virtual area and with some of the other communicants. On the display, a graphical presentation of each of the communicants who has presence in the virtual area is depicted. In this process, a respective position of each communicant's graphical presentation in the virtual area is determined based on a respective real-time motion data stream that describes movement of that graphical presentation in the virtual area and that is received from the respective network node. At least a particular one of the graphical presentations is automatically repositioned based on the determined position of the particular graphical presentation in the virtual area and its proximity to at least one other of the graphical presentations in the virtual area.
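The automatic repositioning step can be sketched as follows. The minimum-separation threshold and the rule of pushing an overlapping graphical presentation directly away from its neighbor are illustrative assumptions, not details taken from the disclosure:

```python
import math

MIN_SEPARATION = 1.0  # assumed minimum avatar spacing, in virtual-area units

def reposition(avatars):
    """Given {communicant_id: (x, y)} positions derived from real-time
    motion streams, nudge any avatar that sits closer than MIN_SEPARATION
    to another one.  Returns a new position mapping."""
    placed = dict(avatars)
    ids = sorted(placed)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ax, ay = placed[a]
            bx, by = placed[b]
            d = math.hypot(bx - ax, by - ay)
            if d < MIN_SEPARATION:
                if d == 0:
                    # Coincident avatars: pick an arbitrary push direction.
                    bx, by = ax + MIN_SEPARATION, ay
                else:
                    # Push b directly away from a to the minimum separation.
                    scale = MIN_SEPARATION / d
                    bx, by = ax + (bx - ax) * scale, ay + (by - ay) * scale
                placed[b] = (bx, by)
    return placed

print(reposition({"alice": (0.0, 0.0), "bob": (0.5, 0.0)}))
# {'alice': (0.0, 0.0), 'bob': (1.0, 0.0)}
```

A production implementation would run this on every motion-stream update and would likely animate the nudge rather than teleport the avatar.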
In another aspect, the invention features a method in accordance with which a presentation of a virtual area in a virtual communication environment is displayed on a display. The virtual communication environment supports real-time communication between users and other communicants. Presenting, on the display, user-selectable controls that enable the user to manage interactions with the virtual area and some of the other communicants. The user-selectable controls include a modification control that enables the user to initiate modification of the virtual area as desired. Responsive to input received from the user through the user-selectable control, establishing a respective presence of the user in the virtual area. On the display, depicting a respective graphical representation of each of the communicants present in the virtual area.
In another aspect, the invention features a method in accordance with which place attribute values are associated with data files received from communicants operating on respective network nodes and sharing a virtual communication environment that comprises one or more virtual areas and supports real-time communication between the communicants. For each of the data files shared by a respective one of the communicants in a respective one of the one or more virtual areas, the process of associating the place attribute values produces a respective interaction record containing a respective one of the place attribute values and a respective data file identifier, where the place attribute value identifies the respective virtual area in which the data file is shared and the data file identifier identifies the respective data file. Sharing of the data files between the communicants is managed based on the associated place attribute values.
In another aspect, the invention features a method in accordance with which a graphical representation of a virtual area in a virtual communication environment is displayed on a display. The virtual communication environment supports real-time communication between a first communicant operating on a first network node and a second communicant operating on a second network node. A first software application is executed on the first network node, establishing a first real-time data stream connection between the first and second network nodes. The first real-time data stream connection is associated with a reference to the virtual area. Concurrently with the execution of the first software application, a second software application is executed on the first network node, establishing a second real-time data stream connection between the first network node and a third network node on which a third communicant operates. The second real-time data stream connection does not reference any virtual area. One or more integrated real-time data streams are generated from the real-time data streams exchanged over the first and second real-time data stream connections.
In another aspect, the invention features a method in accordance with which a server network node performs operations including the following. An instance of a client software application is executed in association with a virtual area in a virtual communication environment that supports real-time communication between communicants operating on respective client network nodes. Real-time input data streams are received from respective ones of the client network nodes associated with communicants interacting in the virtual area. A composite data stream is generated from the real-time input data streams. The composite data stream is input to the executing instance of the client software application. Respective instances of an output data stream are generated from output produced by the executing instance of the client software application at least partially in response to the input of the composite data stream. The instances of the output data stream are transmitted to respective ones of the client network nodes associated with communicants interacting in the virtual area.
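One way to picture one cycle of this server-side process: composite the clients' input streams, feed the composite to the running application instance, and fan the output back out to every client. The numeric sample frames and the stand-in "client software application" below are hypothetical simplifications, not part of the disclosure:

```python
def composite(streams):
    """Merge per-client real-time input samples into one composite stream
    (here, an element-wise sum of equal-length numeric frames)."""
    return [sum(frame) for frame in zip(*streams)]

def serve_tick(client_inputs, app_instance):
    """One cycle of the server loop: composite the clients' input streams,
    feed the result to the executing application instance, and transmit
    the resulting output stream back to every client."""
    mixed = composite(list(client_inputs.values()))
    output = app_instance(mixed)
    return {client: output for client in client_inputs}

# A stand-in "client software application" that doubles every sample.
doubler = lambda frames: [2 * f for f in frames]

result = serve_tick({"node1": [1, 2], "node2": [3, 4]}, doubler)
print(result)  # {'node1': [8, 12], 'node2': [8, 12]}
```

A real server would run this continuously over network transports and would typically generate per-client output instances (e.g., excluding a client's own contribution) rather than broadcasting one identical stream.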
In another aspect, the invention features a method in accordance with which a virtual area is established in a virtual communication environment. The virtual communication environment supports real-time communication between communicants operating on respective network nodes. Establishing a respective presence in the virtual area for each of one or more of the communicants. Information is transferred between the file store and the wiki resource associated with the virtual area in response to input received from a respective one of the network nodes associated with a respective one of the communicants having presence in the virtual area.
In another aspect, the invention features a method in accordance with which location attribute values are associated with a user and other communicants operating on respective network nodes and sharing a virtual communication environment that includes at least one virtual area and supports real-time communication between the user and the other communicants. Each of the user and the other communicants is associated with a respective object in the virtual area. The method additionally includes interfacing the user and the other communicants to the virtual communication environment based on the associated location attribute values.
The invention also features apparatus operable to implement the above-described methods and computer-readable media storing computer-readable instructions for causing a computer to implement the above-described methods.
Drawings
Fig. 1 is a diagrammatic view of one embodiment of a network communication environment including a first client network node, a second client network node, and a virtual environment creator.
Fig. 2 is a diagrammatic view of one embodiment of a network node that includes a graphical user interface that presents a depiction of a virtual area.
Fig. 3 is a block diagram of the network communication environment of fig. 1 showing elements of one embodiment of a client network node.
FIG. 4 is a flow diagram of one embodiment of a method by which the network infrastructure service environment 30 processes shared data files.
Fig. 5A is a diagrammatic view of one embodiment of a shared virtual area communication environment in which network nodes communicate in a point-to-point architecture.
Fig. 5B is a diagrammatic view of one embodiment of a shared virtual area communication environment in which network nodes communicate in a server-mediated configuration.
Fig. 6 is a block diagram of one embodiment of a shared virtual area communication environment including an exemplary set of real-time data flows between the sources and sinks of three network nodes.
Fig. 7 shows a block diagram of an embodiment of a network node comprising an exemplary set of sources and an exemplary set of sinks.
Fig. 8 is a block diagram of one embodiment of an area client network node connected to an area server network node and two other area client network nodes in one embodiment of a shared virtual area communication environment.
FIG. 9 is a diagrammatic view of one embodiment of the shared virtual area communication environment shown in FIG. 8.
FIG. 10 illustrates one embodiment of a system architecture that supports real-time communicant interaction in a virtual environment.
FIG. 11 is a flow diagram of one embodiment of a method by which a network infrastructure service environment interfaces a user with a spatial communication environment.
FIG. 12 is a flow diagram of one embodiment of a method by which a communication application interfaces a user with a spatial communication environment.
FIG. 13 illustrates one embodiment of a graphical user interface for a heads-up display (HUD) for viewing a contact person and a location.
FIG. 14 shows the HUD graphical user interface of FIG. 13 displaying contacts by location.
FIG. 15 shows the HUD graphical user interface of FIG. 13 showing a contact in a place (i.e., Sococo Master) that the user entered by clicking on the corresponding place tile shown in FIG. 14.
FIG. 16 shows the HUD graphical user interface of FIG. 13 with data relating to preferred contacts of the user's real-time contacts extracted based on queries to the user's Skype history.
FIG. 17 shows the HUD graphical user interface of FIG. 13 with data relating to the next-to-select contact of the user's real-time contacts extracted based on queries to the user's Skype history.
Fig. 18 shows the HUD graphical user interface of fig. 13 displaying a two-dimensional presentation of the Sococo location where the user's real-time contacts are present.
FIG. 19 shows the HUD graphical user interface of FIG. 18 displaying a three-dimensional representation of the Sococo place currently occupied by the selected one of the user's real-time contacts.
FIG. 20 shows the HUD graphical user interface of FIG. 19, where the user interacts with a first real-time contact in the shared Sococo location while interacting with a second real-time contact that is currently playing an online game presented by the MMOG minimap.
FIG. 21 shows one embodiment of a three-dimensional representation of the then-current gaming environment experienced by the second real-time contact of FIG. 20 with which the user is interacting via the HUD.
FIG. 22 shows one embodiment of a three-dimensional representation of a user's HomeSpace location.
Fig. 23 shows one embodiment of a two-dimensional representation of an OfficeSpace location.
FIG. 24A illustrates one embodiment of a heads-up display (HUD) graphical user interface in a desktop mode of operation.
FIG. 24B shows an enlarged view of the HUD shown in FIG. 24A.
FIG. 25 shows the HUD graphical user interface of FIG. 24, showing a reminder of a pre-scheduled meeting (i.e., "8 am-prepare Ops exam").
FIG. 26 shows the HUD graphical user interface of FIG. 24A in desktop image mode after a user enters the Sococo location, where the user is currently the only communicant at the location.
Fig. 27 shows the HUD graphical user interface of fig. 26 in desktop image mode after two additional communicants enter the Sococo location.
FIG. 28 shows the HUD graphical user interface of FIG. 27 in a three-dimensional avatar mode just prior to the user entering the location.
FIG. 29 shows the HUD graphical user interface of FIG. 27 in three-dimensional avatar mode just after the user enters the location and the avatars representing the communicants currently in the location have automatically turned to face the user.
Figure 30 shows the HUD graphical user interface of figure 29 just before the user modifies the Sococo location by clicking on a wall of the location to add a new room.
Fig. 31 shows the HUD graphical user interface of fig. 30 just after the user has added a new room to the Sococo site.
Fig. 32 shows the HUD graphical user interface of fig. 31 just after the user enters a new room that the user adds to the Sococo location.
FIG. 33 shows the HUD graphical user interface of FIG. 32 after the avatars of other communicants in the Sococo location have been rendered into the new room occupied by the user.
Fig. 34 shows one embodiment of an avatar in an OfficeSpace location.
FIG. 35 is a flow diagram of one embodiment of a method of generating one or more integrated real-time data streams from real-time data streams exchanged over a real-time data stream connection established by a separate software application executing on a client.
Figure 36 is a block diagram of one embodiment of a communication architecture that enables people to communicate with users of the Sococo platform through different communication applications.
FIG. 37 is a flow diagram of one embodiment of a method by which the network infrastructure service environment 30 multiplexes client software for one or more communicants.
FIG. 38 is an illustration of one embodiment of a method by which the Sococo platform integrates with wiki resources.
FIG. 39 shows one embodiment of a representation of an OfficeSpace location into which information from a wiki is imported in a way that allows real-time interaction with the wiki content.
Detailed Description
In the following description, like reference numerals are used to identify like elements. Moreover, the drawings are intended to depict major features of the exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
I. Definition of terms
A "communicant" is a person who communicates or otherwise interacts with others through one or more network connections, where the communication or interaction may or may not occur in the context of a virtual area. A "user" is a communicant who is operating a particular network node that defines a particular perspective for purposes of the description. The "real-time contacts" of a user are communicants or other persons who have communicated with the user through a real-time communication platform.
A "communicant interaction" is any type of direct or indirect interaction or influence between a communicant and another network entity, which may include, for example, another communicant, a virtual area, or a network service. Exemplary types of communicant interactions include communicants communicating with each other in real-time, communicants entering a virtual area, and communicants requesting access to resources from a network service.
"Presence" refers to the ability and willingness of a networked entity (e.g., a communicant, service, or device) to communicate, where such willingness affects the ability to detect and obtain information about the state of the entity on the network and to connect to the entity. A communicant who has presence in a particular virtual area is said to be "in" that virtual area.
A "virtual communication environment" is a computer-managed representation of a space that includes at least one virtual area and supports real-time communication between communicants.
A "location attribute value" refers to a value that characterizes an aspect of a location within a virtual communication environment, where a "location" may refer to a spatial aspect of the virtual communication environment, including but not limited to a group of virtual areas, a single virtual area, one or more rooms within a virtual area, a zone or other area within a room of a virtual area, or a particular location within a virtual area. A place identifier is a place attribute that, for example, represents, identifies, or locates a place within the virtual environment.
The term "interfacing" means providing one or more facilities that enable a communicant to interact physically, functionally or logically with a virtual communication environment. These facilities may include one or more of computer hardware, computer firmware, and computer software.
A "computer" is any machine, device, or apparatus that processes data according to computer-readable instructions that are stored on a computer-readable medium either temporarily or permanently. A "computer operating system" is a software component of a computer system that manages and coordinates the performance of tasks and the sharing of computing and hardware resources. A "software application" (also referred to as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and execute to perform one or more specific tasks. A "data file" is a block of information that durably stores data for use by a software application.
A "window" is a visual area of a display that typically includes a user interface. A window typically displays the output of a software process and typically enables a user to input commands or data for the software process. A window that has a parent is called a "child window". A window that has no parent, or whose parent is the desktop window, is called a "top-level window". A "desktop" is a system-defined window that paints the background of a graphical user interface (GUI) and serves as the base for all windows displayed by all software processes.
A "network node" (also simply referred to as a "node") is a junction or connection point in a communication network. Exemplary network nodes include, but are not limited to, terminals, computers, and network switches. A "server" network node is a host computer on a network that responds to information or service requests. A "client" network node is a computer on a network that requests information or services from a server. A "network connection" is a link between two communicating network nodes. The term "local network node" refers to the network node that is currently the main subject of discussion. The term "remote network node" refers to a network node that is connected to a local network node by a network communication link. A "connection handle" is a pointer or identifier (e.g., a Uniform Resource Identifier (URI)) that can be used to establish a network connection with a correspondent, resource, or service on a network node. "network communications" may include any type of information (e.g., text, voice, audio, video, email messages, data files, action data streams, and data packets) transmitted or otherwise communicated from one network node to another over a network connection.
A "database" is an organized collection of records presented in a standardized format that can be searched by a computer. The database can be stored on a single computer-readable data storage medium on a single computer, or can be distributed across multiple computer-readable data storage media on one or more computers.
A "file store" is a data file storage system that allows network access to data files stored on one or more nodes of a network.
A "multitrack recording" is a data file that stores multiple separable tracks (or layers) of data streams (e.g., audio, motion, video, chat) of the same or different data types, where each track can be accessed and manipulated independently.
An "identifier" identifies an entity in a locally unique or globally unique manner. The resource identifier identifies the resource and provides a handle to interact with (e.g., act on or get to) the presentation of the resource. "resource" refers to any type of information (e.g., web pages, files, streaming data, and presence data) or service (e.g., a service that establishes a communication link with another user) that is accessible over a network. A resource may be identified by a Uniform Resource Identifier (URI). A "handle" is a pointer or identifier (e.g., a Uniform Resource Identifier (URI)) that may be used to establish a network connection with a correspondent, resource, or service on a network node.
A "data source" (referred to herein simply as a "source") is any of a device, a portion of a device (e.g., a computer), or software that originates data.
A "data receiver" (herein simply denoted as "receiver") is any of a device, a part of a device (e.g., a computer), or software that receives data.
A "switching rule" is an instruction specifying one or more conditions that must be met in order to connect or disconnect one or more real-time data sources and one or more real-time data receivers.
A "stream mix" is a combination of two or more real-time data streams of the same or semantically consistent type (e.g., audio, video, chat, and motion data). For example, a set of voice streams may be mixed into a single voice stream or a voice stream may be mixed into the audio portion of a video stream.
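For example, mixing a set of voice streams into a single stream can be sketched as sample-wise summation with clamping; the 16-bit PCM sample format assumed here is an illustrative choice, not something the definition prescribes:

```python
INT16_MIN, INT16_MAX = -32768, 32767

def mix(*streams):
    """Mix equal-length 16-bit PCM sample streams into one stream by
    summing corresponding samples and clamping to the 16-bit range."""
    return [
        max(INT16_MIN, min(INT16_MAX, sum(samples)))
        for samples in zip(*streams)
    ]

voice_a = [1000, -2000, 30000]
voice_b = [500, -500, 10000]
print(mix(voice_a, voice_b))  # [1500, -2500, 32767]
```

The same pattern extends to mixing a voice stream into the audio portion of a video stream: the audio tracks are summed sample-wise while the video track passes through unchanged.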
A "flow handling topology" is a network routing hierarchy through which real-time data flows (each of which may be a mixed flow or a non-mixed flow) are delivered to one or more network nodes.
A "wiki" is a website or similar online resource that allows users to collaboratively add and edit content. In the case of a website-based wiki, users typically collaborate using respective web browser applications.
A "real-time data stream" is data that is constructed and processed in a continuous stream and is designed to be received without delay or with only an imperceptible delay; real-time data streams include digital presentations of sound, video, user movements, facial expressions, and other physical phenomena that may benefit from rapid transmission, rapid execution, or both, as well as data within a computing environment, including, for example, avatar movement instructions, text chat, real-time data feeds (feeds) (e.g., sensor data, machine control instructions, transaction streams, and stock quote information feeds), and file transmissions.
A "virtual area" (also referred to herein as a "region" or "place") is a presentation of a space or scene managed by a computer. The virtual area may be a two-dimensional or three-dimensional representation. Typically, virtual areas are designed to simulate physical, real-world spaces. For example, using a conventional computer display, the virtual area may be visualized as a two-dimensional representation of a three-dimensional space generated by the computer. However, the virtual area does not require relevant visualization to implement the switching rules.
A "virtual area application" (also referred to as a "virtual area specification") is a description of a virtual area used in creating a virtual environment. The virtual area application typically includes definitions of geometry, physical properties (physics), and real-time switching rules associated with one or more zones of the virtual area.
A "virtual environment" is a computer-managed presentation of space that includes at least one virtual area and supports real-time communication between communicants.
A "zone" is a region of a virtual area that is associated with at least one switching rule or governance rule. A switching rule controls the switching (e.g., routing, connecting, and disconnecting) of real-time data streams between network nodes communicating through a shared virtual area. A governance rule controls a communicant's access to a resource (e.g., an area, a region of an area, or the contents of that area or region), the scope of that access, and the consequences of that access (e.g., a requirement that a record relating to the access must be kept).
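A switching rule of the kind defined above can be sketched as a stream type plus a set of conditions over the source and sink of a candidate connection; the zone names, stream types, and rule set below are illustrative assumptions:

```python
# A connection is made only when every condition of some rule for that
# stream type is satisfied by the (source, sink) pair.

def in_same_zone(source, sink):
    return source["zone"] == sink["zone"]

RULES = [
    {"stream": "audio", "conditions": [in_same_zone]},
    {"stream": "chat", "conditions": []},  # chat flows unconditionally
]

def should_connect(stream_type, source, sink):
    """Return True if some switching rule permits routing this stream type
    from the source node to the sink node."""
    return any(
        rule["stream"] == stream_type
        and all(cond(source, sink) for cond in rule["conditions"])
        for rule in RULES
    )

alice = {"zone": "conference_room"}
bob = {"zone": "lobby"}
print(should_connect("audio", alice, bob))  # False (different zones)
print(should_connect("chat", alice, bob))   # True
```

Governance rules could be modeled the same way, with the predicate deciding access to a resource rather than stream routing.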
In the context of a virtual area, an "object" is any type of discrete element in the virtual area that may be usefully treated separately from the geometry of the virtual area. Exemplary objects include doors, portals, windows, view screens, and speakerphones. An object typically has attributes or properties that are separate and distinct from those of the virtual area. An "avatar" is an object that represents a communicant in a virtual area.
A "location" in a virtual area refers to the location of a point, an area, or a volume in the virtual area. A point typically is represented by a single set of two- or three-dimensional coordinates (e.g., x, y, z) that define a spot in the virtual area. An area typically is represented by the three-dimensional coordinates of three or more coplanar vertices that define a boundary of a closed two-dimensional shape in the virtual area. A volume typically is represented by the three-dimensional coordinates of four or more non-coplanar vertices that define a closed boundary of a three-dimensional shape in the virtual area.
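The three kinds of location defined above can be sketched as simple data types. This is an illustrative sketch only: the class names, the coordinate representation, and the vertex-count checks are assumptions made for the example, not part of the specification.

```python
from dataclasses import dataclass
from typing import List, Tuple

Coord = Tuple[float, float, float]

@dataclass
class Point:
    position: Coord  # a single (x, y, z) coordinate set

@dataclass
class Area:
    vertices: List[Coord]  # three or more coplanar vertices bounding a closed 2-D shape

    def __post_init__(self):
        if len(self.vertices) < 3:
            raise ValueError("an area needs at least three vertices")

@dataclass
class Volume:
    vertices: List[Coord]  # four or more non-coplanar vertices bounding a closed 3-D shape

    def __post_init__(self):
        if len(self.vertices) < 4:
            raise ValueError("a volume needs at least four vertices")

# Example: a triangular area on the floor plane of a virtual area.
floor_patch = Area([(0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 3.0, 0.0)])
```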
A "statistic" is a quantity calculated from a statistical analysis of data in a sample and characterizing an aspect of the sample. The term "statistical analysis" refers to the process of: analyzing the data for purposes of summarization or reasoning, determining values for variables of a predictive model, determining one or more metrics (metrics) summarizing the data, or classifying one or more aspects or topics of the data.
"third party" refers to an entity that is not affiliated with the entity that owns or controls the Sococo platform. The third party is typically independent of any contract (contract) between the correspondent and the owner of the Sococo platform.
As used herein, the term "includes" means includes but not limited to, the term "including" means including but not limited to, and the term "based on" means based at least in part on.
General description of the invention
A. Exemplary operating Environment
1. Introduction
Embodiments described herein provide improved systems and methods for navigating and interacting in a virtual communication environment. These embodiments provide an interface that includes navigation controls that enable a user to navigate to virtual areas and interaction controls that enable the user to interact with other communicants in a virtual area.
Fig. 1 shows one embodiment of an exemplary network communication environment 10, the network communication environment 10 including a first client network node 12, a second client network node 14, and a virtual environment creator 16 interconnected by a network 18. The first client network node 12 includes computer readable memory 20, a processor 22, and input/output (I/O) hardware 24. The processor 22 executes at least one communication application 26 stored in the memory 20. The second client network node 14 is typically configured in substantially the same manner as the first client network node 12. The virtual environment creator 16 includes at least one server network node 28 that provides a network infrastructure service environment 30. The communication application 26 and the network infrastructure service environment 30 collectively provide a platform (referred to herein as "the platform") for creating a spatial virtual communication environment (also referred to herein simply as a "virtual environment").
In some embodiments, the network infrastructure service environment 30 manages sessions of the first and second client nodes 12, 14 in a virtual area 32 in accordance with a virtual area application 34. The virtual area application 34 is hosted by the virtual area 32 and includes a description of the virtual area 32. The communication applications 26 running on the first and second client network nodes 12, 14 present respective views of the virtual area 32 in accordance with data received from the network infrastructure service environment 30, and provide respective interfaces for receiving commands from the communicants. The communicants typically are represented in the virtual area 32 by respective avatars, which move about the virtual area 32 in response to commands that are input by the communicants at their respective network nodes. Each communicant's view of the virtual area 32 typically is presented from the perspective of the communicant's avatar, which increases the level of immersion that is experienced by the communicant. Each communicant typically is able to view any part of the virtual area 32 around his or her avatar. In some embodiments, the communication applications 26 establish real-time data stream connections between the first and second client network nodes 12, 14 and other network nodes sharing the virtual area 32, based on the positions of the communicants' avatars in the virtual area 32.
The network infrastructure service environment 30 also maintains a relational database 36 that contains interaction records 38 for communicants in the virtual area. Each interaction record 38 describes the context of an interaction between one or more communicants in the virtual area.
2. Network environment
Network 18 may include any of a Local Area Network (LAN), a Metropolitan Area Network (MAN), and a Wide Area Network (WAN), such as the Internet. Network 18 typically includes a number of different computing platforms and transmission facilities that support the transmission of various media types (e.g., text, sound, audio, and video) between network nodes.
The communication application 26 (see fig. 1) typically operates on a client network node, whose software and hardware resources, together with administrative policies, user preferences (including preferences regarding the exportation of the user's presence and the connection of the user to areas and other users), and other settings, define a local configuration that influences the administration of real-time connections with other network nodes. Network connections between network nodes may be arranged in a variety of different stream handling topologies, including peer-to-peer architectures, server-mediated architectures, and hybrid architectures that combine aspects of peer-to-peer and server-mediated architectures. Exemplary topologies of these types are described in U.S. applications serial nos. 11/923,629 and 11/923,634, both of which were filed on 24/10/2007.
3. Network infrastructure services
The network infrastructure services environment 30 generally includes one or more network infrastructure services that cooperate with the communication application 26 in establishing and managing network connections between the client nodes 12, 14 and other network nodes (see fig. 1). The network infrastructure service may run on a single network node or may be distributed across multiple network nodes. The network infrastructure services typically run on one or more dedicated network nodes (e.g., server computers or network devices that perform one or more edge services such as routing and switching). In some embodiments, however, one or more of the network infrastructure services run on at least one of the communicants' network nodes. Among the network infrastructure services included in the exemplary embodiment of network infrastructure services environment 30 are account services, security services, regional services, rendezvous services, and interaction services.
Account service
The account service manages communicant accounts in the virtual environment. The account service also manages the creation and issuance of authentication tokens that a client network node can use to authenticate itself to any of the network infrastructure services.
Security service
The security service controls access by communicants to assets and other resources of the virtual environment. The access control method implemented by the security service is typically based on one or more of an access control list (where access is granted to entities having identities on the list) and a capability (where access is granted to entities having appropriate capabilities or permissions). After a particular communicant has been granted access to resources, the communicant typically uses functionality provided by other network infrastructure services to interact in the network communication environment 10.
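The two access-control methods named above, an access control list and capabilities, can be sketched together in a single check. All of the names and the resource layout here are illustrative assumptions; the text does not prescribe an implementation.

```python
def may_access(resource, communicant_id, held_capabilities):
    """Grant access if the communicant's identity is on the resource's
    access control list, or if the communicant holds a capability that
    the resource accepts."""
    if communicant_id in resource.get("acl", set()):
        return True
    return bool(set(held_capabilities) & resource.get("capabilities", set()))

# Example resource: "alice" is on the ACL, and the hypothetical
# "enter-room" capability also grants access.
room = {"acl": {"alice"}, "capabilities": {"enter-room"}}
```

In practice a single deny (neither identity nor capability matches) simply withholds the connection handle or resource; the example returns a boolean for clarity.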
Regional service
The regional service manages virtual areas. In some embodiments, the regional service remotely configures the communication applications 26 running on the first and second client network nodes 12, 14 in accordance with the virtual area application 34, subject to a set of constraints 47 (see fig. 1). The constraints 47 typically include controls on access to the virtual area. The access controls typically are based on one or more of an access control list (where access is granted to communicants or client nodes having identities on the list) and capabilities (where access is granted to communicants or client nodes having the appropriate capabilities or permissions).
The regional service also manages the network connections that are associated with the virtual area (subject to the capabilities of the requesting entities), maintains global state information for the virtual area, and serves as a data server for the client network nodes participating in a shared communication session in a context defined by the virtual area 32. The global state information includes a list of all of the objects in the virtual area and their respective locations in the virtual area. The regional service sends instructions that configure the client network nodes. The regional service also registers and transmits initialization information to other client network nodes that request to join the communication session. In this process, the regional service transmits to each joining client network node a list of the components (e.g., plug-ins) that are needed to render the virtual area 32 on the client network node in accordance with the virtual area application 34. The regional service also ensures that client network nodes can synchronize to a global state if a communication fault occurs. The regional service typically manages communicant interactions with the virtual area via administrative rules that are associated with the virtual area.
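The global state information described above, a list of every object in the virtual area and its location, can be sketched as a small in-memory store. The API names are assumptions made for illustration only.

```python
class AreaGlobalState:
    """Tracks every object in a virtual area and its current location."""

    def __init__(self):
        self._positions = {}  # object identifier -> (x, y, z) location

    def upsert(self, object_id, position):
        # register a new object, or record the latest position of an existing one
        self._positions[object_id] = position

    def remove(self, object_id):
        self._positions.pop(object_id, None)

    def snapshot(self):
        # the full state that would be sent to a joining client node,
        # or to a node resynchronizing after a communication fault
        return dict(self._positions)

state = AreaGlobalState()
state.upsert("avatar:alice", (1.0, 2.0, 0.0))
state.upsert("prop:viewscreen-1", (5.0, 0.0, 0.0))
```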
Rendezvous service
The rendezvous service manages the collection, storage, and selective distribution of presence information, and provides mechanisms for network nodes to communicate with one another (e.g., by managing the distribution of connection handles), subject to the capabilities of the requesting entities. The rendezvous service typically stores the presence information in a presence database. The rendezvous service typically manages the interactions of communicants with each other via communicant privacy preferences.
Interaction service
The interaction service maintains the relational database 36, which contains the records 38 of interactions between communicants. For each interaction between communicants, one or more services (e.g., the regional service) of the network infrastructure service environment 30 transmit interaction data to the interaction service. In response, the interaction service generates one or more respective interaction records and stores them in the relational database. Each interaction record describes the context of an interaction between a pair of communicants. For example, in some embodiments, an interaction record contains an identifier for each of the communicants, an identifier for the place of interaction (e.g., a virtual area instance), a description of the hierarchy of the interaction place (e.g., a description of how the interaction room relates to a larger area), start and end times of the interaction, and a list of all files and other data streams that were shared or recorded during the interaction. Thus, for each real-time interaction, the interaction service tracks when it occurred, where it occurred, and what happened during the interaction in terms of the communicants involved (e.g., entering and exiting), the objects that were activated or deactivated, and the files that were shared.
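The fields enumerated above for an interaction record can be sketched as a simple data structure. The field names are illustrative assumptions; the text only lists the kinds of information such a record contains.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionRecord:
    communicants: List[str]   # an identifier for each communicant
    place: str                # the place of interaction, e.g. a virtual area instance
    hierarchy: List[str]      # how the interaction room relates to larger areas
    start: float              # start time of the interaction
    end: float                # end time of the interaction
    shared_files: List[str] = field(default_factory=list)

# Example record for one interaction in a hypothetical gallery area.
record = InteractionRecord(
    communicants=["alice", "bob"],
    place="area:gallery/room:main",
    hierarchy=["area:gallery", "room:main"],
    start=1000.0,
    end=1600.0,
    shared_files=["slides.pdf"],
)
```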
The interaction service also supports queries on the relational database 36, subject to the capabilities of the requesting entities. The interaction service can present the results of queries on the interaction database records in a sorted order (e.g., most frequent or most recent), based on virtual area. The query results can be used to drive a frequency sort of the contacts whom a communicant has met in which virtual areas, as well as sorts of who the communicant has met regardless of virtual area and sorts of the virtual areas the communicant frequents most often. The query results also may be used by application developers as part of a heuristic system that automates certain relationship-based tasks. Examples of heuristics of this type are a heuristic that permits a communicant who has visited a particular virtual area more than five times to enter without knocking by default, and a heuristic that allows a communicant who was present in an area at a particular time to modify and delete files created by another communicant who was present in the same area at the same time. Queries on the relational database 36 can be combined with other searches. For example, queries on the relational database may be combined with queries on contact history data generated for interactions with contacts using communication systems (e.g., Skype, Facebook, and Flickr) that are outside the domain of the network infrastructure service environment 30.
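The frequency sorts and the knock-free-entry heuristic described above can be sketched as follows, assuming a simplified record layout; the actual relational database schema is not specified here.

```python
from collections import Counter

def rank_contacts(records, me):
    """Rank the people `me` has interacted with, most frequent first."""
    counts = Counter()
    for rec in records:
        if me in rec["communicants"]:
            counts.update(c for c in rec["communicants"] if c != me)
    return [name for name, _ in counts.most_common()]

def may_enter_without_knocking(records, me, area, threshold=5):
    """The example heuristic: more than `threshold` visits to an area
    lets the communicant enter without knocking by default."""
    visits = sum(1 for rec in records
                 if me in rec["communicants"] and rec["place"] == area)
    return visits > threshold

# Hypothetical interaction history for the examples below.
records = [
    {"communicants": ["alice", "bob"], "place": "room-a"},
    {"communicants": ["alice", "bob"], "place": "room-a"},
    {"communicants": ["alice", "carol"], "place": "room-b"},
]
```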
4. Virtual area
The communication applications 26 and the network infrastructure service environment 30 typically administer real-time connections with network nodes in a communication context that is defined by an instance of a virtual area. The virtual area instance may correspond to an abstract (non-geometric) virtual space that is defined with respect to abstract coordinates. Alternatively, the virtual area instance may correspond to a visual virtual space that is defined with respect to one-, two-, or three-dimensional geometric coordinates that are associated with a particular visualization. Abstract virtual areas may or may not be associated with respective visualizations, whereas visual virtual areas are associated with respective visualizations.
In some implementations, the spatial virtual communication environment is modeled as a spatial hierarchy of virtual areas (also referred to herein as "locations" or "places") and objects. The spatial hierarchy includes an ordered sequence of levels ranging from a highest level to a lowest level. Each of the places in a successive one of the levels of the spatial hierarchy is contained in a respective one of the places in a preceding one of the levels. Each of the objects in the spatial hierarchy is contained in a respective one of the places. The levels of the spatial hierarchy typically are associated with respective visualizations that are consistent with geographic, architectural, or urban metaphors, and are labeled accordingly. The zones of each virtual area are defined by respective meshes, some of which define elements of a physical environment (e.g., spaces, such as rooms and courtyards, that are associated with a building) that may contain objects (e.g., avatars and props, such as view screen objects and conferencing objects).
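The containment property of the spatial hierarchy stated above (each place in a successive level contained in exactly one place of the preceding level, and each object contained in one place) can be sketched with a parent map. The place names are invented for illustration.

```python
# parent of each place; None marks the highest level of the hierarchy
parents = {
    "campus": None,
    "building-1": "campus",
    "room-101": "building-1",
}
# each object is contained in exactly one place
object_place = {"avatar:alice": "room-101"}

def ancestors(place):
    """Walk upward from a place to the highest level of the hierarchy."""
    chain = []
    while place is not None:
        chain.append(place)
        place = parents[place]
    return chain
```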
As explained above, communicants typically are represented by respective avatars in a virtual area that has an associated visualization. The avatars move about the virtual area in response to commands that are input by the communicants at their respective network nodes. In some implementations, a communicant's view of a virtual area instance typically is presented from the perspective of the communicant's avatar, and each communicant typically is able to view any part of the visual virtual area around his or her avatar, which increases the level of immersion that is experienced by the communicant.
Fig. 2 shows an embodiment of an exemplary network node that is implemented by a computer system 48. The computer system 48 includes a display monitor 50, a computer mouse 52, a keyboard 54, speakers 56, 58, and a microphone 60. The display monitor 50 displays a graphical user interface 62. The graphical user interface 62 is a windows-based graphical user interface that can include multiple windows, icons, and a pointer 64. In the illustrated embodiment, the graphical user interface 62 presents a two-dimensional depiction of a shared virtual area 66 that is associated with a three-dimensional visualization representing an art gallery. Communicants are represented in the virtual area 66 by respective avatars 68, 70, 72, each of which has a respective role (e.g., a curator, an artist, and a visitor) in the context of the virtual area 66.
As explained in detail below, the virtual area 66 includes zones 74, 76, 78, 80, 82 that are associated with respective rules that govern the switching of real-time data streams between the network nodes that are represented by the avatars 68-72 in the virtual area 66. (During a typical communication session, the dashed lines that demarcate the zones 74-82 in fig. 2 are not visible to the communicants, although there may be visual cues associated with such zone boundaries.) The switching rules dictate how local connection processes executing on each of the network nodes establish communications with the other network nodes, based on the locations of the communicants' avatars 68-72 in the zones 74-82 of the virtual area 66.
A virtual area is defined by a specification that includes a description of geometric elements of the virtual area and one or more rules, including switching rules and administrative rules. The switching rules govern real-time stream connections between the network nodes. The administrative rules control a communicant's access to resources, such as the virtual area itself, regions within the virtual area, and objects within the virtual area. In some embodiments, the geometric elements of the virtual area are described in accordance with the COLLADA Digital Asset Schema Release 1.4.1 Specification (April 2006, available from http://www.khronos.org/collada), and the switching rules are described using an Extensible Markup Language (XML) text format (referred to herein as a virtual space description language (VSDL)) in accordance with the COLLADA Streams Reference specification described in U.S. patent applications 11/923,629 and 11/923,634.
The geometric elements of a virtual area typically include visual geometry and collision geometry. The visual geometry typically is formed of triangles, quadrilaterals, or polygons. Colors and textures are mapped onto the visual geometry to create a more realistic appearance for the virtual area. Lighting effects may be provided, for example, by painting lights onto the visual geometry and modifying the texture, color, or intensity near the lights. The collision geometry describes invisible surfaces that determine the ways in which objects can move about in the virtual area. The collision geometry may coincide with the visual geometry, correspond to a simpler approximation of the visual geometry, or relate to application-specific requirements of a virtual area designer.
The switching rules typically include a description of conditions for connecting sources and sinks of real-time data streams in terms of positions in the virtual area. Each rule typically includes attributes that define the real-time data stream type to which the rule applies and the location or locations in the virtual area where the rule applies. In some embodiments, each of the rules optionally may include one or more attributes that specify a required role of the source, a required role of the sink, a priority level of the stream, and a requested stream handling topology. In some embodiments, if there are no explicit switching rules defined for a particular part of the virtual area, one or more implicit or default switching rules may apply to that part of the virtual area. One exemplary default switching rule is a rule that connects every source to every compatible sink within an area, subject to policy rules. Policy rules may apply globally to all connections between the client nodes, or only to respective connections with individual client nodes. An example of a policy rule is a proximity policy rule that only allows connections of sources with compatible sinks that are associated with respective objects that are within a prescribed distance (or radius) of each other in the virtual area.
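The proximity policy rule described above can be sketched as follows: a source is connected to a compatible sink of the same stream type only when the associated objects lie within the prescribed radius of each other. The data layout and names are assumptions made for illustration.

```python
import math

def within_radius(pos_a, pos_b, radius):
    # proximity test between the objects associated with a source and a sink
    return math.dist(pos_a, pos_b) <= radius

def connections(sources, sinks, positions, stream_type, radius):
    """Return the (source, sink) pairs to connect under the proximity
    policy rule for a single real-time data stream type."""
    pairs = []
    for src, src_type in sources:
        if src_type != stream_type:
            continue
        for snk, snk_type in sinks:
            if snk_type == stream_type and snk != src and \
               within_radius(positions[src], positions[snk], radius):
                pairs.append((src, snk))
    return pairs

# Example: two audio sources, two audio sinks, one pair within range.
sources = [("avatar:alice", "audio"), ("avatar:bob", "audio")]
sinks = [("avatar:bob", "audio"), ("avatar:carol", "audio")]
positions = {
    "avatar:alice": (0.0, 0.0, 0.0),
    "avatar:bob": (3.0, 4.0, 0.0),     # distance 5 from alice
    "avatar:carol": (50.0, 0.0, 0.0),  # out of range of both
}
```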
In some implementations, administrative rules are associated with a virtual area to control who has access to the virtual area, who has access to its contents, what the scope of that access to the contents is (e.g., what a user can do with the contents), and what the follow-on consequences of accessing those contents are (e.g., record keeping, such as audit logs, and payment requirements). In some embodiments, an entire virtual area or a zone of the virtual area is associated with a "governance mesh". In some embodiments, a governance mesh is implemented in a way that is analogous to the implementation of the zone mesh described in U.S. patent applications 11/923,629 and 11/923,634. The governance mesh enables a software application developer to associate administrative rules with a virtual area or a zone of a virtual area. This avoids the need to create individual permissions for every file in a virtual area, and avoids the need to deal with the complexity that potentially could arise when the same document needs to be treated differently depending on the context.
In some embodiments, a virtual area is associated with a governance mesh that associates one or more zones of the virtual area with a digital rights management (DRM) function. The DRM function controls access to one or more of the virtual area, one or more zones within the virtual area, or objects within the virtual area. The DRM function is triggered each time a communicant crosses a governance mesh boundary within the virtual area. The DRM function determines whether the triggering action is permitted and, if so, what the scope of the permitted action is, whether payment is needed, and whether audit records need to be generated. In an exemplary implementation of a virtual area, the associated governance mesh is configured such that, if a communicant is able to enter the virtual area, he or she is able to perform actions on all of the documents that are associated with the virtual area, including manipulating the documents, viewing the documents, downloading the documents, deleting the documents, modifying the documents, and re-uploading the documents. In this way, the virtual area can become a repository for information that is shared and discussed in the context defined by the virtual area.
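The DRM trigger described above can be sketched as a single check invoked when a communicant crosses a governed boundary, returning whether the action is permitted and whether an audit record is required. The rule structure and names are illustrative assumptions, not part of the specification.

```python
def drm_check(zone_rules, communicant_caps, action):
    """Return (allowed, audit_required) for an action attempted after
    crossing into a governed zone."""
    rule = zone_rules.get(action)
    if rule is None:
        return (False, False)  # the action is not permitted in this zone
    allowed = rule["required_capability"] in communicant_caps
    return (allowed, allowed and rule.get("audit", False))

# Example zone: entering grants document actions; deletions are audited.
zone_rules = {
    "view-document": {"required_capability": "enter", "audit": False},
    "delete-document": {"required_capability": "enter", "audit": True},
}
```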
Additional details regarding specifications of virtual areas are described in U.S. application 61/042,714 (filed April 5, 2008) and U.S. applications 11/923,629 and 11/923,634 (both filed October 24, 2007).
5. Client node architecture
A communicant typically connects to the network 18 from a client network node. The client network node typically is implemented by a general-purpose computer system or a dedicated communications computer system (or "console", such as a network-enabled video game console). The client network node executes communication processes that establish real-time data stream connections with other network nodes, and typically executes visualization rendering processes that present a view of each virtual area the communicant enters.
Fig. 3 shows an embodiment of a client network node implemented by computer system 120. The computer system 120 includes a processing unit 122, a system memory 124, and a system bus 126 that couples the processing unit 122 to the various components of the computer system 120. The processing unit 122 may include one or more data processors, each of which may be in the form of any of a variety of commercially available computer processors. The system memory 124 includes one or more computer-readable media typically associated with a software application addressing space that defines addresses available to a software application. The system memory 124 may include Read Only Memory (ROM) that stores a basic input/output system (BIOS) containing the start-up routines of the computer system 120, and Random Access Memory (RAM). The system bus 126 may be a memory bus, a peripheral bus, or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, Microchannel, ISA, and EISA. Computer system 120 also includes persistent storage memory 128 (e.g., hard disk drives, floppy drives, CD ROM drives, tape drives, flash memory devices, and digital video disks) that is connected to system bus 126 and that contains one or more computer-readable media disks that provide non-volatile or persistent storage of data, data structures, and computer-executable instructions.
A communicant may interact with (e.g., enter commands or data into) the computer system 120 using one or more input devices 130 (e.g., one or more keyboards, computer mice, microphones, cameras, joysticks, physical motion sensors such as Wii input devices, and touch pads). Information may be presented through a graphical user interface (GUI) that is displayed to the communicant on the display monitor 132, which is controlled by a display controller 134. The computer system 120 also may include other input/output hardware (e.g., peripheral output devices, such as speakers and printers). The computer system 120 connects to other network nodes through a network adapter 136 (also referred to as a "network interface card" or NIC).
A number of program modules may be stored in the system memory 124, including an application programming interface (API) 138, an operating system (OS) 140 (e.g., the Windows XP operating system available from Microsoft Corporation of Redmond, Washington, U.S.A.), an implementation 142 of the communication application 26, drivers 143 (e.g., GUI drivers), network transport protocols 144 for transmitting and receiving real-time data streams, and data 146 (e.g., input data, output data, program data, a registry 148, and configuration settings).
The operating system 140 includes an executive that provides the base operating system services (e.g., memory management, process and thread management, security, input/output, and interprocess communication) for creating a run-time execution environment on the computer system. The registry 148 typically contains the following information: parameters needed to boot and configure the system; system-wide software settings that control the operation of the operating system 140; a security database; and per-user profile settings. A native operating system (OS) application programming interface (API) exposes the base operating system services of the executive to the communication application 142 and other user applications. As used herein, the term "service" (or "service module") refers to a component of an operating system that provides a set of one or more functions.
In some embodiments, the communication application 142 includes a process that controls the presentation of the virtual area and the corresponding view of the objects in the virtual area on the display monitor 132 and a process that controls the switching of real-time data streams between the client network node 120, the client network node 14, and the virtual environment creator 16. The communication application 142 interfaces with the GUI driver and user input 130 to present a view of the virtual area to allow the communicant to control the operation of the communication application 142.
Embodiments of the communication application 142 may be implemented by one or more discrete modules (or data processing components) that are not limited to any particular hardware, firmware, or software configuration. In general, these modules may be implemented in any computing or data processing environment, including in digital electronic circuitry (e.g., an application-specific integrated circuit, such as a digital signal processor (DSP)) or in computer hardware, firmware, device drivers, or software. In some embodiments, the functionalities of the modules are combined into a single data processing component. In some embodiments, the respective functionalities of each of one or more of the modules are performed by a respective set of multiple data processing components. In some implementations, process instructions (e.g., machine-readable code, such as computer software) for implementing the methods that are executed by embodiments of the communication application 142, as well as the data they generate, are stored in one or more machine-readable media. Storage devices suitable for tangibly embodying these instructions and data include all forms of non-volatile computer-readable memory, including, for example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magneto-optical disks, DVD-ROM/RAM, and CD-ROM/RAM. Embodiments of the communication application 142 may be implemented in a variety of electronic devices, including personal computing devices (e.g., desktop computers, portable computers, and communication devices), network devices (e.g., server computers, routers, switches, and hubs), game consoles, cable TV and hybrid set-top boxes, and modems.
In some embodiments, communications over the network 18 are conducted in accordance with the Transmission Control Protocol/Internet Protocol (TCP/IP). The TCP portion of the protocol provides the transport function by breaking up messages into smaller packets, reassembling the packets at the other end of the communication network, and re-sending any packets that get lost along the way. The IP portion of the protocol provides the routing function by giving the packets addresses for the destination network and the target node at the destination network. Each data packet that is transmitted using the TCP/IP protocol includes a header portion that contains the TCP and IP information. The IP protocol provides no guarantee of packet delivery to the upper layers of the communications stack. The TCP protocol, on the other hand, provides a connection-oriented, end-to-end transport service with guaranteed, in-sequence packet delivery. In this way, the TCP protocol provides a reliable, transport-layer connection.
In other embodiments, communications over the network 18 may be conducted in accordance with the User Datagram Protocol/Internet Protocol (UDP/IP). UDP may be used in place of TCP when reliable delivery is not required. For example, UDP/IP may be used for real-time audio and video communications, in which lost data packets simply are ignored because there is no time to retransmit them or because any degradation of overall data quality is acceptable.
Some embodiments may use the Java Media Framework (JMF), which supports device capture, encoding, decoding, rendering, and real-time transport protocol (RTP). A variety of network protocols may be used to send and receive RTP data between regional client network nodes 52-56, including peer-to-peer networking frameworks, centralized servers using TCP sockets alone or in combination with UDP, or multicast protocols.
The execution environment also includes a hardware link layer and an access protocol, which may correspond to the data link and physical layers of the Open Systems Interconnection (OSI) reference model.
In the illustrated embodiment, communication between the client network nodes 120, 14 and the virtual environment creator 16 is performed in accordance with the TCP/IP protocol. In these embodiments, the computer system determines an IP address for each of its network interfaces before it communicates using TCP/IP. This process may involve contacting the server to dynamically obtain an IP address for one or more of its network interfaces. The computer system may use Dynamic Host Configuration Protocol (DHCP) to issue an IP address request to a DHCP server. In this regard, the computer system broadcasts a DHCP request packet at system startup requesting assignment of an IP address to the indicated network interface. Upon receiving the DHCP request packet, the DHCP server assigns an IP address to the computer system for use by the indicated network interface. The computer system then stores the IP address in the response from the server as an IP address to associate with that network interface when communicating using the IP protocol.
6. Server node architecture
In some embodiments, one or more of the server network nodes of the virtual environment creator 16 are implemented by respective general purpose computer systems of the same type as the client network nodes 120, except that each server network node typically includes one or more server software applications.
In other embodiments, one or more server network nodes of the virtual environment creator 16 are implemented by respective network devices that perform edge services (e.g., routing or switching).
7. System database and storage device
The system database and storage device store various types of information used by the platform. Exemplary information typically stored by the storage device includes a presence database, a relational database, an avatar database, a real user ID (RUID) database, an art cache database, and a regional application database. Such information may be stored on a single network node or may be distributed across multiple network nodes.
8. File association and storage
The network infrastructure service environment 30 associates data files with locations. A Sococo site can have any data (i.e., files and streams) associated with it. If a user shares a document in a Sococo location, the file is associated with that room and remains there until it is deleted by an authorized user.
Fig. 4 shows an embodiment of a method by which the network infrastructure service environment 30 handles shared data files. In accordance with the method of fig. 4, the interaction service associates a locality attribute value with a data file received from communicants operating on respective network nodes and sharing a virtual communication environment that encompasses one or more virtual areas and supports real-time communication between communicants (fig. 4, block 150). In this process, for each of the data files shared by a respective one of the communicants in a respective one of the one or more virtual areas, the interaction service generates a respective interaction record that includes a respective one of the location attribute values that identifies the respective virtual area in which the data file is shared and a respective data file identifier that identifies the respective data file. The network infrastructure service environment 30 manages the sharing of data files between communicants based on the associated location attribute values (fig. 4, block 152).
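As a rough illustration of the record generated in block 150, the following C sketch pairs a location attribute value with a data file identifier (the struct layout and field names are hypothetical, not the platform's actual schema):

```c
#include <string.h>

/* A minimal sketch of an interaction record as described above; the field
 * names and sizes are illustrative assumptions only. */
typedef struct {
    char location_attr[64];   /* identifies the virtual area where the file was shared */
    char data_file_id[64];    /* identifies the shared data file */
    char communicant_id[64];  /* identifies the communicant who shared it */
} interaction_record;

/* Build a record associating a shared file with the virtual area in which
 * it was shared, mirroring the association described in fig. 4, block 150. */
interaction_record make_interaction_record(const char *area,
                                           const char *file_id,
                                           const char *communicant) {
    interaction_record r;
    strncpy(r.location_attr, area, sizeof r.location_attr - 1);
    r.location_attr[sizeof r.location_attr - 1] = '\0';
    strncpy(r.data_file_id, file_id, sizeof r.data_file_id - 1);
    r.data_file_id[sizeof r.data_file_id - 1] = '\0';
    strncpy(r.communicant_id, communicant, sizeof r.communicant_id - 1);
    r.communicant_id[sizeof r.communicant_id - 1] = '\0';
    return r;
}
```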
In some embodiments, the network infrastructure service environment 30 associates the file stored on the user's network node with the virtual area in response to receiving an indication of the user to share the file with some of the other communicants in the virtual area.
Documents may be shared in a variety of ways. In a first exemplary case, a user shares by directing a document to a view in a virtual area (also referred to herein as a Sococo location) that is being shared by another user. In a second exemplary scenario, the document is shared by viewing the document at a shared Sococo site, where the document is rendered by a server process running a sharing application on an area server (e.g., a Microsoft Office application such as Word, Excel, PowerPoint). In a third exemplary case, the file is shared by uploading the document onto a file store associated with the shared Sococo location. In a fourth exemplary case, a document is shared by uploading the document to a file store, at which point everyone at the shared Sococo location automatically receives a copy of the shared document (similar to direct file sharing); each person then has their own copy of the document. Multiple of these four exemplary cases described may be blended and matched to produce a hybrid document sharing scenario.
In the first case described in the previous paragraph, there is no permanent association between the document and the shared Sococo venue unless a separate explicit record of the interaction is made. In each of the other cases described in the preceding paragraph, the Sococo platform automatically stores a persistent copy of the shared document associated with the shared Sococo location. In this process, the network infrastructure service environment 30 copies the shared file from the user's network node to another data storage location indexed by an attribute value identifying the virtual area. At any time in the future, the user may re-enter the Sococo site and browse the repository of files associated with that space-assuming the user has the appropriate permissions.
In some implementations, administrative rules are associated with a shared Sococo venue to control who can access the venue, who can access its contents, the scope of access to those contents (e.g., what a user can do with the content), and the consequences of accessing those contents (e.g., record keeping, such as audit logs, and payment requirements).
In some embodiments, the entire Sococo site or a region of the Sococo site is associated with a "management grid". In some embodiments, the management grid is implemented in a manner similar to the implementation of the zone grid described in U.S. patent applications 11/923,629 and 11/923,634. The management grid enables developers to associate management rules with a Sococo site or a region of a Sococo site. This avoids creating separate permissions for each file in the place and the complexity that can arise when the same file needs to be processed differently depending on the scenario.
In some implementations, a Sococo location is associated with a management grid that associates one or more regions (or zones) of the location with digital rights management (DRM) functions. The DRM function controls access to the venue, to one or more areas within the venue, or to objects within the venue. The DRM function is triggered whenever a communicant crosses a management grid boundary within the Sococo venue. The DRM function determines whether the triggering action is allowed and, if so, the scope of the allowed action, whether payment is required, and whether an audit record needs to be generated.
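A minimal C sketch of such a DRM check might look as follows, assuming a hypothetical bitmask representation of the allowed actions (the flag names and struct are illustrative only, not the platform's actual rule format):

```c
/* Hypothetical action flags for a management-grid zone. */
enum { ACT_VIEW = 1, ACT_DOWNLOAD = 2, ACT_MODIFY = 4, ACT_DELETE = 8 };

typedef struct {
    unsigned allowed_actions;  /* bitmask of actions permitted in this zone */
    int requires_payment;      /* whether an allowed action requires payment */
    int requires_audit;        /* whether an allowed action must be audited */
} grid_zone_rules;

/* Evaluated whenever a communicant crosses into the zone and attempts an
 * action; returns 1 if the action is allowed, 0 otherwise, and reports the
 * payment and audit side conditions for allowed actions. */
int drm_check(const grid_zone_rules *zone, unsigned action,
              int *needs_payment, int *needs_audit) {
    int ok = (zone->allowed_actions & action) == action;
    *needs_payment = ok && zone->requires_payment;
    *needs_audit = ok && zone->requires_audit;
    return ok;
}
```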
In one exemplary implementation of the Sococo venue, the associated management grid is configured such that if a correspondent is able to enter the venue, he or she can perform actions on all files associated with the room, including manipulating the file, viewing the file, downloading the file, deleting the file, modifying the file, and re-uploading the file.
Thus, each Sococo location can become a repository for the information that is shared and discussed in that room.
9. Recording
The Sococo real-time communication session may be recorded. In this process, the Sococo platform stores a multitrack recording on at least one computer-readable medium. The multitrack recording includes real-time data streams of different data types transmitted over one or more network connections with one or more of a plurality of network nodes in connection with interactions of one or more of a plurality of communicants in a particular one of the virtual areas, and it includes a respective track for each of the different data types of the real-time data streams. In some embodiments, the Sococo platform stores the multitrack recording in accordance with a recording rule that is described in a specification of the particular virtual area, which specification includes a description of the geometric elements of the particular virtual area. In some cases, the recording captures all real-time streams (audio streams, real-time actions such as vector data, file sharing, etc.) and archives them along with the Sococo location where the interaction occurred. In this process, the Sococo platform generates an interaction record that includes a respective one of a plurality of location attribute values that identifies the particular Sococo location, and a respective data file identifier that identifies the multitrack recording.
The Sococo platform plays back the multitrack streams of audio, actions, chat, etc., recreating what happened in the area. In this process, the Sococo platform replays the individual streams, which is different from playing a recorded interactive movie (i.e., a single homogenized stream) from a fixed vantage point. For example, multitrack playback allows the user to experience the meeting in full detail from any location and vantage point (camera angle) within the venue. It also allows the user to navigate to parts of the area beyond what a single user could experience at one time (e.g., a breakout session of a conference the user did not attend).
The multitrack recording and multitrack playback capabilities of the Sococo platform are particularly useful for meetings in an enterprise setting. For example, a meeting may be recorded for later viewing by any user who could not participate. These capabilities can also be used to create recordings for training, distance learning, news, sports, and entertainment. In these cases, the recording is a set of recordings of generated or scripted real-time streams (e.g., the movements and interactions of a scripted avatar within the Sococo venue).
A person with appropriate permissions can enter the Sococo location and browse/view any recordings related to that location and play them. In some embodiments, the Sococo platform sends the multitrack recorded real-time data stream to a particular one of the plurality of network nodes as a separate data stream that can be operated solely by the particular network node.
10. Pseudo file system
The Sococo platform uses interaction records to associate files and recordings with areas. In some embodiments, the Sococo platform manages the sharing of data files between communicants according to the results of queries on the interaction records. In some embodiments, the Sococo platform uses a pseudo file system (restated, a database system for locating files) to store and organize the interaction records. The records of this database include references to one or more areas, the users present when the file was uploaded or created, and timestamp information. The Sococo platform can then retrieve files based on location within the area (e.g., a room or virtual table top), based on user (anyone present, the communicant, or the creator/uploader), or based on time (a specific time or a range). The Sococo platform may also use traditional file names. The pseudo file system may be queried in a manner that displays all files associated with one or more of an area, a region in an area, a user, or a time. In this process, the interaction records may be queried based on one or more of the following: a time attribute value associated with one or more of the plurality of data files; a location attribute value associated with one or more of the plurality of data files; and a communicant identifier associated with one or more of the plurality of data files. The pseudo file system enables a database-query approach to locating files, rather than the conventional folder/file model.
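The kind of relational lookup described above can be sketched in C as a filter over in-memory interaction records (the `pfs_record` type and its fields are assumptions standing in for the actual database schema):

```c
#include <string.h>
#include <time.h>

/* Illustrative pseudo-file-system record: area, uploader, and timestamp,
 * mirroring the references stored in the interaction records above. */
typedef struct {
    const char *area;      /* virtual area the file is associated with */
    const char *uploader;  /* user who uploaded or created the file */
    time_t when;           /* upload timestamp */
} pfs_record;

/* Count the records matching an optional area, optional uploader, and a
 * time range -- the database-query approach to locating files that the
 * pseudo file system uses instead of a folder/file model.  A NULL filter
 * means "match any". */
int pfs_query(const pfs_record *recs, int n,
              const char *area, const char *uploader,
              time_t from, time_t to) {
    int hits = 0;
    for (int i = 0; i < n; i++) {
        if (area && strcmp(recs[i].area, area) != 0) continue;
        if (uploader && strcmp(recs[i].uploader, uploader) != 0) continue;
        if (recs[i].when < from || recs[i].when > to) continue;
        hits++;
    }
    return hits;
}
```

Combining the filters narrows the result set toward uniqueness, just as the refining queries described below do.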
The pseudo file system allows users of the Sococo platform to use a variety of possible policies to find information (e.g., uploaded files or records) stored by the platform. For example, a user may request to view a list of all files uploaded to a particular area by a particular user, and then select one of those files to download onto their own computer. Alternatively, a user may request to view all files uploaded to a portion of a region when that user and another user are together in that portion of the region. Alternatively, the user may request to view all files uploaded to the region today or last week. The user may then want to display only those files that were uploaded when some other user was present.
For example, Alice may remember that she was in Charlie's virtual office with Bob when the file she is trying to locate was uploaded to the Sococo platform. After finding many possible files to choose from, she starts listening to recordings of the audio captured while the various files were being uploaded, to refresh her memory of the circumstances in which those files were uploaded. The pseudo file system makes it possible to perform such queries and reviews and then, if necessary, to make further relational queries to refine the search for a particular piece of information. A specific piece of information may be located by a number of different paths, depending on what the user can remember of the circumstances in which the Sococo platform stored the information.
Conventional techniques for locating bit fields use the following identification syntax:
//hostname/drive/path/name.ext
wherein each syntax element is specified as follows:
in this discussion, the term "bit field" refers to binary content that specifies a file (the content is typically stored independently of the file name and other metadata associated with the content in conventional file systems). One example of a code implementation in this manner uses the C programming language command fopen as follows:
fopen(โ€ณ//hostname/drive/path/name.extโ€ณ)๏ผ›
Execution of this command opens a stream of words (e.g., 8, 16, or 32 bits) that may be read into a buffer and processed in the buffer, with the process repeating until the stream is exhausted or closed.
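The read-process-repeat loop just described can be sketched in portable C (the byte-counting "processing" step is a placeholder for whatever the invoker does with the buffer, and the `demo_stream` helper exists only for demonstration):

```c
#include <stdio.h>
#include <string.h>

/* Read words into a buffer, process them, and repeat until the stream is
 * exhausted.  Here "processing" just counts the bytes read; a real invoker
 * would decode or render the buffer contents instead. */
long consume_stream(FILE *f) {
    unsigned char buf[8192];
    size_t n;
    long total = 0;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        /* ... process the buffer contents here ... */
        total += (long)n;
    }
    return total;
}

/* Demonstration helper: build a temporary stream holding the given bytes. */
FILE *demo_stream(const char *data) {
    FILE *f = tmpfile();
    fwrite(data, 1, strlen(data), f);
    rewind(f);
    return f;
}
```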
The implementation of the Sococo platform incorporates a new pseudo file system technology that introduces a relational database to replace the traditional //hostname/drive/path/name.ext parser and the associated bit field locator. In these embodiments, the pseudo file system provides an alternative technique for locating bit fields (content that is typically associated with file names in conventional file systems). These embodiments use the following identification syntax:
//hostname:dbname"query"
where //hostname is the same as above, and dbname is an optional database name on that host. If the specified database does not exist, the query is directed to a default database configured by the host. The "query" string has relational database semantics (e.g., SQL semantics). In some embodiments, the schema is based on the following:
One example of a code implementation of these embodiments uses the following C programming language command fopen:
fopen("//hostname:dbname'unique query'");
Execution of this command opens a stream of words (e.g., 8, 16, or 32 bits) that can be read into a buffer and processed in the buffer, with the process repeating until the stream is exhausted or closed. In these embodiments, a query for a single time, a single place, a single source, or a name, if any, is always unique (i.e., it returns zero or one record). Any other query returns zero or more records. If there are multiple candidate bit fields, the set of returned records may be parsed and processed, presented to the user to be singled out from a list, or further refined with queries based on the time, people, place, or source fields. Once uniqueness is achieved, the location value may be committed to the storage system and streamed to the invoker, or the entire bit field may be transmitted as a unit.
The Sococo platform may retrieve a particular one of the data files based on the results of a query on the interaction records requested by a particular client node. In response to the client node's request, the Sococo platform may send to the particular network node a storage location identifier associated with the particular data file, or it may send to the particular network node information derived from one or more of the plurality of interaction records identified in the query results.
B. Exemplary communication sessions
Referring again to fig. 2, during a communication session each correspondent network node generates a set of respective real-time data streams (e.g., an action data stream, an audio data stream, a chat data stream, a file transfer data stream, and a video data stream). For example, each communicant operates one or more input devices (e.g., the computer mouse 52 and the keyboard 54) that generate a stream of motion data that controls the movement of his or her avatar within the virtual area 66. In addition, the correspondent's voice and other sounds generated locally in the vicinity of the network node 48 are captured by the microphone 60. The microphone 60 generates an audio signal that is converted to a real-time audio stream. Respective copies of the audio stream are sent to the other network nodes represented by avatars in the virtual area 66. Sounds generated locally at these other network nodes are converted to real-time audio signals and sent to the network node 48. The network node 48 converts the received locally generated audio streams into audio signals that are reproduced by the speakers 56, 58. The action data streams and the audio streams may be transmitted directly or indirectly from each correspondent node to the other correspondent network nodes. In some stream processing topologies, each correspondent network node receives copies of the real-time data streams sent by the other correspondent network nodes. In other stream processing topologies, one or more of the correspondent network nodes receives one or more stream mixes derived from the real-time data streams originating from other ones of the network nodes.
Fig. 5A is a diagrammatic view of one embodiment of a shared virtual area communication environment 160 in which three network nodes 162, 12, 14 are interconnected in a peer-to-peer architecture by an embodiment 164 of a communication network 18. The communication network 164 may be a Local Area Network (LAN) or a global communication network (e.g., the internet). The network nodes 162, 12, 14 are represented by respective computers.
In this architecture, each of the network nodes 162, 12, 14 sends a state change, such as an avatar movement in the virtual area, to each of the other network nodes. One of the network nodes (typically the network node that initiated the communication session) acts as a regional server. In the illustrated embodiment, network node 162 assumes the role of a zone server. The regional server network node 162 maintains global state information and acts as a data server for the other network nodes 12, 14. The global state information includes a list of all objects in the virtual area and their corresponding locations in the virtual area. The regional server network node 162 periodically sends the global state information to the other network nodes 12, 14. The area server network node 162 also registers and sends initialization information to other network nodes requesting to join the communication session. In this process, the area server network node 162 sends a copy of the virtual area specification 166 to each joining network node, which may be stored in a local or remote database. The regional server network node 162 also ensures that the other network nodes 12, 14 can synchronize to a global state if a communication error occurs.
As explained in detail above, the virtual area specification 166 includes a description of the geometric elements of the virtual area and one or more switching rules governing real-time streaming connections between the network nodes. The description of the geometric element allows the respective communication application running on the network node 162, 12, 14 to present the respective view of the virtual area to the correspondent on the respective display monitor. The switching rules indicate how the connection process executing on each network node 162, 12, 14 establishes communication with other network nodes based on the location of the correspondent's avatar in the virtual area.
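One way to sketch the evaluation of such a switching rule in C is shown below; the rule attributes follow the description above (a stream type and a location condition), but the representation itself is an illustrative assumption:

```c
#include <string.h>

/* A hedged sketch of a switching rule; the attribute names follow the
 * description above, but the layout is an assumption, not the actual
 * virtual area specification format. */
typedef struct {
    const char *stream_type;  /* e.g. "audio", "chat", "motion" */
    const char *zone;         /* zone of the virtual area the rule covers */
} switching_rule;

/* A real-time stream connection is formed when the rule's stream type
 * matches and both communicants' avatars occupy the zone named by the
 * rule -- the positional condition the switching rules express. */
int should_connect(const switching_rule *rule, const char *stream_type,
                   const char *source_zone, const char *sink_zone) {
    return strcmp(rule->stream_type, stream_type) == 0 &&
           strcmp(rule->zone, source_zone) == 0 &&
           strcmp(rule->zone, sink_zone) == 0;
}
```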
Fig. 5B is a diagrammatic view of one embodiment of a shared virtual area communication environment 168 in which network nodes 162, 12, 14 (referred to in this architecture as "area client network nodes") communicate in an architecture that is mediated by an area server 170. In this embodiment, the zone server 170 assumes the functions of a zone server performed by the network node 162 in the peer-to-peer architecture shown in fig. 5A. In this regard, the regional server 170 maintains global state information and acts as a data server for the regional client network nodes 162, 12, 14. As explained in detail in U.S. patent applications 11/923,629 and 11/923,634, this architecture allows real-time data stream switching between regional client nodes 162, 12, 14 to be handled in a variety of topologies, including a peer-to-peer topology, a full server intermediary topology in which the regional server 170 acts as a communication intermediary between the network nodes 162, 12, 14, and a hybrid topology that combines aspects of the peer-to-peer and full server intermediary topologies.
Fig. 6 shows an exemplary set of real-time data streams between the sources and receivers of the three network nodes 162, 12, 14 in one embodiment of a shared virtual area communication environment. For ease of illustration, each arrow in fig. 6 represents a respective set of one or more real-time data streams. In accordance with the embodiments described herein, the connections shown in fig. 6 are established based on the switching rules defined in the specification of the shared virtual area, the locations of the communicants' avatars in the shared virtual area, and the particular sources and receivers available at each network node 162, 12, 14.
Fig. 7 shows an exemplary embodiment of a network node 12 comprising an exemplary group 172 of sources and an exemplary group 174 of receivers. Each source is a device or element of the network node 12 that originates data and each sink is a device or element of the network node 12 that receives data. The set of sources 172 includes an audio source 180 (e.g., an audio capture device such as a microphone), a video source 182 (e.g., a video capture device such as a camera), a chat source 184 (e.g., a text capture device such as a keyboard), an action data source 186 (e.g., a pointing device such as a computer mouse), and "other" sources 188 (e.g., a file share source or a source of customized real-time data streams). The set of receivers 174 includes an audio receiver 190 (e.g., an audio rendering device such as a speaker or headphones), a video receiver 192 (e.g., a video rendering device such as a display monitor), a chat receiver 194 (e.g., a text rendering device such as a display monitor), an action data receiver 196 (e.g., a mobile rendering device such as a display monitor), and an "other" receiver 198 (e.g., a printer for printing shared files, a device that renders real-time data streams other than those already described, or software that processes real-time streams to analyze or customize displays).
As exemplified by the network node embodiment shown in fig. 7, it is possible for each network node to have various sources and receivers available. By enabling the zone designer to control how connections are established between sources and receivers, the embodiments described herein provide the zone designer with great control over the communicants' sensory experience when they communicate or otherwise interact in the virtual zone. In this way, the region designer can optimize the virtual region for a particular communication purpose or for a particular communication environment (e.g., gallery, concert hall, auditorium, conference room, and club house).
Exemplary System architecture embodiments
A. Overview of Server intermediary System
Communicants typically access the shared virtual area communication environment from respective network nodes. Each of these network nodes is typically implemented by a general-purpose computer system or a special-purpose communication computer system (or "console"). Each network node performs a communication process that presents a corresponding view of the virtual area on each network node and establishes real-time data stream connections with other network nodes.
Fig. 8 shows an embodiment 200 of the server-mediated, shared virtual area communication environment 168 of fig. 5B, in which the network nodes 162, 12, 14 (referred to in this architecture as "area client network nodes" or simply "area clients") and the area server 170 are interconnected by the communication network 18. In this embodiment, each regional client network node 162, 12, 14 is implemented by a respective computer system of the type described below in connection with the regional client network node 12; the area server 170 also is implemented by a general-purpose computer system of the same type.
B. Exemplary System architecture
Fig. 9 shows an embodiment 210 of the server-mediated, shared virtual area communication environment 200 of fig. 8, in which the area client network nodes 12, 14, 162 communicate in an architecture mediated by the area server 170.
The regional server 170 maintains global state information and acts as a data server for the regional client network nodes 12, 14, 162. Among the global state information maintained by the zone server is a current specification 230 of the virtual zone, a current register 232 of objects in the virtual zone, and a list 234 of any stream mixes currently generated by the zone server 170.
The object register 232 typically includes, for each object in the virtual area, a corresponding object identifier (e.g., a tag that uniquely identifies the object), connection data (e.g., an IP address) that enables a network connection to be established at the network node associated with the object, and interface data that identifies real-time data sources and sinks associated with the object (e.g., sources and sinks of the network node associated with the object). The object register 232 typically includes one or more optional role identifiers for each object that may be explicitly assigned to the object by the correspondent or the area server 170, or may be inferred from other attributes of the object. In some embodiments, the object register 232 also includes the current location of each object in the virtual area as determined by the area server 170 through analysis of the real-time action data stream received from the area client network nodes 12, 14, 162. In this regard, the regional server 170 receives real-time motion data streams from the regional client nodes 12, 14, 162, and tracks avatars and other objects of communicants who enter, leave, and move about the virtual region based on the motion data. The area server 170 updates the object register 232 according to the current location of the tracked object.
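The position-tracking portion of the object register can be sketched in C as follows (the entry layout is a hypothetical simplification of the register described above):

```c
#include <string.h>

/* Simplified object-register entry: identifier, connection data, and the
 * current position tracked from the real-time motion data streams. */
typedef struct {
    const char *object_id;  /* tag that uniquely identifies the object */
    const char *ip;         /* connection data for the associated node */
    float x, y;             /* current position in the virtual area */
} object_entry;

/* Apply one real-time motion sample to the register: locate the object by
 * its identifier and record its new position, as the area server does when
 * tracking avatars that move about the virtual area.  Returns 1 if the
 * object was found and updated, 0 otherwise. */
int register_track(object_entry *reg, int n, const char *id,
                   float x, float y) {
    for (int i = 0; i < n; i++) {
        if (strcmp(reg[i].object_id, id) == 0) {
            reg[i].x = x;
            reg[i].y = y;
            return 1;
        }
    }
    return 0;
}
```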
In the embodiment shown in fig. 9, the regional client network node 12 includes an embodiment of the communication application 142 (see fig. 1) that includes a communication module 212, a three-dimensional visualization engine 214, a chat engine 215, and an audio processing engine 216. Each of the other network nodes 14, 162 typically includes the same or a similar embodiment of the communication application 142 as described in connection with the regional client network node 12.
The communication module 212 controls the switching of real-time data streams between the regional client network node 12 and the other regional client network nodes 14, 162 and the regional server 170. The communication module 212 includes a stream switching manager 218 and a bandwidth monitor 220. The stream switching manager 218 handles the entry and exit of avatars and other objects associated with the regional client network node 12 into and out of the virtual area. The stream switching manager 218 also automatically determines how to switch (e.g., route, connect, and disconnect) real-time data streams between the regional client network node 12 and the other regional client network nodes 14, 162 and the regional server 170. The stream switching manager 218 makes these determinations based on the switching rules contained in the virtual area specification, the current locations of avatars and other objects in the virtual area, and the real-time data stream types associated with those avatars and objects. In some embodiments, the stream switching manager 218 also takes into account the upload and download bandwidth limitations of any of the regional client network node 12, the other network nodes 14, 162, or the regional server 170 when making these determinations. In addition, the stream switching manager 218 re-evaluates the current set of connections in response to events (e.g., upload or download bandwidth failures and requests to enter or leave a virtual area), periodically, or both. As a result of re-evaluating the current connections, the stream switching manager 218 may, for example, take any of the following actions: request a stream mix from the regional server 170, discard a stream mix from the regional server, or break or form one or more direct links with one or more of the other regional client network nodes 14, 162.
In managing the switching of real-time data stream connections, the stream switching manager 218 maintains a set of configuration data including interface data 236, a zone list 238, and the locations 242 of the objects currently in the virtual area. The interface data 236 for each object associated with the regional client network node 12 contains a respective list of all sources and receivers of the real-time data stream types associated with that object. The zone list 238 is a register of all zones in the virtual area currently occupied by the avatar associated with the regional client network node 12. When the correspondent first enters the virtual area, the stream switching manager 218 typically initializes the current object location database 242 with location initialization information downloaded from the area server 170. The stream switching manager 218 then updates the object location database 242 with the current locations of the objects in the virtual area, as determined from an analysis of the real-time action data streams received from, for example, the computer mouse 221, the regional client network nodes 14, 162, and the regional server 170. In some embodiments, the object locations 242 are incorporated into the object register 240. The configuration data maintained by the stream switching manager 218 also includes copies 240, 244, 246 of the object register 232, the stream mix list 234, and the virtual area specification 230, respectively; these copies 240, 244, and 246 are typically downloaded from the regional server 170 and represent a local cache of such data.
The three-dimensional visualization engine 214 presents a view of the virtual area and any objects in the virtual area on the display monitor 132. In this process, the three-dimensional visualization engine 214 reads the virtual area specification data 246, the object register 240, and the current object location database 242. In some embodiments, the three-dimensional visualization engine 214 also reads a correspondent avatar database 248 containing the images needed to render the communicants' avatars in the area. Based on this information, the three-dimensional visualization engine 214 generates a perspective representation (i.e., an image) of the virtual area and the objects in the virtual area as viewed from the perspective (position and orientation) of the communicant's avatar in the virtual area. The three-dimensional visualization engine 214 then renders the perspective representation of the virtual area on the display monitor 132. In some embodiments, the three-dimensional visualization engine 214 determines the visibility of the communicant's avatar in order to limit the amount of data that must be exchanged, processed, and rendered to the portion of the virtual area visible on the display monitor 132.
In some embodiments, the three-dimensional visualization engine 214 is additionally operable to generate a plan-view representation of the virtual area. In these embodiments, the communicant may direct the three-dimensional visualization engine 214 to render one or both of the perspective representation of the virtual area and the plan-view representation of the virtual area on the display monitor 132.
The communicant can control the presented view of the virtual area, or the position of his or her avatar in the virtual area, by transmitting commands from an input device (e.g., the computer mouse 221) to the communication module 212. The three-dimensional visualization engine 214 updates the view of the virtual area and the locations of the objects in the virtual area in accordance with the updated positions in the current object location database 242, and re-renders an updated version of the graphical representation of the virtual area on the display monitor 132. The three-dimensional visualization engine 214 may update the re-rendered image periodically or only in response to movement of one or more of the objects in the virtual area.
The chat engine 215 provides an interface between outgoing chat (text) messages that are received from a local text input device (e.g., a keyboard) of the area client network node 12 and incoming chat streams that are received from the other area client network nodes 14, 162. The chat engine 215 converts chat (text) messages that the communicant enters through the text input device into real-time chat streams that can be transmitted to the other network nodes 14, 162. The chat engine 215 also converts the incoming chat streams into text signals that can be rendered on the display monitor 132.
The audio processing engine 216 generates audio signals that are rendered by the speakers 222, 224 in the communicant's headset 226, and converts the audio signals produced by the microphone 228 in the headset 226 into real-time audio streams that can be transmitted to the other area client network nodes 14, 162.
C. Automatic switching of real-time data streams
As explained above, a shared virtual area is defined by a specification that includes a description of the geometric elements of the virtual area and one or more switching rules governing real-time stream connections between the network nodes. The switching rules typically include a description of the conditions for connecting sources and sinks of real-time data streams in terms of positions in the virtual area. Each rule typically includes attributes that define the real-time data stream type to which the rule applies and the location or locations in the virtual area where the rule applies. In some embodiments, each rule optionally may include one or more attributes that specify a required role of the source, a required role of the sink, a priority level of the stream, and a requested or preferred stream topology.
The switching rules relate to the entry of objects into the virtual area, the movement of objects within the virtual area, and the exit of objects from the virtual area.
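The switching-rule attributes and matching behavior described above can be sketched as follows. This is a minimal illustration in Python; the field names, the matching logic, and the priority ordering are assumptions rather than the patent's specification:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of a switching rule with the attributes described above:
# a stream type, a location (zone), optional source/sink roles, and a priority.

@dataclass
class SwitchingRule:
    stream_type: str                   # e.g. "audio", "chat", "motion"
    zone: str                          # location in the virtual area the rule covers
    source_role: Optional[str] = None  # required role of the stream source, if any
    sink_role: Optional[str] = None    # required role of the stream sink, if any
    priority: int = 0

    def applies(self, stream_type, zone, source_role=None, sink_role=None):
        # A rule with no role attribute applies regardless of role.
        return (self.stream_type == stream_type
                and self.zone == zone
                and self.source_role in (None, source_role)
                and self.sink_role in (None, sink_role))

def matching_rules(rules, **event):
    # Return the rules that govern a stream event, highest priority first.
    return sorted((r for r in rules if r.applies(**event)),
                  key=lambda r: r.priority, reverse=True)

rules = [
    SwitchingRule("audio", "conference-zone", priority=1),
    SwitchingRule("audio", "conference-zone", source_role="presenter", priority=5),
]
hits = matching_rules(rules, stream_type="audio", zone="conference-zone",
                      source_role="presenter")
```

A real switching manager would re-evaluate the rules each time an object enters, moves within, or exits the virtual area, consistent with the events named above.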
Additional details regarding the automatic switching of real-time data streams are described in U.S. patent applications 11/923,629 and 11/923,634, both of which were filed on October 24, 2007.
Sococo platform architecture
FIG. 10 illustrates one embodiment of a system architecture that supports real-time communicant interaction in a virtual environment. The system architecture includes a Sococo platform (also known as a "virtual environment creator"), a heads-up display (HUD), and many applications. In some embodiments, the Sococo platform corresponds to a communication application 142 (see fig. 3).
The HUD interfaces various different business and consumer applications to the Sococo platform. Among the wide variety of different applications enabled by the system architecture are ad hoc communication applications, online sales applications, conferencing applications, training applications, real-time group collaboration applications, content sharing applications (e.g., photo and video sharing applications), and group survey applications.
The Sococo platform is additionally integrated with other systems (e.g., ERP systems, gaming systems, and social networking systems) to support various other applications including, but not limited to, enterprise data collaboration applications, association room applications, simplex space applications, gallery applications, and chat room applications.
Interface with a virtual communication environment
A. Introduction
In addition to the local Human Interface Devices (HIDs) and audio playback devices, the So3D graphical display, avatar, and physics engine, and the system database and storage facilities, the communication application 26 also includes a graphical navigation and interaction interface (referred to herein as a "heads-up display" or "HUD") that interfaces the user with the virtual communication environment. The HUD includes navigation controls that enable the user to navigate the virtual environment and interaction controls that enable the user to control his or her interactions with other communicants in the virtual communication environment. The navigation and interaction controls typically are responsive to user selections that are made using any of a variety of input devices, including a computer mouse, a touch pad, a touch screen display, a keyboard, and a video game controller. The HUD is an application that operates on each client network node. The HUD is a small, lightweight interface that the user can keep up and running at all times on his or her desktop. The HUD allows the user to launch virtual area applications and provides the user with immediate access to real-time contacts and real-time collaborative places (or areas). The HUD is integrated with real-time communication elements of the real-time communication application and/or the underlying operating system so that the HUD can initiate and receive real-time communications with other network nodes. The virtual areas are integrated with the user's desktop through the HUD so that the user can upload files into the virtual environment created by the virtual environment creator 16, use files stored in association with the virtual areas through native client software applications that are independent of the virtual environment while still being present within a virtual area, and, more generally, treat presence and position within a virtual area as aspects of the operating environment, analogous to other operating system functions, rather than as features of just one of several applications.
B. Viewing contacts and places
Figure 11 illustrates one embodiment of a method for the network infrastructure service environment 30 to interface the user with the virtual communication environment.
In accordance with the method of FIG. 11, the interaction service associates place attribute values with the real-time interactions of the user and the other communicants who operate on respective network nodes and share the virtual communication environment, which includes one or more virtual areas and supports real-time communications between the user and the other communicants (FIG. 11, block 250). As explained above, in the illustrated embodiment, the interaction service maintains a relational database 36 that contains records 38 of the interactions between communicants. For each interaction involving respective ones of the communicants in a respective one of the one or more virtual areas, the interaction service generates a respective interaction record that includes a respective place attribute value identifying the virtual area in which the interaction occurred and one or more communicant identifier attribute values identifying the communicants who participated in the interaction. The interaction service typically also incorporates the following additional information into the interaction record for a particular interaction: the start and end times of the interaction; identifications of any data streams that were shared during the interaction; and any hierarchical information that relates the place where the interaction occurred to a larger domain.
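The interaction record described above can be sketched as a simple data structure. The field names below are illustrative assumptions; the specification describes only the kinds of information the record carries:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of an interaction record; the field names are
# illustrative and not taken from the specification.

@dataclass
class InteractionRecord:
    place_id: str               # place attribute: virtual area where the interaction occurred
    communicant_ids: List[str]  # identifiers of the participating communicants
    start_time: float           # start of the interaction (epoch seconds)
    end_time: float             # end of the interaction (epoch seconds)
    shared_streams: List[str] = field(default_factory=list)  # data streams shared
    parent_domain: Optional[str] = None  # hierarchical link to a larger domain

record = InteractionRecord(
    place_id="sococo-main",
    communicant_ids=["PJB", "DVW"],
    start_time=1_230_000_000.0,
    end_time=1_230_003_600.0,
    shared_streams=["audio", "file:roadmap.pdf"],
    parent_domain="sococo-hq",
)
```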
The network infrastructure service environment 28 interfaces the user and the other communicants with the virtual communication environment based on the associated place attribute values (FIG. 11, block 252). As explained above, in the illustrated embodiment, the interaction service supports queries on the relational database 36 subject to the capabilities of the requesting entities. In response to a request from a client network node, the interaction service queries the interaction records and transmits the query results to the requesting one of the network nodes.
In response to a request to view real-time contacts, the interaction service queries the interaction records to identify one or more of the other communicants with whom the user has interacted in the virtual environment, and then transmits to the requesting network node a list of the identified ones of the other communicants. The interaction service typically ranks the identified other communicants based on an evaluation of the interaction records describing the interactions between the user and respective ones of the identified other communicants, and orders the identified ones of the other communicants in the list in accordance with the ranking. In this process, the interaction service typically determines a respective relevance score for each of the other communicants based on at least one statistic that is derived from the interaction records. The interaction service then orders the identified ones of the other communicants in the list in an order that reflects the respective relevance scores. In some cases, the relevance scores measure the frequency of interaction between the user and respective ones of the other communicants. In other cases, the relevance scores measure the recency of interaction between the user and respective ones of the other communicants.
In response to a request to view places, the interaction service queries the interaction records to identify one or more of the virtual areas in which the user has interacted, and transmits to the requesting network node a list of the identified virtual areas. The interaction service typically ranks the identified virtual areas based on an evaluation of the interaction records describing the user's interactions in respective ones of the identified virtual areas, and orders the identified virtual areas in the list in accordance with the ranking. In this process, the interaction service typically determines a respective relevance score for each of the virtual areas based on at least one statistic derived from the interaction records. The interaction service then orders the identified virtual areas in the list in an order that reflects the respective relevance scores. In some cases, the relevance scores measure the frequency of the user's interactions in the respective virtual areas. In other cases, the relevance scores measure the recency of the user's interactions in the respective virtual areas.
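The relevance scoring described above, which applies both to contacts and to places, can be sketched as follows. The particular combination of frequency and recency statistics shown here is an illustrative assumption:

```python
from collections import defaultdict

# Minimal sketch of relevance-based ranking: each interaction is a
# (key, timestamp) pair, where the key is a contact or place identifier.
# The frequency/recency weighting is an assumption for illustration.

def rank_by_relevance(interactions, now, recency_weight=0.5):
    stats = defaultdict(lambda: {"count": 0, "last": 0.0})
    for key, timestamp in interactions:
        stats[key]["count"] += 1
        stats[key]["last"] = max(stats[key]["last"], timestamp)

    def score(key):
        s = stats[key]
        frequency = s["count"]
        recency = 1.0 / (1.0 + (now - s["last"]))  # newer interactions score higher
        return (1 - recency_weight) * frequency + recency_weight * recency

    return sorted(stats, key=score, reverse=True)

interactions = [("DVW", 100.0), ("DVW", 200.0), ("EAG", 150.0), ("Tim", 90.0)]
ranking = rank_by_relevance(interactions, now=300.0)
```

Setting `recency_weight` to 0 yields the pure frequency-based ranking, and a weight of 1 yields the pure recency-based ranking, corresponding to the two cases described above.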
FIG. 12 illustrates one embodiment of a method for the communication application 26 to interface a user with the virtual communication environment.
In accordance with the method of FIG. 12, communication application 26 determines interaction options from results of querying at least one interaction database that includes interaction records describing respective interactions of users in the virtual communication environment (FIG. 12, block 254). On the display, the communication application 26 presents a user interface including a graphical presentation of interaction options associated with respective groups of one or more user-selectable controls (fig. 12, block 256). In response to the user's selection of a corresponding one of the user-selectable controls, communication application 26 initiates the user's interaction in the virtual communication environment (FIG. 12, block 258). For example, this process may involve moving a user's graphical presentation into a particular virtual area in response to the user's selection of the graphical presentation of the particular virtual area or in response to the user's selection of the graphical presentation of other communicants present in the particular virtual area.
Exemplary interaction options include an option to interact with a contact and an option to interact with a place.
In some cases, the communication application 26 identifies one or more of the other communicants with whom the user has interacted in the virtual communication environment, and displays in the user interface a respective graphical presentation of each of the identified other communicants in association with at least one respective user-selectable control for interacting with that communicant. In one example, the communication application 26 identifies one or more of the other communicants with whom the user has interacted in a particular one of the virtual areas and displays the graphical presentations of the identified other communicants in association with a graphical presentation of the particular virtual area. The respective graphical presentations of the identified other communicants may be displayed in an array adjacent to the graphical presentation of the particular virtual area. The graphical presentations of the communicants typically are ordered in accordance with a ranking of the identified other communicants that is derived from an evaluation of the interaction records describing the interactions between the user and respective ones of the identified other communicants.
The communication application 26 also identifies one or more of the virtual areas in which the user has interacted, and displays in the user interface a respective graphical representation of each of the identified virtual areas in association with at least one user-selectable control for interacting with that virtual area. The respective graphical representations of the identified virtual areas may be displayed in an array. The graphical representations of the virtual areas typically are ordered in accordance with a ranking of the identified virtual areas that is derived from an evaluation of the interaction records describing the user's interactions in the respective identified virtual areas. In some embodiments, for each of one or more of the identified virtual areas, a respective two-dimensional graphical representation of the virtual area is displayed, and a respective graphical presentation of each of the communicants present in the virtual area is depicted in the respective two-dimensional graphical representation. In some of these embodiments, each of the communicants' respective graphical presentations provides contextual information from which the user can infer a respective activity currently being performed by the respective other communicant in the respective virtual area. The contextual information may include, for example, one or more of the following: information describing the respective virtual areas in which one or more of the other communicants are located, as identified by virtual area identifiers; information describing the respective positions of one or more of the other communicants within a virtual area; and information describing the respective orientations of one or more of the other communicants. The communication application typically presents, in association with the graphical representations of the identified virtual areas, at least one user-selectable control that enables the user to establish a presence in the respective virtual area.
In some cases, the communication application 26 displays a graphical representation of a particular one of the plurality of virtual areas in which the user is present on the display. The graphical representation of the particular virtual area may be a three-dimensional graphical representation or a two-dimensional graphical representation. In the embodiments depicted in the figures discussed below, the HUD includes a two-dimensional graphical representation of the particular virtual area, which is displayed in the lower right corner of a desktop interface window rendered on the user's display. The HUD also includes an immersion control interface that enables a user to select a level of interaction with the particular virtual area from a set of different levels of interaction (e.g., a three-dimensional graphical interface mode of interaction with the virtual area, a two-dimensional graphical interface mode of interaction with the virtual area, and a non-graphical mode of interaction with the virtual area).
FIG. 13 shows an embodiment 260 of a HUD that provides the user with immediate access to his or her real-time contacts and to the virtual places where real-time collaboration occurs. The HUD 260 allows places to be viewed in terms of the people in them and people to be viewed in terms of the places where they are located. These places can be accessed in a variety of different ways, including by most frequently used, most recently used, or application-specific orderings.
The HUD 260 includes an immersion control interface 261 that enables the user to control his or her level of immersion in a virtual area. The immersion control interface 261 includes a graphical immersion level indicator 263, a user-manipulable immersion level controller (or slider) 265, and textual immersion level labels 267 that identify the different immersion levels corresponding to different positions of the slider 265 along the graphical immersion level indicator 263. The user can move the slider 265 along the graphical immersion level indicator 263 with an input device (e.g., a computer mouse) in order to select a level of interaction with a particular virtual area from a set of different interaction levels. For example, in the illustrated embodiment, the user can select audio only (corresponding to the "Off" or bottom position of the immersion level indicator 263), a two-dimensional (2D) plan view (corresponding to the "2D" or middle position of the immersion level indicator 263), or a three-dimensional view of the area, such as a realistic three-dimensional simulation of a physical area (corresponding to the "3D" or top position of the immersion level indicator 263). In particular, the immersion control interface 261 enables the user to change interaction levels by selectively switching among a three-dimensional graphical interface mode of interacting with the virtual area (the "3D" mode), a two-dimensional graphical interface mode of interacting with the virtual area (the "2D" mode), and a non-graphical interface mode of interacting with the virtual area (the "Off" mode).
In the three-dimensional graphical interface mode, the communicants' respective graphical presentations are depicted as three-dimensional avatars; in the two-dimensional graphical interface mode, the communicants' respective graphical presentations are depicted as two-dimensional presence icons or sprites; and in the non-graphical interface mode, the communicants' respective graphical presentations are omitted (i.e., not displayed).
In the two-dimensional graphical interface mode, each communicant who is present in the virtual area is represented by a respective two-dimensional presence icon. In some embodiments, a presence icon changes in appearance in response to the receipt of a real-time data stream from the respective communicant. For example, in some embodiments, the appearance of the presence icon alternates between two different modes at a fixed rate (e.g., a visual feature such as the brightness level alternates between high and low levels, or the appearance of the presence icon changes from a filled view to an outline view). In some embodiments, the triggering real-time data stream corresponds to a real-time data stream generated by a respective input device (e.g., a computer keyboard or a microphone) of the network node being operated by the communicant. In this way, the HUD 260 provides a visual indication of when a particular communicant is interacting (e.g., chatting or talking) in the virtual area.
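The three interaction levels selectable through the immersion control interface can be sketched as a simple mapping from slider positions to modes. The names and the equal division of the slider track are illustrative assumptions:

```python
from enum import Enum

# Sketch of the three immersion levels described above; the class names and
# the equal thirds of the slider track are assumptions for illustration.

class ImmersionLevel(Enum):
    OFF = 0      # bottom position: audio only, no graphical presentation
    TWO_D = 1    # middle position: two-dimensional plan view with presence icons
    THREE_D = 2  # top position: three-dimensional view with avatars

def level_for_slider(position, track_length):
    # Map a slider position along the indicator to one of the three levels.
    fraction = position / track_length
    if fraction < 1 / 3:
        return ImmersionLevel.OFF
    if fraction < 2 / 3:
        return ImmersionLevel.TWO_D
    return ImmersionLevel.THREE_D
```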
The HUD 260 displays an ordered set of place boxes 262. Clicking on one of the place boxes 262 brings the user into the virtual area represented by the selected place box. With respect to people, the Sococo platform has two basic metaphors: Go (take the user to a communicant's area) and Get (bring a communicant to the user's area). These are refined in the HUD 260 by allowing the user to queue requests to "go" or "get" and to communicate with a person by text or voice without "moving." The system notifies the user whenever a communication request is received from another communicant. The user may accept the request, ignore it, or add it to a communications queue. In this way, the user is able to respond to non-priority communications at a later time. For example, the user can queue communications received during a busy period (e.g., while engaged in a current communication session) and, when the user is free, respond to the communication requests in the communications queue.
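The request queueing behavior described above can be sketched as follows. The class and method names are hypothetical; the sketch only illustrates deferring requests that arrive while the user is busy:

```python
from collections import deque

# Hypothetical sketch of a communications queue for "go"/"get" requests:
# requests that arrive while the user is busy are deferred for a later
# response; otherwise they are delivered for immediate accept/ignore handling.

class CommunicationQueue:
    def __init__(self):
        self.pending = deque()

    def receive(self, request, busy):
        if busy:
            self.pending.append(request)
            return "queued"
        return "delivered"

    def next_pending(self):
        # When the user becomes free, queued requests are answered in order.
        return self.pending.popleft() if self.pending else None

queue = CommunicationQueue()
status = queue.receive({"type": "get", "from": "DVW"}, busy=True)
```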
As explained above, the Sococo platform maintains a relational database that records who the user has met and where they met. For each interaction between the user and a real-time contact, the system generates one or more respective interaction records in the relational database. Each interaction record contains a description of the context of the interaction. For example, in some embodiments, an interaction record contains an identifier for the user, an identifier for the contact, an identifier for the place of interaction (e.g., a room in HomeSpace), a description of the hierarchy of the interaction place (e.g., a description of how the interaction room relates to a larger area), the start and end times of the interaction, and a list of all the files and other data streams that were shared during the interaction. Thus, for each real-time interaction, the system tracks when and where it occurred, the communicants who were involved (e.g., their entries and exits), the objects that were activated or deactivated, and the files that were shared. The system is then able to present the results of queries on this information, indexed by place and presented in a ranked order (e.g., by frequency or recency).
In some embodiments, the system may be configured to display to the user a sorted list of the Skype contacts associated with a particular place (e.g., a Skype place); the list may be sorted in a variety of different ways (e.g., by frequency of interaction on Skype or Sococo or both, by total minutes of interaction on Skype or Sococo or both, or by recency of interaction on Skype or Sococo or both). For example, the information stored in the interaction records may be used to drive a frequency-based ranking of the people the user has met in each area, and of the area or areas in which the user most frequently meets particular people. This data typically is used in the HUD 260, but it also can be used by application developers as part of a heuristics system (e.g., a rule that by default permits people who have visited the user's HomeSpace more than five times to enter without knocking, or a rule that allows communicants who were present in an area at a particular time to modify or delete files created by other communicants who were there at the same time).
Each place (represented by a respective box 262 in the HUD 260) is tied to a query on the relational database 36. For each place, the interaction service queries the relational database 36 for all of the contacts the user has met in that area. The interaction service typically presents the identified contacts in a list that is sorted by frequency or recency of interaction (e.g., the people the user interacted with most recently appear first). In other embodiments, the contacts may be sorted in some other application-specific way.
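The per-place contact queries described above can be illustrated with a small relational example. The schema and data below are assumptions made for the sake of the sketch; SQLite stands in for the relational database:

```python
import sqlite3

# Sketch of a per-place query: given a place, list the contacts the user has
# met there, ranked by how often they interacted. The schema is illustrative.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE interactions (place_id TEXT, contact_id TEXT, start_time REAL)")
db.executemany("INSERT INTO interactions VALUES (?, ?, ?)", [
    ("homespace", "DVW", 100.0), ("homespace", "DVW", 200.0),
    ("homespace", "EAG", 150.0), ("sococo-main", "Kim", 180.0),
])

def contacts_for_place(place_id):
    # Group the user's interactions by contact and rank by frequency.
    rows = db.execute(
        """SELECT contact_id, COUNT(*) AS n
           FROM interactions WHERE place_id = ?
           GROUP BY contact_id ORDER BY n DESC""",
        (place_id,),
    )
    return [contact for contact, _ in rows]

homespace_contacts = contacts_for_place("homespace")
```

Sorting by `MAX(start_time)` instead of `COUNT(*)` would give the recency-based ordering that some embodiments use.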
Queries on the relational database may be combined with other searches. For example, queries on the relational database may be combined with queries on contact history data that was generated through interactions with contacts using another communication system (e.g., Skype, Facebook, or Flickr). In one example, a Skype place can be associated with a query on the user's Skype relationship data in order to produce a sorted list of the user's real-time contacts associated with the Skype place.
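Combining a place-based query with contact data from another network can be sketched as a simple merge. The function and the data shapes are illustrative assumptions; contacts found only in the external network are flagged so that the interface can mark them:

```python
# Sketch of composing a place-based contact query with an external contact
# list (e.g., from Skype or Facebook). External-only contacts are flagged.

def combined_contacts(place_contacts, external_contacts):
    # Keep the place-based ranking, then append external-only contacts,
    # marking whether each entry came only from the external network.
    merged = [(contact, False) for contact in place_contacts]
    merged += [(contact, True) for contact in external_contacts
               if contact not in place_contacts]
    return merged

result = combined_contacts(["DVW", "EAG"], ["EAG", "Joe", "Ann"])
```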
FIGS. 13 and 14 show the basic navigation of people and places in the HUD 260. Clicking on the left-pointing arrow 264 displays a list 266 of real-time contacts ranked by frequency of interaction in the associated place (i.e., My HomeSpace). For example, the default state of the HUD 260 is a minimized interface occupying the lower right corner of the user's desktop. For a new user, the HUD 260 displays the user's HomeSpace. HomeSpace is a Sococo application for the user's personal collaboration (i.e., it is his or her personalized collaboration space); HomeSpace is described in more detail in a later section. Clicking on the left-pointing arrow 264 displays the real-time contacts with whom the user communicates most frequently in his or her HomeSpace. The list of names is ranked by frequency: the first name in the list (DVW in this example) represents the communicant with whom the user (PJB in this example) collaborates most frequently in HomeSpace, followed by EAG, Tim, and so on. Clicking on the upward-pointing arrow 268 displays a list of all of the real-time places the user has visited, ranked by frequency, recency, or a user-defined ordering, as shown in FIG. 14. The list of places also shows the real-time activity occurring in each place. For example, DVW, Kim, and Joe all are present in the Sococo Main place, which is represented by the Sococo Main place box 272, and a real-time conversation is being conducted there. Similarly, Jeff, Ann, and Jane all are in the Facebook place represented by the Facebook place box 276.
If any user enters or leaves a particular place, the presence indicators (i.e., the thumbnails together with the associated names or other identifiers) in the place box representing that place are updated automatically in real time. This feature demonstrates the ability of an application designer to place application-specific real-time data into a place box. A place box may represent, or be related to, the location of a communicant or user. For example, a game developer may derive a map of a communicant in the developer's gaming environment, so that others who are connected to that communicant through the relational database will receive a real-time data stream feed of that communicant's current activity. They can then use this place box to navigate to that communicant, communicate with him or her, or "get" him or her. The HUD 260 is able to manage this interface to people and places for many different applications simultaneously.
The real-time data used in the HUD place boxes 262 is provided by the area servers hosting the areas represented by the place boxes, through an interface designed for this purpose. An area server may provide different HUD place box data feeds to different users based on the users' permissions to view the virtual area. For example, if a communicant enters a virtual area that the user does not have permission to view, the HUD place box may display limited detailed information or none at all. In addition, the HUD place box data feed provided by an area server can be customized by the application provider operating that area server in order to present subscribing HUDs with an application-specific view of the area.
C. Viewing contacts by location
FIG. 14 shows how a user views his or her real-time contacts organized by place. In some embodiments, the places also are ranked by frequency of interaction, recency of interaction, or some other interaction-based ranking criterion. For any of these places, the user can click on the corresponding left-pointing arrow (e.g., the arrow 270 associated with the Sococo Main place box 272) to display a list of the real-time contacts with whom the user interacts most frequently in that place. The user's lists vary from place to place, although there is a high probability of overlap between the lists.
For example, clicking on the left-pointing arrow 270 of the Sococo Main place box displays a list of real-time contacts (DVW, Joe, Tom, Margaret) representing the people with whom the user interacts in the Sococo Main place. Clicking on the left-pointing arrow 274 of the Facebook place box displays a different set of real-time contacts: those with whom the user communicates in the Facebook application. In generating this contact list, the system queries the user's Facebook network data. In particular, the system composes (or aggregates) the place-based Sococo query on the Sococo relational database 36 with a query on the Facebook relational network data in order to pull in the user's Facebook contacts that are not in the Sococo relational database 36. In FIG. 14, the "f" icons indicate that the three leftmost contacts associated with the Facebook place are Facebook contacts who are not already Sococo contacts, and the invite icons are associated with controls for sending those Facebook contacts invitations to become Sococo members.
The view of real-time contacts displayed by the HUD 260 in FIG. 14 reveals how the Sococo platform allows users to create inferred social networks. Conventional social networking services (LinkedIn, Facebook, MySpace, and the like) require users to push information into the networking service (send an invitation to a friend and explicitly describe whether that person is a work colleague, a friend, or a casual acquaintance). The Sococo platform, on the other hand, infers the relationships between real-time contacts. For example: "I know that DVW is a work colleague because he and I communicate in the Sococo Main work place." The Sococo platform presents this inferred relationship information back to the user in a meaningful way.
D. Go to a place
The Sococo platform provides a continuous, always-on communication environment. Unlike conventional transactional forms of communication (such as telephone or Skype, in which the user must dial a number and wait for a connection to be established), the Sococo platform allows a user with the appropriate permissions simply to enter a place and begin talking or interacting with anyone who is present.
FIG. 15 shows the basic Sococo connection metaphor. The user clicks on the Sococo Main place box 272 in order to enter the Sococo Main place. At this point, the HUD interface shows the user (PJB) in the Sococo Main place, along with the other communicants (DVW, Kim, and Joe) who already are present there. In accordance with the switching rules established by the area designer, the Sococo platform multiplexes the specified real-time streams (e.g., the streams from the microphones and the speakers) of all of the communicants currently in the Sococo Main place so that, for example, they can see each other's thumbnails or avatars and communicate with (e.g., speak to and hear) one another.
E. Contact history and connections to people
The Sococo platform and the HUD 260 allow the user to view his or her communication history with any of his or her real-time contacts. In some embodiments, in response to the positioning of the user's mouse pointer over a particular one of the boxes representing the user's contacts, the Sococo platform displays the recent communication history (e.g., text chats, voice conversations, file sharing, and so on) with that communicant. For example, FIG. 16 shows an interface 278 containing a recent text chat with Karen, in which the messages between the user and Karen are listed vertically in chronological order, with the user's messages displayed in left-shifted message blocks and Karen's messages displayed in right-shifted message blocks. The interface 278 also shows the basic methods of connecting to another communicant on the Sococo platform:
- Go: take the user to the place where the contact is located.
- Get: bring the communicant to the place where the user is located.
- Text: send an instant message.
- Private chat: send a short voice message (voice clip) that is mixed into the contact's headset so that only the contact can hear it. The HUD shows the user where the contact is and what the contact is doing, providing useful contextual information that can inform the user's choice of the content of the voice message.
The system typically includes appropriate defaults so that the user can freely go to, or take, someone with whom the user communicates regularly, but may have to request permission to go to or take a contact who is a more casual acquaintance.
The Sococo platform and HUD 260 also allow users to connect with contacts that they have through other communication applications (e.g., Skype contacts) but who are not necessarily Sococo users. For example, in FIG. 17, Joe is one of the user's Skype contacts, but he is not a Sococo user. By integrating with the Skype programming interface, the Sococo platform obtains and displays the user's Skype contacts directly in the HUD 260. Clicking on the control 280 in the Skype history interface 281 (labeled "call on Skype") initiates a call to Joe using Skype. The Sococo platform captures the audio stream and multiplexes it into the streams that are mixed for the other users in the current room (e.g., My HomeSpace in the example shown in FIG. 17). Joe can thus participate in a Sococo conversation even though his audio experience is provided by Skype alone. One exemplary embodiment of a communication architecture that enables people to communicate with Sococo platform users via different communication applications is described below in connection with FIG. 36.
F. Viewing a contact's current location
The Sococo platform allows developers to extract data (e.g., multimedia content data and relationship data, such as the user's friends and friends of those friends) from third-party websites through published APIs that allow searches over metadata related to the data. In particular, the Sococo platform includes various programming interfaces that allow developers to integrate existing databases (which may be operated and managed independently of Sococo or of the local application designer) into the Sococo real-time interactive communication environment (i.e., a Sococo site).
FIG. 18 shows how a user can determine the current location of a given contact. In response to the mouse pointer being positioned over the graphical presentation 284 of a real-time contact (i.e., Karen), the Sococo platform displays the current location of that contact in a thumbnail 285 in the mini-map view 282. In this example, the contact Karen is present in a real-time room named Flickr Italy Photos. Flickr is a very popular community-oriented photo-sharing website where users are encouraged to post photos and comment on photos posted by others. The Sococo platform integrates with this service to obtain real-time information about users, e.g., what photos they are viewing now. The Flickr Italy Photos room has been configured to pull from the Flickr database the photos tagged with the "Italy" metadata tag. The photos are then arranged in the Flickr Italy Photos room according to other metadata related to the photos (e.g., according to the photographer).
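The metadata-driven room population just described (pull records carrying a given tag, then group them inside the room by a second metadata field) can be sketched as follows. The photo records and field names below are hypothetical, not Flickr's actual API.

```python
# Hypothetical sketch of populating a room such as "Flickr Italy Photos":
# select the records that carry a metadata tag, then group them by a
# second metadata field (here, the photographer).

def populate_room(photos, tag, group_by="photographer"):
    """Return {group_value: [titles]} for photos carrying the given tag."""
    groups = {}
    for photo in photos:
        if tag in photo["tags"]:
            groups.setdefault(photo[group_by], []).append(photo["title"])
    return groups

photos = [
    {"title": "Venice canal", "tags": ["italy", "water"], "photographer": "Claudio-X"},
    {"title": "Roman forum",  "tags": ["italy"],          "photographer": "Karen"},
    {"title": "Eiffel tower", "tags": ["france"],         "photographer": "Karen"},
]
room = populate_room(photos, "italy")
```

Only the two "italy"-tagged photos end up in the room, each placed under its photographer's group, mirroring the arrangement-by-metadata behavior described above.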
By presenting the location box 282 associated with the area, the user can see that Karen and five other communicants (also represented by sprites 290 in FIG. 18) are looking at the photos (shown by the thumbnails 286, 287, and 288 in FIG. 18) in the Flickr Italy Photos room. Clicking on the Flickr Italy Photos room takes the user directly to the location of his or her real-time contact (as shown in the HUD interface), where the user can instantly initiate a voice or text chat session. The user can adjust the progressive immersion control 261, which is presented as a vertical continuous immersion control slider 265 on the right-hand side of the HUD 260. For example, moving the slider from its current position (labeled "2D") to the position labeled "3D" changes the visual display of the Flickr Italy Photos room from the 2D mini-map view 282 shown in FIG. 18 to the 3D presentation 290 shown in FIG. 19.
In FIG. 19, Karen is presented by an avatar 291 in the 3D presentation 290 of the Flickr Italy Photos room, viewing a photograph 292 taken by the photographer Claudio-X. The room includes controls 294, 296 that allow the user to view the previous/next photo in the sequence of images displayed by the array of thumbnails 298 below the current photo shown on the screen. As shown in FIGS. 18 and 19, the Sococo platform allows users to have a two-dimensional or three-dimensional immersive experience of navigating/browsing photos while interacting in real time. The Sococo platform allows developers to configure a real-time communication environment (i.e., a Sococo site) to pull in asynchronous data (e.g., Flickr photos) that has been accumulated by users and their contacts. These places can then be used by users to communicate with their real-time contacts about their photos and other content. In this way, the Sococo platform enhances the user's experience of his or her asynchronous data and other existing databases. The Sococo platform and the HUD 260 allow users to maintain constant contact with their real-time contacts from a variety of different locations.
In FIG. 20, the user has joined Karen, his real-time contact, in the Flickr Italy Photos room. In this embodiment, the user is represented by a hand-shaped pointer 297. A second real-time contact (e.g., DVW, presented by the contact block 300) initiates a voice conversation with the user directly from the HUD 260. DVW happens to be in an MMOG (massively multiplayer online role-playing game) when he initiates contact with the user. In this embodiment, DVW asks the user to join him in the MMOG, which is displayed in the mini-map view 302. The user views the mini-map view 302 and clicks on the precise location of DVW. In these embodiments, a game server providing the backbone architecture of the MMOG exports location information to the HUD 260. Through the integration between the Sococo platform and the MMOG, the Sococo platform can directly launch the MMOG client software and place the user at exactly the same location as his real-time contact. In these embodiments, the Sococo library is integrated into the MMOG client software so that the MMOG client can launch like any conventional application and yet access and use the Sococo platform.
FIG. 21 shows a graphical representation 304 of the user at the MMOG location 306 (i.e., a zombie-filled dune) that the user requested to enter through the mini-map view 302 shown in FIG. 20. This example also depicts a HUD place square generated by the application server hosting a non-Sococo "area" (i.e., the MMOG). It shows how an existing application server can export an interface to the HUD 260 so that Sococo users can monitor the status of their real-time contacts from outside that application. In fact, they can monitor the status of those contacts even if they do not use the application themselves. If such a user initiates a Go request through the HUD 260 of his Sococo platform, this creates a subscriber-acquisition opportunity for the provider of that application. Additionally, such HUD data feeds may be used as place squares in the users' respective HUD place-square lists.
G. HomeSpace application
As mentioned above, HomeSpace is a Sococo application built on the Sococo platform. It is provided by default to all users when they first register for the service. HomeSpace is a user's personal collaboration space. The user may:
- customize HomeSpace with photos, videos, music, or any form of rich media
- select different visual themes or geometries/architectures to personalize their space, or create their own
- decorate the space with virtual objects that they create themselves or purchase from Sococo or from other users
- ... or exercise various other personalization options
FIG. 22 shows one example of how a user's HomeSpace may be customized. In this example, the user is able to interact with a scheduling application through an interface 310 presented on a wall 32 in a room 314 of the HomeSpace area. The interface 310 includes a graphical presentation 316 of the user's weekly schedule view and a set of control buttons 318 that allow the user to navigate/control the scheduling application.
H. OfficeSpace application
1. Introduction
OfficeSpace is a Sococo application built on the Sococo real-time platform. OfficeSpace is a real-time communication application for the enterprise market. It provides an overview of the technology and user experience offered by applications built on the platform.
Fig. 23 shows how officepspace is initiated and used from the HUD 260. In this example, the common virtual area contains some other virtual area according to the hierarchy of the virtual area. The HUD 260 includes a character control (i.e., a magnified icon 322) having a first character mode in which a graphic representation of a specific virtual area (e.g., the Sococo main room 320) in which the user exists is displayed separately, and a second character mode in which a graphic representation of all virtual areas included in the common virtual area is displayed in a spatial layout. In this example, the HUD 260 initially displays only the Sococo main room 320, where there are three real-time contacts (DV, JA, and PB). Clicking on the zoom-in icon 322 displays a full area view 324 of the Office Space application. The area view 324 displays all real-time rooms contained in the current instance of the OfficeSpace application, including five other rooms associated with the OfficeSpace application instance and the Sococo main room 320. In this example, the depicted example of the officepsace application is organized by function-room for market (Marketing) 326, room for engineering 328, and room for design 330. The individual room of the officepspace application has files associated with it, as indicated by file icon 332, file icon 332 being visually associated with the associated virtual area. The graphical depiction of these rooms also shows real-time presence information-i.e., which real-time contacts are currently in each room. The user can click into any room of the officepspace application where he has permission to enter and begin collaborating (voice, file, image, etc.) in real time with other communicants present in that room.
The following description is based on an exemplary scenario: three colleagues are holding a virtual meeting to prepare for a presentation they are to give later that morning. The three colleagues are in different locations, but each is in front of his or her PC, and they will meet in a Sococo virtual place.
2. Heads-up display
FIGS. 24A and 24B show another embodiment 340 of the heads-up display (HUD), implemented as a semi-transparent user interface that is tucked into the lower right-hand corner of the user's desktop 342. The HUD 340 is the user's application interface to the Sococo platform. Features of the HUD 340 include:
- the HUD 340 is a small, lightweight application intended to run on the user's desktop at all times; and
- the HUD 340 provides a simple interface through which the user can view and interact with contacts and with the Sococo places where interactions take place.
In this embodiment, the HUD 340 is implemented as a floating, substantially transparent (semi-transparent) user interface that provides a persistent interface and access to controls. In the embodiment shown in FIG. 24A, the HUD 340 is transparent except for one or more of the following translucent elements of the interface:
-an outline 337 of the progressive immersion control 345;
- an outline of the user's current location 334, which is presented as an unfilled region within a translucent octagonal location boundary 339;
- sprites 341 representing the real-time contacts in the Sococo location 344; and
- lines 333 that decorate the border of the HUD 340.
In this manner, the HUD 340 is designed to act as a true interface for displaying information and providing access to controls while minimally obscuring the underlying portions of the user's display screen. The HUD 340 efficiently displays:
- which of the user's real-time contacts are currently online,
- where the user and the user's real-time contacts are currently "located" (e.g., where the user is in the Sococo space and where the user's real-time contacts are in the spatial virtual environment),
- a progressive immersion control interface 345 for controlling how a place (a real-time interactive environment) within the relevant virtual area appears, and
- navigation controls enabling the user to quickly connect to a specific place.
The immersion control interface 345 includes an unfilled, semi-transparent graphical immersion level indicator 343, a semi-transparent immersion level controller (or slider) 347, and semi-transparent text immersion level indicators 349 that mark the different levels of immersion corresponding to different positions of the slider 347 along the graphical immersion level indicator 343. The user may move the slider 347 along the graphical immersion level indicator 343 using an input device (e.g., a computer mouse) to select a desired level of interaction from a set of different interaction levels. For example, in the illustrated embodiment, the immersion control interface 345 enables the user to change the level of interaction by selectively switching among a three-dimensional graphical interface mode ("3D" mode) for interacting with the virtual area, a two-dimensional graphical interface mode ("2D" mode) for interacting with the virtual area, and a non-graphical interface mode ("Off" mode). In the three-dimensional graphical interface mode, the graphical presentations of the communicants are depicted as three-dimensional avatars; in the two-dimensional graphical interface mode, they are depicted as two-dimensional presence icons or sprites; and in the non-graphical interface mode, the graphical representations of the communicants and the virtual area are omitted (i.e., not displayed).
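The three interface modes can be sketched as a simple dispatch. The mode names follow the text above; the returned strings are placeholder stand-ins for real rendering paths, and the function name is hypothetical.

```python
# Sketch of the progressive-immersion switch: "3D" renders avatars,
# "2D" renders presence sprites, and "Off" omits the graphical
# presentation entirely.

def render_communicant(name, mode):
    """Return a placeholder rendering for a communicant, or None in Off mode."""
    if mode == "3D":
        return f"{name}: 3D avatar"
    if mode == "2D":
        return f"{name}: presence sprite"
    if mode == "Off":
        return None                      # graphical presentation omitted
    raise ValueError(f"unknown immersion mode: {mode}")

label = render_communicant("Karen", "2D")
```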
In the illustrated OfficeSpace application embodiment, the HUD 340 is set by default to display the Sococo place (i.e., the office) where the meeting is to take place. The Sococo place is presented as an octagonal conference room 344 displayed in the HUD 340. Initially, the conference room 344 is empty because no participants have joined the meeting.
3. Desktop integration
As shown in FIG. 24A, the user can work in his or her normal windowing environment while the Sococo platform and HUD are running and ready to initiate a real-time communication session. For example, the user may work with other applications, such as Microsoft Excel, to create information that can subsequently be shared in a real-time communication session on the Sococo platform (e.g., the Excel table 346 shown in FIG. 24A). The virtual area is integrated with the user's desktop 342 so that the user can (i) drag and drop files into the environment, (ii) use files stored in the area with their native client applications while still present in the area, but independently of the area environment, and (iii) more generally treat presence and location within an area as an aspect of the operating environment, similar to other operating system functionality, rather than as a feature of particular applications.
4. Pre-scheduled conference
The Sococo platform supports both ad hoc and pre-planned meetings. For pre-planned meetings, the Sococo platform issues an alert to the user. For example, in the embodiment shown in FIG. 25, a prompt 348 titled "8 AM-ready Ops review" is displayed in the HUD 340 to inform the user that a meeting is about to begin, and an "accept" button 350 is presented to enable the user to join the review meeting. Clicking on the prompt (i.e., "accept") connects the user to the Sococo place (the octagonal virtual meeting room). In the Sococo place, the user is presented by a small, bright circle 352 (referred to as a "sprite") that shows the user's presence in the conference room 344 (see FIG. 26). From a real-time communication perspective, the user is now in that virtual Sococo place and can talk to anyone else in that same place, because the stream-switching rules in this embodiment of OfficeSpace dictate that all users in a given room are connected in that way.
As shown in FIG. 27, two colleagues have joined the user in the conference room place 344. Both colleagues are similarly represented by respective sprites 354, 356 in the Sococo place 344. All the communicants can now see each other's presence in the room (e.g., see each other's sprites) and hear each other talking. To this end, the Sococo platform multiplexes the microphones and speakers of all the participants together: everyone in that Sococo place can see and hear everyone else at that place.
5. Progressive immersion
While the communicants interact in the virtual area 344, the HUD 340 gives the user independent control over his or her desired view. For example, the user may display a minimized view of the Sococo place (minimized to the lower right corner of the desktop) and engage in an audio conversation while working in a different application (e.g., Microsoft Excel). The user can then choose to change the view and enter a more immersive, three-dimensional rendering of the Sococo place. This is done by changing the setting of the progressive immersion slider 347 in the HUD 340 from Desktop (as shown in FIG. 27) to 3D (as shown in FIG. 28). Upon entering the 3D view mode, the user's desktop displays a 3D rendering of the shared Sococo place 344. The communicants (sprites in desktop mode) now take the form of 3D avatars 362, 363 (corresponding to the hand cursor), and 364, as shown in FIG. 28.
Any data associated with the Sococo place 344 may be displayed on the view screens 366, 368, 370. A view screen is a general data-rendering element that can be used to render any arbitrary data. Examples of the types of data that can be rendered on a view screen include:
- Microsoft PowerPoint presentations
- video
- webcam output
- real-time data fed directly from an organization's ERP system
Sococo utilizes 3D visualization technology to enhance the communication experience where appropriate. In the illustrated embodiment, the Sococo place is designed as an octagon so that information can be displayed on three adjacent walls and taken in at a glance, without the need to look from wall to wall (or between tiled windows in a strictly 2D display). In other embodiments, the Sococo place may take various geometric forms (e.g., rectangular, circular, pentagonal, and arbitrary shapes). The choice of geometry is made by the designer of the application.
6. Social processor
The Sococo platform enables developers to define social processor capabilities and deliver them through plug-ins. A social processor is a set of instructions that is executed automatically when a particular condition occurs or is satisfied at a particular time (e.g., an automatic action triggered by at least one of proximity to other avatars, a location in an area, and a change in the state of an area, such as the entry or exit of a communicant). A social processor may be any arbitrary programmed routine that controls the actions of a user or object in a Sococo place. For example, in some embodiments, if an avatar comes close to a view screen, a social processor automatically snaps the avatar to a grid and centers the avatar in front of the screen so that the user can easily see the screen's contents. This feature of the social processor eliminates the need for complex manipulation of the avatar's movements.
Other examples of social processors include the ability of an avatar to automatically pivot and rotate to acknowledge the presence of another user. For example, FIG. 29 shows the two avatars of FIG. 28 turning from facing each other to facing the user in response to the user entering the Sococo place. The users associated with the two avatars do not have to manipulate their avatars manually; instead, the social processor automatically rotates their heads to acknowledge the new user.
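A social processor of the kind described above might be sketched as a condition/action pair evaluated against area events. The event and state fields below are assumptions for illustration, not the platform's actual plug-in API.

```python
# Sketch of a social processor: a programmed routine fired automatically
# when a condition in the area becomes true (proximity, entry/exit, etc.).

class SocialProcessor:
    def __init__(self, condition, action):
        self.condition = condition      # predicate over (event, state)
        self.action = action            # routine run when the predicate holds

    def handle(self, event, state):
        if self.condition(event, state):
            return self.action(event, state)
        return None                     # condition not met: no automatic action

# Example: snap an avatar to a viewing position when it comes within
# two distance units (Manhattan distance) of a view screen.
def near_screen(event, state):
    ax, ay = event["avatar_pos"]
    sx, sy = state["screen_pos"]
    return abs(ax - sx) + abs(ay - sy) <= 2

def snap_to_screen(event, state):
    return state["screen_pos"]          # new avatar position, centered on screen

snap = SocialProcessor(near_screen, snap_to_screen)
```

An avatar one unit away in each axis is snapped in front of the screen; a distant avatar is left alone, mirroring the proximity-triggered snapping described above.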
In the embodiment shown in FIGS. 28 and 29, a file icon 389 is displayed in the HUD 340 to indicate to the user that there are files (such as documents) associated with that space. In the 3D view, the user can use the hand cursor 363 to pick up one of the documents 301 on his table 303 by clicking on the document. The user may then associate the document with one of the screens 366-370 by moving the document to the selected screen and clicking with the hand cursor 363. The Sococo platform interprets this action as a command to render the document on the selected screen. In some embodiments, the Sococo platform uses an application (e.g., Microsoft Excel) running on the area server to render the document on the screen. Control buttons 305 are provided under each of the screens 366-370. Thus, the Sococo platform associates control buttons with the content-rendering surfaces (e.g., the view screens) of the 3D view.
7. Dynamic space
The Sococo platform allows for the creation of dynamic spaces (i.e., virtual Sococo places that are created on demand by user action). This process typically involves changing the area definition by adding or removing regions of virtual space. Application designers can define templates (e.g., virtual rooms) with various shapes and stream-handling characteristics so that rooms can easily be added to an area according to a desired usage pattern. For example, in an area designed for virtual teleconferencing, an application developer may define one room type designed for breakout sessions among a subset of participants, and another room type for displaying detailed information from slides, spreadsheets, and real-time feeds. When the area is first used it appears with a base set of one or more rooms, and during a real-time conversation participants can add rooms of either type to the space according to their needs. In some embodiments, the added rooms persist for subsequent use. In other embodiments, the added rooms are deleted explicitly by the users when they have finished using them, or automatically by the system as part of garbage collection.
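Template-based dynamic-space creation might look like the following sketch. The template names and fields are hypothetical; copying a template without any attached files is an assumption based on the surrounding description of new rooms appearing with blank view screens.

```python
# Sketch of on-demand room creation from designer-defined templates.
import copy

TEMPLATES = {
    "breakout": {"shape": "octagon", "screens": 3, "files": []},
    "detail":   {"shape": "rectangle", "screens": 1, "files": []},
}

def create_dynamic_space(area, template_name, name):
    """Instantiate a template as a new room and attach it to the area."""
    room = copy.deepcopy(TEMPLATES[template_name])  # fresh copy, no shared state
    room["name"] = name
    area["rooms"].append(room)
    return room

area = {"rooms": []}
new_room = create_dynamic_space(area, "breakout", "Breakout 1")
```

The deep copy keeps each instantiated room independent of its template, so deleting a dynamic room later (explicitly or via garbage collection) cannot affect the designer's template.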
As shown in FIGS. 30-33, to create a dynamic place 380, the user clicks on a wall 382 (FIG. 30) of the existing place 344. Alternatively, the user may create a dynamic space by selecting a dynamic-space-creation command available through a common menu structure provided in some embodiments of the HUD 340. The geometry, size, and default configuration options of the dynamic space are determined by the application developer. Typically, the user has various options to choose from when creating a dynamic space. As shown in FIG. 31, in response to the user's selection of the wall 382, the system generates a dynamic place 380 that is an exact copy of the Sococo place the user was originally in, except that it has no external data (files) associated with its view screens, as can be seen from the blank view screens. The user may enter the new place (FIG. 32), and other communicants 362, 364 may follow the user into the new place (FIG. 33).
8. Auditory zone
The real-time stream-handling techniques in the platform give rise to independent stream-processing zones. The most common example of a stream-processing zone is the auditory zone. In this embodiment of OfficeSpace, a typical auditory zone is an area in which the users can hear any other user in the same zone (i.e., the microphones and speakers of all the users located in that space are multiplexed together so that all the users can hear each other's voices). More details regarding the specification of auditory zones and other types of "zone grids" are described in U.S. applications 11/923,629 and 11/923,634, both filed on October 24, 2007.
FIG. 23 shows an embodiment of the OfficeSpace application in which each octagonal Sococo place represents a separate auditory zone. In this embodiment, the user's avatar (represented by the light-colored sprite) is located in the Sococo place represented by the upper octagonal space in the HUD and therefore cannot hear the conversation between the two users whose avatars are in the Sococo place represented by the lower octagonal space in the HUD; likewise, those two users cannot hear the sounds associated with the room represented by the upper octagonal space.
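The auditory-zone rule just illustrated (a communicant hears only the other communicants in the same zone) can be sketched as follows. The zone assignments are hypothetical, mirroring the upper/lower octagons of FIG. 23.

```python
# Sketch of the auditory-zone rule: only same-zone communicants are
# multiplexed into a listener's audio output.

def audible_to(listener, positions):
    """Return the communicants whose audio is mixed into the listener's output."""
    zone = positions[listener]
    return sorted(p for p, z in positions.items()
                  if z == zone and p != listener)

# "user" is alone in the upper octagon; "A" and "B" share the lower one.
positions = {"user": "upper", "A": "lower", "B": "lower"}
```

The user in the upper zone hears no one, while A and B hear each other, exactly the isolation behavior described above.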
I. 2.5-dimensional visualization
FIG. 34 shows another embodiment 400 of a heads-up display (HUD) that displays a virtual area and communicants using a two-and-a-half-dimensional (2.5D) visualization, which simulates three-dimensional graphics using two-dimensional graphics. In the embodiment shown in FIG. 34, each communicant in the selected place (i.e., the Sococo Main room) is presented by a respective sprite 402 that depicts both the communicant's position in the virtual area and a direction reflecting the associated user's focus of interest. The position of the sprite generally refers to the location of the sprite's center relative to the virtual area. The direction of the sprite refers to the direction in which the sprite's "eyes" appear to face. In the illustrated embodiment, this direction corresponds to a vector 403 that extends from the center of mass of the body 406 of the sprite 402 along a transverse path orthogonal to a line 409 connecting the eyes 404.
In the illustrated embodiment, each sprite 402 has a spherical, at least partially transparent body 406 that interacts with light (indicated by dashed lines 407) from a virtual light source in a manner that enhances the sprite's apparent visual orientation. In particular, the rendering of each sprite body 406 involves displaying a glint point 408 on the body surface, displaying a shaded region 410 on an upper portion of the body where it at least partially obstructs the passage of the virtual light rays, and displaying a second shaded region 412 that is projected onto the "floor" 414 of the Sococo Main room (because the sprite body 406 at least partially obstructs the passage of the virtual light rays). The locations of the glint point 408, the first shaded region 410, and the second shaded region 412 give the sprite 402 a three-dimensional appearance that allows communicants to infer the user's focus of interest within the three-dimensional virtual area (including directions into and out of the plane of the displayed interface).
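The facing-direction geometry described above (a vector orthogonal to the line joining the sprite's eyes) reduces to a 90-degree rotation in the plane. A minimal sketch, with illustrative coordinates:

```python
# Sketch: the sprite "faces" along a vector orthogonal to the line joining
# its two eyes. Rotating the eye line by 90 degrees and normalizing gives
# a unit facing vector.

def facing_vector(left_eye, right_eye):
    """Return the unit vector the sprite faces, given its two eye positions."""
    ex = right_eye[0] - left_eye[0]
    ey = right_eye[1] - left_eye[1]
    fx, fy = -ey, ex                       # rotate the eye line 90 degrees
    norm = (fx * fx + fy * fy) ** 0.5
    return (fx / norm, fy / norm)
```

Which of the two orthogonal directions counts as "forward" depends on the eye ordering convention; the left-to-right ordering used here is an assumption.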
Some embodiments include a social processor that responds to the positioning of a sprite 402 within a threshold distance of the view screens by automatically moving the sprite to a preset position in front of a selected one of the screens 414, 416, 418 and orienting the sprite with its "eyes" facing that screen.
Data processing in a network communication environment
A. Communicating with others through multiple client applications and client-side mixing
FIG. 35 shows one embodiment of a method by which the communication application 26 allows users to connect with contacts that they have through other communication applications (e.g., Skype contacts) but who are not necessarily Sococo users.
In accordance with the method of fig. 35, the communication application 26 displays a graphical representation of a virtual area in a virtual communication environment that supports real-time communication between a first communicant operating on a first network node and a second communicant operating on a second network node (fig. 35, block 454). On the first network node, the communication application 26 executes a first software application that establishes a first real-time data stream connection between the first and second network nodes, where the first real-time data stream connection is associated with a reference to the virtual area (FIG. 35, block 456). While executing the first software application, the communication application 26 on the first network node executes a second software application that establishes a second real-time data flow connection between the first network node and a third network node on which a third correspondent operates, wherein the second real-time data flow connection does not reference the virtual area (fig. 35, block 458). At the first network node, the communication application 26 generates one or more integrated real-time data streams from the real-time data streams exchanged over the first and second real-time data stream connections (fig. 35, block 460).
At least one of the one or more integrated real-time data streams is typically rendered at the first network node. The communication application 26 typically sends the respective integrated real-time data stream of the one or more integrated real-time data streams to the second and third network nodes.
At least two of the real-time data streams exchanged over the first and second real-time data stream connections, respectively, are typically of the same particular data type, and the communication application 26 mixes a plurality of the exchanged real-time data streams of the particular data type at the first network node. For example, in some embodiments, communication application 26 generates a first real-time data stream of the particular data type; the communication application 26 receives a second real-time data stream of the particular data type from the second network node; and communication application 26 receives a third real-time data stream of the particular data type from the third network node. In these embodiments, the process of generating the integrated real-time data stream involves mixing the second and third real-time data streams to generate a first integrated real-time data stream, mixing the first and third real-time data streams to generate a second integrated real-time data stream, and mixing the first and second real-time data streams to generate a third integrated real-time data stream. The communication application 26 renders the first integrated real-time data stream on the first network node, transmits the second integrated real-time data stream from the first network node to the second network node, and transmits the third integrated real-time data stream from the first network node to the third network node. In some embodiments, the communication application 26 passes the third integrated real-time data stream to the second software application. 
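The three integrated streams described above follow a standard "mix-minus" pattern: each node receives a mix of every stream except its own, so no participant hears an echo of his own audio. A sketch, modeling streams as sample lists and mixing as summation (the data model is an assumption for illustration):

```python
# Sketch of integrated-stream generation: for nodes 1, 2, and 3, the
# first integrated stream mixes streams 2+3 (rendered at node 1), the
# second mixes 1+3 (sent to node 2), and the third mixes 1+2 (sent to
# node 3), matching the description above.

def mix(streams):
    """Sample-wise sum of equal-length streams."""
    return [sum(samples) for samples in zip(*streams)]

def integrated_streams(streams):
    """streams: {node: samples}. Return {node: mix of all other nodes}."""
    return {node: mix([s for n, s in streams.items() if n != node])
            for node in streams}

streams = {1: [1, 1], 2: [2, 2], 3: [4, 4]}
out = integrated_streams(streams)
```

Each node's output omits exactly its own contribution, which is why the first network node can render `out[1]` locally while forwarding `out[2]` and `out[3]` to the second and third nodes.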
In some embodiments, the first and second realtime data streams are generated by first and second instances of the first software application running on the first and second network nodes, respectively, and the communication application 26 passes the second integrated realtime data stream from the first instance of the first software application to the second instance of the first software application.
In the example discussed above with reference to FIG. 17, Joe is one of the user's Skype contacts, but he is not a Sococo user. The Sococo platform obtains and displays the user's Skype contacts directly in the HUD 260 through integration with the Skype programming interface. Clicking on the control 280 in the Skype history interface (labeled "call on Skype") initiates a call to Joe using Skype. The Sococo platform takes that audio stream and multiplexes it into the streams that are mixed for the other users in the room. Joe can thus participate in Sococo conversations, although his audio experience is provided by Skype alone.
FIG. 36 is a block diagram of one embodiment of a communication architecture that enables people to communicate with Sococo platform users through different communication applications (e.g., Skype). FIG. 36 shows the audio communication channels established between four network nodes (i.e., System 1, System 2, System 3, and System 4) sharing a virtual area. System 1 represents a client terminal that is not configured to run the Sococo communication platform; instead, System 1 is configured to run an alternative communication system (e.g., Skype). System 2 represents a user terminal running the Sococo communication platform, which includes integration elements that virtualize the playback and audio capture streams of the alternative communication system. Systems 3 and 4 represent two other client terminals running the Sococo communication platform. Summary information for the systems shown in FIG. 36 is provided in the following text boxes:
In operation, the I/O multiplexer demultiplexer transmits audio signals 1 and 2 received from systems 1 and 2 to systems 3 and 4. The I/O multiplexer demultiplexer also sends audio signals 3 and 4 received from systems 3 and 4 to the P routing element of system 2. The P routing element sends audio signals 1, 3, and 4 to the playback element of system 2 and passes audio signals 3 and 4 to the P mixing element of system 2. The P-mixing element of system 2 mixes audio signals 2, 3 and 4 and passes the mixed signals to the integrated elements of system 2. The integrated element passes the mixed signal to an audio capture element of an alternative communication application (e.g., Skype) running on the system 2 and corresponding to the communication application (e.g., Skype) used by the system 1. The alternative audio capture system (CA) passes the captured mixed signal 2+3+4 to the playback element of the alternative communication application running on the system 1.
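The routing just described can be summarized in a small sketch, assuming single-channel signals represented as integers for brevity; the variable names are illustrative, not the patent's element names:

```python
# Sketch of the FIG. 36 audio routing described above. The Skype-side
# client (system 1) must receive a true mix (2+3+4), because the
# alternative system is assumed to be a single-channel communication
# system. Integer "signals" stand in for audio streams.

signals = {1: 10, 2: 20, 3: 30, 4: 40}   # audio originating at each system

# I/O multiplexer/demultiplexer: systems 3 and 4 receive channels 1 and 2
to_system_3 = [signals[1], signals[2]]
to_system_4 = [signals[1], signals[2]]

# Routing element at system 2: the playback element gets signals 1, 3, 4
playback_2 = [signals[1], signals[3], signals[4]]

# Mixing element at system 2: signals 2, 3, and 4 are mixed into the
# single stream captured by the alternative application for system 1
mixed_for_system_1 = signals[2] + signals[3] + signals[4]
```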
In some implementations of the system shown in FIG. 36, the P Mix element can instead subscribe directly to the I/O multiplexer/demultiplexer, making the system more symmetrical. In that case, the P Route element becomes P Mix 1 and receives signals 3 and 4 from the I/O multiplexer/demultiplexer and signal 1 from C Split 1. Because these are sent as independent channels, the output of C Split 1 could be sent directly to the playback element, but that is less flexible (because P Mix 1 can perform an actual mix rather than delivering independent channels; see below). Likewise, the P Mix element becomes P Mix 2 and receives signals 3 and 4 from the I/O multiplexer/demultiplexer and signal 2 from C Split 2. The output of this mixer is a true mix, because we assume that the alternative audio system is a single-channel communication system (even if that channel is stereo, we assume there is no multi-track mixer on the other end to combine signals from multiple sources).
Fig. 36 does not show the interaction of system 3 and system 4 with each other, but only with system 2, and by extension, with system 1. The interaction between systems 3 and 4 may be peer-to-peer or server-mediated, as described above.
In FIG. 36, wherever two streams are shown comma-delimited (indicating a multi-channel route), the system may instead send a mixed stream to conserve internal communication resources (e.g., from the I/O multiplexer/demultiplexer). Streams that must be mixed are indicated by plus signs (i.e., the virtual microphone signal sent by the integration element to the alternative capture element).
B. Multiplexing client software
As described above, in some embodiments, a document may be shared by viewing the document in a shared Sococo location, where the document is rendered by a server process running a sharing application (e.g., a Microsoft Office document processing application such as Word, Excel, PowerPoint) on the virtual server.
In some embodiments, the Sococo platform combines the real-time input streams from multiple users running on different client nodes into a single composite stream. The Sococo platform sends the composite stream to a client application (e.g., a Microsoft Office application) running on the area server node. The Sococo platform routes the output data generated by the client software running on the area server onto the viewscreens in the shared Sococo area. The Sococo platform thereby multiplexes user input streams (e.g., keyboard and/or mouse command streams) to the client software running on the server, and routes the resulting output back to the users. In this way, the Sococo platform processes documents in the client application running on the area server network node in accordance with the composite real-time data stream. The multiplexing client software feature of the Sococo platform enables users to collaborate on the same document. In these embodiments, the Sococo platform multiplexes a single terminal server session among multiple clients to enable collaboration on the same document. This also allows the Sococo platform to support a variety of interactive sessions without the need to create a custom viewer for the native client software application.
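The input-multiplexing step described above — combining per-client keyboard and mouse event streams into one time-ordered composite stream — might be sketched as follows; the event tuples and function name are assumptions for illustration, not the platform's actual interfaces:

```python
import heapq

# Sketch of merging per-client (timestamp, event) input streams into a
# single composite stream ordered by timestamp, as a terminal-server-style
# multiplexer might. Each client's stream is assumed already time-sorted.

def composite_stream(client_streams):
    """Merge per-client (timestamp, event) streams into one ordered stream."""
    return list(heapq.merge(*client_streams, key=lambda event: event[0]))

alice = [(0.10, "mouse_move"), (0.35, "click")]
bob   = [(0.20, "key:a"), (0.40, "key:b")]
merged = composite_stream([alice, bob])
# merged interleaves both clients' events in timestamp order
```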
FIG. 37 shows one embodiment of such a method: by this approach, the network infrastructure service environment 30 multiplexes client software for one or more communicants.
In accordance with the method of FIG. 37, the network infrastructure service environment 30 executes an instance of a client software application associated with a virtual area in a virtual communication environment that supports real-time communication between communicants operating on respective client network nodes (FIG. 37, block 470). The client software application may be any type of client software application. In some implementations, the client software application is a document processing software application (e.g., a Microsoft Office document processing software application). The network infrastructure service environment 30 receives real-time input data streams from respective ones of the client network nodes associated with communicants interacting in the virtual area (FIG. 37, block 472). The real-time input data streams typically originate from input device events (e.g., real-time computer keyboard events and real-time computer mouse events) on respective ones of the client network nodes. The network infrastructure service environment 30 generates a composite data stream from the real-time input data streams (FIG. 37, block 474). The network infrastructure service environment 30 inputs the composite data stream into the running instance of the client software application (FIG. 37, block 476). At least partially in response to the input of the composite data stream, the network infrastructure service environment 30 generates respective instances of an output data stream from the output generated by the running instance of the client software application (FIG. 37, block 478). The network infrastructure service environment 30 transmits the instances of the output data stream to respective ones of the client network nodes associated with communicants interacting in the virtual area (FIG. 37, block 480).
In some embodiments, the network infrastructure service environment 30 transmits the instances of the output data stream in association with a view object in the virtual area so that the communicants can interact with the client software application through the view object in the virtual area. For example, in some of these embodiments, the Sococo platform runs a browser client on the local server and routes the output of the browser client to a screen of the Sococo place. In some implementations, a remote access interface (e.g., a terminal server in a Microsoft Windows operating system environment) is used to route keyboard and mouse input data through the local server and to route the resulting video output to a screen rendered in the shared Sococo place. The area server combines the input commands (e.g., mouse and keyboard inputs) from all users into a single stream and sends that single stream to the client software process running on the area server.
C. Real-time WIKI
FIG. 38 shows one embodiment of such a method: by this approach, the Sococo platform integrates with WIKI resources, which are websites or similar online resources that allow users to collectively add and edit content.
In accordance with the method of FIG. 38, the Sococo platform establishes a virtual area in a virtual communication environment that supports real-time communication between communicants operating on respective network nodes (FIG. 38, block 490). The Sococo platform creates a respective presence in the virtual area for each of one or more of the communicants (FIG. 38, block 492). In response to receiving input from a respective network node associated with a respective one of the communicants present in the virtual area, the Sococo platform communicates information between a file store associated with the virtual area and the WIKI resource (FIG. 38, block 494).
In some embodiments, the process of transferring information between the file store associated with the virtual area and the wiki resource involves transferring the information through a web browser application.
In some embodiments, the process of transferring information between the file store associated with the virtual area and the wiki resource involves importing information associated with the wiki resource into the file store. For example, in some cases, the Sococo platform imports into the file store at least one of a message thread related to the wiki resource and a link to a data file related to the wiki resource. In some cases, the Sococo platform associates the imported information with a display object in the virtual area. In some implementations, the display object corresponds to a web browser window that displays the imported information in its native format. The Sococo platform selects at least a portion of the imported information indicated by a respective one of the communicants present in the virtual area and associates the selected information with a view object in the virtual area. The Sococo platform communicates the selected information associated with the view object to each communicant present in the virtual area. The Sococo platform also allows one or more communicants present in the virtual area to have editorial control over the selected information. The editorial control generally allows the particular communicant to control the rendering of the selected information in relation to the view object and to modify the selected information using a real-time input data stream transmitted from a network node associated with the particular communicant.
The Sococo platform typically generates an interaction record that indexes the imported information with one or more of the following items: a place attribute value identifying the virtual area; and a respective identifier for each of the communicants present in the virtual area.
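A plausible shape for such an interaction record is sketched below; the field names and values are illustrative assumptions, not the platform's actual schema:

```python
import time

# Sketch of an interaction record used to index imported information:
# a place attribute identifying the virtual area plus an identifier for
# each communicant present, as described above. Field names are assumed.

def make_interaction_record(area_id, communicant_ids, imported_ref):
    return {
        "place": area_id,                    # identifies the virtual area
        "communicants": list(communicant_ids),
        "imported": imported_ref,            # e.g., a file or thread reference
        "timestamp": time.time(),            # when the interaction occurred
    }

rec = make_interaction_record("sococo_main_room", ["DVW", "PB"], "wiki:thread/42")
```

Such records could then be queried by place or by communicant identifier, matching the retrieval pattern the claims describe.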
In some embodiments, the transfer of information between the file store associated with the virtual area and the wiki resource involves exporting information from the file store to the wiki resource. This process typically involves exporting to the wiki resource information related to the virtual area. The exported information may be associated with a view object in the virtual area. The exported information may correspond to a data file, related to the view object, that was transmitted to each of the communicants present in the virtual area. In exporting the information to the wiki resource, the Sococo platform may export the information to a location in the wiki resource indicated by a respective one of the communicants present in the virtual area. In some exemplary embodiments, the designated location corresponds to a message thread of the wiki resource. In some cases, the exported information corresponds to at least one of: a data file associated with the virtual area; a reference to a data file associated with the virtual area; and a recording of one or more real-time data streams received from one or more communicants present in the virtual area.
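The export step might be sketched as follows, modeling the wiki as a mapping from message threads to posted items; all names are illustrative assumptions, not actual wiki or platform interfaces:

```python
# Sketch of exporting content from an area's file store to a designated
# wiki location (e.g., a message thread), per the description above.
# The wiki is modeled as a dict of message threads; names are assumed.

def export_to_wiki(wiki, thread, item):
    """Append a data file or a reference (e.g., a URL) to a wiki thread."""
    wiki.setdefault(thread, []).append(item)
    return wiki

wiki = {"design-review": ["initial post"]}
export_to_wiki(wiki, "design-review", "ref:APAP_Sales.ppt")
# the exported reference is appended to the designated message thread
```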
FIG. 39 illustrates another embodiment 500 of the heads-up display (HUD) 260 that presents an embodiment of a wiki real-time collaboration element or panel 508. In this embodiment, the Sococo platform is able to import information (e.g., message threads and links to document files and other content) from a particular wiki into a virtual area 502 (e.g., the Sococo main room in the illustrated embodiment) and export information (e.g., files created or modified during the collaboration process) from the Sococo main room 502 to the wiki. The particular wiki may be selected by one or more users or may be associated with the Sococo main room 502 (e.g., through the virtual area specification of the Sococo main room). In some embodiments, the wiki panel is a web browser window that displays content from the wiki in its native format. The Sococo platform typically accesses the wiki through a web browser application that allows a user to import content from the wiki into the Sococo main room 502 and export content from the Sococo main room 502 to the wiki.
In a first use case, the Sococo platform allows a user to select content displayed or referenced (e.g., by hyperlink) in the wiki panel and direct the selected content to one of the viewscreens 504, 506 in the Sococo main room 502. For example, in the embodiment shown in FIG. 39, the user represented by the sprite 510 (labeled "DVW") has selected the file APAP Sales.ppt, which is referenced in the team wiki panel 508. The user also directs the Sococo platform to render the selected file on the viewscreen 506 (labeled "View 1"). In response, the Sococo platform imports the selected file into memory associated with the Sococo main room 502 and renders the file on View 1. In some embodiments, the Sococo platform indexes the imported file with one or more of the following attributes: the Sococo main room 502; the users currently in the room; the current time; and other attributes related to the communicants' interactions within the room. The user (i.e., DVW) represented by the sprite 510 can then move the sprite to a position adjacent to the viewscreen 1, which, by reference to switching rules in the relevant area specification, signals to the Sococo platform that the user wishes to have edit control over the selected file. When in this position relative to the viewscreen 1, the Sococo platform allows the user to control the editing of the file rendered on the viewscreen 1 using the functionality of the area application. In this regard, the user represented by the sprite DVW can control the reproduction of the contents of the selected file on the viewscreen 1 (e.g., to pan to a different area of a page or to change pages) and change the contents of the selected file. In some embodiments, the file modifications are performed using an interface provided by the file processing application (e.g., Microsoft PowerPoint) that is used to render the selected file on the viewscreen 1. The interface is configured to receive real-time input data streams (e.g., computer keyboard and computer mouse data streams) from the communicant.
In a second use case, the Sococo platform allows the user to export content from the HUD 500 to the wiki. In the example shown in FIG. 39, the users associated with the sprites 512, 514 (labeled "PB" and "MM," respectively) collaborate on a document named APAP Sales.ppt. After the users have completed their modifications to the document, they may direct the application to export the document from the Sococo main room 502 to the team wiki. In response, the Sococo platform exports to the wiki either the document APAP Sales.ppt itself or a reference (e.g., a URI or URL) to the document. The wiki then incorporates the exported information into the designated location (e.g., a message thread) in accordance with the wiki collaboration software that controls the operation of the wiki. In some implementations, the users can generate one or more other files related to their collaboration on the document and export those files, or references to those files, to the wiki. For example, in some embodiments, the users may direct the Sococo platform to create an audio recording of their discussion during the collaboration process on the document APAP Sales.ppt. In addition to the document, the users may also enter comments directly in the wiki.
VII. Features
The following are some of the features described herein:
VIII. Conclusion
Embodiments described herein provide improved systems and methods for navigation and interaction in a virtual communication environment. These embodiments provide an interface that includes navigational controls that enable a user to navigate to a virtual area and interactive controls that enable the user to interact with other communicants in the virtual area.
Other implementations are within the scope of the following claims.

Claims (216)

1. A method, comprising:
determining interaction options from results of querying at least one database (36), the database (36) containing interaction records (38) describing respective interactions of a user in a virtual communication environment (10), the virtual communication environment (10) containing virtual areas (32) and supporting real-time communication between the user and other communicants, wherein each interaction record (38) contains a respective place attribute value and one or more communicant identifier attribute values, the respective place attribute value identifying a respective one of the virtual areas (32) in which a respective one of the interactions occurs, and the communicant identifier attribute value identifying a respective one of the communicants participating in the interaction in the respective virtual area (32);
presenting, on a display (132), a user interface (260), the user interface (260) including a graphical presentation of the interaction options associated with the respective group of one or more user-selectable controls; and
in response to selection by the user of a respective one of the user-selectable controls, initiating interaction by the user in the virtual communication environment (10).
2. The method of claim 1, wherein the determining includes ascertaining one or more of the other communicants with whom the user has interacted in the virtual communication environment as respective ones of the interaction options, and the presenting includes displaying, in the user interface, a respective graphical presentation (226) of each of the ascertained other communicants in association with at least one respective user-selectable control for interacting with the respective other communicant.
3. The method of claim 2, wherein the ascertaining comprises identifying one or more of the other communicants with whom the user has interacted in a particular one of the virtual areas as respective ones of the interaction options, and the presenting comprises displaying the graphical representations (266) of the identified other communicants in association with a graphical representation of the particular virtual area (260).
4. The method of claim 3, wherein the identifying comprises generating a query for the interaction record (38), and the query comprises an identifier of the user and an identifier of the particular virtual area (32).
5. The method of claim 3, wherein the displaying comprises displaying the respective graphical representations (266) of the identified other communicants in an array adjacent to the graphical representation of the particular virtual area and ordered according to a ranking of the identified other communicants derived from an evaluation of the interaction records (38) describing the interactions between the user and the respective ones of the identified other communicants.
6. The method of claim 2, wherein the initiating includes moving a graphical presentation of the user into a particular one of the virtual areas (306) in response to the user selection of one of the graphical presentations (300) of the other communicants who has presence in the particular virtual area (306).
7. The method of claim 2, wherein in response to the user selection of the graphical presentation (284) of a particular one of the other communicants, the initiating includes sending an invitation to the particular other communicant to join the user in a particular one of the virtual areas in which the user has presence.
8. The method of claim 2, wherein
In response to the user selection of the graphical presentation (284) of a particular one of the other communicants, displaying a view (282) in a location in which the particular other communicant has an existence in relation to the selected graphical presentation.
9. The method of claim 8, further comprising:
in response to the user selection of the graphical presentation (300) of the particular other communicant, launching a client software application that enables real-time interaction between the user and the particular other communicant in the location in which the particular other communicant has a presence.
10. The method of claim 9, wherein the launching of the client software application connects the user to an online video game.
11. The method of claim 1, wherein the determining includes ascertaining one or more of the virtual areas in which the user has interacted as respective ones of the interaction options, and the presenting includes displaying, in the user interface, a respective graphical presentation (262) of each of the ascertained virtual areas in association with at least one user-selectable control for interacting with the respective virtual area.
12. The method of claim 11, wherein the displaying includes displaying the respective graphical representations (262) of the ascertained virtual areas in an array, the array being ordered according to a ranking of the ascertained virtual areas, the ranking resulting from an evaluation of the interaction records (38) describing the interactions between the user and the respective ones of the ascertained virtual areas.
13. The method of claim 11, wherein the initiating includes moving a graphical presentation of the user into a particular one of the virtual areas in response to the user selection of the graphical presentation (262) of the particular virtual area.
14. The method of claim 11, wherein the displaying includes, for each of one or more of the ascertained virtual areas
Displaying a corresponding two-dimensional graphical representation (272) of said virtual area, and
in the respective two-dimensional graphical representation (272), a respective graphical representation of each of the communicants who has presence in the virtual area is depicted.
15. The method of claim 14, wherein each of the respective graphical presentations of the communicants provides contextual information from which the user can infer a respective activity currently performed by the respective other communicant in the respective virtual area.
16. The method of claim 15, wherein the context information comprises one or more of: information describing respective locations of the one or more other communicants in terms of virtual area identifiers; information describing respective locations of the one or more other communicants within the virtual area; and information describing the respective directions of the one or more other communicants.
17. The method of claim 11, wherein the presenting includes presenting, in relation to each of the graphical presentations (262) of the ascertained virtual areas, at least one user-selectable control that enables the user to establish presence in the respective virtual area.
18. The method of claim 1, wherein the presenting includes displaying a graphical representation (260) of a particular one of the virtual areas in which the user has presence.
19. The method of claim 18, wherein said ascertaining comprises identifying as respective ones of said interaction options one or more of said other communicants with which said user has interacted in said particular virtual area, and said presenting comprises displaying said respective graphical presentations (266) of said identified other communicants in an array adjacent to said graphical presentation of said particular virtual area and ordered according to a ranking of said identified other communicants, said ranking derived from an evaluation of said interaction records (38) describing said interactions between said user and said respective ones of said identified other communicants.
20. The method of claim 19, wherein said ascertaining further comprises ascertaining as respective ones of the interaction options one or more of the virtual areas in which the user has interacted, and said presenting comprises displaying the respective graphical presentations (262) of the ascertained virtual areas in an array adjacent to the graphical presentation of the particular virtual area and ordered according to a ranking of the ascertained virtual areas, the ranking derived from an evaluation of the interaction records describing the interactions between the user and the respective ones of the ascertained virtual areas.
21. The method of claim 18, further comprising receiving a real-time data stream comprising data related to real-time activity occurring in the particular virtual area, and wherein the displaying comprises displaying a graphical representation of the real-time activity derived from the received real-time data stream in the graphical representation (260) of the particular virtual area.
22. The method of claim 18, further comprising depicting, on the display (132), a respective graphical representation of each of the communicants who has presence in the virtual area, wherein the depicting comprises changing a particular one of the graphical representations of a particular one of the communicants in response to receipt of a real-time data stream from the network node associated with the particular communicant.
23. The method of claim 22, wherein the particular graphical presentation corresponds to a two-dimensional presence icon, and the changing includes displaying a different avatar of the presence icon while the real-time data stream is being received.
24. The method of claim 23, wherein the displaying comprises alternately displaying two different avatars of the presence icon at a fixed rate while the real-time data stream is being received.
25. The method of claim 18, wherein said displaying comprises displaying the graphical representation (260) of the particular virtual area in a corner of a desktop interface on the display (132).
26. The method of claim 18, wherein
In response to a determination that there are one or more files associated with the particular virtual area (344), displaying an iconic graphical indication (389) of at least one of the files in relation to the graphical representation of the particular virtual area.
27. The method of claim 18, wherein a common virtual area (328) of the virtual areas comprises the particular virtual area (260) and at least one other virtual area (326) of the virtual areas according to a hierarchy of the virtual areas, and
further comprising, presenting in the user interface a character control (322) having a first character mode in which the graphical representations (260) of the particular virtual area are displayed separately and a second character mode in which the graphical representations of all the virtual areas contained by the common virtual area (328) are displayed in a spatial layout.
28. The method of claim 18, wherein the displaying the graphical representation (260) of the particular virtual area includes importing a real-time data stream feed describing a current state of the particular virtual area.
29. The method of claim 28, wherein the importing comprises importing a map (302) of an online gaming environment that includes the particular virtual area.
30. The method of claim 28, wherein the importing comprises importing the real-time data stream feed from a third-party server.
31. The method of claim 18, further comprising depicting a respective graphical presentation (362, 364) of each of the communicants who has presence in the particular virtual area (344), and moving one or more of the graphical presentations (362, 364) of the communicants in the virtual area in accordance with instructions performed in response to satisfaction of a particular condition.
32. The method of claim 31, wherein the particular condition relates to at least one of: a position of the graphical representations (362, 364) of the communicants relative to each other, a position of the graphical representations of the communicants in the particular virtual area (344), and a change in a state of the particular virtual area (344).
33. The method of claim 1, further comprising presenting a reminder interface (348) on the display (132), the reminder interface including a description of the meeting in a particular one of the virtual areas, wherein the reminder interface (348) includes a user-selectable control (350), and further comprising establishing the user's presence in the particular virtual area in response to a user selection of the control.
34. At least one computer readable medium (124, 128) having computer readable program code embodied therein, the computer readable program code adapted to be executed by a computer (120) to implement a method comprising:
determining interaction options from results of querying at least one database (36), the database (36) containing interaction records (38) describing respective interactions of a user in a virtual communication environment (10), the virtual communication environment (10) containing virtual areas (32) and supporting real-time communication between the user and other communicants, wherein each interaction record (38) contains a respective place attribute value and one or more communicant identifier attribute values, the respective place attribute value identifying a respective one of the virtual areas in which a respective one of the interactions occurs and the communicant identifier attribute value identifying a respective one of the communicants participating in the interaction in the respective virtual area (32);
Presenting, on a display (132), a user interface (260), the user interface (260) including a graphical presentation of the interaction options associated with the respective group of one or more user-selectable controls; and
in response to selection by the user of a respective one of the user-selectable controls, initiating interaction by the user in the virtual communication environment (10).
35. An apparatus, comprising:
a display (132);
a computer readable medium (124, 128) storing computer readable instructions; and
a data processing unit (122) coupled to the memory and operable to execute the instructions and to perform operations based at least in part on the execution of the instructions, the operations including
Determining interaction options from results of querying at least one database (36), the database (36) containing interaction records (38) describing respective interactions of a user in a virtual communication environment (10), the virtual communication environment (10) containing virtual areas (32) and supporting real-time communication between the user and other communicants, wherein each interaction record (38) contains a respective place attribute value and one or more communicant identifier attribute values, the respective place attribute value identifying a respective one of the virtual areas (32) in which a respective one of the interactions occurs, and the communicant identifier attribute value identifying a respective one of the communicants participating in the interaction in the respective virtual area (32);
Presenting, on the display (132), a user interface containing a graphical presentation of the interaction options associated with the respective group of one or more user-selectable controls; and
in response to selection by the user of a respective one of the user-selectable controls, initiating interaction by the user in the virtual communication environment (10).
36. A method, comprising:
displaying, on a display (132), a presentation (400) of a virtual area in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between a user and other communicants;
presenting, on the display (132), user-selectable controls that enable the user to manage interactions of the user with the virtual area and with the other communicants;
responsive to input received from the user through the user-selectable control, establishing a respective presence of the user in the virtual area; and
on the display (132), depicting a respective graphical representation (402) of each of the communicants who has presence in the virtual area, wherein the depicting comprises rendering each of the respective graphical representations of the communicants in a respective location in the virtual area using a three-dimensional sphere element (406), the three-dimensional sphere element (406) supporting a directional graphical visual element (404), the directional graphical visual element (404) having a variable orientation indicating a direction of attention of the user in the virtual area.
37. The method of claim 36, wherein the graphical visual element (404) represents a line of sight.
38. The method of claim 37, wherein the rendering includes rendering each of the respective graphical presentations of the communicants with graphical visual elements (404) representing a pair of eyes.
39. The method of claim 36, wherein the rendering includes depicting an intersection (408, 412) of each of the sphere elements (406) with a virtual ray projected from the respective location in the virtual area.
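Claims 36-39 describe sphere-based avatars carrying a directional visual element whose orientation indicates a communicant's direction of attention. As a minimal geometric sketch (the 2-D plane, the heading convention, and all names are assumptions, not part of the claims):

```python
import math

def gaze_vector(heading_degrees):
    """Unit vector for the directional visual element (e.g. the eyes) on a
    sphere-based avatar; the heading is measured in the area's 2-D plane."""
    r = math.radians(heading_degrees)
    return (math.cos(r), math.sin(r))

# Avatar sphere located at (2, 3), attention directed along +y (90 degrees).
position = (2.0, 3.0)
dx, dy = gaze_vector(90.0)
# Endpoint of a short virtual ray projected from the avatar's location.
ray_end = (position[0] + dx, position[1] + dy)
```

Rotating the avatar then amounts to recomputing this vector and redrawing the eye elements and the projected ray along it.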
40. A method, comprising:
displaying, on a display (132), a presentation of a virtual area in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between a user and other communicants;
presenting, on the display (132), user-selectable controls that enable the user to manage interactions with the virtual region and with ones of the other communicants, wherein the presenting includes displaying an immersion control interface (345), the immersion control interface (345) enabling the user to select a degree of interaction with a particular virtual region (344) from a set of different levels of interaction;
Responsive to input received from the user through the user-selectable control, establishing a respective presence of the user in the virtual area (344); and
on the display (132), a graphical presentation (341) of each of the communicants who has presence in the virtual area (344) is depicted.
41. The method of claim 40, wherein the immersion control interface (345) enables the user to change the interaction level by selectively changing between a three-dimensional graphical interface mode of interacting with the virtual region (344), a two-dimensional graphical interface mode of interacting with the virtual region (344), and a non-graphical interface mode of interacting with the virtual region (344).
42. The method of claim 41, wherein:
in the three-dimensional graphical interface mode, the depicting includes depicting the respective graphical representations of the communicants as three-dimensional avatars (362, 364);
in the two-dimensional graphical interface mode, the depicting comprises depicting the respective graphical representations of the communicants as two-dimensional minigraphs (341); and
in the non-graphical interface mode, the depiction of the respective graphical presentation of the communicant is omitted.
43. The method of claim 40, wherein the displaying includes displaying the representation of the virtual area as a persistent, substantially transparent interface depicting the graphical representations of the communicants at their respective real-time locations in the virtual area (344).
44. The method of claim 40, wherein the immersion control interface includes a user-selectable slider control (347), the slider control (347) having different positions corresponding to respective ones of the different levels of interaction.
45. The method of claim 44, wherein the slider control (347) is movable along an axis of a conical immersion level indicator (343), the conical immersion level indicator (343) having a width, transverse to the axis, that decreases from a first end to a second end along the axis, and the immersion level decreases as the slider control (347) moves from the first end to the second end.
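Claims 40-45 describe an immersion control whose slider position selects one of several interaction levels. A minimal sketch of that mapping, assuming three discrete positions along the cone axis (all names and the position encoding are illustrative, not claimed):

```python
# Positions along the cone axis: 0 = wide (most immersive) end.
MODES = ("3d", "2d", "off")  # 3-D graphical, 2-D graphical, non-graphical

def interface_mode(slider_pos):
    """Map a slider position to one of claim 41's three interface modes."""
    if not 0 <= slider_pos < len(MODES):
        raise ValueError("slider position out of range")
    return MODES[slider_pos]
```

Moving the slider toward the narrow end of the indicator thus steps the client down from three-dimensional avatars, to two-dimensional minigraphs, to no graphical depiction at all.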
46. A method comprising operating a processor (122) to perform operations comprising:
associating place attribute values with real-time interactions of a user and other communicants operating on respective network nodes and sharing a virtual communication environment (10), the virtual communication environment (10) comprising one or more virtual areas (32) and supporting real-time communication between the user and the other communicants, wherein the associating comprises, for each interaction involving a respective one of the communicants in a respective one of the one or more virtual areas, generating a respective interaction record (38), the interaction record (38) containing a respective place attribute value and one or more communicant identifier attribute values, the respective place attribute value identifying the virtual area in which the interaction occurred, and the communicant identifier attribute values identifying the respective ones of the communicants participating in the interaction; and
Interfacing the user and the other communicants with the virtual communication environment (10) in accordance with the associated place attribute values.
47. The method of claim 46, wherein said generating comprises incorporating start and end times of the respective interaction into each of the interaction records (38).
48. The method of claim 46, wherein said generating comprises incorporating into each interaction record (38) an identification of any data streams shared during the respective interaction.
49. The method of claim 46, wherein said generating comprises incorporating into each interaction record (38) any hierarchical information linking a location where the respective interaction occurred with a larger domain.
50. The method of claim 46, wherein each place attribute value identifies a respective one of the virtual areas within the virtual communication environment (10) by uniquely naming the respective virtual area or by describing a unique address of the respective virtual area.
51. The method of claim 46, wherein the interfacing comprises querying the interaction record (38) in response to a request received from a requesting one of the network nodes, and transmitting results of the querying to the requesting one of the network nodes.
52. The method of claim 51, wherein the querying comprises querying the interaction record (38) for one or more of the other communicants with which the user interacted in the virtual communication environment, and the transmitting comprises transmitting a list of identified ones of the other communicants with which the user interacted.
53. The method of claim 52, further comprising ranking the identified ones of the other communicants according to an evaluation of the interaction records (38) describing interactions between the user and respective ones of the identified other communicants, and ordering the identified ones of the other communicants in the list according to the ranking.
54. The method of claim 53, wherein said ranking comprises determining a respective relevance score for each of the other communicants based on at least one statistic derived from the interaction records (38), and said ordering comprises ordering the identified ones of the other communicants in the list in an order reflecting the respective relevance scores.
55. The method of claim 54, wherein the relevance score measures the frequency of interactions between the user and some of the other communicants.
56. The method of claim 54, wherein the relevance score measures recency of interaction between the user and some of the other communicants.
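Claims 54-56 rank contacts by relevance scores derived from statistics such as interaction frequency and recency. One common way to blend the two, shown purely as an illustrative sketch (the record layout, the exponential half-life decay, and all names are assumptions, not part of the claims), is to let every shared interaction contribute a weight that decays with age:

```python
def relevance_score(records, user, other, now, half_life=7.0):
    """Blend frequency and recency: each interaction the two communicants
    share contributes a weight that halves every `half_life` days."""
    score = 0.0
    for rec in records:
        if {user, other} <= rec["communicants"]:
            age_days = now - rec["end_time"]
            score += 0.5 ** (age_days / half_life)
    return score

records = [
    {"communicants": {"alice", "bob"}, "end_time": 10.0},
    {"communicants": {"alice", "bob"}, "end_time": 3.0},
    {"communicants": {"alice", "carol"}, "end_time": 10.0},
]
# bob: 1.0 (today) + 0.5 (a week old) = 1.5; carol: 1.0
```

Sorting contacts by this score in descending order yields the ordered list that claims 54 and 59 recite.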
57. The method of claim 51, wherein the querying comprises querying the interaction record (38) for one or more of the virtual areas in which the user interacted, and the transmitting comprises transmitting a list of identified ones of the virtual areas in which the user interacted.
58. The method of claim 57, further comprising ranking the identified virtual areas according to an evaluation of the interaction records (38) describing interactions of the user in respective ones of the identified virtual areas, and ordering the identified ones of the virtual areas in the list according to the ranking.
59. The method of claim 58, wherein the ranking comprises determining a respective relevance score for each of the virtual areas based on at least one statistic derived from the interaction records (38), and the ordering comprises ordering the identified ones of the virtual areas in the list in an order reflecting the respective relevance scores.
60. The method of claim 59, wherein the relevance score measures a frequency of interaction of the user in some of the virtual areas.
61. The method of claim 59, wherein the relevance score measures recency of interaction of the user in some of the virtual areas.
62. The method of claim 46, wherein said interfacing comprises establishing a respective presence of the user in a particular one of the virtual areas based on at least one statistic derived from the interaction records (38).
63. The method of claim 62, wherein the establishing comprises establishing the respective presence of the user in the particular virtual area based on a frequency of interaction of the user in the particular virtual area.
64. The method of claim 63, wherein the establishing comprises:
automatically establishing the respective presence of the user in the particular virtual area in response to a determination that the frequency of interaction of the user in the particular virtual area satisfies a prescribed threshold level; and
responsive to a determination that the user's interaction frequency in the particular virtual area does not meet the prescribed threshold level, requiring confirmation by an authorized communicant before establishing the respective presence of the user in the particular virtual area.
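Claim 64 has two branches: automatic entry when the user's interaction frequency in the area meets a threshold, and confirmation by an authorized communicant otherwise. A minimal sketch of that decision, with all names and the record layout assumed for illustration only:

```python
def establish_presence(user, area, records, threshold=3, confirm=None):
    """Enter automatically when the user's interaction frequency in the
    area meets the threshold; otherwise ask for confirmation."""
    frequency = sum(1 for r in records
                    if r["place"] == area and user in r["communicants"])
    if frequency >= threshold:
        return "entered"
    if confirm is not None and confirm(user, area):
        return "entered"
    return "denied"
```

The `confirm` callback stands in for whatever channel an authorized communicant would use to approve the entry request.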
65. The method of claim 46, wherein said associating comprises associating a respective current location attribute value with each of the communicants.
66. The method of claim 65, wherein said interfacing comprises selectively enabling access to resources by said user and said other communicants in accordance with at least one administrative rule based on said respective current location attribute values.
67. The method of claim 66, wherein the enabling includes comparing the respective current location attribute values with zones (74), the zones (74) being associated with the administrative rule in accordance with a virtual area specification that includes a description of geometric elements of the virtual area (66).
68. The method of claim 67, wherein the administrative rule describes criteria for accessing the resource.
69. The method of claim 67, wherein the administrative rule describes a scope of access to the resource.
70. The method of claim 67, wherein the administrative rule describes one or more results of access to the resource.
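Claims 66-70 gate resource access on the communicant's current location relative to zones defined in the area specification. A minimal sketch of such a zone lookup and rule check, assuming axis-aligned rectangular zones (the zone encoding and all names are illustrative, not claimed):

```python
def zone_of(position, zones):
    """Find which rectangular zone of the area specification contains a position."""
    x, y = position
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def may_access(position, resource, zones, rules):
    """Access is allowed only when the communicant's current location lies
    in a zone whose administrative rule lists the resource."""
    zone = zone_of(position, zones)
    return zone is not None and resource in rules.get(zone, ())
```

Here `rules` maps each zone name to the set of resources its administrative rule exposes; richer rules could also encode scope and consequences of access, as claims 69 and 70 recite.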
71. The method of claim 46, further comprising storing the interaction record (38) on at least one computer-readable medium (124, 128).
72. A method, comprising:
presenting, on a display (132) at a predetermined time, an invitation (348) to join a meeting scheduled to be conducted in a virtual area of a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between a user and other communicants operating on respective network nodes, and presenting a control (350) for accepting the invitation;
responsive to selection of the control (350) by the user, establishing a respective presence of the user in the virtual area; and
on the display (132), a presentation of a virtual area and a respective graphical presentation of each of the communicants who has presence in the virtual area are depicted.
73. A method, comprising:
displaying, on a display (132), a presentation of a virtual area in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between a user and other communicants operating on respective network nodes;
presenting, on the display (132), user-selectable controls that enable the user to manage interactions with the virtual area and some of the other communicants;
Depicting, on the display (132), a graphical presentation (362, 364) of each of the communicants who has presence in the virtual area (344), wherein the depicting includes determining respective locations of the respective graphical representations (362, 364) of the communicants in the virtual area based on respective real-time differential action streams that describe movements of the respective graphical representations (362, 364) of the communicants in the virtual area and are received from the network nodes, and automatically repositioning at least a particular one of the graphical presentations (362, 364) based on the determined location of the particular graphical presentation in the virtual area and its proximity to at least one other of the graphical presentations of the communicants in the virtual area.
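Claim 73's automatic repositioning keeps nearby avatars from crowding each other. As an illustrative one-dimensional sketch (the claim is not limited to one dimension; the minimum-gap strategy and all names are assumptions):

```python
def reposition(positions, min_gap=1.0):
    """Nudge apart any pair of avatar positions closer than min_gap so that
    nearby graphical representations do not overlap."""
    placed = []
    for x in sorted(positions):
        if placed and x - placed[-1] < min_gap:
            x = placed[-1] + min_gap
        placed.append(x)
    return placed
```

In a full client this adjustment would run each time the differential action streams report new avatar locations.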
74. A method, comprising:
displaying, on a display (132), a presentation of a virtual area (344) in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between a user and other communicants;
presenting, on the display (132), user-selectable controls that enable the user to manage interaction with the virtual area (344) and some of the other communicants, wherein the user-selectable controls include modification controls that enable the user to initiate modifications to the virtual area (344) as needed;
Responsive to input received from the user through the user-selectable control, establishing a respective presence of the user in the virtual area (344); and
on the display (132), a respective graphical presentation (362, 364, 363) of each of the communicants present in the virtual area (344) is depicted.
75. The method of claim 74, further comprising,
in response to the user selecting the modification control, modifying a specification of a geometric element of the virtual area (344).
76. The method of claim 75, wherein the modifying includes changing the specification to add a new region (380) to the virtual area (344).
77. The method of claim 75, wherein the modifying includes changing the specification to remove an existing region of the virtual area (344).
78. The method of claim 75, wherein the modification control is associated with a wall (382) of the virtual area (344), and the modifying is performed in response to receipt of a command from the user to select the wall (382) of the virtual area (344).
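Claims 75-78 describe on-demand modification of the area specification, such as adding or removing a region. A minimal sketch of those two edits, assuming a dictionary-based specification (all names and the rectangle encoding are illustrative, not claimed):

```python
def add_region(spec, name, rect):
    """Extend the area specification with a new region (as in claim 76)."""
    if name in spec["regions"]:
        raise ValueError("region already exists")
    spec["regions"][name] = rect
    return spec

def remove_region(spec, name):
    """Delete an existing region from the specification (as in claim 77)."""
    del spec["regions"][name]
    return spec
```

A wall-associated modification control (claim 78) would call `add_region` with a rectangle adjoining the selected wall.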
79. A method comprising operating a processor (122) to perform operations comprising:
Associating venue attribute values with data files received from communicants operating on respective network nodes and sharing a virtual communication environment (10), the virtual communication environment (10) including one or more virtual areas (32) and supporting real-time communication between the communicants, wherein the associating includes, for each of the data files shared by a respective one of the communicants in a respective one of the one or more virtual areas, generating a respective interaction record (38), the interaction record (38) including a respective one of the venue attribute values and a respective data file identifier, the respective one of the venue attribute values identifying the respective virtual area in which the data file is shared, and the data file identifier identifying the respective data file; and
managing (152) sharing of the data files between the communicants based on the associated venue attribute values.
80. The method of claim 79, wherein the generating comprises, for each of the shared data files, generating the respective interaction record (38), the interaction record (38) comprising an identification of each of the communicants in the respective virtual area, and a respective file location identifier identifying a respective location of the respective data file in physical storage.
81. The method of claim 79, wherein the associating is performed for a particular one of the data files in response to a request from one of the network nodes to share the particular data file on a view object (368) in a respective one of the one or more virtual areas.
82. The method of claim 79, wherein the associating is performed for a particular one of the data files in response to a request from one of the network nodes to share the particular data file via a server process running an application shared by a plurality of the communicants connected to a respective one of the one or more virtual areas.
83. The method of claim 79, wherein the associating is performed for a particular one of the data files in response to a request from one of the network nodes to upload the particular data file to a file store associated with a respective one of the one or more virtual areas.
84. The method of claim 79, wherein the associating is performed for a particular one of the data files in response to a request from one of the network nodes to transmit a respective copy of the data file to each of the other communicants having a respective presence in the particular one of the one or more virtual areas.
85. The method of claim 79, further comprising storing a persistent copy of a particular one of the data files in association with a particular one of the one or more virtual areas, in response to a request from one of the network nodes to share the particular data file in connection with the particular virtual area.
86. The method of claim 85, wherein the managing (152) comprises managing sharing of the particular data file in accordance with at least one management rule associated with the particular virtual area.
87. The method of claim 86, wherein the management rule specifies at least one of: a criterion for accessing the particular data file; a scope of access to the particular data file; and a subsequent task performed in response to a communicant's access to the particular data file.
88. The method of claim 86, wherein the particular virtual area is associated with a management grid that associates one or more zones (74-82) of the virtual area (66) with a digital rights management function that is triggered in response to an action by one of the communicants, the action involving crossing a boundary of the management grid, and the digital rights management function specifying one or more criteria that must be met in order to allow the action.
89. The method of claim 88, wherein the digital rights management function specifies that any communicant permitted to enter the particular virtual area is also permitted to perform one or more permitted actions on any of the data files associated with the particular virtual area.
90. The method of claim 89, wherein the one or more permitted actions include: operating on the particular data file; viewing the particular data file; downloading the particular data file; deleting the particular data file; and modifying the particular data file and re-uploading it.
91. The method of claim 79, wherein the managing (152) comprises querying the interaction record (38).
92. The method of claim 91, wherein the query is specified in accordance with a data file identification grammar of the form //hostname:dbname?"query", hostname being a string uniquely associated with a particular computer, dbname being an identifier of a particular database on the particular computer, and "query" being a string having relational database semantics.
93. The method of claim 92, wherein the relational database semantics have a schema that includes fields for one or more time attribute values that identify one or more times, one or more place attribute values that identify one or more of the virtual areas, and a source attribute value that identifies a source of a data file.
94. The method of claim 91, wherein the querying comprises querying the interaction record (38) based on one or more of: a time attribute value associated with one or more of the data files; a place attribute value associated with one or more of the data files; and a communicant identifier associated with one or more of the data files.
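Claim 92's data file identification grammar is garbled in this translation; assuming a shape along the lines of //hostname:dbname?"query", a parser for it might look like the following sketch (the exact delimiters and all names are assumptions, not part of the claims):

```python
import re

# Assumed shape reconstructed from claim 92: //hostname:dbname?"query"
PATTERN = re.compile(r'^//(?P<host>[^:]+):(?P<db>[^?]+)\?"(?P<query>.*)"$')

def parse_file_query(s):
    """Split a data file identification string into its three parts."""
    m = PATTERN.match(s)
    if not m:
        raise ValueError("not a data file identification string")
    return m.group("host"), m.group("db"), m.group("query")
```

The extracted query string would then be handed to the relational database identified by the hostname and dbname parts.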
95. The method of claim 91, wherein said managing (152) comprises retrieving a particular one of the data files based on a result of the query.
96. The method of claim 95, wherein the retrieving comprises transmitting a storage location identifier associated with the particular data file to a respective one of the network nodes that initiated the query.
97. The method of claim 95, wherein the managing (152) comprises transmitting information derived from one or more of the interaction records (38) identified in the results of the query to a respective one of the network nodes that initiated the query.
98. The method of claim 79, wherein the managing (152) comprises storing, on at least one computer-readable medium (124, 128), a multitrack record of real-time data streams of different data types, the real-time data streams being transmitted over one or more network connections with one or more of the network nodes in connection with interaction of one or more of the communicants in a particular one of the virtual areas, and the multitrack record comprising a respective track for each of the different data types of the real-time data streams.
99. The method of claim 98, wherein the associating comprises generating a respective one of the interaction records (38), the interaction record (38) comprising a respective one of the locality attribute values and a respective data file identifier, the respective one of the locality attribute values identifying the particular virtual area and the respective data file identifier identifying the multitrack record.
100. The method of claim 98, wherein the storing is initiated in response to a request from one of the communicants participating in the interaction to initiate the storing of the multitrack recording.
101. The method of claim 100, wherein the storing is terminated in response to a request from one of the communicants participating in the interaction to stop the storing of the multitrack recording.
102. The method of claim 101, wherein the requests to initiate and stop the storing of the multitrack recording are received in connection with a communicant-selectable recording object in the particular virtual area (344).
103. The method of claim 98, wherein the storing comprises storing all real-time data streams related to the interaction in the particular virtual area in the multitrack recording.
104. The method of claim 103, wherein the storing comprises storing in the multitrack recording all real-time data stream types related to the interaction, including all audio, motion, and chat real-time data streams.
105. The method of claim 98, wherein the storing is performed in accordance with a recording rule described in a specification of the particular virtual area, and the specification contains a description of geometric elements of the particular virtual area.
106. The method of claim 98, wherein the managing (152) comprises transmitting the real-time data streams of the multitrack record to a particular one of the network nodes as independent streams that can be operated on separately by the particular network node.
107. The method of claim 106, wherein the transmitting is performed in response to a request from the particular network node to access the multitrack record.
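Claims 98-107 describe a multitrack record with one track per real-time data type, started and stopped via a recording object. A minimal sketch of such a recorder (all names are assumptions; real tracks would carry timestamped media frames rather than bare values):

```python
class MultiTrackRecorder:
    """One track per real-time data type (e.g. audio, motion, chat);
    start/stop mirror the communicant-selectable recording object."""

    def __init__(self):
        self.tracks = {}
        self.recording = False

    def start(self):
        self.recording = True

    def stop(self):
        self.recording = False

    def feed(self, kind, frame):
        # Frames arriving while stopped are not recorded.
        if self.recording:
            self.tracks.setdefault(kind, []).append(frame)
```

Because each data type lands in its own track, the stored streams can later be transmitted as independent streams, as claim 106 recites.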
108. At least one computer-readable medium (124, 128) having computer-readable program code embodied therein, the computer-readable program code adapted to be executed by a computer (120) to implement a method comprising:
Associating venue attribute values with data files received from communicants operating on respective network nodes and sharing a virtual communication environment (10), the virtual communication environment (10) including one or more virtual areas (32) and supporting real-time communication between the communicants, wherein the associating includes, for each of the data files shared by a respective one of the communicants in a respective one of the one or more virtual areas, generating a respective interaction record (38), the interaction record (38) including a respective one of the venue attribute values and a respective data file identifier, the respective one of the venue attribute values identifying the respective virtual area in which the data file is shared, and the data file identifier identifying the respective data file; and
managing (152) sharing of the data files between the communicants based on the associated venue attribute values.
109. An apparatus, comprising:
a computer readable medium (124, 128) storing computer readable instructions; and
a data processing device (122) coupled to the memory and operable to execute the instructions and to perform operations based at least in part on the execution of the instructions, the operations including
Associating venue attribute values with data files received from communicants operating on respective network nodes and sharing a virtual communication environment (10), the virtual communication environment (10) including one or more virtual zones and supporting real-time communication between the communicants, wherein the associating includes, for each of the data files shared by a respective one of the communicants in a respective one of the one or more virtual zones, generating a respective interaction record (38), the interaction record (38) including a respective one of the venue attribute values and a respective data file identifier, the respective one of the venue attribute values identifying the respective virtual zone in which the data file is shared, and the data file identifier identifying the respective data file; and
managing (152) sharing of the data files between the communicants based on the associated venue attribute values.
110. A method, comprising:
displaying, on a display (132), a graphical representation of a virtual area (32) in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between a first communicant operating on a first network node and a second communicant operating on a second network node;
At the first network node,
executing (456) a first software application that establishes a first real-time data stream connection between the first and second network nodes, wherein the first real-time data stream connection is associated with a reference to the virtual area,
while executing the first software application, executing (458) a second software application that establishes a second real-time data streaming connection between the first network node and a third network node on which a third correspondent operates, wherein the second real-time data streaming connection does not make any reference to the virtual area, and
generating (460) one or more integrated real-time data streams from real-time data streams exchanged over the first and second real-time data stream connections.
111. The method of claim 110, further comprising rendering at least one of the one or more integrated real-time data streams on the first network node.
112. The method of claim 110, further comprising transmitting respective ones of the one or more integrated real-time data streams to the second and third network nodes.
113. The method of claim 110, wherein at least two of the real-time data streams exchanged over the first and second real-time data stream connections, respectively, are of a particular data type, and the generating (460) comprises mixing some of the exchanged real-time data streams of the particular data type at the first network node.
114. The method of claim 113, further comprising, on the first network node:
generating a first real-time data stream of the particular data type;
receiving a second real-time data stream of the particular data type from the second network node; and
receiving a third real-time data stream of the particular data type from the third network node;
wherein the generating (460) comprises mixing the second and third real-time data streams to produce a first integrated real-time data stream, mixing the first and third real-time data streams to produce a second integrated real-time data stream, and mixing the first and second real-time data streams to produce a third integrated real-time data stream.
115. The method of claim 114, further comprising rendering the first integrated real-time data stream on the first network node, transmitting the second integrated real-time data stream from the first network node to the second network node, and transmitting the third integrated real-time data stream from the first network node to the third network node.
116. The method of claim 115, wherein the transmitting comprises passing the third integrated real-time data stream to the second software application.
117. The method of claim 115, wherein the first and second real-time data streams are generated by first and second instances of the first software application executing on the first and second network nodes, respectively, and the transmitting comprises passing the second integrated real-time data stream from the first instance of the first software application to the second instance of the first software application.
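Claims 113-115 describe the first node mixing streams of one data type so that each node receives an integrated stream of the other two parties. A minimal sketch of that mixing pattern, using sample-wise addition of toy audio buffers (the sample format and all names are assumptions, not part of the claims):

```python
def mix(*streams):
    """Sample-wise sum of equal-length audio streams (the 'mixing' step)."""
    return [sum(samples) for samples in zip(*streams)]

# Node 1 generates s1 locally and receives s2, s3 over the two connections.
s1, s2, s3 = [1, 1], [2, 2], [4, 4]
to_render_locally = mix(s2, s3)   # first integrated stream  -> [6, 6]
to_send_node2 = mix(s1, s3)       # second integrated stream -> [5, 5]
to_send_node3 = mix(s1, s2)       # third integrated stream  -> [3, 3]
```

Each integrated stream omits its recipient's own contribution, which is why three different mixes are produced rather than one.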
118. At least one computer-readable medium (124, 128) having computer-readable program code embodied therein, the computer-readable program code adapted to be executed by a computer (120) to implement a method comprising:
displaying (454), on a display (132), a graphical representation of a virtual area in a virtual communication environment, the virtual communication environment (10) supporting real-time communication between a first communicant operating on a first network node and a second communicant operating on a second network node; and
at the first network node,
executing (456) a first software application that establishes a first real-time data stream connection between the first and second network nodes, wherein the first real-time data stream connection is associated with a reference to the virtual area,
While executing the first software application, executing (458) a second software application that establishes a second real-time data streaming connection between the first network node and a third network node on which a third correspondent operates, wherein the second real-time data streaming connection does not make any reference to the virtual area, and
one or more integrated real-time data streams are generated (460) from real-time data streams exchanged over the first and second real-time data stream connections.
119. A local network node, comprising:
a display (132);
a computer readable medium (124, 128) storing computer readable instructions; and
a data processing device (122) coupled to the memory and operable to execute the instructions and to perform operations based at least in part on the execution of the instructions, the operations including
Displaying (454), on the display (132), a graphical representation of a virtual area in a virtual communication environment that supports real-time communication between a first communicant operating on the local network node and a second communicant operating on a remote network node;
at the local network node,
Executing (456) a first software application that establishes a first real-time data stream connection between the local and remote network nodes, wherein the first real-time data stream connection is associated with a reference to the virtual area,
while executing the first software application, executing (458) a second software application that establishes a second real-time data flow connection between the local network node and a second remote network node on which a third correspondent operates, wherein the second real-time data flow connection does not make any reference to the virtual area, and
one or more integrated real-time data streams are generated (460) from real-time data streams exchanged over the first and second real-time data stream connections.
120. A method comprising operating a server network node to perform operations comprising:
executing (470) an instance of a client software application associated with a virtual area in a virtual communication environment that supports real-time communication between communicants operating on respective client network nodes;
receiving (472) real-time input data streams from respective ones of the client network nodes associated with the communicants interacting in the virtual area;
generating (474) a composite data stream from the real-time input data streams;
inputting (476) the composite data stream to the executing instance of the client software application;
generating (478), at least partially in response to the input of the composite data stream, a respective instance of an output data stream from output generated by the executing instance of the client software application; and
transmitting (480) the instances of the output data streams to respective ones of the client network nodes associated with communicants interacting in the virtual area.
121. The method of claim 120, wherein the real-time input data streams are derived from input device events on respective ones of the client network nodes.
122. The method of claim 121, wherein at least some of the input device events correspond to real-time computer keyboard events.
123. The method of claim 121, wherein at least some of the input device events correspond to real-time computer mouse events.
124. The method of claim 120, wherein the client software application is a document processing software application.
125. The method of claim 120, wherein the transmitting (480) includes transmitting the instances of the output data streams in association with a viewscreen object in the virtual area.
126. At least one computer-readable medium (124, 128) having computer-readable program code embodied therein, the computer-readable program code adapted to be executed by a computer (120) to implement a method comprising:
executing (470) an instance of a client software application associated with a virtual area in a virtual communication environment that supports real-time communication between communicants operating on respective client network nodes;
receiving (472) real-time input data streams from respective ones of the client network nodes associated with the communicants interacting in the virtual area;
generating (474) a composite data stream from the real-time input data stream;
inputting (476) the composite data stream to the executing instance of the client software application;
generating (478), at least partially in response to the input of the composite data stream, a respective instance of an output data stream from output generated by the executing instance of the client software application; and
transmitting (480) the instances of the output data streams to respective ones of the client network nodes associated with communicants interacting in the virtual area.
127. A server network node, comprising:
a computer readable medium (124, 128) storing computer readable instructions; and
a data processing device (122) coupled to the memory and operable to execute the instructions and, based at least in part on the execution of the instructions, to perform operations including
executing (470) an instance of a client software application associated with a virtual area in a virtual communication environment that supports real-time communication between communicants operating on respective client network nodes;
receiving (472) real-time input data streams from respective ones of the client network nodes associated with the communicants interacting in the virtual area;
generating (474) a composite data stream from the real-time input data stream;
inputting (476) the composite data stream to the executing instance of the client software application;
generating (478), at least partially in response to the input of the composite data stream, a respective instance of an output data stream from output generated by the executing instance of the client software application; and
transmitting (480) the instances of the output data streams to respective ones of the client network nodes associated with communicants interacting in the virtual area.
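As an editorial illustration only, the server-side arrangement recited in claims 120, 126, and 127 (merging per-client real-time input streams into a composite stream, feeding it to a single executing application instance, and transmitting instances of the resulting output stream back to the clients) can be sketched as follows. All names, data shapes, and the timestamp-merge policy are invented for this sketch and are not part of the claims:

```python
import heapq

def composite_stream(*input_streams):
    """Merge per-client (timestamp, event) streams into one
    timestamp-ordered composite stream."""
    return heapq.merge(*input_streams, key=lambda e: e[0])

class SharedAppInstance:
    """Stand-in for the single executing client-application instance."""
    def __init__(self):
        self.state = []

    def feed(self, event):
        self.state.append(event[1])   # apply the event to shared state
        return list(self.state)       # output snapshot after the event

def serve(input_streams, clients):
    """Drive one shared instance from the composite stream and fan the
    resulting output stream out to every client."""
    app = SharedAppInstance()
    outputs = {c: [] for c in clients}
    for event in composite_stream(*input_streams):
        frame = app.feed(event)
        for c in clients:
            outputs[c].append(frame)  # per-client instance of the output
    return outputs
```

Here each client stream is a list of (timestamp, event) pairs; heapq.merge interleaves them in time order as a stand-in for the claimed composite data stream.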
128. A method, comprising:
establishing (490) a virtual area (502) in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between communicants operating at respective network nodes;
creating (492), for each of one or more of the communicants, a respective presence in the virtual area; and
communicating (494) information between a file store and a wiki resource associated with the virtual area in response to input received from a respective one of the network nodes associated with a respective one of the communicants who has presence in the virtual area (502).
129. The method of claim 128, wherein the communicating (494) comprises importing information associated with the wiki resource to the file store.
130. The method of claim 129, wherein the importing comprises importing at least one of a message thread related to the wiki resource and a link to a data file related to the wiki resource to the file store.
131. The method of claim 129, further comprising associating the imported information with a displayed object (508) in the virtual area.
132. The method of claim 131, wherein the display object corresponds to a web browser window that displays the imported information in a native format.
133. The method of claim 129, further comprising selecting at least a portion of the imported information designated by a respective one of the communicants who has presence in the virtual area (502) and associating the selected information with a viewscreen object (506) in the virtual area.
134. The method of claim 133, further comprising transmitting the selected information associated with the viewscreen object (506) to each of the communicants who has presence in the virtual area (502).
135. The method of claim 133, further comprising allowing a particular one of the communicants who has presence in the virtual area (502) to have edit control over the selected information.
136. The method of claim 135, wherein the edit control allows the particular communicant to control rendering of the selected information associated with the viewscreen object (506) and to modify the selected information using a real-time input data stream transmitted from a network node associated with the particular communicant.
137. The method of claim 129, further comprising generating an interaction record (38), the interaction record (38) indexing the imported information with one or more of: a location attribute value identifying the virtual area; and a respective identifier for each of the communicants having presence in the virtual area.
138. The method of claim 128, wherein the communicating (494) comprises exporting information from the file store to the wiki resource.
139. The method of claim 138, wherein the exporting comprises exporting information associated with the virtual area (502) to the wiki resource.
140. The method of claim 139, wherein the exported information is associated with a viewscreen object (504) in the virtual area (502).
141. The method of claim 140, wherein the exported information corresponds to a data file communicated in association with the viewscreen object (504) to each of the communicants who has presence in the virtual area.
142. The method of claim 139, wherein the exporting comprises exporting the information to a location in the wiki resource specified by a respective one of the communicants who has presence in the virtual area (502).
143. The method of claim 142, wherein the designated location corresponds to a message thread of the wiki resource.
144. The method of claim 142, wherein the exported information corresponds to a record of one or more real-time data streams received from one or more of the communicants who have presence in the virtual area (502).
145. The method of claim 142, wherein the exported information corresponds to at least one of: a data file associated with the virtual area; and a reference to a data file associated with the virtual area (502).
146. The method of claim 128, wherein the communicating comprises communicating the information through a web browser application.
147. At least one computer-readable medium (124, 128) having computer-readable program code embodied therein, the computer-readable program code adapted to be executed by a computer (120) to implement a method comprising:
establishing (490) a virtual area (502) in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between communicants operating at respective network nodes;
creating (492), for each of one or more of the communicants, a respective presence in the virtual area (502); and
communicating (494) information between a file store and a wiki resource associated with the virtual area (502) in response to input received from a respective one of the network nodes associated with a respective one of the communicants having presence in the virtual area (502).
148. An apparatus, comprising:
a computer readable medium (124, 128) storing computer readable instructions; and
a data processing apparatus (120) coupled to the memory and operable to execute the instructions and, based at least in part on the execution of the instructions, to perform operations including
establishing (490) a virtual area (502) in a virtual communication environment (10), the virtual communication environment (10) supporting real-time communication between communicants operating at respective network nodes;
creating (492), for each of one or more of the communicants, a respective presence in the virtual area (502); and
communicating (494) information between a file store and a wiki resource associated with the virtual area (502) in response to input received from a respective one of the network nodes associated with a respective one of the communicants having presence in the virtual area (502).
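As a non-authoritative sketch of the file-store/wiki transfer recited in claims 128 through 148, the two directions of transfer (importing a wiki page's message thread and attached-file references, and exporting a stored item to a designated wiki page) might look like the following; the dictionary shapes standing in for the file store and the wiki resource are invented for illustration:

```python
def import_from_wiki(wiki, page, file_store):
    """Copy a wiki page's message thread and attached-file references
    into the virtual area's file store, indexed by page name."""
    entry = wiki[page]
    file_store[page] = {
        "thread": list(entry.get("thread", [])),
        "files": list(entry.get("files", [])),
    }
    return file_store[page]

def export_to_wiki(wiki, page, item):
    """Append an item (e.g. a data-file reference or a recording) to
    the designated wiki page's attachment list, creating the page if
    it does not yet exist."""
    wiki.setdefault(page, {"thread": [], "files": []})["files"].append(item)
```

A real implementation would go through the wiki resource's web interface rather than in-memory dictionaries; the sketch only shows the bidirectional flow of the claims.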
149. A method, comprising:
associating location attribute values with real-time interactions of a user and other communicants operating on respective network nodes and sharing a virtual communication environment (10), the virtual communication environment (10) containing at least one virtual area and supporting real-time communication between the user and the other communicants, wherein each of the user and the other communicants is associated with a respective object (362, 363, 364) in the virtual area; and
interfacing the user and the other communicants with the virtual communication environment (10) in accordance with the associated location attribute values.
150. The method of claim 149, wherein the associating includes associating a respective current location attribute value with each of the objects (362, 363, 364).
151. The method of claim 150, wherein said interfacing comprises selectively enabling access to resources by said user and said other communicants in accordance with at least one administrative rule based on said respective current location attribute values.
152. The method of claim 151, wherein the enabling comprises comparing the respective current location attribute values against zones (74-82), the zones (74-82) being associated with the administrative rule according to a virtual area specification that includes a description of geometric elements of the virtual area (66).
153. The method of claim 152, wherein the administrative rule describes criteria for accessing the resources.
154. The method of claim 152, wherein the administrative rule describes a scope of access to the resources.
155. The method of claim 152, wherein the administrative rule describes one or more outcomes of access to the resources.
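The zone-keyed governance recited in claims 151 through 155 (criteria for access, a scope of access, and outcomes of access, compared against a communicant's current location attribute value) can be illustrated with a minimal rule table. The rule fields, role names, and resource names are assumptions for illustration, not the claimed virtual area specification format:

```python
# Hypothetical zone-keyed governance table: each rule names the access
# criteria (roles allowed in), the scope of access (resources reachable),
# and an outcome of access.
RULES = {
    "conference_zone": {
        "criteria": {"member", "moderator"},
        "scope": {"audio", "chat", "screen"},
        "outcome": "record_interaction",
    },
}

def can_access(role, zone, resource, rules=RULES):
    """Selectively enable access to a resource based on a communicant's
    current location (zone) attribute value."""
    rule = rules.get(zone)
    return (rule is not None
            and role in rule["criteria"]
            and resource in rule["scope"])
```

The sketch checks criteria and scope only; applying the rule's outcome (e.g. starting a recording) would be a follow-on action.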
156. The method of claim 149, wherein the interfacing comprises enabling one or more of the user and the other communicants to initiate multi-track recording of real-time data streams associated with the virtual area based on selection of a recording object in the virtual area (344).
157. The method of claim 156, further comprising
in response to the selection of the recording object, recording the selected ones of the real-time data streams in accordance with recording rules described in a virtual area specification, the virtual area specification containing a description of geometric elements of the virtual area.
158. The method of claim 149, wherein the interfacing comprises displaying a presentation (272) of the virtual area to the user, the presentation comprising a depiction of a current location of at least one of the objects currently in the virtual area.
159. The method of claim 158, wherein the interfacing comprises querying a database (36) for at least one record (38) comprising at least one location attribute value associated with the virtual area.
160. The method of claim 159, wherein the querying comprises querying the database (36) for records (38) comprising a location attribute value associated with the virtual area and at least one communicant attribute value identifying the user.
161. The method of claim 160, wherein the database comprises records (38) describing interactions between the user and one or more of the other communicants in the virtual area.
162. The method of claim 161, further comprising presenting to the user an array of graphical representations (266) of the other communicants associated with the virtual area.
163. The method of claim 162, further comprising determining a respective relevance score for each of the other communicants associated with the virtual area based on at least one statistic derived from the records (38), and wherein the presenting comprises presenting the graphical representations (266) in an order reflecting the respective relevance scores.
164. The method of claim 163, wherein the relevance score measures a frequency of interactions between the user and ones of the other communicants.
165. The method of claim 163, wherein the relevance score measures a recency of interactions between the user and ones of the other communicants.
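Claims 163 through 165 leave the relevance statistic open ("at least one statistic derived from the records"); one plausible, purely illustrative choice blends frequency and recency by giving each interaction an exponentially decaying weight. The function name, record shape, and half-life default are invented:

```python
import time

def relevance_order(records, now=None, half_life=7 * 24 * 3600):
    """Order communicants by a relevance score derived from interaction
    records of (communicant_id, timestamp): each interaction contributes
    a weight that halves every `half_life` seconds, so the score blends
    frequency of interaction with recency of interaction."""
    now = time.time() if now is None else now
    scores = {}
    for cid, ts in records:
        scores[cid] = scores.get(cid, 0.0) + 0.5 ** ((now - ts) / half_life)
    return sorted(scores, key=scores.get, reverse=True)
```

The returned ordering could then drive the order in which the graphical representations (266) are presented.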
166. The method of claim 162, wherein the interfacing comprises moving the object (363) associated with the user into the virtual area in response to a user selection of one of the graphical representations (266) of the other communicants associated with the virtual area.
167. The method of claim 160, wherein the database (36) contains records (38), the records (38) describing interactions between the user and one or more other virtual areas in the virtual communication environment (10).
168. The method of claim 167, further comprising presenting to the user an array of graphical representations (262) of ones of the virtual areas.
169. The method of claim 168, further comprising determining a respective relevance score for each of the virtual areas based on at least one statistic derived from the record (38), and the presenting comprises presenting the graphical representations (262) of the ones of the virtual areas in an order reflecting the respective relevance scores.
170. The method of claim 169, wherein the relevance scores measure a frequency of interactions between the user and some of the other virtual areas.
171. The method of claim 169, wherein the relevance scores measure recency of interactions between the user and ones of the other virtual areas.
172. The method of claim 159, wherein the querying is initiated in response to the user selecting an interface element associated with the presentation of the virtual area.
173. The method of claim 158, wherein the depictions respectively include names of the user and ones of the other communicants represented by respective ones of the objects currently in the virtual area.
174. The method of claim 158, wherein the presentation (272) provides contextual information from which the user can infer activities currently being performed by ones of the other communicants represented by respective ones of the objects currently in the virtual area.
175. The method of claim 174, wherein the contextual information comprises one or more of: information describing respective locations of the one or more other communicants in terms of virtual area identifiers; information describing respective locations of the one or more other communicants within the virtual area; and information describing respective orientations of the one or more other communicants.
176. The method of claim 158, wherein the displaying includes rendering at least one of the objects (402) at a location in the virtual area and in an orientation that shows a direction of attention in the virtual area.
177. The method of claim 176, wherein the rendering comprises rendering the at least one object (402) with a graphical visual element (404) representing a line of sight.
178. The method of claim 177, wherein the rendering comprises rendering the at least one object with the graphical visual element (404) representing a pair of eyes.
179. The method of claim 176, wherein the rendering comprises rendering the at least one object (402) with a three-dimensional spherical body element (406) that supports the graphical visual element (404).
180. The method of claim 179, wherein the rendering comprises depicting interaction of the body element (406) with virtual light projected from a location in the virtual area.
181. The method of claim 158, further comprising updating the presentation in real-time.
182. The method of claim 158, wherein the interfacing comprises displaying an immersion control interface (345), the immersion control interface (345) enabling the user to select a level of interaction with the virtual area from a set of interaction levels.
183. The method of claim 182, wherein the immersion control interface (345) enables the user to change the level of interaction by selectively switching among a three-dimensional graphical interface with the virtual area, a two-dimensional graphical interface with the virtual area, and a non-graphical interface with the virtual area.
184. The method of claim 158, wherein the displaying includes displaying the presentation of the virtual area as a persistent, substantially transparent interface that depicts real-time locations of ones of the objects (341) currently in the virtual area (344) and provides at least one control (347) operable by the user to control one or more aspects of the interfacing.
185. The method of claim 149, wherein the interfacing comprises establishing a connection in real-time to a database containing asynchronous data, dynamically retrieving asynchronous data from the database over the connection, and presenting in real-time a presentation (282, 290) of the retrieved data to the user and ones of the other communicants represented by respective ones of the objects currently in the virtual area.
186. The method of claim 185, wherein the retrieving comprises dynamically retrieving from the database asynchronous media files (286, 287, 288) having a common set of one or more metadata values.
187. The method of claim 186, wherein the retrieving comprises dynamically retrieving the media files from an online social networking service.
188. The method of claim 149, wherein the interfacing comprises enabling the user to simultaneously communicate with a first one of the other communicants through a first real-time communication application running on the user's network node and with a second one of the other communicants through a second real-time communication application running on the user's network node.
189. The method of claim 188, wherein the enabling comprises
mixing real-time communication streams generated by instances of the first real-time communication application running on the network nodes of the user and the first other communicant, respectively, to generate a mixed real-time data stream,
passing the mixed real-time data stream to the second real-time communication application,
generating a real-time output data stream from the mixed real-time data stream, and
passing the real-time output data stream to a second instance of the second real-time communication application running on the network node of the second other communicant.
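A minimal sketch of the bridging recited in claim 189: mix the streams of the user and the first communicant, then feed the mix into the second real-time communication application for delivery to the second communicant. Sample-wise addition is an assumed simplification of real-time stream mixing:

```python
def mix(*streams):
    """Mix equal-length sample frames by element-wise addition."""
    return [sum(samples) for samples in zip(*streams)]

def bridge(user_stream, first_communicant_stream, second_app_send):
    """Feed the mix of the user's and the first communicant's real-time
    streams into the second real-time communication application, whose
    send function delivers the output stream to the second communicant."""
    mixed = mix(user_stream, first_communicant_stream)
    return second_app_send(mixed)
```

In practice the two applications would run concurrently and exchange frames continuously; the sketch shows one frame's path through the bridge.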
190. The method of claim 149, wherein the interfacing comprises
generating a respective real-time input data stream at the respective network node of each of the user and at least one of the other communicants,
combining the real-time input data streams into a composite real-time data stream,
processing a document in accordance with the composite real-time data stream in a client application running on an area server network node, and
rendering the document in the virtual area.
191. The method of claim 190, wherein at least one of the real-time input data streams corresponds to a real-time computer keyboard output data stream.
192. The method of claim 190, wherein at least one of the real-time input data streams corresponds to a real-time computer mouse output data stream.
193. The method of claim 190, wherein the client application is a client document processing application.
194. The method of claim 149, wherein the interfacing comprises multiplexing a single terminal server session of a client application between the user and some of the other communicants to enable collaboration on a shared document.
195. The method of claim 149, wherein the interfacing comprises importing wiki information from a wiki resource into the virtual area (502).
196. The method of claim 195, wherein the interfacing comprises presenting the wiki information on an interface object (508) in the virtual area (502).
197. The method of claim 196, wherein the importing comprises importing a file referenced in the wiki information from the wiki resource in response to the user selecting the reference via the interface object (508).
198. The method of claim 197, wherein the interfacing comprises rendering the imported file on a viewscreen object (506) in the virtual area (502).
199. The method of claim 198, wherein the interfacing comprises modifying the file rendered on the viewscreen object (506) in response to an input data stream received from the user.
200. The method of claim 199, wherein the interfacing comprises exporting the modified file to the wiki resource for incorporation into a wiki web page managed by the wiki resource.
201. The method of claim 149, wherein the interfacing comprises exporting information from the virtual area (502) to a wiki resource for incorporation into a wiki web page managed by the wiki resource.
202. The method of claim 201, wherein the exporting comprises exporting a file associated with the virtual area (502) to the wiki resource in response to dragging a graphical representation of the file onto a graphical representation of an interface (504) to the wiki resource.
203. The method of claim 149, wherein the interfacing comprises associating a file stored on a network node of the user with the virtual area in response to receiving an indication that the user is sharing the file with ones of the other communicants associated with respective ones of the objects currently in the virtual area.
204. The method of claim 203, wherein the associating comprises copying the file from the user's network node to another data storage location indexed with an attribute value that identifies the virtual area.
205. The method of claim 149, wherein the interfacing comprises receiving respective real-time differential motion streams from the network nodes to control movement of the objects (362, 364) in the virtual area (344), determining respective locations of the objects (362, 364) in the virtual area based on the real-time differential motion streams, and automatically repositioning at least one of the objects (362, 364) based on at least one of the determined location of the object in the virtual area (344) and the proximity of the object to at least one other object in the virtual area (344).
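The motion handling recited in claim 205 can be sketched in two steps: integrating a differential-motion stream into a current location, then automatically repositioning on proximity. The (dx, dy) stream shape and the axis-aligned separation strategy are invented placeholders, not the claimed method:

```python
def apply_motion(position, deltas):
    """Integrate a real-time differential-motion stream of (dx, dy)
    updates into an object's current location."""
    x, y = position
    for dx, dy in deltas:
        x, y = x + dx, y + dy
    return (x, y)

def too_close(p, q, min_dist=1.0):
    """True when two objects are within the minimum separation."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 < min_dist

def reposition(p, q, min_dist=1.0):
    """Automatically reposition object p when it crowds object q; the
    axis-aligned nudge is an invented separation strategy."""
    if too_close(p, q, min_dist):
        return (q[0] + min_dist, p[1])
    return p
```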
206. The method of claim 149, wherein the interfacing comprises enabling the user to initiate modifications to the virtual area on demand.
207. The method of claim 206, wherein the enabling comprises modifying a specification of a geometric element of the virtual area in response to a command by the user to add or remove a region.
208. The method of claim 207, wherein the enabling includes modifying the specification in response to receiving a command from the user selecting a wall (382) of the virtual area (344).
209. The method of claim 149, wherein the interfacing comprises querying a database (36) for at least one record (38) based on two or more of a location identification attribute value, a communicant identification attribute value, and a time attribute value.
210. The method of claim 209, wherein the associating comprises, for each of the real-time interactions, storing a respective record (38) in the database (36), the record (38) comprising an identification of the particular virtual area in which the interaction occurred, an identification of each of the communicants in the particular virtual area when the interaction occurred, an identification of a time at which the interaction occurred, and a file location identifier for each file shared during the interaction.
211. The method of claim 149, wherein the associating comprises associating a respective place identifier value with each real-time communication between the user and one or more of the other communicants, wherein each place identifier value identifies a respective place within the virtual communication environment by uniquely naming the place or by describing a unique address for the place.
212. A computer-implemented method, comprising:
displaying, on a monitor (132), a spatial layout (324) of zones (320, 326, 330, 332) of a virtual area (328) of a network communication environment (10), wherein a user is able to have a respective presence in each of one or more of the zones (320, 326, 330, 332);
on the monitor (132), presenting a navigation control and an interaction control, wherein the navigation control enables the user to specify where in the virtual area presence is established and the interaction control enables the user to manage interactions with one or more other communicants in the network communication environment;
responsive to input received through the navigation control, establishing a respective presence of the user in each of one or more of the zones (320, 326, 330, 332); and
depicting, on the monitor (132), respective graphical representations of the communicants in each of the zones in which the communicants respectively have presence.
213. The method of claim 212, wherein the displaying includes displaying the zones (320, 326, 330, 332) as respective graphical representations of elements of a physical environment.
214. The method of claim 213, wherein the displaying comprises displaying the zones as respective graphical representations of physical spaces associated with buildings.
215. The method of claim 212, wherein,
in response to a user command selecting one of the zones displayed on the monitor (132), the depicting comprises depicting the graphical representation of the user in the selected zone.
216. The method of claim 215, wherein
the displaying includes displaying a view of objects in all renderable zones of the virtual area in an area map (324), and
in response to the user command, a magnified view (260) of the selected zone is displayed in the area map.
HK11112647.9A 2008-04-05 2009-04-03 Shared virtual area communication environment based apparatus and methods HK1158335A (en)

Applications Claiming Priority (1)

Application Number: US 61/042714; Priority Date: 2008-04-05

Publications (1)

Publication Number: HK1158335A; Publication Date: 2012-07-13
