
HK1178279A - Application sharing - Google Patents


Info

Publication number
HK1178279A
Authority
HK
Hong Kong
Prior art keywords
network node
window
windows
sharer
screen
Prior art date
Application number
HK13105105.6A
Other languages
Chinese (zh)
Inventor
A.S. Gao
V. Peter
Original Assignee
Social Communications Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Social Communications Company
Publication of HK1178279A publication Critical patent/HK1178279A/en

Description

Application sharing
Background
When face-to-face communication is impractical, people often rely on one or more technical solutions to meet their communication needs. These solutions are typically designed to simulate one or more aspects of face-to-face communication. Conventional telephone systems enable voice communication between callers. Instant messaging (also known as "chat") communication systems enable users to communicate text messages in real-time via instant messaging computer clients interconnected by instant messaging servers. Some instant messaging systems additionally allow a user to be represented in a virtual environment by a user-controllable graphical object (referred to as an "avatar"). An interactive virtual reality communication system enables users in remote locations to communicate through multiple real-time channels and interact with each other by manipulating their respective avatars in a three-dimensional virtual space. Each of these communication modes typically supports some form of data sharing among the communicants.
A common form of data sharing is application sharing, which involves transferring application data from one node (referred to as a "sharer node") to one or more other nodes (referred to as "watcher nodes"). Application sharing has a variety of useful applications, including providing remote technical support, remote collaboration, and remote presentation of presentations, documents, and images. In some proposed systems, the application sharing program on the sharer node periodically collects drawing commands (e.g., GDI calls for drawing lines and curves, rendering fonts, and handling palettes) from the linked display driver processes on the sharer node, packages the drawing commands into command packets, and sends the command packets to a corresponding application sharing program on each viewer node, which uses them to reconstruct a shared view of the sharer's display. However, this application sharing approach requires each viewer node to pass the drawing commands in the command packets to a display process (e.g., the GDI interface provided by the Microsoft Windows operating system) in order to render its own version of the shared application.
What is needed are improved application sharing devices and methods.
Summary of the Invention
In one aspect, the invention features a method in accordance with which a window associated with a software process is identified among a plurality of windows in a screen layout on a local display of a sharer network node. On the sharer network node, a composite image of the identified window is generated, showing the identified window as it is arranged in the screen layout and without occlusion by any other window in the screen layout. The composite image is transmitted from the sharer network node to a viewer network node.
In one aspect, the invention features a method in accordance with which locally generated commands derived from local input device events on a sharer network node are received. Remotely generated commands derived from remote input device events on a remote viewer network node are also received. The received commands are processed into a command sequence. The command sequence is passed to a sharing process executing on the sharer network node. In a screen layout on a local display of the sharer network node, one or more windows associated with the sharing process are presented in accordance with the received command sequence. As the one or more windows are presented in the screen layout, images of the windows are generated. The images are transmitted from the sharer network node to the viewer network node.
The invention also features apparatus that can be used to implement the above-described inventive methods and computer-readable media storing computer-readable instructions for causing a computer to implement the above-described inventive methods.
Other features and advantages of the invention will become apparent from the following description, including the drawings and claims.
Brief Description of Drawings
Fig. 1 is a diagram of an embodiment of a network communication environment including a first network node and a second network node.
FIG. 2 is a flow diagram of an embodiment of an application sharing method.
FIG. 3 is a diagram of an embodiment of a screen layout on a display of a sharer network node.
Fig. 4A is a diagram of an embodiment of a screen layout on a display of a viewer network node.
Fig. 4B is a diagram of an embodiment of a screen layout on a display of a viewer network node.
FIG. 5 is a diagram of an embodiment of an application sharing class model.
FIG. 6 is a diagram of an embodiment of an application sharing component implementing a method in an embodiment of an application sharing process.
FIG. 7 is a diagram of an embodiment of an application sharing component implementing a method in an embodiment of an application sharing process.
FIG. 8 is a diagram of an embodiment of an application sharing component implementing a method in an embodiment of an application sharing process.
FIG. 9 is a diagram of an embodiment of an application sharing component implementing a method in an embodiment of an application sharing process.
FIG. 10 is a diagram of an embodiment of an application sharing component implementing a method in an embodiment of an application sharing process.
FIGS. 11A and 11B are flow diagrams of embodiments of an application sharing method.
FIG. 12 is a diagram of an embodiment of a screen layout on a display of a sharer network node.
FIG. 13 is a flow diagram of an embodiment of a method of determining a hierarchical order of windows associated with shared software processes.
Figure 14 is a block diagram of an embodiment of a sharer network node implementing an embodiment of an application sharing process in conjunction with an embodiment of a watcher network node.
FIG. 15 is a flow diagram of an embodiment of an application sharing method.
Fig. 16 is a diagram of an embodiment of a network communication environment including the first network node of fig. 1, the second network node of fig. 1, and a virtual environment creator.
FIG. 17 is a diagram of an embodiment of a network node including a graphical user interface presenting a depiction of a virtual area.
Fig. 18 is a block diagram of the network communication environment of fig. 16 illustrating components of an embodiment of a client network node.
FIG. 19 is a diagram of an embodiment of a graphical user interface.
FIG. 20 is a diagram of an embodiment of a graphical user interface.
FIG. 21 is a diagram of an embodiment of a graphical user interface.
FIG. 22 is a diagram of an embodiment of a graphical user interface.
FIG. 23 is a diagram of an embodiment of a graphical user interface.
FIG. 24 is a diagram of an embodiment of a graphical user interface.
FIG. 25 is a diagram of an embodiment of a graphical user interface.
Detailed description of the invention
In the following description, like reference numerals are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
I. Definition of terms
A "window" is a visual area of a display that typically includes a user interface. The window typically displays the output of the software process and typically enables a user to input commands or data for the software process. A window with a parent window is referred to as a "child window". Windows that have no parent window, or whose parent window is a desktop window, are referred to as "top-level windows". A "desktop" is a system-defined window that draws the background of a Graphical User Interface (GUI) and serves as the basis for all windows displayed by all software processes.
The term "window grabbing" refers to a process of extracting data from the display output of another software process. The fetch process is executed by a "fetcher" (scraper) software process.
A "hierarchical order" (also referred to as a "z-order") is an ordering of a two-dimensional object, such as a window in a Graphical User Interface (GUI), superimposed along an axis perpendicular to a display on which the GUI is presented (often referred to as the "z-axis").
"compositing" is the combining of visual elements from separate sources into a single composite image (also referred to herein as a "frame").
The term "occlusion" refers to the action or process of concealing or hiding something by, or as if by, covering it.
A "communicant" is a person who communicates or otherwise interacts with others over one or more network connections, where the communication or interaction may or may not occur in the context of a virtual area. A "user" is a communicant who is operating a particular network node that defines a particular point of view for descriptive purposes. A "sharer" is a communicant who is operating a sharer network node. A "watcher" is a communicant who is operating a watcher network node.
The "real-time contacts" of a user are communicants or other persons who have communicated with the user via a real-time communications platform.
A "computer" is any machine, device, or apparatus that processes data according to computer-readable instructions stored temporarily or permanently on a computer-readable medium. An "operating system" is a software component of a computer system that manages and coordinates the execution of tasks and the sharing of computing and hardware resources. A "software process" (also known as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and carry out to perform one or more specific tasks. A software process may have one or more "threads" of execution. A "shared software process" is a software process whose output is shared with a watcher network node. A "computer data file" is a block of information that persistently stores data for use by a software application.
A "database" is an organized collection of records presented in a standardized format that can be searched by a computer. The database may be stored on a single computer-readable data storage medium on a single computer, or it may be distributed across multiple computer-readable data storage media on one or more computers.
A "data sink" (referred to herein simply as a "sink") is any of a device (e.g., a computer), a portion of a device, or software that receives data.
A "data source" (referred to herein simply as a "source") is any one of a device (e.g., a computer), a portion of a device, or software that produces data.
A "network node" (also referred to simply as a "node") is a node or connection point in a communication network. Exemplary network nodes include, but are not limited to, terminals, computers, and network switches. A "server" network node is a host computer on a network that responds to information or service requests. A "client" network node is a computer on a network that requests information or services from a server. A "network connection" is a link between two communication network nodes. The term "local network node" refers to the network node that is currently the subject of the primary discussion. The term "remote network node" refers to a network node that is connected to a local network node by a network communication link. A "connection handle" is a pointer or identifier (e.g., a Uniform Resource Identifier (URI)) that may be used to establish a network connection with a communicant, resource, or service on a network node. A "sharer network node" is a network node that is sharing content with another network node, referred to as a "watcher network node." "Network communications" may include any type of information (e.g., text, voice, audio, video, email messages, data files, motion data streams, and data packets) transmitted or otherwise communicated from one network node to another over a network connection.
A "communicant interaction" is any type of direct or indirect action or influence between a communicant and another network entity, which may include, for example, another communicant, a virtual area, or a network service. Exemplary types of communicant interactions include communicants communicating with each other in real-time, communicants entering virtual areas, and communicants requesting access to resources from a network service.
"presence" refers to the ability and willingness of a networked entity (e.g., communicant, service, or device) to communicate, wherein such willingness affects the ability to detect and acquire status about the entity on the network and to connect to the entity.
A "real-time data stream" is data that is constructed and processed in a continuous stream and is designed to be received without delay or with only an imperceptible delay. The real-time data stream includes digital representations of voice, video, user movement, facial expressions, and other physical phenomena, as well as data within the computing environment that may benefit from rapid transmission, rapid execution, or both rapid transmission and rapid execution, including, for example, avatar movement instructions, text chat, real-time data feeds (e.g., sensor data, machine control instructions, transaction streams, and stock price information feeds), and file transfers.
A "virtual area" (also referred to as an "area" or "place") is a representation of a computer-managed space or scene. Virtual areas are typically one-dimensional, two-dimensional, or three-dimensional representations, although in some embodiments a virtual area may correspond to a single point. Oftentimes, virtual areas are designed to simulate physical real-world spaces. For example, using a conventional computer monitor, a virtual area may be visualized as a computer-generated two-dimensional graphic of a three-dimensional space. However, a virtual area does not require an associated visualization in order to implement switching rules. A virtual area typically refers to an instance of a virtual area schema, wherein the schema defines the structure and content of the virtual area in terms of variables, and the instance defines the structure and content of the virtual area in terms of values that have been resolved from a particular context.
A "virtual area application" (also referred to as a "virtual area specification") is a description of a virtual area used when creating a virtual environment. Virtual zone applications typically include definitions of geometric, physical, and real-time exchange rules associated with one or more zones (zones) of a virtual zone.
A "virtual environment" is a representation of a computer-managed space that includes at least one virtual area and supports real-time communication between communicants.
A "zone" is a region of a virtual area that is associated with at least one switching rule or governing rule. A "switching rule" is an instruction that specifies the connection or disconnection of one or more real-time data sources to one or more real-time data sinks subject to one or more precedent conditions. The switching rules control the switching (e.g., routing, connecting, and disconnecting) of real-time data streams between network nodes communicating in the context of the virtual area. A governing rule controls a communicant's access to a resource (e.g., an area, a region of an area, or the contents of the area or region), the scope of that access, and subsequent consequences of that access (e.g., a requirement that an audit record relating to the access be recorded). A "renderable zone" is a zone that is associated with a respective visualization.
The "position" in the virtual area refers to the position of a point or area or volume in the virtual area. A point is typically represented by a single set of one-, two-, or three-dimensional coordinates (e.g., cartesian, polar, or spherical coordinates) that define the point in the virtual area. Coordinates may be defined as any single or multiple number that establishes a location. The area is typically represented by three-dimensional coordinates of three or more coplanar vertices defining the boundaries of the closed two-dimensional shape in the virtual area. A volume is typically represented by three-dimensional coordinates of four or more non-coplanar vertices defining the closed boundaries of a three-dimensional shape in a virtual area.
The "spatial state" is an attribute that describes where the user is present in the virtual area. The spatial state attribute typically has a respective value (e.g., a zone_ID value) for each of the zones in which the user is present.
A "landmark" is a storage reference (e.g., a hyperlink) to a location in a virtual area. Landmarks are typically selectable to present a view of the associated location in the virtual area to the user. The verb "place a landmark" means an action or operation to create a landmark.
In the context of a virtual area, an "object" is any type of discrete element in the virtual area that can be usefully processed separately from the geometry of the virtual area. Exemplary objects include doors, entrances, windows, viewing screens, and speakers. The object typically has attributes or characteristics that are separate and distinct from the attributes and characteristics of the virtual area. The "avatar" is an object representing a correspondent in the virtual area.
The term "double-click" refers to an action or operation of entering an execute command (e.g., by double-clicking a left button of a computer mouse, or by clicking a user interface button associated with an execute command, such as a command to enter a zone or view an object). The term "Shift-click" refers to an action or operation of entering a select command (e.g., by clicking a left button of a computer mouse) while the Shift key of an alphanumeric input device is activated. The term "Shift-double-click" refers to an action or operation of entering an execute command while the Shift key of the alphanumeric input device is activated.
As used herein, the term "including" means including but not limited to. The term "based on" means based at least in part on.
II. Introduction
Embodiments described herein enable application sharing with high fidelity, real-time performance, watcher presence, and privacy protection. In some embodiments, the screen content associated with each thread of a software process can be determined and composited into a corresponding composite image (or frame) without other window content. The contents of windows associated with one or more software application threads on a network node can be propagated to other network nodes without risk of being obscured by screen content that other software processes may generate (e.g., windows containing application content, messages, or dialogs), thereby preventing the shared window contents from being corrupted by overlapping screen content that processes outside the user's immediate control sometimes generate. This feature avoids the need for the sharer to interrupt a presentation to remove occluding screen content, creating a more immersive collaboration experience for the viewers of the shared window content. In addition, the sharer does not have to worry that private information will be inadvertently shared along with the intended screen content, thereby preserving the sharer's privacy.
Some embodiments also enable multi-channel application sharing, where two or more communicants share applications and screen content with each other simultaneously. These embodiments generally include an interface that allows each recipient to distinguish one shared window from another.
III. Application sharing
A. Introduction
Fig. 1 illustrates an embodiment of an exemplary network communication environment 10, the network communication environment 10 including a first network node 12 and a second network node 14 interconnected by a network 18. The first network node 12 includes a computer readable memory 20, a processor 22, and input/output (I/O) hardware 24 (including a display). The processor 22 executes at least one communication application 26 stored in the memory 20. The second network node 14 is typically configured in substantially the same way as the first network node 12. In operation, the communication application 26 typically provides one or more modes of communication (e.g., text, voice, audio, and video) between the first and second network nodes 12, 14. In addition, the communication application 26 enables one-way or two-way application sharing between the first and second network nodes 12, 14.
Embodiments of the communication application may implement one or more of the following application sharing modes:
● share all windows created by a given process;
● share only one window and not any other windows;
● share the given window and all child windows, where the child windows may belong to the same process that created the given window or they may belong to different processes.
● share multiple applications. For example, instead of sharing each of the applications independently, windows created by different processes are made to constitute a single frame.
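These sharing modes amount to different window-selection filters applied to the windows in the screen layout. The sketch below (in Python, with an illustrative `Window` record and mode names that are assumptions, not identifiers from the patent) shows one way such a filter could be structured:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ShareMode(Enum):
    ALL_PROCESS_WINDOWS = auto()    # share all windows created by a given process
    SINGLE_WINDOW = auto()          # share only one window and no others
    WINDOW_WITH_CHILDREN = auto()   # share a given window and all of its children
    MULTIPLE_APPLICATIONS = auto()  # composite windows from several processes into one frame

@dataclass
class Window:
    handle: int
    pid: int                      # id of the process that created the window
    parent: Optional[int] = None  # handle of the parent window, if any

def select_windows(windows, mode, *, pid=None, handle=None, pids=None):
    """Return the subset of windows to composite for the given sharing mode."""
    if mode is ShareMode.ALL_PROCESS_WINDOWS:
        return [w for w in windows if w.pid == pid]
    if mode is ShareMode.SINGLE_WINDOW:
        return [w for w in windows if w.handle == handle]
    if mode is ShareMode.WINDOW_WITH_CHILDREN:
        # children may belong to the same process that created the given
        # window or to different processes
        return [w for w in windows if w.handle == handle or w.parent == handle]
    if mode is ShareMode.MULTIPLE_APPLICATIONS:
        return [w for w in windows if w.pid in pids]
    raise ValueError(mode)
```

In each mode the selected windows would then be composited into a single frame, as described below.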
Fig. 2 illustrates an embodiment of an application sharing method implemented by the communication application 26 running on one or both of the first and second network nodes 12, 14. The process is typically performed in response to a request from the watcher network node to share one or more applications or documents associated with a software process running on the sharer network node.
According to the method of fig. 2, the communication application 26 identifies, among the plurality of windows in a screen layout on the display of the sharer network node, a window associated with the software process (fig. 2, block 500). In some embodiments, the communication application 26 may identify all windows in the screen layout that are associated with the software process, or it may identify those windows in the screen layout that match handles assigned to threads of the software process. In some cases, this process includes identifying a parent window and at least one associated child window created by a thread of the software process. In some embodiments, the communication application 26 identifies all windows created by a specified group of software processes.
When the identified window is arranged in the screen layout and is not occluded by any other window in the screen layout, the communication application 26 generates a composite image of the identified window (fig. 2, block 502). In some embodiments, the process comprises: determining a hierarchical order of the identified windows relative to each other, the hierarchical order corresponding to the relative hierarchical order of the identified windows in the screen layout; for each of the identified windows, retrieving a respective image of the window; and compositing the retrieved images into the composite image according to the determined hierarchical order. In some embodiments, determining the hierarchical order comprises: for each of the windows in the screen layout, generating a z-order list associating respective z-order values with respective window handles of the windows; and deriving the hierarchical order of the identified windows from the z-order list. The process of deriving the hierarchical order generally includes: for each of the z-order values in the z-order list, matching the associated window handle against the window handles of the identified windows; and sorting the identified windows into the hierarchical order according to the respective z-order values in the z-order list whose associated window handles were determined to match the window handles of the identified windows.
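The matching and sorting steps described above can be sketched as follows; the representation of the z-order list as `(z_value, handle)` pairs is an assumption made for illustration:

```python
def derive_hierarchical_order(z_order_list, identified_handles):
    """Order the identified windows by their relative z-order.

    z_order_list: (z_value, window_handle) pairs covering every window in
        the screen layout (a lower z_value means nearer the top).
    identified_handles: handles of the windows associated with the shared
        software process.
    Returns the identified handles sorted into their hierarchical order.
    """
    # Keep only the z-order entries whose handle matches an identified window...
    matched = [(z, h) for z, h in z_order_list if h in identified_handles]
    # ...and sort those entries by their z-order values.
    matched.sort(key=lambda pair: pair[0])
    return [h for _, h in matched]
```

For example, `derive_hierarchical_order([(0, "A"), (1, "B"), (2, "C")], {"C", "A"})` drops the unshared window `"B"` and yields `["A", "C"]`.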
In addition to preventing occlusion by other windows, some embodiments also prevent occlusion of a selected window that occurs because the window is partially or completely off-screen (e.g., outside a visible desktop window containing the screen layout). For example, respective images of the identified windows are stored in respective memory buffers, and generating the composite image includes retrieving each of the images from the respective memory buffers and compositing the retrieved images into the composite image. In some exemplary embodiments, each of the windows is a layered window whose screen data is stored in a respective memory buffer via programmatic calls to the Microsoft Win32 Application Programming Interface (API), which is available in Microsoft Windows operating system versions 2000 and later. These operating systems provide an extended window style that is invoked by setting the WS_EX_LAYERED window style bit. The WS_EX_LAYERED style bit associated with a particular window may be set by the shared software process at window creation time (e.g., via a CreateWindowEx API call), or it may be set by the communication application 26 after creation (e.g., via a SetWindowLong API call with GWL_EXSTYLE). With the WS_EX_LAYERED window style bit set for a window, the operating system redirects the drawing of the window to an off-screen bitmap and buffer, which the communication application 26 can then access to generate the composite image. Similar layered windowing functionality is available in other operating systems (e.g., X Windows on UNIX-based operating systems).
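Once each window's image is available in its own off-screen buffer, compositing reduces to a painter's-algorithm pass over the identified windows in hierarchical order. The sketch below uses plain 2-D arrays in place of the redirected Win32 bitmaps, so the pixel model and window geometry are illustrative assumptions:

```python
def composite(frame_w, frame_h, windows):
    """Composite per-window images into a single frame.

    windows: (x, y, image) tuples ordered bottom-to-top, where image is a
        list of rows of pixel values (the off-screen buffer contents).
        Later entries overdraw earlier ones, mimicking the hierarchical
        (z) order of the screen layout.
    Returns the frame as a list of rows, with 0 as the background value.
    """
    frame = [[0] * frame_w for _ in range(frame_h)]
    for x, y, image in windows:
        for row_idx, row in enumerate(image):
            for col_idx, pixel in enumerate(row):
                fx, fy = x + col_idx, y + row_idx
                # Clip parts of the window that fall outside the frame,
                # analogous to a window lying partially off-screen.
                if 0 <= fx < frame_w and 0 <= fy < frame_h:
                    frame[fy][fx] = pixel
    return frame
```

Because each window's pixels come from its own buffer rather than from the shared screen, occluded regions are filled with the window's actual content rather than with whatever happened to be drawn on top of it.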
After the composite image has been generated, the communication application 26 transmits the composite image to the viewer network node of the first and second network nodes 12, 14 (i.e., the one of the first and second network nodes that receives the composite image) (fig. 2, block 504). In some embodiments, the process includes transmitting the composite image to each of the remote network nodes that requests (or subscribes to) a view of the shared content on the sharer network node. The composite image (and subsequent updates) is transmitted to each subscribing viewer network node over a respective real-time data stream connection established between the sharer network node and that viewer network node.
Fig. 3 shows an embodiment of a screen layout 506 on a display 508 of the first network node 12, the first network node 12 being a sharer network node. Screen layout 506 includes a first window 510, a second window 512 that is a parent of a child window 514, a third window 516, and a fourth window 518. The first window 510, the second window 512, and its child window 514 are created by a first software process (e.g., a Microsoft Word word processing software process), the third window 516 is created by a second software process (e.g., a Microsoft Excel spreadsheet software process), and the fourth window 518 is created by a third software process (e.g., a Microsoft Outlook personal information manager software process). The first and second windows 510, 512 are created by their respective software process threads (e.g., with the WS_EX_LAYERED bit optionally set in the WinMain entry point function in a Microsoft Windows application environment). In a Microsoft Windows application environment, aspects of the appearance of the child window 514 are typically affected by its parent window 512. For example, the parent window 512 generally defines a coordinate system for positioning the child window 514 on the display 508. Additionally, in some implementations, the child window 514 may be clipped so that portions of the child window 514 do not appear outside the boundaries of its parent window 512.
Fig. 4A shows an embodiment of a first screen layout 520 on a display 522 of the second network node 14, the second network node 14 acting as a watcher network node in a first application sharing example. In this example, the communicant on the first network node 12 acts as a sharer who has chosen to share all windows of the first software process (App1) with the second network node 14. Thus, the screen layout 520 contains a composite image 524 that includes respective areas 526, 528, 530 showing the first window 510, the second window 512, and the child window 514, which were created by respective threads of the first software process running on the first network node 12. The areas 526, 528, 530 show the first window 510, the second window 512, and the child window 514 as they are arranged in the screen layout 506 on the first network node 12, and without occlusion by any other window in the screen layout 506. For example, the third window 516 and the fourth window 518 (which occlude the lower portions of the first window 510, the second window 512, and the child window 514 in the screen layout 506) are omitted from the composite image 524; in addition, the portions of the first window 510, the second window 512, and the child window 514 that are occluded in the screen layout 506 have been replaced in the composite image 524 by the appropriate contents of those windows.
Fig. 4B shows an embodiment of a second screen layout 532 on the display 522 of the second network node 14, the second network node 14 being a watcher network node in a second application sharing example. In this example, the communicant on the first network node 12 acts as a sharer who has chosen to share all windows associated with only one of the threads of the first software process (App1). Thus, the screen layout 532 contains a composite image 534 that includes respective regions 536, 538 showing the second window 512 and the child window 514. The regions 536, 538 show the second window 512 and the child window 514 as they are arranged in the screen layout 506 on the first network node 12, and without occlusion by any other window in the screen layout 506. For example, the first window 510, the third window 516, and the fourth window 518 are omitted from the composite image 534; in addition, the portions of the second window 512 and the child window 514 that are occluded in the screen layout 506 have been replaced in the composite image 534 by the appropriate contents of the second window 512 and the child window 514.
B. Embodiments of application sharing
1. Introduction
In some embodiments, application sharing is initiated after the sharer network node has published one or more applications or documents available for sharing and at least one watcher has subscribed to at least one of the published applications or documents. In some embodiments, the sharer may publish the shared application or document to a viewport object associated with the virtual area, and the observer may subscribe to the shared content by activating the viewport object in the virtual area (e.g., by double-clicking the viewport object with the user input device).
Watchers are typically granted one of two types of access to shared content: observation access rights, which allow observers only to passively observe the shared content; and control access rights, which allow observers to view, control, edit, and manipulate the shared content. The type of access granted to an observer may be set by the sharer or by one or more governing rules associated with the context in which the sharing occurs (e.g., governing rules associated with sections of the virtual area, as described below in section IV).
The shared content is typically streamed from the sharer network node to the observer network node in the form of a streaming bitmap of a window associated with the shared application or document on the sharer display. The bitmaps for each window may be streamed separately, or already composited. The bitmap is typically compressed prior to streaming. If the viewer has only viewing access, the viewer can only passively view the image of the shared window on the sharer display. If the observer has control access, the observer network node may transmit remote control commands generated by user input devices (e.g., keyboard, computer mouse, touchpad, and touchscreen) to the sharer network node for controlling, editing, and manipulating the shared content on the sharer network node.
2. Application sharing services
In some embodiments, the application sharing functionality of the communication application 26 is provided by a fetcher module, which is a plug-in to an application sharing service that implements the platform-specific portion of application sharing. The present embodiments implement an application sharing mode in which all windows created by a shared application are automatically shared with subscribing network nodes. This section describes exemplary embodiments of the fetcher module and the application sharing service implemented in a Microsoft Windows application program environment that provides layered window functionality.
a. Classes
FIG. 5 illustrates an embodiment of an application sharing class model. In this model, blocks 540, 542, 544, 546 and 548 define interfaces between the application sharing service and the fetcher module, blocks 550, 552, 554 are classes that implement the fetcher module, and blocks 556, 558 are classes in the application sharing service that use these interfaces.
b. Methods
(i) Start-up method and stop method
Before the application sharing service calls any other method on the fetcher module, it calls the start method; during shutdown, it calls the stop method. In the illustrated embodiment, no other method calls are made before the start method is called or after the stop method is called.
In the start method, the fetcher module starts a thread that listens for WinEvents. The fetcher module listens for WinEvents in order to be notified when windows and menus are created/destroyed and shown/hidden.
In the stop method, the fetcher module stops all application monitors and then shuts down the thread that is listening for WinEvents.
When a WinEvent notification is received, the fetcher module obtains the thread identifier (ID) and process ID for the affected window. The fetcher module then looks up the application monitor by process ID and notifies that application monitor about the event. For a destroyed-window notification, the process ID and thread ID are no longer available, so the fetcher module notifies all application monitors of the event.
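The event routing described above can be sketched in platform-neutral form as follows. This is a hypothetical illustration, not the patent's implementation: the class and method names are invented, and the real module would receive events through a Win32 WinEvent hook rather than direct method calls. Only the `EVENT_OBJECT_DESTROY` constant value is taken from the Win32 API.

```python
# Hypothetical sketch of the fetcher module's WinEvent dispatch: events
# that carry a process ID are routed to the matching application monitor;
# destroyed-window events, whose process ID is unavailable, are broadcast
# to every monitor.

EVENT_OBJECT_DESTROY = 0x8001  # Win32 WinEvent constant

class ApplicationMonitor:
    def __init__(self, process_id):
        self.process_id = process_id
        self.events = []

    def notify(self, event):
        self.events.append(event)

class FetcherModule:
    def __init__(self):
        self.monitors = {}  # process ID -> ApplicationMonitor

    def start_share(self, process_id):
        self.monitors[process_id] = ApplicationMonitor(process_id)

    def on_win_event(self, event, process_id=None):
        if event == EVENT_OBJECT_DESTROY or process_id is None:
            # The destroyed window no longer reports a process ID, so
            # every application monitor is notified of the event.
            for monitor in self.monitors.values():
                monitor.notify(event)
        elif process_id in self.monitors:
            self.monitors[process_id].notify(event)
```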
(ii) get_sharable_application method
As shown in fig. 6, the application sharing service may request the fetcher module to provide a list of applications that may be shared. In response, the fetcher module builds a list of the IDs and titles of the topmost overlapped windows on the desktop that is presented on the display of the sharer network node. In the illustrated embodiment, the list is built from the processes currently running on the sharer network node. In other embodiments, the list also includes processes that are "runnable" on the node. For example, this may include a Microsoft Word document that the sharer may want to share but that is not currently open, or another application (e.g., a calculator application) that the sharer may want to share but that is not currently running.
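One plausible reading of this list-building step can be sketched as below. The window-record format and the one-entry-per-process simplification are assumptions for illustration; the real module would derive the pairs from a Win32 window enumeration.

```python
# Illustrative sketch of get_sharable_application: build (process ID,
# window title) pairs from the top-level windows on the desktop, one
# entry per running process.

def get_sharable_applications(top_level_windows):
    """top_level_windows: list of dicts with 'pid' and 'title' keys,
    one entry per topmost overlapped window on the desktop."""
    seen = set()
    sharable = []
    for window in top_level_windows:
        if window["pid"] not in seen:
            seen.add(window["pid"])
            sharable.append((window["pid"], window["title"]))
    return sharable
```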
(iii) start_share_application method
Referring to FIG. 7, when the application sharing service starts sharing an application, the fetcher module creates an instance of an application monitor that is responsible for grabbing the application's windows. The application monitor enumerates all top-level windows on the desktop and uses the results of this enumeration as an initial list of windows. The application monitor determines which windows in the initial list belong to the application being shared and then spawns the fetcher thread. The fetcher thread is responsible for grabbing the content of the application windows, constructing the final bitmap from all the application windows, and sending the bitmap frames to all subscribing network nodes.
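The filtering step that produces the monitor's initial window list can be sketched as follows. This is a minimal illustration under assumed data shapes; the real implementation would enumerate windows via the Win32 API and spawn the fetcher thread, both of which are elided here.

```python
# A minimal sketch of the start_share_application filtering step: the
# top-level windows enumerated from the desktop are narrowed down to
# those belonging to the shared process, forming the application
# monitor's initial window list.

def initial_window_list(shared_pid, desktop_windows):
    """desktop_windows: (handle, pid) pairs for every top-level window
    on the desktop, as a desktop enumeration pass might report them."""
    return [handle for handle, pid in desktop_windows if pid == shared_pid]
```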
(iv) stop_share_application method
Referring to fig. 8, the stop_share_application method gracefully stops the application monitor. In this process, the application monitor shuts down the grab thread, which ensures that no more samples are generated. Thereafter, it notifies all subscribing network nodes that they have been unsubscribed. Once all references to the application monitor disappear, the monitor is destroyed.
(v) get_shared_applications method
The method returns a list of applications that are being shared.
(vi) subscribe method
Referring to fig. 9, the subscribe method expects as parameters a reference to a subscriber and the ID of the process being shared. A subscriber is a class that implements stream_subscriber_itf (the stream subscriber interface), as shown in the class model of fig. 5. The fetcher module looks up the application monitor by the application ID and adds the subscriber to the monitor. The next time the application monitor generates a frame, it also sends the frame to the new subscriber.
(vii) unsubscribe method
Referring to fig. 10, the unsubscribe method expects as parameters a reference to a subscriber and the ID of the shared process. The method looks up the shared application from the ID and tells it to remove the referenced subscriber. Upon removing the referenced subscriber, the application monitor notifies the subscriber about the state change. The application monitor also checks the state of the shared process; if the process has exited, the application monitor notifies all subscribers about the state change, unsubscribes all of them, and terminates itself, since there is nothing left to monitor.
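The subscriber bookkeeping described in the subscribe and unsubscribe methods can be sketched as below. All names are illustrative, and as a simplification a monitor here tears itself down when its last subscriber is removed, rather than only when the shared process exits.

```python
# Hedged sketch of the subscribe/unsubscribe bookkeeping: monitors are
# looked up by shared-process ID, each generated frame goes to every
# current subscriber, and a monitor with no subscribers left is removed.

class Monitor:
    def __init__(self):
        self.subscribers = []

    def deliver(self, frame):
        # The next generated frame is also sent to each subscriber.
        for received_frames in self.subscribers:
            received_frames.append(frame)

monitors = {}  # shared-process ID -> Monitor

def subscribe(subscriber, process_id):
    monitors.setdefault(process_id, Monitor()).subscribers.append(subscriber)

def unsubscribe(subscriber, process_id):
    monitor = monitors[process_id]
    monitor.subscribers.remove(subscriber)
    if not monitor.subscribers:
        del monitors[process_id]  # nothing left to monitor
```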
c. Window capture
Each application monitor has a thread that wakes up and performs window grabbing on a periodic basis.
FIGS. 11A and 11B illustrate an embodiment of an exemplary window capture method.
According to the method of FIGS. 11A and 11B, the fetcher module determines the z-order of all windows associated with the shared software process and arranges all windows according to the z-order (FIG. 11B, block 560).
The fetcher module determines a bounding rectangle that encompasses all windows associated with the shared software process (FIG. 11B, block 562). For example, FIG. 12 shows a screen layout 564 that includes a Microsoft Word main window 566 and a Save As dialog 568. The fetcher module calculates a rectangle 570 that is the smallest bounding box encompassing both windows, including any off-screen content (e.g., the shaded region 571 corresponding to the bottom of the main window 566, which lies outside of the visible desktop window 572). The bounding rectangle 570 defines the size of the composite bitmap for all windows to be shared.
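The bounding-rectangle computation reduces to taking extrema over the window rectangles, which can be sketched as follows (the tuple layout is an assumption; Win32 RECT structures carry the same four fields).

```python
# Sketch of the bounding-rectangle step: the smallest rectangle that
# encloses every shared window, including off-screen portions. It sizes
# the composite bitmap. Rectangles are (left, top, right, bottom) tuples.

def bounding_rectangle(window_rects):
    left = min(r[0] for r in window_rects)
    top = min(r[1] for r in window_rects)
    right = max(r[2] for r in window_rects)
    bottom = max(r[3] for r in window_rects)
    return (left, top, right, bottom)
```

Note that a negative or out-of-range coordinate (a window partly dragged off the desktop) simply extends the rectangle, which is how the off-screen shaded region 571 ends up inside the composite.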
For each of the windows associated with the shared software process, the fetcher module calls a grab function (FIG. 11B, block 576). If the window is not a layered window (FIG. 11B, block 578), the grab function sets the WS_EX_LAYERED bit associated with the window via a SetWindowLong API call with GWL_EXSTYLE (FIG. 11B, block 580). The grab function then grabs the layered window (FIG. 11B, block 582). In this process, the fetcher module retrieves images of the windows, including portions that are obscured by other windows or that are outside of the visible desktop window (e.g., shaded region 571 shown in FIG. 12), and renders the retrieved images into a composite image. The windows are grabbed according to the z-order, so that in the final composite image the topmost window is drawn on top of the images of the windows below it. After the fetcher module has evaluated the bounding rectangle, the fetcher module captures a screen image of each shared window. In a Microsoft Windows application environment, for each of the windows, the fetcher module makes a GetDC() function call to the GDI (graphics device interface), which retrieves a handle to the display device context (DC) of the window. The fetcher module performs an iterative capture of each window. In this process, BitBlt() is called for each window being shared, and the image data of each window is superimposed in the target DC according to the relative positions (in the x, y, and z axes) of the windows in the screen display. The BitBlt function performs a bit-block transfer of image data corresponding to a block of pixels from a specified source device context to a target device context. The resulting screen image constitutes an initial composite image. Executing the iterative BitBlt() requests automatically filters out screen data from undesired windows and allows individually compressed window bitmaps to be sent instead of just a compressed composite bitmap.
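The painter's-algorithm effect of the iterative BitBlt() calls can be modeled in a few lines. This is an illustrative stand-in, not the actual GDI code: each "pixel" holds a window ID instead of image data, and windows are painted back to front so later (higher z-order) windows overwrite earlier ones, exactly as successive bit-block transfers into the target DC would.

```python
# Illustrative model of the iterative composition: windows are painted
# into the composite bitmap in back-to-front z-order, so higher windows
# overwrite the pixels of lower ones.

def compose(bounds, windows_back_to_front):
    """bounds: (left, top, right, bottom) of the composite bitmap.
    windows_back_to_front: list of (window_id, rect) painted in z-order,
    rear-most first."""
    left, top, right, bottom = bounds
    width, height = right - left, bottom - top
    composite = [[None] * width for _ in range(height)]
    for window_id, (wl, wt, wr, wb) in windows_back_to_front:
        # Clip the window rectangle against the composite bounds.
        for y in range(max(wt, top), min(wb, bottom)):
            for x in range(max(wl, left), min(wr, right)):
                composite[y - top][x - left] = window_id
    return composite
```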
After the grabbing process is completed, the fetcher module creates a device-independent bitmap of the composite image (FIG. 11B, block 584). The fetcher module then creates a sample from the bytes of the bitmap (FIG. 11B, block 586). A sample is one bitmap in a time sequence of captured bitmaps. In some embodiments, the sample is the current composite image, which depicts how the shared application is displayed at the current time. The succession of samples gives the viewer the perception that the shared application is not just a sequence of images but a live application. Once a sample is created, the application sharing service presents it to all subscribing network nodes (FIG. 11B, block 588). A sample typically is transmitted to the subscribing network nodes only when the composite image has changed since the last sample was transmitted. The samples also typically are compressed before delivery to the subscribing network nodes. The fetcher module then goes to sleep until the next scheduled time to grab another image (FIG. 11B, block 590). The application sharing service transmits each sample of the composite image to all of the subscribing remote network nodes simultaneously (i.e., during the same time period, e.g., the period between grab cycles).
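The send-only-on-change behavior can be sketched as below. Hashing the bitmap bytes is an assumed stand-in for whatever comparison the real service performs, and compression is elided.

```python
# Sketch of the sampling loop's change check: a sample is delivered to
# subscribers only when the composite image differs from the last
# transmitted sample.

import hashlib

class Sampler:
    def __init__(self):
        self.last_digest = None

    def maybe_send(self, composite_bytes, subscribers):
        digest = hashlib.sha256(composite_bytes).digest()
        if digest == self.last_digest:
            return False  # unchanged since the last transmitted sample
        self.last_digest = digest
        for received_frames in subscribers:
            received_frames.append(composite_bytes)
        return True
```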
FIG. 13 illustrates an embodiment of a method for a fetcher module to determine a current z-order of windows associated with a shared software process (FIG. 11B, block 560).
The fetcher module begins with an initial list of all top-level windows associated with the shared software process (FIG. 13, block 600). The initial list may be obtained from a previous iteration of the grabbing process shown in FIGS. 11A and 11B. Alternatively, the initial list may be obtained directly by querying the application environment. For example, in some embodiments the fetcher module is implemented in a Microsoft Windows application environment. In these embodiments, each software process thread has an entry point function called WinMain that registers the window class of the main window by calling the RegisterClass function and creates the main window by calling the CreateWindowEx function. After creating the window, the create function returns a window handle that has the HWND data type and uniquely identifies the window; the window handle is used to direct software process actions to the window. In these embodiments, the fetcher module obtains handles to the various top-level windows associated with the shared software process by calling the EnumWindows function, which enumerates all windows on the desktop, and then querying the process to which each window belongs by calling GetWindowThreadProcessId.
The fetcher module determines the z-order of all windows currently associated with the shared software process (FIG. 13, block 602). When implemented in a Microsoft Windows application program environment, the fetcher module acquires handles to all windows on the desktop by calling the EnumChildWindows function, which enumerates the child windows belonging to the desktop window in z-order, passing the handle of each child window to a callback function of the fetcher module. The fetcher module recursively iterates through all windows on the desktop and identifies those that belong to the shared software application. In this process, the fetcher module identifies the child windows of the desktop that correspond to the top-level windows of the shared software application. For each matching top-level window, the fetcher module invokes a function that recursively iterates through the child windows of that top-level window; the function recursively invokes itself.
The fetcher module orders the initial list of top-level windows associated with the software process according to the determined z-order (FIG. 13, block 604).
The fetcher module appends to the sorted list any windows in the initial list of top-level windows that are not included in the sorted list (FIG. 13, block 606). This step covers any windows that may have been missed (e.g., windows that were deleted after the sorting process was started).
The fetcher module replaces the initial list with the sorted list of top-level windows (FIG. 13, block 608).
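The four-step list update of FIG. 13 can be sketched platform-neutrally as follows. Window handles are plain integers here, and the fresh z-order enumeration is passed in as a list rather than gathered through EnumChildWindows callbacks; both are simplifying assumptions.

```python
# Sketch of the z-order list update: sort the initial window list by the
# freshly determined z-order, append any windows the sort missed, and
# return the result, which replaces the initial list.

def update_window_list(initial_list, z_order):
    """z_order: window handles front-to-back as enumerated from the
    desktop; initial_list: handles from the previous grab iteration."""
    rank = {handle: i for i, handle in enumerate(z_order)}
    sorted_list = sorted(
        (h for h in initial_list if h in rank), key=lambda h: rank[h])
    # Append windows absent from the fresh enumeration (e.g., windows
    # deleted after the sorting process was started).
    sorted_list += [h for h in initial_list if h not in rank]
    return sorted_list
```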
d. Remote access
If the viewer has only viewing access, the application sharing service on the sharer network node transmits only a composite image (in the form of a sample) of the shared window content to the subscribing node of the viewer network node. The viewer on the viewer network node can only passively view the composite image of the shared window on the sharer display.
On the other hand, if the viewer has control access, the application sharing service on the sharer network node transmits a composite image (in the form of a sample) of the shared window content to a subscribing node of the viewer network node. In addition, the fetcher module combines the commands received from the observer network node with commands generated by the sharer on the sharer network node, and passes a combined set of commands to the shared application. This allows the watcher to control, edit, and manipulate the sharing application on the sharer network node. The commands are typically derived from events generated by one or more user input devices (e.g., keyboard, computer mouse, touchpad, and touchscreen) on the observer and sharer network nodes.
Fig. 14 shows an embodiment 610 of a first network node and an embodiment 612 of a second network node. In the illustrated embodiment, the first network node 610 (referred to as the "sharer network node 610") shares window content 614 associated with a sharing process 616 with the second network node 612 (referred to as the "watcher network node 612"). The sharer network node 610 includes a display 618 that presents the window content 614, a display process 620, an embodiment 622 of the communication application 26, and a network layer 624. Similarly, the observer network node 612 includes a display 626, a display process 628, an embodiment 630 of the communication application 26, and a network layer 632.
The display processes 620, 628 provide the display facilities of the sharer network node 610 and the watcher network node 612, respectively. The display facilities control the writing of visual content on the sharer and viewer displays 618, 626. In some embodiments, each display facility includes a graphics device interface (e.g., the GDI available in a Microsoft Windows application environment) that provides functions that may be called by software processes to present visual content on the displays 618, 626.
The network layers 624, 632 provide networking facilities for the sharer network node 610 and the observer network node 612, respectively. The networking facilities include, for example, networking communication protocol stacks and networking hardware that perform processes associated with sending and receiving information over network 18.
The communication applications 622, 630 provide various communication facilities (including application sharing facilities) to the sharer network node 610 and the watcher network node 612, respectively. In the illustrated embodiment, the communication application 622 on the sharer network node 610 generates a composite image 634 of the shared window content 614 on the sharer's display, transmits the composite image 634 over the network 18 to the observer network node 612 for presentation on the observer's display 626, and grants observer remote control access to the shared window content 614. The communication application 630 on the viewer network node 612 controls the presentation of the composite image 634 on the display 626, transforms user input into commands, and transmits the commands to the sharer network node 610.
FIG. 15 illustrates an embodiment of an application sharing method implemented by the sharer network node 610 in the application sharing context shown in FIG. 14.
According to the method of FIG. 15, a communication application 622 on the sharer network node 610 captures a composite image of a window associated with the sharing software process (FIG. 15, block 637). In some embodiments, the window capture process described above is used to capture a composite image. The communication application 622 transmits the composite image to the observer network node 612 (fig. 15, block 639). In this process, the communication application passes the samples of the composite image to the network layer 624, the network layer 624 converts the samples into a network format, and transmits the converted samples over the network 18 to the observer network node 612.
The communication application 622 receives commands derived from local input device events generated on the sharer network node 610 (fig. 15, block 640). At the same time, the communication application 622 receives commands derived from remote input device events generated on the observer network node 612 (FIG. 15, block 642). In this context, a command is an indication or instruction to perform a task, where the indication or instruction is derived from an interpretation of an input device event (e.g., clicking one or more buttons of a computer mouse or keys of a computer keyboard) initiated by a user action (or input) on one or more input devices. Some pointer input device events, such as computer mouse, touchpad, and touchscreen events, are tied to the position of a pointer (or cursor) in a graphical user interface presented on a display. These types of input device events are typically converted into commands that include both input type parameter values describing the input type (e.g., left click, right click, left key double click, scroll wheel, etc.) and location parameter values describing where to enter the input relative to context-related coordinates.
The operating systems on the sharer and observer network nodes 610, 612 typically translate pointer input device events into user commands, with respect to the main window of the graphical user interface (e.g., Microsoft WindowsWindowsA desktop window in an application environment) to define the location parameter values. The sharer's input commands received by the communication application 622 (fig. 15, block 640) are user commands generated by the operating system and also used to control the operation of user mode software processes (e.g., the sharing process 616) on the sharer network node 610. The viewer's input commands received by the communication application 622 (fig. 15, block 642), on the other hand, are typically versions of user input commands generated by the operating system. In particular, the communication application 630 on the observer network node 612 remaps the location parameter values in the operating system-generated command from the coordinate system of the main window of the graphical user interface on the display 626 to the coordinate system of the composite image 634 before transmitting the observer command to the sharer network node 610. For example, in the illustrated embodiment, the position parameter values are remapped to a coordinate system having an origin (0, 0) in the upper left corner of the composite image 634 and having x and y axes extending along the bottom and left edges of the composite image 634.
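The observer-side remapping described above amounts to a translation of the pointer position, sketched below. The function name and the origin argument are illustrative assumptions; the origin would be the composite image's position within the observer's main (desktop) window.

```python
# Sketch of the observer-side remapping: a pointer position in the
# observer's desktop coordinates is translated into the composite
# image's coordinate system, whose origin (0, 0) is the upper left
# corner of the composite image.

def desktop_to_composite(x, y, composite_origin_on_desktop):
    """composite_origin_on_desktop: (left, top) of the composite image
    within the observer's main-window coordinate system."""
    left, top = composite_origin_on_desktop
    return (x - left, y - top)
```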
The communication application 622 processes the received command into a sequence of commands (fig. 15, block 644). In this process, the communication application 622 typically remaps the position parameter values in the viewer command from the coordinate system of the composite image 634 to the coordinate system of the main window of the graphical user interface on the display 618. The communication application then arranges the received commands into a sequence ordered by time of receipt and stores the resulting sequence of commands in a memory buffer.
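The sharer-side sequencing step can be sketched as below. The command tuple format is an illustrative assumption, not the patent's actual data structure: observer command positions (in composite-image coordinates) are remapped into the sharer's desktop coordinates and merged with the sharer's own commands into one sequence ordered by receipt time.

```python
# Hedged sketch of the command-sequencing step performed on the sharer
# network node before the commands are passed to the shared process.

def sequence_commands(local_commands, remote_commands, composite_origin):
    """Commands are (receipt_time, (x, y), input_type) tuples.
    composite_origin: top-left corner of the composite image expressed
    in the sharer's desktop coordinate system."""
    ox, oy = composite_origin
    # Remap observer positions from composite-image coordinates into the
    # sharer's main-window (desktop) coordinates.
    remapped = [(t, (x + ox, y + oy), kind)
                for (t, (x, y), kind) in remote_commands]
    # Arrange all commands into a single sequence ordered by receipt time.
    return sorted(local_commands + remapped, key=lambda cmd: cmd[0])
```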
Communication application 622 passes the command sequence to sharing process 616 (fig. 15, block 646). In this process, for each command, the communication application typically calls a Win32 API function that allows it to specify the window to which Windows will send commands.
Sharing process 616 invokes one or more graphical device interface functions provided by display process 620 to present a window associated with sharing software process 616 on sharer display 618 in accordance with the received command sequence (fig. 15, block 648).
The process is repeated in accordance with the specified update schedule (FIG. 15, blocks 640-648).
Exemplary operating Environment
A. System architecture
1. Introduction
Fig. 16 illustrates an embodiment of an exemplary network communication environment 10 that includes a first network node 12 (referred to as the "first client network node"), a second network node 14 (referred to as the "second client network node"), and a virtual environment creator 16 interconnected by a network 18. The first client network node 12 and the second client network node 14 are configured as described above in connection with fig. 1. The virtual environment creator 16 includes at least one server network node 28 that provides a network infrastructure service environment 30. The communication application 26 and the network infrastructure service environment 30 together provide a platform (referred to herein as "the platform") for creating a spatial virtual communication environment (also referred to herein simply as a "virtual environment").
In some embodiments, the network infrastructure service environment 30 manages the sessions of the first and second client nodes 12, 14 in the virtual area 32 according to the virtual area application 34. The virtual area application 34 is hosted by the virtual area 32 and includes a description of the virtual area 32. The communication application 26 operating on the first and second client network nodes 12, 14 presents respective views of the virtual area 32 in accordance with data received from the network infrastructure service environment 30 and provides respective interfaces for receiving commands from the communicants. The communicants are typically represented in the virtual area 32 by respective avatars that move around in the virtual area 32 in response to commands entered by the communicants at their respective network nodes. The view of each communicant of the virtual area 32 is typically presented from the perspective of the communicant's avatar, which increases the level of immersive experience experienced by the communicant. Each communicant is generally able to view any portion of the virtual area 32 around his or her avatar. In some embodiments, the communication application 26 establishes real-time data stream connections between the first and second client network nodes 12, 14 and other network nodes sharing the virtual area 32 based on the position of the correspondent's avatar within the virtual area 32.
The network infrastructure service environment 30 also maintains a relational database 36 containing records 38 of interactions between communicants. Each interaction record 38 describes the context of an interaction between a pair of communicants.
2. Network environment
Network 18 may include any of a Local Area Network (LAN), a Metropolitan Area Network (MAN), and a Wide Area Network (WAN) (e.g., the Internet). Network 18 typically includes a number of different computing platforms and transport facilities that support the transport of a wide variety of different media types (e.g., text, voice, audio, and video) between network nodes.
The communication application 26 (see fig. 13) typically operates on a client network node that includes software and hardware resources which, together with management policies, user preferences (including preferences regarding the export of the user's presence and the user's connections to areas and other users), and other settings, define a local configuration that influences the administration of real-time connections with other network nodes. Network connections between network nodes may be arranged in a variety of different stream handling topologies, including a peer-to-peer architecture, a server-mediated architecture, and hybrid architectures that combine aspects of the peer-to-peer and server-mediated architectures. Exemplary topologies of these types are described in U.S. patent application nos. 11/923,629 and 11/923,634, both filed on October 24, 2007.
3. Network infrastructure services
The network infrastructure services environment 30 generally includes one or more network infrastructure services that cooperate with the communication application 26 in establishing and managing network connections between the client nodes 12, 14 and other network nodes (see fig. 13). Network infrastructure services may run on a single network node or may be distributed across multiple network nodes. Network infrastructure services typically run on one or more dedicated network nodes (e.g., server computers or network devices that perform one or more edge services such as routing and switching). However, in some embodiments, one or more of the network infrastructure services run on at least one of the communicants' network nodes. Network infrastructure services included in the exemplary embodiment of network infrastructure services environment 30 are account services, security services, area services, aggregation services, and interaction services.
Account service
The account service manages communicant accounts for the virtual environment. The account service also manages the creation and issuance of authentication tokens that the client network nodes can use to authenticate themselves to any of the network infrastructure services.
Security service
The security service controls communicants' access to the assets and other resources of the virtual environment. The access control method implemented by the security service typically is based on one or both of capabilities (in which access is granted to entities having the proper capabilities or permissions) and an access control list (in which access is granted to entities having identities that are on the list). After a particular communicant has been granted access to a resource, that communicant typically uses the functionality provided by the other network infrastructure services to interact in the network communication environment 10.
Regional service
The regional service manages virtual regions. In some embodiments, the regional service remotely configures the communication applications 26 operating on the first and second client network nodes 12, 14 in accordance with the virtual regional application 34, subject to a set of constraints 47 (see fig. 13). The constraints 47 typically include controls over access to the virtual area. Access control typically is based on one or both of capabilities (in which access is granted to communicants or client nodes having the proper capabilities or permissions) and an access control list (in which access is granted to communicants or client nodes having identities that are on the list).
The zone service also manages network connections associated with the virtual zone according to the capabilities of the requesting entity, maintains global state information for the virtual zone, and serves as a data server for client network nodes participating in the shared communication session in the context defined by the virtual zone 32. The global state information includes a list of all objects in the virtual area and their corresponding locations in the virtual area. The regional service sends instructions to configure the client network node. The regional service also registers and communicates initialization information with other client network nodes requesting to join the communication session. In this process, the zone service may transmit to each joining client network node a list of components (e.g., plug-ins) needed to render the virtual zone 32 on the client network node in accordance with the virtual zone application 34. The regional service also ensures that client network nodes can synchronize to a global state in the event of a communication failure. The zone service typically manages communicant interactions with virtual zones via governing rules associated with the virtual zones.
Aggregation service
The aggregation service manages the collection, storage, and distribution of presence information according to the capabilities of the requesting entity, and provides mechanisms for network nodes to communicate with each other (e.g., by managing the distribution of connection handles). The aggregation service typically stores the presence information in a presence database. The aggregation service typically manages the interactions of communicants with each other via communicant privacy preferences.
Interactive service
The interaction service maintains a relational database 36 containing records 38 of interactions between communicants. For each interaction between communicants, one or more services (e.g., regional services) in network infrastructure services environment 30 communicate interaction data to the interaction service. In response, the interaction service generates one or more respective interaction records and stores them in a relational database. Each interaction record describes a context of an interaction between a pair of communicants. For example, in some embodiments, the interaction record contains an identifier of the respective communicant, an identifier of the interaction locale (e.g., a virtual area instance), a description of the hierarchy of the interaction locale (e.g., a description of how the interaction space relates to a larger area), the start time and end time of the interaction, and a list of all files and other data streams shared or recorded during the interaction. Thus, for each real-time interaction, the interaction service keeps track of when, where, and what happens during the interaction in terms of the communicants involved (e.g., entering and exiting), objects activated/deactivated, and files shared.
The interaction service also supports queries on the relational database 36 subject to the capabilities of the requesting entities. The interaction service presents the results of queries on the interaction database records in a sorted order (e.g., most frequent or most recent) based on virtual area. The query results can be used to drive a frequency sort of the contacts whom a communicant has met in which virtual areas, as well as a sort of the contacts whom the communicant has met regardless of virtual area and a sort of the virtual areas that the communicant most frequently visits. The query results also may be used by application developers as part of a heuristic system that automates certain tasks based on relationships. An example of a heuristic of this type is one that by default permits communicants who have visited a particular virtual area more than five times to enter without knocking, or one that allows communicants who were present in an area at a particular time to modify and delete files that were created by another communicant who was present in the same area at the same time. Queries on the relational database 36 can be combined with other searches. For example, queries on the relational database may be combined with queries on contact-history data generated for interactions with contacts using communication systems (e.g., Skype, Facebook, and Flickr) that are outside the domain of the network infrastructure service environment 30.
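The frequency-based sorting of query results described above can be sketched as follows; the record fields and names here are hypothetical simplifications, not the actual schema of the relational database 36.

```python
from collections import Counter

# Hypothetical, simplified interaction records; the real relational
# database 36 stores richer context (place hierarchy, shared files, etc.).
records = [
    {"communicants": ("alice", "bob"), "area": "Acme"},
    {"communicants": ("alice", "bob"), "area": "Acme"},
    {"communicants": ("alice", "carol"), "area": "HelpArea"},
    {"communicants": ("alice", "bob"), "area": "HelpArea"},
]

def rank_contacts(records, communicant, area=None):
    """Rank the contacts a communicant has met, optionally per virtual area."""
    counts = Counter()
    for r in records:
        if area is not None and r["area"] != area:
            continue
        if communicant in r["communicants"]:
            for other in r["communicants"]:
                if other != communicant:
                    counts[other] += 1
    # most_common() yields the most frequently met contacts first
    return [contact for contact, _ in counts.most_common()]

print(rank_contacts(records, "alice"))          # bob first (met 3 times)
print(rank_contacts(records, "alice", "Acme"))  # restricted to one area
```

A heuristic of the kind described in the text could then, for example, check whether a contact appears in the ranked list for a given area before deciding to waive the knock requirement.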
4. Virtual area
The communication application 26 and the network infrastructure service environment 30 typically administer real-time connections with network nodes in a communication context that is defined by an instance of a virtual area. A virtual area instance may correspond to an abstract (non-geometric) virtual space that is defined with respect to abstract coordinates. Alternatively, a virtual area instance may correspond to a visual virtual space that is defined with respect to one-, two-, or three-dimensional geometric coordinates that are associated with a particular visualization. Abstract virtual areas may or may not be associated with respective visualizations, whereas visual virtual areas are associated with respective visualizations.
As explained above, communicants typically are represented by respective avatars in a virtual area that has an associated visualization. The avatars move about in the virtual area in response to commands that the communicants input at their respective network nodes. In some embodiments, a communicant's view of a virtual area instance typically is presented from the perspective of the communicant's avatar, and each communicant typically is able to view any part of the visual virtual area around his or her avatar, thereby increasing the degree of realism experienced by the communicant.
Fig. 17 illustrates an embodiment of an exemplary network node implemented by computer system 48. The computer system 48 includes a display monitor 50, a computer mouse 52, a keyboard 54, speakers 56, 58, and a microphone 60. The display monitor 50 displays a graphical user interface 62. The graphical user interface 62 is a window-based graphical user interface that may include a plurality of windows, icons, and pointers 64. In the illustrated embodiment, the graphical user interface 62 presents a two-dimensional depiction of a shared virtual area 66 associated with a three-dimensional visualization representing a gallery. The communicants are represented in the virtual area 66 by respective avatars 68, 70, 72, each of which may have a respective character (e.g., curator, artist, and visitor) in the context of the virtual area 66.
As explained in detail below, the virtual area 66 includes sections 74, 76, 78, 80, 82 that are associated with respective rules governing the switching of real-time data streams between the network nodes that are represented by the avatars 68-72 in the virtual area 66. (During a typical communication session, the dashed lines demarcating the sections 74-82 in FIG. 17 are not visible to the communicants, although there may be visual cues associated with these section boundaries.) The switching rules dictate how the local connection processes executing on each of the network nodes establish communications with the other network nodes based on the locations of the communicants' avatars 68-72 in the sections 74-82 of the virtual area 66.
A virtual area is defined by a specification that includes a description of the geometric elements of the virtual area and one or more rules, including switching rules and governance rules. The switching rules govern the real-time stream connections between the network nodes. The governance rules control a communicant's access to resources, such as the virtual area itself, regions within the virtual area, and objects within the virtual area. In some embodiments, the geometric elements of the virtual area are described in accordance with the COLLADA Digital Asset Schema Release 1.4.1 specification (April 2006, available from http://www.khronos.org/collada/), and the switching rules are described using an extensible markup language (XML) text format (referred to herein as the virtual space description format (VSDL)) in accordance with the COLLADA Streams Reference specification described in U.S. patent application Nos. 11/923,629 and 11/923,634.
The geometric elements of a virtual area typically include physical geometry and collision geometry of the virtual area. The physical geometry describes the shape of the virtual area. The physical geometry typically is formed from surfaces of triangles, quadrilaterals, or polygons. Colors and textures are mapped onto the physical geometry to create a more realistic appearance for the virtual area. Lighting effects may be provided, for example, by painting lights onto the visual geometry and modifying the texture, color, or intensity near the lights. The collision geometry describes invisible surfaces that determine the ways in which objects can move in the virtual area. The collision geometry may coincide with the visual geometry, correspond to a simpler approximation of the visual geometry, or relate to application-specific requirements of a virtual area designer.
The switching rules typically include a description of the conditions used to connect the source and sink of the real-time data stream according to the location in the virtual area. Each rule typically includes attributes defining the type of real-time data stream to which the rule applies, and the location or locations in the virtual area to which the rule applies. In some embodiments, the rules each optionally include one or more attributes that specify a desired role for the source, a desired role for the sink, a priority level for the flow, and a requested flow handling topology. In some embodiments, if no explicit exchange rules are defined for a particular portion of the virtual area, one or more implicit or default exchange rules may be applied to that portion of the virtual area. One exemplary default switching rule is a rule that connects each source within the area to each compatible sink according to policy rules. The policy rules may apply globally to all connections between client nodes or only to corresponding connections with respective client nodes. An example of a policy rule is a proximity policy rule that only allows connection of a source to a compatible sink associated with respective objects within a prescribed distance (or radius) from each other in a virtual area.
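The proximity policy rule described above can be sketched as follows; the object names, positions, and the two-dimensional coordinate system are hypothetical illustrations, not the platform's actual data model.

```python
import math

def within_radius(pos_a, pos_b, radius):
    """Proximity policy: permit a connection only if the two objects are
    within the prescribed distance of each other in the virtual area."""
    return math.dist(pos_a, pos_b) <= radius

# Hypothetical avatar positions in a 2-D virtual area.
positions = {
    "avatar68": (0.0, 0.0),
    "avatar70": (3.0, 4.0),   # distance 5 from avatar68
    "avatar72": (20.0, 0.0),  # distance 20 from avatar68
}

def allowed_sinks(source, positions, radius):
    """Connect the source to each compatible sink permitted by the policy."""
    return [obj for obj in positions
            if obj != source and within_radius(positions[source], positions[obj], radius)]

print(allowed_sinks("avatar68", positions, radius=10.0))  # ['avatar70']
```

A default switching rule of the kind described in the text would apply such a policy check to every candidate source/sink pair before establishing a real-time stream connection.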
In some embodiments, governance rules are associated with a virtual area to control who has access to the virtual area, who has access to its contents, what the scope of that access to the contents of the virtual area is (e.g., what users can do with the contents), and what the follow-on consequences of accessing those contents are (e.g., record keeping, such as audit logs, and payment requirements). In some embodiments, an entire virtual area or a section of a virtual area is associated with a "governance grid". In some embodiments, a governance grid is implemented in a manner analogous to the implementation of the sector grids described in U.S. patent application Nos. 11/923,629 and 11/923,634. A governance grid enables a software application developer to associate governance rules with a virtual area or a section of a virtual area. This avoids the need to create individual permissions for every file in a virtual area and avoids the need to deal with the complexity that potentially can arise when the same document needs to be treated differently depending on the context.
In some embodiments, a virtual area is associated with a governance grid that associates one or more sections of the virtual area with digital rights management (DRM) functions. A DRM function controls access to one or more of the virtual area, one or more sections within the virtual area, or objects within the virtual area. The DRM function is triggered each time a communicant crosses a governance grid boundary within the virtual area. The DRM function determines whether the triggering action is permitted and, if so, what the scope of the permitted action is, whether payment is required, and whether audit records need to be generated. In an exemplary implementation of a virtual area, the associated governance grid is configured such that if a communicant is able to enter the virtual area, he or she is able to perform actions on all of the documents that are associated with the virtual area, including manipulating documents, viewing documents, downloading documents, deleting documents, modifying documents, and re-uploading documents. In this way, the virtual area can become a repository for information that is shared and discussed in the context defined by the virtual area.
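The triggering of a DRM function at a governance grid boundary can be sketched as follows; the grid contents, rule fields, and section names are hypothetical illustrations, not the platform's actual implementation.

```python
# Hypothetical governance-grid sketch: each section of the virtual area maps
# to a rule deciding whether an action is permitted and whether an audit
# record must be generated.
GOVERNANCE_GRID = {
    "courtyard": {"permitted": {"view"}, "audit": False},
    "main_room": {"permitted": {"view", "modify", "download"}, "audit": True},
}

audit_log = []

def cross_boundary(communicant, section, action):
    """Triggered each time a communicant crosses a governance grid boundary;
    returns whether the triggering action is permitted."""
    rule = GOVERNANCE_GRID.get(section)
    if rule is None or action not in rule["permitted"]:
        return False
    if rule["audit"]:
        audit_log.append((communicant, section, action))
    return True

print(cross_boundary("alice", "main_room", "modify"))  # True (and audited)
print(cross_boundary("alice", "courtyard", "modify"))  # False
```

A real implementation would also evaluate payment requirements and the scope of the permitted action, as described in the text.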
Additional details regarding the specification of a virtual area are described in U.S. patent application Nos. 61/042,714 (filed April 2008), 11/923,629 (filed October 24, 2007), and 11/923,634 (filed October 24, 2007).
5. Communication application
In some embodiments, the communication application 26 includes:
a. a local Human Interface Device (HID) and an audio playback device;
b. So3D graphical display, avatar, and physics engine;
c. a system database and a storage facility.
a. Local Human Interface Device (HID) and audio playback device
The local HID enables the communicant to input commands and other signals to the client network node while participating in the virtual area communication session. Exemplary HIDs include a computer keyboard, a computer mouse, a touch screen display, and a microphone.
An audio playback device enables a communicant to play back audio signals received during a virtual zone communication session. An exemplary audio playback device includes audio processing hardware (e.g., a sound card) for manipulating (e.g., mixing and applying special effects) audio signals, and a speaker for outputting sound.
So3D graphical display, avatar, and physics engine
The So3D engine is a three-dimensional visualization engine that controls the rendering of the virtual region and the corresponding view of the objects within the virtual region on the display monitor. The So3D engine typically interfaces with graphical user interface drivers and HID devices to present a view of the virtual area and to allow the communicant to control the operation of the communication application 26.
In some embodiments, the So3D engine receives graphics rendering instructions from a regional service. The So3D engine can also read a local correspondent's avatar database that contains the images needed to render the correspondent's avatar in the virtual area. Based on this information, the So3D engine generates a visual representation (i.e., an image) of the virtual area and the objects within the virtual area from the perspective (position and orientation) of the correspondent's avatar within the virtual area. The visual representation is typically passed to a graphics rendering component of the operating system that drives graphics rendering hardware to render the visual representation of the virtual area on the client network node.
The communicant can control the presented view of the virtual area by inputting view control commands via an HID device (e.g., a computer mouse). The So3D engine updates the view of the virtual area in accordance with the view control commands. The So3D engine also updates the graphical representation of the virtual area on the display monitor in accordance with updated object location information that is received from the area service.
c. System database and storage facility
The system database and storage facility stores various kinds of information that is used by the platform. Exemplary information typically stored by the storage facility includes the presence database, the relational database, the avatar database, a real user id (RUID) database, an art cache database, and an area application database. This information may be stored on a single network node, or it may be distributed across multiple network nodes.
6. Client node architecture
The correspondent typically connects to the network 18 from a client network node. Client network nodes are typically implemented by general purpose computer systems or special purpose communication computer systems (or "consoles," such as network-enabled video game consoles). The client network node performs a communication process that establishes real-time data stream connections with other network nodes, and typically performs a visualization rendering process that presents a view of each virtual area into which the communicant enters.
Fig. 18 illustrates an embodiment of a client network node implemented by computer system 120. Computer system 120 includes a processing unit 122, a system memory 124, and a system bus 126 that couples processing unit 122 to the various components of computer system 120. The processing unit 122 may include one or more data processors, each of which may be in the form of any one of various commercially available computer processors. The system memory 124 includes one or more computer-readable media typically associated with a software application addressing space that defines addresses available to the software application. System memory 124 may include Read Only Memory (ROM) and Random Access Memory (RAM) that store a basic input/output system (BIOS) that contains the start-up routines for computer system 120. The system bus 126 may be a memory bus, a peripheral bus, or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, Microchannel, ISA, and EISA. Computer system 120 also includes persistent storage 128 (e.g., hard disk drives, floppy disk drives, CD ROM drives, tape drives, flash memory devices, and digital video disks), where persistent storage 128 is connected to system bus 126 and contains one or more computer-readable media disks providing non-volatile or persistent storage for data, data structures, and computer-executable instructions.
A communicant may interact (e.g., enter commands or data) with computer system 120 using one or more input devices 130 (e.g., one or more keyboards, computer mice, microphones, cameras, joysticks, physical motion sensors such as Wii input devices, and touch pads). The information may be presented through a Graphical User Interface (GUI) presented to the correspondent on a display monitor 132, the display monitor 132 being controlled by a display controller 134. Computer system 120 may also include other input/output hardware (e.g., peripheral output devices such as speakers and printers). Computer system 120 is connected to other network nodes through a network adapter 136 (also referred to as a "network interface card" or NIC).
A number of program modules may be stored in the system memory 124, including an application programming interface (API) 138, an operating system (OS) 140 (e.g., the Windows XP operating system available from Microsoft Corporation of Redmond, Washington, U.S.A.), the communication application 26, drivers 142 (e.g., GUI drivers), network transport protocols 144, and data 146 (e.g., input data, output data, program data, a registry, and configuration settings).
7. Server node architecture
In some embodiments, one or more of the server network nodes of the virtual environment creator 16 are implemented by respective general purpose computer systems of the same type as the client network nodes 120, except that each server network node typically includes one or more server software applications.
In other embodiments, one or more server network nodes of virtual environment creator 16 are implemented by respective network devices that perform edge services (e.g., routing and switching).
B. Exemplary communication sessions
Returning to FIG. 17, during a communication session, each of the client network nodes generates a respective set of real-time data streams (e.g., motion data streams, audio data streams, chat data streams, file transfer data streams, and video data streams). For example, each communicant manipulates one or more input devices (e.g., the computer mouse 52 and the keyboard 54) that generate motion data streams, which control the movement of his or her avatar in the virtual area 66. In addition, the communicant's voice and other sounds that are generated locally in the vicinity of the computer system 48 are captured by the microphone 60. The microphone 60 generates audio signals that are converted into a real-time audio stream. Respective copies of the audio stream are transmitted to the other network nodes that are represented by avatars in the virtual area 66. The sounds that are generated locally at these other network nodes are converted into real-time audio signals and transmitted to the computer system 48. The computer system 48 converts the audio streams generated by the other network nodes into audio signals that are rendered by the speakers 56, 58. The motion data streams and audio streams may be transmitted from each of the communicant nodes to the other client network nodes either directly or indirectly. In some stream handling topologies, each of the client network nodes receives copies of the real-time data streams that are transmitted by the other client network nodes. In other stream handling topologies, one or more of the client network nodes receive one or more stream mixes that are derived from the real-time data streams originating from other ones of the network nodes.
In some embodiments, the zone service maintains global state information that includes a current specification of the virtual area, a current registry of the objects that are in the virtual area, and a list of any stream mixes that currently are being generated by the network node hosting the zone service. The object registry typically includes, for each object in the virtual area, a respective object identifier (e.g., a label that uniquely identifies the object), a connection handle (e.g., a URI, such as an IP address) that enables a network connection to be established with the network node that is associated with the object, and interface data that identifies the real-time data sources and sinks that are associated with the object (e.g., the sources and sinks of the network node that is associated with the object). The object registry also typically includes one or more optional role identifiers for each object; the role identifiers may be assigned explicitly to the objects by either the communicants or the zone service, or may be inferred from other attributes of the objects or the user. In some embodiments, the object registry also includes the current location of each of the objects in the virtual area as determined by the zone service from an analysis of the real-time motion data streams received from the network nodes associated with the objects in the virtual area. In this regard, the zone service receives real-time motion data streams from the network nodes associated with objects in the virtual area and, based on the motion data, tracks the communicants' avatars and other objects that enter, leave, and move around in the virtual area. The zone service updates the object registry in accordance with the current locations of the tracked objects.
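A minimal sketch of an object registry entry and a motion-driven location update of the kind described above; the field names and values are illustrative assumptions, not the actual registry format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an object-registry entry.
@dataclass
class RegistryEntry:
    object_id: str                                 # uniquely identifies the object
    connection_handle: str                         # e.g. a URI/IP for connecting
    sources: list = field(default_factory=list)    # real-time data sources
    sinks: list = field(default_factory=list)      # real-time data sinks
    roles: list = field(default_factory=list)      # optional role identifiers
    position: tuple = (0.0, 0.0)                   # current location in the area

registry = {}

def update_position(registry, object_id, position):
    """The zone service updates the registry from tracked motion data."""
    registry[object_id].position = position

registry["avatar68"] = RegistryEntry(
    "avatar68", "192.0.2.1", sources=["mic"], sinks=["speaker"])
update_position(registry, "avatar68", (3.5, 1.2))
print(registry["avatar68"].position)  # (3.5, 1.2)
```

In the described system such updates would be driven continuously by the analysis of the real-time motion data streams received from the client network nodes.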
In managing real-time data stream connections with other network nodes, the zone service maintains a set of configuration data for each of the client network nodes, including interface data, a zone list, and the location of objects currently located in the virtual zone. For each object associated with each of the client network nodes, the interface data includes a respective list of all sources and sinks of the real-time data stream type associated with the object. The zone list is a registry of all zones in the virtual area that are currently occupied by avatars associated with respective client network nodes. When a correspondent first enters a virtual area, the area service typically initializes the current object location database with location initialization information. Thereafter, the zone service updates the current object location database with current locations of the objects in the virtual zone, the current locations determined from analysis of real-time motion data streams received from other client network nodes sharing the virtual zone.
C. Interfacing with a spatial virtual communication environment
In addition to the local Human Interface Devices (HIDs) and audio playback devices, the So3D graphical display, avatar, and physics engine, and the system database and storage facility, the communication application 26 also includes a graphical navigation and interaction interface (referred to herein as a "seeker interface") that interfaces the user with the spatial virtual communication environment. The seeker interface includes navigation controls that enable the user to navigate the virtual environment and interaction controls that enable the user to control his or her interactions with other communicants in the virtual communication environment. The navigation and interaction controls typically are responsive to user selections that are made using any type of input device, including a computer mouse, a touch pad, a touch screen display, a keyboard, and a video game controller. The seeker interface is an application that operates on each client network node. The seeker interface is a small, lightweight interface that a user can keep up and running on his or her desktop at all times. The seeker interface allows the user to launch virtual area applications and provides the user with immediate access to real-time contacts and real-time collaborative places (or areas). The seeker interface is integrated with real-time communication applications and/or real-time communication components of the underlying operating system such that the seeker interface can initiate and receive real-time communications with other network nodes.
A virtual area is integrated with the user's desktop through the seeker interface such that the user can upload files into the virtual environment created by the virtual environment creator 16, use files stored in association with the virtual area by means of local client software applications that are independent of the virtual environment while still present in the virtual area, and, more generally, treat presence and position within a virtual area as an aspect of his or her operating environment analogous to other operating system functions, rather than as just one of several applications.
A spatial virtual communication environment typically can be modeled as a spatial hierarchy of places (also referred to herein as "locations") and objects. The spatial hierarchy includes an ordered sequence of levels ranging from a top level to a bottom level. Each of the places in a successive level of the spatial hierarchy is contained in a respective place in a preceding level. Each of the objects in the spatial hierarchy is contained in a respective place. The levels of the spatial hierarchy typically are associated with respective visualizations that are consistent with a geographic, architectural, or urban metaphor, and are labeled accordingly. The sections of each virtual area are defined by respective grids, some of which may define elements of a physical environment (e.g., spaces associated with a building, such as rooms and courtyards) that may contain objects (e.g., avatars and props, such as view screen objects and conference objects).
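The containment property of the spatial hierarchy (each object contained in exactly one place, each place contained in a place at the preceding level) can be sketched as follows, with hypothetical place and object names:

```python
# Minimal sketch of a spatial hierarchy of places and objects.
# Top level: virtual areas; next level: sections; leaves: objects.
hierarchy = {
    "Acme": {                        # a top-level virtual area
        "courtyard": ["avatar68"],
        "main_room": ["view_screen", "conference_table"],
    },
}

def containing_place(hierarchy, obj):
    """Return the (area, section) pair that contains the object, or None."""
    for area, sections in hierarchy.items():
        for section, objects in sections.items():
            if obj in objects:
                return (area, section)
    return None

print(containing_place(hierarchy, "view_screen"))  # ('Acme', 'main_room')
```

A navigation model bound to such a hierarchy, as described below for the seeker interface, walks paths from a place at one level down to the places and objects it contains.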
Navigation controls of the seeker interface allow a user to traverse paths in the virtual environment according to a navigation model bound to the underlying spatial hierarchy of places and objects. The network infrastructure service environment 30 records the path traversed by the user. In some embodiments, the network infrastructure service environment 30 records a history including a list of time-ordered views of the virtual area presented by the user as the user navigates within the virtual area. Each view typically corresponds to a view of a respective renderable segment of the virtual area. In these embodiments, the navigation controls enable the user to move to selected sections of the history. The navigation control also includes a graphical representation showing a depth path in the spatial hierarchy corresponding to a position of a current view of the virtual area as viewed by the user. In some embodiments, the graphical representation of the depth path includes a respective user selectable link to a respective view of each previous level in the spatial hierarchy model of the virtual area above the current view.
The interaction controls of the seeker interface allow the user to manage his or her interactions with other communicants. The interaction options that are available to the user typically depend on the section in which the user is present. In some embodiments, the interaction options that are available to communicants who are present in a particular section are different from the options that are available to other communicants who are not present in that section. The level of detail and the interactivity afforded the user typically depend on whether or not the user is present in the particular section. In one exemplary embodiment, if the user is outside the virtual area, the user is provided with minimal detail of the interactions occurring within the virtual area (e.g., the user can see the outline of the floor plan, background textures, and foliage of the area, but the user cannot see where other communicants are present in the area); if the user is within the virtual area but outside a particular section of the area, the user is provided with a moderate level of detail of the interactions occurring within that particular section (e.g., the user can see where other communicants are present within the area, can see visualizations of their current states (talking, typing a chat message, whether their headsets and microphones are turned on), and can see whether any view screen is active); and if the user is inside the particular section of the area, the user is provided with full detail of the interactions occurring within that particular section (e.g., the user can see thumbnails of the files being shared on the view screens, can hear and speak with the other communicants in the area, and can see elements of a log of the chat messages generated by the communicants in the section). In some embodiments, the switching and governance rules that are associated with the sections of a virtual area control how the network infrastructure services distinguish between those communicants who are present in a particular section and those who are not.
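The location-dependent levels of detail described above can be sketched as follows; the level names and location fields are hypothetical illustrations:

```python
# Hypothetical sketch of location-dependent interaction detail.
def detail_level(user_location, area, section):
    """Return how much interaction detail a user is shown for a section."""
    if user_location.get("area") != area:
        return "minimal"   # floor-plan outline only, no communicant positions
    if user_location.get("section") != section:
        return "moderate"  # presence and status visualizations, no content
    return "full"          # file thumbnails, audio, chat log

print(detail_level({"area": "Acme", "section": "courtyard"},
                   "Acme", "courtyard"))  # full
print(detail_level({"area": "Other"}, "Acme", "courtyard"))  # minimal
```

In the described system the thresholds between these levels would be determined by the switching and governance rules associated with each section rather than a fixed function.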
Fig. 19 shows an embodiment 160 of a seeker interface displayed in a window 162 and including one or more tabs 164, each having a browsing area 166 and a toolbar 168.
Each of the tabs 164 typically is associated with a respective view of the virtual environment. In the illustrated embodiment, the view (labeled "My Areas") presented in the tab 164 is associated with a respective set of virtual areas, which may be a default set of virtual areas in the virtual environment or may be a set of virtual areas that is identified by a respective filter on the interaction database. In particular, the tab 164 is associated with a set of three virtual areas (i.e., Acme, Sococo Help Area, and Personal Space), which may be a default set of areas associated with the user or may be identified by a filter that identifies all of the areas associated with the user (e.g., all of the areas that the user has interacted with). Additional tabs may be created by selecting the "+" button 170.
The browsing area 166 for each tab shows a graphical representation of the elements of the virtual environment associated with that tab. For example, in the illustrated embodiment, the browsing area 166 shows top-level views 172, 174, 176 of the virtual area associated with the tab 164. The user may navigate to the next lower level in the spatial hierarchy model of any of these virtual areas by selecting the respective graphical representation of that virtual area.
Toolbar 168 includes an adaptive set of navigation and interaction tools that the seeker interface automatically selects based on the user's current location in the virtual environment. In the illustrated embodiment, toolbar 168 includes a back button 178, a forward button 180, a landmark button 182, and a home button 184. The back button 178 corresponds to a back control that enables the user to incrementally move back to previous sections in the history of sections that the user has traversed. The forward button 180 corresponds to a forward control that enables the user to incrementally move forward to subsequent sections in the history of sections that the user has traversed. The landmark button 182 provides a place landmark control for storing links to sections, and a landmark navigation control for viewing a list of links to sections where landmarks have been previously placed. In response to a user selection to place a landmark control, a landmark is created by storing an image of a location shown in the current view in association with a hyperlink to a corresponding location in the virtual area. In response to a user selection of the landmark navigation control, a landmark window is presented to the user. The landmark window includes live visualizations of all locations where the user has made landmarks. The images in the landmark windows are each associated with a respective user-selectable hyperlink. In response to a user selection of one of the hyperlinks in the landmark window, a view of the virtual area corresponding to the location associated with the selected hyperlink is automatically displayed in the browsing area 166 of the seeker interface window 162. The home button 184 corresponds to a control that returns the user to a view (e.g., the view shown in FIG. 19) in the virtual environment that specifies a "home" position.
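The back, forward, and landmark controls described above can be sketched as a navigation history; the class and view names are hypothetical illustrations, not the seeker interface's actual implementation:

```python
# Hypothetical sketch of the seeker interface's navigation history: back and
# forward move incrementally through the ordered list of views the user has
# traversed; landmarks store links to saved locations.
class NavigationHistory:
    def __init__(self, home):
        self.views = [home]
        self.index = 0
        self.landmarks = {}

    def visit(self, view):
        # Visiting a new view truncates any forward history.
        self.views = self.views[: self.index + 1] + [view]
        self.index += 1

    def back(self):
        self.index = max(0, self.index - 1)
        return self.views[self.index]

    def forward(self):
        self.index = min(len(self.views) - 1, self.index + 1)
        return self.views[self.index]

    def place_landmark(self, name):
        # A real landmark also stores an image of the current view.
        self.landmarks[name] = self.views[self.index]

nav = NavigationHistory("MyAreas")
nav.visit("Acme")
nav.visit("Acme/Courtyard")
print(nav.back())     # Acme
nav.place_landmark("meeting spot")
print(nav.forward())  # Acme/Courtyard
```

Selecting a hyperlink in the landmarks window would then amount to jumping the current view directly to the stored location.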
Referring to FIG. 20, in response to a user selection of the graphical representation 172 of the Acme virtual area shown in FIG. 19, the platform moves the user into a default zone of the virtual area, automatically establishes the user's presence in the default zone, and automatically establishes a network connection between the user and each of the other communicants occupying the selected zone. Based on switching rules established by the zone designer, the platform multiplexes the designated real-time data streams (e.g., streams from microphone and speaker) of all communicants in the default zone so that they can both see each other's sprites or avatars and communicate (e.g., speak and listen) with each other.
The seeker interface shows a top, or floor-plan, view of the Acme virtual area in the browsing area 166 of the tab 164 and provides the user with a default set of interaction options. In the illustrated embodiment, the user's presence is established automatically in the courtyard section 190 of the virtual area, and the user's microphone and default speakers (e.g., headphones) are turned on. In the floor-plan view shown in FIG. 20, the user is represented by a circular sprite 192; other users in the Acme virtual area also are represented by respective circular sprites 194-. The state of the user's speakers is depicted by the presence or absence of a headphones graphic 203 on the user's sprite 192: when the speakers are on, the headphones graphic 203 is present, and when the speakers are off, the headphones graphic 203 is absent. The state of the user's microphone is depicted by the presence or absence of a microphone graphic 206 on the user's sprite 192 and a series of concentric circles 204 around the user's sprite 192: when the microphone is on, the microphone graphic 206 and the concentric circles 204 are present, and when the microphone is off, the microphone graphic 206 and the concentric circles 204 are absent. The headphones graphic 203, the concentric circles 204, and the microphone graphic 206 serve as visual reminders of the states of the user's audio playback and microphone devices.
In addition to the back button 178, the forward button 180, the landmark button 182, and the home button 184, the toolbar 168 includes a series of one or more breadcrumb buttons 207 that begin with, and include, the home button 184. The breadcrumb buttons 207 correspond to a hierarchical sequence of successive user-selectable links. Each successive link corresponds to a view of a respective level in the hierarchical model of the virtual area, with each successive level being contained within the preceding levels. In the illustrated embodiment, the breadcrumb buttons 207 include the home button 184 and an Acme button 208 that corresponds to the current view of the Acme virtual area shown in FIG. 20. The breadcrumb buttons 207 provide the user with single-button access to the respective views of the different levels of the virtual environment. The toolbar 168 also includes a button 210 and a settings button 212.
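The breadcrumb behavior described above amounts to maintaining the path from the root of the hierarchical spatial model (the home view) to the current view, with one single-click button per level. A minimal sketch (the class and method names are illustrative assumptions, not from the patent):

```python
# Hypothetical sketch of the breadcrumb trail: the buttons mirror the path
# from the root of the hierarchical spatial model (Home) to the current view.

class BreadcrumbTrail:
    def __init__(self, root="Home"):
        self.path = [root]          # the home button is always first

    def descend(self, level):
        """Entering a deeper level appends a button for it."""
        self.path.append(level)

    def jump_to(self, level):
        """Clicking a breadcrumb truncates the trail back to that level,
        giving single-button access to any enclosing view."""
        index = self.path.index(level)
        self.path = self.path[: index + 1]

trail = BreadcrumbTrail()
trail.descend("Acme")
trail.descend("Body Space")
trail.descend("North Wall")
trail.jump_to("Acme")           # single click back to the area view
```

Because each level is contained within the preceding ones, truncating the list is all that is needed to return to any enclosing view.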
When an area is selected or focused, the button 210 appears as a portrait of two people, is labeled "members," and allows members and moderators to see a list of the people associated with the area. When an audio or chat zone is focused, the button 210 has a different image (e.g., an arrow pointing down toward a plane, representing a get operation) and is labeled "get." In response to user selection of the button 210, a list of all members of the Acme virtual area is displayed in the user interface. The user may select any communicant in the list and click a get button presented in the user interface; in response, the platform transmits an invitation to the selected communicant to join the user in one of the designated zones.
The settings button 212 provides the user with access to a set of controls for specifying default settings associated with the current region.
The user may navigate back from the view of the Acme virtual area shown in FIG. 20 to the view shown in FIG. 19 in a variety of ways. For example, the user may select any of the following: the back button 178, the home button 184, or any portion of the section 211 outside the boundary of the graphical representation of the Acme virtual area shown in FIG. 20.
The user may navigate to any section of the Acme virtual area. In some embodiments, to move to a section, the user transmits a command to execute one of the sections displayed on the monitor (e.g., by selecting the section and then clicking the enter button in the toolbar, or by double-clicking the section as a shortcut); in response, the platform depicts the user's avatar in the section corresponding to the selected section object. In response to the section execution command, the seeker interface outlines the section (indicating to the user that it is selected) and updates the breadcrumb buttons 207 to show the location of the selected section in the hierarchy. Toolbar buttons specific to this selection also appear to the right of the breadcrumb buttons 207.
The user may also interact with any objects (e.g., screens, tables, or files) present in a section. In some embodiments, to interact with an object, the user transmits a command to execute one of the objects displayed on the monitor (e.g., by selecting the object and then clicking the view button in the toolbar, or by double-clicking the object as a shortcut); in response, the platform performs an operation with respect to the object (e.g., presenting a magnified view of the object, opening an interactive interface window, and so on). In response to the object execution command, the seeker interface outlines or otherwise highlights the object (indicating to the user that it is selected) and updates the breadcrumb buttons 207 to show the location of the selected object in the hierarchy. Toolbar buttons specific to this selection also appear to the right of the breadcrumb buttons 207.
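Sections and objects follow the same execute-command pattern: select and execute (or double-click), then highlight the item, update the breadcrumb trail, and swap in selection-specific toolbar buttons. A hypothetical sketch of that dispatch (the mapping and names are illustrative, not from the patent):

```python
# Hypothetical sketch of the execute-command pattern: executing a section
# or an object highlights it, extends the breadcrumb trail by one level,
# and adds the toolbar buttons specific to that kind of item.

TOOLBAR_BUTTONS = {          # illustrative mapping, not from the patent
    "section": ["enter"],
    "object": ["view", "share"],
}

def execute(item_kind, item_name, breadcrumbs):
    breadcrumbs = breadcrumbs + [item_name]     # reflect the new level
    toolbar = ["back", "forward", "home"] + TOOLBAR_BUTTONS[item_kind]
    return {"highlighted": item_name,
            "breadcrumbs": breadcrumbs,
            "toolbar": toolbar}

# Double-clicking a view screen object from inside the body space:
state = execute("object", "Screen 3", ["Home", "Acme", "Body Space"])
```

The selection-specific buttons appear alongside the persistent navigation buttons, matching the description of buttons appearing to the right of the breadcrumb trail.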
Referring to FIG. 21, in some embodiments, in response to the user entering the body space 213, the platform automatically establishes a network connection between the user and each of the other communicants occupying that space. The user may also enter a space (and thereby establish a presence in the space) by selecting the space and clicking the enter button; this causes the platform to move the user's sprite from its current location (i.e., the yard) to the selected space (i.e., the body space). The settings of the user's speaker and microphone typically are not changed as the user moves from place to place.
Fig. 22 shows a situation in which the user has double-clicked the wall object 290 in the view of the body space 213 shown in fig. 21.
In response to a user command to execute the wall object 290, the seeker interface presents the contents of the wall object 290, together with a 2.5-dimensional view of the area of the body space 213 surrounding the wall object 290, in the browsing area 166 of the tab 164. In the embodiment shown in FIG. 22, the selected wall object 290 corresponds to the north wall of the body space 213. The north wall contains a pair of view screen objects 289, 291 (labeled "2" and "3," respectively) that appear on the north wall object 290. The view screen objects 289, 291 may be used to present the contents of data files associated with the north wall of the body space 213. The 2.5-dimensional view also shows a west wall object 293 and an east wall object 295 located to the left and right of the north wall object 290, respectively. The west wall object 293 and the east wall object 295 each include a respective view screen object (labeled "1" and "4," respectively) that may be used to present the contents of a respective data file.
The interface also shows a view of the body space 213 and the area of the Acme space surrounding the body space 213 in the minimap 256. The minimap 256 also shows a highlighted view 292 of the selected north wall object 290 in the body space 213.
The breadcrumb buttons 207 shown in the toolbar 168 of the tab 164 include a north wall button 294 corresponding to the current level in the hierarchical spatial model of the virtual area. The toolbar 168 includes a left button 296 and a right button 298 that allow the user to rotate the current view ninety degrees (90°) to the left or right, so that the user may view the contents of the different walls of the body space 213 in the central viewing area of the 2.5-dimensional view. The user may also double-click a different one of the walls shown in the minimap 256 to change the content presented in the central viewing area of the 2.5-dimensional view of the body space 213.
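The rotate buttons effectively step the central viewing area through the four walls in ninety-degree increments. A sketch of the underlying arithmetic (the wall ordering and direction convention are illustrative assumptions):

```python
# Hypothetical sketch: the left/right rotate buttons step the central
# viewing area through the four walls in ninety-degree increments.

WALLS = ["north", "east", "south", "west"]   # clockwise order (illustrative)

def rotate(current_wall, direction):
    """direction is +1 for the right button, -1 for the left button
    (which button maps to which sign is an assumption here)."""
    i = WALLS.index(current_wall)
    return WALLS[(i + direction) % len(WALLS)]

facing = rotate("north", +1)   # step one wall clockwise from the north wall
```

Four presses in the same direction return the view to the starting wall, so the user can inspect every wall of the space without leaving the 2.5-dimensional view.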
FIG. 23 shows the seeker interface after the user has selected the view screen object 291 (labeled "3") on the north wall in the view of the body space 213 shown in FIG. 22. The user may have executed the view screen object 291 either by double-clicking any portion of the view screen object 291 shown in the central viewing area of the 2.5-dimensional view in the browsing area 166 of the tab 164, or by double-clicking the corresponding view screen object in the minimap 256 shown in FIG. 22. In response to the user double-clicking the view screen object 291 shown in FIG. 22, the browsing area 166 of the tab 164 shows an enlarged view of the view screen object 291 and the area of the north wall object 290 surrounding it, as shown in FIG. 23. The user may double-click any area of the north wall object 290 surrounding the view screen object 291 in the browsing area 166 shown in FIG. 23 to return to the browsing-area and minimap views of the body space 213 shown in FIG. 22. In the embodiment shown in FIG. 23, the minimap 256 shows the contents of the wall object 290 together with a 2.5-dimensional view of the area of the body space surrounding the wall object 290; this view corresponds to the previous level in the hierarchical spatial model of the virtual area. The breadcrumb buttons 207 include a screen 3 button 302 corresponding to the current level in the hierarchical spatial model of the virtual area. The toolbar 168 includes a share button 304 that allows the user to specify a shared data file whose contents are to be presented on the view screen object 291 (i.e., screen 3), thereby allowing all communicants in the body space 213 to share that data file at the same time. The view screen object 291 shown in the browsing area 166 includes a share link 306 that also allows the user to specify the shared data file.
Referring to FIG. 24, in response to user selection of the share button 304 or the share link 306, the seeker interface opens a separate selection source interface window 310 that allows the user to specify the data file whose contents are to be shared on the view screen object 291, as described in section IV above. The selection source interface window 310 includes a text box 312 for receiving a data file identifier (e.g., a local data file storage pathname or a Uniform Resource Identifier (URI)) and a browse button 314 that enables the user to browse different locations for the data file identifier. The data file may be located on the client node 12 or on another network node. The selection source interface window 310 also includes a favorites button 316 that allows the user to browse a list of URIs of previously bookmarked files, applications, or data file identifiers.
Referring to FIG. 25, after the user has selected a data file identifier in the selection source interface, the communication application 26 presents a thumbnail image of the selected data file. In some embodiments, a capture module is invoked to obtain the thumbnail image of the selected data file. In some embodiments, the thumbnail image may be a snapshot of a main window of the shared application associated with the selected data file. The thumbnail image is displayed on the view screen object 291 in both the browsing area 166 and the minimap 256. In the illustrated embodiment, the specified data file corresponds to a PowerPoint® data file for a slide containing a pie chart. The user may terminate the presentation of the data file on the view screen object 291 by selecting the clear icon 318.
One or more viewers may subscribe to the shared application window showing the contents of the selected data file by clicking (or double-clicking) the thumbnail image shown on the view screen object 291. Each viewer may view, control, edit, or manipulate the shared window content presented on the view screen object 291 in accordance with any governing rules associated with the selected data file or with the zone containing the view screen object 291. A watcher who has been granted control over the shared window content can input commands to the sharing process executing on the sharer network node by using one or more input devices on the watcher's network node, as described above in section IV. Assuming that real-time performance can be achieved over the network connections between the sharer network node and the watcher network nodes, edits and other manipulations of the shared data file typically are displayed to each of the collaborators as if they were performed on the same network node.
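The claims below recite generating the shared image by compositing the identified windows according to their z-order in the screen layout. A minimal sketch of that ordering step (data shapes are illustrative; a real implementation would composite pixel buffers rather than labels):

```python
# Hypothetical sketch of z-order compositing: match each identified
# window's handle against the screen-wide z-order list, sort the
# identified windows back-to-front, and paint their images in that order
# so that frontmost content is painted last and occludes what is behind.

def composite(z_order, identified, images):
    """z_order: window handles front-to-back for the whole screen layout.
    identified: handles of the windows belonging to the shared process.
    images: handle -> image (here a label standing in for pixels)."""
    back_to_front = [h for h in reversed(z_order) if h in identified]
    canvas = []
    for handle in back_to_front:
        canvas.append(images[handle])     # later paints occlude earlier ones
    return canvas

# Windows 510 and 514 belong to the shared process; window 512 does not,
# so its content is excluded from the composite image.
result = composite(z_order=[514, 512, 510],
                   identified={510, 514},
                   images={510: "parent", 514: "child"})
```

Filtering by handle before painting is what keeps an unrelated overlapping window (512 here) out of the composite image, preserving the sharer's privacy.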
V. Conclusion
Embodiments described herein enable sharing of applications with high fidelity, real-time performance, observer presence, and privacy protection. Some embodiments also enable multi-channel application sharing, where two or more communicants share application and screen content with each other simultaneously. These embodiments generally include an interface that allows each viewer to distinguish one shared window from another.
Other embodiments are within the scope of the following claims.

Claims (31)

1. A method, comprising:
identifying windows (510-514) associated with a software process in a screen layout (506) on a local display (508) of a sharer network node (12);
generating, at the sharer network node (12), a composite image (524) of the identified windows (510-514) when the identified windows (510-514) are arranged in the screen layout (506) and are not obscured by any other window in the screen layout (506); and
transmitting the composite image (524) from the sharer network node (12) to a viewer network node (14).
2. The method as recited in claim 1, wherein the identifying includes identifying all windows (510-514) in the screen layout (506) that are associated with the software process.
3. The method as recited in claim 1, wherein the identifying includes identifying ones of the plurality of windows (510-514) in the screen layout (506) that match a handle assigned to the software process.
4. The method of claim 3, wherein the identifying comprises identifying a parent window (510) and at least one associated child window (514) created by the software process.
5. The method of claim 1, wherein the generating comprises:
determining a hierarchical order of the identified windows (510-514) relative to each other, the hierarchical order corresponding to the relative hierarchical order of the identified windows (510-514) in the screen layout (506);
retrieving, for each of the identified windows (510-514), a respective image of the window; and
the retrieved images are composited into the composite image according to the determined hierarchical order (524).
6. The method of claim 5, wherein the determining comprises:
generating a z-order list associating, for each of the windows (510-514) in the screen layout (506), a respective z-order value with a respective window handle of the window, and
deriving the hierarchical order of the identified windows (510-514) from the z-order list.
7. The method of claim 6, wherein the deriving comprises:
for each of the z-order values in the z-order list, matching the associated window handle with a window handle of a respective one of the identified windows (510-514); and
the identified windows (510) and 514) are sorted in the hierarchical order according to respective z-order values in the z-order list that are associated with the z-order values of the z-order values that are determined to match the window handles of the identified windows (510 and 514).
8. The method as recited in claim 5, wherein the generating further comprises determining two-dimensional locations of the identified windows (510-514) in the screen layout (506).
9. The method of claim 1, wherein:
the identifying comprises identifying ones of the windows that are associated with a specified group of software processes; and
the generation comprises
Determining a hierarchical order of the identified windows (510- & 514) relative to each other, the hierarchical order corresponding to a relative hierarchical order of the identified windows (510- & 514) in the screen layout (506),
for each of the identified windows (510-
The retrieved images are composited into the composite image according to the determined hierarchical order (524).
10. The method as recited in claim 1, wherein the generating comprises generating a composite image (524) of the identified windows (510-514), the composite image (524) comprising any content of the identified windows (510-514) that is outside a visible desktop window that includes the screen layout (506).
11. The method as recited in claim 1, wherein respective images of each of the identified windows (510-514) are stored in respective memory buffers, and the generating includes retrieving each of the images from the respective memory buffers and compositing the retrieved images into the composite image.
12. The method as recited in claim 1, further comprising setting the windows (510-514)…
13. The method of claim 1, wherein the transmitting is performed in response to a request from the observer network node (14) to observe screen data associated with the software process.
14. The method of claim 13, further comprising transmitting the composite image (524) from the sharer network node (12) to one or more other observer network nodes (14) in response to respective requests from each of the other observer network nodes (14) to observe screen data associated with the software process, wherein the composite image (524) is transmitted to each of the remote network nodes simultaneously.
15. The method of claim 1, wherein the identifying, generating, and transmitting are performed in the context of a virtual area in which a sharer at the sharer network node (12) and an observer at the observer network node (14) are both present.
16. The method of claim 15, further comprising:
displaying, on the local display (508), a spatial layout of sections of the virtual area;
on the local display (508), a navigation control and an interaction control are presented, wherein the navigation control enables the sharer to specify where in the virtual area to establish a presence, and the interaction control enables the sharer to manage interactions with viewers in the virtual area.
establishing a respective presence of the sharer in each of one or more of the sections in response to input received via the navigation control; and
on the local display (508), respective graphical representations of the sharer and the watcher are depicted in each of the sections in which the sharer and the watcher are respectively present.
17. The method as recited in claim 16, wherein the identified windows (510-514) present contents of a data file, and further comprising:
in response to an input by the sharer, associating the data file with a view screen object in a respective one of the sections; and
the identifying, generating, and transmitting are performed in response to a request from the observer network node (14) to observe a data file on the watchscreen object.
18. The method of claim 17, further comprising rendering the composite image (524) on the view screen object.
19. The method of claim 17, further comprising performing the identifying, generating, and transmitting on a plurality of network nodes and presenting the respective composite images on different respective view screen objects in the virtual area.
20. The method of claim 19, further comprising
showing, for each of the view screen objects presenting a respective one of the composite images, a graphical depiction of the respective sharer operating the respective network node from which that composite image was transmitted.
21. The method of claim 1, further comprising
sending a request to share screen data associated with a software process executing on the observer network node (14); and
receiving, at the sharer network node (12), a respective composite image (524) of the shared screen data from the observer network node (14), wherein the receiving and the transmitting are performed simultaneously.
22. At least one computer readable medium (128, 124) having computer readable program code embodied therein, the computer readable program code adapted to be executed by a computer (52) to implement a method comprising:
identifying windows of a plurality of windows (510-514) that are associated with a software process in a screen layout (506) on a local display (508) of a sharer network node (12);
generating, at the sharer network node (12), a composite image (524) of the identified windows (510-514) when the identified windows (510-514) are arranged in the screen layout (506) and are not obscured by any other window in the screen layout (506); and
transmitting the composite image (524) from the sharer network node (12) to a viewer network node (14).
23. An apparatus, comprising:
a local display (508);
a computer readable medium (128, 124) storing computer readable instructions; and
a data processing unit (122) coupled to the computer-readable medium, operable to execute the instructions, and based at least in part on execution of the instructions, operable to perform operations comprising:
identifying windows of a plurality of windows (510-514) that are associated with a software process in a screen layout (506) on the local display (508);
generating a composite image (524) of the identified windows (510-514) when the identified windows (510-514) are arranged in the screen layout (506) and are not occluded by any other window in the screen layout (506); and
transmitting the composite image (524) to a remote observer network node (14).
24. A method, comprising:
receiving a locally generated command derived from a local input device event on a sharer network node (12);
receiving a remotely generated command derived from a remote input device event on a remote observer network node (14);
processing the received command into a command sequence;
passing the sequence of commands to a sharing process executing on the sharer network node (12);
presenting one or more windows (510-514) associated with the sharing process in accordance with the received command sequence in a screen layout (506) on a local display (508) of the sharer network node (12);
generating images of the one or more windows (510-514) when the one or more windows (510-514) are presented in the screen layout (506); and
transmitting the images from the sharer network node (12) to the observer network node (14).
25. The method of claim 24, wherein the processing comprises remapping screen location parameter values in the remotely generated command from a coordinate system of a shared application image received by the sharer network node (12) to a coordinate system of a main window on the local display (508).
26. The method of claim 25, wherein the processing comprises arranging (i) the remotely generated commands with the remapped screen location parameter values and (ii) the locally generated commands into the command sequence.
27. The method of claim 26, wherein the arranging comprises ordering (i) the remotely generated commands with the remapped screen location parameter values and (ii) the locally generated commands based on respective times at which the locally generated commands were derived from the local input device events and respective times at which the remotely generated commands were received.
28. The method of claim 25, further comprising remapping screen location parameter values in commands derived from the remote input device events from a coordinate system of a main window on a remote display (508) of the viewer network node (14) to a coordinate system of the shared application image.
29. The method of claim 28, wherein the remapping is performed on the observer network node (14), and further comprising transmitting the remotely generated command with the remapped screen location parameter values to the sharer network node (12).
30. At least one computer readable medium (128, 124) having computer readable program code embodied therein, the computer readable program code adapted to be executed by a computer (52) to implement a method comprising:
receiving a locally generated command derived from a local input device event on a sharer network node (12);
receiving a remotely generated command derived from a remote input device event on a remote observer network node (14);
processing the received command into a command sequence;
passing the sequence of commands to a sharing process executing on the sharer network node (12);
presenting one or more windows (510-514) associated with the sharing process in accordance with the received command sequence in a screen layout (506) on a local display (508) of the sharer network node (12);
generating images of the one or more windows (510-514) when the one or more windows (510-514) are presented in the screen layout (506); and
transmitting the images from the sharer network node (12) to the observer network node (14).
31. An apparatus, comprising:
a local display (508);
a computer readable medium (128, 124) storing computer readable instructions; and
a data processing unit (122) coupled to the computer-readable medium, operable to execute the instructions, and based at least in part on execution of the instructions, operable to perform operations comprising:
receiving a locally generated command derived from a local input device event associated with the local display (508);
receiving a remotely generated command derived from a remote input device event on a remote observer network node (14);
processing the received command into a command sequence;
passing the command sequence to a sharing process executed by the data processing unit;
presenting one or more windows (510-514) associated with the sharing process in accordance with the received command sequence in a screen layout (506) on the local display (508);
generating images of the one or more windows (510-514) when the one or more windows (510-514) are presented in the screen layout (506); and
transmitting the images to the observer network node (14).
HK13105105.6A 2009-04-03 2010-03-22 Application sharing HK1178279A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/418,270 2009-04-03

Publications (1)

Publication Number Publication Date
HK1178279A true HK1178279A (en) 2013-09-06
