HK1116890A - Sharing information between guests in a virtual machine environment - Google Patents
Description
Technical Field
The present disclosure relates to the field of information processing, and more particularly, to the field of memory management in a virtual machine environment.
Background
In general, the concept of virtualization in information handling systems allows multiple instances of one or more operating systems (each referred to as an "OS") to run on a single information handling system, even though each OS is designed to have complete, direct control over the system and its resources. Virtualization is typically implemented by using software (e.g., a virtual machine monitor, or "VMM") to provide each OS with a "virtual machine" ("VM") having virtual resources, including one or more virtual processors, over which the OS has complete and direct control, while the VMM maintains a system environment for implementing virtualization policies, such as sharing and/or allocating physical resources between VMs (a "virtualization environment"). Each OS, and any other software, running on a VM is referred to as a "guest" or as "guest software", while a "host" or "host software" is software, such as a VMM, that runs outside of the virtualization environment and may or may not be aware of the virtualization environment.
A physical processor in an information handling system may support virtualization, for example, by supporting an instruction to enter a virtualization environment to run a guest on a virtual processor in a VM (i.e., a physical processor under constraints imposed by a VMM). In a virtualized environment, certain events, operations, and conditions, such as external interrupts or attempts to access privileged registers or resources, may be "intercepted", i.e., the processor is caused to exit the virtualized environment so that the VMM may operate, for example, to implement virtualization policies. The physical processor may also support other instructions for maintaining a virtualized environment and may include memory or register bits that indicate or control the virtualization capabilities of the physical processor.
A physical processor supporting a virtualized environment may include a memory management unit to translate virtual memory addresses to physical memory addresses. A VMM may need to retain ultimate control over the memory management unit to keep the memory space of each guest isolated from that of every other guest. Consequently, existing approaches to sharing information between guests include adding portions of each guest's memory space to the memory space of the VMM so that the VMM can copy information from the memory space of one guest to that of another. Under such approaches, each time a guest attempts to copy information to another guest, control of the processor is transferred from the guest to the VMM and then transferred back from the VMM to the guest. Typically, each such transfer of control from a guest to the VMM includes saving guest state and loading host state, and each such transfer of control from the VMM to a guest includes saving host state and loading guest state.
Drawings
The present invention is illustrated by way of example and not limitation in the accompanying figures.
FIG. 1 illustrates one embodiment of the present invention in a virtualization architecture.
FIG. 2 illustrates one embodiment of the present invention in a method for sharing information between guests in a virtual machine environment.
Detailed Description
Embodiments of an apparatus, method, and system for sharing information between guests in a virtual machine environment are described below. In this description, numerous specific details, such as component and system configurations, are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail, to avoid unnecessarily obscuring the present invention.
The performance of a virtual machine environment may be improved by reducing the number of times control is transferred between a guest and a host. Embodiments of the present invention may be used to reduce the number of transfers necessary to copy information from one guest to another. In particular, they may eliminate the transfer of control of a processor from a guest to a VMM, and the further transfer of control from the VMM back to the guest, that would otherwise be performed each time a guest attempts to copy information to another guest. Performance may be further improved because the VMM's memory management data structures need not be modified to include a shared portion of guest memory.
FIG. 1 illustrates one embodiment of the present invention in a virtualization architecture 100. Although FIG. 1 illustrates the invention as being implemented in a virtualized architecture, the invention may be implemented in other architectures, systems, platforms, or environments. For example, one embodiment of the invention may support sharing information between applications in a microkernel or decomposed operating system environment.
In FIG. 1, bare platform hardware 110 may be any data processing device capable of running any OS or VMM software. For example, bare platform hardware may be the hardware of a personal computer, mainframe computer, portable computer, handheld device, set-top box, server, or any other computing system. Bare platform hardware 110 includes a processor 120 and memory 130.
Processor 120 may be any type of processor, including a general purpose microprocessor, such as a processor in the Intel® Pentium® processor family, the Itanium® processor family, or another processor family from Intel Corporation, or another processor from another company, or a digital signal processor or microcontroller. Although FIG. 1 shows only one such processor 120, bare platform hardware 110 may include any number of processors, including any number of multicore processors, each having any number of execution cores, and any number of multithreaded processors, each having any number of threads.
Memory 130 may be static or dynamic random access memory, semiconductor-based read-only or flash memory, magnetic or optical disk memory, any other type of medium readable by processor 120, or any combination of such media. Processor 120 and memory 130 may be directly or indirectly connected or in communication with each other according to any known approach, such as through one or more buses, point-to-point, or other wired or wireless connections. Bare platform hardware 110 may also include any number of additional devices or connections.
In addition to bare platform hardware 110, FIG. 1 illustrates VMM 140, VMs 150 and 160, guest operating systems 152 and 162, and guest applications 154, 155, 164, and 165.
VMM 140 may be any software, firmware, or hardware host installed on bare platform hardware 110 or accessible by bare platform hardware 110 to provide VMs (i.e., abstractions of bare platform hardware 110) to guests, or to otherwise create VMs, manage VMs, and implement virtualization policies. In other embodiments, the host may be any VMM, hypervisor, OS, or other software, firmware, or hardware capable of controlling bare platform hardware 110. A guest may be any OS, any VMM (including another instance of VMM 140), any hypervisor, or any application or other software.
Each guest expects to access physical resources, such as processor and platform registers, memory, and input/output devices, of bare platform hardware 110 according to the architecture of the processor and the platform presented in the VM. FIG. 1 shows two VMs, 150 and 160, with guest OS 152 and guest applications 154 and 155 installed on VM 150 and guest OS 162 and guest applications 164 and 165 installed on VM 160. Although FIG. 1 shows only two VMs and two applications per VM, any number of VMs may be created and any number of applications may run on each VM within the scope of the present invention.
Resources that may be accessed by a guest may be classified as either "privileged" or "non-privileged" resources. For privileged resources, VMM 140 facilitates the functionality desired by the guest while retaining ultimate control over the resources. Non-privileged resources need not be controlled by VMM 140 and may be accessed directly by a guest.
In addition, each guest OS expects to handle various events, such as exceptions (e.g., page faults and general protection faults), interrupts (e.g., hardware interrupts and software interrupts), and platform events (e.g., initialization and system management interrupts). These exceptions, interrupts, and platform events are referred to herein, collectively and individually, as "virtualization events". Some of these virtualization events are referred to as "privileged events" because they must be handled by VMM 140 to ensure proper operation of VMs 150 and 160, to protect VMM 140 from guests, and to protect guests from each other.
At any given moment, processor 120 may be executing instructions from VMM 140 or from any guest, such that VMM 140 or the guest may be running on, or in control of, processor 120. When a privileged event occurs or a guest attempts to access a privileged resource, control may be transferred from the guest to VMM 140. The transfer of control from a guest to VMM 140 is referred to herein as a "VM exit". After handling the event or facilitating access to the resource appropriately, VMM 140 may return control to the guest. The transfer of control from VMM 140 to a guest is referred to herein as a "VM entry".
Processor 120 includes virtual machine control logic 170 to support virtualization, including the transfer of control of processor 120 between a host (e.g., VMM 140) and guests (e.g., guest operating systems 152 and 162 and guest applications 154, 155, 164, and 165). Virtual machine control logic 170 may be microcode, programmable logic, hard-coded logic, or any other form of control logic within processor 120. In other embodiments, virtual machine control logic 170 may be implemented in any form of hardware, software, or firmware, such as a processor abstraction layer, within a processor or within any component accessible by a processor or medium readable by a processor, such as memory 130.
Virtual machine control logic 170 includes VM entry logic 171 to transfer control of processor 120 from the host to a guest (i.e., a VM entry) and VM exit logic 172 to transfer control of processor 120 from a guest to the host (i.e., a VM exit). In some embodiments, control may also be transferred from guest to guest or from host to host. For example, in one embodiment supporting layered virtualization, software running on a VM of processor 120 may be both a guest and a host (e.g., a VMM running on a VM is a guest to the VMM that controls that VM, and a host to a guest running on the VM that it controls).
Processor 120 also includes an execution unit 180 to execute instructions issued by a host or a guest, as described below, and a memory management unit ("MMU") 190 to manage the virtual and physical memory spaces of processor 120. MMU 190 supports the use of virtual memory to provide software, including guest software running in a VM and host software running outside of a VM, with an address space for storing and accessing code and data that is larger than the address space of the physical memory in the system, e.g., memory 130. The virtual memory space of processor 120 may be limited only by the number of address bits available to software running on the processor, while the physical memory space of processor 120 is further limited by the size of memory 130. MMU 190 supports a memory management scheme, paging in this embodiment, for swapping the code and data of executing software into and out of memory 130 as needed. As part of this scheme, software may access the virtual memory space of the processor with a virtual address that is translated by the processor into a second address that the processor may use to access the physical memory space of the processor.
Accordingly, MMU 190 includes translation logic 191, page base register 192, and translation lookaside buffer ("TLB") 193. Translation logic 191 performs address translations, e.g., the translation of a virtual address to a physical address, according to any known memory management technique, such as paging. As used herein, the term "virtual address" includes any address referred to as a logical or linear address. To perform these address translations, translation logic 191 refers to one or more data structures stored in processor 120, memory 130, any other storage location in bare platform hardware 110 not shown in FIG. 1, and/or any combination of these components and locations. The data structures may include page directories and page tables according to the architecture of the Pentium® processor family, as modified according to embodiments of the present invention, and/or tables stored in a TLB, such as TLB 193.
Page base register 192 may be any register or other storage location used to store a pointer to a data structure used by translation logic 191. In one embodiment, page base register 192 may be that portion of the CR3 register referred to as the PML4 base, used to store the page map level 4 base address, according to the architecture of the Pentium® processor family.
In one embodiment, translation logic 191 receives a linear address provided by an instruction to be executed by processor 120. Translation logic 191 uses portions of the linear address as indices into hierarchical tables, including page tables, to perform a page walk. The page tables contain entries, each including a field for the base address of a page in memory 130, e.g., bits 39:12 of a page table entry according to the Extended Memory 64 Technology of the Pentium® processor family. Any page size (e.g., 4 kilobytes) may be used within the scope of the present invention. Therefore, the linear address used by a program to access memory 130 may be translated into a physical address used by processor 120 to access memory 130.
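As an illustration (not part of the claimed subject matter), the page walk described above can be modeled in a few lines of Python. The four-level layout, 9-bit indices, and 4 KB pages follow the x86-64-style convention mentioned in the text; the nested-dictionary table representation and all names are assumptions of this sketch.

```python
# Simplified model of the hierarchical page walk described above, assuming a
# 4-level table, 9-bit indices per level, and 4 KB pages (x86-64-style).
# The table hierarchy is modeled as nested dicts; a leaf holds a page base.

PAGE_SHIFT = 12          # 4 KB pages
INDEX_BITS = 9           # 512 entries per table
LEVELS = 4               # e.g., PML4 -> PDPT -> PD -> PT

def page_walk(tables, linear_address):
    """Translate a linear address by indexing the table hierarchy level by level."""
    node = tables
    for level in range(LEVELS):
        shift = PAGE_SHIFT + INDEX_BITS * (LEVELS - 1 - level)
        index = (linear_address >> shift) & ((1 << INDEX_BITS) - 1)
        if index not in node:
            raise KeyError("page fault")    # no translation at this level
        node = node[index]
    offset = linear_address & ((1 << PAGE_SHIFT) - 1)
    return node | offset                    # leaf node is the page base address
```

For example, mapping the linear page at 0x1000 to the physical page at 0x5000 requires one entry at each level; translating linear address 0x1034 then yields physical address 0x5034.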
The linear address and the corresponding physical address may be stored in TLB 193, so that the appropriate physical address for a subsequent access using the same linear address may be found in TLB 193 without requiring another page walk. The contents of TLB 193 may be flushed when appropriate, e.g., on a software context switch, typically performed by the operating system.
In a virtual machine environment, VMM 140 may need to have ultimate control over the resources of MMU 190 in order to protect the memory space of one guest from another guest. Therefore, in one embodiment, virtual machine control logic 170 may include logic to cause a VM exit if a guest issues an instruction intended to change the contents of page base register 192 or TLB 193, or otherwise to modify the operation of MMU 190. VMM 140 may then maintain MMU 190, along with multiple sets of page tables or other data structures (e.g., one set per VM), to provide for the correct operation of bare platform hardware 110 such that each virtual machine appears to provide its OS with complete control of its memory management resources.
In another embodiment, MMU 190 may include hardware support for virtualization. For example, translation logic 191 may be configured to translate a linear address to a physical address, using a data structure pointed to by the contents of page base register 192, as described above. If this translation is performed for a guest, the linear address is referred to as a guest linear address, the resulting physical address is referred to as a guest physical address, and a second translation is performed, using a second data structure pointed to by a second pointer, to translate the guest physical address to a host physical address. In this embodiment, page base register 192 and the first translation data structure may be maintained by an OS running on a VM, while the second pointer and the second translation data structure are maintained by the VMM. The second translation may be enabled by a VM entry and disabled by a VM exit.
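The two-stage scheme of this embodiment can be sketched as follows; flat page-number-to-frame-number dictionaries stand in for the full translation data structures, and all function and parameter names are illustrative assumptions.

```python
# Sketch of the two-stage translation described above: the guest OS maintains
# the first mapping (guest linear -> guest physical) and the VMM maintains the
# second (guest physical -> host physical). Flat page->frame dicts stand in
# for the full translation data structures.

PAGE_SIZE = 4096

def translate(page_map, address):
    """One translation stage using a flat page-number -> frame-number map."""
    page, offset = divmod(address, PAGE_SIZE)
    if page not in page_map:
        raise KeyError("page fault")
    return page_map[page] * PAGE_SIZE + offset

def guest_to_host(guest_map, vmm_map, guest_linear, second_stage=True):
    guest_physical = translate(guest_map, guest_linear)  # maintained by guest OS
    if not second_stage:                                 # disabled by a VM exit
        return guest_physical
    return translate(vmm_map, guest_physical)            # maintained by the VMM
```

With `second_stage=False` the function models translation while the second stage is disabled, i.e., after a VM exit.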
Returning to execution unit 180 in this embodiment, execution unit 180 is configured to execute instructions that may be issued by a host or a guest. These instructions include an instruction to allocate a portion of TLB 193 (and/or another structure in processor 120 or elsewhere in bare platform hardware 110, such as memory 130) to a guest for sharing information with other guests (an "allocate" instruction), an instruction to register a portion of guest memory for sharing information with other guests (a "register" instruction), and an instruction for a guest to copy information to or from another guest without causing a VM exit (a "copy" instruction).
The allocate instruction may have a requestor identifier ("ID") and a memory size associated with it, as operands, parameters, or according to any other explicit or implicit approach. The requestor ID may be a value unique to a virtual machine in a virtual machine environment, or to an application in a decomposed OS environment. The requestor ID identifies the VM or application that desires to make a portion of its memory space shareable, while the memory size indicates the size of the shareable memory space, e.g., a number of pages.
The allocate instruction may be issued only by the entity having ultimate control over MMU 190, which in this embodiment is VMM 140. For example, the allocate instruction may be ignored if issued by a guest application deemed to have insufficient privilege, or may cause a VM exit if issued by a guest OS deemed to have sufficient privilege. VMM 140 may issue the allocate instruction, in response to a request made by a guest through a procedure call or other messaging protocol, to make a portion of that guest's memory space shareable.
In this embodiment, execution unit 180 executes the allocate instruction by causing one or more entry storage locations in TLB 193 to be allocated to the VM requesting the information sharing. In other embodiments, a separate, dedicated TLB, or any other storage location or data structure in processor 120 or elsewhere in bare platform hardware 110 (e.g., in memory 130), may be used instead of TLB 193.
To support information sharing, TLB 193 may include shared tag storage locations 194, which provide a shared tag associated with each TLB entry, or with any number of sets of TLB entries. Accordingly, execution of the allocate instruction may include setting the shared tag or tags associated with the allocated TLB entry locations to the value of the requestor ID. TLB entries tagged for sharing are not flushed on a software context switch.
Execution of the allocate instruction may also cause a security key associated with the allocated TLB entry locations to be returned to the requestor, through a procedure call return or other messaging protocol. The allocated TLB entry locations may be freed through a procedure call, other messaging protocol, or any other approach.
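The allocate semantics described above can be sketched in software as follows. The data layout (a per-entry `shared_tag` field, a key table) and the use of a random token as the security key are assumptions of this sketch, not the encoding described in the text.

```python
# Minimal sketch of the allocate operation: reserve TLB entry locations for a
# requestor, set their shared tags to the requestor ID so they survive
# context-switch flushes, and return a security key.

import secrets

class SharedTLB:
    def __init__(self, num_entries=8):
        self.entries = [{"shared_tag": None, "virtual": None, "physical": None}
                        for _ in range(num_entries)]
        self.keys = {}   # security key -> (requestor ID, allocated entry indices)

    def allocate(self, requestor_id, num_pages):
        free = [i for i, e in enumerate(self.entries) if e["shared_tag"] is None]
        if len(free) < num_pages:
            raise RuntimeError("insufficient free TLB entry locations")
        for i in free[:num_pages]:
            self.entries[i]["shared_tag"] = requestor_id   # mark as shared
        key = secrets.token_hex(8)                         # returned to requestor
        self.keys[key] = (requestor_id, free[:num_pages])
        return key

    def flush(self):
        """Software context switch: entries tagged for sharing are retained."""
        for e in self.entries:
            if e["shared_tag"] is None:
                e["virtual"] = e["physical"] = None
```

Note that `flush` models the behavior described above: only entries whose shared tag is unset are invalidated on a context switch.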
The register instruction may have ownership information and access information associated with it, as operands, parameters, or according to any other explicit or implicit approach. The ownership information may include the identity of the registering entity, in the form of a requestor ID or in any other form, and the identity of the memory space to be shared, in the form of the virtual address of a page to be shared or in any other form. The access information may include the identity of one or more entities with which the memory space may be shared, in the form of ID values similar to the requestor ID or in any other form, along with any desired access rights, such as read and/or write permission. The register instruction may also have associated with it the security key returned by the corresponding allocate instruction.
In this embodiment, execution of the register instruction may include verifying that the security key associated with the register instruction was issued to the registering entity by a previously executed allocate instruction, identifying the allocated TLB entry location and associated physical address, and storing the virtual address provided by the registering entity in the allocated TLB entry location. The access information associated with the register instruction may be stored according to any approach that allows it to be used to verify that a subsequent copy instruction is to be permitted, for example, in a memory location located through the requestor ID. In other embodiments, the register instruction may be executed by storing the access information without verifying a security key or using a TLB entry, i.e., without requiring that the registering entity first request and be allocated a TLB entry storage location.
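The three steps of executing the register instruction can be sketched as below. The key table, TLB entry layout, and access-table structure are illustrative assumptions carried over from the allocate sketch; the text does not prescribe these encodings.

```python
# Sketch of the register operation: verify the security key issued by a prior
# allocate, fill the allocated TLB entry location with the registering guest's
# virtual address, and record which entities may access the page.

def register_shared_page(issued_keys, tlb_entries, access_table,
                         key, registrant_id, virtual_addr, physical_addr,
                         sharers, rights="rw"):
    # 1. Verify the key was issued to this registrant by an allocate instruction.
    owner_id, slot = issued_keys.get(key, (None, None))
    if owner_id != registrant_id:
        raise PermissionError("security key not issued to registering entity")
    # 2. Store the virtual address in the allocated TLB entry location,
    #    associating it with the shared physical page.
    tlb_entries[slot] = {"shared_tag": registrant_id,
                         "virtual": virtual_addr,
                         "physical": physical_addr}
    # 3. Store access information, located by the registrant's requestor ID,
    #    for verification of subsequent copy instructions.
    access_table[registrant_id] = {"virtual": virtual_addr,
                                   "sharers": set(sharers),
                                   "rights": rights}
```

A register attempt with a key issued to a different entity fails the verification in step 1 and leaves the TLB and access table unchanged.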
The copy instruction may have associated with it a destination entity ID, a destination virtual address, and a source virtual address, as operands, parameters, or according to any other explicit or implicit approach. The destination entity ID may include the ID of a virtual machine and/or an application, in the form of a requestor ID or any other form. The source entity ID of the copy instruction may be implied by the identity of the entity issuing the copy instruction.
In this embodiment, execution unit 180 executes the copy instruction by verifying that the copy instruction is to be permitted, according to the approach used to store the access information from the corresponding register instruction, causing MMU 190 to translate the destination and source virtual addresses into physical addresses, and causing the information stored in the memory location identified by the source physical address to be copied to the memory location identified by the destination physical address.
MMU 190 translates the destination virtual address to a destination physical address by first consulting TLB 193 to determine whether a TLB entry for the destination virtual address has been registered. If so, the destination physical address is found in TLB 193. If not, MMU 190 performs the translation using the destination ID, or a pointer associated with the destination ID, to locate the appropriate data structure for the destination entity, rather than using page base register 192, thereby providing scalability beyond the number of entries that TLB 193 can accommodate. In this embodiment, translation logic 191 includes multi-domain translation logic 195 to cause such address translations to be performed differently from single domain address translations and to perform the access control functions referred to above. However, any techniques normally used by MMU 190 to protect pages, such as generating page faults based on status, control, access, or other bits or fields in page directory and/or page table entries, may remain in place.
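The copy path described above can be sketched end to end as follows. The access check reuses the kind of information stored by the register operation; flat page maps stand in for per-entity translation structures, and all names are illustrative assumptions.

```python
# Sketch of the copy operation: verify the copy is permitted, resolve the
# destination through the shared TLB (falling back, on a miss, to the
# destination entity's own structure located by the destination ID rather than
# the issuing guest's page base register), resolve the source through ordinary
# single domain translation, and copy in physical memory without a VM exit.

PAGE_SIZE = 4096

def copy_between_guests(memory, shared_tlb, entity_page_maps, access_table,
                        source_id, dest_id, dest_va, src_va, length):
    # Verify the copy is permitted for this source entity.
    info = access_table.get(dest_id)
    if info is None or source_id not in info["sharers"]:
        raise PermissionError("copy instruction not permitted")
    # Destination: consult the shared TLB, then the destination's structures.
    dest_pa = None
    for entry in shared_tlb:
        if (entry["shared_tag"] == dest_id
                and entry["virtual"] == dest_va & ~(PAGE_SIZE - 1)):
            dest_pa = entry["physical"] | (dest_va & (PAGE_SIZE - 1))
            break
    if dest_pa is None:
        page, off = divmod(dest_va, PAGE_SIZE)
        dest_pa = entity_page_maps[dest_id][page] * PAGE_SIZE + off
    # Source: ordinary single domain translation for the issuing guest.
    page, off = divmod(src_va, PAGE_SIZE)
    src_pa = entity_page_maps[source_id][page] * PAGE_SIZE + off
    # Perform the copy entirely in physical memory, with no VM exit.
    memory[dest_pa:dest_pa + length] = memory[src_pa:src_pa + length]
```

Note the two asymmetric lookups: the destination crosses a VM domain via the shared TLB or the destination ID, while the source uses the issuing guest's own translation, mirroring the single domain case described below.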
The source virtual address is translated to a source physical address by MMU 190 as previously described for single domain address translation. Thus, a copy between two virtual machines may be performed without a VM exit, allowing the copy operation to be completed within the execution environment of a single VM. Embodiments of the invention may provide other instructions or operations, instead of or in addition to copying, with access to multiple VM domains.
FIG. 2 illustrates one embodiment of the present invention in method 200, a method for sharing information between guests in a virtual machine environment. Although method embodiments are not limited in this respect, the method embodiment of FIG. 2 is described with reference to the virtualization architecture 100 of FIG. 1.
In block 210 of FIG. 2, a first guest running on processor 120 performs a procedure call to initiate information sharing with a second guest. In block 212, a VM exit is performed to transfer control of processor 120 from the first guest to VMM 140.
In block 220, VMM 140 issues an allocate instruction, as described above. In block 222, processor 120 allocates one or more TLB entry locations to the first guest. In block 224, processor 120 stores the requestor ID in the shared tag storage locations for the allocated TLB entry locations. In block 226, VMM 140 returns a security key to the first guest. In block 228, a VM entry is performed to transfer control of processor 120 from VMM 140 to the first guest.
In block 230, the first guest issues a register instruction to register a page for sharing. In block 232, processor 120 verifies the security key. In block 234, processor 120 stores the virtual address of the page in the allocated TLB entry. In block 236, processor 120 stores the access information associated with the page. In block 238, the first guest sends sharing information (e.g., the virtual address, along with any other information that facilitates the sharing) to the second guest according to any desired messaging approach.
In block 240, the second guest receives the sharing information. In block 242, the second guest issues a copy instruction. In block 244, processor 120 verifies the access information associated with the destination address. In block 246, processor 120 translates the destination virtual address to the destination physical address. In block 248, processor 120 translates the source virtual address to the source physical address. In block 250, processor 120 copies the contents of the memory location identified by the source physical address to the memory location identified by the destination physical address.
Within the scope of the present invention, method 200 may be performed in a different order, with illustrated blocks omitted, with additional blocks added, or with a combination of reordered, omitted, and additional blocks. For example, processor 120 may translate the source virtual address before or concurrently with translating the destination virtual address, i.e., blocks 246 and 248 may be rearranged.
Processor 120 or any other component or portion of a component designed according to an embodiment of the present invention may be designed in stages from creation to simulation to fabrication. The data representing the design may represent the design in a number of ways. First, as used in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally or alternatively, a circuit level model with logic and/or transistor gates may be generated at some stages of the design process. Further, at some stage, most designs reach a level where the design can be modeled with data representing the physical layout of various devices. In the case where conventional semiconductor fabrication techniques are used, the data representing the device layout model may be the data used to indicate the presence or absence of various features on different mask layers for masks used to produce an integrated circuit.
In any representation of the design, the data may be stored in any form of a machine-readable medium. The machine-readable medium may be a memory, a magnetic or optical storage medium such as a disc, or an optical or electrical wave, modulated or otherwise generated to transmit such information. Any of these media may "carry" or "indicate" the design, or other information used in an embodiment of the present invention. When an electrical carrier wave indicating or carrying the information is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, the actions of a communication provider or a network provider may constitute the making of copies of an article, e.g., a carrier wave, embodying techniques of the present invention.
Thus, an apparatus, method, and system for sharing information between guests in a virtual machine environment have been disclosed. While certain embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of, and not restrictive on, the broad invention, and that this invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. For example, in another embodiment of the invention, an entity requesting information sharing may register a memory space that serves as a source rather than as a destination.
In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.
Claims (20)
1. An apparatus, comprising:
virtual machine control logic to transfer control of the apparatus between a host and a plurality of guests;
an execution unit to execute a first instruction to copy information from a second virtual memory address in a virtual address space of a second guest of the plurality of guests to a first virtual memory address in a virtual address space of a first guest of the plurality of guests; and
a memory management unit to translate the first virtual memory address to a first physical memory address and to translate the second virtual memory address to a second physical memory address.
2. The apparatus of claim 1, wherein the memory management unit comprises a first storage unit to store a first portion of the first virtual memory address and a corresponding first portion of the first physical memory address.
3. The apparatus of claim 2, wherein the memory management unit comprises a translation lookaside buffer comprising the first storage unit.
4. The apparatus of claim 2, wherein the first storage unit comprises a tag storage unit to store an identifier of the first guest.
5. The apparatus of claim 2, wherein the first portion of the first virtual memory address is a first virtual page number and the corresponding first portion of the first physical memory address is a first physical page number.
6. The apparatus of claim 2, wherein the execution unit is further to execute a second instruction to store a first portion of the first virtual memory address in the first storage unit.
7. The apparatus of claim 6, wherein the execution unit is further to execute a third instruction to allocate the first storage unit to the first guest to store a first portion of the first virtual memory address in the first storage unit.
8. The apparatus of claim 7, wherein the virtual machine control logic comprises exit logic to transfer control of the apparatus to the host in response to an attempt by one of the plurality of guests to issue the third instruction.
9. The apparatus of claim 1, wherein the memory management unit is to translate the first virtual memory address using a first address translation data structure for the first guest and to translate the second virtual memory address using a second address translation data structure for the second guest.
10. The apparatus of claim 9, further comprising a first base address storage unit to store a first pointer to the first address translation data structure.
11. A method, comprising:
a first guest issuing a register instruction to register a shared memory space; and
a second guest issuing a copy instruction to copy contents of a portion of a memory space of the second guest to the shared memory space.
12. The method of claim 11, further comprising: a host issuing an allocation instruction to allocate the shared memory space to the first guest.
13. The method of claim 12, further comprising: allocating an entry in a translation lookaside buffer for an address translation associated with the shared memory space.
14. The method of claim 13, further comprising: storing an identifier of the first guest in a tag storage location associated with the entry in the translation lookaside buffer.
15. The method of claim 12, further comprising: returning a security key associated with the shared memory space to the first guest.
16. The method of claim 11, further comprising: storing, in a translation lookaside buffer, a first virtual memory address identifying the shared memory space.
17. The method of claim 16, further comprising: translating the first virtual memory address to a first physical memory address identifying the shared memory space.
18. The method of claim 17, further comprising: translating a second virtual memory address identifying the portion of the memory space of the second guest to generate a second physical memory address identifying the portion of the memory space of the second guest.
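The method of claims 11 through 18 amounts to a small host-mediated protocol: the host allocates a shared space to the first guest and returns a security key, the first guest registers the space, and the second guest copies a portion of its own memory into it. A hedged sketch of that flow, where the class, method names, and key scheme are all invented for illustration and are not the patent's instruction set:

```python
import secrets


class SharedMemoryBroker:
    """Toy model of the host-mediated sharing protocol of claims 11-18.
    All names and the key scheme are illustrative, not from the patent."""

    def __init__(self):
        self.regions = {}  # security key -> bytearray backing the shared space

    def allocate(self, size):
        """Host allocates a shared space and returns a security key
        for it (cf. claims 12 and 15)."""
        key = secrets.token_hex(8)
        self.regions[key] = bytearray(size)
        return key

    def copy_in(self, key, data):
        """Second guest copies a portion of its memory space into the
        shared space (cf. claim 11)."""
        region = self.regions[key]
        region[: len(data)] = data

    def read(self, key):
        return bytes(self.regions[key])


broker = SharedMemoryBroker()
key = broker.allocate(16)             # host -> first guest: shared space plus key
broker.copy_in(key, b"hello guest1")  # second guest's copy instruction
print(broker.read(key)[:12])          # first guest observes the shared contents
```

In the hardware described by the apparatus claims, the two address translations (the second guest's source address and the first guest's shared-space address) would be performed by the memory management unit; here the broker dictionary stands in for that translation step.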
19. A system, comprising:
a memory for storing information shared between a first guest and a second guest; and
a processor, comprising:
virtual machine control logic to transfer control of the processor between a host, the first guest, and the second guest;
an execution unit to execute a first instruction to copy information from a second virtual memory address in a virtual address space of the second guest to a first virtual memory address in a virtual address space of the first guest; and
a memory management unit to translate the first virtual memory address to a first physical memory address and to translate the second virtual memory address to a second physical memory address.
20. The system of claim 19, wherein the memory is a dynamic random access memory.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/525,980 | 2006-09-22 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1116890A true HK1116890A (en) | 2009-01-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7490191B2 (en) | Sharing information between guests in a virtual machine environment | |
| EP1959348B1 (en) | Address translation in partitioned systems | |
| US8316211B2 (en) | Generating multiple address space identifiers per virtual machine to switch between protected micro-contexts | |
| US8661181B2 (en) | Memory protection unit in a virtual processing environment | |
| EP2889777B1 (en) | Modifying memory permissions in a secure processing environment | |
| US10324863B2 (en) | Protected memory view for nested page table access by virtual machine guests | |
| US8560806B2 (en) | Using a multiple stage memory address translation structure to manage protected micro-contexts | |
| US8549254B2 (en) | Using a translation lookaside buffer in a multiple stage memory address translation structure to manage protected microcontexts | |
| US9684605B2 (en) | Translation lookaside buffer for guest physical addresses in a virtual machine | |
| US20050132365A1 (en) | Resource partitioning and direct access utilizing hardware support for virtualization | |
| US20140108701A1 (en) | Memory protection unit in a virtual processing environment | |
| JP4668166B2 (en) | Method and apparatus for guest to access memory converted device | |
| US7937534B2 (en) | Performing direct cache access transactions based on a memory access data structure | |
| US20070220231A1 (en) | Virtual address translation by a processor for a peripheral device | |
| HK1116890A (en) | Sharing information between guests in a virtual machine environment | |
| HK1149343B (en) | Sharing information between guests in a virtual machine environment |