
GB2176918A - Memory management for microprocessor system - Google Patents


Publication number
GB2176918A
GB2176918A (application GB08519991A)
Authority
GB
United Kingdom
Prior art keywords
page
memory
data
address
attributes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB08519991A
Other versions
GB8519991D0 (en)
GB2176918B (en)
Inventor
John H Crawford
Paul S Ries
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Publication of GB8519991D0 publication Critical patent/GB8519991D0/en
Publication of GB2176918A publication Critical patent/GB2176918A/en
Application granted granted Critical
Publication of GB2176918B publication Critical patent/GB2176918B/en
Priority to SG34290A priority Critical patent/SG34290G/en
Priority to HK536/90A priority patent/HK53690A/en
Expired legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14Protection against unauthorised use of memory or access to memory
    • G06F12/1416Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights
    • G06F12/145Protection against unauthorised use of memory or access to memory by checking the object accessibility, e.g. type of access defined by the memory independently of subject rights the protection being virtual, e.g. for virtual blocks or segments before a translation mechanism
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Microprocessor architecture for an address translation unit provides two levels of cache memory. Segmentation registers 21 and an associated segmentation table in main memory provide a first level of memory management which includes attribute bits used for protection, priority, etc. A second page level cache memory 22, which includes an associated page directory and page table in main memory, provides a second level of management with independent protection on a page level. <IMAGE>

Description

SPECIFICATION
Memory management for microprocessor system
Background of the invention
1. Field of the invention.
The invention relates to the field of address translation units for memory management, particularly in a microprocessor system.
2. Prior Art
There are many well-known mechanisms for memory management. In some systems, a larger address (virtual address) is translated to a smaller physical address. In others, a smaller address is used to access a larger memory space, for instance, by using bank switching. The present invention relates to the former category, that is, where a larger virtual address is used to access a limited physical memory.
In memory management systems, it is also known to provide various protection mechanisms. For example, a system may prevent a user from writing into an operating system or perhaps even from reading the operating system to external ports. As will be seen, the present invention implements a protection mechanism as part of a broader control scheme which assigns "attributes" to data on two distinct levels.
The closest prior art known to Applicant is that described in U.S. Patent 4,442,484. This patent describes the memory management and protection mechanism embodied in a commercially available microprocessor, the Intel 286. This microprocessor includes segmentation descriptor registers containing segment base addresses, limit information and attributes (e.g., protection bits). The segment descriptor table and the segment descriptor registers both contain bits defining various control mechanisms such as privilege level, types of protection, etc. These control mechanisms are described in detail in U.S. Patent 4,442,484.
One problem with the Intel 286 is that the segment offset is limited to 64k bytes. It also requires consecutive locations in physical memory for a segment which is not always easy to maintain. As will be seen, one advantage to the invented system is that the segment offset is as large as the physical address space. Yet, the invented system still provides compatibility with the prior segmentation mechanism found in the Intel 286. Other advantages and distinctions between the prior art system discussed in the above-mentioned patent and its commercial realization (Intel 286 microprocessor) will be apparent from the detailed description of the present invention.
Summary of the invention
An improvement to a microprocessor system which includes a microprocessor and a data memory is described. The microprocessor includes a segmentation mechanism for translating a virtual memory address to a second memory address (linear address) and for testing and controlling attributes of data memory segments. The improvement of the present invention includes a page cache memory on the microprocessor for translating a first field from the linear address for a hit or match condition. The data memory also stores page mapping data, specifically, a page directory and a page table. The first field accesses the page directory and page table if no hit occurs in the page cache memory. The output from either the page cache memory or the page table provides a physical base address for a page in memory. Another field of the linear address provides an offset within the page.
Both the page cache memory and the page mapping data in the data memory store signals representing attributes of the data in a particular page. These attributes include read and write protection, indicate whether the page has been previously written into, and other information. Importantly, the page level protection provides a second tier of control over data in the memory which is separate and distinct from the segment attributes.
Brief description of the drawings
Figure 1 is a block diagram showing the overall architecture of the microprocessor in which the present invention is currently realized.
Figure 2 is a block diagram illustrating the segmentation mechanism embodied in the microprocessor of Figure 1.
Figure 3 is a block diagram illustrating the page field mapping for a hit or match in the page cache memory.
Figure 4 is a block diagram illustrating the page field mapping for no hit or match in the page cache memory of Figure 3. For this condition, the page directory and page table in main memory are used and, hence, are shown in Figure 4.
Figure 5 is a diagram used to illustrate the attributes stored in the page directory, page table and page cache memory.
Figure 6 is a block diagram illustrating the organization of the content addressable memory and data storage contained within the page cache memory.
Figure 7 is an electrical schematic of a portion of the content addressable memory of Figure 6.
Figure 8 is an electrical schematic of the logic circuits associated with the detector of Figure 6.
Detailed description of the present invention
A microprocessor system and, in particular, a memory management mechanism for the system is described. In the following description, numerous specific details are set forth, such as specific numbers of bits, etc., in order to provide a thorough understanding of the present invention. It will be obvious, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures are not shown in detail in order not to unnecessarily obscure the present invention.
In its currently preferred embodiment, the microprocessor system includes the microprocessor 10 of Figure 1. This microprocessor is fabricated on a single silicon substrate using complementary metal-oxide-semiconductor (CMOS) processing. Any one of many well-known CMOS processes may be employed; moreover, it will be obvious that the present invention may be realized with other technologies, for instance, n-channel, bipolar, SOS, etc.
The memory management mechanism for some conditions requires access to tables stored in main memory. A random-access memory (RAM) 13 which functions as the main memory for the system is shown in Figure 1. An ordinary RAM may be used, such as one employing dynamic memories.
As shown in Figure 1, the microprocessor 10 has a physical address of 32 bits, and the processor itself is a 32-bit processor. Other components of a microprocessor system commonly used, such as drivers, mathematical processors, etc., are not shown in Figure 1.
Highlight of invention
The invented memory makes use of both segmentation and paging. Segments are defined by a set of segment descriptor tables that are separate from the page tables used to describe the page translation. The two mechanisms are completely separate and independent. A virtual address is translated to a physical address in two distinct steps, using two distinct mapping mechanisms. A segmentation technique is used for the first translation step, and a paging technique is used for the second translation step. The paging translation can be turned off to produce a one-step translation with segmentation only, which is compatible with the 286.
Segmentation (the first translation) translates a 48-bit virtual address to a 32-bit linear (intermediate) address. The 48-bit virtual address is composed of a 16-bit segment selector and a 32-bit offset within this segment. The 16-bit segment selector identifies the segment, and is used to access an entry from the segment descriptor table. This segment descriptor entry contains a base address of the segment, the size (limit) of the segment, and various attributes of the segment. The translation step adds the segment base to the 32-bit offset in the virtual address to obtain a 32-bit linear address. At the same time, the 32-bit offset in the virtual address is compared against the segment limit, and the type of the access is checked against the segment attributes. A fault is generated and the addressing process is aborted if the 32-bit offset is outside the segment limit, or if the type of the access is not allowed by the segment attributes.
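The limit check, attribute check and base-plus-offset add described above can be sketched in C. This is a minimal model, not the hardware: the descriptor struct, its field names and the single `writable` flag are illustrative assumptions (a real descriptor carries more attributes than shown).

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified segment descriptor: base, limit and one example
   attribute. Layout is illustrative, not the actual 286/386
   descriptor encoding. */
typedef struct {
    uint32_t base;     /* 32-bit segment base address         */
    uint32_t limit;    /* highest valid offset in the segment */
    bool     writable; /* one example of a segment attribute  */
} seg_desc_t;

/* First translation step: segment base + 32-bit offset -> 32-bit
   linear address. Returns false (fault) if the offset exceeds the
   segment limit or a write is attempted on a read-only segment. */
bool segment_translate(const seg_desc_t *d, uint32_t offset,
                       bool is_write, uint32_t *linear)
{
    if (offset > d->limit)        /* limit comparison            */
        return false;
    if (is_write && !d->writable) /* attribute (type) check      */
        return false;
    *linear = d->base + offset;   /* the add producing the linear
                                     address                     */
    return true;
}
```

Both checks run "at the same time" in the hardware; the sequential order here is only a modeling convenience.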
Paging (the second translation) translates a 32-bit linear address to a 32-bit physical address using a two-level paging table, in a process described in detail below.
The two steps are totally independent. This permits a (large) segment to be composed of several pages, or a page to be composed of several (small) segments.
A segment can start on any boundary and be of arbitrary size; it is not limited to starting on a page boundary or to having a length that is an exact multiple of pages. This allows segments to describe separately protected areas of memory that start at arbitrary addresses and are of arbitrary size.
Segmentation can be used to cluster a number of small segments, each with its unique protection attributes and size, into a single page. In this case, segmentation provides the protection attributes, and paging provides a convenient method of physical memory mapping a group of related units that must be protected separately.
Paging can be used to break up very large segments into small units for physical memory management. This provides a single identifier (the segment selector), and a single descriptor (the segment descriptor), for a separately protected unit of memory, rather than requiring the use of a multitude of page descriptors. Within a segment, paging provides an additional level of mapping that allows large segments to be mapped into separate pages that need not be contiguous in physical memory. In fact, paging allows a large segment to be mapped so that only a few pages at a time are resident in physical memory, with the remaining parts of the segment mapped onto disk. Paging also supports the definition of such structure within a large segment, for example, to write protect some pages of a large segment while other pages can be written into.
Segmentation provides a very comprehensive protection model which works on the "natural" units used by a programmer: arbitrary sized pieces of linearly addressed memory. Paging provides the most convenient method for managing physical memory, both system main memory and backing disk memory. The combination of the two methods in the present invention provides a very flexible and powerful memory protection model.
Overall microprocessor architecture
In Figure 1, the microprocessor includes a bus interface unit 14. The bus unit includes buffers for permitting transmission of the 32-bit address signals and for receiving and sending the 32 bits of data. Internal to the microprocessor, unit 14 communicates over the internal bus 19. The bus unit includes a pre-fetch unit for fetching instructions from the RAM 13 and a prefetch queue which communicates with the instruction unit of the instruction decode unit 16. The queued instructions are processed within the execution unit 18 (arithmetic logic unit) which includes a 32-bit register file. This unit, as well as the decode unit, communicates with the internal bus 19.
The present invention centers around the address translation unit 20. This unit provides two functions: one associated with the segment descriptor registers, and the other with the page descriptor cache memory. The segment registers are for the most part known in the prior art; even so, they are described in more detail in conjunction with Figure 2. The page cache memory and its interaction with the page directory and page table stored within the main memory 13 is discussed in conjunction with Figures 3-7 and forms the basis for the present invention.
Segmentation mechanism The segmentation unit of Figure 1 receives a virtual address from the execution unit 18 and accesses the appropriate register segmentation information. The register contains the segment base address which along with the offset from the virtual address are coupled over lines 23 to the page unit.
Figure 2 illustrates the accessing of the tables in main memory when the segmentation registers are loaded with mapping information for a new segment.
The segment field indexes the segment descriptor table in the main memory 13. The contents of the table provide a base address and, additionally, provide attributes associated with the data in the segment. The base address and offset are compared to the segment limits in comparator 27, the output of this comparator providing a fault signal. The adder 26, which is part of the microprocessor, combines the base and offset to provide a "physical" address on lines 31. This address may be used by the microprocessor as a physical address or used by the paging unit. This is done to provide compatibility with certain programs written for a prior microprocessor (Intel 286). For the Intel 286, the physical address space is 24 bits.
The segment attributes including details on the descriptors employed such as the various privilege levels are set forth in U.S. Patent 4,442,484.
The fact that the segmentation mechanism is known in the prior art is represented in Figure 2 by the dotted line 28, which indicates the prior art structures to the left of the dotted line.
The page field mapping block 30, which includes the page unit of Figure 1, as well as its interaction with the page directory and page table stored in the main memory, is shown in Figures 3 through 7.
While in the currently preferred embodiment the segmentation mechanism uses shadow registers, it also could be implemented with a cache memory as is done with the paging mechanism.
Page descriptor cache memory
In Figure 3 the page descriptor cache memory of the page unit 22 of Figure 1 is shown within dotted line 22a. This memory comprises two arrays, a content addressable memory (CAM) 34 and a page data (base) memory 35. Both memories are implemented with static memory cells. The organization of memories 34 and 35 is described in conjunction with Figure 6. The specific circuitry used for CAM 34 with its unique masking feature is described in conjunction with Figures 7 and 8.
The linear address from the segment unit 21 is coupled to the page unit 22 of Figure 1. As shown in Figure 3, this linear address comprises two fields, the page information field (20 bits) and a displacement field (12 bits). Additionally, there is a four-bit page attribute field provided by the microcode. The 20-bit page information field is compared with the contents of the CAM 34. Also, the four attribute bits ("dirty", "valid", "U/S", and "W/R") must match those in the CAM before a hit occurs. (There is an exception to this when "masking" is used, as will be discussed.) For a hit condition, the memory 35 provides a 20-bit base word which is combined with the 12-bit displacement field of the linear address, as represented by summer 36 of Figure 3, and the resultant physical address selects from a 4k byte page frame in main memory 13.
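On a hit, the address formation amounts to concatenating the cached 20-bit page base with the 12-bit displacement. A one-line C sketch (the function name and flat-integer modeling are assumptions; the CAM match itself is modeled separately):

```c
#include <stdint.h>

/* Hit path: the stored 20-bit page base selects a 4k-byte page
   frame; the low 12 bits of the linear address select a byte
   within that frame ("summer" 36 in Figure 3). */
uint32_t page_frame_address(uint32_t page_base_20, uint32_t linear)
{
    uint32_t displacement = linear & 0xFFFu;    /* low 12 bits  */
    return (page_base_20 << 12) | displacement; /* 32-bit phys. */
}
```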
Page addressing for the no-hit condition
A page directory 13a and a page table 13b are stored in the main memory 13 (see Figure 4). The base address for the page directory is provided from the microprocessor and is shown in Figure 4 as the page directory base 38. Ten bits of the page information field are used as an index (after being scaled by a factor of 4) into the page directory, as indicated by the summer 40 in Figure 4. The page directory provides a 32-bit word. Twenty bits of this word are used as a base for the page table. The other 10 bits of the page information field are similarly used as an index (again being scaled by a factor of 4) into the page table, as indicated by the summer 41. The page table also provides a 32-bit word, 20 bits of which are the page base of the physical address. This page base address is combined, as indicated by summer 42, with the 12-bit displacement field to provide a 32-bit physical address.
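The two-level lookup can be modeled in C as follows. This is a hedged sketch: main memory is modeled as a flat array of 32-bit words, the attribute bits in each entry are ignored, and the placement of the 20-bit base in the upper bits of each 32-bit word is inferred from the text rather than taken from the figure.

```c
#include <stdint.h>

/* Two-level walk for a miss, as in Figure 4. Each level consumes
   10 bits of the 20-bit page information field, scaled by 4
   because the entries are 32-bit (4-byte) words. mem[] holds
   32-bit words, so byte addresses are divided by 4 when indexing. */
uint32_t page_walk(const uint32_t *mem, uint32_t pdir_base,
                   uint32_t linear)
{
    uint32_t dir_idx = (linear >> 22) & 0x3FFu; /* top 10 bits   */
    uint32_t tbl_idx = (linear >> 12) & 0x3FFu; /* next 10 bits  */
    uint32_t disp    =  linear        & 0xFFFu; /* 12-bit offset */

    /* "Summer 40": directory base + scaled index -> 32-bit PDE */
    uint32_t pde        = mem[(pdir_base >> 2) + dir_idx];
    uint32_t table_base = pde & 0xFFFFF000u;    /* 20-bit base   */

    /* "Summer 41": page-table base + scaled index -> 32-bit PTE */
    uint32_t pte       = mem[(table_base >> 2) + tbl_idx];
    uint32_t page_base = pte & 0xFFFFF000u;

    return page_base | disp;                    /* "summer 42"   */
}
```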
Five bits from the 12-bit fields of the page directory and table are used for attributes, particularly "dirty", "accessed", "U/S", "R/W" and "present". These will be discussed in more detail in conjunction with Figure 5.
Remaining bits of this field are unassigned.
The stored attributes from the page directory and table are coupled to control logic circuit 75 along with the 4 bits of attribute information associated with the linear address. Parts of this logic circuit are shown in subsequent figures and are discussed in conjunction with these figures.
Page directory attributes
In Figure 5 the page directory word, page table word and CAM word are again shown. The protective/control attributes assigned to the four bits of the page directory word are listed within bracket 43. The same four attributes with one additional attribute are used for the page table word and are set forth within bracket 44. The four attributes used for the CAM word are set forth within bracket 45.
The attributes are used for the following purposes:
1. DIRTY. This bit indicates whether a page has been written into. The bit is changed once a page has been written into. This bit is used, for instance, to inform the operating system that an entire page is not "clean". This bit is stored in the page table and in the CAM (not in the page directory). The processor sets this bit in the page table when a page is written into.
2. ACCESSED. This bit is stored in only the page directory and table (not in the CAM) and is used to indicate that a page has been accessed. Once a page is accessed, this bit is changed in the memory by the processor. Unlike the dirty bit, this bit indicates whether a page has been accessed either for writing or reading.
3. U/S. The state of this bit indicates whether the contents of the page are user- and supervisor-accessible (binary 1) or supervisor-only (binary zero).
4. R/W. This read/write protection bit must be a binary 1 to allow the page to be written into by a user level program.
5. PRESENT. This bit in the page table indicates if the associated page is present in the physical memory. This bit in the page directory indicates if the associated page table is present in physical memory.
6. VALID. This bit, which is stored only in the CAM, is used to indicate if the contents of the CAM are valid. This bit is set to a first state on initialization, then changed when a valid CAM word is loaded.
The five bits from the page directory and table are coupled to control logic circuit 75 to provide appropriate fault signals within the microprocessor.
The user/supervisor bits from the page directory and table are logically ANDed, as indicated by gate 46, to provide the U/S bit stored in the CAM 34 of Figure 3. Similarly, the read/write bits from the page directory and table are logically ANDed through gate 47 to provide the W/R bit stored in the CAM. The dirty bit from the page table is stored in the CAM. These gates are part of the control logic 75 of Figure 4.
The attributes stored in the CAM are "automatically" tested since they are treated as part of the address and matched against the four bits from the microcode. A fault condition results, even if a valid page base is stored in the CAM, if, for instance, the linear address indicates that a "user" write cycle is to occur into a page with R/W=0.
The ANDing of the U/S bits from the page directory and table ensures that the "worst case" is stored in the cache memory. Similarly, the ANDing of the R/W bits provides the worst case for the cache memory.
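The "worst case" combining reduces to a logical AND of the two levels: a page is cached as user-accessible, or as writable, only if both the directory entry and the table entry allow it. A trivial C sketch (function names are assumed, matching gates 46 and 47):

```c
#include <stdbool.h>

/* Gate 46: cached U/S bit is the AND of the directory and table
   U/S bits, so the more restrictive (supervisor-only) level wins. */
bool cached_us(bool dir_us, bool tbl_us) { return dir_us && tbl_us; }

/* Gate 47: same worst-case rule for the read/write bit. */
bool cached_rw(bool dir_rw, bool tbl_rw) { return dir_rw && tbl_rw; }
```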
Organization of the page descriptor cache memory
The CAM 34 as shown in Figure 6 is organized in 8 sets with 4 words in each set. Twenty-one bits (17 address and 4 attributes) are used to find a match in this array. The four comparator lines from the four stored words in each set are connected to a detector.
For instance, the comparator lines for the four words of set 1 are connected to detector 53. Similarly, the comparator lines for the four words in sets 2 through 8 are connected to detectors. The comparator lines are sensed by the detectors to determine which word in the set matches the input (21 bits) to the CAM array.
Each of the detectors contains "hard wired" logic which permits selection of one of the detectors depending upon the state of the 3 bits from the 20-bit page information field coupled to the detectors. (Note that the other 17 bits of this page information field are coupled to the CAM array.) For purposes of explanation, eight detectors are implied from Figure 6. In the current embodiment only one detector is used, with the three bits selecting one set of four lines for coupling to the detector. The detector itself is shown in Figure 8.
The data storage portion of the cache memory is organized into four arrays shown as arrays 35a-d. The data words corresponding to each set of the CAM are distributed with one word being stored in each of the four arrays. For instance, the data word (base address) selected by a hit with word 1 of set 1 is in array 35a, the data word selected by a hit with word 2 of set 1 is in array 35b, etc. The three bits used to select a detector are also used to select a word in each of the arrays. Thus, simultaneously, words are selected from each of the four arrays. The final selection of a word from the arrays is done through the multiplexer 55. This multiplexer is controlled by the four comparator lines in the detector.
When the memory cache is accessed, the matching process, which is a relatively slow process, begins through use of the 21 bits. The other three bits are able to immediately select a set of four lines and the detector is prepared for sensing a drop in potential on the comparator lines. (As will be discussed, all the comparator (row) lines are precharged, with the selected (hit) line remaining charged while the non-selected lines discharge.) Simultaneously, four words from the selected set are accessed in arrays 35a-35d. If and when a match occurs, the detector is able to identify the word within the set and this information is transmitted to the multiplexer 55, allowing the selection of the data word. This organization improves access time in the cache memory.
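The 8-set by 4-way organization described above can be sketched in C. The choice of the low 3 bits of the page field as the set-select bits, and the struct layout, are assumptions for illustration; the text only states that 3 bits select a set of four entries while the remaining 17 bits (plus the 4 attribute bits, omitted here for brevity) are matched.

```c
#include <stdint.h>

#define SETS 8
#define WAYS 4

/* Simplified model of the CAM organization of Figure 6:
   8 sets of 4 tags, each tag being the 17 matched address bits. */
typedef struct {
    uint32_t tag[SETS][WAYS];   /* 17-bit tags               */
    uint8_t  valid[SETS][WAYS]; /* VALID bit per stored word */
} tlb_t;

/* Returns the matching way within the selected set, or -1 on a
   miss. The hardware does the set select and the tag match in
   parallel; the loop here is only a software stand-in for the
   four comparator lines sensed by the detector. */
int tlb_lookup(const tlb_t *t, uint32_t page_field /* 20 bits */)
{
    uint32_t set = page_field & 0x7u; /* assumed 3 set-select bits */
    uint32_t tag = page_field >> 3;   /* remaining 17 bits         */
    for (int w = 0; w < WAYS; w++)
        if (t->valid[set][w] && t->tag[set][w] == tag)
            return w;
    return -1;
}
```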
Content addressable memory (CAM)
In Figure 7, the 21 bits which are coupled to the CAM array are again shown, with 17 of the bits being coupled to the complement generator and override circuit 56 and with the 4 attribute bits coupled to the VUDW logic circuit 57. The 3 bits associated with the selection of the detectors described in conjunction with Figure 6 are not shown in Figure 7.
The circuit 56 generates the true and complement signal for each of the address signals and couples them to parallel lines in the CAM array, such as lines 59 and 60. Similarly, the VUDW logic 57 generates both the true and complement signals for the attribute bits and couples them to parallel lines in the array. The lines 59 and 60 are duplicated for each of the true and complement bit lines (i.e., 21 pairs of bit and bit/ lines).
Each of the 32 rows in the CAM array has a pair of parallel row lines such as lines 68 and 70. An ordinary static memory cell such as cell 67 is coupled between each of the bit and bit/ lines (columns) and is associated with the pair of row lines. In the presently preferred embodiment, the memory cells comprise ordinary flip-flop static cells using p-channel transistors.
One line of each pair of row lines (line 70) permits the memory cell to be coupled to the bit and bit/ lines when data is written into the array. Otherwise, the content of the memory cell is compared to the data on the column lines and the result of the comparison is coupled to the hit line 68. The comparison is done by comparators, one associated with each cell. The comparator comprises the n-channel transistors 61-64.
Each pair of the comparator transistors, for example, transistors 61 and 62, is coupled between one side of the memory cell and the opposite bit line.
Assume that data is stored in the memory cell 67 and that the node of the cell closest to bit line 59 is high. When the contents of the CAM are examined, first the hit line 68 is precharged through transistor 69. Then the signals coupled to the CAM are placed on the column lines. Assume first that line 59 is high.
Transistor 62 does not conduct since line 60 is low.
Transistor 63 does not conduct since the side of the cell to which it is connected is low. For these conditions, line 68 is not discharged, indicating that a match has occurred in the cell. The hit line 68 provides ANDing of the comparisons occurring along the row.
If a match does not occur, one or more of the comparators will cause the hit line to discharge.
During precharging the circuits 56 and 57 generate an override signal causing all column lines (both bit and bit/) to be low. This prevents the comparators from draining the charge from the hit lines before the comparison begins.
It should be noted that the comparators examine the "binary one" condition and, in effect, ignore the "binary zero" condition. That is, for instance, if the gate of transistor 64 is high (line 59 high) then transistors 63 and 64 control the comparison. Similarly, if the bit line 60 is high, then transistors 61 and 62 control the comparison. This feature of the comparator permits cells to be ignored. Thus, when a word is coupled to the CAM, certain bits can be masked from the matching process by making both the bit and bit/ lines low. This makes it appear that the contents of the cell match the condition on the column lines. This feature is used by the VUDW logic circuit 57.
Microcode signals coupled to logic circuit 57 cause the bit and bit/ lines for selected ones of the attribute bits to be low as a function of the microcode bits. This results in the attribute associated with that bit being ignored. This feature is used, for instance, to ignore the U/S bit in the supervisory mode. That is, the supervisory mode can access user data. Similarly, the read/write bit can be ignored when reading or when the supervisory mode is active. The dirty bit is also ignored when reading. (The feature is not used for the valid bit.) When the attribute bits are stored in main memory, they can be accessed and examined and logic circuits used to control accessing, for instance, based on the one or zero state of the U/S bit. However, with the cache memory no separate logic is used. The forcing of both the bit and bit/ lines low, in effect, provides the extra logic by allowing a match (or preventing a fault) even though the bit patterns of the attribute bits are not matched.
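Driving both column lines of a cell low makes that cell compare equal regardless of its stored value, which is exactly a masked comparison. In C terms it is an XOR under a mask; the bit positions assigned to the four attributes below are hypothetical, chosen only for the example:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical attribute bit assignments for illustration only. */
#define ATTR_DIRTY 0x1u
#define ATTR_VALID 0x2u
#define ATTR_US    0x4u
#define ATTR_WR    0x8u

/* Masked attribute match, as performed electrically by forcing
   both the bit and bit/ lines low: a masked-off bit (mask bit 0)
   matches no matter what is stored. */
bool attr_match(uint8_t stored, uint8_t presented, uint8_t mask)
{
    return ((stored ^ presented) & mask) == 0;
}
```

For example, in supervisory mode the microcode would clear the `ATTR_US` bit of the mask, so a supervisor access matches a user page's entry.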
The detector from Figure 6, as shown in Figure 8, includes a plurality of NOR gates such as gates 81, 82, 83 and 84. Three of the hit lines from the selected set of CAM lines are coupled to gate 81; these are shown as lines A, B, and C. A different combination of the lines is connected to each of the other NOR gates.
For instance, NOR gate 84 receives the hit lines D, A, and B. The output of each of the NOR gates is an input to a NAND gate such as NAND gate 86. A hit line provides one input to each NAND gate. This line is the one (of the four A, B, C, D) that is not an input to the NOR gate. This is also the hit line from the set entry to be selected. For example, gate 86 should select the set that is associated with hit line D. For instance, in the case of NOR gate 81, hit line D is coupled to the NAND gate 86. Similarly, for the NAND gate 90, the hit line C, in addition to the output of gate 84, are inputs to this gate. An enable read signal is also coupled to the NAND gates to prevent the outputs of this logic from being enabled for a write. The outputs of the NAND gates, such as line 87, are used to control the multiplexer 55 of Figure 6. In practice, the signal from the NAND gate, such as the signal on line 87, controls the multiplexer through p-channel transistors. For purposes of explanation, an additional inverter 88 is shown with an output line 89.
The advantage of this detector is that it enables precharge lines to be used in the multiplexer 55. Alternatively, a static arrangement could be used, but this would require considerably more power. With the arrangement as shown in Figure 8, the outputs from the inverters will remain in the same state until one of the hit lines drops in potential. When that occurs, only a single output line will drop in potential, permitting the multiplexer to select the correct word.
Thus, a unique address translation unit has been described which uses two levels of cache memory, one for segmentation and one for paging. Independent data attribute control (e.g., protection) is provided on each level.

Claims (26)

1. In a microprocessor system which includes a microprocessor and a data memory, where the microprocessor has a segmentation mechanism for translating a virtual memory address to a second memory address and for controlling data based on attributes, an improvement comprising: a page cache memory integral with said microprocessor for receiving a first field of said second memory address and for comparing it with contents of said page cache memory to provide a second field under certain conditions; said data memory including storage for page mapping data, said first field of said second memory address being coupled to said data memory to select a third field from said page data when said certain conditions of said page cache memory are not met; said microprocessor system including a circuit for combining one of said second and third fields with an offset field from said first address to provide a physical address for said data memory; whereby the physical addressability of said data memory is improved.
2. The improvement defined by Claim 1 wherein said page cache memory and said storage for said page data include information on the attributes of memory pages.
3. The improvement defined by Claim 2 wherein said storage for said page mapping data comprises at least one page directory and at least one page table.
4. The improvement defined by Claim 3 wherein each of said page directory and said page table store said attributes for said memory pages.
5. The improvement defined by Claim 4 wherein at least some of said attributes stored in said page directory and said page table are logically combined and stored in said page cache memory.
6. The improvement defined by Claim 5 wherein said microprocessor provides a page directory base for said page directory.
7. The improvement defined by Claim 6 wherein a first portion of said first field provides an index into said page directory base to a location in said page directory.
8. The improvement defined by Claim 7 wherein said locations in said page directory store page table bases and wherein a second portion of said first field provides an index into said page table to a page table location in said data memory.
9. The improvement defined by Claim 8 wherein said locations in said page table provide a base to pages in said data memory.
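The two-step lookup recited in Claims 6 through 9 can be sketched as follows. The 10/10/12 split of the address and the flat dictionary model of the data memory are illustrative assumptions; the patent claims only the structure (directory base, directory entry giving a table base, table entry giving a page base).

```python
def page_walk(linear, page_directory_base, memory):
    """Sketch of the page-directory/page-table walk of Claims 6-9,
    assuming an illustrative 32-bit address split 10/10/12.
    `memory` models the data memory as address -> stored word."""
    dir_index   = (linear >> 22) & 0x3FF   # first portion of the first field
    table_index = (linear >> 12) & 0x3FF   # second portion of the first field
    offset      = linear & 0xFFF           # offset field
    page_table_base = memory[page_directory_base + dir_index]  # directory location
    page_base = memory[page_table_base + table_index]          # page table location
    return page_base + offset              # base to the page, plus offset
```

Each walk costs two references into the data memory, which is the work the page cache memory avoids on a hit.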
10. The improvement defined by Claim 2 wherein said page cache memory includes a content addressable memory (CAM) and a page base memory, the output of said CAM selecting page bases for said data memory from said page base memory.
11. The improvement defined by Claim 10 wherein said CAM stores attributes of data memory pages.
12. The improvement defined by Claim 11 wherein said CAM includes means for selectively masking at least one of said attributes during said comparison.
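The selective masking of Claims 11 and 12 can be illustrated with a small Python model. The bit-vector representation and function name are assumptions; the idea being modeled is that a masked-off attribute bit takes no part in the comparison, just as a comparator whose pair of lines is held at one state is disabled.

```python
def cam_match(entries, key, mask):
    """Sketch of a CAM compare with selective masking: bits cleared in
    `mask` are ignored for every stored entry, so those attribute bits
    cannot cause a mismatch. Returns the indices of matching entries."""
    return [i for i, stored in enumerate(entries)
            if (stored & mask) == (key & mask)]
```

For example, masking off a dirty or accessed attribute lets an entry hit regardless of that attribute's current value.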
13. An improvement in memory management for a microprocessor system comprising: a microprocessor having a segmentation mechanism for translating a virtual memory address to a second memory address and for testing attributes of data memory segments; a data memory coupled to said microprocessor; said microprocessor including a page cache memory integral with said microprocessor for receiving a first field of said second memory address and for comparing it with contents of said cache memory to provide a second field under certain conditions; said data memory including storage for page mapping data, said first field of said second memory address being coupled to said data memory to select a third field from said page data when said certain conditions of said page cache memory are not met; said microprocessor system including a circuit for combining one of said second and third fields with an offset field from said first address to provide a physical address for said data memory; whereby the physical addressability of said data memory is improved.
14. The improvement defined by Claim 13 wherein said segmentation mechanism comprises: segment descriptor registers integral with said microprocessor for providing a segment base; and said data memory including a segment descriptor table which is accessed by a segment field of said first address.
15. The improvement defined by Claim 14 wherein said page cache memory and said storage for said page data include information on the attributes of memory pages.
16. The improvement defined by Claim 15 wherein said storage for said page mapping data comprises a page directory and a page table.
17. The improvement defined by Claim 16 wherein each of said page directory and page table store said attributes for said memory pages.
18. The improvement defined by Claim 17 wherein at least some of said attributes stored in said page directory and page table are logically combined and stored in said page cache memory.
19. An address translation unit formed as part of a microprocessor for operating with a data memory comprising: segment descriptor registers for receiving a virtual address and for providing a segment base; said microprocessor for providing an address for the data memory to permit addressing of a segment descriptor table in said data memory, said segment descriptor table providing said segment base address; said microprocessor employing said segment base address and a portion of said virtual address to provide a second memory address; a page cache memory for receiving a first field of said second memory address and for comparing it with the contents of said page cache memory to provide a second field under certain second conditions; said microprocessor for providing said first field to a page data table in said data memory for providing said second field if said second conditions are not met; said second field providing a page base for said data memory, whereby the physical addressability of said data memory is improved.
20. The unit defined by Claim 19 wherein said segment descriptor registers store segment data attributes and wherein said page cache memory stores page data attributes.
21. A content addressable memory (CAM) comprising: a plurality of buffers, each for receiving first signals and for providing said first signals and second signals, said second signals being complements of said first signals; a plurality of generally parallel pairs of lines, each pair being coupled to receive one of said first and second signals; a plurality of memory cells coupled between each pair of lines, said cells being arranged in rows generally perpendicular to said pairs of lines; a plurality of row comparator lines, one associated with each of said rows of cells; a plurality of comparators, one for coupling between each of said memory cells, its respective pair of lines and one of said comparator lines, said comparators for comparing a binary state stored in said memory cell with said first and second signals; loading means for loading data from said pairs of lines to said cells; said comparators being disabled when both lines of their respective pairs are maintained at a certain binary state; whereby by causing at least some of said buffers to provide said certain binary state for said first and second signals, selected ones of said cells can be ignored for said comparison.
22. The CAM defined by Claim 21 wherein said row comparator lines are precharged lines.
23. The CAM defined by Claim 22 including a storage memory which comprises a plurality of sections and wherein data is accessed simultaneously in all of said sections, an output from one of said sections being selected through said row lines.
24. The CAM defined by Claim 23 including detectors coupled to a predetermined number of said row lines, said detectors for sensing which one of said predetermined number of lines remains charged.
25. The CAM defined by Claim 24 wherein said selection of said output from one of said sections is made by said detectors.
26. An improvement in memory management for a microprocessor system substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.
GB8519991A 1985-06-13 1985-08-08 Memory management for microprocessor system Expired GB2176918B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SG34290A SG34290G (en) 1985-06-13 1990-05-15 Content addressable memory
HK536/90A HK53690A (en) 1985-06-13 1990-07-19 Content addressable memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US74438985A 1985-06-13 1985-06-13

Publications (3)

Publication Number Publication Date
GB8519991D0 GB8519991D0 (en) 1985-09-18
GB2176918A true GB2176918A (en) 1987-01-07
GB2176918B GB2176918B (en) 1989-11-01

Family

ID=24992533

Family Applications (2)

Application Number Title Priority Date Filing Date
GB8519991A Expired GB2176918B (en) 1985-06-13 1985-08-08 Memory management for microprocessor system
GB8612679A Expired GB2176920B (en) 1985-06-13 1986-05-23 Content addressable memory

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB8612679A Expired GB2176920B (en) 1985-06-13 1986-05-23 Content addressable memory

Country Status (8)

Country Link
JP (1) JPH0622000B2 (en)
KR (1) KR900005897B1 (en)
CN (1) CN1008839B (en)
DE (1) DE3618163C2 (en)
FR (1) FR2583540B1 (en)
GB (2) GB2176918B (en)
HK (1) HK53590A (en)
SG (1) SG34090G (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0468542A3 (en) * 1987-12-22 1992-08-12 Kendall Square Research Corporation Multiprocessor digital data processing system
GB2260629A (en) * 1991-10-16 1993-04-21 Intel Corp A segment descriptor cache for a microprocessor
GB2260630A (en) * 1991-10-16 1993-04-21 Intel Corp A memory management system for preserving cache coherency
US5226039A (en) * 1987-12-22 1993-07-06 Kendall Square Research Corporation Packet routing switch
US5251308A (en) * 1987-12-22 1993-10-05 Kendall Square Research Corporation Shared memory multiprocessor with data hiding and post-store
US5313647A (en) * 1991-09-20 1994-05-17 Kendall Square Research Corporation Digital data processor with improved checkpointing and forking
US5341483A (en) * 1987-12-22 1994-08-23 Kendall Square Research Corporation Dynamic hierarchial associative memory
EP0613090A1 (en) * 1993-02-26 1994-08-31 Siemens Nixdorf Informationssysteme Aktiengesellschaft Method for checking the admissibility of direct memory accesses in a data processing systems
EP0653709A1 (en) * 1993-11-12 1995-05-17 International Business Machines Corporation Computer address space protection system
GB2285323A (en) * 1994-01-04 1995-07-05 Intel Corp Address generation unit with segmented addresses in a microprocessor
US5535393A (en) * 1991-09-20 1996-07-09 Reeve; Christopher L. System for parallel processing that compiles a filed sequence of instructions within an iteration space
US6332185B1 (en) 1991-09-20 2001-12-18 Sun Microsystems, Inc. Method and apparatus for paging data and attributes including an atomic attribute for digital data processor
US7149862B2 (en) 2002-11-18 2006-12-12 Arm Limited Access control in a data processing apparatus
US7171539B2 (en) 2002-11-18 2007-01-30 Arm Limited Apparatus and method for controlling access to a memory
US7185159B2 (en) 2002-11-18 2007-02-27 Arm Limited Technique for accessing memory in a data processing apparatus
US7305534B2 (en) 2002-11-18 2007-12-04 Arm Limited Control of access to a memory by a device
US7487367B2 (en) 2002-11-18 2009-02-03 Arm Limited Apparatus and method for managing access to a memory

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
WO1988007721A1 (en) * 1987-04-02 1988-10-06 Unisys Corporation Associative address translator for computer memory systems
US5761413A (en) 1987-12-22 1998-06-02 Sun Microsystems, Inc. Fault containment system for multiprocessor with shared memory
CN1068687C (en) * 1993-01-20 2001-07-18 联华电子股份有限公司 Memory Dynamic Allocation Method for Recording Multi-segment Voices
US6622211B2 (en) * 2001-08-15 2003-09-16 Ip-First, L.L.C. Virtual set cache that redirects store data to correct virtual set to avoid virtual set store miss penalty
KR100406924B1 (en) * 2001-10-12 2003-11-21 삼성전자주식회사 Content addressable memory cell
US7689485B2 (en) 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US7900017B2 (en) * 2002-12-27 2011-03-01 Intel Corporation Mechanism for remapping post virtual machine memory pages
WO2005017754A1 (en) * 2003-07-29 2005-02-24 Cisco Technology, Inc. Force no-hit indications for cam entries based on policy maps
US20060090034A1 (en) * 2004-10-22 2006-04-27 Fujitsu Limited System and method for providing a way memoization in a processing environment
GB2448523B (en) * 2007-04-19 2009-06-17 Transitive Ltd Apparatus and method for handling exception signals in a computing system
US8799620B2 (en) 2007-06-01 2014-08-05 Intel Corporation Linear to physical address translation with support for page attributes
KR101671494B1 (en) 2010-10-08 2016-11-02 삼성전자주식회사 Multi Processor based on shared virtual memory and Method for generating address translation table
FR3065826B1 (en) * 2017-04-28 2024-03-15 Patrick Pirim AUTOMATED METHOD AND ASSOCIATED DEVICE CAPABLE OF STORING, RECALLING AND, IN A NON-VOLATILE MANNER, ASSOCIATIONS OF MESSAGES VERSUS LABELS AND VICE VERSA, WITH MAXIMUM LIKELIHOOD
KR102686380B1 (en) * 2018-12-20 2024-07-19 에스케이하이닉스 주식회사 Memory device, Memory system including the memory device and Method of operating the memory device

Citations (2)

Publication number Priority date Publication date Assignee Title
GB1595740A (en) * 1978-05-25 1981-08-19 Fujitsu Ltd Data processing apparatus
GB2127994A (en) * 1982-09-29 1984-04-18 Apple Computer Memory management unit for digital computer

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CA784373A (en) * 1963-04-01 1968-04-30 W. Bremer John Content addressed memory system
GB1281387A (en) * 1969-11-22 1972-07-12 Ibm Associative store
US3761902A (en) * 1971-12-30 1973-09-25 Ibm Functional memory using multi-state associative cells
GB1457423A (en) * 1973-01-17 1976-12-01 Nat Res Dev Associative memories
GB1543736A (en) * 1976-06-21 1979-04-04 Nat Res Dev Associative processors
US4376297A (en) * 1978-04-10 1983-03-08 Signetics Corporation Virtual memory addressing device
US4377855A (en) * 1980-11-06 1983-03-22 National Semiconductor Corporation Content-addressable memory
US4442482A (en) * 1982-09-30 1984-04-10 Venus Scientific Inc. Dual output H.V. rectifier power supply driven by common transformer winding
JPH0658646B2 (en) * 1982-12-30 1994-08-03 インタ−ナショナル・ビジネス・マシ−ンズ・コ−ポレ−ション Virtual memory address translation mechanism with controlled data persistence

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
GB1595740A (en) * 1978-05-25 1981-08-19 Fujitsu Ltd Data processing apparatus
GB2127994A (en) * 1982-09-29 1984-04-18 Apple Computer Memory management unit for digital computer

Cited By (26)

Publication number Priority date Publication date Assignee Title
US5341483A (en) * 1987-12-22 1994-08-23 Kendall Square Research Corporation Dynamic hierarchial associative memory
US6694412B2 (en) 1987-12-22 2004-02-17 Sun Microsystems, Inc. Multiprocessor digital data processing system
EP0468542A3 (en) * 1987-12-22 1992-08-12 Kendall Square Research Corporation Multiprocessor digital data processing system
US5226039A (en) * 1987-12-22 1993-07-06 Kendall Square Research Corporation Packet routing switch
US5251308A (en) * 1987-12-22 1993-10-05 Kendall Square Research Corporation Shared memory multiprocessor with data hiding and post-store
US5297265A (en) * 1987-12-22 1994-03-22 Kendall Square Research Corporation Shared memory multiprocessor system and method of operation thereof
US6332185B1 (en) 1991-09-20 2001-12-18 Sun Microsystems, Inc. Method and apparatus for paging data and attributes including an atomic attribute for digital data processor
US5535393A (en) * 1991-09-20 1996-07-09 Reeve; Christopher L. System for parallel processing that compiles a filed sequence of instructions within an iteration space
US5313647A (en) * 1991-09-20 1994-05-17 Kendall Square Research Corporation Digital data processor with improved checkpointing and forking
GB2260630B (en) * 1991-10-16 1995-06-28 Intel Corp An improved memory management system for a microprocessor
FR2683061A1 (en) * 1991-10-16 1993-04-30 Intel Corp MEMORY SEGMENTATION SYSTEM.
GB2260629A (en) * 1991-10-16 1993-04-21 Intel Corp A segment descriptor cache for a microprocessor
GB2260629B (en) * 1991-10-16 1995-07-26 Intel Corp A segment descriptor cache for a microprocessor
FR2682783A1 (en) * 1991-10-16 1993-04-23 Intel Corp MAINTAINING HIDDEN CONSISTENCY.
GB2260630A (en) * 1991-10-16 1993-04-21 Intel Corp A memory management system for preserving cache coherency
EP0613090A1 (en) * 1993-02-26 1994-08-31 Siemens Nixdorf Informationssysteme Aktiengesellschaft Method for checking the admissibility of direct memory accesses in a data processing systems
EP0653709A1 (en) * 1993-11-12 1995-05-17 International Business Machines Corporation Computer address space protection system
US5548746A (en) * 1993-11-12 1996-08-20 International Business Machines Corporation Non-contiguous mapping of I/O addresses to use page protection of a process
GB2285323B (en) * 1994-01-04 1998-06-24 Intel Corp Address generation unit with segmented addresses in a microprocessor
US5590297A (en) * 1994-01-04 1996-12-31 Intel Corporation Address generation unit with segmented addresses in a mircroprocessor
GB2285323A (en) * 1994-01-04 1995-07-05 Intel Corp Address generation unit with segmented addresses in a microprocessor
US7149862B2 (en) 2002-11-18 2006-12-12 Arm Limited Access control in a data processing apparatus
US7171539B2 (en) 2002-11-18 2007-01-30 Arm Limited Apparatus and method for controlling access to a memory
US7185159B2 (en) 2002-11-18 2007-02-27 Arm Limited Technique for accessing memory in a data processing apparatus
US7305534B2 (en) 2002-11-18 2007-12-04 Arm Limited Control of access to a memory by a device
US7487367B2 (en) 2002-11-18 2009-02-03 Arm Limited Apparatus and method for managing access to a memory

Also Published As

Publication number Publication date
HK53590A (en) 1990-07-27
GB2176920B (en) 1989-11-22
CN1008839B (en) 1990-07-18
JPH0622000B2 (en) 1994-03-23
DE3618163A1 (en) 1986-12-18
GB8519991D0 (en) 1985-09-18
KR900005897B1 (en) 1990-08-13
FR2583540A1 (en) 1986-12-19
SG34090G (en) 1990-08-03
KR870003427A (en) 1987-04-17
GB2176918B (en) 1989-11-01
DE3618163C2 (en) 1995-04-27
JPS61286946A (en) 1986-12-17
GB8612679D0 (en) 1986-07-02
CN85106711A (en) 1987-02-04
FR2583540B1 (en) 1991-09-06
GB2176920A (en) 1987-01-07

Similar Documents

Publication Publication Date Title
US5321836A (en) Virtual memory management method and apparatus utilizing separate and independent segmentation and paging mechanism
GB2176918A (en) Memory management for microprocessor system
KR920005280B1 (en) High speed cache system
US5173872A (en) Content addressable memory for microprocessor system
US5526504A (en) Variable page size translation lookaside buffer
US5412787A (en) Two-level TLB having the second level TLB implemented in cache tag RAMs
US6493812B1 (en) Apparatus and method for virtual address aliasing and multiple page size support in a computer system having a prevalidated cache
US3764996A (en) Storage control and address translation
US5604879A (en) Single array address translator with segment and page invalidate ability and method of operation
EP0496288B1 (en) Variable page size per entry translation look-aside buffer
US4685082A (en) Simplified cache with automatic update
EP0036110B1 (en) Cache addressing mechanism
US4136385A (en) Synonym control means for multiple virtual storage systems
US5287475A (en) Data processing apparatus operable in extended or unextended virtual address spaces without software modification
JP3666689B2 (en) Virtual address translation method
US6874077B2 (en) Parallel distributed function translation lookaside buffer
US5530824A (en) Address translation circuit
JPH07200405A (en) Circuit and method for cache of information
GB2293672A (en) Virtual page memory buffer
JPH08227380A (en) Data-processing system
US5535351A (en) Address translator with by-pass circuit and method of operation
US5530822A (en) Address translator and method of operation
US6385696B1 (en) Embedded cache with way size bigger than page size
US6560689B1 (en) TLB using region ID prevalidation
US6574698B1 (en) Method and system for accessing a cache memory within a data processing system

Legal Events

Date Code Title Description
PE20 Patent expired after termination of 20 years

Effective date: 20050807