
CN113392089B - Database index optimization method and readable storage medium - Google Patents

Database index optimization method and readable storage medium Download PDF

Info

Publication number
CN113392089B
CN113392089B
Authority
CN
China
Prior art keywords
data
additional buffer
buffer area
current
tree
Prior art date
Legal status
Active
Application number
CN202110711488.9A
Other languages
Chinese (zh)
Other versions
CN113392089A (en)
Inventor
张益林
董晓
宋艳丽
苗健
Current Assignee
Highgo Base Software Co ltd
Original Assignee
Highgo Base Software Co ltd
Priority date
Filing date
Publication date
Application filed by Highgo Base Software Co ltd
Priority to CN202110711488.9A
Publication of CN113392089A
Application granted
Publication of CN113392089B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21 Design, administration or maintenance of databases
    • G06F 16/217 Database tuning
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 Indexing structures
    • G06F 16/2246 Trees, e.g. B+trees
    • G06F 16/23 Updating
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a database index optimization method and a readable storage medium, including: in the process of creating a tree index for a data table, creating corresponding additional buffer areas for a plurality of tree nodes of the tree index; when a data operation needs to be performed on the data table, performing the data operation on the additional buffer area corresponding to the tree node; and detecting whether the current additional buffer area meets an emptying condition, emptying the current additional buffer area when the preset emptying condition is met, and pushing the data updates in the current additional buffer area down to the additional buffer area of the next-layer node. By creating additional buffer areas for the nodes and performing data operations on the additional buffer area corresponding to a tree node whenever a data operation is required on the data table, the embodiments of the invention realize delayed updates of each tree node of the database, fully exploit the performance of flash memory, and improve the query efficiency of the database.

Description

Database index optimization method and readable storage medium
Technical Field
The invention relates to the technical field of databases, in particular to a database index optimization method and a readable storage medium.
Background
An index is an important component of a relational database management system: it records the mapping between data and its storage address, allows specific information in a database table to be accessed quickly, reduces the cost of I/O operations, and can effectively improve database query performance. Because of the multi-version concurrency control (MVCC) feature of relational databases, when data in the database changes, its storage address changes at the same time; and since the database index records the mapping between data and storage addresses, the index must be updated accordingly.
The B-tree is a balanced multi-way search tree that is modified at the appropriate leaf nodes when data changes. When data is inserted, it may be necessary to split leaf nodes and adjust parent nodes in turn depending on the situation, and in the worst case the split operation may propagate all the way up to the root node.
The index structures of current databases were designed around the read-write characteristics of traditional magnetic disks, whereas flash memory has different read-write characteristics: its physical properties make the cost of a write operation far higher than the cost of a read operation. Conventional database index structures do not take this into account. Taking the B-tree index of a conventional database system as an example, updating any node of the B-tree may cause a flash write, and the update of one node may spread to other nodes and trigger further updates, resulting in even more flash write operations.
Disclosure of Invention
Embodiments of the present invention provide a database index optimization method and a readable storage medium, which implement delayed updates by creating additional buffers for nodes, thereby fully exploiting the performance of flash memory.
The embodiment of the disclosure provides a database index optimization method, which includes:
in the process of creating a tree index for a data table, creating corresponding additional buffer areas for a plurality of tree nodes of the tree index;
under the condition that data operation needs to be carried out on the data table, the data operation is carried out on an additional buffer area corresponding to the tree node;
detecting whether the current additional buffer area meets an emptying condition, emptying the current additional buffer area under the condition that the current additional buffer area meets a preset emptying condition, and updating and pushing data in the current additional buffer area to an additional buffer area of a next layer node.
In an embodiment, the detecting whether the current additional buffer meets an emptying condition, and if the current additional buffer meets a preset emptying condition, emptying the current additional buffer includes:
and under the condition that the data operation comprises data insertion, detecting the size of the inserted data, and if the size of the inserted data exceeds a preset data specification, triggering to empty the current additional buffer area.
In an embodiment, in the case that a data operation needs to be performed on the data table, the performing the data operation on the additional buffer corresponding to the tree node includes:
creating an adaptive load buffer and acquiring data related to data operation based on the adaptive load buffer;
and sending a corresponding data operation request to the tree index based on the related data of the data operation.
In an embodiment, after sending the corresponding data operation request to the tree index, the database index optimization method further includes:
and searching tree nodes and corresponding additional buffers from top to bottom based on the data operation request to acquire associated data matched with the data operation request.
In one embodiment, in the case that the tree node and the corresponding additional buffer area are searched from top to bottom based on the data operation request, and it is determined that the additional buffer area exists in the current tree node, the following process is executed from top to bottom according to the hierarchical relationship of the tree nodes:
checking the adaptive load buffer and an additional buffer of the current tree node through an adaptive algorithm;
if it is determined through the adaptive algorithm that the additional buffer area of the current tree node needs to be emptied, the current additional buffer area is emptied, and data in the current additional buffer area is updated and pushed to the additional buffer area of the next-layer node;
and if the additional buffer area of the current tree node does not need to be emptied, scanning the additional buffer area based on the data operation request.
In an embodiment, the determining, by the adaptive algorithm, that the additional buffer of the current tree node needs to be emptied specifically includes:
detecting the size of an additional buffer area of the current tree node;
comparing the cost of scanning the additional buffer area of the current tree node with the cost of emptying the additional buffer area of the current tree node to obtain a comparison result;
and determining that the additional buffer area of the current tree node needs to be emptied according to the comparison result.
In one embodiment, emptying the current additional buffer comprises:
sorting the data in the additional buffer area of the current tree node;
and distributing the data in the additional buffer area of the current tree node to the next-layer tree nodes according to the sorting result and the data of the next-layer tree nodes.
In an embodiment, in the case that the flush operation is performed to the lowest tree node, the database index optimization method further includes:
writing and merging all the data emptied by the additional buffer area; and
if the data operation comprises data insertion and the inserted data exceeds a first preset value, splitting the corresponding tree node;
and if the data operation comprises data deletion and the deleted data exceeds a second preset value, merging the corresponding tree nodes.
In one embodiment, after the current additional buffer is emptied, the database index optimization method further includes:
recording data operation records executed by each tree node through the self-adaptive load buffer area;
and increasing the size of the additional buffer area of the update-intensive tree node according to the data operation record, and reducing the size of the additional buffer area of the search-intensive tree node.
The disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the foregoing database index optimization method.
According to the embodiments of the invention, an additional buffer area is created for each node, and when a data operation needs to be performed on the data table the operation is executed on the additional buffer area corresponding to the tree node; redundant data operations can thus be eliminated and the update of each tree node of the database can be delayed based on the buffers, so that the performance of flash memory is fully exploited.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a basic flow diagram of an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a tree index structure according to an embodiment of the disclosure.
Fig. 3 is a schematic diagram illustrating an additional buffer emptying process according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An embodiment of the present disclosure provides a database index optimization method, as shown in fig. 1, including:
s101, in the process of creating the tree index for the data table, creating corresponding conflict areas for a plurality of tree nodes of the tree index.
S102, under the condition that a data operation needs to be performed on the data table, performing the data operation on the additional buffer area corresponding to the tree node.
S103, detecting whether the current additional buffer area meets an emptying condition, and emptying the current additional buffer area under the condition that the current additional buffer area meets a preset emptying condition.
And S104, updating and pushing the data in the current additional buffer area to an additional buffer area of a next layer node.
As shown in fig. 2, the adaptive index is a balanced tree; the number of nodes is denoted as F, and the maximum number of elements that a child node can hold is denoted as N. In this example, corresponding additional buffer areas may be created for a plurality of tree nodes of the tree index, so that when a data operation occurs on a tree node the additional buffer area can be operated on first. For example, if the current data of the data table needs to be inserted into the adaptive index in sequence, the data can simply be added to the additional buffer area of the root node, thereby eliminating redundant data operations, delaying the update of each tree node of the database based on the buffers, and fully exploiting the performance of the flash memory.
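The following Python sketch illustrates this structure; it is not part of the patent, and the class names, the `max_buffer` parameter standing in for the size M, and the `(op, key, value)` entry layout are illustrative assumptions:
```python
# Minimal sketch (illustrative, not the patented implementation):
# each tree node carries an "additional buffer" that absorbs updates,
# so an insert only touches the root's buffer instead of a leaf.

class Node:
    def __init__(self, is_leaf=True, max_buffer=4):
        self.is_leaf = is_leaf
        self.keys = []                # separator keys (internal) or indexed keys (leaf)
        self.children = []            # child Node objects for internal nodes
        self.buffer = []              # additional buffer: pending (op, key, value) entries
        self.max_buffer = max_buffer  # predefined buffer size M

class BufferedTreeIndex:
    def __init__(self, max_buffer=4):
        self.root = Node(is_leaf=True, max_buffer=max_buffer)

    def insert(self, key, value):
        # The operation is appended to the root's additional buffer;
        # the leaf that will finally store the entry is updated lazily later.
        self.root.buffer.append(("insert", key, value))

idx = BufferedTreeIndex()
idx.insert(42, "row@0x1f00")
print(idx.root.buffer)  # [('insert', 42, 'row@0x1f00')]
```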
In an embodiment, detecting whether the current additional buffer meets an emptying condition, and emptying it when the preset emptying condition is met, includes: when the data operation includes data insertion, detecting the size of the inserted data, and triggering the emptying of the current additional buffer if the size of the inserted data exceeds a preset data specification. Specifically, as shown in fig. 3, when insert or update operations occur in the additional buffer of a tree node, multiple operations on the same piece of data may be merged inside the additional buffer. When data has been inserted into an additional buffer and the size of that buffer exceeds the predefined size M, a buffer emptying operation is triggered and the data updates of the buffer are pushed down to the next-level additional buffer.
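A minimal sketch of this merge-then-flush trigger follows; the `(op, key, value)` entry format and the threshold parameter are assumptions for illustration, not details fixed by the patent:
```python
# Sketch of the flush trigger: multiple operations on the same key are
# merged inside the buffer, and a flush is signalled once the merged
# buffer grows past the predefined size M.

def merge_ops(buffer):
    # keep only the most recent operation per key
    latest = {}
    for op, key, value in buffer:
        latest[key] = (op, key, value)
    return list(latest.values())

def needs_flush(buffer, max_size_m):
    return len(merge_ops(buffer)) > max_size_m

buf = [("insert", 7, "v1"), ("update", 7, "v2"), ("insert", 9, "v3")]
print(merge_ops(buf))       # [('update', 7, 'v2'), ('insert', 9, 'v3')]
print(needs_flush(buf, 2))  # False: only 2 distinct keys remain after merging
```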
In an embodiment, performing the data operation on the additional buffer corresponding to the tree node when a data operation needs to be performed on the data table includes: creating an adaptive load buffer and acquiring the data related to the data operation based on the adaptive load buffer; and sending a corresponding data operation request to the tree index based on the data related to the data operation. Specifically, for data operations such as insertion, deletion, modification and lookup on the database, an adaptive load buffer may additionally be created in this example, and the data related to the data operations may be acquired through it, so that a workload corresponding to the data operations is formed in the adaptive load buffer. A corresponding data operation request is then sent to the target tree index based on the data related to the data operation. For example, when a data query is required, a data query request may be sent to the target tree index; after the target tree index receives the request, the whole tree is searched from the root node downward, and if a visited child node has an additional buffer, that buffer is also scanned to find data updates that have not yet reached the leaf nodes of the next layer.
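A sketch of such a top-down lookup that consults each node's additional buffer before descending; the node layout and entry format below are assumptions chosen only to illustrate the traversal:
```python
import bisect

# Sketch: point lookup over a buffered tree (illustrative structure).
class Node:
    def __init__(self, is_leaf=True):
        self.is_leaf = is_leaf
        self.keys = []       # separator keys for internal nodes
        self.children = []   # child nodes
        self.entries = []    # (key, value) pairs stored in a leaf
        self.buffer = []     # pending (op, key, value) operations

def lookup(node, key):
    # Scan this node's additional buffer first: the newest pending
    # operation for the key overrides anything stored further down.
    for op, k, value in reversed(node.buffer):
        if k == key:
            return None if op == "delete" else value
    if node.is_leaf:
        return dict(node.entries).get(key)
    # descend into the child whose key range covers the search key
    child = node.children[bisect.bisect_right(node.keys, key)]
    return lookup(child, key)

root = Node(is_leaf=False); root.keys = [10]
left, right = Node(), Node()
left.entries = [(3, "a")]; right.entries = [(15, "b")]
root.children = [left, right]
root.buffer = [("update", 15, "b2")]
print(lookup(root, 15))  # 'b2' (pending update in the root buffer wins)
print(lookup(root, 3))   # 'a'
```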
In an embodiment, after sending the corresponding data operation request to the tree index, the database index optimization method further includes: and searching tree nodes and corresponding additional buffers from top to bottom based on the data operation request to acquire associated data matched with the data operation request.
In one embodiment, when tree nodes and the corresponding additional buffers are searched from top to bottom based on the data operation request and it is determined that the current tree node has an additional buffer, the following process is executed from top to bottom according to the hierarchical relationship of the tree nodes: checking the adaptive load buffer and the additional buffer of the current tree node through an adaptive algorithm; if the adaptive algorithm determines that the additional buffer of the current tree node needs to be emptied, emptying the current additional buffer and pushing its data updates down to the additional buffer of the next-layer node; and if the additional buffer of the current tree node does not need to be emptied, scanning the additional buffer based on the data operation request. In this example, the tree nodes and the corresponding additional buffers may be searched from top to bottom based on the B-tree and the received data operation request until the bottommost additional buffers and leaf nodes have been scanned. For example, all data in the target tree index matching the lookup predicate of the data query request may be obtained.
In an embodiment, determining through the adaptive algorithm that the additional buffer of the current tree node needs to be emptied specifically includes: detecting the size of the additional buffer of the current tree node; comparing the cost of scanning the additional buffer of the current tree node with the cost of emptying it to obtain a comparison result; and determining, according to the comparison result, whether the additional buffer of the current tree node needs to be emptied. Specifically, in this example, the process of searching for specific data may include having the adaptive algorithm process all the additional buffers of a given adaptive tree index and record the update and lookup requests against all additional buffers. The total size of the additional buffers is tracked in real time, the cost of scanning an additional buffer and the cost of emptying it are recorded and adjusted, and the one-time cost of emptying a buffer is weighed against the cost saved by subsequent lookups that no longer need to scan the emptied buffer. When the saved cost exceeds the emptying cost, a buffer emptying operation is performed and the buffer contents are pushed down to the next additional buffer.
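A sketch of this cost comparison; the cost weights and the way future lookups are estimated are assumptions chosen only to illustrate the decision rule:
```python
# Sketch of the adaptive flush decision: compare the cumulative cost of
# repeatedly scanning a node's additional buffer against the one-time
# cost of emptying it (costs and weights here are illustrative).

def should_flush(buffer_size, expected_lookups,
                 scan_cost_per_entry=1.0, flush_cost_per_entry=4.0):
    # cost saved by future lookups no longer having to scan this buffer
    saved_scan_cost = expected_lookups * buffer_size * scan_cost_per_entry
    # one-time cost of pushing the buffered updates down one level
    flush_cost = buffer_size * flush_cost_per_entry
    return saved_scan_cost > flush_cost

print(should_flush(buffer_size=32, expected_lookups=2))   # False
print(should_flush(buffer_size=32, expected_lookups=10))  # True
```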
In one embodiment, as shown in FIG. 3, emptying the current additional buffer comprises: sorting the data in the additional buffer of the current tree node; and distributing the data in the additional buffer of the current tree node to the next-layer tree nodes according to the sorting result. That is, in the specific process of emptying the current additional buffer, the data in the additional buffer can first be sorted, the child nodes of the tree node to which the additional buffer belongs are then located, and all the data in the additional buffer are distributed in order to the additional buffers of those child nodes according to the data ranges or data types held by the child nodes. For example, the data in the additional buffer may use a key-value index structure, where a "key" is computed from the data in the index column and a "value" is the lower tree node storage location or physical storage location of the data. The storage space occupied by an additional buffer can grow dynamically with the number of queries until the upper limit set by the global parameters of the database is reached or an emptying operation is started by the adaptive algorithm. In this way, query-intensive or update-intensive data can, for example, be stored in the corresponding next-level node.
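A sketch of one such flush step, distributing sorted buffer entries to the children by key range; the separator keys and the entry format are illustrative assumptions:
```python
import bisect

class Node:
    def __init__(self):
        self.keys = []        # separator keys of an internal node
        self.children = []    # child nodes
        self.buffer = []      # pending (op, key, value) entries

def flush_buffer(node):
    # sort the pending operations by key, then route each one into the
    # additional buffer of the child whose key range covers it
    node.buffer.sort(key=lambda entry: entry[1])
    for entry in node.buffer:
        child_idx = bisect.bisect_right(node.keys, entry[1])
        node.children[child_idx].buffer.append(entry)
    node.buffer.clear()  # the parent buffer is now empty

parent = Node(); parent.keys = [10]
parent.children = [Node(), Node()]
parent.buffer = [("insert", 15, "b"), ("insert", 3, "a")]
flush_buffer(parent)
print(parent.children[0].buffer)  # [('insert', 3, 'a')]
print(parent.children[1].buffer)  # [('insert', 15, 'b')]
```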
In an embodiment, in the case that the flush operation is performed to the lowest tree node, the database index optimization method further includes: writing and merging all the data emptied by the additional buffer area; and splitting the corresponding tree node if the data operation comprises data insertion and the inserted data exceeds a first preset value. And if the data operation comprises data deletion and the deleted data exceeds a second preset value, merging the corresponding tree nodes.
For example, when the emptying operation of the additional buffers reaches the leaf nodes at the bottom, the buffers may likewise be sorted first and all the data in the additional buffer then allocated to the leaf nodes in order. When the data is pushed down to the bottommost leaf nodes, all the data emptied from the additional buffers in this round are written and merged together, which reduces the write cost. For data insertion operations, if the data in a leaf node exceeds the first preset value, a split of the leaf node is generated, which may be propagated upward or downward, and a corresponding additional buffer is also allocated for the newly generated leaf node.
For data deletion operations in which the deleted data exceeds the second preset value, the remaining data in the current node may at that point already fit into the tree node of the previous or next layer, so a node merging operation can be performed; when child nodes are merged, the merged child node is likewise allocated a corresponding additional buffer.
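A sketch of this leaf-level step, applying the buffered operations in one batch and then deciding between splitting and merging; the thresholds stand in for the first and second preset values and are illustrative assumptions:
```python
# Sketch of the leaf-level behaviour: buffered operations are applied to
# the leaf in one batch, and the leaf is split when it overflows or marked
# for merging when it underflows.

MAX_LEAF = 4   # "first preset value": split above this
MIN_LEAF = 2   # "second preset value": merge below this

def apply_to_leaf(entries, buffered_ops):
    data = dict(entries)
    for op, key, value in buffered_ops:          # one batched write pass
        if op == "delete":
            data.pop(key, None)
        else:
            data[key] = value
    entries = sorted(data.items())
    if len(entries) > MAX_LEAF:                  # overflow -> split the leaf
        mid = len(entries) // 2
        return [entries[:mid], entries[mid:]], "split"
    if len(entries) < MIN_LEAF:                  # underflow -> merge candidate
        return [entries], "merge_with_sibling"
    return [entries], "ok"

print(apply_to_leaf([(1, "a"), (2, "b")],
                    [("insert", 3, "c"), ("insert", 4, "d"), ("insert", 5, "e")]))
```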
In one embodiment, after the current additional buffer is emptied, the database index optimization method further includes: recording, through the adaptive load buffer, the data operations executed by each tree node; and, according to the data operation records, increasing the size of the additional buffer of update-intensive tree nodes and reducing the size of the additional buffer of search-intensive tree nodes. In this example, the workload formed in the created adaptive load buffer may be used to record the data operations performed by the various tree nodes. The type of a tree node or of its additional buffer can therefore be judged from the operation records of the node and its additional buffer: the size M of the additional buffer is increased when the database service is update-intensive and reduced when it is search-intensive.
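A sketch of this adaptive sizing step; the step size, bounds and the simple update-versus-lookup comparison are assumptions used only to illustrate the adjustment of M:
```python
# Sketch: the workload recorded in the adaptive load buffer is used to
# grow M for update-intensive nodes and shrink it for search-intensive
# nodes (ratios and bounds are illustrative).

def adjust_buffer_size(current_m, updates, lookups,
                       step=16, min_m=16, max_m=1024):
    if updates > lookups:            # update-intensive node: buffer more
        return min(current_m + step, max_m)
    if lookups > updates:            # search-intensive node: buffer less
        return max(current_m - step, min_m)
    return current_m

print(adjust_buffer_size(64, updates=900, lookups=100))  # 80
print(adjust_buffer_size(64, updates=50,  lookups=800))  # 48
```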
The embodiment of the disclosure provides a database index optimization method that, by attaching an additional buffer to tree nodes, yields a delayed adaptive tree index structure: it reduces the update cost of the index by eliminating redundant operations and batching updates during delayed updating, dynamically adjusts the size of the occupied buffers with an adaptive algorithm, fully exploits the characteristics of flash memory, and improves the query efficiency of the database. Compared with the B-tree index of a traditional database, on raw NAND flash memory and under the same physical memory and database workload, the disclosed method obtains a two- to twelve-fold improvement in query performance. On a solid-state drive (SSD) that packages raw NAND flash memory, the disclosed method obtains a three- to six-fold improvement in query performance.
The embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the foregoing database index optimization method.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the particular illustrative embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various modifications, equivalent arrangements, and equivalents thereof, which may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A database index optimization method is characterized by comprising the following steps:
in the process of creating a tree index for a data table, creating corresponding additional buffer areas for a plurality of tree nodes of the tree index;
under the condition that data operation needs to be carried out on the data table, the data operation is carried out on an additional buffer area corresponding to the tree node;
detecting whether a current additional buffer area meets an emptying condition, emptying the current additional buffer area under the condition that the current additional buffer area meets a preset emptying condition, and updating and pushing data in the current additional buffer area to an additional buffer area of a next layer node;
the executing the data operation on the additional buffer area corresponding to the tree node under the condition that the data operation needs to be performed on the data table comprises the following steps:
creating an adaptive load buffer and acquiring data related to data operation based on the adaptive load buffer;
and sending a corresponding data operation request to the tree index based on the related data of the data operation.
2. The database index optimization method of claim 1, wherein the detecting whether the current additional buffer meets a clearing condition, and if the current additional buffer meets a preset clearing condition, clearing the current additional buffer comprises:
and under the condition that the data operation comprises data insertion, detecting the size of the inserted data, and if the size of the inserted data exceeds a preset data specification, triggering to empty the current additional buffer area.
3. The database index optimization method of claim 1, wherein after sending the corresponding data operation request to the tree index, the database index optimization method further comprises:
and searching tree nodes and corresponding additional buffers from top to bottom based on the data operation request to acquire associated data matched with the data operation request.
4. The database index optimization method of claim 3, wherein in case of searching tree nodes and corresponding additional buffers from top to bottom based on the data operation request and determining that additional buffers exist in the current tree node, the following procedure is performed from top to bottom according to the hierarchical relationship of the tree nodes:
checking the adaptive load buffer and an additional buffer of the current tree node through an adaptive algorithm;
if it is determined through the adaptive algorithm that the additional buffer area of the current tree node needs to be emptied, the current additional buffer area is emptied, and data in the current additional buffer area is updated and pushed to the additional buffer area of the next-layer node;
and if the additional buffer area of the current tree node does not need to be emptied, scanning the additional buffer area based on the data operation request.
5. The database index optimization method of claim 4, wherein said determining, by said adaptive algorithm, that additional buffers of current tree nodes need to be emptied specifically comprises:
detecting the size of an additional buffer area of the current tree node;
comparing the cost of scanning the additional buffer area of the current tree node with the cost of emptying the additional buffer area of the current tree node to obtain a comparison result;
and determining that the additional buffer area of the current tree node needs to be emptied according to the comparison result.
6. The database index optimization method of claim 4, wherein emptying the current additional buffer comprises:
sorting the data in the additional buffer area of the current tree node;
and distributing the data in the additional buffer area of the current tree node to the next-layer tree nodes according to the sorting result and the data of the next-layer tree nodes.
7. The database index optimization method of claim 6, wherein in case of a flush operation performed to a lowest level tree node, the database index optimization method further comprises:
writing and merging all the data emptied by the additional buffer area; and
if the data operation comprises data insertion and the inserted data exceeds a first preset value, splitting the corresponding tree node;
and if the data operation comprises data deletion and the deleted data exceeds a second preset value, merging the corresponding tree nodes.
8. The database index optimization method of claim 6, wherein after emptying the current additional buffer, the database index optimization method further comprises:
recording data operation records executed by each tree node through the self-adaptive load buffer area;
and increasing the size of the additional buffer area of the update-intensive tree node according to the data operation record, and reducing the size of the additional buffer area of the search-intensive tree node.
9. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the database index optimization method according to any one of claims 1 to 8.
CN202110711488.9A 2021-06-25 2021-06-25 Database index optimization method and readable storage medium Active CN113392089B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110711488.9A CN113392089B (en) 2021-06-25 2021-06-25 Database index optimization method and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110711488.9A CN113392089B (en) 2021-06-25 2021-06-25 Database index optimization method and readable storage medium

Publications (2)

Publication Number Publication Date
CN113392089A CN113392089A (en) 2021-09-14
CN113392089B true CN113392089B (en) 2023-02-24

Family

ID=77623931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110711488.9A Active CN113392089B (en) 2021-06-25 2021-06-25 Database index optimization method and readable storage medium

Country Status (1)

Country Link
CN (1) CN113392089B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114791913B (en) * 2022-04-26 2024-09-13 北京人大金仓信息技术股份有限公司 Shared memory buffer pool processing method, storage medium and equipment for database
CN117194739B (en) * 2023-09-12 2024-04-19 北京云枢创新软件技术有限公司 Method, electronic equipment and medium for searching hierarchical tree nodes based on hit state

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6470344B1 (en) * 1999-05-29 2002-10-22 Oracle Corporation Buffering a hierarchical index of multi-dimensional data
CN1545048A (en) * 2003-11-17 2004-11-10 中兴通讯股份有限公司 Method for implementing tree storage and access by two-dimensional table
CN101576915A (en) * 2009-06-18 2009-11-11 北京大学 Distributed B+ tree index system and building method
CN104331497A (en) * 2014-11-19 2015-02-04 中国科学院自动化研究所 Method and device using vector instruction to process file index in parallel mode
CN109254962A (en) * 2017-07-06 2019-01-22 中国移动通信集团浙江有限公司 A kind of optimiged index method and device based on T- tree
CN110188108A (en) * 2019-06-10 2019-08-30 北京平凯星辰科技发展有限公司 Date storage method, device, system, computer equipment and storage medium
CN111538724A (en) * 2019-02-07 2020-08-14 株式会社特迈数据 Method for managing index

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8185355B2 (en) * 2007-04-03 2012-05-22 Microsoft Corporation Slot-cache for caching aggregates of data with different expiry times
CN108762664B (en) * 2018-02-05 2021-03-16 杭州电子科技大学 Solid state disk page-level cache region management method
CN111930517B (en) * 2020-09-18 2023-07-14 北京中科立维科技有限公司 A high-performance adaptive garbage collection method and computer system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6470344B1 (en) * 1999-05-29 2002-10-22 Oracle Corporation Buffering a hierarchical index of multi-dimensional data
CN1545048A (en) * 2003-11-17 2004-11-10 中兴通讯股份有限公司 Method for implementing tree storage and access by two-dimensional table
CN101576915A (en) * 2009-06-18 2009-11-11 北京大学 Distributed B+ tree index system and building method
CN104331497A (en) * 2014-11-19 2015-02-04 中国科学院自动化研究所 Method and device using vector instruction to process file index in parallel mode
CN109254962A (en) * 2017-07-06 2019-01-22 中国移动通信集团浙江有限公司 A kind of optimiged index method and device based on T- tree
CN111538724A (en) * 2019-02-07 2020-08-14 株式会社特迈数据 Method for managing index
CN110188108A (en) * 2019-06-10 2019-08-30 北京平凯星辰科技发展有限公司 Date storage method, device, system, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113392089A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
US9672235B2 (en) Method and system for dynamically partitioning very large database indices on write-once tables
US6668263B1 (en) Method and system for efficiently searching for free space in a table of a relational database having a clustering index
CN107491523B (en) Method and apparatus for storing data objects
JP7507143B2 (en) System and method for early removal of tombstone records in a database - Patents.com
US20070100873A1 (en) Information retrieving system
US20100281013A1 (en) Adaptive merging in database indexes
CN111782659B (en) Database index creation method, device, computer equipment and storage medium
CN113392089B (en) Database index optimization method and readable storage medium
US12067279B2 (en) Metadata storage method and device
US8682872B2 (en) Index page split avoidance with mass insert processing
CN115935020A (en) Graph data storage method and device
US10558636B2 (en) Index page with latch-free access
KR101806394B1 (en) A data processing method having a structure of the cache index specified to the transaction in a mobile environment dbms
CN109408539B (en) Data operation method, device, server and storage medium
US8156126B2 (en) Method for the allocation of data on physical media by a file system that eliminates duplicate data
KR102321346B1 (en) Data journaling method for large solid state drive device
US12253974B2 (en) Metadata processing method and apparatus, and a computer-readable storage medium
KR102354343B1 (en) Spatial indexing method and apparatus for blockchain-based geospatial data
CN118051643B (en) Metadata sparse distribution-oriented LSM data organization method and device
US20080005077A1 (en) Encoded version columns optimized for current version access
US9824105B2 (en) Adaptive probabilistic indexing with skip lists
CN110413617B (en) Method for dynamically adjusting hash table group according to size of data volume
CN119960703B (en) Key-value storage inter-layer merging optimization method and device based on Rime application characteristics
JP2010191903A (en) Distributed file system striping class selecting method and distributed file system
US12339860B2 (en) Key-value based data storage device and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant