
CN110795395B - File deployment system and file deployment method - Google Patents


Info

Publication number
CN110795395B
CN110795395B (application CN201810856551.6A)
Authority
CN
China
Prior art keywords
file
target file
storage medium
target
local cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810856551.6A
Other languages
Chinese (zh)
Other versions
CN110795395A (en)
Inventor
陈康
胡波
龚振华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201810856551.6A priority Critical patent/CN110795395B/en
Publication of CN110795395A publication Critical patent/CN110795395A/en
Application granted granted Critical
Publication of CN110795395B publication Critical patent/CN110795395B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application discloses a file deployment system and a file deployment method. The system includes a hot deployment module and a cold deployment module. The cold deployment module is configured to write a plurality of files into an external storage medium through the cache management module, and to acquire files from the external storage medium and write them into a local cache space. The hot deployment module is configured to receive a query request for a target file and query, according to the request, whether the target file exists in the local cache space; when the target file is found, read it; when it is not found, query whether the target file exists in the external storage medium; and when the target file is found there, read it and store it in the local cache space. This scheme enables fast responses and improves reply efficiency.

Description

File deployment system and file deployment method
Technical Field
The present application relates to the field of internet technologies, and in particular, to a file deployment system and a file deployment method.
Background
In e-commerce operation or advertisement delivery scenarios, many operation activity pages, merchant brand pages, advertisement delivery pages, file configuration information, and the like must be published, and these files are pushed to external service machines to serve the public. On one hand, a large number of users execute page publication pushes at unpredictable times, and each push must take effect in real time; on the other hand, operation activity pages in particular may face traffic peaks at specific times, while merchant brand pages may be accessed by external users at any moment.
Millions of pages accumulate as an e-commerce platform operates, and the number keeps growing rapidly; in terms of page access, some long-established pages are still visited by external users at irregular times. Page storage on current e-commerce platforms faces the following problems:
1. From the perspective of mass page storage, any page may be accessed sporadically, so no page can be discarded, yet maintaining mass page storage requires substantial resources.
2. From the perspective of serving external users, the service must be provided as quickly and efficiently as possible, and the target page must be retrieved efficiently from an enormous page set while keeping performance sufficiently reliable, which also requires considerable resources.
3. From the perspective of development and operations, when a machine is replaced or capacity is expanded, a large number of pages must be conveniently migrated to the new machine for external service; the prior art regards this as an open problem for the industry.
Various file deployment solutions have been proposed in the art to address these problems, such as application publishing systems, file synchronization systems, and CDN delivery systems, but each has major drawbacks.
For example, when machines are expanded or replaced, an existing application publishing system must perform online actions such as compiling files, copying files, and restarting; for a large number of pages this is infeasible both in the time it consumes and in the required application restarts.
For another example, an existing file synchronization system mainly copies the corresponding files to every machine, and at the very least cannot solve file restoration and deployment when machines are replaced or expanded.
For yet another example, an existing CDN delivery system is constrained by the CDN domain name and lacks the capability to actively push changes.
Therefore, there is a need to provide a file deployment method and system to at least partially solve the problems of the prior art.
Disclosure of Invention
In view of the foregoing problems, an embodiment of the present application provides a file deployment system and a file deployment method to solve the problems in the prior art.
An embodiment of the application discloses a file deployment system, which comprises a hot deployment module and a cold deployment module;
the cold deployment module is used for writing a plurality of files into the external storage medium, acquiring the files from the external storage medium and writing the files into a local cache space;
the hot deployment module is configured to:
receiving a query request aiming at a target file, and querying whether the target file exists in the local cache space according to the query request;
reading the target file when the target file is found in the local cache space;
when the target file is not found in the local cache space, querying whether the target file exists in an external storage medium according to the query request; and
when the target file is found in the external storage medium, reading the target file and storing the target file in the local cache space.
An embodiment of the present application further discloses a file deployment method, including:
writing a plurality of files into an external storage medium, and acquiring files from the external storage medium and writing the files into a local cache space;
receiving a query request for a target file, and querying whether the target file exists in the local cache space according to the query request;
when the target file is not found in the local cache space, querying whether the target file exists in the external storage medium according to the query request; and
when the target file is found in the external storage medium, reading the target file and storing the target file in the local cache space.
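The method above amounts to a read-through cache in front of an external storage medium. The following Python sketch is illustrative only: plain dicts stand in for the external storage medium and the local cache space, and all names are hypothetical, not part of the patent.

```python
class FileDeployer:
    """Minimal sketch of the claimed method: cold-write files to external
    storage, then answer queries from a local cache with read-through."""

    def __init__(self, external_storage):
        self.external = external_storage  # dict standing in for the remote medium
        self.local_cache = {}             # dict standing in for the cache space

    def cold_deploy(self, files):
        # Write files to external storage, then pull them into the local cache.
        self.external.update(files)
        for name in files:
            self.local_cache[name] = self.external[name]

    def query(self, name):
        # Hot path: check the local cache first, then fall back to external
        # storage and promote the file into the cache for later requests.
        if name in self.local_cache:
            return self.local_cache[name]
        if name in self.external:
            content = self.external[name]
            self.local_cache[name] = content
            return content
        return None
```

A query that misses the local cache but hits the external medium deploys the file locally as a side effect, which is exactly the "hot deployment" step of the claim.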
An embodiment of the present application further discloses a computing processing device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the computing processing device to perform the above-described methods.
One embodiment of the present application also discloses one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause a computing processing device to perform the above-described methods.
As can be seen from the above, the file deployment method and system provided in the embodiments of the present application offer at least the following advantages:
in the scheme of the present application, upon receiving a query request for a target file, the executing entity first determines whether the target file has already been deployed in the system's local cache space; when it has not, the hot deployment module deploys the target file from the external storage medium into the local cache space, thereby completing file deployment. Because the local cache space evicts the oldest files in loading order, a frequently queried target file keeps being loaded into the local cache space and can be called quickly.
Therefore, for frequently accessed target files, the scheme of the present application achieves fast responses and improves reply efficiency; infrequently accessed files are called from the external storage medium, so they do not occupy the local cache space or add to the system overhead.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a file deployment system according to a first embodiment of the present application.
Fig. 2 is a system architecture diagram of a file deployment system according to a second embodiment of the present application.
Fig. 3 is an overall flowchart of a file deployment system according to a second embodiment of the present application.
Fig. 4 is a flowchart of a cold deployment of the file deployment system according to the second embodiment of the present application.
Fig. 5 is a disaster recovery flowchart of a file deployment system according to a second embodiment of the present application.
Fig. 6 is a hot deployment flowchart of the file deployment system according to the second embodiment of the present application.
Fig. 7 is a flowchart of an embodiment of a file deployment method according to a third embodiment of the present application.
Fig. 8 is a flowchart of an embodiment of a file deployment method according to a fourth embodiment of the present application.
Fig. 9 is a flowchart of the sub-steps of step S102.
Fig. 10 is a flowchart of the substeps of step S105.
FIG. 11 schematically shows a block diagram of a computing processing device for performing a method according to the present application.
Fig. 12 schematically shows a storage unit for holding or carrying program code implementing a method according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
One of the core ideas of the present application is to provide a file deployment system and a file deployment method. In the provided scheme, for a query request for a target file (a page file, a configuration file, etc.), the executing entity first searches the local cache space to determine whether the target file exists there. When the file is not found in the local cache space, it can be searched for in an external storage medium, as opposed to the local cache space; once found, the target file is returned and stored in the local cache space, completing its deployment.
The embodiments of the present application are described below by way of examples.
First embodiment
A first embodiment of the present application provides a file deployment system. The file deployment system can be applied to a hardware system comprising a deployment pushing end and a deployment receiving end. The deployment pushing end is, for example, a server of a content provider, and provides the files to be deployed; the deployment receiving end is, for example, a file deployment server, and receives the files pushed by the deployment pushing end and deploys them locally for users to access. There may be one or more deployment receiving ends, and they may communicate with the deployment pushing end through message middleware.
Fig. 1 is a schematic diagram of a file deployment system according to a first embodiment of the present application. As shown in fig. 1, the file deployment system includes a hot deployment module 21 and a cold deployment module 22.
The hot deployment module 21 may be configured to receive a query request for a target file and query whether the target file exists in the local cache space according to the query request; when the target file is not found in the local cache space, to query whether the target file exists in an external storage medium according to the query request; and when the target file is found in the external storage medium, to read the target file and store it in the local cache space.
The cold deployment module 22 may be configured to write a plurality of files to an external storage medium, and obtain files from the external storage medium and write the files to the local cache space.
When a request is sent from a user side to the hot deployment module 21, the hot deployment module 21 searches the local cache space using the target-file information carried in the request. When the corresponding target file is not found in the local cache space, the search can proceed to the external storage medium. After the target file is found in the external storage medium, it is deployed to the local cache space to facilitate subsequent calls.
The external storage medium stores a plurality of files. These files are stored in an external storage medium by the cold deployment module 22. The cold deployment module 22 obtains the file from the external storage medium in addition to storing the file in the external storage medium, and stores the file in the local cache space.
In an optional embodiment, the hot deployment module 21 may be configured to read the target file when the target file is found in the local cache space.
The scheme of the present application is suited to a mass storage mechanism and can simultaneously solve the problems of mass storage and efficient retrieval. If a certain target file A1 is queried frequently, it always remains in the local cache space, so external query requests can find it efficiently. If a certain target file A2 is queried only a few times, the inactive file A2 overflows from the local cache space as other target files (for example, the aforementioned A1) are called into it, and remains stored only in the external storage medium.
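The overflow behavior just described matches a bounded least-recently-used queue (the second embodiment later describes the memory cache as an LRU queue). A minimal illustrative sketch, assuming a fixed capacity and treating evicted files as still available on the external medium:

```python
from collections import OrderedDict

class LRUFileCache:
    """Sketch of the local cache space as a bounded LRU queue. Files
    evicted here are not lost: the external storage medium keeps them."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, name):
        if name not in self._items:
            return None
        self._items.move_to_end(name)        # mark as most recently used
        return self._items[name]

    def put(self, name, content):
        self._items[name] = content
        self._items.move_to_end(name)
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used
```

With this policy, a frequently requested file (A1) keeps being refreshed to the "recent" end and stays cached, while a rarely requested file (A2) eventually reaches the "old" end and is evicted.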
In the scheme of the present application, when receiving a query request for a target file, the executing entity first determines whether the local cache space has deployed the target file; when the target file is detected as not deployed, the hot deployment module 21 deploys it from the external storage medium into the local cache space, thereby completing file deployment. A frequently queried target file therefore always exists in the local cache space and can be called quickly.
Thus, for frequently accessed target files, the scheme achieves fast responses and improves reply efficiency; infrequently accessed files are called from the external storage medium, so they do not occupy the local cache space or add to the system overhead.
Second embodiment
Fig. 2 is a schematic diagram of a file deployment system proposed in a second embodiment of the present application. Building on fig. 1, the file deployment system of this embodiment may further include a disaster recovery module 23, a cache management module 24, and a blacklist management module 25.
The file deployment system may include hardware such as a deployment pushing end 300 and a deployment execution end 200. The cold deployment module 22 described above may include a pushing unit and an execution unit. The pushing unit of the cold deployment module 22 constitutes the deployment pushing end 300, which performs the deployment pushing function; the hot deployment module 21, the disaster recovery module 23, and the execution unit of the cold deployment module 22 constitute the deployment execution end 200, which performs the deployment function.
The hot deployment module 21 may be triggered by a user request: when the user needs a specific service but the cache management module 24 holds no relevant content, the hot deployment module 21 is triggered. Based on an identifier such as the request path, the hot deployment module 21 attempts to deploy the corresponding content locally from an external storage medium (e.g., the remote storage device 26 shown in fig. 2) and updates the cache management module 24 to trigger the corresponding cache settings.
The cold deployment module 22 mainly enables the deployment pushing end 300 (e.g., a content provider) to add and modify service content. The target object to be deployed (e.g., a page or a configuration file) is first written into a storage medium (e.g., the remote storage device 26 in fig. 2), and a message is sent to notify the deployment execution end 200. After receiving the message, the deployment execution end 200 parses it, obtains from it the address of the file in the external storage medium, and fetches the target file content from that medium. It then updates the memory cache, the disk backup, and the disk cache with the fetched content: the cache updates serve the latest content externally, while the disk-related cache and backup support memory cache restoration and act as the bottom layer of the disaster recovery module.
The disaster recovery module 23 sequentially fetches the corresponding entity information and content from the local cache space (e.g., the file memory cache and the file disk cache), the file disk cache backup, and the external storage medium, returning as soon as the content is retrieved; otherwise the deployment fails, an alarm is raised, and the blacklist management module 25 is notified to update the blacklist.
The cache management module 24 may manage the storage spaces of the file deployment system, such as the local cache space and the external storage medium. The local cache space includes, for example, a file memory cache and a file disk cache, and may further include a backup of the file disk cache, an address memory cache, an address disk cache, and the like, which are not detailed here. The cache management module 24 thus spans the file and address memory caches, the file and address disk caches, and the external storage medium, and can be regarded as managing three levels of cache and storage, greatly improving overall performance. The memory cache may use an LRU (Least Recently Used) queue, so that a first-level cache holding a certain number of the most frequently accessed files is formed in memory at minimal memory cost; target files accessed occasionally are synchronized to disk through hot deployment, so that occasionally used files (for example, pages and traffic-splitting configuration files) are stored in the disk cache to form a second-level cache; the remaining files with uncertain access patterns are stored on the external storage medium to form a third-level cache. The three cache levels cooperate to maintain high query performance while supporting a huge number of pages.
The blacklist management module 25 effectively avoids the time-consuming repetition of hot deployment when facing invalid requests or a flood of attack requests, thereby avoiding system resource jitter; it is a precondition for real-time hot deployment and for the disaster recovery module, without which the system would have an obvious vulnerability. The hot deployment module 21 stores the identifier of any request whose first hot deployment and disaster recovery handling both failed, and directly intercepts the same request when it matches that identifier a second time. A blacklisted entity identifier is removed when the pushing unit of the cold deployment module 22 at the deployment pushing end 300 pushes content hitting the same identifier, so that subsequent related requests are no longer intercepted. In short, the module communicates with the other modules to intercept illegal requests, forming an adaptive, self-updating in-memory mapping.
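The blacklist lifecycle described above (record on failure, intercept on repeat, clear on a matching cold-deploy push) can be sketched as a simple set of identifiers. This is an illustrative stand-in, not the patent's implementation; all names are hypothetical:

```python
class BlacklistManager:
    """Sketch of the blacklist module: remembers identifiers whose hot
    deployment and disaster recovery both failed, so repeated invalid
    requests (or flooding attacks) are rejected without re-running the
    expensive deployment path."""

    def __init__(self):
        self._blocked = set()

    def is_blocked(self, file_id):
        # Checked before hot deployment: a hit means intercept immediately.
        return file_id in self._blocked

    def record_failure(self, file_id):
        # Called when both hot deployment and disaster recovery failed.
        self._blocked.add(file_id)

    def on_cold_push(self, file_id):
        # The cold deployment push made the file exist, so stop
        # intercepting requests for this identifier.
        self._blocked.discard(file_id)
```

The key property is that the blacklist is self-correcting: a push from the deployment pushing end automatically un-blocks the identifier it deploys.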
The overall process, the cold deployment process, the hot deployment process and the disaster recovery process in the file deployment system are described below with reference to the disclosures in fig. 3 to 6.
Fig. 3 is an overall flowchart of an alternative embodiment of the present application, and as shown in fig. 3, the overall flowchart of the file deployment system includes the following steps:
a query request is issued by the user, as shown in step 2001. After the deployment execution end 200 receives the query request, it executes step 2002 to query the target file; the content being queried was previously pushed from the deployment pushing end 300 to the deployment execution end 200. Step 2003 determines whether the query succeeded. If so, step 2004 reads the file and replies to the user's query; if not, step 2005 enters the hot deployment module and queries whether the external storage medium holds the target file.
Step 2006 then determines whether that query succeeded. If so, step 2007 replies to the user's query request; if not, step 2008 enters the disaster recovery module 23 and queries the file disk backup of the local cache space. Step 2009 determines whether the file can be found in the file disk backup. If found, step 20010 reads the file and replies to the user's request; if not, step 20011 rejects the response.
Fig. 4 is a flowchart illustrating a cold deployment of the file deployment system according to an embodiment of the present application. As shown in fig. 4, for at least one file to be deployed, in step 4001 the cold deployment module 22 at the deployment pushing end 300 first calculates the file's storage bucket number according to a preset calculation method; in step 4002 it uploads the file to the external storage medium and obtains the address at which the file is stored; in step 4003 it may send the routing map between the calculated bucket number and the obtained address to a routing storage medium (e.g., an external or remote address storage medium); and in step 4004 it sends a message containing the address and the bucket number to the message middleware.
In step 4005, the message middleware delivers the message to the message monitoring module of the deployment execution end 200; meanwhile, per step 4003, the file is stored in the file storage medium of the external storage medium and the address is stored in the address storage medium. In step 4006 the file address is obtained; in step 4007 the routing map between the bucket number and the address is obtained; and in step 4008 the file is queried by address to determine whether it can be found in the file storage medium. In step 4009, when the target file is found to exist, the blacklist is modified: the target file is no longer considered a blacklist file, and the file or its address, identifier, and the like are deleted from the blacklist cache. Meanwhile, step 40010 writes the target file into the file memory cache; step 40011 may also back up the previously deployed file into the file disk backup; and step 40012 writes the content of this deployment to disk, i.e., into the file disk cache. In addition, after step 4007, step 40081 may query the address of the target file by its bucket number to determine whether a corresponding address exists, and step 40082 writes the bucket number and the address into the address disk cache when the address exists.
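The patent does not specify the "preset calculation method" for the storage bucket number, only that the pushing and executing ends must derive the same bucket from the same file path. A stable hash of the path is one plausible choice; the sketch below is a hypothetical illustration under that assumption:

```python
import hashlib

def bucket_number(file_path, num_buckets=1024):
    """Hypothetical bucket-number calculation: a stable digest of the
    file path, reduced modulo the bucket count. Both the deployment
    pushing end and the deployment execution end would compute the same
    bucket for the same path, which is what the routing map requires."""
    digest = hashlib.md5(file_path.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets
```

Using a content-independent digest of the path (rather than Python's built-in `hash`, which varies across runs) keeps the bucket number reproducible across machines and restarts.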
Fig. 5 is a disaster recovery flowchart according to an embodiment of the present application. As shown in fig. 5, the disaster recovery module 23 first searches the local disk to determine whether the target file exists: the executing entity reads the file disk cache, and step 6002 determines whether the read succeeded. If so, step 6003 returns a response; if not, step 6004 fetches the file from the local file disk backup, and step 6005 determines whether that read succeeded. If so, step 6006 returns the target file; if not, step 6007 calculates the bucket number from the path, and step 6008 obtains the file address from disk by bucket number, i.e., looks up the mapping between bucket number and address in the address disk cache to determine the file's address. Step 6009 determines whether this read succeeded; if so, step 6010 obtains the file content from the external storage medium (e.g., the file storage medium) according to the file address; if not, step 6011 obtains the file address from the external storage medium (e.g., the address storage medium) according to the bucket number and returns it. Step 6013 determines whether the read succeeded; if so, step 6014 reads and returns the target file; if not, step 6015 rejects the response to the user's request.
The main significance of the disaster recovery module 23 is that it searches the local disk cache and the local disk cache backup in turn; when the target file is not found there, it searches again by address, and when the file still cannot be located through the address, the disaster recovery module 23 is considered to have failed and the query request cannot be answered correctly. The disaster recovery module 23 improves the safety of the system: when a problem occurs in any earlier link, it provides bottom-layer protection, preventing a request for a valid target file from being misjudged as a blacklist-file request and improving system stability while keeping the system highly responsive.
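The fallback chain of fig. 5 can be condensed into a single lookup function. This is a sketch only: plain dicts stand in for each storage medium, and the bucket calculation is a hypothetical stand-in for the patent's unspecified preset method:

```python
def disaster_recovery_lookup(file_id, disk_cache, disk_backup,
                             address_cache, external_addresses, external_files):
    """Sketch of the fig. 5 fallback chain: file disk cache -> file disk
    backup -> (bucket number -> address) via the address disk cache or
    the external address storage medium -> external file storage."""
    if file_id in disk_cache:                     # step 6001: file disk cache
        return disk_cache[file_id]
    if file_id in disk_backup:                    # step 6004: file disk backup
        return disk_backup[file_id]
    bucket = sum(file_id.encode("utf-8")) % 1024  # step 6007: bucket from path
    # steps 6008/6011: address from the local address disk cache,
    # falling back to the external address storage medium
    address = address_cache.get(bucket, external_addresses.get(bucket))
    if address is not None and address in external_files:
        return external_files[address]            # steps 6010/6014: read, return
    return None                                   # step 6015: reject and alarm
```

Only when every tier fails does the function give up, mirroring the "bottom protection" role the module plays before a request is treated as a blacklist candidate.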
Fig. 6 is a flowchart of the hot deployment module 21 according to an embodiment of the present application. As shown in fig. 6, the hot deployment flow proceeds as follows.
In step 7001, a query request initiated by a user is received, and the deployment execution end 200 queries according to the file identifier. In step 7002, the blacklist management module 25 of the deployment execution end 200 determines whether the file identifier of the target file hits the blacklist, searching the blacklist cache in step 7003 to confirm whether a corresponding file identifier exists there. When the identifier is not blacklisted, the flow enters step 7004 to determine whether the target file can be obtained from the file memory cache of the local cache space, reading the file memory cache where the query is made. When this query succeeds, step 7005 responds to the user's query, i.e., sends the queried target file to the user. When it fails, step 7006 searches the file disk cache to determine whether the target file can be obtained from disk; when that query succeeds, step 7007 sends a response and step 7008 writes the queried target file into the file memory cache, and when it fails, hot deployment 7010 is performed.
During hot deployment, step 7011 first calculates the storage bucket number of the target file. Step 7012 then queries the address by bucket number; in this step the lookup may go to the address storage medium. When the query succeeds, step 7013 may read the file storage medium, querying the file by address, and step 7014 writes the mapping between bucket number and address into the address disk cache. After the query succeeds, step 7021 reads the file content and sends a response; meanwhile, the file may be written into the file disk cache and copied to the file disk backup in step 7013, and step 7017 may write the file into the file memory cache to facilitate subsequent calls. When querying the file by address fails, step 7016 may check whether a backup of the target file exists in the backup space, and step 7018 determines whether it can be found. When it is found, a response is replied, as shown in step 7019, and a disaster recovery alarm is issued to notify the developers that this query returned the target file by searching the backup space rather than through the normal path. Since the backup of the target file was found, the request is not an invalid request or an attack, and the identifier of the target file may be deleted from the blacklist cache. When the backup of the target file is not found in the backup space either, the query request is an invalid request or a malicious attack, and the server may reject the response, as shown in step 7022.
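The full request path of fig. 6 — blacklist check, memory cache, disk cache with promotion, hot deployment by bucket/address, then the backup fallback — can be sketched end to end. Dicts and a set stand in for the real modules, and the bucket calculation is a hypothetical stand-in, so this is illustrative only:

```python
def serve_request(file_id, blacklist, memory_cache, disk_cache,
                  address_store, file_store, disk_backup):
    """Sketch of the fig. 6 request path. `blacklist` is a set of
    identifiers; the other arguments are dicts standing in for the
    corresponding caches and storage media."""
    if file_id in blacklist:                           # step 7002: intercept
        return None
    if file_id in memory_cache:                        # steps 7004/7005
        return memory_cache[file_id]
    if file_id in disk_cache:                          # steps 7006-7008
        memory_cache[file_id] = disk_cache[file_id]    # promote to memory
        return memory_cache[file_id]
    # Hot deployment (step 7010): resolve address by bucket, read the file.
    bucket = sum(file_id.encode("utf-8")) % 1024       # step 7011 (stand-in)
    address = address_store.get(bucket)                # step 7012
    if address is not None and address in file_store:  # step 7013
        content = file_store[address]
        memory_cache[file_id] = content                # steps 7013/7017: deploy
        disk_cache[file_id] = content
        return content
    if file_id in disk_backup:                         # steps 7016-7019
        blacklist.discard(file_id)                     # backup found: not an attack
        return disk_backup[file_id]
    blacklist.add(file_id)                             # step 7022: invalid request
    return None
```

A first miss that resolves through external storage leaves the file deployed in both caches, so the second request for the same identifier is a pure memory hit; a request that fails every tier is blacklisted and intercepted thereafter.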
As can be seen from the above, in this embodiment, the cache management module 24 may be configured to manage a local cache space for storing files and an external storage medium;
as shown in step 4003 in fig. 4, the cold deployment module 22 is configured to write a plurality of files into the external storage medium through the cache management module, and as shown in step 40010 in fig. 4, obtain a file from the external storage medium through the cache management module and write the file into a local cache space;
as shown in step 7001 in fig. 6, the hot deployment module 21 receives a query request for a target file, and queries whether the target file exists in the local cache space according to the query request in steps 7005 and 7009;
when the target file is found in the local cache space, reading the target file through the cache management module;
when the target file is not found in the local cache space, as shown in step 7010, querying whether the target file exists in an external storage medium according to the query request; and
when the target file is found in the external storage medium, the target file is read by the cache management module and stored in the local cache space in step 7013.
The operation of querying whether the target file exists in the files in the local cache space by the hot deployment module 21 according to the query request may include the following operations:
as in step 7005 in fig. 6, the file memory cache of the local cache space is searched to determine whether the target file exists;
reading the target file when the target file is found in the file memory cache;
in step 7009, when the target file is not found in the file memory cache, finding a file disk cache of a local cache space, and determining whether the target file exists;
when the target file is found in the file disk cache, in step 7008, the target file is read and written into the file memory cache from the file disk cache.
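The two-level lookup in steps 7005 to 7008 can be sketched as follows (a hypothetical illustration; plain dicts stand in for the file memory cache and file disk cache):

```python
def query_local_cache(file_id, mem_cache, disk_cache):
    """Search the file memory cache first; on a miss, fall back to the
    file disk cache and promote the hit back into the memory cache."""
    content = mem_cache.get(file_id)
    if content is not None:
        return content
    content = disk_cache.get(file_id)
    if content is not None:
        mem_cache[file_id] = content  # step 7008: write back to the memory cache
    return content  # None means the file must be hot-deployed
```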
As mentioned above, the system further comprises a blacklist management module 25, and the blacklist management module 25 may be configured to:
as in step 7002, determining whether the query request is a blacklist request stored in a blacklist;
and when the query request is a blacklist request, stopping querying the target file.
In an embodiment, the operation of stopping querying the target file when the query request is a blacklist request may specifically include:
and intercepting the query request when the query request is a blacklist request, for example, preventing the query request from accessing other modules.
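The interception can be sketched as a simple guard executed before any other module is reached (a hypothetical illustration; `query_fn` stands in for the downstream cache and storage lookup):

```python
def handle_query(file_id, blacklist, query_fn):
    """Check the blacklist before any cache or storage lookup; a hit is
    intercepted so it never reaches the other modules (step 7002)."""
    if file_id in blacklist:
        return None  # intercepted: no hot deployment, no disaster recovery
    return query_fn(file_id)
```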
As described above, the local cache space may further include a file disk cache backup for storing a file, the system further includes a disaster recovery module 23, and the disaster recovery module 23 is configured to:
when the target file is not found in the external storage medium by the hot deployment module 21, as shown in steps 6001, 6004, 6010 in fig. 5, querying at least one of the file disk cache, the file disk cache backup, and the external storage medium, and determining whether the target file exists;
and reading the target file when the target file exists.
In an embodiment, the disaster recovery module 23 may be further configured to send a prompt message to prompt the developer that the query is an abnormal query, so as to remind the developer to pay attention and eliminate the problem.
In an embodiment, the hot deployment module 21 or the cold deployment module 22 may be further configured to:
and writing the read target file into a file disk cache backup to prepare for disaster recovery.
In one embodiment, querying the file memory cache and the file disk cache only requires knowledge of the request path or page ID, and does not require knowledge of the address. Therefore, the operation of the hot deployment module 21 querying whether the target file exists in the local cache space according to the query request may include:
acquiring a request path or a page ID of a target file from the query request; and
and determining whether the target file exists in the determined local cache space or not according to the request path or the page ID.
In a specific embodiment, querying the external storage medium may determine the address of the target file on the external storage medium by calculating a bucket number and using a mapping relationship between the bucket number and the address. Therefore, when the target file is not found in the local cache space, the querying whether the target file exists in the external storage medium according to the query request by the hot deployment module 21 may include:
as shown in step 7011 in fig. 6, the storage bucket number of the target file is calculated according to the query request;
in steps 7012 and 7013, the external storage medium is searched according to the mapping between the storage bucket number and the storage address, and whether the target file exists is determined.
In an embodiment, the operation of the cold deployment module 22 obtaining a file from the external storage medium through the cache management module and writing the file into the local cache space may specifically include:
as in step 4001 in fig. 4, the bucket number of the target file is calculated;
in step 4002, the external storage medium is searched according to the bucket number, the target file is obtained, and the address of the target file is determined;
in step 4003, write the target file into the file memory cache of the local cache space, and write the address and the storage bucket number of the target file into the address memory cache of the local cache space.
In one embodiment, the cold deployment module 22 is further configured to:
step 4012, writing the target file into a file disk cache of the local cache space, and writing the address and the barrel number of the target file into an address disk cache of the local cache space.
In one embodiment, the cold deployment module 22 is further configured to:
in step 40011, when it is detected that a target file with the same identifier already exists in the file disk cache, the original target file in the file disk cache is written into the file disk cache backup, and the address of the original target file is written into the address disk cache backup.
In one embodiment, the cold deployment module 22 is further configured to:
in step 4003, the target file is written to a file storage medium of an external storage medium, and the address and the memory bucket number of the target file are written to an address storage medium of the external storage medium.
In an embodiment, the operation of the cold deployment module 22 obtaining a file from the external storage medium through the cache management module and writing the file to the local cache space further includes:
in step 4009, notify the blacklist management module to delete the target file corresponding to the query request from the blacklist.
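The cold deployment write path of steps 4001 to 4012, including the blacklist cleanup of step 4009, can be sketched as follows; all names, the md5-based bucket calculation, and the address format are assumptions:

```python
import hashlib

BUCKET_COUNT = 1024  # assumed fixed number of buckets

def bucket_number(file_id: str) -> int:
    # The calculation method is chosen by the developer; md5 is an assumption.
    return int(hashlib.md5(file_id.encode("utf-8")).hexdigest(), 16) % BUCKET_COUNT

def cold_deploy(file_id, content, file_store, address_store,
                mem_cache, disk_cache, blacklist):
    """Persist a file to the external storage medium, populate the local
    caches, and clear any blacklist entry (cf. steps 4001-4012 and 4009)."""
    bucket = bucket_number(file_id)
    address = f"bucket{bucket}/{file_id}"       # address issued by the file storage medium
    file_store[address] = content               # file storage medium
    address_store[(bucket, file_id)] = address  # address storage medium
    mem_cache[file_id] = content                # file memory cache
    disk_cache[file_id] = content               # file disk cache
    blacklist.discard(file_id)                  # step 4009: the file now exists, unblock it
    return address
```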
In summary, the file deployment system proposed in the second embodiment of the present application has at least the following advantages:
in the scheme of the application, when a query request for a target file is received, the execution main body first determines, through the cache management module, whether the target file has been deployed in the local cache space of the system; when it detects that the file is not deployed, the hot deployment module deploys the target file from the external storage medium to the local cache space, thereby realizing file deployment. Because the local cache space evicts older files according to the order in which they were loaded, a frequently queried target file is repeatedly loaded into the local cache space and can be called quickly.
Therefore, for target files that are accessed frequently, the scheme of the application can realize quick response and improve response efficiency; target files that are accessed infrequently can be called from the external storage medium, so they do not occupy the local cache space and do not affect the system overhead.
In addition, the customer group facing a large internet company is very wide; for example, there may be tens of thousands of customers. As business develops and demand increases, each user may edit its own creatives and publish them at any time; if each user edits dozens or hundreds of different advertising creatives or operation pages, the total number of pages may break through millions or even tens of millions, and may continue to grow quickly. In the scheme adopted by the embodiment of the application, files are stored persistently in the file storage medium; the order of magnitude of pages that can be stored depends on the size of the allocated file storage medium, and there is theoretically no upper limit on the number of pages.
Meanwhile, file addresses returned from the file storage medium depend on the specific characteristics of the storage medium; for example, some file servers use a combination of url and md5 to locate files, so a mapping between files and file addresses needs to be established. The bucketing mechanism provides an intermediate bridge between massive files and file reading, and ensures that the scalability and maintainability of storage can be kept even as the number of pages grows quickly.
In the scheme of the application, access uncertainty exists for many advertisement pages, two-hop pages and the like, especially pages published by small-scale advertisers or merchants. For example, the traffic of a normally low-traffic page may suddenly increase when the advertiser runs a promotion; a large number of pages may receive only dozens of requests every day; and some pages lose the maintenance of their advertiser and are accessed irregularly. Requests from external users cannot be predicted, and it is necessary to ensure a response for any page regardless of how it is accessed. In the embodiment of the application, a multi-level cache mechanism is designed, comprising a memory level, a local disk level and an external storage medium level. The access cost difference between cache levels is huge; for example, the access time of the memory-level cache is basically negligible, while fetching from the external storage medium level takes tens of milliseconds.
In order to meet the high-performance access requirement, the scheme of the embodiment of the application adaptively places pages into caches of different levels according to page liveness. For example, when an advertiser suddenly begins to promote its advertising creative one day, the system first accesses the memory cache; if the page is not found, the disk cache is read and the page is kept in the memory cache; if it is still not found, one hot deployment is executed to copy the file from the external storage medium into the memory cache and the disk cache. Subsequent requests are then served directly from the memory cache, so the overall content-serving performance of the system is extremely high, and the deployment cost of an active page is close to zero. On the other hand, for inactive pages, the LRU queue used by the memory cache has a given size, for example, only 5000 pages are kept; when the number of pages in memory approaches or equals 5000, pages that are not frequently accessed are gradually removed from the memory cache. Such pages may be kept in the local disk cache, waiting to be read directly from the local disk on the next call, or stored in the external file storage medium, waiting to be synchronized to the memory cache and local disk by the next hot deployment. Adapting the cache level of a page file to its activity has great value for average access performance: verified in practice on page files at the million magnitude, the average read performance is less than 0.3 millisecond, and in some verifications less than 0.1 millisecond, so reading pages at the million magnitude has a negligible influence on the whole service.
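The fixed-size LRU queue described above (for example, keeping only 5000 pages in memory) can be sketched with a standard ordered dictionary; this is an illustrative assumption, not the embodiment's actual cache implementation:

```python
from collections import OrderedDict

class LRUFileCache:
    """Fixed-capacity memory cache; the least recently used pages fall out
    when the capacity (e.g. 5000 pages) is exceeded."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used page
```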
In addition, the hot deployment scheme designed in the preferred embodiment of the present application, a mechanism for dynamically deploying a single entity while the application is running online, can greatly relieve the operation and maintenance burden on technicians, and avoids technicians manually maintaining massive files during application capacity expansion and machine replacement.
The blacklist interception scheme in the preferred embodiment of the application can prevent abnormal requests and attacks and ensure the overall stability of the system. On one hand, it filters illegal requests, reducing unnecessary hot deployment and disaster recovery execution; on the other hand, an alarm can be added to notify technicians of possible attacks and exceptions. For such a large service system, illegal requests and attack requests are common, and both may trigger hot deployment; a failed hot deployment requires querying files in various places, which is much more expensive than a successful cache response or hot deployment. For example, a large number of illegal requests from malicious actors may trigger frequent deployment actions across the entire server cluster and eventually even exhaust the hardware resources of the system. In the embodiment of the application, to guarantee the robustness and feasibility of the whole scheme, a blacklist interception mechanism is designed: when an illegal request comes in, it passes through the cache management module, then the hot deployment module, and finally the disaster recovery module; when all modules fail, the request is added to the blacklist cache, after which all such illegal requests are directly judged and intercepted in the memory cache, saving later resource overhead. If an intercepted identifier is later selected by an advertiser as the access identifier of its advertising creative, then when the advertiser publishes the advertisement through cold deployment, the relevant identifier is deleted from the blacklist cache, and all subsequent requests are served as legal requests.
The disaster recovery scheme in the preferred embodiment of the present application is designed to complete the whole service with the simplest possible logic; the advantage of simplicity is that faults are less likely to occur. On one hand, it ensures that the service cluster remains available to the outside on a best-effort basis; on the other hand, it serves as a parallel service scheme started in case of abnormality and, together with the normal service scheme, forms a two-pronged scheme guaranteeing system availability.
In the preferred embodiment of the application, all the functional modules are organically combined and cooperate with each other to form a complete scheme, realizing an independent and complete deployment unit that supports a large number of pages with high performance and reliability. The deployment schemes in the embodiment of the application can be combined in series: one set of deployment schemes for one deployment entity is independent, and deployment schemes for different deployment entities (such as pages and splitting configuration files) can be combined to complete a combination for a specific service. When a service platform provides a page content service, a splitting service can be provided to complete an ABtest: using the method provided by the embodiment of the application, one deployment set can be configured for splitting files and another for page files, realizing a combined scheme of deploying multiple entities. Deployment of the splitting files and the page files is realized in the application container through different calls, and the corresponding service logic is executed, so that a complete service requirement scheme is easily created.
Compared with an online system, the scheme of the application is light enough and can be deployed and take effect in real time without restarting the application; deploying massive pages through an online application system is basically infeasible in terms of execution time, hard disk storage and the like. Compared with a file synchronization system, which mainly copies corresponding files to each machine, many problems of file restoration and deployment during machine replacement and capacity expansion remain unsolved there, especially for massive pages; the embodiment of the application achieves dynamic machine expansion and replacement by means of hot deployment. In addition, the method of the embodiment of the application adds characteristics such as multi-level caching and disaster recovery guarantees, and has extremely high performance in reading file contents. Compared with a CDN file push system, which simply obtains files over HTTP, the multi-level cache and disaster recovery mechanisms provided by the embodiment of the application can provide file query service more efficiently and can be directly embedded into the application container. In addition, the embodiment of the application has push capability to the deployment terminal, and messages can be pushed to the deployment terminal in real time to take effect.
The third embodiment:
an embodiment of the present application provides a file deployment method, as shown in fig. 7, the method may include the following steps:
s101, writing a plurality of files into the external storage medium, acquiring the files from the external storage medium and writing the files into a local cache space;
in this step, before determining whether the request is a blacklist request or searching the local cache space, a plurality of target files may be deployed in an external storage medium. For example, a plurality of target files (web pages, files, etc.) may be stored in a file storage medium, and correspondingly, a mapping of the bucket numbers and addresses of the target files may be stored in an address storage medium. Optionally, a part of the target file which is determined in advance to be called with high frequency may be directly stored in the file memory cache and the file disk cache of the local cache space, which is not particularly limited herein.
After the files are written into the external storage medium, they can be written into the local cache space through a cold deployment process or a hot deployment process.
S102, receiving a query request aiming at a target file, and querying whether the target file exists in the local cache space according to the query request;
in this step, when a user performs an operation on the client to try to obtain a target file, such as web page information, a query request is initiated by clicking a link or the like. The client sends the query request to an execution subject, for example, a server, and triggers the server to query the target file in the storage space.
For example, a user clicks a website link of a certain web page in the client and wants to access the web page. At this time, the web page is the target file, and the website is the address of the target file. The client sends a query request containing the website to the server, and the server receives the query request and searches the webpage content according to the website in the query request in the subsequent steps or searches according to the identification of the target file carried in the query request.
The target file may include the above-mentioned page file, and may further include, for example, a splitting configuration file. The page file is, for example, a page description such as an html file, a server page template file, or another customized file capable of fully rendering a page; generally, it is any file, such as one in JSON or html format, from which a page can be rendered at the server. The page file contains the identifier of the page, the contained components or specific code, configured operation data, and the like.
The splitting configuration file is used for splitting traffic among a plurality of pre-stored target files; it is used to find page identifiers according to request paths and contains information such as the splitting ratio, for example, approximately 20% of requests access address A and 80% access address B. The splitting configuration file is, for example, a conventional ABtest description file, a JSON file, or a file with a custom description of the splitting information.
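The splitting ratio described above can be applied deterministically by hashing a request key; this is a hypothetical sketch, and the hashing choice is an assumption:

```python
import hashlib

def split_route(request_key: str, ratios):
    """Route a request per the splitting configuration, e.g.
    ratios = [("A", 0.2), ("B", 0.8)] sends about 20% of keys to A."""
    h = int(hashlib.md5(request_key.encode("utf-8")).hexdigest(), 16) % 100
    threshold = 0.0
    for address, ratio in ratios:
        threshold += ratio * 100
        if h < threshold:
            return address
    return ratios[-1][0]  # rounding fallback
```

Because the same key always hashes to the same value, a given user is routed consistently between experiment arms.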
The storage space comprises a local cache space and an external storage medium. The local cache space comprises a file memory cache, a file disk cache and the like. A file memory cache (LRU cache) is a memory cache of a set size created on a Least Recently Used (LRU) basis. The file disk cache refers to a disk cache for storing target files. The external storage medium refers to a medium for external storage, such as a file storage medium and an address storage medium. The file storage medium is, for example, a system for storing cloud objects and the like, and is mainly suitable for file storage; the address storage medium is, for example, a relational database or a nosql database, which can persistently store simple mapping relationships.
In this step, a query may be performed on the local cache space according to the content included in the query request, such as a file address, a file ID, file content itself, a CDN address of the file, and so on, to determine whether the target file can be found. In an embodiment, the target file may be searched in the file memory cache according to the address or the identifier of the target file, and when the target file cannot be searched, the target file may be searched in the file disk cache. Or searching in the file memory cache and the file disk cache at the same time, and determining whether the target file exists according to the address or the identifier.
S104, when the target file is not found in the local cache space, inquiring whether the target file exists in an external storage medium according to the inquiry request;
in this step, when the target file is not found in the local cache space, for example, neither the file memory cache nor the file disk cache of the local cache space finds the target file, the target file may be continuously found in the external storage medium.
As described above, the external storage medium refers to a medium for external storage, such as a file storage medium and an address storage medium. The file storage medium is a system for storing cloud objects and the like, and is suitable for file storage; the external storage medium can be infinitely expanded in theory, and the requirement on the search efficiency can be met as long as the storage is organized in a proper manner.
S105, when the target file is found in the external storage medium, reading the target file and storing the target file in the local cache space.
In this step, when it is detected that the target file exists in the external storage medium, the target file is read from the external storage medium, and the read target file may be transmitted to the client. In addition, the target file can be written into a local cache space, such as a file memory cache and a file disk cache, so as to be called directly from the local cache space next time.
Optionally, in step S102, after the step of receiving an inquiry request for a target file and inquiring whether the target file exists in the local cache space according to the inquiry request, the method may further include the following step S103:
s103, reading the target file when the target file is found in the local cache space;
in this step, when the server finds the target file, the target file is read, and the target file may be returned to the client.
For some target files with frequent access, the target files are frequently loaded into the local cache space, so that the corresponding target files can be searched from the local cache space during each search. After finding, the target file can be read and returned to the client.
Fourth embodiment
Fig. 8 is a method flowchart of the fourth embodiment. As shown in fig. 8, in this embodiment, in addition to the foregoing steps S101 to S105, the method may further include the steps of:
s101a, judging whether the query request is a blacklist request stored in a blacklist;
s101b, when the query request is a blacklist request, stopping querying the target file.
In the above two steps, it may be determined whether the query request is a blacklist request; for example, the address of the target file carried in the query request is one of the addresses stored in a blacklist, or the identifier of the target file carried in the query request is one of the identifiers stored in a blacklist. In such cases, the query request may be determined to be a blacklist request, for example, an abusive attack initiated by a network hacker. When the received query request is a blacklist request, processing it in the conventional way could crash the system, so no processing is performed, or prompt information may be returned. Furthermore, the source of the query request can be tracked, and other query requests initiated by that source can be shielded.
In the embodiment of the application, the step of judging whether a request is a blacklist request is performed before the local cache space is queried, which saves system overhead and avoids the problem that excessive blacklist requests cause the execution main body to search the local cache space or external storage space, reducing system efficiency.
In an embodiment, the method may further include:
s106, when the target file is not found in the external storage medium, inquiring at least one of the file disk cache, the file disk cache backup and the external storage medium, and determining whether the target file exists;
s107, when the target file exists, reading the target file.
The purpose of the disaster recovery procedure in steps S106 and S107 is to implement disaster recovery by using a non-optimal solution in place of the current optimal solution. For example, when the target file is not found in the local file memory cache, a non-optimal scheme may be started, that is, the target file is searched for in the file disk cache, the file disk cache backup, and the external storage medium. When the target file is found, the disaster tolerance degradation is successful: an alternative scheme has found the target file the user requires, and the user's query request is responded to. At the same time, disaster recovery degradation requires notifying developers that the query is not currently being served by the optimal scheme; therefore, a disaster recovery alarm can be issued to inform developers.
And when the disaster tolerance processing is unsuccessful, the user's query request cannot be responded to, and the address is added to the blacklist cache, to indicate that this address differs from normally accessed addresses and that disaster tolerance degradation was needed to query the corresponding target file.
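The disaster recovery fallback of steps S106 and S107, together with the alarm and blacklist handling described above, can be sketched as follows (the function and source names are assumptions):

```python
def disaster_recovery(file_id, fallback_sources, alarm, blacklist_cache):
    """Search the fallback chain (file disk cache, file disk cache backup,
    external storage medium) in order; alarm on a degraded hit, and add the
    identifier to the blacklist cache when nothing is found."""
    for name, source in fallback_sources:
        content = source.get(file_id)
        if content is not None:
            alarm(f"disaster recovery: {file_id} served from {name}")  # degraded query
            return content
    blacklist_cache.add(file_id)  # nothing found anywhere: treat as invalid request
    return None
```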
As can be seen from the above, the disaster recovery processing is provided in terms of system stability, so that the system can be served as much as possible with a strong disaster recovery capability in the event of a partial system failure. Therefore, the disaster recovery processing process in this embodiment increases the stability of the system and improves the impact resistance of the system.
In one embodiment, the local cache space is a cache space that stores files according to an LRU queue.
In an embodiment, as shown in fig. 9, in the step S102, that is, the step of querying whether the target file exists in the files in the local cache space according to the query request may include the following sub-steps:
s1021, searching a file memory cache of the local cache space, and determining whether a target file exists;
The target file is deployed in advance by storing it into at least one of two levels of storage space: the file memory cache and the file disk cache. A target file that is called more frequently is typically cached automatically in the file memory cache; as shown in fig. 4, after the file is queried according to the address in step 40081, if the query succeeds, the target file is written into the file memory cache in step 40010. Thus, more active pages are frequently written to the file memory cache, and are also written to the file disk cache as shown in step 40012. Even though the file memory cache and the file disk cache automatically delete older files in time order, an active file is rewritten on each call, which facilitates the next call.
When the target file is called next time, it can be obtained directly from the file memory cache. For some inactive pages, the number of calls is small and the frequency of being written into the file memory cache and the file disk cache is low, so these pages are more likely to be deleted from the file memory cache and the file disk cache.
S1022, when the target file is found in the file memory cache, reading the target file;
in this step, when the target files have been found in the file memory cache, the target files are directly read and returned to the client of the user. For example, after the target file is found in the file memory cache according to the address, the execution main body reads the target file, packages the target file, and returns the target file to the client in a specific format.
S1023, when the target file is not found in the file memory cache, searching a file disk cache of a local cache space, and determining whether the target file exists;
in this step, when the query of the file memory cache fails for various reasons, such as the memory being full, the execution subject may go on to check whether the target file exists in the file disk cache.
S1024, when the target file is found in the file disk cache, reading the target file, and writing the target file into the file memory cache from the file disk cache.
In this step, after the target file is found, the target file can be read and encapsulated, and the specific format is returned to the client.
In an embodiment, the step S102 of querying whether the target file exists in the local cache space according to the query request may include:
s1021a, acquiring a request path or a page ID of a target file from the query request;
s1022b, determining whether the target file exists in the determined local cache space according to the request path or the page ID.
In an embodiment, in the step S104, when the target file is not found in the local cache space, the step of querying whether the target file exists in an external storage medium according to the query request may include:
S1041, calculating the storage bucket number of the target file according to the query request;
S1042, searching the external storage medium according to the mapping between the storage bucket number and the storage address, and determining whether the target file exists.
In an embodiment, as shown in fig. 10, the step S105 of obtaining the file from the external storage medium and writing the file into the local cache space may include:
S1051, calculating the storage bucket number of the target file;
S1052, searching the external storage medium according to the storage bucket number, acquiring the target file and determining the address of the target file;
S1053, writing the target file into the file memory cache of the local cache space, and writing the address and the storage bucket number of the target file into the address memory cache of the local cache space.
S1051 may correspond to step 4001 in fig. 4, and S1052 and S1053 may correspond to the sub-steps of step 4003 in fig. 4 that store to the file storage medium and the address storage medium, respectively. In S1051, the storage bucket number of the target file may be calculated according to a specified calculation method. Then, in sub-step S1052, the target file is uploaded to the file storage medium of the external storage medium, and the address of the stored file is recorded. Since the address of the target file and the storage bucket number have a corresponding relationship, in step S1053 the mapping between the storage bucket number of the target file and its address may be stored to the address storage medium of the external storage medium.
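The fetch-and-cache path of steps S1051-S1053 can be sketched as follows. The dict-of-buckets model of the external storage medium, the `MOD` value, and all function names are illustrative assumptions.

```python
# Sketch of steps S1051-S1053: compute the storage bucket number, fetch the
# target file and its address from the external storage medium, then write
# the file into the file memory cache and the (address, bucket) mapping into
# the address memory cache. Model and names are illustrative assumptions.

MOD = 1000  # assumed number of buckets

def bucket_of(file_id):
    # S1051: storage bucket number via a hash heuristic (one possible choice)
    return hash(file_id) % MOD

def hot_deploy(file_id, external_medium, file_mem_cache, addr_mem_cache):
    bucket = bucket_of(file_id)
    # S1052: search the external medium by bucket number; get file + address
    entry = external_medium.get(bucket, {}).get(file_id)
    if entry is None:
        return None
    address, content = entry
    # S1053: populate the file memory cache and the address memory cache
    file_mem_cache[file_id] = content
    addr_mem_cache[file_id] = (address, bucket)
    return content

file_mem, addr_mem = {}, {}
external = {bucket_of("p1"): {"p1": ("/files/ab/p1", "<html>p1</html>")}}
content = hot_deploy("p1", external, file_mem, addr_mem)
```

Keeping the address memory cache alongside the file memory cache means later writes back to the external medium (or to the disk cache) can reuse the recorded address and bucket number without recomputing them.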
In the embodiments of the present application, a concept of bucketed storage is introduced. The bucket number can be computed according to a calculation method set by the developer, and the bucket number is mapped to addresses: an address table can be arranged under each bucket number, and each address table can store a plurality of addresses, so the correspondence between bucket numbers and addresses is one-to-many. After the bucket number is calculated, when the target file is to be searched for, the bucket can be determined from the bucket number and the search continued within that bucket.
It should be noted that some external data storage media have natural bucketing characteristics; therefore, when setting the correspondence between bucket numbers and addresses, the present application may also reuse the existing bucket number calculation method of the data storage medium, which is not limited herein.
Bucket calculation mainly realizes the bucketing characteristic: different entities are divided into different buckets, thereby realizing grouping and database sharding. In this way, massive address information can be mapped into different buckets, where one bucket may be a database shard table or a data category. Adopting a bucketing strategy can improve the scalability of massive data storage.
The specific bucketing strategy to be used needs to be determined according to requirements, and different heuristic functions are used for bucketing depending on the address storage medium.
For example, the following heuristic function schemes may be used for bucketed storage:
1. If a relational database is used as the address storage medium, the bucket-number heuristic function directly returns the entity identifier, i.e., 'bucket = id', where id is the identifier of the entity. The bucket number is stored as an index; each entity is its own bucket, so no real bucketing is performed.
2. If a NoSQL database is used as the address storage medium, or when database sharding tables are needed, then:
If the identifier id is not a number, hashing is used for bucketing; the heuristic function is 'bucket = hash(id) % mod, mod ∈ N+', which provides mod buckets and stores the entity address into one of them, realizing grouping and table sharding. If mod is 1000, there are 1000 buckets, and the entity addresses will be scattered among them.
If id is a number, the heuristic function is 'bucket = floor(id / mod), mod ∈ N+'. Theoretically this provides an unlimited number of buckets depending on the value of id, and numerically adjacent entities fall into the same or neighbouring buckets.
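The three heuristic functions described above might be sketched as follows; the mod value and the use of Python's built-in `hash` are stand-ins for whatever calculation the deployment actually specifies.

```python
# Sketch of the three bucket heuristics described above. The mod value and
# the use of Python's built-in hash are illustrative assumptions.

def bucket_relational(entity_id):
    # 1. Relational DB as address storage: bucket = id; the bucket number is
    # an index and each entity is effectively its own bucket (no bucketing)
    return entity_id

def bucket_hashed(entity_id, mod=1000):
    # 2a. Non-numeric id: bucket = hash(id) % mod; addresses are scattered
    # over mod buckets, realizing grouping and table sharding
    return hash(entity_id) % mod

def bucket_numeric(entity_id, mod=1000):
    # 2b. Numeric id: bucket = floor(id / mod); numerically adjacent ids
    # land in the same or neighbouring buckets
    return entity_id // mod
```

Note the different locality: the hashed variant spreads load evenly across a fixed number of buckets, while the numeric variant keeps consecutive ids together and grows the bucket count with the id range.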
Whether to bucket at all, and how, depends on the service requirements and the characteristics of the address storage medium. However, where deployment of massive pages is involved, in order to avoid having to migrate massive pages later because of an imperfect initial design, a specific bucketing strategy should be chosen at the initial design stage to ensure that the address storage data is theoretically scalable.
In an embodiment, after the target file is deployed from the external storage medium to the local cache, the file and the corresponding address/bucket number may be written into the disk cache, the disk cache backup, and the external storage medium, and the blacklist management module may be notified to delete the path, ID, and the like of the target file. That is, step S105, the step of obtaining the file from the external storage medium and writing the file into the local cache space, further includes the following sub-steps:
S1054, writing the target file into the file disk cache of the local cache space, and writing the address and the bucket number of the target file into the address disk cache of the local cache space.
S1055, when it is detected that a target file with the same identifier already exists in the file disk cache, writing the pre-existing target file in the file disk cache into the file disk cache backup, and writing the address of the pre-existing target file into the address disk cache backup.
S1056, writing the target file into the file storage medium of the external storage medium, and writing the address and the storage bucket number of the target file into the address storage medium of the external storage medium.
S1057, informing the blacklist management module to delete the target file corresponding to the query request from the blacklist.
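Sub-steps S1054-S1057 can be sketched as one write-through routine. The dict-backed caches and the set-backed blacklist are illustrative assumptions; note the backup of S1055 is performed before the overwrite of S1054.

```python
# Sketch of sub-steps S1054-S1057: persist a freshly deployed file and its
# address/bucket mapping, back up any pre-existing copy first, and clear the
# file's blacklist entry. Dict caches and set blacklist are assumptions.

def finish_deploy(file_id, content, address, bucket, caches, blacklist):
    disk = caches["file_disk"]
    addr_disk = caches["addr_disk"]
    # S1055: a file with the same identifier already exists in the file disk
    # cache, so move the old copy and its address into the backups first
    if file_id in disk:
        caches["file_disk_backup"][file_id] = disk[file_id]
        caches["addr_disk_backup"][file_id] = addr_disk[file_id]
    # S1054: write the file into the file disk cache and the (address,
    # bucket number) pair into the address disk cache
    disk[file_id] = content
    addr_disk[file_id] = (address, bucket)
    # S1056: write through to the external storage medium
    caches["external_files"][file_id] = content
    caches["external_addrs"][file_id] = (address, bucket)
    # S1057: notify the blacklist management module to remove the entry
    blacklist.discard(file_id)

caches = {k: {} for k in ("file_disk", "addr_disk", "file_disk_backup",
                          "addr_disk_backup", "external_files",
                          "external_addrs")}
blacklist = {"page-1"}
finish_deploy("page-1", "v1", "/files/aa/page-1", 7, caches, blacklist)
finish_deploy("page-1", "v2", "/files/bb/page-1", 7, caches, blacklist)
```

The second deployment of "page-1" triggers the S1055 branch, so the first version and its address survive in the backups while the disk cache and external medium hold the new version.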
In an embodiment, when the target file is confirmed to exist, which indicates that the request is legitimate rather than a malicious attack, the file identifier or address of the target file may be deleted from the blacklist, so as to avoid mistakenly intercepting query requests for a valid target file.
In summary, the file deployment method provided in this embodiment has at least the following advantages:
In the scheme of the present application, when a query request for a target file is received, the execution body first determines whether the target file has been deployed in the system's local cache space; when it is found not to be deployed, the target file is deployed from the external storage medium to the local cache space, realizing file deployment. When the target file is queried frequently, even though the local cache space evicts older files according to loading order, the target file will be frequently loaded into the local cache space and can therefore be called quickly.
Therefore, for target files that are accessed frequently, the scheme of the present application achieves quick response and improves response efficiency; for target files that are accessed infrequently, the files can be called from the external storage medium, so they do not occupy local cache space and do not burden the system overhead.
In addition, in the present application, the customer base of a large Internet company is very broad; as business demand develops and grows, each user may edit and publish their own creatives at any time. If each user edits dozens or hundreds of different advertising creatives or operation pages, the total number of such pages may exceed millions or even tens of millions, and may continue to grow quickly. In the scheme adopted in the embodiments of the present application, the files are stored persistently in the file storage medium; the number of pages that can be stored depends on the size of the allocated file storage medium, and there is theoretically no upper limit on the number of pages.
Meanwhile, the file addresses returned from the file storage medium depend on the specific characteristics of the storage medium; for example, some file servers use a combination of url and md5 to locate files, and a mapping between files and file addresses needs to be established. The bucketing mechanism provides an intermediate bridge between massive files and file reading, ensuring that storage remains scalable and maintainable as the number of pages grows rapidly in order of magnitude.
In the scheme of the present application, access uncertainty exists for many advertisement pages, two-hop pages, and the like. For example, the traffic of a normally low-traffic page may increase suddenly when an advertiser promotes it; a large number of pages may receive only dozens of requests per day; and some pages lose advertiser maintenance and are accessed only irregularly. Requests from external users cannot be predicted, and it is necessary to ensure that the system responds to any page regardless of how it is accessed. The embodiments of the present application therefore design a multi-level cache mechanism comprising a memory level, a local disk level, and an external storage medium level. The access cost difference between cache levels is large: the access time of the memory-level cache is basically negligible, while fetching from the external storage medium level takes tens of milliseconds.
To meet the high-performance requirements of access, the scheme of the embodiments of the present application adaptively places pages into caches of different levels according to page activity. For example, when an advertiser suddenly starts to promote their advertising creative one day, the system first accesses the memory cache; if the creative is not there, the disk cache is read and the file is kept in the memory cache; if it is not there either, a hot deployment is executed once to copy the file from the external storage medium into the memory cache and disk cache. Subsequent requests are then served directly from the memory cache, the overall content-serving performance is extremely high, and the total deployment overhead for an active page is close to zero. On the other hand, for inactive pages, the LRU queue used by the memory cache has a fixed size, for example keeping only 5000 pages. When the number of pages in memory approaches or equals 5000, pages that are not frequently accessed are gradually removed from the memory cache; such pages may remain in the local disk cache, to be read directly from local disk on the next call, or in the external file storage medium, to be synchronized to the memory cache and local disk by the next hot deployment. Adapting a page file's cache level to its read activity is of great value to average access performance. Verified in practice on million-scale page files, the average read performance is less than 0.3 milliseconds, and less than 0.1 milliseconds for small files; million-scale page reading is thus realized with negligible impact on the overall service.
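The bounded LRU memory cache described above (the text mentions a cap such as 5000 pages) might be sketched with an ordered dictionary; the class name and interface are illustrative assumptions, and a small capacity is used in the demo only to make eviction visible.

```python
from collections import OrderedDict

# Sketch of a bounded LRU memory cache as described above; the text mentions
# a cap such as 5000 pages, so capacity defaults to 5000. Class name and
# interface are illustrative assumptions.

class LRUMemoryCache:
    def __init__(self, capacity=5000):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)        # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            # Evict the least recently used page. It still survives in the
            # disk cache or external medium and can be hot-deployed again.
            self._items.popitem(last=False)

cache = LRUMemoryCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" becomes most recently used
cache.put("c", 3)     # capacity exceeded: "b" is evicted
```

Eviction here only forgets the in-memory copy; as the text notes, the page remains recoverable from the disk cache or the external storage medium.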
In addition, the hot deployment scheme in the preferred embodiments of the present application, a mechanism that dynamically deploys a single entity while the application is running online, can greatly relieve the operation and maintenance burden on technicians and avoids manual maintenance of massive files during application scale-out and machine replacement.
The blacklist interception scheme in the preferred embodiments of the present application can prevent abnormal requests and attacks and ensure the overall stability of the system. On one hand, it filters illegal requests, reducing unnecessary hot deployment and disaster recovery execution; on the other hand, an alarm can be added to notify technicians of possible attacks and exceptions. For such a large service system there are often illegal requests and attack requests, both of which may trigger hot deployment, and a failed hot deployment must query files in various places, an overhead far larger than a successful cache response or hot deployment. For example, a large number of illegal requests from malicious parties may trigger frequent deployment actions across the entire server cluster and eventually even exhaust the hardware resources of the system. To guarantee the robustness and feasibility of the whole scheme, the embodiments of the present application design a blacklist interception mechanism: when an illegal request comes in, it passes through the cache management module, then the hot deployment module, and finally the disaster recovery module; when all modules fail, the request is added to the blacklist cache, after which all subsequent identical illegal requests are judged and intercepted directly in the memory cache, saving later resource overhead. If an intercepted page identifier is later chosen by an advertiser as the access identifier of their advertising creative, then when the advertiser publishes and pushes the advertisement through cold deployment, the relevant identifier in the blacklist cache is deleted, and all subsequent requests are treated as legal requests and served externally.
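The interception flow described above — cache management, then hot deployment, then disaster recovery, with a blacklist entry added only when all three fail — might be sketched as follows; all function names and the set-backed blacklist are illustrative assumptions.

```python
# Sketch of the blacklist interception flow described above: a request is
# first checked against the blacklist; only when cache lookup, hot
# deployment and disaster recovery all fail is its identifier blacklisted,
# so repeated illegal requests are rejected directly in memory. All names
# are illustrative assumptions.

def serve(page_id, blacklist, cache_lookup, hot_deploy, disaster_recover):
    if page_id in blacklist:
        return None                          # intercepted cheaply in memory
    for module in (cache_lookup, hot_deploy, disaster_recover):
        content = module(page_id)
        if content is not None:
            blacklist.discard(page_id)       # valid file: clear stale entry
            return content
    blacklist.add(page_id)                   # every module failed: blacklist
    return None

blacklist = set()
store = {"ok": "<html>ok</html>"}
cache_lookup = lambda pid: store.get(pid)
fail = lambda pid: None
first = serve("bad-id", blacklist, fail, fail, fail)
second = serve("ok", blacklist, cache_lookup, fail, fail)
```

The first failed request pays the full three-module cost once; every repeat of the same illegal identifier is then rejected by the initial membership check alone.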
The disaster recovery scheme in the preferred embodiments of the present application completes the whole service using the simplest possible logic, with the advantage that simplicity makes faults unlikely. On one hand, it ensures on a best-effort basis that the service cluster remains available externally; on the other hand, as a parallel service scheme started in an abnormal state, it forms, together with the normal service scheme, a two-pronged guarantee of system availability.
In the preferred embodiments of the present application, all the functional modules are organically combined and cooperate with each other to form a complete scheme, realizing an independent, complete deployment unit that supports massive pages with high performance and reliability. The deployment schemes in the embodiments of the present application can be combined in series: each set of deployment schemes is independent for its deployment entity, and deployment schemes for different deployment entities (such as pages, traffic-split configuration files, and other files) can be combined to serve a specific service. When multiple service platforms provide page content services, a traffic-splitting service can be provided to complete A/B testing (ABtest): using the method provided in the embodiments of the present application, one set of deployments can be configured for the traffic-split files and another set for the page files, realizing a combined scheme of multi-entity deployment. Deployment of the traffic-split files and deployment of the page files are then realized in the application container through different calls, and the corresponding service logic is executed, so that a complete service requirement scheme is easily created.
Compared with an online system, the scheme of the present application is lightweight enough to be deployed and take effect in real time without restarting the application; using an online application system to deploy massive pages is basically infeasible in terms of execution time, hard disk storage, and so on. Compared with a file synchronization system, a general file synchronization system mainly copies the corresponding files to each machine and leaves many problems of file restoration and deployment during machine replacement and scale-out unsolved, with the deployment of massive pages being especially problematic; the embodiments of the present application realize dynamic machine scale-out and replacement by means of hot deployment. In addition, the method of the embodiments of the present application adds characteristics such as multi-level caching and disaster recovery guarantees, and achieves extremely high performance in reading file contents. Compared with a CDN file delivery/push system, which simply obtains files through HTTP, the mechanisms such as multi-level caching and disaster recovery guarantees provided by the embodiments of the present application can provide file query services more efficiently and can be embedded directly into the application container. Furthermore, the embodiments of the present application have push capability toward the deployment terminal, and can push messages to the deployment terminal in real time so that changes take effect.
Fig. 11 is a schematic hardware structure diagram of a computing processing device according to an embodiment of the present application. As shown in fig. 11, the computing processing device may include an input device 90, a processor 91, an output device 92, a memory 93, and at least one communication bus 94. The communication bus 94 is used to enable communication connections between the elements. The memory 93 may comprise a high speed RAM memory, and may also include a non-volatile storage NVM, such as at least one disk memory, in which various programs may be stored in the memory 93 for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 91 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 91 is coupled to the input device 90 and the output device 92 through a wired or wireless connection.
Alternatively, the input device 90 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, a software-programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip, and the transceiver may be, for example, a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 92 may include a display, a sound, or other output device.
In this embodiment, the processor of the computing processing device includes a module for executing functions of each module of the data processing apparatus in each device, and specific functions and technical effects may be obtained by referring to the foregoing embodiments, which are not described herein again.
Fig. 12 is a schematic hardware structure diagram of a computing processing device according to another embodiment of the present application. FIG. 12 is a specific embodiment of FIG. 11 in an implementation. As shown in fig. 12, the computing processing device of the present embodiment includes a processor 101 and a memory 102.
The processor 101 executes the computer program codes stored in the memory 102 to implement the file deployment method in fig. 1 to 9 in the above embodiment.
The memory 102 is configured to store various types of data to support operations at the computing processing device. Examples of such data include instructions for any application or method operating on a computing processing device, such as messages, pictures, videos, and so forth. The memory 102 may include a Random Access Memory (RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 101 is provided in the processing assembly 100. The computing processing device may further include: a communication component 103, a power component 104, a multimedia component 105, an audio component 106, an input/output interface 107 and/or a sensor component 108. The components specifically included in the computing device are set according to actual requirements, which is not limited in this embodiment.
The processing component 100 generally controls the overall operation of the computing processing device. The processing component 100 may include one or more processors 101 to execute instructions to perform all or a portion of the steps of the methods of fig. 1-9 described above. Further, the processing component 100 can include one or more modules that facilitate interaction between the processing component 100 and other components. For example, the processing component 100 may include a multimedia module to facilitate interaction between the multimedia component 105 and the processing component 100.
The power component 104 provides power to various components of the computing processing device. The power components 104 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for a computing processing device.
The multimedia component 105 includes a display screen that provides an output interface between the computing processing device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 106 is configured to output and/or input audio signals. For example, audio component 106 includes a Microphone (MIC) configured to receive external audio signals when the computing processing device is in an operating mode, such as a speech recognition mode. The received audio signal may further be stored in the memory 102 or transmitted via the communication component 103. In some embodiments, the audio component 106 also includes a speaker for outputting audio signals.
The input/output interface 107 provides an interface between the processing component 100 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor component 108 includes one or more sensors for providing various aspects of state assessment for the computing processing device. For example, the sensor component 108 can detect an open/closed state of the computing processing device, a relative positioning of the components, a presence or absence of user contact with the computing processing device. The sensor assembly 108 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the computing processing device. In some embodiments, the sensor assembly 108 may also include a camera or the like.
The communication component 103 is configured to facilitate communication between the computing processing device and other devices in a wired or wireless manner. The computing processing device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the computing processing device may include a SIM card slot for insertion of a SIM card, such that the computing processing device may log onto a GPRS network to establish communication with a server via the internet.
From the above, the communication component 103, the audio component 106, the input/output interface 107 and the sensor component 108 involved in the embodiment of fig. 12 can be implemented as the input device in the embodiment of fig. 11.
An embodiment of the present application provides a computing processing device, including: one or more processors; and one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the computing processing device to perform a method as described in one or more of the embodiments of the application.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or computing processing device that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or computing processing device. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or computing processing device that comprises the element.
The above detailed description is given to a file deployment system and a file deployment method provided by the present application, and specific examples are applied herein to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (25)

1. A file deployment system is characterized by comprising a cold deployment module, a hot deployment module and a blacklist management module;
the cold deployment module is used for writing a plurality of files into an external storage medium, acquiring the files from the external storage medium and writing the files into a local cache space;
the hot deployment module is used for receiving an inquiry request aiming at a target file, inquiring whether the target file exists in a local cache space, and inquiring whether the target file exists in the external storage medium according to the inquiry request when the target file is not found in the local cache space; when the target file is found in an external storage medium, reading the target file and storing the target file in the local cache space;
the operation of the cold deployment module obtaining the file from the external storage medium and writing the file into the local cache space comprises the following steps:
calculating a storage bucket number of the target file;
searching the external storage medium according to the storage bucket number, and determining the address of the target file;
writing the target file into a file memory cache of a local cache space, and writing the address and the storage bucket number of the target file into an address memory cache of the local cache space;
and informing the blacklist management module to delete the target file corresponding to the query request from the blacklist.
2. The file deployment system of claim 1, wherein the thermal deployment module is further configured to: and reading the target file when the target file is found in the local cache space.
3. The file deployment system of claim 1, further comprising: a cache management module; the cache management module is used for managing a local cache space for storing files and an external storage medium; and the cold deployment module and the hot deployment module read or write files from the local cache space and the external storage medium through the cache management module.
4. The system of claim 1, wherein the local cache space comprises a file memory cache and a file disk cache for storing files;
the operation that the hot deployment module inquires whether the target file exists in the files in the local cache space according to the inquiry request comprises the following steps:
searching a file memory cache of a local cache space, and determining whether a target file exists;
reading the target file when the target file is found in the file memory cache;
when the target file is not found in the file memory cache, searching a file disk cache of a local cache space, and determining whether the target file exists;
and when the target file is found in the file disk cache, reading the target file, and writing the target file into the file memory cache from the file disk cache.
5. The system of claim 1, further comprising a blacklist management module to:
judging whether the query request is a blacklist request stored in a blacklist;
and when the query request is a blacklist request, stopping querying the target file.
6. The system of claim 4, wherein the local cache space further comprises a file disk cache backup for storing files, the system further comprising a disaster recovery module configured to:
when the target file is not found in the external storage medium by the hot deployment module, inquiring at least one of the file disk cache, the file disk cache backup and the external storage medium to determine whether the target file exists;
and when the target file exists, reading the target file.
7. The system of claim 1, wherein the local cache space is a cache space that stores files according to an LRU queue.
8. The system of claim 1, wherein the operation of the hot deployment module querying, according to the query request, whether the target file exists in the local cache space comprises:
acquiring a request path or a page ID of the target file from the query request;
and determining, according to the request path or the page ID, whether the target file exists in the local cache space.
9. The system of claim 1, wherein the operation of the hot deployment module querying, according to the query request, whether the target file exists in the external storage medium when the target file is not found in the local cache space comprises:
calculating a storage bucket number of the target file according to the query request;
and searching the external storage medium according to a mapping from the storage bucket number to a storage address, to determine whether the target file exists.
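The bucket-number lookup of claim 9 can be sketched like this: the request path is hashed to a bucket number, and a bucket-to-address mapping locates the file on the external storage medium. The hash function, bucket count, and mapping table are assumptions for illustration; the patent does not specify them.

```python
import hashlib

NUM_BUCKETS = 1024  # illustrative bucket count

def bucket_number(request_path):
    # Derive a stable bucket number from the request path.
    digest = hashlib.md5(request_path.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

# Mapping from storage bucket number to a storage address on the
# external medium (addresses here are illustrative strings).
bucket_to_address = {bucket_number("/pages/home.html"): "medium-0:offset-4096"}

def locate(request_path):
    return bucket_to_address.get(bucket_number(request_path))

print(locate("/pages/home.html"))   # 'medium-0:offset-4096'
print(locate("/pages/other.html"))  # None unless the paths fall in the same bucket
```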
10. The system of claim 1, wherein the cold deployment module is further configured to:
write the target file into a file disk cache of the local cache space, and write the address and the storage bucket number of the target file into an address disk cache of the local cache space.
11. The system of claim 10, wherein the cold deployment module is further configured to:
when a target file with the same identifier is detected in the file disk cache, write the pre-existing target file in the file disk cache into a file disk cache backup, and write the address of the pre-existing target file into an address disk cache backup.
12. The system of claim 1, wherein the cold deployment module is further configured to:
write the target file into a file storage medium of the external storage medium, and write the address and the storage bucket number of the target file into an address storage medium of the external storage medium.
13. A file deployment method, comprising:
writing a plurality of files into an external storage medium, acquiring the files from the external storage medium and writing the files into a local cache space;
receiving a query request for a target file, and querying whether the target file exists in the local cache space according to the query request;
when the target file is not found in the local cache space, querying whether the target file exists in the external storage medium according to the query request;
when the target file is found in the external storage medium, reading the target file and storing the target file in the local cache space;
wherein the step of acquiring the files from the external storage medium and writing the files into the local cache space comprises:
calculating a storage bucket number of the target file;
searching the external storage medium according to the storage bucket number to determine the address of the target file;
writing the target file into a file memory cache of the local cache space, and writing the address and the storage bucket number of the target file into an address memory cache of the local cache space;
and deleting the target file corresponding to the query request from the blacklist.
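The end-to-end flow of claim 13 (look up locally, fall back to the external medium, populate the local cache on an external hit, and remove the file from the blacklist) can be sketched in a few lines. All stores are illustrative in-memory dicts, not the patent's actual media.

```python
# Illustrative stand-ins for the local cache space, external storage
# medium, and blacklist described in claim 13.
local_cache = {}
external_medium = {"report.pdf": b"PDF-BYTES"}
blacklist = {"report.pdf"}

def query(file_id):
    if file_id in local_cache:        # hit in the local cache space
        return local_cache[file_id]
    if file_id in external_medium:    # hit on the external storage medium
        data = external_medium[file_id]
        local_cache[file_id] = data   # store into the local cache space
        blacklist.discard(file_id)    # the file now exists: delete it from the blacklist
        return data
    return None                       # miss everywhere

print(query("report.pdf") == b"PDF-BYTES")  # True, served from the external medium
print("report.pdf" in local_cache)          # True, cached for the next request
print("report.pdf" in blacklist)            # False, removed from the blacklist
```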
14. The file deployment method of claim 13, wherein after receiving the query request for the target file and querying whether the target file exists in the local cache space according to the query request, the method further comprises:
reading the target file when the target file is found in the local cache space.
15. The method of claim 13, wherein the step of querying whether the target file exists in the local cache space according to the query request comprises:
searching a file memory cache of the local cache space to determine whether the target file exists;
when the target file is found in the file memory cache, reading the target file;
when the target file is not found in the file memory cache, searching a file disk cache of the local cache space to determine whether the target file exists;
and when the target file is found in the file disk cache, reading the target file and writing the target file from the file disk cache into the file memory cache.
16. The method of claim 13, further comprising:
determining whether the query request is a blacklist request stored in a blacklist;
and when the query request is a blacklist request, stopping querying the target file.
17. The method of claim 14, further comprising:
when the target file is not found in the external storage medium, querying at least one of a file disk cache, a file disk cache backup and the external storage medium to determine whether the target file exists;
and when the target file exists, reading the target file.
18. The method of claim 13, wherein the local cache space is a cache space that stores files according to an LRU (least recently used) queue.
19. The method of claim 13, wherein the step of querying whether the target file exists in the local cache space according to the query request comprises:
acquiring a request path or a page ID of the target file from the query request;
and determining, according to the request path or the page ID, whether the target file exists in the local cache space.
20. The method of claim 13, wherein the step of acquiring the files from the external storage medium and writing the files into the local cache space comprises:
calculating a storage bucket number of the target file;
searching the external storage medium according to the storage bucket number to acquire the target file and determine the address of the target file;
and writing the target file into a file memory cache of the local cache space, and writing the address and the storage bucket number of the target file into an address memory cache of the local cache space.
21. The method of claim 20, wherein the step of acquiring the files from the external storage medium and writing the files into the local cache space further comprises:
writing the target file into a file disk cache of the local cache space, and writing the address and the storage bucket number of the target file into an address disk cache of the local cache space.
22. The method of claim 21, wherein the step of acquiring the files from the external storage medium and writing the files into the local cache space further comprises:
when a target file with the same identifier is detected in the file disk cache, writing the pre-existing target file in the file disk cache into a file disk cache backup, and writing the address of the pre-existing target file into an address disk cache backup.
23. The method of claim 20, wherein the step of acquiring the files from the external storage medium and writing the files into the local cache space further comprises:
writing the target file into a file storage medium of the external storage medium, and writing the address and the storage bucket number of the target file into an address storage medium of the external storage medium.
24. A computing processing device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the computing processing device to perform the method of any of claims 13-23.
25. One or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a computing processing device to perform the method of any of claims 13-23.
CN201810856551.6A 2018-07-31 2018-07-31 File deployment system and file deployment method Active CN110795395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810856551.6A CN110795395B (en) 2018-07-31 2018-07-31 File deployment system and file deployment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810856551.6A CN110795395B (en) 2018-07-31 2018-07-31 File deployment system and file deployment method

Publications (2)

Publication Number Publication Date
CN110795395A CN110795395A (en) 2020-02-14
CN110795395B true CN110795395B (en) 2023-04-18

Family

ID=69424941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810856551.6A Active CN110795395B (en) 2018-07-31 2018-07-31 File deployment system and file deployment method

Country Status (1)

Country Link
CN (1) CN110795395B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111628889B * 2020-05-11 2023-04-07 Oppo(重庆)智能科技有限公司 Hot deployment method and device, electronic device, and storage medium
CN111797095B (en) * 2020-06-10 2024-05-03 阿里巴巴集团控股有限公司 Index construction method and JSON data query method
CN111813783B (en) * 2020-07-27 2024-03-26 南方电网数字电网研究院有限公司 Data processing method, device, computer equipment and storage medium
CN112950370B (en) * 2021-02-25 2024-08-16 西藏纳柯电子科技有限公司 Service processing method, device, equipment and storage medium
CN113076292B (en) * 2021-03-30 2023-03-14 山东英信计算机技术有限公司 File caching method, system, storage medium and equipment
CN115221200A (en) * 2021-04-16 2022-10-21 中国移动通信集团辽宁有限公司 Data query method and device, electronic equipment and storage medium
CN115865794B (en) * 2021-09-24 2025-05-09 中移(杭州)信息技术有限公司 Information determination method, device, equipment and storage medium
CN114089912B (en) * 2021-10-19 2024-05-24 银联商务股份有限公司 Data processing method and device based on message middleware and storage medium
CN114547493B (en) * 2022-02-22 2025-08-15 广联达科技股份有限公司 CIM engine data visualization method and device and electronic equipment
CN115329178A (en) * 2022-08-31 2022-11-11 浪潮电子信息产业股份有限公司 Management software acceleration method, device, equipment and medium
CN116320029B (en) * 2023-01-10 2025-06-27 天翼云科技有限公司 A method and system for hot write-back of CDN system cache

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020146A (en) * 2012-11-22 2013-04-03 华为技术有限公司 Data processing method and equipment
CN103092920A (en) * 2012-12-26 2013-05-08 新浪网技术(中国)有限公司 Storage method of semi-structured data and storage system
CN104268286A (en) * 2014-10-21 2015-01-07 北京国双科技有限公司 Method for querying hot data
CN104407990A (en) * 2014-12-08 2015-03-11 北京星网锐捷网络技术有限公司 Disk access method and device
CN104462194A (en) * 2014-10-28 2015-03-25 北京国双科技有限公司 Service data processing method, device and server
CN104834607A (en) * 2015-05-19 2015-08-12 华中科技大学 Method for improving distributed cache hit rate and reducing solid state disk wear
CN105447171A (en) * 2015-12-07 2016-03-30 北京奇虎科技有限公司 Data caching method and apparatus
CN106657258A (en) * 2016-11-04 2017-05-10 成都视达科信息技术有限公司 Realization method and device of safe acceleration middleware based on NGINX+LUA
CN106934001A (en) * 2017-03-03 2017-07-07 广州天源迪科信息技术有限公司 Distributed quick inventory inquiry system and method
CN107147648A (en) * 2017-05-11 2017-09-08 北京奇虎科技有限公司 Resource request processing method, client, server and system
CN107451152A (en) * 2016-05-31 2017-12-08 阿里巴巴集团控股有限公司 Computing device, data buffer storage and the method and device of lookup
CN107943594A (en) * 2016-10-13 2018-04-20 北京京东尚科信息技术有限公司 Data capture method and device
CN107958033A (en) * 2017-11-20 2018-04-24 郑州云海信息技术有限公司 Lookup method, device, distributed file system and the storage medium of metadata
WO2018077292A1 (en) * 2016-10-28 2018-05-03 北京市商汤科技开发有限公司 Data processing method and system, electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10168912B2 (en) * 2016-02-17 2019-01-01 Panzura, Inc. Short stroking and data tiering for a distributed filesystem


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhao X M et al. Chord-Based Multi-Attribute Multi-Keyword Query and Hot-Set Cache. International Conference on Internet Technology & Applications. 2011, full text. *
Ge Wei; Luo Shengmei; Zhou Wenhui; Zhao; Tang Yun; Zhou Juan; Qu Wenwu; Yuan Chunfeng; Huang Yihua. HiBase: An Efficient HBase Query Technique and System Based on Hierarchical Indexing. Chinese Journal of Computers. 2016, No. 01, full text. *

Also Published As

Publication number Publication date
CN110795395A (en) 2020-02-14

Similar Documents

Publication Publication Date Title
CN110795395B (en) File deployment system and file deployment method
CN107133234B (en) Method, device and system for updating cache data
CN106649349B (en) Data caching method, device and system for game application
US9753954B2 (en) Data node fencing in a distributed file system
CN110096517B (en) Method, device and system for monitoring cache data based on distributed system
CN108683668B (en) Resource checking method, device, storage medium and equipment in content distribution network
CN106487936A (en) Data transmission method and device, distributed storage system
CN110191168A (en) Online business data processing method, device, computer equipment and storage medium
CN104714965A (en) Static resource weight removing method, and static resource management method and device
CN112433921A (en) Method and apparatus for dynamic point burying
US9379849B2 (en) Content delivery failover
CN111240892A (en) Data backup method and device
CN108696562B (en) Method and device for acquiring website resources
CN111767481A (en) Access processing method, device, equipment and storage medium
CN104915387A (en) Internet website static state page processing system and method
CN112861031A (en) URL (Uniform resource locator) refreshing method, device and equipment in CDN (content delivery network) and CDN node
CN110221916A (en) A kind of memory expansion method, device, configuration center system and electronic equipment
US20240089339A1 (en) Caching across multiple cloud environments
US20230069845A1 (en) Using a threat intelligence framework to populate a recursive dns server cache
US7058773B1 (en) System and method for managing data in a distributed system
CN112433891B (en) Data processing method, device and server
CN110807040B (en) Method, device, equipment and storage medium for managing data
CN111240750B (en) Awakening method and device for target application program
CN111901243A (en) Routing method, scheduler and business platform for business requests
US12407654B2 (en) System and method for firewall policy rule management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant