CN116828226A - Cloud edge end collaborative video stream caching system based on block chain - Google Patents
Cloud edge end collaborative video stream caching system based on block chain Download PDFInfo
- Publication number
- CN116828226A CN116828226A CN202311084846.3A CN202311084846A CN116828226A CN 116828226 A CN116828226 A CN 116828226A CN 202311084846 A CN202311084846 A CN 202311084846A CN 116828226 A CN116828226 A CN 116828226A
- Authority
- CN
- China
- Prior art keywords
- video
- server
- video content
- indicates
- mec
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000005265 energy consumption Methods 0.000 claims abstract description 53
- 238000003860 storage Methods 0.000 claims abstract description 14
- 238000004891 communication Methods 0.000 claims abstract description 4
- 238000004364 calculation method Methods 0.000 claims description 48
- 238000000034 method Methods 0.000 claims description 31
- 230000008569 process Effects 0.000 claims description 30
- 230000005540 biological transmission Effects 0.000 claims description 21
- 238000005457 optimization Methods 0.000 claims description 9
- 238000012790 confirmation Methods 0.000 claims description 6
- 238000002360 preparation method Methods 0.000 claims description 6
- 238000012795 verification Methods 0.000 claims description 6
- 230000007246 mechanism Effects 0.000 claims description 5
- 238000009826 distribution Methods 0.000 claims description 4
- 238000013524 data verification Methods 0.000 claims description 3
- 238000007689 inspection Methods 0.000 claims description 3
- 238000004806 packaging method and process Methods 0.000 claims description 3
- 230000000737 periodic effect Effects 0.000 claims description 3
- 238000005516 engineering process Methods 0.000 abstract description 5
- 238000010295 mobile communication Methods 0.000 abstract description 2
- 230000009471 action Effects 0.000 description 14
- 230000000875 corresponding effect Effects 0.000 description 11
- 239000003795 chemical substances by application Substances 0.000 description 6
- 238000010586 diagram Methods 0.000 description 4
- 230000007613 environmental effect Effects 0.000 description 3
- 230000002787 reinforcement Effects 0.000 description 3
- 101001121408 Homo sapiens L-amino-acid oxidase Proteins 0.000 description 2
- 102100026388 L-amino-acid oxidase Human genes 0.000 description 2
- 101000827703 Homo sapiens Polyphosphoinositide phosphatase Proteins 0.000 description 1
- 102100023591 Polyphosphoinositide phosphatase Human genes 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 239000002360 explosive Substances 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/231—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
- H04N21/23106—Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/50—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/254—Management at additional data server, e.g. shopping server, rights management server
- H04N21/2543—Billing, e.g. for subscription services
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Computer Networks & Wireless Communication (AREA)
- Information Transfer Between Computers (AREA)
Abstract
本发明属于移动通信中的边缘缓存技术领域,公开了基于区块链的云边端协同视频流缓存系统,通过网络模块实现CDN服务器、边缘服务器、视频用户三者之间的通信;缓存模块用于计算视频内容的流行度,以及视频内容缓存在CDN服务器层和边缘服务器层时的访问延迟、流量成本和缓存能耗;并将所有视频请求的内容访问延迟、流量成本和能耗的问题最小化;区块链模块用于计算视频用户请求的付费视频内容上链所产生的能耗。本发明充分发挥边缘侧MEC服务器的计算和存储能力,并加入区块链技术,解决互联网视频流量大幅度增长所导致的时延及能耗过高的问题以及计费信息安全问题;实现协同边缘缓存。
The invention belongs to the field of edge caching technology in mobile communications. It discloses a cloud-edge-end collaborative video stream caching system based on blockchain, and realizes communication between a CDN server, an edge server, and a video user through a network module; the caching module is used It is used to calculate the popularity of video content, as well as the access delay, traffic cost and cache energy consumption when video content is cached at the CDN server layer and edge server layer; and minimize the content access delay, traffic cost and energy consumption of all video requests. ization; the blockchain module is used to calculate the energy consumption generated by uploading paid video content requested by video users. This invention gives full play to the computing and storage capabilities of the edge-side MEC server and adds blockchain technology to solve the problems of delay and excessive energy consumption caused by the substantial increase in Internet video traffic and the security of billing information; realizing collaborative edge cache.
Description
技术领域Technical Field
本发明属于移动通信中的边缘缓存技术领域,具体是涉及基于区块链的云边端协同视频流缓存系统。The present invention belongs to the field of edge caching technology in mobile communications, and specifically relates to a cloud-edge collaborative video stream caching system based on blockchain.
背景技术Background Art
随着移动设备数量的爆炸式增长,大量设备得以互联并产生巨大的数据流量。与此同时,用户对各种视频内容的体验质量(Quality of Experience,QoE)的要求逐渐提高。因此,如此大规模的视频流量和高QoE需求,给骨干网带来了巨大的压力。With the explosive growth in the number of mobile devices, a large number of devices are connected and generate huge data traffic. At the same time, users' requirements for the quality of experience (QoE) of various video content are gradually increasing. Therefore, such large-scale video traffic and high QoE requirements have brought huge pressure to the backbone network.
视频内容缓存在内容分发网络(Content delivery network,CDN)中被广泛应用于减少重复流量和提高QoE。然而,CDN服务器和用户终端之间的流量仍然可能在很大程度上是冗余的,大多数内容不必上传到云数据中心,因为只有小部分内容被频繁请求,这将导致较差的QoE和较大的内容访问延迟。幸运的是,移动边缘计算(Mobile edge computing,MEC)提供了另一种解决方案,通过将视频内容推近终端用户进行边缘缓存,弥补了CDN的不足。新兴的5G网络中,基站(Base station,BS)都配备有边缘服务器来为缓存服务提供存储和计算能力。通过在附近基站边缘缓存适当的视频内容,用户可以在本地而不是从远程CDN服务器获得相应的目标视频,这不仅提供了更好的QoE和更低的延迟,还缓解了骨干网的流量压力。Video content caching is widely used in content delivery networks (CDNs) to reduce duplicate traffic and improve QoE. However, the traffic between CDN servers and user terminals may still be largely redundant, and most content does not have to be uploaded to cloud data centers because only a small portion of content is frequently requested, which will lead to poor QoE and large content access latency. Fortunately, mobile edge computing (MEC) provides another solution, which makes up for the shortcomings of CDN by pushing video content closer to end users for edge caching. In the emerging 5G network, base stations (BSs) are equipped with edge servers to provide storage and computing capabilities for caching services. By caching appropriate video content at the edge of nearby base stations, users can obtain the corresponding target video locally instead of from remote CDN servers, which not only provides better QoE and lower latency, but also relieves the traffic pressure on the backbone network.
与CDN服务器相比,边缘服务器的视频缓存容量是有限的;现在的内容提供商大多使用简单的基于规则的缓存策略,如最近最少使用(Least recently used,LRU),最不频繁使用(Least frequently used,LFU)等。然而,与基于CDN服务器的缓存环境不同,边缘缓存环境较为复杂,不同的边缘区域拥有多样化和动态化的视频请求,邻近基站间需要通过协作边缘缓存,以更好地负担单个边缘服务器上有限的存储容量。因此,传统的缓存策略已不适用于动态复杂的边缘缓存环境。Compared with CDN servers, the video cache capacity of edge servers is limited; most current content providers use simple rule-based cache strategies, such as Least Recently Used (LRU), Least Frequently Used (LFU), etc. However, unlike the cache environment based on CDN servers, the edge cache environment is more complex. Different edge areas have diverse and dynamic video requests. Neighboring base stations need to use collaborative edge caches to better bear the limited storage capacity on a single edge server. Therefore, traditional cache strategies are no longer suitable for dynamic and complex edge cache environments.
视频用户对服务的需求不断增加,产生了更多的支付内容,因此必须保护计费数据的安全性和隐私性。然而,由于不同的基站(BS)由不同的供应商操作,在基站之间的数据交互中存在信任问题。用户设备的移动性使得基站返回给用户的计费信息更容易被泄露或操纵。The increasing demand for video services from users has generated more payment content, so the security and privacy of billing data must be protected. However, since different base stations (BSs) are operated by different vendors, there are trust issues in the data interaction between base stations. The mobility of user equipment makes the billing information returned by the base station to the user more vulnerable to leakage or manipulation.
发明内容Summary of the invention
为解决上述技术问题,本发明提供了一种基于区块链的云边端协同视频流缓存系统,充分发挥边缘服务器的计算和存储能力,并加入区块链技术,解决互联网视频流量大幅度增长所导致的时延及能耗过高的问题以及计费信息安全问题,实现协同边缘缓存。In order to solve the above technical problems, the present invention provides a cloud-edge collaborative video stream caching system based on blockchain, which fully utilizes the computing and storage capabilities of the edge server and adds blockchain technology to solve the problems of latency and excessive energy consumption caused by the substantial growth of Internet video traffic and the problem of billing information security, and realizes collaborative edge caching.
本发明所述的基于区块链的云边端协同视频流缓存系统,包括网络模块、缓存模块和区块链模块;The cloud-edge-end collaborative video stream caching system based on blockchain described in the present invention includes a network module, a cache module and a blockchain module;
所述网络模块包括CDN服务器层、边缘服务器层和视频用户层;所述CDN服务器层存储所有视频内容并为边缘服务器层提供计算资源;所述视频用户层向边缘服务器层发出视频请求,并接收边缘服务器层返回的视频;The network module includes a CDN server layer, an edge server layer and a video user layer; the CDN server layer stores all video content and provides computing resources for the edge server layer; the video user layer sends a video request to the edge server layer and receives the video returned by the edge server layer;
所述缓存模块包括计算视频内容流行度单元、计算访问延迟单元、计算流量成本单元、计算缓存能耗单元,分别用于计算视频内容的流行度,以及视频内容缓存在CDN服务器层和边缘服务器层时的访问延迟、流量成本和缓存能耗;并将所有视频请求的内容访问延迟、流量成本和能耗的问题最小化;The cache module includes a video content popularity calculation unit, an access delay calculation unit, a traffic cost calculation unit, and a cache energy consumption calculation unit, which are respectively used to calculate the popularity of the video content, and the access delay, traffic cost, and cache energy consumption when the video content is cached in the CDN server layer and the edge server layer; and minimize the content access delay, traffic cost, and energy consumption problems of all video requests;
针对产生的付费视频内容,采用所述区块链模块计算视频用户请求的付费视频内容上链所产生的额外能耗。For the paid video content generated, the blockchain module is used to calculate the additional energy consumption generated by putting the paid video content requested by the video user on the chain.
进一步的,所述边缘服务器层包括基站和MEC服务器;基站覆盖当地区域的所有MEC服务器,利用微波的方式实现基站与MEC服务器之间的数据传输,并通过5G网络实现覆盖,每个基站通过骨干网与远端CDN服务器连接;Furthermore, the edge server layer includes base stations and MEC servers; the base stations cover all MEC servers in the local area, and use microwaves to achieve data transmission between the base stations and MEC servers, and achieve coverage through the 5G network. Each base station is connected to the remote CDN server through the backbone network;
边缘服务器层响应视频用户层的请求,具体为:将时间段内对视频内容的请求的数目表示为,二进制变量 用于表示在时间段 内对视频内容的请求是否应当由当前MEC服务器来服务;表示在时间段内视频内容的请求应该由当前MEC服务器来服务;表示当前MEC服务器不存在视频内容,则由相邻MEC服务器或远程CDN服务器提供服务;其中,是指缓存视频内容的MEC服务器或CDN服务器,表示MEC服务器,表示CDN服务器;此外,使用二进制变量来表示视频内容是否被缓存在服务器中,表示视频内容缓存在服务器中;表示视频内容未缓存在服务器中。The edge server layer responds to the request of the video user layer by: Internal video content The number of requests is expressed as , binary variable Used to indicate a period of time Internal video content Whether the request should be served by the current MEC server; Indicates the time period Video content The request should be served by the current MEC server; Indicates that there is no video content on the current MEC server , then the service is provided by the adjacent MEC server or the remote CDN server; , Refers to caching video content MEC server or CDN server, Indicates the MEC server. Indicates the CDN server; in addition, using binary variables To represent the video content Is it cached on the server? middle, Indicates video content Cache on the server middle; Indicates video content Not cached on the server middle.
进一步的,所述视频用户层由若干个视频用户组成;假设所有视频内容文件具有相同单位大小,为S比特;规定每个用户视频请求属于一个特定的边缘区域,并且将由所述特地边缘区域的基站及MEC服务器提供服务,因此等效地将每个视频用户的请求聚集到其对应的基站及MEC服务器。Furthermore, the video user layer is composed of several video users; it is assumed that all video content files have the same unit size, which is S bits; it is stipulated that each user video request belongs to a specific edge area, and will be served by the base station and MEC server in the specific edge area, so the request of each video user is equivalently aggregated to its corresponding base station and MEC server.
进一步的,利用计算视频内容流行度单元进行流行度计算的具体步骤如下:Furthermore, the specific steps of calculating popularity using the video content popularity calculation unit are as follows:
S1-1、将视频用户请求的视频内容按受欢迎程度降序排列,,表示视频内容的流行度,K表示视频的个数;表示第个视频内容具有最高流行度;表示第个视频内容具有最低流行度;并且视频流行度遵循具有衰减参数的Zipf分布,即视频流行度具有周期性变化;S1-1, sorting the video contents requested by the video users in descending order of popularity, , Indicates video content The popularity of , K represents the number of videos; Indicates Video content has the highest popularity; Indicates The video content has the lowest popularity; and the video popularity follows a decay parameter Zipf distribution, that is, the popularity of the video has periodic changes;
S1-2、计算视频内容的请求概率:S1-2. Calculating video content Request probability:
; ;
其中,是控制视频内容流行度的内容请求系数,该系数越大表示视频内容重复使用率越高。in, It is the content request coefficient that controls the popularity of video content. The larger the coefficient is, the higher the reuse rate of video content is.
进一步的,利用计算访问延迟单元进行访问延迟计算的具体步骤如下:Furthermore, the specific steps of using the access delay calculation unit to calculate the access delay are as follows:
S2-1、当视频请求到达时,如果在本地MEC服务器中找到缓存的视频内容,则本地MEC服务器立即返回缓存的内容,即本地命中;S2-1, when a video request arrives, if the cached video content is found in the local MEC server, the local MEC server immediately returns the cached content, i.e., local hit;
S2-2、如果本地MEC服务器缓存未命中,则转向其相邻MEC服务器以获得相应的缓存内容,该内容存在则返回该内容,即相邻命中;S2-2, if the local MEC server cache does not hit, then turn to its neighboring MEC server to obtain the corresponding cache content, and if the content exists, return the content, that is, neighbor hit;
S2-3、若相邻命中不成立,则本地MEC服务器从CDN服务器获取视频内容以服务于相关缓存请求,即CDN命中;S2-3, if the adjacent hit does not hold, the local MEC server obtains the video content from the CDN server to serve the relevant cache request, i.e., CDN hit;
在边缘缓存环境中,根据不同的缓存命中情况,存在延迟的三个组成部分,包括视频用户层到边缘服务器层的延迟、边缘服务器层之间的延迟、边缘服务器层到CDN服务器层延迟;因视频用户层到边缘服务器层的延迟包含在所有请求中,为方便计算,忽略该延迟;因此,将时间段内的MEC服务器的总传输等待时间表示为:In the edge cache environment, there are three components of delay, depending on the different cache hit situations, including the delay from the video user layer to the edge server layer, the delay between edge server layers, and the delay from the edge server layer to the CDN server layer. Since the delay from the video user layer to the edge server layer is included in all requests, this delay is ignored for the convenience of calculation. Therefore, the time period Total transmission waiting time of MEC servers within It is expressed as:
, ,
其中,表示当前MEC服务器与另一服务器之间的等待时间,表示视频内容集合;表示相邻MEC服务器;表示时间段内对视频内容的请求的数目。in, Indicates the current MEC server and another server The waiting time between Represents a collection of video content; Indicates the adjacent MEC server; Indicates time period Internal video content The number of requests.
进一步的,利用计算流量成本单元进行流量成本计算的具体步骤如下:Furthermore, the specific steps of using the traffic cost calculation unit to calculate the traffic cost are as follows:
S3-1、对于视频访问流量成本,忽略视频用户层到边缘服务器层的成本,计算视频访问流量成本为:S3-1. For the video access traffic cost, ignore the cost from the video user layer to the edge server layer and calculate the video access traffic cost. for:
, ,
其中,表示当前MEC服务器和其余服务器之间的通信量成本;in, Indicates the current MEC server and other servers The communication cost between
S3-2、在每个缓存周期结束时,如果下一时刻要缓存的视频与当前缓存的视频不完全相同,则每个MEC服务器都需要从邻近MEC服务器或CDN服务器获取新的视频,这也将引入额外的流量成本,将此成本表示为视频替换流量成本,计算为:S3-2. At the end of each cache cycle, if the video to be cached at the next moment is not exactly the same as the currently cached video, each MEC server needs to obtain a new video from a neighboring MEC server or CDN server, which will also introduce additional traffic costs. This cost is expressed as the video replacement traffic cost. , calculated as:
, ,
其中表示当前MEC服务器下一个时刻要缓存的视频内容,表示当前MEC服务器当前时刻缓存的视频内容;in Indicates the video content to be cached by the current MEC server at the next moment. Indicates the video content cached by the current MEC server at the current moment;
S3-3、总流量成本表达式为:S3-3, the total flow cost expression is:
。 .
进一步的,利用计算缓存能耗单元进行缓存能耗计算的具体步骤如下:Furthermore, the specific steps of calculating cache energy consumption using the cache energy consumption calculation unit are as follows:
S4-1、当视频用户请求CDN服务器层中的视频内容时,如果视频内容已经被CDN服务器发送给MEC服务器,则视频内容将从MEC服务器发送给视频用户层,即本地命中或邻近命中;仅考虑内容经由无线下行链路信道从边缘服务器层到视频用户层的传输过程,在时间段内,可用下行链路传输速率为:S4-1. When a video user requests video content in the CDN server layer If the video content Has been sent to the MEC server by the CDN server, then the video content The content is sent from the MEC server to the video user layer, that is, local hits or neighboring hits; only the transmission process of the content from the edge server layer to the video user layer via the wireless downlink channel is considered, in the time period The available downlink transmission rates are:
, ,
其中,表示MEC服务器的发射功率,是在时间段内视频用户与MEC服务器之间的信道增益,是视频用户处的噪声方差;in, Indicates the transmit power of the MEC server. It is in the time period Internal video users With MEC server The channel gain between is the noise variance at the video user;
因此,通过在时间段发送存储在边缘处的视频内容而产生的能耗表示为:Therefore, by Send video content stored at the edge The energy consumption is expressed as:
, ,
其中,所有视频内容文件具有相同单位大小,为S比特;表示MEC服务器的发射功率;表示可用下行链路传输速率;Among them, all video content files have the same unit size, which is S bits; Indicates the transmit power of the MEC server; Indicates the available downlink transmission rate;
S4-2、如果视频内容尚未存储在MEC服务器中,则MEC服务器将从CDN服务器处获得所请求的内容,然后MEC服务器再将视频内容发送到视频用户层,即CDN命中;S4-2. If the video content Not yet stored on the MEC server In the MEC server The requested content will be obtained from the CDN server, and then the MEC server will send the video content Sent to the video user layer, i.e. CDN hit;
越是热门的视频内容越容易被用户请求,会影响系统能耗的产生;在视频内容流行度和能耗之间建立联系;The more popular the video content is, the more likely it is to be requested by users, which will affect the energy consumption of the system; establish a relationship between the popularity of video content and energy consumption;
假设从MEC服务器向CDN服务器发送接收到的用户请求,并返回所请求的视频内容的往返时间为;因此,结合视频内容流行度,在时间段内发送边缘服务器层未存储的视频内容的能耗表示为:Assume that from the MEC server To CDN server The round-trip time for sending the received user request and returning the requested video content is Therefore, combined with the popularity of video content, Send video content that is not stored in the edge server layer The energy consumption is expressed as:
, ,
其中,表示边缘服务器和CDN服务器之间的发射功率;表示从MEC服务器向CDN服务器发送接收到的用户请求,并返回所请求的视频内容的往返时间;in, Indicates the transmission power between the edge server and the CDN server; Indicates that from the MEC server To CDN server Send the received user request and return the round-trip time of the requested video content;
S4-3、考虑传输所有个视频内容的两种情况,将该过程的能耗定义为:S4-3. Consider transmitting all There are two cases of video content, and the energy consumption of this process is defined as:
, ,
其中,表示视频内容已经被缓存到边缘服务器层;表示视频内容未被缓存到边缘服务器层;in , Indicates video content Already cached at the edge server layer; Indicates video content Not cached at the edge server layer;
S4-4、若请求的视频内容为付费视频内容,采用区块链模块计算共识机制能耗;S4-4. If the requested video content is paid video content, the blockchain module is used to calculate the energy consumption of the consensus mechanism. ;
S4-5、计算总能耗表示为:S4-5. Calculate the total energy consumption as:
。 .
进一步的,对于区块链模块,将所有MEC服务器视为区块链节点;对于每个时隙,在视频用户向MEC服务器请求延迟不敏感的计费视频内容,每个MEC服务器将收集CDN服务器提供给视频用户层的交易数据,并将其发送到区块链系统进行交易数据验证和核算;在共识过程完成后,每个MEC服务器将计费结果发送给视频用户进行检查和支付;Furthermore, for the blockchain module, all MEC servers are considered as blockchain nodes; for each time slot, when a video user requests delay-insensitive billing video content from a MEC server, each MEC server will collect the transaction data provided by the CDN server to the video user layer and send it to the blockchain system for transaction data verification and accounting; after the consensus process is completed, each MEC server will send the billing result to the video user for inspection and payment;
当故障节点数小于时,利用PBFT机制来保证数据的真实性;假设生成或验证签名和消息认证码MAC分别需要和个CPU周期,PBFT共识过程包括以下步骤:When the number of failed nodes is less than When using the PBFT mechanism to ensure the authenticity of the data, it is assumed that the generation or verification of the signature and the message authentication code MAC requires and CPU cycles, the PBFT consensus process includes the following steps:
S5-1、请求阶段;在每个时隙期间,CDN服务器向MEC服务器上传计费视频内容账单,每个MEC服务器向整个网络广播收集的交易信息;主节点由区块链系统随机分配,根据打包超过时间和最大分块大小将交易信息打包到新的分块中;然后在主节点上对签名和MAC进行验证;该过程计算周期表示为:S5-1, request phase: During each time slot, the CDN server uploads the billing video content bill to the MEC server. Each MEC server Broadcast the collected transaction information to the entire network; the master node is randomly assigned by the blockchain system based on the time exceeded for packaging. and maximum chunk size Pack the transaction information into a new block; then verify the signature and MAC on the master node; this process calculates the cycle It is expressed as:
, ,
其中,表示块中的总事务大小,表示事务的平均大小;in, represents the total transaction size in the block, Indicates the average size of a transaction;
S5-2、预准备阶段;在所有交易验证完成后,主节点将丢弃MEC服务器收集的错误交易,并生成独立的签名和个MAC,它们将与新的块一起发送到每个副节点;如果副节点接收到这个新的块,则签名和MAC将被验证;该过程中的计算周期表示为:S5-2, Pre-preparation phase; After all transactions are verified, the master node will discard the erroneous transactions collected by the MEC server and generate independent signatures and MACs will be sent to each secondary node together with the new block; if the secondary node receives this new block, the signature and MAC will be verified; the calculation cycle in this process is expressed as:
, ,
, ,
其中,表示经由MEC服务器接收的正确事务的百分比;表示主节点结算周期,表示副节点计算周期;in, Indicates the percentage of correct transactions received via the MEC server; Indicates the masternode settlement cycle, Indicates the calculation period of the secondary node;
S5-3、准备阶段;如果新的区块和交易已经被验证,副节点生成签名和个MAC,并将它们发送到其他区块链节点;之后,每个节点需要接收并验证个签名和MAC,其中;因此,对于主节点和副节点,该过程的计算周期表示为:S5-3, preparation phase; if the new block and transaction have been verified, the secondary node generates a signature and MACs and send them to other blockchain nodes; each node then needs to receive and verify signatures and MACs, where ; Therefore, for the primary node and the secondary node, the calculation cycle of the process is expressed as:
, ,
; ;
S5-4、确认阶段;如果验证过的节点收到个正确的消息,它们会发送一个签名和个MAC给其他节点;同时,每个节点必须检查个签名和MAC;因此,对于主节点和副节点,计算周期的计算公式为:S5-4, confirmation phase; if the verified node receives A correct message will be sent with a signature and MAC to other nodes; at the same time, each node must check signatures and MACs; therefore, for the master node and the slave node, the calculation formula for the calculation cycle is:
; ;
S5-5、回复阶段;验证节点收集到个有效确认消息后,向主节点发送包含签名和MAC的回复消息,主节点需要检查个签名和MAC;计算周期的计算公式为:S5-5, reply phase; verification nodes collect After a valid confirmation message is sent to the master node, Signature and MAC reply message, the master node needs to check signatures and MACs; the calculation cycle is calculated as:
, ,
; ;
在共识过程中,所有节点都需要验证签名和MAC,这需要更多的计算资源来完成这些计算任务;因此,节点可以选择MEC服务器或CDN服务器来执行计算任务;而且,每个节点根据自己的计算能力需求来选择计算方法;During the consensus process, all nodes need to verify signatures and MACs, which requires more computing resources to complete these computing tasks; therefore, nodes can choose MEC servers or CDN servers to perform computing tasks; moreover, each node chooses the computing method according to its own computing power requirements;
当个节点通过CDN服务器执行计算任务,其余节点选择MEC服务器时,总计算周期如下:when When one node performs computing tasks through the CDN server and the other nodes select the MEC server, the total computing cycle is as follows:
, ,
区块链共识部分能耗计算如下:The energy consumption of the blockchain consensus part is calculated as follows:
, ,
其中,表示 MEC服务器到CDN服务器之间的传输速率,表示 CDN服务器的计算能力,表示 MEC服务器的计算能力,表示 CDN服务器的计算能力,表示 CPU处理器芯片的算力。in, Indicates the transmission rate between the MEC server and the CDN server. Indicates the computing power of the CDN server. Indicates the computing power of the MEC server. Indicates the computing power of the CDN server. Indicates the computing power of the CPU processor chip.
进一步的,最小化所有视频请求的内容访问延迟、流量成本和能耗,将问题公式化为:Furthermore, we minimize the content access delay, traffic cost, and energy consumption of all video requests and formulate the problem as follows:
, ,
其中,是加权因子,用于调整延迟、流量成本和功耗之间的偏好;约束条件P1保证每个请求将仅由一个MEC或CDN服务器服务;约束条件P2表示每个MEC服务器的存储使用不应超过存储容量上限;约束条件P3保证视频用户请求只能由缓存了对应视频内容的MEC或CDN服务器来服务;约束条件P4和约束条件P5将优化变量限制为二进制值。in, is a weighting factor used to adjust the preference between latency, traffic cost, and power consumption; constraint P1 ensures that each request will be served by only one MEC or CDN server; constraint P2 indicates that the storage usage of each MEC server should not exceed the storage capacity limit ; Constraint P3 ensures that video user requests can only be served by MEC or CDN servers that cache the corresponding video content; Constraints P4 and P5 restrict the optimization variables to binary values.
本发明所述的有益效果为:本发明所述的系统,旨在最小化访问延迟、流量成本和缓存能耗,以此来解决互联网视频流量大幅度增长所导致的时延及能耗过高的问题;充分发挥边缘侧MEC服务器的计算和存储能力,赋予MEC服务器智能决策能力;加入区块链技术,解决付费视频内容的安全问题;充分考虑本地命中、邻近命中、CDN命中这三种服务命中情况,减轻CDN服务器负载,并实现协同边缘缓存。The beneficial effects described in the present invention are as follows: the system described in the present invention aims to minimize access delay, traffic cost and cache energy consumption, so as to solve the problems of delay and excessive energy consumption caused by the substantial growth of Internet video traffic; give full play to the computing and storage capabilities of the edge-side MEC server, and give the MEC server intelligent decision-making capabilities; add blockchain technology to solve the security issues of paid video content; fully consider the three service hit situations of local hit, neighboring hit and CDN hit, reduce the CDN server load, and realize collaborative edge caching.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
图1是本发明一种基于区块链的云边端协同视频流缓存系统架构图;FIG1 is an architecture diagram of a cloud-edge-end collaborative video stream caching system based on blockchain according to the present invention;
图2是缓存模块框架图;Fig. 2 is a schematic diagram of a cache module framework;
图3是PBFT共识过程图;Figure 3 is a diagram of the PBFT consensus process;
图4是系统框架图;Figure 4 is a system framework diagram;
图5是系统运行流程图。FIG5 is a flow chart of system operation.
具体实施方式DETAILED DESCRIPTION
为了使本发明的内容更容易被清楚地理解,下面根据具体实施例并结合附图,对本发明作进一步详细的说明。In order to make the contents of the present invention more clearly understood, the present invention is further described in detail below based on specific embodiments in conjunction with the accompanying drawings.
本发明所述的基于区块链的云边端协同视频流缓存系统架构如图1及图4所示,包括网络模块、缓存模块和区块链模块;所述网络模块包括CDN服务器层、边缘服务器层和视频用户层;The cloud-edge-end collaborative video stream caching system architecture based on blockchain of the present invention is shown in FIG1 and FIG4, including a network module, a cache module and a blockchain module; the network module includes a CDN server layer, an edge server layer and a video user layer;
对于CDN服务器层,假设CDN服务器具有足够的存储容量并且拥有所有视频请求的视频内容,CDN服务器层还用于给边缘服务器层提供更多的计算资源;For the CDN server layer, assuming that the CDN server has sufficient storage capacity and has the video content for all video requests, the CDN server layer is also used to provide more computing resources to the edge server layer;
对于边缘服务器层,边缘服务器层包括基站和MEC服务器;MEC服务器提供用于视频缓存的存储容量和用于缓存决策的计算能力,每个MEC服务器有最大存储容量;每个基站服务于其覆盖范围内的本地视频请求;各个基站通过骨干网与远端CDN服务器连接;在5G网络中,基站能够与其他相邻基站通信,每个基站并不是单独工作。此外,每个基站可以通过前向链路从相邻基站检索所请求的视频内容,比如使用高带宽和低延迟CPRI链路进行数据传输。For the edge server layer, the edge server layer includes base stations and MEC servers; MEC servers provide storage capacity for video caching and computing power for caching decisions, and each MEC server has a maximum storage capacity; each base station serves local video requests within its coverage area; each base station is connected to the remote CDN server through the backbone network; in the 5G network, the base station can communicate with other adjacent base stations, and each base station does not work alone. In addition, each base station can retrieve the requested video content from the adjacent base station through the forward link, such as using the high-bandwidth and low-latency CPRI link for data transmission.
边缘服务器层响应视频用户层的请求,具体为:将时间段内对视频内容的请求的数目表示为,二进制变量 用于表示在时间段 内对视频内容的请求是否应当由当前MEC服务器来服务;表示在时间段内视频内容的请求应该由当前MEC服务器来服务;表示当前MEC服务器不存在视频内容,则由相邻MEC服务器或远程CDN服务器提供服务;其中,是指缓存视频内容的MEC服务器或CDN服务器,表示MEC服务器,表示CDN服务器;此外,使用二进制变量来表示视频内容是否被缓存在服务器中,表示视频内容缓存在服务器中;表示视频内容未缓存在服务器中。The edge server layer responds to the request of the video user layer by: Internal video content The number of requests is expressed as , binary variable Used to indicate a period of time Internal video content Whether the request should be served by the current MEC server; Indicates the time period Video content The request should be served by the current MEC server; Indicates that there is no video content on the current MEC server , then the service is provided by the adjacent MEC server or the remote CDN server; , Refers to caching video content MEC server or CDN server, Indicates the MEC server. Indicates the CDN server; in addition, using binary variables To represent the video content Is it cached on the server? middle, Indicates video content Cache on the server middle; Indicates video content Not cached on the server middle.
对于视频用户层,由视频用户组成;假设所有视频内容文件具有相同单位大小,为S比特;规定每个用户视频请求属于一个特定的边缘区域,并且将由其中的基站及MEC服务器提供服务,因此等效地将每个视频用户请求聚集到其对应的基站及MEC服务器。For the video user layer, it is composed of video users; it is assumed that all video content files have the same unit size, which is S bits; it is stipulated that each user video request belongs to a specific edge area and will be served by the base station and MEC server therein, so each video user request is equivalently aggregated to its corresponding base station and MEC server.
对于区块链层,将所有MEC服务器视为区块链节点,保护用户付费视频数据安全。For the blockchain layer, all MEC servers are regarded as blockchain nodes to protect the security of user paid video data.
如图2所示,所述缓存模块包括计算视频内容流行度单元、计算访问延迟单元、计算流量成本单元、计算缓存能耗单元。As shown in FIG. 2 , the cache module includes a video content popularity calculation unit, an access delay calculation unit, a traffic cost calculation unit, and a cache energy consumption calculation unit.
所述计算视频内容流行度单元,具体步骤如下:The specific steps of the video content popularity calculation unit are as follows:
S1-1、将视频用户请求的视频内容按受欢迎程度降序排列,,表示视频内容的流行度,K表示视频的个数;表示第个视频内容具有最高流行度;表示第个视频内容具有最低流行度;并且视频流行度遵循具有衰减参数的Zipf分布,即视频流行度具有周期性变化;S1-1, sorting the video contents requested by the video users in descending order of popularity, , Indicates video content The popularity of , K represents the number of videos; Indicates Video content has the highest popularity; Indicates The video content has the lowest popularity; and the video popularity follows a decay parameter Zipf distribution, that is, the popularity of the video has periodic changes;
S1-2、计算视频内容的请求概率:S1-2. Calculating video content Request probability:
; ;
其中,是控制视频内容流行度的内容请求系数,该系数越大表示视频内容重复使用率越高。in, It is the content request coefficient that controls the popularity of video content. The larger the coefficient is, the higher the reuse rate of video content is.
所述计算访问延迟单元,具体步骤如下:The specific steps of calculating the access delay unit are as follows:
S2-1、当视频请求到达时,如果在本地MEC服务器中找到缓存的视频内容,则本地MEC服务器立即返回缓存的内容,即本地命中;S2-1, when a video request arrives, if the cached video content is found in the local MEC server, the local MEC server immediately returns the cached content, i.e., local hit;
S2-2、如果本地MEC服务器缓存未命中,则转向其相邻MEC服务器以获得相应的缓存内容,该内容存在则返回该内容,即相邻命中;S2-2, if the local MEC server cache does not hit, then turn to its neighboring MEC server to obtain the corresponding cache content, and if the content exists, return the content, that is, neighbor hit;
S2-3、若相邻命中不成立,则本地MEC服务器从CDN服务器获取视频内容以服务于相关缓存请求,即CDN命中;S2-3, if the adjacent hit does not hold, the local MEC server obtains the video content from the CDN server to serve the relevant cache request, i.e., CDN hit;
在边缘缓存环境中,根据不同的缓存命中情况,存在延迟的三个组成部分,包括视频用户层到边缘服务器层的延迟、边缘服务器层之间的延迟、边缘服务器层到CDN服务器层延迟;因视频用户层到边缘服务器层的延迟包含在所有请求中,为方便计算,忽略该延迟;因此,将时间段内的MEC服务器的总传输等待时间表示为:In the edge cache environment, there are three components of delay, depending on the different cache hit situations, including the delay from the video user layer to the edge server layer, the delay between edge server layers, and the delay from the edge server layer to the CDN server layer. Since the delay from the video user layer to the edge server layer is included in all requests, this delay is ignored for the convenience of calculation. Therefore, the time period Total transmission waiting time of MEC servers within It is expressed as:
, ,
其中,表示当前MEC服务器与另一服务器之间的等待时间,表示视频内容集合;表示相邻MEC服务器;表示时间段内对视频内容的请求的数目。in, Indicates the current MEC server and another server The waiting time between Represents a collection of video content; Indicates the adjacent MEC server; Indicates time period Internal video content The number of requests.
所述计算流量成本单元,具体步骤如下:The specific steps of the traffic cost calculation unit are as follows:
S3-1、对于视频访问流量成本,忽略视频用户层到边缘服务器层的成本,计算视频访问流量成本为:S3-1. For the video access traffic cost, ignore the cost from the video user layer to the edge server layer and calculate the video access traffic cost. for:
, ,
其中,表示当前MEC服务器和其余服务器之间的通信量成本;in, Indicates the current MEC server and other servers The communication cost between
S3-2、在每个缓存周期结束时,如果下一时刻要缓存的视频与当前缓存的视频不完全相同,则每个MEC服务器都需要从邻近MEC服务器或CDN服务器获取新的视频,这也将引入额外的流量成本,将此成本表示为视频替换流量成本,计算为:S3-2. At the end of each cache cycle, if the video to be cached at the next moment is not exactly the same as the currently cached video, each MEC server needs to obtain a new video from a neighboring MEC server or CDN server, which will also introduce additional traffic costs. This cost is expressed as the video replacement traffic cost. , calculated as:
, ,
其中表示当前MEC服务器下一个时刻要缓存的视频内容,表示当前MEC服务器当前时刻缓存的视频内容;in Indicates the video content to be cached by the current MEC server at the next moment. Indicates the video content cached by the current MEC server at the current moment;
S3-3、总流量成本表达式为:S3-3, the total flow cost expression is:
。 .
所述计算缓存能耗单元,具体步骤如下:The specific steps of calculating the cache energy consumption unit are as follows:
S4-1、当视频用户请求CDN服务器层中的视频内容时,如果视频内容已经被CDN服务器发送给MEC服务器,则视频内容将从MEC服务器发送给视频用户层,即本地命中或邻近命中;仅考虑内容经由无线下行链路信道从边缘服务器层到视频用户层的传输过程,在时间段内,可用下行链路传输速率为:S4-1. When a video user requests video content in the CDN server layer If the video content Has been sent to the MEC server by the CDN server, then the video content The content is sent from the MEC server to the video user layer, that is, local hits or neighboring hits; only the transmission process of the content from the edge server layer to the video user layer via the wireless downlink channel is considered, in the time period The available downlink transmission rates are:
, ,
其中,表示MEC服务器的发射功率,是在时间段内视频用户与MEC服务器之间的信道增益,是视频用户处的噪声方差;in, Indicates the transmit power of the MEC server. It is in the time period Internal video users With MEC server The channel gain between is the noise variance at the video user;
因此,通过在时间段发送存储在边缘处的视频内容而产生的能耗表示为:Therefore, by Send video content stored at the edge The energy consumption is expressed as:
; ;
S4-2、如果视频内容尚未存储在MEC服务器中,则MEC服务器将从CDN服务器处获得所请求的内容,然后MEC服务器再将视频内容发送到视频用户层,即CDN命中;S4-2. If the video content Not yet stored on the MEC server In the MEC server The requested content will be obtained from the CDN server, and then the MEC server will send the video content Sent to the video user layer, i.e. CDN hit;
此外,利用内容流行度可以大大提高网络缓存性能和视频用户对数据请求的满意度。当视频内容已经被存储在MEC服务器中时,即可以被邻近MEC服务器请求使用,从而降低回程传输的成本;在内容受欢迎程度和能耗之间建立联系,越是热门的视频内容越容易被用户请求,会影响系统能耗的产生;In addition, using content popularity can greatly improve network cache performance and video users' satisfaction with data requests. When video content has been stored in an MEC server, it can be requested by neighboring MEC servers, thereby reducing the cost of backhaul transmission; establishing a connection between content popularity and energy consumption, the more popular the video content, the easier it is to be requested by users, which will affect the generation of system energy consumption;
假设从MEC服务器向CDN服务器发送接收到的用户请求,并返回所请求的视频内容的往返时间为;因此,结合视频内容流行度,在时间段内发送边缘服务器层未存储的视频内容的能耗表示为:Assume that from the MEC server To CDN server The round-trip time for sending the received user request and returning the requested video content is Therefore, combined with the popularity of video content, Send video content that is not stored in the edge server layer The energy consumption is expressed as:
, ,
其中,表示边缘服务器和CDN服务器之间的发射功率;in, Indicates the transmission power between the edge server and the CDN server;
S4-3、考虑传输所有个视频内容的两种情况,将该过程的能耗定义为:S4-3. Consider transmitting all There are two cases of video content, and the energy consumption of this process is defined as:
, ,
其中,表示视频内容已经被缓存到边缘服务器层;表示视频内容未被缓存到边缘服务器层。in , Indicates video content Already cached at the edge server layer; Indicates video content Not cached at the edge server tier.
如图3所示,PBFT共识过程:对于区块链层,将所有MEC服务器视为区块链节点。对于每个时隙,在视频用户向MEC服务器请求延迟不敏感的计费视频内容,每个MEC服务器将收集CDN服务器提供给视频用户层的交易数据,并将其发送到区块链系统进行交易数据验证和核算。在共识过程完成后,每个MEC服务器将计费结果发送给视频用户进行检查和支付。As shown in Figure 3, the PBFT consensus process: For the blockchain layer, all MEC servers are considered blockchain nodes. For each time slot, when the video user requests delay-insensitive billing video content from the MEC server, each MEC server will collect the transaction data provided by the CDN server to the video user layer and send it to the blockchain system for transaction data verification and accounting. After the consensus process is completed, each MEC server sends the billing result to the video user for inspection and payment.
当故障节点数小于时,利用PBFT机制来保证数据的真实性;假设生成或验证签名和消息认证码MAC分别需要和个CPU周期,PBFT共识过程包括以下步骤:When the number of failed nodes is less than When using the PBFT mechanism to ensure the authenticity of the data, it is assumed that the generation or verification of the signature and the message authentication code MAC requires and CPU cycles, the PBFT consensus process includes the following steps:
S5-1、请求阶段;在每个时隙期间,CDN服务器向MEC服务器上传计费视频内容账单,每个MEC服务器向整个网络广播收集的交易信息;主节点由区块链系统随机分配,根据打包超过时间和最大分块大小将交易信息打包到新的分块中;然后在主节点上对签名和MAC进行验证;该过程计算周期表示为:S5-1, request phase: During each time slot, the CDN server uploads the billing video content bill to the MEC server. Each MEC server Broadcast the collected transaction information to the entire network; the master node is randomly assigned by the blockchain system based on the time exceeded for packaging. and maximum chunk size Pack the transaction information into a new block; then verify the signature and MAC on the master node; this process calculates the cycle It is expressed as:
, ,
其中,表示块中的总事务大小,表示事务的平均大小;in, represents the total transaction size in the block, Indicates the average size of a transaction;
S5-2、预准备阶段;在所有交易验证完成后,主节点将丢弃MEC服务器收集的错误交易,并生成独立的签名和个MAC,它们将与新的块一起发送到每个副节点;如果副节点接收到这个新的块,则签名和MAC将被验证;该过程中的计算周期表示为:S5-2, Pre-preparation phase; After all transactions are verified, the master node will discard the erroneous transactions collected by the MEC server and generate independent signatures and MACs will be sent to each secondary node together with the new block; if the secondary node receives this new block, the signature and MAC will be verified; the calculation cycle in this process is expressed as:
, ,
, ,
其中,表示经由MEC服务器接收的正确事务的百分比;in, Indicates the percentage of correct transactions received via the MEC server;
S5-3、准备阶段;如果新的区块和交易已经被验证,副节点生成签名和个MAC,并将它们发送到其他区块链节点;之后,每个节点需要接收并验证个签名和MAC,其中;因此,对于主节点和副节点,该过程的计算周期表示为:S5-3, preparation phase; if the new block and transaction have been verified, the secondary node generates a signature and MACs and send them to other blockchain nodes; each node then needs to receive and verify signatures and MACs, where ; Therefore, for the primary node and the secondary node, the calculation cycle of the process is expressed as:
, ,
; ;
S5-4、确认阶段;如果验证过的节点收到个正确的消息,它们会发送一个签名和个MAC给其他节点;同时,每个节点必须检查个签名和MAC;因此,对于主节点和副节点,计算周期的计算公式为:S5-4, confirmation phase; if the verified node receives A correct message will be sent with a signature and MAC to other nodes; at the same time, each node must check signatures and MACs; therefore, for the master node and the slave node, the calculation formula for the calculation cycle is:
; ;
S5-5、回复阶段;验证节点收集到个有效确认消息后,向主节点发送包含签名和MAC的回复消息,主节点需要检查个签名和MAC;计算周期的计算公式为:S5-5, reply phase; verification nodes collect After a valid confirmation message is sent to the master node, Signature and MAC reply message, the master node needs to check signatures and MACs; the calculation cycle is calculated as:
, ,
; ;
在共识过程中,所有节点都需要验证签名和MAC,这需要更多的计算资源来完成这些计算任务。因此,节点可以选择MEC服务器或CDN服务器来执行计算任务;而且,每个节点根据自己的计算能力需求来选择计算方法。当个节点通过CDN服务器执行计算任务,其余节点选择MEC服务器时,总计算周期如下:During the consensus process, all nodes need to verify signatures and MACs, which requires more computing resources to complete these computing tasks. Therefore, nodes can choose MEC servers or CDN servers to perform computing tasks; moreover, each node chooses a computing method based on its own computing power requirements. When one node performs computing tasks through the CDN server and the other nodes select the MEC server, the total computing cycle is as follows:
, ,
区块链共识部分能耗计算如下:The energy consumption of the blockchain consensus part is calculated as follows:
, ,
其中,表示 MEC服务器到CDN服务器之间的传输速率,表示 CDN服务器的计算能力,表示 MEC服务器的计算能力,表示 CDN服务器的计算能力,表示 CPU处理器芯片的算力;in, Indicates the transmission rate between the MEC server and the CDN server. Indicates the computing power of the CDN server. Indicates the computing power of the MEC server. Indicates the computing power of the CDN server. Indicates the computing power of the CPU processor chip;
因此,本系统的总能耗表示为:Therefore, the total energy consumption of this system is expressed as:
。 .
本系统旨在最小化所有视频请求的内容访问延迟、流量成本和能耗,将问题公式化为:This system aims to minimize the content access delay, traffic cost and energy consumption of all video requests, and formulates the problem as follows:
, ,
其中,是加权因子,用于调整延迟、流量成本和功耗之间的偏好;约束条件P1保证每个请求将仅由一个MEC或CDN服务器服务;约束条件P2表示每个MEC服务器的存储使用不应超过存储容量上限;约束条件P3保证视频用户请求只能由缓存了对应视频内容的MEC或CDN服务器来服务;约束条件P4和约束条件P5将优化变量限制为二进制值。in, is a weighting factor used to adjust the preference between latency, traffic cost, and power consumption; constraint P1 ensures that each request will be served by only one MEC or CDN server; constraint P2 indicates that the storage usage of each MEC server should not exceed the storage capacity limit ; Constraint P3 ensures that video user requests can only be served by MEC or CDN servers that cache the corresponding video content; Constraints P4 and P5 restrict the optimization variables to binary values.
本发明所述的系统运行流程如图5所示:The system operation process of the present invention is shown in Figure 5:
步骤1、采集数据;获取边缘缓存系统的覆盖范围内各视频用户在对应MEC服务器触发的视频内容请求;Step 1: Collect data; obtain the video content requests triggered by each video user in the coverage area of the edge cache system on the corresponding MEC server;
步骤2、缓存计算:MEC服务器的缓存模块同步计算请求的视频内容流行度、三种命中状态的访问延迟、流量成本及缓存能耗;Step 2: Cache calculation: The cache module of the MEC server synchronously calculates the popularity of the requested video content, the access delay of the three hit states, the traffic cost, and the cache energy consumption;
步骤3、上链计算:由区块链模块进一步计算付费视频内容的共识部分额外能耗;Step 3: On-chain calculation: The blockchain module further calculates the additional energy consumption of the consensus part of the paid video content;
步骤4、求解;联合优化访问延迟、流量成本和能耗,将优化问题建模,基于MAPPO算法框架求解优化问题。Step 4: Solve; jointly optimize access delay, traffic cost and energy consumption, model the optimization problem, and solve the optimization problem based on the MAPPO algorithm framework.
步骤5、输出缓存决策;系统控制各MEC服务器执行对应的动作决策。Step 5: Output cache decision; the system controls each MEC server to execute the corresponding action decision.
基于步骤4,本发明建立基于多智能体的近端策略优化算法结构的边缘缓存策略,具体步骤如下:Based on step 4, the present invention establishes an edge caching strategy based on a multi-agent proximal strategy optimization algorithm structure, and the specific steps are as follows:
步骤4-1、策略设计;Step 4-1, strategy design;
将多智能体强化学习的场景设置为MEC服务器自身获取视频内容访问延迟、流量成本和能耗最小,在该系统中,每个MEC服务器所做出的决策都是基于使自身获得更小的访问延迟、流量成本和能耗所做出的,而根据决策做出的动作会造成环境改变,从而会影响其他MEC服务器获得内容的时延;强化学习将问题抽象为马尔可夫过程,此过程中最重要的三个元素为状态、动作和回报;动作就是每一个任务中MEC服务器做出的选择,状态是做出选择的基础,回报则是评价动作好不好的基础。The scenario of multi-agent reinforcement learning is set to minimize the access delay, traffic cost and energy consumption of the MEC server to obtain video content. In this system, the decisions made by each MEC server are based on obtaining smaller access delays, traffic costs and energy consumption for itself, and the actions made based on the decisions will cause environmental changes, which will affect the delay of other MEC servers obtaining content. Reinforcement learning abstracts the problem into a Markov process. The three most important elements in this process are state, action and reward. Action is the choice made by the MEC server in each task, state is the basis for making choices, and reward is the basis for evaluating whether the action is good or not.
本发明提出的基于多智能体的近端策略优化的边缘缓存策略是基于部分可观测的马尔可夫决策过程的;每个MEC服务器只能观测到自身的视频请求内容和周围MEC服务器的缓存情况;每个MEC服务器根据自己的观测结果可以自主的选择是否缓存请求的视频内容以及获得视频请求内容的方式;请求内容的方式有三种,本地MEC服务器、邻近MEC服务器和远程CDN服务器的方式;每个MEC服务器采取的动作都会对其他MEC服务器的观测结果产生影响;本策略的目标是使请求的视频内容在交付过程中获得最小的访问延迟、流量成本和能耗;在该策略中,MEC服务器在时隙采取动作后会获得系统给予的即时奖励;如果视频内容请求不能在有效时延内获得请求的内容,系统会给予该MEC服务器惩罚;MEC服务器的回报是从某一时刻开始到计算回报时MEC服务器获得的所有奖励的加权和,MEC服务器的回报依赖于从这一时隙开始的所有的动作。The edge caching strategy based on multi-agent proximal strategy optimization proposed in the present invention is based on a partially observable Markov decision process; each MEC server can only observe its own video request content and the cache status of surrounding MEC servers; each MEC server can independently choose whether to cache the requested video content and the way to obtain the video request content according to its own observation results; there are three ways to request content, namely, local MEC server, neighboring MEC server and remote CDN server; the actions taken by each MEC server will affect the observation results of other MEC servers; the goal of this strategy is to minimize the access delay, traffic cost and energy consumption of the requested video content during the delivery process; in this strategy, the MEC server will obtain an immediate reward given by the system after taking an action in the time slot; if the video content request cannot obtain the requested content within the effective delay, the system will punish the MEC server; the reward of the MEC server is the weighted sum of all rewards obtained by the MEC server from a certain moment to the time when the reward is calculated, and the reward of the MEC server depends on all actions starting from this time slot.
步骤4-2、训练分布式执行的多智能体近端策略优化框架;方法如下:Step 4-2: Train a distributed multi-agent proximal policy optimization framework; the method is as follows:
多智能体近端策略优化框架是基于部分可观测的马尔可夫决策过程的,每个MEC服务器有自己的策略网络,CDN服务器有G个价值网络,每个价值网络对应一个MEC服务器,本发明采用的学习算法结构是集中式训练-分布式执行;The multi-agent proximal policy optimization framework is based on a partially observable Markov decision process. Each MEC server has its own policy network, and the CDN server has G value networks. Each value network corresponds to a MEC server. The learning algorithm structure adopted by the present invention is centralized training-distributed execution.
通过策略可以将MEC服务器的观测结果映射到有效的动作空间;在每个时隙,MEC服务器会根据自己的观测结果和策略选择合适的动作,价值网络是用来估计每个MEC服务器的状态-动作函数的,每个MEC服务器执行自己的策略网络选择的动作之后,都会把动作以及从环境的反馈、对当前环境的观测结果和获得的奖励发送给CDN服务器,然后在CDN服务器上训练价值网络的参数,价值网络的输出会发送给对应的MEC服务器的策略网络,用来训练策略网络的参数。The observation results of the MEC server can be mapped to the effective action space through the strategy; in each time slot, the MEC server will select the appropriate action according to its own observation results and strategy. The value network is used to estimate the state-action function of each MEC server. After each MEC server executes the action selected by its own policy network, it will send the action as well as the feedback from the environment, the observation results of the current environment and the reward obtained to the CDN server, and then train the parameters of the value network on the CDN server. The output of the value network will be sent to the policy network of the corresponding MEC server to train the parameters of the policy network.
步骤4-3,基于多智能体强化学习的边缘缓存算法描述,描述如下:Step 4-3, the edge cache algorithm based on multi-agent reinforcement learning is described as follows:
步骤4-3-1,初始化状态空间,每个MEC服务器的目标策略网络,主价值网络和主策略网络的参数,MEC服务器的个数,MEC服务器的最大缓存容量,视频内容集合,采样批次大小;Step 4-3-1, initialize the state space, the target policy network of each MEC server, the parameters of the main value network and the main policy network, the number of MEC servers, the maximum cache capacity of the MEC servers, the video content set, and the sampling batch size;
步骤4-3-2,初始化一个随机过程以便进行探索,初始化接收的状态空间;Step 4-3-2, initialize a random process for exploration and initialize the received state space;
步骤4-3-3,按照Zipf分布获得视频内容的请求概率并且按照概率请求内容;Step 4-3-3, obtaining the request probability of the video content according to the Zipf distribution and requesting the content according to the probability;
步骤4-3-4,每个MEC服务器根据自己的策略网络选择动作并执行;Step 4-3-4, each MEC server selects an action and executes it according to its own policy network;
步骤4-3-5,执行动作后判断缓存的视频内容是否超出缓存容量,若超出,则删除缓存模块中请求概率较低的视频内容,并获得环境奖励和新的观测空间,将每个MEC服务器的当前状态、执行动作、奖励、下一状态存入对应的经验回放池中;Step 4-3-5: After executing the action, determine whether the cached video content exceeds the cache capacity. If it exceeds, delete the video content with a lower request probability in the cache module, obtain the environmental reward and the new observation space, and store the current state, execution action, reward, and next state of each MEC server in the corresponding experience replay pool;
步骤4-3-6,将新的环境观测空间赋值给原来的观测结果,从经验回放池中随机选择数据,每个MEC服务器根据公式更新策略网络的参数和价值网络的参数,并更新每个MEC服务器的目标网络的参数。Step 4-3-6, assign the new environmental observation space to the original observation results, randomly select data from the experience replay pool, and each MEC server updates the parameters of the policy network and the value network according to the formula, and updates the parameters of the target network of each MEC server.
以上所述仅为本发明的优选方案,并非作为对本发明的进一步限定,凡是利用本发明说明书及附图内容所作的各种等效变化均在本发明的保护范围之内。The above description is only a preferred embodiment of the present invention and is not intended to be a further limitation of the present invention. All equivalent changes made using the contents of the present specification and drawings are within the protection scope of the present invention.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311084846.3A CN116828226B (en) | 2023-08-28 | 2023-08-28 | Cloud edge end collaborative video stream caching system based on block chain |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311084846.3A CN116828226B (en) | 2023-08-28 | 2023-08-28 | Cloud edge end collaborative video stream caching system based on block chain |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116828226A true CN116828226A (en) | 2023-09-29 |
| CN116828226B CN116828226B (en) | 2023-11-10 |
Family
ID=88139527
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311084846.3A Active CN116828226B (en) | 2023-08-28 | 2023-08-28 | Cloud edge end collaborative video stream caching system based on block chain |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116828226B (en) |
Citations (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190020657A1 (en) * | 2017-07-13 | 2019-01-17 | Dell Products, Lp | Method and apparatus for optimizing mobile edge computing for nomadic computing capabilities as a service |
| US20200008044A1 (en) * | 2019-09-12 | 2020-01-02 | Intel Corporation | Multi-access edge computing service for mobile user equipment method and apparatus |
| US20200007414A1 (en) * | 2019-09-13 | 2020-01-02 | Intel Corporation | Multi-access edge computing (mec) service contract formation and workload execution |
| CN110928678A (en) * | 2020-01-20 | 2020-03-27 | 西北工业大学 | A resource allocation method for blockchain system based on mobile edge computing |
| CN111901392A (en) * | 2020-07-06 | 2020-11-06 | 北京邮电大学 | Mobile edge computing-oriented content deployment and distribution method and system |
| CN112637908A (en) * | 2021-03-08 | 2021-04-09 | 中国人民解放军国防科技大学 | Fine-grained layered edge caching method based on content popularity |
| KR102260781B1 (en) * | 2020-04-29 | 2021-06-03 | 홍익대학교세종캠퍼스산학협력단 | Integration System of Named Data Networking-based Edge Cloud Computing for Internet of Things |
| KR102271371B1 (en) * | 2020-12-24 | 2021-06-30 | 전남대학교산학협력단 | Super-Resolution Streaming Video Delivery System Based-on Mobile Edge Computing for Network Traffic Reduction |
| CN113225584A (en) * | 2021-03-24 | 2021-08-06 | 西安交通大学 | Cross-layer combined video transmission method and system based on coding and caching |
| US20210329075A1 (en) * | 2020-04-16 | 2021-10-21 | Verizon Patent And Licensing Inc. | Content consumption measurement for digital media using a blockchain |
| KR102367568B1 (en) * | 2020-12-14 | 2022-02-24 | 숙명여자대학교산학협력단 | Contents caching system in cooperative MEC based on user similarity, and method thereof |
| US20220109713A1 (en) * | 2019-06-28 | 2022-04-07 | Samsung Electronics Co., Ltd. | Content distribution server and method |
| KR102391956B1 (en) * | 2020-11-26 | 2022-04-28 | 주식회사 그리드위즈 | Coalitional Method for Optimization of Computing Offloading in Multiple Access Edge Computing (MEC) supporting Non-Orthogonal Multiple Access (NOMA) |
| US20220224776A1 (en) * | 2022-04-01 | 2022-07-14 | Kshitij Arun Doshi | Dynamic latency-responsive cache management |
| CN114760311A (en) * | 2022-04-22 | 2022-07-15 | 南京邮电大学 | Optimized service caching and calculation unloading method for mobile edge network system |
| US20220353801A1 (en) * | 2021-04-29 | 2022-11-03 | International Business Machines Corporation | Distributed multi-access edge service delivery |
| CN115720237A (en) * | 2022-11-14 | 2023-02-28 | 华南理工大学 | Buffering and resource scheduling method for edge network adaptive bitrate video |
| CN116056156A (en) * | 2022-12-08 | 2023-05-02 | 重庆大学 | A MEC-assisted collaborative caching system supporting adaptive bitrate video |
| US20230164237A1 (en) * | 2020-04-10 | 2023-05-25 | Lenovo (Beijing) Ltd. | Methods and apparatus for managing caching in mobile edge computing systems |
| WO2023108718A1 (en) * | 2021-12-16 | 2023-06-22 | 苏州大学 | Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network |
| US20230208659A1 (en) * | 2021-12-29 | 2023-06-29 | POSTECH Research and Business Development Foundation | Blockchain apparatus and method for mobile edge computing |
| CN116546021A (en) * | 2023-06-12 | 2023-08-04 | 重庆邮电大学 | Agent policy learning method with privacy protection in mobile edge calculation |
| CN116566838A (en) * | 2023-06-14 | 2023-08-08 | 重庆邮电大学 | A method for task offloading and content caching of Internet of Vehicles in collaboration with blockchain and edge computing |
-
2023
- 2023-08-28 CN CN202311084846.3A patent/CN116828226B/en active Active
Patent Citations (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190020657A1 (en) * | 2017-07-13 | 2019-01-17 | Dell Products, Lp | Method and apparatus for optimizing mobile edge computing for nomadic computing capabilities as a service |
| US20220109713A1 (en) * | 2019-06-28 | 2022-04-07 | Samsung Electronics Co., Ltd. | Content distribution server and method |
| US20200008044A1 (en) * | 2019-09-12 | 2020-01-02 | Intel Corporation | Multi-access edge computing service for mobile user equipment method and apparatus |
| US20200007414A1 (en) * | 2019-09-13 | 2020-01-02 | Intel Corporation | Multi-access edge computing (mec) service contract formation and workload execution |
| CN110928678A (en) * | 2020-01-20 | 2020-03-27 | 西北工业大学 | A resource allocation method for blockchain system based on mobile edge computing |
| US20230164237A1 (en) * | 2020-04-10 | 2023-05-25 | Lenovo (Beijing) Ltd. | Methods and apparatus for managing caching in mobile edge computing systems |
| US20210329075A1 (en) * | 2020-04-16 | 2021-10-21 | Verizon Patent And Licensing Inc. | Content consumption measurement for digital media using a blockchain |
| KR102260781B1 (en) * | 2020-04-29 | 2021-06-03 | 홍익대학교세종캠퍼스산학협력단 | Integration System of Named Data Networking-based Edge Cloud Computing for Internet of Things |
| CN111901392A (en) * | 2020-07-06 | 2020-11-06 | 北京邮电大学 | Mobile edge computing-oriented content deployment and distribution method and system |
| KR102391956B1 (en) * | 2020-11-26 | 2022-04-28 | 주식회사 그리드위즈 | Coalitional Method for Optimization of Computing Offloading in Multiple Access Edge Computing (MEC) supporting Non-Orthogonal Multiple Access (NOMA) |
| KR102367568B1 (en) * | 2020-12-14 | 2022-02-24 | 숙명여자대학교산학협력단 | Contents caching system in cooperative MEC based on user similarity, and method thereof |
| KR102271371B1 (en) * | 2020-12-24 | 2021-06-30 | 전남대학교산학협력단 | Super-Resolution Streaming Video Delivery System Based-on Mobile Edge Computing for Network Traffic Reduction |
| CN112637908A (en) * | 2021-03-08 | 2021-04-09 | 中国人民解放军国防科技大学 | Fine-grained layered edge caching method based on content popularity |
| CN113225584A (en) * | 2021-03-24 | 2021-08-06 | 西安交通大学 | Cross-layer combined video transmission method and system based on coding and caching |
| US20220353801A1 (en) * | 2021-04-29 | 2022-11-03 | International Business Machines Corporation | Distributed multi-access edge service delivery |
| WO2023108718A1 (en) * | 2021-12-16 | 2023-06-22 | 苏州大学 | Spectrum resource allocation method and system for cloud-edge collaborative optical carrier network |
| US20230208659A1 (en) * | 2021-12-29 | 2023-06-29 | POSTECH Research and Business Development Foundation | Blockchain apparatus and method for mobile edge computing |
| US20220224776A1 (en) * | 2022-04-01 | 2022-07-14 | Kshitij Arun Doshi | Dynamic latency-responsive cache management |
| CN114760311A (en) * | 2022-04-22 | 2022-07-15 | 南京邮电大学 | Optimized service caching and calculation unloading method for mobile edge network system |
| CN115720237A (en) * | 2022-11-14 | 2023-02-28 | 华南理工大学 | Buffering and resource scheduling method for edge network adaptive bitrate video |
| CN116056156A (en) * | 2022-12-08 | 2023-05-02 | 重庆大学 | A MEC-assisted collaborative caching system supporting adaptive bitrate video |
| CN116546021A (en) * | 2023-06-12 | 2023-08-04 | 重庆邮电大学 | Agent policy learning method with privacy protection in mobile edge calculation |
| CN116566838A (en) * | 2023-06-14 | 2023-08-08 | 重庆邮电大学 | A method for task offloading and content caching of Internet of Vehicles in collaboration with blockchain and edge computing |
Non-Patent Citations (6)
| Title |
|---|
| WENQIAN ZHANG 等: "Learning-based joint service caching and load balancing for MEC blockchain networks", 《IEEE》 * |
| YIMING LIU 等: "Decentralized Resource Allocation for Video Transcoding and Delivery in Blockchain-Based System With Mobile Edge Computing", 《IEEE》 * |
| 刘佳迪: "动态边缘网络中的资源分配和缓存服务框架", 《中国优秀博士论文电子期刊》 * |
| 张平;李世林;刘宜明;秦晓琦;许晓东;: "区块链赋能的边缘异构计算系统中资源调度研究", 通信学报, no. 10 * |
| 李佳;谢人超;贾庆民;黄韬;刘韵洁;孙礼;: "面向视频流的MEC缓存转码联合优化研究", 电信科学, no. 08 * |
| 武继刚;刘同来;李境一;黄金瑶;: "移动边缘计算中的区块链技术研究进展", 计算机工程, no. 08 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116828226B (en) | 2023-11-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Fang et al. | DRL-driven joint task offloading and resource allocation for energy-efficient content delivery in cloud-edge cooperation networks | |
| CN112218337B (en) | A cache policy decision method in mobile edge computing | |
| CN108834080B (en) | Distributed caching and user association method based on multicast technology in heterogeneous networks | |
| CN101018172A (en) | A method for optimizing the P2P transfer in the MAN | |
| CN110290507B (en) | Caching strategy and spectrum allocation method of D2D communication auxiliary edge caching system | |
| CN113225584B (en) | Cross-layer combined video transmission method and system based on coding and caching | |
| CN109673018B (en) | A Novel Content Cache Distribution Optimization Method in Wireless Heterogeneous Networks | |
| CN108600998B (en) | Cache optimization decision method for ultra-density cellular and D2D heterogeneous converged network | |
| CN116260871A (en) | Independent task unloading method based on local and edge collaborative caching | |
| CN112052198B (en) | Hash routing collaborative caching method based on node betweenness popularity under energy monitoring platform | |
| CN116916390A (en) | An edge collaborative cache optimization method and device combined with resource allocation | |
| CN108541025B (en) | Wireless heterogeneous network-oriented base station and D2D common caching method | |
| CN108616845A (en) | D2D grouping multiple target caching methods based on social content and its system, device | |
| Kabir et al. | Energy-aware caching and collaboration for green communication systems. | |
| CN107949007A (en) | A kind of resource allocation algorithm based on Game Theory in wireless caching system | |
| Rui et al. | Content collaborative caching strategy in the edge maintenance of communication network: A joint download delay and energy consumption method | |
| CN113473540B (en) | Mixed caching method based on base station cooperation in heterogeneous cellular network | |
| CN110062356B (en) | A Layout Method of Cached Replicas in D2D Networks | |
| CN116828226B (en) | Cloud edge end collaborative video stream caching system based on block chain | |
| Lin et al. | Performance and implications of RAN caching in LTE mobile networks: A real traffic analysis | |
| Fang et al. | Offloading strategy for edge computing tasks based on cache mechanism | |
| CN112911614A (en) | Cooperative coding caching method based on dynamic request D2D network | |
| Sun et al. | Distributed caching in wireless cellular networks incorporating parallel processing | |
| CN117998420A (en) | DAG-based fragmented cascading distributed 6G network joint optimization method and device | |
| Liu et al. | Proactive data caching and replacement in the edge computing environment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |