CN121116404A - Data caching method and device, nonvolatile storage medium and electronic equipment - Google Patents

Data caching method and device, nonvolatile storage medium and electronic equipment

Info

Publication number
CN121116404A
Authority
CN
China
Prior art keywords
data
index data
cache
user plane
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511194969.1A
Other languages
Chinese (zh)
Inventor
李望发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Intelligent Network Technology Co ltd
Original Assignee
China Telecom Intelligent Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Intelligent Network Technology Co ltd filed Critical China Telecom Intelligent Network Technology Co ltd
Priority to CN202511194969.1A priority Critical patent/CN121116404A/en
Publication of CN121116404A publication Critical patent/CN121116404A/en
Pending legal-status Critical Current

Landscapes

  • Telephonic Communication Services (AREA)

Abstract

The application discloses a data caching method and device, a nonvolatile storage medium, and electronic equipment. The method comprises: obtaining performance index data of a user plane functional entity; analyzing the performance index data using an online learning model to obtain cache quota index data output by the online learning model; and applying the cache quota index data to a cache policy of the user plane functional entity. The application solves the technical problem of wasted computing and storage resources caused by the inability of related technologies to adaptively and dynamically adjust the cache policy of the user plane functional entity.

Description

Data caching method and device, nonvolatile storage medium and electronic equipment
Technical Field
The present application relates to the field of wireless communication networks, and in particular, to a data caching method and apparatus, a nonvolatile storage medium, and an electronic device.
Background
Reduced Capability (RedCap) is a 5G technology for internet of things (IoT) devices that aims to provide a more efficient connection solution for devices with medium bandwidth demands. Its application background mainly originates from the optimal utilization of 5G network resources and the need to support the diversity of internet of things equipment. With the deployment of 5G networks and the popularization of internet of things devices, more and more devices need to access the network, but the performance and requirements of these devices differ. Some internet of things devices (e.g., wearable devices, industrial sensors) have low demands on bandwidth and computing power but high demands on battery life and connection stability. Traditional 5G device designs may be overly complex and resource-consuming, and are not suitable for these low-power, low-cost devices. RedCap technology enables these medium-performance internet of things devices to access 5G networks more efficiently by reducing the bandwidth, computing power, and power consumption requirements of the devices. This not only reduces the manufacturing cost of the device but also extends battery life, improving the overall efficiency and capacity of the network.
In the scenario of an Update Buffering Action Rule (Update BAR) carrying downlink buffering duration information (DL Buffering Duration IE), the user plane function (User Plane Function, UPF) needs to buffer downlink user plane packets. When the gNB (RAN) informs the UPF of the downlink buffer status of the current terminal equipment (UE) through the Update BAR request, the UPF temporarily buffers the UE's downlink data packets according to the request and the buffering duration specified in the DL Buffering Duration IE, instead of immediately discarding or forwarding them.
However, for the buffer allocation of the UPF, the relevant buffer policy is set based on static configuration or experience, which makes it difficult to adapt to dynamic changes in network state. If the maximum buffer number is set too large, for example when the network load suddenly increases or user behavior patterns change, resources are wasted or a UPF performance bottleneck arises; if it is set too small, the cache hit rate drops, the transmission of the UE's downlink traffic is affected, and packets are discarded once the maximum buffer number is exceeded. The setting of the maximum number of caches does not account for adaptive adjustment. In addition, different UPFs require different maximum cache numbers, and fixing the maximum cache number is not flexible enough across time periods or application scenarios.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The application provides a data caching method and device, a nonvolatile storage medium and electronic equipment, which at least solve the technical problem of waste of calculation and storage resources caused by the fact that the related technology cannot adaptively and dynamically adjust the caching strategy of a user plane functional entity.
According to one aspect of the application, a data caching method is provided, which comprises the steps of obtaining performance index data of a user plane functional entity, analyzing the performance index data by utilizing an online learning model to obtain cache quota index data output by the online learning model, and applying the cache quota index data to a cache policy of the user plane functional entity.
Optionally, before the performance index data of the user plane functional entity is obtained, the method further comprises the steps of receiving uplink data sent by the terminal equipment, and training the online learning model according to the historical cache quota index data of the user plane functional entity on the uplink data and the historical performance index data of the user plane functional entity.
Optionally, training the online learning model comprises the steps of obtaining a multi-objective optimization function, determining the multi-objective optimization function as a loss function, and adjusting internal parameters of the online learning model based on the loss function and an iterative optimization algorithm until a preset stopping condition is met, wherein the preset stopping condition comprises the online learning model outputting predicted cache quota index data corresponding to optimal performance index data.
Optionally, the cache quota index data is applied to a cache policy of the user plane functional entity, and the method comprises the steps of receiving uplink data sent by the terminal equipment, forwarding the uplink data of the terminal equipment to a network through an N6 interface under the condition that the terminal equipment is in a connected state, receiving downlink data returned by the network and forwarding the downlink data to the terminal equipment, applying the cache quota index data to the cache policy of the user plane functional entity under the condition that the terminal equipment is in a downlink buffer state, and caching the downlink data returned by the network based on the cache policy.
Optionally, after the buffer quota index data is applied to the buffer policy of the user plane functional entity, the method further comprises sending the downlink data buffered based on the buffer policy to the terminal device after the connection with the N3 interface of the base station is reestablished.
Optionally, the downlink buffer status includes that the data receiving buffer of the terminal device reaches a saturated status or the terminal device cannot receive data.
Optionally, after the cache quota index data is applied to the cache policy of the user plane functional entity, the method further comprises updating model parameters of the online learning model according to the cache quota index data and the performance index data after the cache policy is adjusted.
According to still another aspect of the present application, there is further provided a data caching apparatus, including an obtaining module configured to obtain performance index data of a user plane functional entity, an analyzing module configured to analyze the performance index data by using an online learning model to obtain cache quota index data output by the online learning model, and an application module configured to apply the cache quota index data to a cache policy of the user plane functional entity.
According to still another aspect of the present application, there is also provided a nonvolatile storage medium including a stored program, wherein the program controls a device in which the storage medium is located to execute the above data caching method when running.
According to still another aspect of the present application, there is also provided an electronic device including a memory and a processor for running a program stored in the memory, wherein the program executes the above data caching method when running.
According to yet another aspect of the present application, there is also provided a computer program, wherein the computer program implements the above data caching method when executed by a processor.
According to yet another aspect of the present application, there is also provided a computer program product comprising a non-volatile computer readable storage medium, wherein the non-volatile computer readable storage medium stores a computer program which, when executed by a processor, implements the above data caching method.
The application adopts the mode of acquiring the performance index data of the user plane functional entity, analyzing the performance index data by utilizing an online learning model to obtain the cache quota index data output by the online learning model, and applying the cache quota index data to the cache strategy of the user plane functional entity, thereby achieving the purpose of self-adapting dynamic adjustment of the cache strategy of the user plane functional entity, realizing the technical effects of saving resources and improving the transmission efficiency of the data, and further solving the technical problem of waste of calculation and storage resources caused by the fact that the related technology cannot self-adapting dynamic adjustment of the cache strategy of the user plane functional entity.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a data caching method according to an embodiment of the application;
FIG. 2 is an interactive schematic diagram according to an embodiment of the application;
FIG. 3 is a diagram of a network architecture according to an embodiment of the present application;
FIG. 4 is a schematic diagram of model parameter optimization in accordance with an embodiment of the present application;
FIG. 5 is a flow chart of another data caching method according to an embodiment of the application;
FIG. 6 is a block diagram of a data caching apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of a hardware structure of a computer terminal of a data buffering method according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, there is provided a method embodiment of a data caching method, it being noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as a set of computer executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than that illustrated herein.
FIG. 1 is a flow chart of a data caching method according to an embodiment of the application, as shown in FIG. 1, the method includes the following steps:
step S102, obtaining the performance index data of the user plane functional entity.
Among these, the performance index data includes, but is not limited to, the following:

  • CPU utilization: reflects how heavily the UPF processor is used, helping to judge whether processing capacity is sufficient and whether a performance bottleneck exists.
  • Memory utilization: monitors the occupation of UPF memory resources and the remaining available memory, avoiding packet-processing delay or discarding caused by memory overflow.
  • Packet loss rate: the proportion of packets not successfully delivered during network transmission; an important indicator of network stability and service quality.
  • Average transmission delay: the average time from sending to receiving a packet, reflecting network transmission speed and directly affecting user perception and service performance.
  • Cache hit ratio: how often UPF cached data can be successfully read and used; a high hit ratio means the cache policy is effective and reduces the need to read data from outside.
  • Traffic throughput: the amount of data the network can process per unit time, reflecting the carrying capacity and transmission efficiency of the network.
  • Connection state information: the connection state of the terminal device, such as whether it is in Idle state, and the UE buffer state changes notified by the gNB.
  • Network load: the data transmission volume of the whole network and the density of data flows processed by the UPF, used to judge how busy the network is.
  • Service quality parameters: network QoS indicators related to a specific service, including transmission rate, delay, packet loss rate, and so on.
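As an illustrative sketch only (the application does not prescribe a data layout, and all field names here are assumptions), the collected indicators might be grouped into one monitoring sample that can be flattened into a feature vector for the learning model:

```python
from dataclasses import dataclass

@dataclass
class UpfMetrics:
    """One monitoring sample of UPF performance indicators (hypothetical names)."""
    cpu_util: float          # CPU utilization, 0.0-1.0
    mem_util: float          # memory utilization, 0.0-1.0
    loss_rate: float         # packet loss rate, 0.0-1.0
    avg_delay_ms: float      # average transmission delay, milliseconds
    hit_rate: float          # cache hit ratio, 0.0-1.0
    throughput_mbps: float   # traffic throughput, Mbit/s

    def as_vector(self):
        """Flatten the sample into a feature vector for the online model."""
        return [self.cpu_util, self.mem_util, self.loss_rate,
                self.avg_delay_ms, self.hit_rate, self.throughput_mbps]
```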
Step S104, analyzing the performance index data by using the online learning model to obtain cache quota index data output by the online learning model.
The online learning model in step S104 can process the UPF performance data in real time and, through algorithmic calculation and model iteration, dynamically predict the most suitable cache quota index data. The cache quota index data includes, but is not limited to, the following:

  • Maximum number of cached users: the maximum number of terminal devices the UPF can effectively buffer for at the same time. Setting it requires comprehensively considering factors such as the UPF's hardware resources (e.g., memory), the current network load, and service quality requirements.
  • Cache packet size per user: limits the maximum number or size of data packets allowed when buffering data for each terminal device. A reasonable cache packet size avoids the resource waste of over-buffering while preventing the frequent retransmissions caused by under-buffering, thereby improving data transmission efficiency and user experience.
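A minimal sketch of such a prediction, under the assumption of a simple linear model (the application does not specify the model internals; weights, limits, and names below are illustrative): the raw model outputs are clamped against hardware-derived bounds so the quota never exceeds UPF capacity.

```python
def predict_cache_quota(features, w_users, w_pkts, limits):
    """Map a UPF feature vector to (max cached users, packets per user).

    A deliberately simple linear sketch: `w_users`/`w_pkts` are per-feature
    weights and `limits` holds hardware-derived caps, all hypothetical.
    """
    raw_users = sum(w * x for w, x in zip(w_users, features))
    raw_pkts = sum(w * x for w, x in zip(w_pkts, features))
    # Clamp so the quota stays within what the UPF hardware can actually hold.
    users = max(1, min(limits["max_users"], int(raw_users)))
    pkts = max(1, min(limits["max_pkts_per_user"], int(raw_pkts)))
    return users, pkts
```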
Step S106, the cache quota index data is applied to the cache policy of the user plane functional entity.
In step S106, the UPF adaptively adjusts the number of users and the size of the buffer packet of each user according to the latest buffer quota index data provided by the model, so as to improve the buffer efficiency and the utilization rate of network resources to the maximum extent on the premise of ensuring the service quality.
The steps can realize dynamic and intelligent adjustment of the caching strategy, and effectively solve the problems of flexibility and efficiency of the fixed caching strategy. The method not only reduces the phenomenon of data retransmission and discarding caused by improper allocation of cache resources, but also improves the overall transmission effect and user satisfaction of the network.
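The adjustment of step S106 can be sketched as a small policy object that enforces the two quota values when admitting downlink packets (class and method names are illustrative, not from the application):

```python
class BufferPolicy:
    """Enforces the cache quota: max buffered UEs and packets per UE."""

    def __init__(self, max_users, pkts_per_user):
        self.max_users = max_users
        self.pkts_per_user = pkts_per_user
        self._buffers = {}  # ue_id -> list of buffered packets

    def apply_quota(self, max_users, pkts_per_user):
        """Adopt a new quota predicted by the online learning model."""
        self.max_users = max_users
        self.pkts_per_user = pkts_per_user

    def buffer_packet(self, ue_id, pkt):
        """Buffer one downlink packet; return False if the quota rejects it."""
        if ue_id not in self._buffers and len(self._buffers) >= self.max_users:
            return False  # user quota exhausted
        queue = self._buffers.setdefault(ue_id, [])
        if len(queue) >= self.pkts_per_user:
            return False  # per-user packet quota exhausted
        queue.append(pkt)
        return True
```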
The steps shown in fig. 1 are exemplarily illustrated and explained below.
According to some optional embodiments of the application, before the performance index data of the user plane functional entity is obtained, the method further comprises the steps of receiving uplink data sent by the terminal equipment, and training the online learning model according to the historical cache quota index data of the user plane functional entity on the uplink data and the historical performance index data of the user plane functional entity.
The uplink data includes, but is not limited to, various multimedia content, control information, and application data; it is transferred from the UE to the gNB over the radio link, then sent to the UPF through the N3 interface, and finally exchanged with an external data network (DN) through the N6 interface.
The online learning model allows the algorithm to learn and update model parameters while receiving new data without retraining the entire model. Unlike traditional batch learning, which requires training a model once after collecting a large amount of data, online learning models can learn in real time or continuously, adapting to changes in data flow, making them particularly suitable for scenes where large-scale, real-time or streaming data is processed. A key advantage of online learning is that it can quickly react to new information, adjusting the predictive model to reflect the latest data trends and patterns. In the environment that the data distribution changes with time, the real-time updating capability is particularly important, so that the model can be ensured to be kept in an optimal state, and the prediction precision and the decision efficiency are improved. For example, in a network traffic management scenario, the dynamic nature of the data requires that the model be able to adapt quickly to optimize performance and avoid outdated predictions.
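The incremental-update idea described above can be illustrated with a minimal pure-Python linear regressor trained one sample at a time by stochastic gradient descent, rather than retrained in batch (the application does not prescribe a specific algorithm, so this is only one plausible instance):

```python
class OnlineLinearModel:
    """Linear regressor updated incrementally, one (features, target) pair at a time."""

    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        """One SGD step on the squared error; no batch retraining required."""
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
        return err
```

Because each `update` call touches only the current sample, the model can follow a non-stationary data stream, which is the property the passage above highlights for network traffic management.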
Optionally, training the online learning model comprises the steps of obtaining a multi-objective optimization function, determining the multi-objective optimization function as a loss function, and adjusting internal parameters of the online learning model based on the loss function and an iterative optimization algorithm until a preset stopping condition is met, wherein the preset stopping condition comprises the online learning model outputting predicted cache quota index data corresponding to optimal performance index data.
The acquisition of the multi-objective optimization function refers to establishing a mathematical expression or algorithm framework for simultaneously considering and quantifying the comprehensive performance of a plurality of performance indexes under a specific caching strategy in the process of training an online learning model. Specifically, the historical cache quota index data includes specific values such as a maximum number of cache users and a cache packet size of each user set by the UPF in a certain period. The historical performance index data covers the actual running condition of the UPF when executing the caching strategy, such as CPU utilization rate, memory utilization rate, packet loss rate, average delay, traffic throughput, cache hit rate and the like. The multi-objective optimization function is used for comprehensively considering all the performance indexes, and judging the quality degree of the network running state recorded by different historical data under the same buffer quota condition.
In an actual network environment, the performance indexes are mutually dependent and conflict, for example, increasing the cache may increase the cache hit rate and reduce the delay, but at the same time, the burden of the CPU and the memory may be increased, which results in an increase in the packet loss rate and a decrease in the throughput. Therefore, a simple single objective function cannot fully reflect the comprehensive effect of the caching strategy. The multi-objective optimization function evaluates the impact of different historical performance data on the overall performance of the network under a specific cache quota by defining multiple objective functions, such as minimizing a weighted sum of delay and packet loss rate, while maximizing traffic throughput and cache hit rate.
Determining the multi-objective optimization function as the loss function means that model training makes the value of this function as small as possible, i.e., the cache quota strategy predicted by the model approaches the historically best performance index data. Through this multi-objective optimization, the model learns not only the optimal solution under a single performance index but also the balance point among all key performance indexes, so that the UPF can adapt more flexibly and efficiently to dynamic network changes and provide better connection experience and service quality for internet of things devices with medium bandwidth demands, especially in RedCap scenarios.
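One plausible concrete form of such a function, shown purely as an illustration (the weights and dictionary keys are assumptions, not taken from the application), is a weighted sum that penalizes delay and packet loss while rewarding throughput and cache hit rate:

```python
def multi_objective_loss(metrics, weights):
    """Lower is better: penalize delay and loss, reward throughput and hits.

    `metrics` and `weights` are dicts keyed by illustrative indicator names.
    """
    return (weights["delay"] * metrics["avg_delay_ms"]
            + weights["loss"] * metrics["loss_rate"]
            - weights["throughput"] * metrics["throughput_mbps"]
            - weights["hit"] * metrics["hit_rate"])
```

A network state with low delay, low loss, high throughput, and a high hit rate then scores lower (better) than a congested one under the same weighting.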
Fig. 2 is a schematic diagram of interaction according to an embodiment of the present application, and a specific interaction flow between UE, gNB, UPF and DN as shown in fig. 2 is as follows.
As shown in fig. 2, the UPF receives uplink data sent by the terminal device and, when the terminal device is in a connected state, forwards the uplink data to the DN through the N6 interface; the UPF then receives the downlink data returned by the DN and forwards it to the UE through the gNB. When the terminal device is in a downlink buffer state, the UPF applies the cache quota index data to the cache policy of the user plane functional entity and buffers the downlink data returned by the network based on that cache policy. The downlink buffer state includes the data receiving buffer of the terminal device reaching saturation, or the terminal device being unable to receive data.
In this embodiment, when the UPF receives the uplink data sent by the terminal device, the UPF forwards the uplink data to the network under the condition that the terminal device is perceived to be in a connected state. The UPF receives the downlink data from the network and forwards the downlink data to the terminal equipment.
In case that the terminal device is perceived to be in a downlink buffer state, in order to avoid data loss or repeated transmission, the UPF enables a buffer mechanism, and temporarily stores downlink data returned by the network in its own buffer, instead of immediately forwarding the downlink data to the terminal device. The downlink buffer status is that the data receiving buffer of the terminal device has reached a saturated state, i.e. cannot accommodate more data, or cannot receive data temporarily for some reasons (e.g. the terminal device is in a power saving mode, the signal is poor).
In order to effectively manage caching, the UPF uses the cache quota index data output by the online learning model, applies the cache quota index to its own cache policy, decides which data packets need to be cached, how long to cache, and how to reasonably schedule and distribute the data when the terminal device is restarted. The caching strategy dynamically adjusts the caching quota index according to the real-time network performance index and the behavior mode of the terminal equipment so as to optimize the use efficiency of network resources. For example, if more terminal devices in the network are found to be in a downlink buffer state at the same time, the UPF may reduce the buffer quota to avoid excessively fast saturation of the buffer space, otherwise, when the network load is low, the buffer quota may be increased appropriately to ensure timeliness and integrity of data transmission.
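The load-dependent behaviour just described, shrink the quota when many UEs buffer simultaneously and grow it under light load, can be sketched as a simple rule (the thresholds and scaling factors are illustrative assumptions, not values from the application):

```python
def adjust_quota(current_quota, buffering_ratio, hw_cap, high=0.7, low=0.2):
    """Adapt the cache quota to the fraction of UEs in downlink-buffer state.

    buffering_ratio: share of attached UEs currently buffering, 0.0-1.0.
    hw_cap: hardware-derived upper bound on the quota.
    """
    if buffering_ratio > high:
        return max(1, current_quota // 2)       # avoid saturating cache space
    if buffering_ratio < low:
        return min(hw_cap, current_quota * 2)   # spare capacity: buffer more
    return current_quota
```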
Optionally, after the buffer quota index data is applied to the buffer policy of the user plane function entity, the following steps may be further performed, where after the connection with the N3 interface of the base station is reestablished, the downlink data buffered based on the buffer policy is sent to the terminal device.
Specifically, when the UE disconnects from the network (e.g., enters an idle state or is forced to be offline due to signal loss), the UPF caches downlink data to be transmitted to the UE in the network according to a preset caching policy, so as to avoid loss or retransmission of the data due to unreachable UE in the transmission process, thereby reducing consumption of network resources and improving data transmission efficiency. Once the N3 interface connection between the UE and the base station is re-established, i.e. the UE is re-online and ready to receive data, the UPF detects this state change. At this time, the UPF will extract the downlink data reserved for the UE from the cache according to the previous cache policy, and send the data to the UE through the N3 interface, so as to ensure that the UE can seamlessly continue its communication task or service reception, and no data loss or experience interruption will occur due to the previous disconnection.
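A sketch of this cache-and-flush behaviour on N3 re-establishment (the class is hypothetical; a real UPF operates on GTP-U tunnels and PFCP state, not in-memory Python queues):

```python
from collections import deque

class DownlinkCache:
    """Holds downlink packets for disconnected UEs until N3 is re-established."""

    def __init__(self):
        self._pending = {}  # ue_id -> FIFO queue of packets

    def buffer(self, ue_id, pkt):
        """Park a downlink packet for a UE that is currently unreachable."""
        self._pending.setdefault(ue_id, deque()).append(pkt)

    def on_n3_reestablished(self, ue_id):
        """Drain and return the UE's packets in arrival order once it is back online."""
        queue = self._pending.pop(ue_id, deque())
        return list(queue)
```

Draining in arrival order preserves the ordering guarantee the passage above relies on: the UE resumes its service reception seamlessly, with no loss from the earlier disconnection.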
Fig. 3 is a network architecture diagram according to an embodiment of the present application; the data caching method shown in fig. 1 may be applied to this network architecture. The gNB is the network node in the 5G network that directly communicates with the UE and is responsible for the UE's radio access and signal strength management. The gNB monitors the status of the UE, such as whether the UE is in Idle mode and its signal quality, and reports this information to the 5G core network components. The gNB interacts with the UPF through the N3 interface and is mainly responsible for forwarding user plane data. When the UE cannot receive downlink data (e.g., enters Idle mode), the gNB notifies the UPF of the UE's downlink buffer status through the Update BAR request, triggering the UPF's buffering mechanism.
The gNB communicates with the AMF through an N2 interface, and the AMF is responsible for access control, mobile management and session management of the UE. The gNB reports the connection status and mobility information of the UE to the AMF so that the AMF makes corresponding access and mobility management decisions. The AMF is responsible for access control and mobility management of the UE, and the SMF manages sessions and data flows of the UE. The AMF and the SMF interact through an N11 interface, and the AMF notifies the SMF to establish a session when the UE accesses the network and cooperates with the SMF in the moving process of the UE so as to ensure the seamless migration of the session between different network nodes. The SMF communicates with the PCF via the N7 interface to obtain network policies and rules, such as QoS policies, charging rules, etc., for application in session management of the UE. The PCF is used as a policy decision center to provide policy information for the SMF so as to ensure reasonable allocation and use of network resources.
The SMF and the UPF interact through the N4 interface, the SMF is responsible for session management including session establishment, modification and release, and the UPF is responsible for data transmission and processing. The SMF may send instructions, such as defining a data forwarding path, setting QoS parameters, controlling a data flow, etc., to the UPF through the N4 interface according to the session requirements of the UE and the network policy. The UPF applies the instructions to data processing and forwarding according to the instructions, so that data transmission is ensured to meet the session requirements and network policies of the UE.
The UPF interacts with the DN (data network) through the N6 interface and is responsible for forwarding data packets from the DN to the UE and transmitting packets generated by the UE back to the DN. The N6 interface is the boundary between the 5G network and the external data network; the UPF performs packet routing and forwarding through this interface, ensuring data transmission between the UE and external services. In the present application, in the RedCap scenario, the UPF dynamically adjusts the caching strategy according to the online learning model, so that even if the UE cannot immediately receive the data, the data can be cached properly and sent through the N3 interface after the UE reconnects, thereby optimizing network resources and data transmission efficiency.
Fig. 4 is a schematic diagram of model parameter optimization according to an embodiment of the present application, and after the buffer quota index data is applied to the buffer policy of the user plane functional entity, as shown in fig. 4, a step of updating the model parameters of the online learning model according to the buffer quota index data and the performance index data after the buffer policy is adjusted may be further performed.
Specifically, after the cache policy is adjusted, performance index data related to the current cache quota, such as CPU utilization, memory usage, packet loss rate, average latency, traffic throughput, and cache hit rate, is collected. Meanwhile, cache quota index data before adjustment is recorded, wherein the cache quota index data comprises parameters such as the number of cache users and the size of a cache packet.
And evaluating the influence of the cache strategy adjustment on the network performance by comparing the performance index data before and after adjustment. For example, if after adjusting the cache quota, the cache hit rate is increased, the packet loss rate is reduced, and meanwhile, good CPU and memory use efficiency is maintained, the adjustment is considered to be effective, which is helpful for optimizing network resource use. Based on the evaluation result, a feedback value is calculated, wherein the feedback value reflects the quality degree of the network performance under the new cache policy, and can be the output of a multi-objective optimization function, such as the weighted sum of a plurality of performance indexes or the absolute value or the change rate of a certain key performance index.
The online learning model uses the collected cache quota index data and feedback values (that is, the adjustment effects) as training data, and adjusts its model parameters through an iterative optimization algorithm. The goal of this adjustment is to enable the model, in future predictions, to better predict cache quota indices suited to the current network state, so as to achieve optimal network performance.
The updated model parameters are applied to the online learning model and then influence the real-time adjustment of the cache policy. The model continuously monitors the network state and predicts new cache quotas based on the updated parameters, realizing dynamic optimization. The parameter optimization process runs cyclically: the actual effect of each policy adjustment is collected and evaluated again and used to update the model parameters in the next round, achieving continuous fine-tuning and optimization of both the model and the cache policy.
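The application does not fix a model family or optimizer; as one possible instance of the iterative optimization described above, the sketch below updates a tiny linear model with per-sample gradient steps. The class name, feature choice, learning rate, and target quota are all illustrative assumptions:

```python
import numpy as np

class OnlineQuotaModel:
    """Tiny online linear model: performance indices -> cache quota.

    Each `update` call performs one SGD step on a squared-error loss,
    nudging predictions toward quotas that scored well (high feedback
    value) under the observed network state.
    """
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x + self.b)

    def update(self, x, target_quota):
        # Gradient of 0.5 * (prediction - target)^2 w.r.t. w and b.
        err = self.predict(x) - target_quota
        self.w -= self.lr * err * x
        self.b -= self.lr * err
        return err

model = OnlineQuotaModel(n_features=3)
x = np.array([0.7, 0.6, 0.02])  # e.g. CPU, memory, packet loss rate
for _ in range(200):            # repeated rounds of the feedback cycle
    model.update(x, target_quota=512.0)  # packets per user, illustrative
```

After a few hundred rounds the prediction for this state converges close to the quota that the feedback rewarded, which is the "continuous fine-tuning" loop described above in miniature.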
FIG. 5 is a flowchart of another data caching method according to an embodiment of the present application, as shown in FIG. 5, the method includes the steps of:
In step S501, performance index data is collected. The UPF monitors and collects key performance indices in the network in real time; these indices are the basis for subsequent online learning model training and cache policy adjustment.
Step S502, training the online learning model with a small amount of data. Through continuous iteration and optimization, the online learning model can quickly adapt to changes in the network state and make reasonable predictions even when the data set is relatively small. The purpose of model training is to find the relation between the performance indices and the optimal cache policy so as to realize intelligent adjustment.
In step S503, the UE disconnects. When the UE disconnects from the network, for example because it enters a low-power mode, moves beyond signal coverage, actively disconnects, or is disconnected by a network failure, the UPF is triggered to perform adaptive cache policy adjustment.
Step S504, adaptively adjusting the cache quota indices according to the online learning model so as to maximize UPF performance. Based on the prediction results of the online learning model, the UPF adaptively adjusts the cache quota indices, including the maximum number of cached users and the cached packet size per user. The aim of the adjustment is to ensure that data can be cached effectively while the UE is disconnected, while avoiding resource waste and improving UPF forwarding performance and network transmission efficiency. The adjustment process is dynamic and is optimized according to the real-time condition of the network.
In step S505, the UE reconnects. When the UE re-establishes a connection with the network, for example when it wakes from a low-power mode, moves into an area with good signal, or recovers from a network failure, the UE reconnects to the gNB.
Step S506, sending the cached data over the N3 interface. The UPF sends the cached data packets to the UE through the N3 interface, which is the communication interface between the UPF and the gNB used to carry user data.
In step S507, the UE receives data. The UE receives the buffered data, resumes normal communication and service, and completes the entire buffer optimization cycle from UE disconnection to data reception.
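Steps S503 to S507 can be sketched as a small buffering cycle: cache downlink packets while the UE is disconnected, bounded by the model-predicted quota, then flush on reconnect. The class and field names are illustrative, not from this application:

```python
from collections import deque

class UpfBuffer:
    """Quota-bounded downlink buffer for one disconnected UE."""
    def __init__(self, max_packets):
        self.max_packets = max_packets  # quota predicted by the model
        self.queue = deque()
        self.dropped = 0

    def on_downlink(self, pkt, ue_connected):
        if ue_connected:
            return [pkt]                # forward immediately over N3
        if len(self.queue) < self.max_packets:
            self.queue.append(pkt)      # cache during disconnection
        else:
            self.dropped += 1           # quota exhausted: packet lost
        return []

    def on_reconnect(self):
        # Flush everything cached while disconnected (sent via N3).
        sent, self.queue = list(self.queue), deque()
        return sent

buf = UpfBuffer(max_packets=3)
for i in range(5):                      # 5 downlink packets arrive while disconnected
    buf.on_downlink(f"pkt{i}", ue_connected=False)
delivered = buf.on_reconnect()          # ["pkt0", "pkt1", "pkt2"]; 2 dropped
```

A too-small quota drops packets (as here), while an oversized one idles memory; this trade-off is exactly what the adaptive adjustment in step S504 tunes.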
By introducing this adaptive cache adjustment mechanism based on online learning, the UPF can automatically and intelligently adjust its cache policy in real time according to the current network state and user demand, without depending on manual configuration by operation and maintenance personnel. Specifically, the online learning algorithm enables the UPF to analyze network performance indices in real time, including CPU utilization, memory usage, and packet loss rate, and to predict optimal cache parameters from these data, so as to maintain optimal cache performance under various conditions. The dynamically adjusted cache policy ensures full utilization of cache resources, avoiding the resource idling caused by over-provisioned caching while preventing the service interruption caused by insufficient caching. The adaptive cache policy reduces invalid processing of data packets, improves the data forwarding efficiency of the UPF, ensures smooth, low-delay data transmission, and improves the service quality of the whole network. In RedCap scenarios, especially for internet-of-things devices with medium bandwidth requirements, the optimized cache policy significantly improves the reliability and efficiency of downlink data transmission and enhances the network experience of such devices.
Fig. 6 is a block diagram of a data caching apparatus according to an embodiment of the present application, as shown in fig. 6, the apparatus includes:
And the obtaining module 62 is configured to obtain performance index data of the user plane functional entity.
The analysis module 64 is configured to analyze the performance index data by using the online learning model, so as to obtain cache quota index data output by the online learning model.
An application module 66, configured to apply the cache quota index data to a cache policy of the user plane function entity.
Optionally, before the performance index data of the user plane functional entity is obtained, the method further comprises: receiving uplink data sent by the terminal device; and training the online learning model according to the historical cache quota index data of the user plane functional entity for the uplink data and the historical performance index data of the user plane functional entity.
Optionally, training the online learning model comprises: obtaining a multi-objective optimization function, wherein the multi-objective optimization function is used to quantitatively evaluate the relative quality of different historical performance index data under the same target historical cache quota index data; determining the multi-objective optimization function as a loss function; and adjusting internal parameters of the online learning model based on the loss function and an iterative optimization algorithm until a preset stopping condition is met, wherein the preset stopping condition comprises the online learning model outputting predicted cache quota index data corresponding to optimal performance index data.
Optionally, applying the cache quota index data to the cache policy of the user plane functional entity specifically comprises the following steps: receiving uplink data sent by the terminal device; when it is determined that the terminal device is in a connected state, forwarding the uplink data of the terminal device to the network through the N6 interface; receiving downlink data returned by the network and forwarding the downlink data to the terminal device; when it is determined that the terminal device is in a downlink buffering state, applying the cache quota index data to the cache policy of the user plane functional entity; and caching the downlink data returned by the network based on the cache policy.
Optionally, after the cache quota index data is applied to the cache policy of the user plane functional entity, the following step may be further performed: after the N3 interface connection with the base station is re-established, sending the downlink data cached based on the cache policy to the terminal device.
Optionally, the downlink buffering state includes the data receiving buffer of the terminal device reaching saturation, or the terminal device being unable to receive data.
Optionally, after the cache quota index data is applied to the cache policy of the user plane functional entity, the method further comprises the step of updating model parameters of the online learning model according to the cache quota index data and the performance index data after the cache policy is adjusted.
It should be noted that each module in fig. 6 may be a program module (for example, a set of program instructions implementing a specific function) or a hardware module; in the latter case, each module may be implemented as, but is not limited to, one processor, or the functions of several modules may be implemented by one processor.
It should be noted that, the preferred implementation manner of the embodiment shown in fig. 6 may refer to the related description of the embodiment shown in fig. 1, which is not repeated herein.
Fig. 7 shows a block diagram of the hardware structure of a computer terminal for implementing the data caching method. As shown in fig. 7, the computer terminal 70 may include one or more processors 702 (shown as 702a, 702b, ..., 702n), a memory 704 for storing data, and a transmission module 706 for communication functions. The computer terminal 70 may further include a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. Those of ordinary skill in the art will appreciate that the configuration shown in fig. 7 is merely illustrative and does not limit the configuration of the electronic device described above. For example, the computer terminal 70 may include more or fewer components than shown in fig. 7, or have a different configuration than shown in fig. 7.
It should be noted that the one or more processors 702 and/or other data processing circuits described above may be referred to herein generally as a "data processing circuit". The data processing circuit may be embodied in whole or in part as software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or may be incorporated in whole or in part into any of the other elements of the computer terminal 70. As referred to in embodiments of the application, the data processing circuit acts as a kind of processor control (for example, selection of a variable-resistance termination path connected to the interface).
The memory 704 may be used to store software programs and modules of application software, such as the program instructions/data storage device corresponding to the data caching method in the embodiments of the present application; the processor 702 executes the software programs and modules stored in the memory 704, thereby performing various functional applications and data processing, that is, implementing the data caching method described above. The memory 704 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 704 may further include memory located remotely from the processor 702, which may be connected to the computer terminal 70 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 706 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the computer terminal 70. In one example, the transmission module 706 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In another example, the transmission module 706 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 70.
It should be noted here that, in some alternative embodiments, the computer terminal shown in fig. 7 may include hardware elements (including circuits), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that fig. 7 is only one specific example and is intended to illustrate the types of components that may be present in the computer terminal described above.
It should be noted that the computer terminal shown in fig. 7 is configured to execute the data caching method shown in fig. 1, so the foregoing explanation of the method also applies to this electronic device and is not repeated here.
The embodiment of the application also provides a nonvolatile storage medium, which comprises a stored program, wherein the program controls the equipment where the storage medium is located to execute the data caching method when running.
The nonvolatile storage medium stores a program that performs the following functions: acquiring performance index data of a user plane functional entity; analyzing the performance index data using an online learning model to obtain cache quota index data output by the online learning model; and applying the cache quota index data to a cache policy of the user plane functional entity.
The embodiment of the application also provides the electronic equipment, which comprises a memory and a processor, wherein the processor is used for running the program stored in the memory, and the data caching method is executed when the program runs.
The processor is configured to run a program that performs the following functions: acquiring performance index data of a user plane functional entity; analyzing the performance index data using an online learning model to obtain cache quota index data output by the online learning model; and applying the cache quota index data to a cache policy of the user plane functional entity.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the above embodiment of the present application, the collected information is information and data authorized by the user or sufficiently authorized by each party, and the processes of collection, storage, use, processing, transmission, provision, disclosure, application, etc. of the related data all comply with the related laws and regulations and standards, necessary protection measures are taken without violating the public welfare, and corresponding operation entries are provided for the user to select authorization or rejection.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; the division into units may, for example, be a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the related art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The storage medium includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations shall also fall within the scope of protection of the present application.

Claims (11)

1. A data caching method, applied to a user plane functional entity, comprising: acquiring performance index data of the user plane functional entity; analyzing the performance index data using an online learning model to obtain cache quota index data output by the online learning model; and applying the cache quota index data to a cache policy of the user plane functional entity.

2. The method according to claim 1, wherein before acquiring the performance index data of the user plane functional entity, the method further comprises: receiving uplink data sent by a terminal device; and training the online learning model according to historical cache quota index data of the user plane functional entity for the uplink data and historical performance index data of the user plane functional entity.

3. The method according to claim 2, wherein training the online learning model comprises: acquiring a multi-objective optimization function, wherein the multi-objective optimization function is used to quantitatively evaluate the relative quality of different historical performance index data under the same target historical cache quota index data; and determining the multi-objective optimization function as a loss function, and adjusting internal parameters of the online learning model based on the loss function and an iterative optimization algorithm until a preset stopping condition is met, wherein the preset stopping condition comprises: the online learning model outputting predicted cache quota index data corresponding to optimal performance index data.

4. The method according to claim 1, wherein applying the cache quota index data to the cache policy of the user plane functional entity comprises: receiving uplink data sent by a terminal device; when it is determined that the terminal device is in a connected state, forwarding the uplink data of the terminal device to a network through an N6 interface; receiving downlink data returned by the network, and forwarding the downlink data to the terminal device; when it is determined that the terminal device is in a downlink buffering state, applying the cache quota index data to the cache policy of the user plane functional entity; and caching the downlink data returned by the network based on the cache policy.

5. The method according to claim 4, wherein after applying the cache quota index data to the cache policy of the user plane functional entity, the method further comprises: after the N3 interface connection with the base station is re-established, sending the downlink data cached based on the cache policy to the terminal device.

6. The method according to claim 4, wherein the downlink buffering state comprises: the data receiving buffer of the terminal device reaching saturation, or the terminal device being unable to receive data.

7. The method according to claim 1, wherein after applying the cache quota index data to the cache policy of the user plane functional entity, the method further comprises: updating model parameters of the online learning model according to the cache quota index data and performance index data obtained after the cache policy is adjusted.

8. A data caching apparatus, comprising: an acquisition module, configured to acquire performance index data of a user plane functional entity; an analysis module, configured to analyze the performance index data using an online learning model to obtain cache quota index data output by the online learning model; and an application module, configured to apply the cache quota index data to a cache policy of the user plane functional entity.

9. A non-volatile storage medium, comprising a stored program, wherein when the program runs, a device on which the non-volatile storage medium resides is controlled to execute the data caching method according to any one of claims 1 to 7.

10. An electronic device, comprising: a memory and a processor, the processor being configured to run a program stored in the memory, wherein the program, when running, executes the data caching method according to any one of claims 1 to 7.

11. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the data caching method according to any one of claims 1 to 7.
CN202511194969.1A 2025-08-25 2025-08-25 Data caching method and device, nonvolatile storage medium and electronic equipment Pending CN121116404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202511194969.1A CN121116404A (en) 2025-08-25 2025-08-25 Data caching method and device, nonvolatile storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN121116404A true CN121116404A (en) 2025-12-12

Family

ID=97950702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202511194969.1A Pending CN121116404A (en) 2025-08-25 2025-08-25 Data caching method and device, nonvolatile storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN121116404A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination