
CN117858165A - A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment - Google Patents

A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment Download PDF

Info

Publication number
CN117858165A
CN117858165A (application CN202311712133.7A)
Authority
CN
China
Prior art keywords
vehicle
task
unloading
tasks
delay
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311712133.7A
Other languages
Chinese (zh)
Inventor
邹洋
熊能
蒋溢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202311712133.7A priority Critical patent/CN117858165A/en
Publication of CN117858165A publication Critical patent/CN117858165A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/08Load balancing or load distribution
    • H04W28/09Management thereof
    • H04W28/0958Management thereof based on metrics or performance parameters
    • H04W28/0967Quality of Service [QoS] parameters
    • H04W28/0975Quality of Service [QoS] parameters for reducing delays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W28/00Network traffic management; Network resource management
    • H04W28/02Traffic management, e.g. flow control or congestion control
    • H04W28/08Load balancing or load distribution
    • H04W28/09Management thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract


The present invention discloses a V2V-based vehicle group collaborative computing offloading strategy in a vehicular network environment, belonging to the field of edge computing. It comprises the following steps: 1) in a vehicular network in which vehicles form a consist, tasks of different types are divided into several subtasks according to size, and service vehicles within the consist are identified according to the local delay of the task; 2) a state transition matrix is constructed from vehicle history data, and a Markov-chain-based vehicle outlier rate prediction model is established; 3) according to each vehicle's computing requirement, outlier rate prediction, task size, and communication link bandwidth, a delay model for vehicle task offloading is constructed and an objective function is established; 4) the objective function of step 3 is solved with a greedy algorithm to obtain a subtask pre-offloading strategy, which is then dynamically adjusted by a MAB-model-based learning computation offloading algorithm to obtain the optimal task offloading decision. The collaborative computing offloading strategy of the present invention effectively improves V2V computation offloading efficiency in the vehicular environment and optimizes the use of computing resources.

Description

V2V-based vehicle group collaborative computing unloading strategy in vehicle-mounted network environment
Technical Field
The invention belongs to the technical field of vehicle-mounted edge network computing and unloading, and particularly relates to a V2V-based vehicle group collaborative computing and unloading strategy in a vehicle-mounted network environment.
Background
With the explosive development of wireless communication technology and the Internet of Vehicles, more and more vehicles are equipped with on-board units that have communication, storage, computing, and human-machine interaction capabilities. Many intelligent transportation applications run on these on-board units, and a significant proportion of them demand substantial computing resources and energy; these are known as computation-intensive applications. For such applications, the computing power of a single on-board unit may be insufficient.
The advent of vehicular edge computing (VEC) provides a new way to solve this problem. Compared with mobile edge computing (MEC), VEC changes the terminal from a personal mobile device to a vehicle moving at high speed. By deploying the computing resources of the core network at the edge of the vehicular network, vehicles within the coverage of a roadside unit (RSU) can offload part or all of their tasks to MEC servers deployed at the RSU for better computing service, a manner known as vehicle-to-infrastructure (V2I) offloading. Another approach is to offload tasks to other vehicles with spare computing capability, known as vehicle-to-vehicle (V2V) offloading. V2I computation offloading relies on a large number of fixed edge nodes and therefore inevitably incurs high deployment and operating overhead. Compared with V2I offloading, V2V offloading can fully utilize the idle computing resources of other vehicles and relieve the load on roadside MEC servers.
However, conventional V2V-based computation offloading schemes in vehicular networks face two problems: 1) the task offloading delay caused by heterogeneous computing capabilities and diverse vehicle tasks in the vehicular network is not fully considered; 2) the stability problem caused by dynamic fluctuation of the available computing resources under high-speed vehicle movement is not fully considered.
Therefore, the invention provides a V2V-based vehicle group collaborative computing offloading strategy in a vehicle-mounted network environment that remedies these defects of existing vehicle-mounted computation offloading schemes.
Disclosure of Invention
In view of the above, the present invention aims to provide a V2V-based vehicle group collaborative computing offloading strategy in a vehicle-mounted network environment. The strategy obtains the optimal subtask allocation through a MAB-model-based learning computation offloading algorithm, improves vehicle-to-vehicle (V2V) computation offloading efficiency, optimizes the use of computing resources, and effectively reduces the total task completion delay. At the same time, it jointly considers the unexpected departure of consist members during task offloading, so the method suits highly mobile, complex vehicle-mounted network environments and manages inter-vehicle computation task offloading more efficiently;
in order to achieve the above purpose, the present invention provides the following technical solutions:
the V2V collaborative computing unloading strategy based on the vehicle group in the vehicle-mounted network environment is characterized by comprising the following steps:
s1, in a vehicle-mounted network of a vehicle group, dividing different types of tasks into a plurality of subtasks according to the size, and searching for service vehicles in the group according to the local delay of the tasks;
s2, constructing a state transition matrix according to vehicle history data, and establishing a vehicle outlier prediction model based on a Markov chain;
s3, constructing a time delay model for unloading the vehicle task and establishing an objective function according to the calculation requirement, the outlier rate prediction, the task size, the bandwidth of a communication link and the like of each vehicle;
s4, solving the objective function in the step S3 by using a greedy algorithm to obtain a subtask pre-unloading strategy; dynamically adjusting by adopting a learning type calculation unloading algorithm based on the MAB model to obtain an optimal task unloading decision;
further, the step S1 specifically includes the following steps:
s11, classifying the task types of the vehicle into four types of image processing, video processing, interactive game and augmented reality, wherein L= (L) A ,L A ,L A ,L A ) A representation; each type is divided into a plurality of subtasks according to different sizes, and the number of each subtask of 4 kinds of tasks is represented as N A ,N B ,N C ,N D The total number of subtasks is N;
s12, calculating local delay of a task according to the calculation workload of the vehicle and the maximum calculation resource, and dividing the vehicle into a task vehicle and a service vehicle;
calculating local task delays for each vehicleWherein->Is the calculation workload of the vehicle v in the time slot t, F v Representing the maximum computing resources of vehicle v. When each vehicle is locally delayed +>Less than the task-specified deadline->When the vehicle is executed as a service vehicle (SeV); otherwise the vehicle acts as a mission vehicle (TaV).
Further, the step S2 specifically includes the following steps:
s21, extracting key features such as vehicle operation records and failure rate according to historical operation data of the vehicle, and defining all states in a Markov chain model: state 1 (normal vehicle operation), state 2 (potential vehicle failure), state 3 (vehicle disconnect); and initializing a markov chain model state distribution:
π=[π 1 ,π 2 ,π 3 ]
wherein pi is i The probability that the initial moment is in state i is represented;
s22, calculating a state transition matrix according to the features provided in the step S21:
wherein P is 11 、P 22 、P 33 Probability of remaining in states 1, 2, 3, respectively, P ij Representing the probability of transitioning from state i to state j;
s23, calculating the state distribution pi (t) at any time step t according to the initial state distribution and the state transition matrix:
π(t)=π×P t
wherein pi (t) represents the state distribution after t step, P t Is the power of t of the state transition matrix;
s24, extracting the probability (state 3) that the system is in an 'outlier' state from the state distribution pi (t), and constructing a vehicle outlier rate prediction model;
further, the step S24 specifically includes the following steps:
s241, extracting the probability of the ith vehicle being in the disconnected state (state 3) at each time t from the state distribution obtained in the step S23, which is expressed as
S242, calculatingThe difference over successive time steps approximates the probability density:
in a discrete-time model, this can be approximated as:
s243, constructing a vehicle outlier ratio prediction model according to the Markov chain model and the current task execution time:
wherein,is a probability density function of the occurrence of unexpected outliers for the ith vehicle,Is the computation time delay of the ith vehicle to the task allocated to the ith vehicle;
aggregate epsilon for probability of all vehicles in consist leaving consist={ε 12 ,…,ε m-1 And } represents.
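Steps S21–S24 can be illustrated with a small, self-contained sketch. The transition matrix, the initial distribution, and the task horizon below are assumed values; in the patent they are estimated from the vehicle's historical data.

```python
# Sketch of steps S21-S24: predict a vehicle's outlier (disconnect) probability
# with a 3-state Markov chain. States: index 0 = normal, 1 = potential failure,
# 2 = disconnected from the consist.

def mat_vec(pi, P):
    """One Markov step: pi(t) = pi(t-1) x P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def outlier_series(pi0, P, steps):
    """Probability of being in the 'disconnected' state (index 2) at t = 1..steps."""
    pi, series = pi0, []
    for _ in range(steps):
        pi = mat_vec(pi, P)
        series.append(pi[2])
    return series

def outlier_rate(pi0, P, compute_slots):
    """epsilon_i: probability the vehicle disconnects before its task finishes,
    using the discrete density f(t) = pi3(t) - pi3(t-1) summed over the task's
    computation slots (the sum telescopes to pi3(T) - pi3(0))."""
    s = [pi0[2]] + outlier_series(pi0, P, compute_slots)
    return sum(s[t] - s[t - 1] for t in range(1, len(s)))

P = [[0.90, 0.08, 0.02],
     [0.10, 0.80, 0.10],
     [0.00, 0.00, 1.00]]   # disconnection is absorbing (assumed)
pi0 = [1.0, 0.0, 0.0]      # vehicle starts in the normal state
print(round(outlier_rate(pi0, P, 5), 4))  # -> 0.1422
```

Because state 3 is absorbing here, the disconnect probability grows monotonically with the task's computation time, which is exactly why longer tasks carry a higher outlier risk in the delay model of step S3.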
Further, the step S3 specifically includes the following steps:
s31, constructing a total task time delay model of each vehicle according to the calculation requirement, the outlier rate prediction, the task size, the bandwidth of a communication link and the like of each vehicle, wherein the total task time delay model is as follows:
wherein:is the transmission delay of all tasks offloaded to the ith vehicle,/->Is the computational delay of all tasks offloaded to the ith vehicle,Is the return delay of all calculation tasks of the ith vehicle;
further, the step S31 specifically includes the following steps:
s311, calculating the task transmission time of the ith vehicle according to the vehicle uploading rate and the task size
The vehicles in the group communicate in an LTE-V mode, and the speed of uploading the task to the service vehicles in the group is known according to a shannon formula
The task vehicles divide the bandwidth into M-1 parts and transmit the allocated tasks to each service vehicle in the group, and then the task transmission time unloaded to the ith vehicle can be expressed as:
wherein the size of each subtask is I bit, p i Is the number of subtasks offloaded to the ith vehicle;
s312, calculating task calculation time delay of the ith vehicle according to the number of subtasks, calculation complexity and vehicle calculation capacity
Each vehicle in the vehicle group is distributed with a certain number of subtasks by the task vehicle, and the subtask numbers of 4 tasks distributed to the ith vehicle are respectively as follows: p is p i ={p iA ,p iB ,p iC ,p iD };
The calculation delay of the ith vehicle includes calculation delays of 4 tasks, which can be expressed as:
wherein the computational complexity of the 4 types of tasks is alpha respectively A ,α B ,α C ,α D The unit is CPU circle number/bit; f (f) i Is the computational power of each car within the consist, f when i=0 0 Representing the computing power of the local computation;
s313, calculating the task return time delay of the ith vehicle according to the number of RSUs passed after the vehicle i leaves the vehicle group, the moment of leaving the vehicle group and the like
For the service vehicles with the computing tasks distributed in the group, the computing results are directly returned to the task vehicles after the task computing is completed. For the service vehicle which leaves halfway, the vehicle needs to be considered to continue to carry out calculation work after leaving the vehicle group, and after calculation is completed, the calculation task is returned to the task vehicle in the original vehicle group by means of the RSU;
it is assumed that the vehicle remains waiting in place after leaving the consist without stopping the calculation process of the mission while the consist continues to remain in progress. The number of RSUs that the consist passes after vehicle i leaves the consist is:
wherein:indicating the moment when the ith vehicle leaves the train set; r is the distance between the vehicle and the nearest RSU; the simplified model assumption 2r represents the average distance between two adjacent RSUs;
the service vehicle leaving the consist can obtain the current location of the consist through the RUS number calculation formula and return the result to the RSU within its range.
The results will then be transmitted between adjacent RSUs and eventually reach the RSU where the consist is currently located. The return delay can be defined as:
s314, the total task time delay model of each vehicle in the vehicle group is the sum of the task uploading time delay, the task calculating time delay and the task returning time delay, namely
S32, defining the total task delay of the whole consist as the time to complete all tasks in the consist, i.e., the time at which the last vehicle in the consist finishes its task. According to step S31, the objective function, i.e., the delay minimization problem, can be expressed as
min max_{i∈{0,1,…,M−1}} T_i
where the constraints are:
a) T_i ≤ τ_i
b) p_i = p_iA + p_iB + p_iC + p_iD
c) Σ_{i=0}^{M−1} p_i = N
d) 0 ≤ p_i ≤ N
e) p_iA ≤ N_A, p_iB ≤ N_B, p_iC ≤ N_C, p_iD ≤ N_D
Among the above conditions, condition a) limits the computation task delay of each vehicle: each vehicle's computation tasks must be completed within the task's delay limit. Condition b) constrains the subtask count: the number of subtasks allocated to a vehicle equals the sum of the counts of the 4 subtask types. Conditions c) and d) limit the number of subtasks allocated to the vehicles: all N subtasks in the consist are assigned, and no single vehicle receives more than N. Condition e) bounds each type of subtask allocated to a vehicle by the total number of subtasks of that type within the consist.
Further, the step S4 specifically includes the following steps:
s41, solving the objective function of the step S32 by using a greedy algorithm, and sequentially distributing the subtasks of the 4 types of tasks to service vehicles in the group to obtain a pre-unloading strategy of the subtasks;
s42, dynamically adjusting a subtask pre-unloading strategy by adopting a learning type calculation unloading algorithm based on the MAB model according to the result obtained in the step S41 to obtain an optimal task unloading decision set, wherein the optimal task unloading decision set is expressed as:
p_i = {p_iA, p_iB, p_iC, p_iD}
where p_i denotes the numbers of subtasks of the 4 task types allocated to the i-th vehicle;
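The greedy pre-allocation stage of step S4 can be illustrated as follows. The per-subtask costs and vehicle speed factors are assumed; in the patent the cost combines transmission, computation, and the predicted return delay from the outlier model, and the result is then refined by the MAB-based learning stage.

```python
# Sketch of step S41: greedy pre-allocation. Each subtask goes to the service
# vehicle whose finish time grows the least, keeping the consist makespan
# (the delay of the last vehicle) low.

def greedy_preallocate(subtasks, cost, n_vehicles):
    """subtasks: list of task types, e.g. ['A', 'A', 'B', ...].
    cost(task_type, vehicle): delay added when this vehicle runs one subtask.
    Returns per-vehicle load (finish time) and the allocation."""
    load = [0.0] * n_vehicles
    alloc = {i: [] for i in range(n_vehicles)}
    for t in subtasks:
        # choose the vehicle whose finish time stays smallest after taking t
        best = min(range(n_vehicles), key=lambda i: load[i] + cost(t, i))
        load[best] += cost(t, best)
        alloc[best].append(t)
    return load, alloc

# Two vehicles: vehicle 1 is half as fast for every task type (assumed).
per_type = {"A": 1.0, "B": 2.0}
speed = [1.0, 2.0]  # delay multiplier per vehicle
load, alloc = greedy_preallocate(["A", "A", "B", "B"],
                                 lambda t, i: per_type[t] * speed[i], 2)
print(load, alloc)  # -> [4.0, 4.0] {0: ['A', 'A', 'B'], 1: ['B']}
```

Note the greedy result balances finish times (both vehicles end at 4.0) rather than sending everything to the fastest vehicle, which is what makes it a reasonable starting point before the learning stage adjusts for uncertainty.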
further, the step S42 specifically includes the following steps:
s421, calculating index values of each service vehicle SeV S and each task vehicle TaVn in a time period t:
wherein,representing the capacity awareness of SeV;Representing the average offload delay of task n after (t-1) time slots;Is a confidence bound for achieving a balance of exploration and production.
This index value reflects the historical performance and the expected performance of the current state based on the SeV;
s422, comparing the index values of different SeVsSelect the value with lowest index +.>The SeV (i.e., the best performance expected) serves as the offload target, allocates offload resources, and updates the task offload decision set: p is p i ={p iA ,p iB ,p iC ,p iD };
S423, selecting an optimal SeV for unloading according to the allocation decision set provided in the step S53 so as to minimize task delay.
According to the invention, V2V offloading in the vehicle-mounted network is performed in groups: vehicles share information and complete computation tasks for one another, which improves task offloading efficiency. The stability of computing-resource connections between vehicles under high-speed movement is fully considered, and an outlier prediction model is proposed, making the method better suited to the complex topology of a real vehicular network. With the MAB-model-based learning computation offloading algorithm, a TaV can learn the offloading performance of the candidate SeVs in the consist and make computation offloading decisions without complete offloading information in advance, which enables more effective and safer V2V computation offloading.
Drawings
In order to make the objects, technical solutions and advantageous effects of the present invention more clear, the present invention provides the following drawings for description:
FIG. 1 is a flowchart illustrating the overall steps of a vehicle group collaborative computing offloading strategy of the present invention;
FIG. 2 is a flowchart of a learning type calculation unloading algorithm based on the MAB model according to the present invention;
Detailed Description
The collaborative computing offload strategy of the present invention will be further described with reference to the accompanying drawings:
In the collaborative computing offloading strategy provided by the invention, the computing resources between vehicles (V2V) are fully utilized in a vehicular edge computing scenario. An objective function is constructed that accounts for unexpected vehicle departures from the consist during offloading, and a subtask pre-allocation is obtained by solving it with a greedy algorithm. Combined with the MAB-model-based learning computation offloading algorithm, the optimal allocation of computation subtasks in the current dynamic environment is obtained, realizing the optimal V2V collaborative computation offloading strategy.
FIG. 1 is a flowchart of the overall steps of the vehicle group collaborative computing offloading strategy of the present invention, including the steps of: 1) In a vehicle-mounted network of a vehicle group, dividing different types of tasks into a plurality of subtasks according to the size, and searching for service vehicles in the group according to the local delay of the tasks; 2) According to vehicle history data, a state transition matrix is built, and a vehicle outlier prediction model based on a Markov chain is built; 3) According to the calculation requirement, the outlier ratio prediction, the size of task data, the bandwidth of a communication link and the like of each vehicle, constructing a time delay model for unloading the vehicle task and establishing an objective function; 4) Solving the objective function in the step S3 by using a greedy algorithm to obtain a subtask pre-allocation mode; optimizing by adopting a learning type calculation unloading algorithm based on the MAB model to obtain an allocation strategy of the optimal subtasks; 5) And sequentially selecting corresponding vehicles in the vehicle group according to the strategy of the step S4 and issuing calculation unloading tasks to realize the V2V collaborative calculation unloading optimal strategy.
Fig. 2 is a flowchart of the learning computation offloading algorithm based on the MAB (multi-armed bandit) model according to the present invention. The flow comprises five phases: the initialization phase (step 201), the vehicle role determination phase (steps 202-203), the candidate SeV identification phase (step 204), the offloading learning phase (steps 205-206), and the computing resource allocation phase (step 207).
The flow begins at step 201 by setting the initial candidate SeV set to empty; the selection count of each SeV and the average task delay are initialized to zero. The candidate SeV set of task n defaults to the empty set in the initial time slot. Let k_{n,s}(t) and d̄_{n,s}(t) denote, respectively, the number of times SeV s has been selected to process task n and its average task delay after t slots; both are set to zero in the initial slot, i.e., k_{n,s}(0) = 0 and d̄_{n,s}(0) = 0.
In step 202, a local task delay is calculated for each vehicle.
In step 203, it is determined whether the local task delay is less than the task deadline. When the local task delay is less than the deadline, the vehicle is classified as a SeV; otherwise it is classified as a TaV. In V2V computation offloading, the vehicle roles (SeV or TaV) may change over time because the requested vehicle tasks differ. A vehicle acting as a SeV re-enters the vehicle role determination phase and returns to step 202.
At step 204, if the vehicle is classified as a TaV, the candidate SeV set of each TaV is obtained. A candidate SeV must satisfy: 1) the physical constraint, i.e., the candidate SeV must be able to communicate directly with the TaV and maintain the same direction of travel; 2) the service caching constraint, i.e., the SeV must have cached the computing service requested by the TaV. These two constraints ensure a reliable communication link and computing service support. When both are satisfied, the SeV becomes one of the TaV's candidate SeVs.
In step 205, it is determined whether there is an unselected SeV. Define s_n(t) as the SeV selected to process task n in time slot t. If some candidate SeV has not been selected during the first t−1 time slots, it is selected in time slot t; in this case s_n(t) is that unexplored SeV. This behaviour encourages exploration and avoids local optima.
At step 206, once every candidate SeV has been selected at least once in the first t−1 slots, an index-based minimum-value selection is used to realize V2V computation offloading. The index function d̂_{n,s}(t) of SeV s at time slot t combines ω_s, the capacity awareness of the SeV; d̄_{n,s}(t−1), the average offloading delay of task n after (t−1) time slots; and u_{n,s}(t), a confidence bound for balancing exploration and exploitation.
For a SeV, the higher its unloading capacity, the less the SeV contributes to the index value and therefore the greater the chance of being selected.
The confidence bound can be expressed as
u_{n,s}(t) = sqrt(β · ln t / k_{n,s}(t−1))
where t denotes the current total learning time (i.e., the current slot); k_{n,s}(t−1) is the number of times SeV s has been selected to process task n after t−1 rounds of learning (the fewer the selections, the smaller the resulting index value, which favours exploration); and the parameter β adjusts the exploration weight;
On this basis, the target SeV is found by index-based minimum-value selection:
s_n(t) = argmin_s d̂_{n,s}(t)
where d̂_{n,s}(t) denotes the index value of SeV s for task n at slot t. Afterwards, the selection count of the chosen SeV s_n(t) is updated for the n-th learning round:
k_{n,s}(t) = k_{n,s}(t−1) + 1
and the average task delay of the chosen SeV is updated as the running mean
d̄_{n,s}(t) = [ k_{n,s}(t−1) · d̄_{n,s}(t−1) + d_{n,s}(t) ] / k_{n,s}(t)
where d_{n,s}(t) is the offloading delay observed in slot t.
in step 207, a task set for each SeV is obtained based on the offloading decision. Then, each SeV is determinedThe assigned computing resources handle offloaded tasks.
At step 208, the learning iterations of stages 2 through 5 of the MAB-model-based learning computation offloading algorithm are repeated until the learning round t exceeds T, where T denotes the preset total number of iterations of the algorithm.
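The learning loop of steps 201–208 can be sketched as follows. The true delay function, the observation noise, and β are illustrative assumptions; the index uses the lower-confidence form with the subtracted bound u = sqrt(β·ln t / k), so rarely tried SeVs get a smaller index and are explored.

```python
# Sketch of the MAB-based offloading learning loop: a TaV learns which
# candidate SeV yields the lowest offloading delay without prior information.
import math, random

def mab_offload(true_delay, n_sev, rounds, beta=1.0, seed=0):
    rng = random.Random(seed)
    k = [0] * n_sev          # selection counts k_{n,s}
    avg = [0.0] * n_sev      # average observed delays d_bar_{n,s}
    for t in range(1, rounds + 1):
        untried = [s for s in range(n_sev) if k[s] == 0]
        if untried:                       # step 205: try every candidate once first
            chosen = untried[0]
        else:                             # step 206: pick the lowest index value
            chosen = min(range(n_sev),
                         key=lambda j: avg[j] - math.sqrt(beta * math.log(t) / k[j]))
        d = true_delay(chosen) + rng.uniform(0, 0.05)   # observed (noisy) delay
        k[chosen] += 1
        avg[chosen] += (d - avg[chosen]) / k[chosen]    # running-mean update
    return k, avg

# SeV 1 is genuinely fastest; after learning it should be chosen most often.
counts, avgs = mab_offload(lambda s: [0.8, 0.2, 0.5][s], n_sev=3, rounds=300)
print(counts)  # SeV 1 (lowest delay) dominates the selection counts
```

The running-mean update is the incremental form of the average-delay formula above, and the seeded generator makes the sketch reproducible.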

Claims (5)

1. The V2V-based vehicle group collaborative computing unloading strategy in the vehicle-mounted network environment is characterized by comprising the following steps of:
s1, in a vehicle-mounted network of a vehicle group, dividing different types of tasks into a plurality of subtasks according to the size, and searching for service vehicles in the group according to the local delay of the tasks;
s2, constructing a state transition matrix according to vehicle history data, and establishing a vehicle outlier prediction model based on a Markov chain;
s3, constructing a time delay model for unloading the vehicle task and establishing an objective function according to the calculation requirement, the outlier rate prediction, the task size, the bandwidth of a communication link and the like of each vehicle;
s4, solving the objective function in the step S3 by using a greedy algorithm to obtain a subtask pre-unloading strategy; and obtaining an optimal task unloading decision by adopting a learning type calculation unloading algorithm based on the MAB model.
2. The V2V-based vehicle group cooperative computing offloading policy according to claim 1, wherein the step S1 specifically includes the steps of:
s11, classifying the vehicle task types into four types of image processing, video processing, interactive games and augmented reality, wherein each type is divided into a plurality of subtasks according to different sizes;
s12, calculating the local delay of the task according to the size of the task, if the local delay is smaller than the time limit of the task delay, the vehicle is a service vehicle in the group, and otherwise, the vehicle is a task vehicle in the group.
3. The V2V-based vehicle group cooperative computing offloading policy of claim 1, wherein the step S2 specifically includes the steps of:
s21, extracting characteristics such as vehicle running speed, running state, running duration and the like according to historical running data of the vehicle, defining all states in a Markov chain model and initializing;
s22, calculating to obtain a state transition matrix according to the characteristics provided in the step S21;
s23, calculating the state distribution pi (t) at any time t according to the initial state distribution and the state transition matrix;
s24, extracting the probability of the system in an 'outlier' state from the state distribution pi (t), and constructing a vehicle outlier rate prediction model.
4. The V2V-based vehicle group cooperative computing offloading policy according to claim 1, wherein the step S3 specifically includes the steps of:
s31, constructing a total task time delay model of each vehicle according to the calculation requirement, the outlier rate prediction, the task size, the bandwidth of a communication link and the like of each vehicle, wherein the total task time delay model is as follows:
wherein:is the transmission delay of all tasks offloaded to the ith vehicle,/->Is the computational delay of all tasks offloaded to the ith vehicle,Is the return delay of all calculation tasks of the ith vehicle;
s32, according to the step S31, taking the delay minimization problem as an objective function as the following formula:
where M represents the number of vehicles in the consist.
5. The V2V-based vehicle group cooperative computing offloading policy according to claim 1, wherein the step S4 specifically includes the steps of:
s41, solving an objective function in the step S32 by using a greedy algorithm, and sequentially distributing sub-tasks of the 4 types of tasks to service vehicles in the group to obtain a pre-unloading strategy of the sub-tasks;
s42, dynamically adjusting a subtask pre-unloading strategy by adopting a learning type calculation unloading algorithm based on the MAB model according to the result obtained in the step S41 to obtain an optimal task unloading decision set, wherein the optimal task unloading decision set is expressed as:
p_i = {p_iA, p_iB, p_iC, p_iD}
where p_i denotes the numbers of subtasks of the 4 task types allocated to the i-th vehicle.
CN202311712133.7A 2023-12-13 2023-12-13 A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment Pending CN117858165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311712133.7A CN117858165A (en) 2023-12-13 2023-12-13 A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311712133.7A CN117858165A (en) 2023-12-13 2023-12-13 A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment

Publications (1)

Publication Number Publication Date
CN117858165A (en) 2024-04-09

Family

ID=90533599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311712133.7A Pending CN117858165A (en) 2023-12-13 2023-12-13 A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment

Country Status (1)

Country Link
CN (1) CN117858165A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119136257A (en) * 2024-08-09 2024-12-13 杭州电子科技大学 Link topology adaptive offloading method for edge computing in Internet of Vehicles
CN119545431A (en) * 2024-08-09 2025-02-28 杭州电子科技大学 Topology link awareness task collaborative offloading method for V2V and V2I joint system
CN119136257B (en) * 2024-08-09 2025-10-28 杭州电子科技大学 Link topology adaptive offloading method for edge computing in Internet of Vehicles

Similar Documents

Publication Publication Date Title
Dai et al. Task offloading for vehicular edge computing with edge-cloud cooperation
CN112163720A (en) A battery swap scheduling method for multi-agent unmanned electric vehicles based on the Internet of Vehicles
CN114338504A (en) Micro-service deployment and routing method based on network edge system
CN117858165A (en) A vehicle group collaborative computing offloading strategy based on V2V in vehicle network environment
CN111711666A (en) Internet of vehicles cloud computing resource optimization method based on reinforcement learning
CN113114721A (en) Software defined Internet of vehicles service migration method based on MEC
CN119562311B (en) Revenue-aware service migration and resource allocation method for multi-edge cellular Internet of vehicles
Ahmed et al. MARL based resource allocation scheme leveraging vehicular cloudlet in automotive-industry 5.0
Li et al. Deep reinforcement learning for load balancing of edge servers in iov
CN115904731A (en) An Edge Collaborative Replica Placement Method
Karimi et al. Intelligent and decentralized resource allocation in vehicular edge computing networks
Chen et al. Graph neural network aided deep reinforcement learning for microservice deployment in cooperative edge computing
CN117938959A (en) Multi-target SFC deployment method based on deep reinforcement learning and genetic algorithm
CN117290071A (en) A fine-grained task scheduling method and service architecture in vehicle edge computing
CN115208892B (en) Vehicle-road collaborative online task scheduling method and system based on dynamic resource requirements
CN112750298A (en) Truck formation dynamic resource allocation method based on SMDP and DRL
CN114531669B (en) Task unloading method and system based on vehicle edge calculation
Nguyen et al. EdgePV: Collaborative edge computing framework for task offloading
CN119136257B (en) Link topology adaptive offloading method for edge computing in Internet of Vehicles
CN119889042A (en) Urban traffic cloud edge integrated collaborative management and control method, system and platform
Zhao et al. Research on the edge resource allocation and load balancing algorithm based on vehicle trajectory
CN117376141A (en) A task scheduling allocation method based on Lyapunov optimized DQN algorithm
CN117409582B (en) A vehicle collaborative task offloading method in VEC
CN117336696A (en) A resource allocation method for joint storage and computing in Internet of Vehicles
Wu et al. Service chain caching and task offloading in two-tier uav-assisted vehicular edge computing networks: an attention-drl method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination