CN109299491B - Meta-model modeling method based on dynamic influence graph strategy and using method - Google Patents
- Publication number: CN109299491B (application CN201810472772.3A)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
- Classification: G06F30/20, Design optimisation, verification or simulation
Abstract
A meta-model modeling method based on a dynamic influence diagram game, together with methods for using the model, belonging to the field of computer application technology. The model represents simulation states, decision alternatives and decision objectives with chance nodes, decision nodes and utility nodes respectively, and the opponent's behavior is modeled with decision nodes analogous to those of the decision maker. The construction method comprises the steps of selecting the required variables, determining the simulation data, determining the topological structure and probability tables, verifying the correctness of the topological structure and probability tables, and determining the utility function. The methods of use comprise observing the system state, what-if analysis and solving for the optimal strategy. The method provides an effective technical means for simplifying system simulation models and overcomes the problems that current meta-models do not take into account the decision maker's preferences or the actions taken by the opponent and provide no direct means of optimizing the simulation output. It can be widely applied in the field of computer simulation, especially to simulation modeling under strong confrontation and incomplete information.
Description
Technical Field
The present invention belongs to the field of computer application technology, and in particular relates to a meta-model modeling method based on a dynamic influence diagram game and methods for using the model.
Background
Owing to its economy, safety, repeatability and non-destructiveness, simulation technology is widely used in aerospace, aviation, navigation, electric power, chemical industry, nuclear energy, communications and other fields. Modern simulation technology is applied not only in traditional engineering fields but, increasingly, in social, economic, biological and military domains, for example traffic control, urban planning, resource utilization, environmental pollution prevention, production management, market forecasting, analysis and forecasting of the world economy, and population control. For socio-economic, military and similar systems it is difficult to conduct experiments on the real system, so studying these systems by simulation is all the more important.
High-level reasoning and communication for decision support, exploratory analysis, the construction of multi-resolution models and fast adaptive computation all require simple models. Existing simulation organizations often own a large number of complex, effective simulation models but lack accurate, applicable simple ones. Because of the complexity of the simulation models and the lack of documentation, because the organization has no specialists to study the internal structure of a model, or simply because it would take too much time, organizations that own large simulation models cannot obtain the relatively simple models they urgently need through rigorous study and simplification of the target model. An effective way out of this simulation crisis is to construct a meta-model based on the simulation model.
The concept and methods of meta-modeling are an emerging focus in the field of system simulation. Meta-models are suitable for low-resolution fast simulation and cross-level simulation modeling and are widely applied in industrial design and manufacturing, management science and military equipment acquisition. A meta-model is a simple surrogate for a simulation model: a new, simplified, approximate mathematical model obtained by fitting the input-output sequences of the original simulation model, i.e. an approximation of the original model's input-output relationship. It therefore greatly reduces the complexity of the simulation model; using it to replace the simulation model, wholly or in part, in simulation experiments can greatly reduce computational cost and improve simulation efficiency while still meeting accuracy requirements. A meta-model usually serves five purposes: (1) increasing understanding of the real (source) system and conveniently forming valuable knowledge about it; (2) predicting the values of output or response variables; (3) supporting high-level simulation; (4) optimizing a system or system of systems; (5) assisting verification and validation of the simulation model.
Common simulation meta-models currently include regression models, genetic algorithms, Kriging, game models, spline models, neural networks, radial basis kernel functions, spatial correlation models, frequency-domain models and response surfaces. None of these models can represent how simulation state variables change over time or describe the relationships between simulation state variables at different times; they do not take into account the decision maker's preferences or the actions taken by the opponent, and they provide no direct means of optimizing the simulation output.
Summary of the Invention
In order to overcome the above deficiencies of the prior art, the purpose of the present invention is to provide a meta-model modeling method based on a dynamic influence diagram game, together with methods for using the model. The dynamic influence diagram game is an extension of influence diagram theory combined with non-cooperative game theory. It builds on the single-stage influence diagram game by introducing the concept of time slices, and is mainly used to solve dynamic decision problems.
In order to achieve the above object, the technical solution adopted by the present invention is as follows:
A meta-model modeling method based on a dynamic influence diagram game, comprising the following steps:
Step 1: Select the required variables from all variables; these include state variables, decision variables, external variables and value variables.
Step 2: Determine the simulation data. In the dynamic influence diagram game model, time can be regarded as continuous, i.e. a simulation event may occur at any time t ∈ [t0, tf]. The time-series simulation data observed on [t0, tf] reflect how the simulation state changes, and the probability distribution of the simulation state can be estimated from repeated simulation runs. From a large amount of simulation data a high-precision estimated probability curve of the simulation state can be obtained and used as the reference probability curve; the reference probability curve can then be used to determine the number of discrete samples and the sampling times.
The optimal number of samples can be obtained by the following steps:
1) Let j = 1 and t ∈ [t0, tf], and set the maximum error value ε. The probability that the variable takes the value l ∈ L is denoted pl(t) = p(x(t) = l), where L is the set of possible values of the variable;
2) Determine whether every approximation error is smaller than ε, i.e. whether |pl(t) − pl(t0)| < ε for all t ∈ [t0, tf] and all l ∈ L (ε is set by the analyst), where pl(t) is the reference probability and pl(t0) is the value held constant on the current interval. If this condition is satisfied, go to step (5); otherwise go to step (3);
3) Obtain the time interval Δt: select the value k ∈ L that causes the largest probability error; the time interval Δt is the smallest interval satisfying |pk(t0 + Δt) − pk(t0)| = ε;
4) Let t0 := t0 + Δt and j := j + 1, and go to step (2);
5) Obtain the optimal number of samples n: let n = j + 1 and output n.
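The sampling-count procedure above can be sketched as follows. This is a minimal illustration, assuming the reference curves are given as lists sampled on a fine, evenly spaced time grid and that the approximation holds pl(t0) constant over the remaining interval; the function name and dictionary interface are assumptions, not part of the patent.

```python
def optimal_sample_count(ref_curves, eps):
    """Steps 1)-5): count the samples needed so that freezing p_l(t0)
    keeps every reference curve within eps on the remaining interval.
    ref_curves: dict mapping each value l to a list of p_l(t) sampled on
    a fine, evenly spaced grid over [t0, tf] (illustrative interface)."""
    j = 1
    i0 = 0                                   # index of the current t0
    n_pts = len(next(iter(ref_curves.values())))
    while True:
        # step 2): largest deviation from the frozen value p_l(t0), per value l
        errs = {l: max(abs(c[i] - c[i0]) for i in range(i0, n_pts))
                for l, c in ref_curves.items()}
        if max(errs.values()) < eps:         # every curve within eps -> step 5)
            return j + 1                     # n = j + 1
        # step 3): value k with the largest error, then the first offset at
        # which the error for k reaches eps (the discrete analogue of Δt)
        k = max(errs, key=errs.get)
        step = next((i - i0 for i in range(i0, n_pts)
                     if abs(ref_curves[k][i] - ref_curves[k][i0]) >= eps),
                    n_pts - 1 - i0)
        i0 += max(step, 1)                   # step 4): t0 := t0 + Δt
        j += 1
        if i0 >= n_pts - 1:                  # safety: grid exhausted
            return j + 1
```

For example, a binary variable whose probability rises linearly from 0 to 1 with ε = 0.3 requires splits near 0.3, 0.6 and 0.9, giving n = 5.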
Once the optimal number of samples has been obtained, the preferred optimal sampling times can be obtained by solving the following optimization problem: minimize over T the total approximation error Σ over l ∈ L of ∫ from t0 to tf of |pl(t) − p̂l(t; T)| dt, where p̂l(t; T) denotes the approximation of pl(t) built on the sampling times in T, T = {t0, t1, ..., tf}, and the number of elements |T| = n.
Step 3: Determine the topological structure and probability tables. The topological structure of the influence diagram game, i.e. the relationships among the variables, can be obtained from expert knowledge, the background knowledge of both players, and integration of the simulation data. After the nodes of the influence diagram game are determined, an initial diagram is drawn from expert and background knowledge: first determine the structure of a single time slice, then determine the relations between time slices, yielding several candidate influence diagrams, from which a scoring method selects the most suitable structure.
Step 4: Verify the correctness of the topological structure and the probability tables. The rules for structural verification are as follows:
1) the directed graph contains no cycles;
2) if there is a value node, it has no successor nodes;
3) there is a directed path containing all the decision nodes. The probability tables are generally checked with two methods, the sum test and the product test. If the topological structure or a probability table is incorrect, return to Step 3.
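The first two structural rules above are mechanical graph checks. A minimal sketch, assuming the topology is given as an edge list (the function name and interface are illustrative, not from the patent):

```python
from collections import defaultdict, deque

def check_structure(edges, value_nodes):
    """Check two structural rules on a candidate influence-diagram topology:
    (1) the directed graph is acyclic, (2) value nodes have no successors.
    edges: list of (u, v) arcs; value_nodes: iterable of node names."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    # rule 2: a value node must not be the tail of any arc
    if any(succ[v] for v in value_nodes):
        return False
    # rule 1: Kahn's algorithm; every node is removable iff there is no cycle
    q = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                q.append(w)
    return seen == len(nodes)
```

Rule 3 (a directed path through all decision nodes) additionally requires knowing which nodes are decision nodes and is omitted from this sketch.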
Step 5: Determine the utility function. The utility function can be set by the decision maker or learned from the topological structure, the probability tables and the simulation data.
The scoring method is based on a trade-off between model size and the degree to which the probabilities obtained from the model match the simulation data. If learning from the simulation data reveals a clear lack of fit between nodes, additional arcs are added between them, improving the initial influence diagram game. Finally the value nodes of the two players are added, giving the final influence diagram game. Probabilities are estimated from observed frequencies, i.e. the number of times each value of a variable is observed.
A method of using the meta-model based on the dynamic influence diagram game, comprising the following three methods:
Method 1: the simulation meta-model of the dynamic influence diagram game can be used to observe the system state. Let S(t) = {x1(t), x2(t), ..., xn(t)}, where xi(t) (i = 1, ..., n) are the simulation state variables; the time evolution of the probability distribution at specific moments describes the change of the simulation state xi(t) over the whole simulation period and is denoted p(xi(t)). Observing the system state means computing, at a specific time t, the probability that the state variable xi(t) takes the value l, i.e. p(xi(t) = l):
Step 1: Set the parameters of the dynamic influence diagram game;
Step 2: Select a strategy from the player's strategy set;
Step 3: Use Bayes' theorem to compute the probability that the state variable xi(t) takes the value l given the player's chosen strategy; Step 4: Judge how the system state changes from the probability that xi(t) takes the value l at the specific times t ∈ T = {t1, ..., tf};
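Step 3 amounts to conditioning a joint distribution on the chosen strategy. A toy sketch of that computation, assuming the joint distribution is available as a table (the representation and function name are illustrative assumptions, not the patent's data structure):

```python
def conditional_prob(joint, query, given):
    """p(query | given) by summing out a joint probability table.
    joint: dict mapping frozenset({(var, value), ...}) -> probability.
    query, given: sets of (var, value) pairs to match."""
    query, given = frozenset(query), frozenset(given)
    num = sum(p for a, p in joint.items() if query <= a and given <= a)
    den = sum(p for a, p in joint.items() if given <= a)
    return num / den

# e.g. p(x = 1 | strategy d = 0) for a two-variable joint distribution
joint = {
    frozenset({("d", 0), ("x", 1)}): 0.3,
    frozenset({("d", 0), ("x", 0)}): 0.2,
    frozenset({("d", 1), ("x", 1)}): 0.1,
    frozenset({("d", 1), ("x", 0)}): 0.4,
}
```

Here p(x = 1 | d = 0) = 0.3 / (0.3 + 0.2) = 0.6.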
Method 2: the simulation meta-model of the dynamic influence diagram game can be used for what-if analysis;
Step 1: Set the parameters of the dynamic influence diagram game;
Step 2: Select a strategy from the player's strategy set;
Step 3: Use Bayes' theorem to compute the probability that the state variable xi(t) takes the value l given the player's chosen strategy; Step 4: Judge how the system state changes from the probability that xi(t) takes the value l at the specific times t ∈ T = {t1, ..., tf}.
Method 3: the simulation meta-model of the dynamic influence diagram game can be used to solve for the optimal strategy;
Step 1: Set the parameters of the dynamic influence diagram game;
Step 2: Use enumeration or the moving-average control method to compute the optimal strategies of both players; if one player has already chosen a strategy, the problem can be handled as an ordinary optimization problem.
The beneficial effects of the present invention are:
1) Preference information is incorporated into the meta-model: by using a game-theoretic model, the method overcomes the problem that other meta-models consider neither the decision maker's preferences nor the actions taken by the opponent.
2) The dynamic influence diagram meta-model uses an optimal sampling algorithm, which simplifies the meta-model without degrading performance.
3) The results obtained by solving the dynamic influence diagram meta-model give the decision maker a direct means of optimizing the simulation output.
4) The dynamic influence diagram meta-model can be used for what-if analysis and for studying how strategy pairs drive the evolution of the simulation.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the dynamic influence diagram game model of the present invention;
Fig. 2 is a flowchart of the modeling method of the present invention;
Fig. 3 is the dynamic influence diagram meta-model of the example;
Fig. 4 shows the probability curves of the example;
Fig. 5 shows the what-if analysis results of the example.
Detailed Description
The present invention is further described below with reference to the accompanying drawings.
A meta-model construction method based on an influence diagram game is characterized by: Step 1: select the variables; Step 2: determine the simulation data; Step 3: determine the topological structure and probability tables; Step 4: verify the correctness of the topological structure and probability tables, and return to Step 3 if either is incorrect; Step 5: determine the utility function.
The selected variables include: (1) State variables. The state variables of the two players are a complete and effective description of the game problem. At each time t ∈ {t0, t1, ..., tf}, the simulation state of the game is represented by a discrete random variable xi(t), where i denotes the player. In the dynamic influence diagram game model, state variables are represented by chance nodes. (2) Decision variables. Since simulation parameters affect the evolution of the simulation, elements represented by controllable simulation parameters are expressed as decision variables and represented in the influence diagram model by decision nodes. (3) External variables. Uncontrollable simulation parameters are expressed as external random variables and represented in the influence diagram by chance nodes. (4) Value variables. These are represented in the influence diagram by value nodes and express the players' objectives. The simulation output is expressed through the probability distributions of the chance nodes.
Determining the simulation data includes the following. In the dynamic influence diagram game model, time can be regarded as continuous, i.e. a simulation event may occur at any time t ∈ [t0, tf]. The time-series simulation data observed on [t0, tf] reflect how the simulation state changes, and the probability distribution of the simulation state can be estimated from repeated simulation runs. From a large amount of simulation data a high-precision estimated probability curve of the simulation state can be obtained and used as the reference probability curve, which determines the number of discrete samples and the sampling times. If the number of samples is large enough, the discretized probability curve represents the variation of the reference curve and should be fairly smooth; the more samples, the more accurate but also the more complex the meta-model. In addition, the choice of the discrete times is one of the factors affecting the accuracy of the meta-model and deserves particular attention.
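Estimating the reference probability curve from repeated runs is plain frequency counting over a time grid. A sketch under an assumed run interface (each run is a function of time returning the state; the names are illustrative):

```python
def reference_curve(runs, t_grid, value):
    """Estimate p(x(t) = value) at each grid time as the fraction of
    simulation runs whose state equals `value` at that time."""
    n = len(runs)
    return [sum(1 for run in runs if run(t) == value) / n for t in t_grid]

# e.g. three runs in which the state switches from 0 to 1 at different times
runs = [lambda t, th=th: 1 if t >= th else 0 for th in (0.2, 0.5, 0.8)]
```

On the grid [0.0, 0.5, 1.0] this yields the estimated curve [0, 2/3, 1] for value 1.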
Preferably, the optimal number of samples is used; it can be obtained by the following steps:
(1) Let j = 1 and t ∈ [t0, tf], and set the maximum error value ε. The probability that the variable takes the value l ∈ L is denoted pl(t) = p(x(t) = l), where L is the set of possible values of the variable;
(2) Determine whether every approximation error is smaller than ε, i.e. whether |pl(t) − pl(t0)| < ε for all t ∈ [t0, tf] and all l ∈ L (ε is set by the analyst), where pl(t) is the reference probability and pl(t0) is the value held constant on the current interval. If this condition is satisfied, go to step (5); otherwise go to step (3);
(3) Obtain the time interval Δt: select the value k ∈ L that causes the largest probability error; the time interval Δt is the smallest interval satisfying |pk(t0 + Δt) − pk(t0)| = ε;
(4) Let t0 := t0 + Δt and j := j + 1, and go to step (2);
(5) Obtain the optimal number of samples n: let n = j + 1 and output n.
Once the optimal number of samples has been obtained, the preferred optimal sampling times can be obtained by solving the following optimization problem: minimize over T the total approximation error Σ over l ∈ L of ∫ from t0 to tf of |pl(t) − p̂l(t; T)| dt, where p̂l(t; T) denotes the approximation of pl(t) built on the sampling times in T, T = {t0, t1, ..., tf}, and the number of elements |T| = n.
Determining the topological structure and probability tables includes the following. The topological structure of the influence diagram game, i.e. the relationships among the variables, can be obtained from expert knowledge, the background knowledge of both players, and integration of the simulation data. After the nodes of the influence diagram game are determined, an initial diagram is drawn from expert and background knowledge: first determine the structure of a single time slice, then the relations between time slices, yielding several candidate influence diagrams, from which a scoring method selects the most suitable structure. The scoring method is based on a trade-off between model size and the degree to which the probabilities obtained from the model match the simulation data. If learning from the simulation data reveals a clear lack of fit between nodes, additional arcs are added between them, improving the initial influence diagram game. Finally the value nodes of the two players are added, giving the final influence diagram game. Probabilities are estimated from observed frequencies, i.e. the number of times each value of a variable is observed. In the influence diagram game, each variable xi(t) has a conditional probability table, which gives, for each combination of values lΦ of the node's parents Φi, the conditional probability p(xi(t) | Φi = lΦ). The conditional probability θl = p(xi(t) = l | Φi = lΦ) is then estimated as θl = Nl / NΦ, where Nl is the number of simulations in which both xi(t) = l and Φi = lΦ are observed, and NΦ is the number of simulations in which Φi = lΦ is observed.
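The frequency estimate θl = Nl / NΦ can be sketched directly from counts. The sample format and function name below are illustrative assumptions:

```python
from collections import Counter

def estimate_cpt(samples, child, parents):
    """Estimate p(child | parents) by observed frequencies, theta_l = N_l / N_Phi.
    samples: list of dicts {variable: value}, one dict per simulation run.
    Returns a dict mapping (parent_value_tuple, child_value) -> probability."""
    n_phi = Counter()   # N_Phi: occurrences of each parent configuration
    n_l = Counter()     # N_l: joint occurrences of parent config and child value
    for s in samples:
        phi = tuple(s[p] for p in parents)
        n_phi[phi] += 1
        n_l[(phi, s[child])] += 1
    return {k: v / n_phi[k[0]] for k, v in n_l.items()}

# e.g. four observed runs of a child x with a single parent a
samples = [{"a": 0, "x": 1}, {"a": 0, "x": 1}, {"a": 0, "x": 0}, {"a": 1, "x": 1}]
```

For these samples the estimate gives p(x = 1 | a = 0) = 2/3 and p(x = 1 | a = 1) = 1.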
Verifying the correctness of the topological structure and probability tables includes the following. Verifying the dynamic influence diagram game model requires checking both the structure and the probability tables. The structural check is based on three rules: (1) the directed graph contains no cycles; (2) if there is a value node, it has no successor nodes; (3) there is a directed path containing all the decision nodes. The probability tables are generally checked with the sum test and the product test. (1) Sum test: the sum test checks whether all the probabilities of a node sum to one. Let C be a binary node; first determine the value of P(C) and, after some time, the value of its complement; add the two, and if the sum equals or is close to 1 the values are close to the truth, otherwise they must be obtained again. (2) Product test: let C1 be the only parent node of C2; then P(C2) = Σ P(C1) P(C2 | C1). First obtain the values of P(C1) and P(C2 | C1), then obtain P(C2), and test whether the two sides of the equation are close or equal; if so, the result is credible, otherwise obtain the values again.
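The two probability-table checks can be sketched as follows. The tolerance parameter and the dictionary interfaces are assumptions for illustration; the patent compares values by closeness without fixing a threshold.

```python
def sum_test(p_c, tol=0.05):
    """Sum test: all probabilities of a node should add up to about 1.
    p_c: dict mapping each value of node C to its probability."""
    return abs(sum(p_c.values()) - 1.0) <= tol

def product_test(p_c1, p_c2_given_c1, p_c2, tol=0.05):
    """Product test: P(C2) should match sum over c1 of P(C1) P(C2 | C1),
    where C1 is the only parent of C2.
    p_c2_given_c1: dict mapping (c1_value, c2_value) -> P(C2 = c2 | C1 = c1)."""
    for v2 in p_c2:
        implied = sum(p_c1[v1] * p_c2_given_c1[(v1, v2)] for v1 in p_c1)
        if abs(implied - p_c2[v2]) > tol:
            return False
    return True

# e.g. elicited tables for two binary nodes C1 -> C2
p_c1 = {0: 0.4, 1: 0.6}
p_c2_given_c1 = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.2, (1, 1): 0.8}
p_c2 = {0: 0.32, 1: 0.68}   # consistent: 0.4*0.5 + 0.6*0.2 = 0.32
```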
Determining the utility function: the utility function can be set by the decision maker or learned from the topological structure, the probability tables and the simulation data.
There are three ways to use the simulation meta-model based on the dynamic influence diagram game, as follows:
Method 1: the simulation meta-model of the dynamic influence diagram game can be used to observe the system state. Let S(t) = {x1(t), x2(t), ..., xn(t)}, where xi(t) (i = 1, ..., n) are the simulation state variables. In the influence diagram game model, the time evolution of the probability distribution at the specific times t ∈ T = {t1, ..., tf} describes the change of the simulation state over the whole simulation period and is denoted p(S(t)). In other words, the influence diagram game is used to compute the probability of the state variables at time t ∈ T, i.e. p(S(t) = lS). To study a single simulation variable, p(xi(t)) can be tracked; a specific value of a state variable can likewise be tracked through p(xi(t) = l) (l ∈ Li). The observation probability p(xi(t) = l) can be used to analyze the likelihood that a given simulation event occurs. To analyze the strategies the players may adopt, the influence diagram game can be used to evaluate the conditional probability distribution given each strategy pair, p(S(t) | D1(i), D2(j)), where D1(i) and D2(j) are the strategies of players 1 and 2, drawn from their respective strategy sets. The conditional distribution describes how the simulation outcome evolves over time once the strategies are chosen. If a variable has an expected value, its mean can be replaced by the expected value, computed as E[xi(t)] = Σ over l ∈ Li of l · p(xi(t) = l).
Method 2: the simulation meta-model of the dynamic influence diagram game can be used for what-if analysis. In what-if analysis, the influence diagram game is used to determine conditional probabilities p(S(t) | xi(t) = l), i.e. the values of several random variables are fixed and the resulting change in the joint probability of the other simulation variables is observed. The conditional probability thus reveals the link between an observed simulation state and the time evolution of the simulation. The same analysis can be applied to the effect of the players' strategies at the decision nodes on the simulation state: when a strategy pair (D1(i), D2(j)) is chosen and the state l is observed at time t, the conditional probability p(S(t) | D1(i), D2(j), xi(t) = l) is computed and its change over time analyzed. External random variables representing simulation parameters are analyzed in a similar way. Suppose an external variable takes the value zj = h; then the state variables are updated with p(S(t) | zj = h) or p(xi(t) | zj = h), and these conditional probabilities are examined to see whether the simulation is affected by the external variables. If the effects of external variables and of the strategy pair are analyzed simultaneously, the conditional probability p(S(t) | D1(i), D2(j), zj = h) can be computed. The conditional probabilities in what-if analysis can also be used to compute the conditional expectation of the simulation state at time t ∈ T, with the formula E[xi(t) | e] = Σ over l ∈ Li of l · p(xi(t) = l | e), where e denotes the conditioning evidence.
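The conditional expectation at the end of the what-if analysis is a one-line sum. A sketch over an assumed conditional-distribution dictionary (the interface is illustrative):

```python
def conditional_expectation(cond_dist):
    """E[x_i(t) | e] = sum over l of l * p(x_i(t) = l | e).
    cond_dist: dict mapping each numeric value l to p(x_i(t) = l | e)."""
    return sum(l * p for l, p in cond_dist.items())
```

For a binary state variable with p(x = 1 | e) = 0.75, the conditional expectation is 0.75.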
Method 3: the simulation meta-model of the dynamic influence diagram game can be used to solve for the optimal strategy. Solving the influence diagram game means observing how strategy pairs affect the utility nodes. Since the two players cannot cooperate and both obtain the relevant information simultaneously, the payoffs of the non-zero-sum influence diagram game can be maximized simultaneously and a Pareto equilibrium found among the strategy pairs. If the model is relatively simple, enumeration can be used. If the model is complex, the moving-average control method can be used: the decision vector of each stage is computed considering only the few preceding decision stages, and the computation is repeated until the game ends. The influence diagram game also retains important information for further study of how different utility functions affect the Pareto equilibrium. If one player has chosen a strategy, the other player can maximize expected utility to find the strategy with the greatest utility, which is no different from solving an ordinary influence diagram.
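For a simple model, the enumeration approach to finding Pareto-optimal strategy pairs can be sketched as follows (the payoff-table interface is an assumption for illustration; the patent does not fix a data structure):

```python
def pareto_pairs(payoffs):
    """Enumerate strategy pairs and keep those whose (u1, u2) payoff vector
    is not Pareto-dominated by any other pair.
    payoffs: dict mapping (strategy1, strategy2) -> (utility1, utility2)."""
    def dominates(a, b):
        # a is weakly better in both components and strictly better in one
        return a[0] >= b[0] and a[1] >= b[1] and a != b
    return [p for p, u in payoffs.items()
            if not any(dominates(v, u) for q, v in payoffs.items() if q != p)]

# e.g. two strategies per player, "A" (attack) and "D" (defend)
payoffs = {("A", "A"): (3, 3), ("A", "D"): (1, 4),
           ("D", "A"): (4, 1), ("D", "D"): (2, 2)}
```

Here ("D", "D") is dominated by ("A", "A"), so the Pareto set is the other three pairs.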
通过一个空战仿真分析的例子来说明影响图对策的构造和应用。 空战双方为红蓝两方,各有一架飞机,都挂载两枚导弹。空战的初始 状态是红机处于优势,红机靠近蓝机正准备发射导弹攻击蓝机。利用 本发明公开的方法,建立了影响图对策。在时间t上空战仿真状态用 两个状态变量S(t)={x1(t),x2(t)}表示,在图3中用机会结点表示。这两个 变量有各自的指标分别为“在时间t红机生存”和“在时间t蓝机生存”。 当状态变量为二元的,则它们取值集合为L1=L2={0,1},则两个变量可 以定义空战4个状态:(1)红机优势:红机生存且蓝机被击落,即 x1(t)=1,,x2(t)=0;(2)蓝机优势:蓝机生存且红机被击落,即x1(t)=0, x2(t)=1;(3)均势:双方都生存,即x1(t)=1,x2(t)=1;(4)均不利:双 方都被击落,即x1(t)=0,x2(t)=0。红蓝双方都有两条策略供选择:攻 击策略和防御策略,图3中用决策结点表示。攻击策略是不管对手态 势以击落对方为目标,而防御策略是仅在处于优势时才攻击对方,否 则先规避,抢占有利位置再攻击。在构造影响图对策的过程中,首先 使用仿真数据估计状态变量的概率曲线。重复进行了仿真2000次, 每一次终止的条件是红蓝双方被击落或者在时间达到终止时间 tf=300s。根据设定的精度ε=0.03以及本发明公开的算法,可以确定时 间片段的数目为|T|=10,最优时刻的位置为 T={0,56,79,89,97,121,132,185,230,300}。影响图对策模型见图3。在图3中, 黑色箭头的表示原始影响图对策,而白色箭头的弧表示利用仿真数据 加入的额外弧。图4是仿真估计曲线图,虚线是参考估计曲线,从图 中可以看出元模型效果理想。图4描述了当双方都选择攻击策略后空 战进展情况。概率曲线说明在79秒前和185秒后状态变量的概率分 布没有突出的变化。因此,最明显的变化发生在79秒到185秒这段 时间。“红方优势”和“蓝方优势”的概率在89秒到132秒之间有两 个峰值。原因是红方首先发射导弹,然后蓝方在未击落之前也发射导 弹。在这种情形下,双方不管态势总是攻击对方。在89秒时,红方 先发射导弹,在97秒前击落蓝方的概率为0.45。不过,蓝方在未击 落之前也有时间发射导弹。在仿真结束时,红方优势的概率为0.303, 蓝方优势0.184,均不利为0.378,均势为0.135。利用影响图对策 元模型,很容易分析出仿真的进展。An example of air combat simulation analysis is used to illustrate the construction and application of influence diagram countermeasures. The two sides of the air battle are red and blue, each with an aircraft, and both are equipped with two missiles. The initial state of the air battle is that the red plane is in the upper hand, and the red plane is close to the blue plane and is preparing to launch a missile to attack the blue plane. Using the method disclosed in the present invention, an influence diagram strategy is established. The air combat simulation state at time t is represented by two state variables S(t)={x 1 (t), x 2 (t)}, which are represented by chance nodes in Figure 3 . These two variables have their own indicators respectively as "survival of red machines at time t" and "survival of blue machines at time t". 
Since the state variables are binary, their value sets are L1=L2={0, 1}, and the two variables define four air combat states: (1) red advantage: the red aircraft survives and the blue aircraft is shot down, i.e. x1(t)=1, x2(t)=0; (2) blue advantage: the blue aircraft survives and the red aircraft is shot down, i.e. x1(t)=0, x2(t)=1; (3) balance: both sides survive, i.e. x1(t)=1, x2(t)=1; (4) mutually unfavorable: both sides are shot down, i.e. x1(t)=0, x2(t)=0. Both red and blue choose between two strategies, an attack strategy and a defensive strategy, represented by decision nodes in Figure 3. The attack strategy aims to shoot down the opponent regardless of the situation, while the defensive strategy attacks only from a position of advantage; otherwise the aircraft first evades, seizes a favorable position, and then attacks. In constructing the influence diagram game, the simulation data are first used to estimate the probability curves of the state variables. The simulation is repeated 2000 times; each run terminates when the red and blue aircraft are shot down or when the time reaches the termination time tf=300 s. Given the set accuracy ε=0.03 and the algorithm disclosed in the present invention, the number of time segments is determined as |T|=10, and the optimal time points are T={0, 56, 79, 89, 97, 121, 132, 185, 230, 300}. The influence diagram game model is shown in Figure 3, where black arrows represent the original influence diagram game and white arrows represent the additional arcs added from the simulation data. Figure 4 shows the simulation estimation curves; the dotted lines are the reference estimation curves, and the figure shows that the meta-model performs well. Figure 4 depicts how the air combat progresses when both sides choose the attack strategy.
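The step above that selects |T| time points under an accuracy ε is not fully specified in this section. One plausible greedy sketch, assuming the criterion is that a piecewise-constant approximation of each probability curve never drifts more than ε from its last kept value (an assumption, not the patent's exact rule):

```python
def select_time_points(times, prob, eps):
    """Greedily keep a time point whenever the estimated probability has
    moved more than eps since the last kept point.

    times: sorted list of sample instants
    prob:  estimated probability of a state variable at each instant
    eps:   tolerated approximation error (e.g. 0.03 in the example)
    """
    kept = [times[0]]
    last = prob[0]
    for t, p in zip(times[1:], prob[1:]):
        if abs(p - last) > eps:
            kept.append(t)
            last = p
    # Always keep the termination time so the whole horizon is covered.
    if kept[-1] != times[-1]:
        kept.append(times[-1])
    return kept

# Toy curve: flat stretches separated by jumps larger than eps.
times = list(range(11))
prob = [0.0, 0.0, 0.05, 0.05, 0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.2]
```

On this toy curve with eps=0.03 the sketch keeps the instants `[0, 2, 4, 7, 10]`: the start, each jump, and the end, analogous to the ten instants T={0, 56, ..., 300} found in the example.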
The probability curves show no significant change in the probability distribution of the state variables before 79 s or after 185 s; the most noticeable changes occur between 79 s and 185 s. The probabilities of "red advantage" and "blue advantage" each have two peaks between 89 s and 132 s. The reason is that red launches its missile first, and blue also launches a missile before being shot down. In this scenario both sides always attack each other regardless of the situation. At 89 s red launches first, and the probability of shooting down blue before 97 s is 0.45; blue, however, still has time to launch a missile before being shot down. At the end of the simulation the probability of red advantage is 0.303, blue advantage 0.184, mutually unfavorable 0.378, and balance 0.135. With the influence diagram game meta-model, the progress of the simulation is easy to analyze.
To study the relationship between different values of the simulation state at different times, the influence diagram game meta-model can be used for what-if analysis. For example, suppose that at time t=89 s both red and blue are still alive, blue chooses the attack strategy, and red tries each of its two strategies in turn, i.e. x1(89)=1, x2(89)=1, dB={aggressive} and dR∈{aggressive, defensive}. The approximate curves are shown in Figure 4. Figure 5 shows that if red's first missile does not hit blue, adopting the defensive strategy is comparatively poor: in that case blue wins the air combat with probability 0.514, while red wins with only 0.288.
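The what-if analysis above amounts to conditioning the meta-model's joint distribution on observed evidence (here, both aircraft alive at t=89 s) and reading off the posterior. A minimal sketch with a hypothetical joint distribution over outcome tuples:

```python
def condition(joint, evidence):
    """Renormalize a joint distribution on the outcomes satisfying evidence.

    joint:    dict mapping outcome tuples to probabilities (summing to 1)
    evidence: predicate outcome -> bool
    """
    mass = sum(p for o, p in joint.items() if evidence(o))
    if mass == 0.0:
        raise ValueError("evidence has zero probability")
    return {o: p / mass for o, p in joint.items() if evidence(o)}

# Hypothetical joint over (x1 at 89 s, x2 at 89 s, eventual winner).
joint = {
    (1, 1, "red"):  0.2,
    (1, 1, "blue"): 0.3,
    (1, 0, "red"):  0.5,
}
posterior = condition(joint, lambda o: o[0] == 1 and o[1] == 1)
```

Conditioning on "both alive at 89 s" rescales the two surviving outcomes to 0.4 and 0.6; the patent's 0.514 vs 0.288 figures come from the same kind of conditioning on the full model.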
Table 1 Probability distribution over the states of the utility node
A third application of the influence diagram game is obtaining the optimal strategies for both players. If blue chooses the attack strategy, the probability distribution over the states of the utility node is given in Table 1, and three utility functions are proposed in Table 2: f1(·) expresses a decision maker whose goal is red's survival, f2(·) one interested only in shooting down blue, and f3(·) one who requires red to win outright. Table 2 also shows that different utility functions yield different optimal decisions: under f1(·) and f3(·) the defensive strategy is more suitable, while under f2(·) the decision maker prefers the attack strategy. When both red and blue have two strategies to choose from, the Pareto equilibrium is the solution of the non-zero-sum game. Assuming both players adopt the utility function f1(·), the defensive strategy is optimal for both sides; the calculation results are shown in Table 3, and both sides should choose the defensive strategy.
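Choosing red's strategy under a given utility function reduces to maximizing expected utility over the terminal-state distribution. A sketch using the attack-vs-attack probabilities quoted earlier; the defensive-strategy distribution is invented for illustration, chosen so that f1 and f3 favor defense while f2 favors attack, as the tables indicate:

```python
def expected_utility(dist, utility):
    # dist maps terminal states (x1, x2) to probabilities.
    return sum(p * utility(x1, x2) for (x1, x2), p in dist.items())

def best_strategy(dists_by_strategy, utility):
    # Pick the strategy whose terminal distribution maximizes expected utility.
    return max(dists_by_strategy,
               key=lambda s: expected_utility(dists_by_strategy[s], utility))

# attack: end-of-simulation probabilities from the example;
# defensive: hypothetical numbers for illustration only.
dists_by_strategy = {
    "attack":    {(1, 0): 0.303, (0, 1): 0.184, (0, 0): 0.378, (1, 1): 0.135},
    "defensive": {(1, 0): 0.35,  (0, 1): 0.05,  (0, 0): 0.05,  (1, 1): 0.55},
}
f1 = lambda x1, x2: x1               # red survives
f2 = lambda x1, x2: 1 - x2           # blue is shot down
f3 = lambda x1, x2: x1 * (1 - x2)    # red wins outright
```

With these numbers, `best_strategy` returns "defensive" under f1 and f3 but "attack" under f2, reproducing the qualitative pattern of Table 2.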
Table 2 Red's optimal decision under different utility functions
Table 3 Optimal solutions for both players of the game
In summary, the above is only a preferred example of the present invention and is not intended to limit the scope of its implementation; all methods and steps described within the scope of the claims of the present invention shall be included within the scope of the claims of the present invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810472772.3A CN109299491B (en) | 2018-05-17 | 2018-05-17 | Meta-model modeling method based on dynamic influence graph strategy and using method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810472772.3A CN109299491B (en) | 2018-05-17 | 2018-05-17 | Meta-model modeling method based on dynamic influence graph strategy and using method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109299491A CN109299491A (en) | 2019-02-01 |
CN109299491B true CN109299491B (en) | 2023-02-10 |
Family
ID=65167688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810472772.3A Expired - Fee Related CN109299491B (en) | 2018-05-17 | 2018-05-17 | Meta-model modeling method based on dynamic influence graph strategy and using method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109299491B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109977571B (en) * | 2019-04-01 | 2021-07-16 | 清华大学 | Simulation calculation method and device based on hybrid data and model |
DE102019209540A1 (en) * | 2019-06-28 | 2020-12-31 | Robert Bosch Gmbh | Process and device for the optimal distribution of test cases on different test platforms |
CN112464548B (en) * | 2020-07-06 | 2021-05-14 | 中国人民解放军军事科学院评估论证研究中心 | Dynamic allocation device for countermeasure unit |
CN112464549B (en) * | 2020-07-06 | 2021-05-14 | 中国人民解放军军事科学院评估论证研究中心 | Dynamic allocation method of countermeasure unit |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101405737A (en) * | 2006-04-11 | 2009-04-08 | 国际商业机器公司 | Method and system for verifying performance of an array by simulating operation of edge cells in a full array model |
CN104484500A (en) * | 2014-09-03 | 2015-04-01 | 北京航空航天大学 | Air combat behavior modeling method based on fitting reinforcement learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7707131B2 (en) * | 2005-03-08 | 2010-04-27 | Microsoft Corporation | Thompson strategy based online reinforcement learning system for action selection |
Non-Patent Citations (2)
Title |
---|
Air combat maneuvering decision model based on influence diagrams; Zhong Lin et al.; Journal of System Simulation; 2007-04-20 (No. 08); full text * |
Research on an active meta-model for evaluating the ground-attack effectiveness of aviation equipment; Hua Yuguang et al.; Journal of System Simulation; 2007-11-05 (No. 21); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN109299491A (en) | 2019-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114358141B (en) | A multi-agent reinforcement learning method for collaborative decision-making of multiple combat units | |
CN109299491B (en) | Meta-model modeling method based on dynamic influence graph strategy and using method | |
Xiaohong et al. | Robustness evaluation method for unmanned aerial vehicle swarms based on complex network theory | |
CN111275174B (en) | A Game-Oriented Radar Countermeasure Strategy Generation Method | |
CN112329348A (en) | An intelligent decision-making method for military confrontation games under incomplete information conditions | |
CN113298260B (en) | Confrontation simulation deduction method based on deep reinforcement learning | |
CN110099045B (en) | Network security threat early warning method and device based on qualitative differential gaming and evolutionary gaming | |
CN115438467B (en) | A kill chain-based equipment system mission reliability assessment method | |
CN107832885A (en) | A kind of fleet Algorithm of Firepower Allocation based on adaptive-migration strategy BBO algorithms | |
CN105893694A (en) | Complex system designing method based on resampling particle swarm optimization algorithm | |
CN111784135B (en) | System combat capability quantitative analysis method based on hyper-network and OODA (object oriented data acquisition) ring theory | |
CN109101721B (en) | Multi-unmanned aerial vehicle task allocation method based on interval intuitionistic blurring in uncertain environment | |
CN110083748A (en) | A kind of searching method based on adaptive Dynamic Programming and the search of Monte Carlo tree | |
CN108696534A (en) | Real-time network security threat early warning analysis method and its device | |
CN116582349B (en) | Method and device for generating attack path prediction model based on network attack graph | |
CN113792984B (en) | Cloud model-based anti-air defense anti-pilot command control model capability assessment method | |
CN114492749A (en) | Time-limited red-blue countermeasure problem-oriented game decision method with action space decoupling function | |
CN115047907B (en) | Air isomorphic formation command method based on multi-agent PPO algorithm | |
CN114862152A (en) | Object importance evaluation method based on complex network | |
Li et al. | Dynamic weapon target assignment based on deep q network | |
CN110930054A (en) | A data-driven method for fast optimization of key parameters of combat system | |
CN117078182A (en) | Air defense and reflection conductor system cooperative method, device and equipment of heterogeneous network | |
CN116841707A (en) | Sensor resource scheduling method based on HDABC algorithm | |
CN114826737A (en) | Scale-free network defense performance improving method based on AI-assisted game | |
Gao et al. | Research on virtual entity decision model for LVC tactical confrontation of army units |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20230210 |