
CN115328758B - A performance testing method and system for industrial software with large data volumes - Google Patents

A performance testing method and system for industrial software with large data volumes

Info

Publication number
CN115328758B
CN115328758B
Authority
CN
China
Prior art keywords
test
data
script
management center
performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210768773.9A
Other languages
Chinese (zh)
Other versions
CN115328758A (en)
Inventor
吴彬彬
徐文豪
袁强
王勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkong Technology Co ltd
Zhejiang Supcon Technology Co Ltd
Original Assignee
Zhongkong Technology Co ltd
Zhejiang Supcon Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongkong Technology Co ltd, Zhejiang Supcon Technology Co Ltd filed Critical Zhongkong Technology Co ltd
Priority to CN202210768773.9A priority Critical patent/CN115328758B/en
Publication of CN115328758A publication Critical patent/CN115328758A/en
Application granted granted Critical
Publication of CN115328758B publication Critical patent/CN115328758B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Prevention of errors by analysis, debugging or testing of software
    • G06F11/3668Testing of software
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3414Workload generation, e.g. scripts, playback
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Prevention of errors by analysis, debugging or testing of software
    • G06F11/3668Testing of software
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Prevention of errors by analysis, debugging or testing of software
    • G06F11/3668Testing of software
    • G06F11/3672Test management
    • G06F11/3692Test management for test results analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract


The present invention discloses a performance testing method and system for industrial software with large data volumes. The performance testing method includes deploying an API request capture tool; simulating user access to the system under test and collecting API request information to generate a performance test script; deploying a script management center tool used to import test cases, parse the imported test cases, store the parsed content in a test case maintenance table, and connect to the database through a data test script; deploying a test management center tool that includes a message management center module for sending message tasks to the script management center and dispatching large-data-volume performance test tasks; the performance test script calls the data test script, which inserts test data into the database based on the status flag of the test case maintenance table, the generated test data being used for the large-data-volume performance test; test data of large volume can thus be created quickly for the different business scenarios of industrial software.

Description

Performance testing method and system for industrial software with large data volumes
Technical Field
The invention relates to the field of industrial software performance testing, and in particular to a performance testing method and system for industrial software with large data volumes.
Background
Traditional performance testing covers a single service scenario, results are available only after a test run completes, feedback is delayed, and time and resource utilization are low. Patent CN201010613464.1 provides a performance test system and method in which the system monitors performance data of the server under test in real time, increases the number of concurrent users when the CPU utilization of the server under test has not reached a set threshold, and stops the performance test when the CPU utilization exceeds the set threshold. This method achieves a degree of unattended operation but has several shortcomings:
1. The system still targets a single performance test scenario. When the service system is large and many performance scenarios need to be tested, a tester must still intervene to switch scenarios, a large amount of SQL data must be created for each scenario switch, and creating that data manually takes considerable time.
2. The system only monitors the resources of the server under test in real time and judges a single condition; it does not collect or analyze other key performance results, which can render subsequent performance tests invalid and waste the time and resources spent on them.
3. When the CPU utilization of the server under test exceeds the set threshold, the current test is stopped but the user is not notified promptly, so there is still a time gap between problem investigation and the subsequent test.
Disclosure of Invention
To overcome these shortcomings, the invention provides a large-data-volume performance testing method and system for industrial software that quickly creates large volumes of test data for different business scenarios of the industrial software, improving on the efficiency of writing SQL statements manually. At the same time, the test results and the hardware resources of the tested server are monitored in real time, and testers are notified and warned promptly whenever the preset performance indexes or server resource thresholds are not met during testing.
A first aspect of the invention provides a performance testing method for industrial software with large data volumes, comprising the steps of deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information;
based on a preset performance test scenario and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, collecting the API request information generated by these accesses with the API request capture tool, and generating a performance test script; deploying a script management center tool, wherein the script management center tool comprises at least a test case maintenance module used for importing a plurality of test cases, a functional module used for importing performance test scripts, and data test scripts for the test cases of different test scenarios, the script management center parsing the imported test cases through the test case module and storing the parsed content in the test case maintenance table, wherein the data test script comprises at least the script required to connect to the database; deploying a test management center tool comprising at least a message management center module used for sending message tasks to the script management center and starting large-data-volume performance tests; the performance test script calls the data test script, the data test script inserts simulated data of a preset volume into the database based on the status flag in the test case maintenance table of the corresponding test case, and the generated simulated data is used for the large-data-volume performance test; the message management center module receives the execution result of the data test script and starts the large-data-volume performance test task; and test result collection is started and the test results are analyzed.
Further, the performance test script calls the data test script, and the data test script inserts simulated data of a preset volume into the database based on the status flag of the test case maintenance module. Specifically, after receiving the large-data-volume performance test request of the message management center module, the script management center tool starts to execute the test case and calls the data test script through the performance test script; the data test script judges, based on the status flag of the test case maintenance table, whether data of the preset volume needs to be inserted: if the status flag indicates that data for the performance test already exists, the data does not need to be inserted again, and if the status flag indicates that the data for the performance test does not exist, data of the preset volume is inserted.
Inserting data of the preset volume comprises the following steps: setting the database information and table name in the data test script, initializing the database connection, and invoking the database insert statement; obtaining a cursor object capable of executing SQL statements, writing the inserted data in the form of random variables, and inserting them cyclically in large batches; after execution of the preset data volume is completed, sending an insertion-completed message to the message management center, the returned result set being displayed as tuples.
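A minimal sketch of this status-check-and-insert flow in Python, assuming an Oracle database reached through the cx_Oracle driver and the TEST_CASES table defined later in this description; the connection string, the target table PROCESS_DATA with its columns, and the reading of the STATUS flag as 0/1 are illustrative assumptions rather than the patented implementation:

# Minimal sketch: check the STATUS flag of a test case and, if the performance
# test data does not yet exist, bulk-insert simulated rows. The DSN, the target
# table PROCESS_DATA and its columns are assumptions for illustration only.
import random
import cx_Oracle

def prepare_test_data(case_id, row_count):
    conn = cx_Oracle.connect("user/password@host:1521/orclpdb")  # assumed DSN
    cur = conn.cursor()  # cursor object capable of executing SQL statements

    # Read the status flag from the test case maintenance table.
    cur.execute("SELECT STATUS FROM TEST_CASES WHERE ID = :1", [case_id])
    (status,) = cur.fetchone()
    if status == 1:  # data for this performance test already exists
        conn.close()
        return "skipped"

    # Write the inserted data as random variables and insert in large batches.
    rows = [(i, random.uniform(0.0, 100.0)) for i in range(row_count)]
    cur.executemany("INSERT INTO PROCESS_DATA (ID, VAL) VALUES (:1, :2)", rows)

    # Mark the case so the next run does not insert again, then commit.
    cur.execute("UPDATE TEST_CASES SET STATUS = 1 WHERE ID = :1", [case_id])
    conn.commit()
    conn.close()
    return "inserted"

In a run, such a script would then report completion back to the message management center module, which starts the large-data-volume performance test.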
Further, the method also comprises adding the identification of the interface sending the API request and the identification of the testing step to the performance test script and saving it.
Further, starting test result collection and analyzing the test results specifically comprises storing the stress test result data of each user server and the resource usage data of the server on which the industrial software to be tested is deployed into a time-series database in chronological order, wherein each record contains at least a timestamp; sending the data at fixed intervals; and dynamically displaying the stress test result data and the server resource usage data in a visual view on a Web page.
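A minimal sketch of the timestamped storage described above, using SQLite as a stand-in for the time-series database; the table layout and metric names are illustrative assumptions:

# Minimal sketch: persist stress test results and server resource samples with
# timestamps. SQLite stands in for the time-series database; the table layout
# and metric names are illustrative assumptions.
import sqlite3
import time

conn = sqlite3.connect("perf_results.db")
conn.execute("CREATE TABLE IF NOT EXISTS metrics (ts REAL, source TEXT, metric TEXT, value REAL)")

def record(source, metric, value):
    # Every record carries a timestamp so results can be sent and replayed in time order.
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)", (time.time(), source, metric, value))
    conn.commit()

record("execution-machine-1", "response_time_ms", 182.0)
record("tested-server", "cpu_percent", 67.5)

The stored records can then be queried by time range and pushed at fixed intervals to the Web page view described above.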
Further, if the test result data is erroneous or the server resource usage data exceeds a preset threshold, the recipients set in the configured test parameters are notified through the message middleware, and the stress test result data and the server resource usage data can be accessed and viewed through the Web page of the test management center tool.
Furthermore, the performance testing method for industrial software with large data volumes further comprises collecting, storing and filtering logs based on a preset log level, and displaying the logs based on graphical programming, so that problems can be located quickly.
Further, the test management center tool further comprises a system configuration module, a resource file module, a task scheduling module, a monitoring module, a data analysis module and a data display module. The system configuration module is used for configuring the operation parameters and test environment of the tested server and the test execution machines; the resource file module is used for configuring test scripts; the task scheduling module is used for configuring the task name, selecting the test scripts to be executed and their execution time and frequency, and starting the performance test and executing the test scripts after the task is built and run. The monitoring module is used for monitoring the operation conditions of the tested server and the test execution machine pool and collecting their operation data; the data analysis module analyzes and summarizes the performance test result indexes and server resource usage through Python programming; the data display module visually and dynamically displays, in real time, the performance test result indexes and server resource usage set by the user. The message management center module is also used for configuring the relevant notification personnel, notification modes and notification frequency and promptly notifying the responsible persons through message middleware, and the log management module is used for collecting the log files produced by the test execution machine pool during execution.
Further, the data test script is a Python script, and the performance test script is a JMeter script.
The invention also provides a performance test system for running the above performance testing method, which comprises at least a script management center, a real-time monitoring center and a message management center. The script management center adapts to different service scenarios by running the corresponding scripts and creating test data; the real-time monitoring center acquires the performance test results and server resource information in real time after the scripts run and sends them to the message management center; and the message management center gives feedback on the test results based on the preset thresholds, the performance test results and the server resource information.
The beneficial effects of the invention are as follows:
1. For the pain points of complex service scenarios, large data volumes and high timeliness in industrial software performance testing, different scripts are called to quickly generate large amounts of database data.
2. Performance test results and server resources are monitored in real time during the performance test, and messages are sent to the user promptly and proactively so that adjustments can be made quickly; the method is well adapted to, and highly targeted at, the industrial software field, and is fast and efficient.
3. By configuring timed tasks, test results can be sent to the analysis device in real time; the analysis device analyzes the results and sends the analysis to the user through the configured mail server, truly realizing an unattended performance testing process.
Drawings
FIG. 1 is a flow chart of a method for testing large data volume performance of industrial software according to an embodiment of the invention;
FIG. 2 is a flow chart of generating a performance test script according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the workflow of the script management center tool according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a test case maintenance table according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating handling of a test result exceeding a preset threshold during test execution according to an embodiment of the invention.
Detailed Description
API (Application Programming Interface): a convention by which the different components of a software system are joined together.
Test case: a specific set of input data, operations or environment settings, together with the expected results, provided to the system under test for the purpose of conducting a test.
Test script: a script written for automated testing, corresponding to a test case.
Fiddler: an Internet debugging proxy tool that can capture HTTP traffic between a computer (or even a mobile phone) and the Internet and allows that traffic to be inspected and analyzed.
JMeter: software for testing applications with a client/server architecture (e.g., web applications); it can be used to test the performance of static and dynamic resources.
The invention will now be described in further detail with reference to the drawings and specific examples, which are given by way of illustration only and are not intended to limit the scope of the invention, so that those skilled in the art can better understand the invention.
The invention discloses a performance testing method for industrial software with large data volumes; FIG. 1 is a flow diagram of the method, which specifically comprises: deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information; based on a preset performance test scenario and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, collecting the API request information generated by these accesses with the API request capture tool, and generating a performance test script; deploying a script management center tool comprising at least a test case maintenance module for importing a plurality of test cases, a functional module for importing performance test scripts, and data test scripts for the test cases of different test scenarios, the script management center parsing the imported test cases through the test case module and storing the parsed content in the test case maintenance table, wherein the data test script comprises at least the script required to connect to the database; deploying a test management center tool comprising at least a message management center module that sends message tasks to the script management center and starts large-data-volume performance tests; after the script management center tool imports the performance test script and the message management center receives a message, a scheduling task is sent to the script management center; the performance test script calls the data test script, which inserts simulated data of a preset volume into the database based on the status flag in the test case maintenance table of the corresponding test case, the generated simulated data being used for the large-data-volume performance test; the message management center module receives the execution result of the data test script and starts the large-data-volume performance test task; and test result collection is started and the test results are analyzed.
The following describes these steps using, as an example, a large-data-volume performance test of industrial software applied in the pharmaceutical industry.
S1, deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information.
In an embodiment of the present invention, Fiddler is employed as the API request capture tool. Fiddler is downloaded and installed, and is started when the user tests, thereby realizing the API request capture tool.
S2, based on a preset performance test scenario and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, and collecting the API request information generated by these accesses with the API request capture tool, so as to generate a performance test script;
A user is simulated accessing the system under test, i.e., the industrial software to be tested, through a browser according to the application scenario of the industrial software and the corresponding performance test requirements. The simulated user sends API requests to the system under test through the browser.
For example, when multiple users simultaneously request a certain service function, Fiddler is installed and started on the system under test, and the simulated users access the system under test through the browser; the flow is shown schematically in FIG. 2. Fiddler intercepts the API requests sent by the simulated users, collects all the API request information generated during operation of the Web product, and saves it to form a JMeter-format script.
In some embodiments, the method further comprises adding an identification of the interface sending the API request and an identification of the testing step to the performance test script: a unique ID is generated from the timestamp at which the request was sent, a value is produced by intercepting the request path, and the request identification is added based on the ID and the value.
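A minimal sketch of how such a request identification could be built, assuming the timestamp and URL are taken from the captured request; the exact identifier format is an illustrative assumption:

# Minimal sketch: build a request identification from the timestamp at which the
# request was sent and a value derived from the request path. The format is assumed.
import time
from urllib.parse import urlparse

def make_request_id(request_url, sent_at=None):
    sent_at = sent_at if sent_at is not None else time.time()
    unique_id = str(int(sent_at * 1000))              # unique ID from the send timestamp
    path_value = urlparse(request_url).path.strip("/").replace("/", "_")
    return f"{unique_id}-{path_value}"                # e.g. "1718000000000-api_process_type"

print(make_request_id("http://host:8080/api/process/type"))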
S3, deploying a script management center tool, wherein the script management center tool comprises at least a test case maintenance module for importing a plurality of test cases, a functional module for importing performance test scripts, and data test scripts for the test cases of different test scenarios; the script management center parses the imported test cases through the test case module and stores the parsed content in the test case maintenance table, the data test scripts connecting to the database.
The script management center is implemented in Python: Python script files are written and then imported into the script management center, which parses and manages the imported test cases. The workflow of the script management center tool is shown schematically in FIG. 3. The script management center imports the performance test script; after the message management center module receives a message, it sends the task to the script management center, which starts the task, executes the performance test script and judges the status flag of the test case. If the status flag of the test case is False, the data test script is called to produce a large amount of test data; after execution is completed, a message is sent to the message management center module and the large-data-volume test is started, as sketched below.
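A compact sketch of this message-driven flow, assuming a simple callback-style handler; the function names, message fields and in-memory stand-ins for the use case maintenance table are illustrative assumptions:

# Minimal sketch of the script management center's task handling: receive a task
# message, check the case status flag, create data if needed, and report back.
# Function names and message fields are illustrative assumptions.
def handle_task_message(message, case_table, data_scripts, notify):
    case = case_table[message["case_name"]]        # row of the use case maintenance table
    if not case["status"]:                         # False: performance test data not yet created
        data_scripts[case["pycasename"]]()         # run the matching Python data test script
        case["status"] = True
    notify({"case_name": message["case_name"], "event": "data_ready"})

# Usage with in-memory stand-ins for the maintenance table and the data test scripts.
cases = {"test case 1": {"pycasename": "test_process_type", "status": False}}
scripts = {"test_process_type": lambda: print("inserting simulated rows ...")}
handle_task_message({"case_name": "test case 1"}, cases, scripts, print)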
In some embodiments, the script management center has an import button to import performance test scripts, i.e., to import test cases. The script management center maintains and stores the test cases in an Oracle data table, created with the following statement:
CREATE TABLE"TEST_CASES"("ID"NUMBER(20,0)NOT NULL ENABLE,"CASE_NAME"VARCHAR2(256)NOT NULL ENABLE,"PYCASENAME"VARCHAR2(20,0)NOT NULL ENABLE,"OPERATE"VARCHAR2(256)NOT NULL ENABLE,"STATUS"NUMBER(1,0)DEFAULT 0,"CREATOR"VARCHAR2(64),"MODIFY_TIME"TIMESTAMP(6)DEFAULT NULL,PRIMARY KEY("ID"));
ID: the unique identifier of the script use case table.
CASE_NAME: the case name, a user-defined test case name such as "test case 1" or "test case 2".
PYCASENAME: the Python script file corresponding to the use case.
STATUS: a valid use case is identified by True, an invalid use case by False.
OPERATE: the operation, stored in the database as 0, 1 or 2, where 0 represents editing, 1 represents execution and 2 represents deletion.
CREATOR: the user who created the use case.
MODIFY_TIME: the modification time of the use case.
When a certain test case is executed, the corresponding Python script file is run, and data can be generated by entering a start value and an end value through the interface, so test cases can cover different service scenarios simply by writing several Python scripts, one for each service scenario. Writing a Python script file first requires creating a use case maintenance directory, for example D:\datatest\case\test_process_type.py. A schematic diagram of the use case maintenance table of this embodiment is shown in FIG. 4.
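A minimal sketch of what such a per-scenario case script might look like, driven by a start value and an end value; the file name test_process_type.py comes from the example above, while the command-line interface and the generated row contents are illustrative assumptions:

# D:\datatest\case\test_process_type.py -- minimal sketch of a per-scenario data
# test case script driven by a start value and an end value. The command-line
# interface and the generated row contents are illustrative assumptions.
import random
import sys

def generate_rows(start, end):
    # One simulated record per index in [start, end); values are random.
    return [(i, f"process_type_{i}", random.random()) for i in range(start, end)]

if __name__ == "__main__":
    start_value, end_value = int(sys.argv[1]), int(sys.argv[2])
    rows = generate_rows(start_value, end_value)
    print(f"prepared {len(rows)} simulated rows for insertion")

Covering a new service scenario then only requires adding another such script to the use case maintenance directory and registering it in the TEST_CASES table.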
S4, deploying a test management center tool, wherein the test management center comprises at least a message management center module, and the message management center module is used for sending message tasks to the script management center and starting large-data-volume performance tests.
In some embodiments, the test management center tool comprises several modules. The system configuration module is mainly used for configuring the tested server, the operation parameters of the test execution machines and the test environment. The resource file module is mainly used for configuring test scripts. The task scheduling module is mainly used for configuring task names, the test scripts to be executed, and the execution time and frequency; after the task is built and run, the performance test is started and the test scripts are executed. The monitoring module is mainly used for monitoring the operation conditions of the tested server and the test execution machine pool and collecting their operation data, and the data analysis module analyzes and summarizes the performance test result indexes and server resource usage through Python programming. The data display module visually and dynamically displays, in real time, the performance test result indexes and server resource usage set by the user. The message management center module is mainly used for configuring the relevant notification personnel, notification modes and notification frequency, and promptly notifying the responsible persons and the script management center to start tasks through the message middleware. The log management module is mainly used for collecting the log files produced by the test execution machine pool during execution.
The test management center tool is accessible through a Web page.
A plurality of test cases are imported in the case maintenance module of the script management center, one test case corresponding to one performance test script, and a user can directly access the Web interface to import the performance test script as a resource file. In the system settings of the system under test, the number of virtual users for each scenario, the user think time and the test result save path are set and modified in the test parameters; the usage thresholds of the test server CPU, memory, IO and network card, the test error rate threshold, the log level, and the display conditions of the test result indexes and server resource usage are added in threshold management. A timed task is configured in task scheduling, a trigger is built to execute the timed task, post-build actions are configured, and the test report template and mail recipient information are configured; the test report template supports both the default template and user-defined settings, so that unattended operation is achieved.
Configuring the timed task can trigger the interface to send tasks, i.e., test results can be sent to the information collection device at fixed intervals for test result collection.
S5, the performance test script calls the data test script, the data test script inserts simulated data of a preset volume into the database based on the status flag in the test case maintenance table of the corresponding test case, and the generated simulated data is used for the large-data-volume performance test.
When a certain test case is executed, i.e., after a large-data-volume performance test is started, the corresponding Python script file is run and data is inserted into the database; the data is written as random variables and inserted cyclically, which efficiently produces batches of test data. During insertion, whether data needs to be inserted is judged according to the STATUS field of the test case maintenance table: if the data already exists, it does not need to be inserted again; if it does not exist, preparation for insertion begins.
Taking an Oracle database connected in the data test script as an example, to create one million records in the Oracle database, we can define a table object class, initialize the database connection with cx_Oracle, and obtain a cursor object capable of executing SQL statements; the result set returned after execution is displayed as tuples by default.
In some embodiments, for example when creating a time field, we automatically subtract one day by setting a time-format string and writing dt = (dt + datetime.timedelta(days=-1)) for each cyclic insertion. Modifying the time field may be required when data fixed within a certain period is needed; the time can be converted into a corresponding timestamp, for example modify_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(1585497600 - random.randrange(1, 1000000))), and the data is then inserted in batches with the executemany method using a statement of the form insert into <table name> values (<placeholders>).
Another business scenario involves different fields: for example, for an ID field the value in the loop is str(i), and the input_key field is set to input_key = 'LIMS: ammonia nitrogen content' + shift_date.
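A minimal sketch combining these pieces, reusing the cx_Oracle connection style from above; the DSN and the target table SAMPLE_DATA with its columns are illustrative assumptions, while the day-by-day time decrement, the str(i) IDs, the 'LIMS: ...' input_key prefix and the executemany batch insert follow the example in the text (shift_date is assumed here to be a date string derived from the decremented time):

# Minimal sketch: generate a large batch of rows whose time field steps back one
# day per iteration, then insert them in chunks with executemany. The DSN and the
# target table SAMPLE_DATA with its columns are illustrative assumptions.
import datetime
import cx_Oracle

conn = cx_Oracle.connect("user/password@host:1521/orclpdb")   # assumed DSN
cur = conn.cursor()

dt = datetime.datetime.now()
rows, BATCH = [], 10000
for i in range(1000000):
    dt = dt + datetime.timedelta(days=-1)                     # time field, one day earlier each cycle
    shift_date = dt.strftime("%Y%m%d")                        # assumed form of shift_date
    rows.append((str(i),                                      # ID field written as str(i)
                 "LIMS: ammonia nitrogen content" + shift_date,
                 dt.strftime("%Y-%m-%d %H:%M:%S")))
    if len(rows) == BATCH:
        cur.executemany("INSERT INTO SAMPLE_DATA (ID, INPUT_KEY, MODIFY_TIME) VALUES (:1, :2, :3)", rows)
        rows.clear()
if rows:
    cur.executemany("INSERT INTO SAMPLE_DATA (ID, INPUT_KEY, MODIFY_TIME) VALUES (:1, :2, :3)", rows)
conn.commit()
conn.close()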
S6, the message management center module receives the execution result of the data test script and starts a performance test task with large data volume;
S7, starting test result collection and analyzing the test results.
A performance test task is started, and the result collection device and the server monitoring device are started: the server monitoring device monitors the hardware resources of the tested server, and the result collection device collects the performance test results. In one embodiment of the invention, the interface is triggered to send tasks through the configured timed tasks, and test results are sent to the test result collection device at fixed intervals. The stress test results of each server and the server resource usage are collected and stored in a time-series database, each record carrying a timestamp of when it was stored, and the collected test results are consolidated and then sent to the analysis device. Based on the collected test result data and server resource usage data, the analysis device analyzes the data and dynamically displays, in a visual form and in real time, the performance test result indexes and server resource usage set by the user. When the test result error rate or the server resource utilization exceeds the set threshold, as shown in the flow chart of FIG. 5, the responsible persons are notified promptly through the message middleware, and they can access and view the historical test index data and server resource monitoring data through the Web page for troubleshooting and analysis.
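A minimal sketch of the threshold check and notification step, with e-mail over smtplib standing in for the message middleware; the thresholds, addresses and SMTP host are illustrative assumptions:

# Minimal sketch: compare collected metrics against configured thresholds and
# notify the responsible person when one is exceeded. smtplib stands in for the
# message middleware; host, addresses and thresholds are assumptions.
import smtplib
from email.mime.text import MIMEText

THRESHOLDS = {"error_rate": 0.05, "cpu_percent": 85.0}

def check_and_notify(sample, smtp_host="smtp.example.com",
                     sender="perf-test@example.com", recipient="tester@example.com"):
    exceeded = {k: v for k, v in sample.items() if k in THRESHOLDS and v > THRESHOLDS[k]}
    if not exceeded:
        return False
    msg = MIMEText(f"Performance test alert, thresholds exceeded: {exceeded}")
    msg["Subject"] = "Performance test threshold alert"
    msg["From"], msg["To"] = sender, recipient
    with smtplib.SMTP(smtp_host) as server:        # notify the responsible person promptly
        server.send_message(msg)
    return True

check_and_notify({"error_rate": 0.08, "cpu_percent": 72.0})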
The user thus learns the real-time performance test results immediately and can intervene manually in time, avoiding the waste of time and resources between the occurrence of an error and the end of the performance test and ensuring performance testing efficiency.
S8, collecting, storing and filtering logs based on a preset log level, and displaying them based on Python graphical programming, so that problems can be located quickly.
In some embodiments, the log management module is used to collect the log information generated by the test execution machine pool. The selectable log levels are DEBUG < INFO < WARNING < ERROR < FATAL. Log files are generated and stored on the hard disk by date; log analysis can parse the log file of a given date and search it according to keywords entered by the user, enabling quick localization of problems.
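A minimal sketch of level-based filtering and keyword search over date-named log files, using Python's standard logging module; the directory layout and file naming (logs/YYYY-MM-DD.log) are illustrative assumptions:

# Minimal sketch: write logs filtered by a preset level into a per-date file,
# then search that date's file by keyword for quick problem localization.
import datetime
import logging
import os

LOG_DIR = "logs"
os.makedirs(LOG_DIR, exist_ok=True)
today = datetime.date.today().isoformat()
logging.basicConfig(filename=os.path.join(LOG_DIR, f"{today}.log"),
                    level=logging.WARNING,         # preset level: DEBUG and INFO are filtered out
                    format="%(asctime)s %(levelname)s %(message)s")

logging.info("execution machine started")          # dropped by the preset level
logging.error("insert failed for case test_process_type")

def search_log(date_str, keyword):
    # Return the lines of that date's log file containing the keyword.
    path = os.path.join(LOG_DIR, f"{date_str}.log")
    with open(path, encoding="utf-8") as f:
        return [line.rstrip() for line in f if keyword in line]

print(search_log(today, "insert failed"))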
S9, configuring a mail service center and sending the test report results to different users.
In some embodiments, the method further comprises configuring the mail service center to send test report results to different users.
The invention provides a large-data-volume performance test system for industrial software, which comprises at least a script management center, a real-time monitoring center and a message management center. The script management center adapts to different service scenarios by running the corresponding scripts and creating test data; the real-time monitoring center acquires the performance test results and server resource information in real time after the scripts run and sends them to the message management center; and the message management center gives feedback on the test results based on the preset thresholds, the performance test results and the server resource information.
It should be noted that in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, a single step described in this specification may be described as being split into multiple steps in other embodiments, while multiple steps described in this specification may be described as being combined into a single step in other embodiments.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment mainly describes its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and reference may be made to the description of the method embodiments for the relevant parts. The system and system embodiments described above are merely illustrative, and some or all of the modules may be selected according to actual needs to achieve the objectives of the embodiments. Those of ordinary skill in the art can understand and implement the invention without undue effort.

Claims (9)

1. A performance testing method for industrial software with large data volumes, characterized by comprising the following steps:
deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information;
based on a preset performance test scenario and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, and collecting the API request information generated by these accesses with the API request capture tool, thereby generating a performance test script;
deploying a script management center tool, wherein the script management center tool comprises at least a test case maintenance module for importing a plurality of test cases, a functional module for importing performance test scripts, and data test scripts for the test cases of different test scenarios; the script management center parses the imported test cases through the test case module and stores the parsed content in the test case maintenance table, wherein the data test scripts comprise at least the script required to connect to a database;
deploying a test management center tool, wherein the test management center comprises at least a message management center module used for sending message tasks to the script management center and starting large-data-volume performance tests;
the performance test script calls the data test script, the data test script inserts simulated data of a preset volume into the database based on the status flag in the test case maintenance table of the corresponding test case, and the generated simulated data is used for the large-data-volume performance test;
the message management center module receives the execution result of the data test script and starts the large-data-volume performance test task;
starting test result collection and analyzing the test results;
wherein inserting data of the preset volume comprises the following steps:
initializing the database connection in the data test script through the set database information and table name, and invoking the database insert statement;
obtaining a cursor object capable of executing SQL statements, writing the inserted data in the form of random variables, inserting them cyclically in large batches, sending an insertion-completed message to the message management center after execution of the preset data volume is completed, and displaying the returned result set as tuples.
2. The method for testing the large-data-volume performance of industrial software according to claim 1, wherein the performance test script calls the data test script and the data test script inserts simulated data of the preset volume into the database based on the status flag of the test case maintenance module, specifically comprising the following steps:
after receiving the large-data-volume performance test request of the message management center module, the script management center tool starts to execute the test case and calls the data test script through the performance test script, and the data test script judges, based on the status flag of the test case maintenance table, whether data of the preset volume needs to be inserted,
if the status flag indicates that data for the performance test already exists, no data needs to be inserted,
and if the status flag indicates that the data for the performance test does not exist, data of the preset volume is inserted.
3. The method for testing the large-data-volume performance of industrial software according to claim 1, further comprising adding the identification of the interface sending the API request and the identification of the testing step to the performance test script and saving it.
4. The method for testing the large-data-volume performance of industrial software according to claim 1, wherein starting test result collection and analyzing the test results comprises:
storing the stress test result data of each user server and the resource usage data of the server on which the industrial software to be tested is deployed into a time-series database in chronological order, wherein each record contains at least a timestamp, sending the data at fixed intervals, and dynamically displaying the stress test result data and the server resource usage data in a visual view on a Web page.
5. The method of claim 4, further comprising: if the test result data is erroneous or the resource usage data of the server exceeds a preset threshold, notifying the recipients set in the configured test parameters through the message middleware, and viewing the stress test result data and the resource usage data of the server through Web page access to the test management center tool.
6. The method for testing the large-data-volume performance of industrial software according to claim 5, further comprising collecting, storing and filtering logs based on a preset log level, and displaying the logs based on graphical programming so that problems can be located quickly.
7. The method for testing the large-data-volume performance of industrial software according to claim 5, wherein the test management center tool further comprises a system configuration module, a resource file module, a task scheduling module, a monitoring module, a data analysis module and a data display module;
the system configuration module is used for configuring the operation parameters and the test environment of the tested server and the test execution machine;
the resource file module is used for configuring test scripts;
the task scheduling module is used for configuring the task name, selecting the test scripts to be executed and their execution time and frequency, and starting the performance test and executing the test scripts after the task is built and run;
the monitoring module is used for monitoring the operation conditions of the tested server and the test execution machine pool and collecting their operation data;
the data analysis module is used for analyzing and summarizing the performance test result indexes and server resource usage through Python programming;
the data display module is used for visually and dynamically displaying, in real time, the performance test result indexes and server resource usage set by the user;
the message management center module is also used for configuring the relevant notification personnel, notification modes and notification frequency, and promptly notifying the responsible persons through the message middleware.
8. The method for testing the large-data-volume performance of industrial software according to any one of claims 1 to 7, wherein the data test script is a Python script and the performance test script is a JMeter script.
9. A performance test system for running the industrial software large data volume performance test method according to any one of claims 1-8, characterized by comprising at least a script management center, a real-time monitoring center and a message management center,
The script management center is used for adapting different service scenes to run corresponding scripts and creating test data;
The real-time monitoring center is used for acquiring the performance test result and the server resource information after running the script in real time and sending the performance test result and the server resource information to the message management center;
and the message management center gives feedback on the test results based on the preset threshold, the performance test results and the server resource information.
CN202210768773.9A 2022-06-30 2022-06-30 A performance testing method and system for industrial software with large data volumes Active CN115328758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210768773.9A CN115328758B (en) 2022-06-30 2022-06-30 A performance testing method and system for industrial software with large data volumes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210768773.9A CN115328758B (en) 2022-06-30 2022-06-30 A performance testing method and system for industrial software with large data volumes

Publications (2)

Publication Number Publication Date
CN115328758A CN115328758A (en) 2022-11-11
CN115328758B (en) 2025-08-05

Family

ID=83918627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210768773.9A Active CN115328758B (en) 2022-06-30 2022-06-30 A performance testing method and system for industrial software with large data volumes

Country Status (1)

Country Link
CN (1) CN115328758B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11829745B2 (en) * 2021-09-20 2023-11-28 Salesforce, Inc. Augmented circuit breaker policy
CN116185846A (en) * 2023-01-13 2023-05-30 企知道科技有限公司 Interface performance test method and device, electronic equipment and storage medium
CN116070046B (en) * 2023-02-17 2024-10-18 贝壳找房(北京)科技有限公司 Project test information display method and device, electronic equipment and storage medium
CN116303011A (en) * 2023-03-14 2023-06-23 湖南快乐阳光互动娱乐传媒有限公司 A pressure measurement method and device
CN117176611B (en) * 2023-10-30 2024-01-30 翼方健数(北京)信息科技有限公司 Performance test method, system and medium of distributed communication library
CN117785643B (en) * 2024-02-23 2024-05-14 广州飞进信息科技有限公司 Performance test platform for software development

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461856A (en) * 2013-09-22 2015-03-25 阿里巴巴集团控股有限公司 Performance test method, device and system based on cloud computing platform
CN112486791A (en) * 2020-12-14 2021-03-12 政采云有限公司 Performance test method, device and equipment of server cluster

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506807B (en) * 2021-02-07 2021-05-11 上海洋漪信息技术有限公司 Automatic test system for interface serving multiple systems

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461856A (en) * 2013-09-22 2015-03-25 阿里巴巴集团控股有限公司 Performance test method, device and system based on cloud computing platform
CN112486791A (en) * 2020-12-14 2021-03-12 政采云有限公司 Performance test method, device and equipment of server cluster

Also Published As

Publication number Publication date
CN115328758A (en) 2022-11-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: No. 309 Liuhe Road, Binjiang District, Hangzhou City, Zhejiang Province (High tech Zone)

Applicant after: Zhongkong Technology Co.,Ltd.

Address before: No. six, No. 309, Binjiang District Road, Hangzhou, Zhejiang

Applicant before: ZHEJIANG SUPCON TECHNOLOGY Co.,Ltd.

Country or region before: China

GR01 Patent grant