Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a large-data-volume performance test method and system for industrial software, which quickly create large volumes of test data for the different business scenarios of the industrial software, improving efficiency compared with manually writing SQL statements. Meanwhile, the test results and the hardware resources of the tested server are monitored in real time, and when the preset performance indexes or server resource limits are not met during the test, the testers are promptly notified and warned.
The first aspect of the invention provides a large-data-volume performance test method for industrial software, comprising the steps of deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information;
Based on a preset performance test scenario and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, collecting the API request information generated by the access with the API request capture tool, and generating a performance test script; deploying a script management center tool, wherein the script management center tool at least comprises a test case maintenance module used for importing a plurality of test cases, a functional module used for importing the performance test script, and data test scripts of the test cases corresponding to different test scenarios, the script management center parses the imported test cases through the test case module and stores the parsed content into a case maintenance table, and the data test script at least comprises the script required for connecting to a database; deploying a test management center tool, wherein the test management center tool at least comprises a message management center module used for sending message tasks to the script management center and starting the large-data-volume performance test; the performance test script calls the data test script, and the data test script inserts simulation data of a preset data volume into the database based on the state mark of the case maintenance table corresponding to the test case, the generated simulation data being used for the large-data-volume performance test; the message management center module receives the execution result of the data test script and starts the large-data-volume performance test task; and starting test result collection and analyzing the test results.
Further, the performance test script calls the data test script, and the data test script inserts simulation data of the preset data volume into the database based on the state mark of the test case maintenance table; this specifically comprises the steps that after receiving a large-data-volume performance test request from the message management center module, the script management center tool starts to execute the test case and calls the data test script through the performance test script; the data test script judges whether data of the preset data volume need to be inserted based on the state mark of the test case maintenance table; if the state mark indicates that the data for the performance test already exist, the data do not need to be inserted again, and if the state mark indicates that the data for the performance test do not exist, data of the preset data volume are inserted.
Inserting the data specifically comprises the steps of setting the database information and table names in the data test script, initializing the database connection, obtaining a cursor object capable of executing SQL statements and calling the database insert statement, writing the inserted data in the form of random variables and inserting them in a large-scale loop, sending an insertion completion message to the message management center after execution of the preset data volume is completed, and returning the result set, which is displayed as a tuple.
Further, the method also comprises adding the identification of the interface sending the API request and the identification of the testing step to the performance test script and storing the performance test script.
Further, starting test result collection and analyzing the test results specifically comprises storing the pressure test result data of each user server and the resource usage data of the server on which the industrial software to be tested is deployed into a time-series database in time order, wherein each piece of data at least comprises a timestamp, sending the data at fixed intervals, and dynamically displaying the pressure test result data and server resource usage data in a visual view on a Web page.
Further, if the test result data contain errors or the server resource usage data exceed a preset threshold, a receiver set in the configured test parameters is notified through the message middleware, and the pressure test result data and server resource usage data can be accessed and checked through the Web page of the test management center tool.
Furthermore, the large-data-volume performance test method for industrial software further comprises collecting, storing and filtering logs based on a preset log level, and displaying the logs by means of graphical programming, so that problems can be rapidly located.
The test management center tool further comprises a system configuration module, a resource file module, a task scheduling module, a monitoring module, a data analysis module, a data display module and a log management module. The system configuration module is used for configuring the operation parameters and test environment of the tested server and the test execution machine; the resource file module is used for configuring the test scripts; the task scheduling module is used for configuring the task name, selecting the test script to be executed and setting its execution time and frequency, and for starting the performance test and executing the test script after the task is constructed and executed; the monitoring module is used for monitoring the running conditions of the tested server and the test execution machine pool and collecting their running data; the data analysis module is used for analyzing and summarizing the performance test result indexes and server resource usage in a Python programming mode; the data display module is used for visually and dynamically displaying, in real time, the performance test result indexes and server resource usage set by the user; the message management center module is also used for configuring the relevant notification personnel, notification modes and notification frequencies and for promptly notifying the relevant responsible persons through the message middleware; and the log management module is used for collecting the log files generated during execution by the test execution machine pool.
Further, the data test script is a Python script, and the performance test script is a JMeter script.
The invention also provides a performance test system for running the performance test method, which at least comprises a script management center, a real-time monitoring center and a message management center, wherein the script management center is used for adapting to different service scenarios by running the corresponding scripts and creating test data, the real-time monitoring center is used for acquiring, in real time, the performance test results and server resource information after the scripts are run and sending them to the message management center, and the message management center feeds back the test results based on the preset thresholds, the performance test results and the server resource information.
The beneficial effects of the invention are as follows:
1. For the pain points of complex service scenarios, large data volumes and high timeliness requirements in the field of industrial software performance testing, different scripts are called to quickly generate large amounts of database data;
2. The performance test results and server resources are monitored in real time during the performance test, and messages are actively and promptly sent to the user so that adjustments can be made quickly; the performance test method is highly adaptable, well targeted, fast and efficient in the industrial software field;
3. Meanwhile, by configuring a timing task, the test results can be sent to the analysis device in real time; the analysis device analyzes the results and sends the analysis results to the user through the configured mail server, truly realizing an unattended performance test process.
Detailed Description
API: application programming interface (Application Programming Interface, abbreviated as API), a convention by which the different components of a software system are joined.
Test case: a specific set of input data, operations or environment settings and expected results provided to the system under test for the purpose of conducting a test;
Test script: a script written for automated testing and corresponding to a test case;
Fiddler: a web debugging proxy tool that can not only capture HTTP traffic between a computer (or even a mobile phone) and the internet, but also inspect it for analysis.
JMeter: software for testing client/server architectures (for example, web applications); it can be used to test the performance of static and dynamic resources.
The invention will now be described in further detail with reference to the drawings and specific examples, which are given by way of illustration only and are not intended to limit the scope of the invention, in order to facilitate a better understanding of the invention by those skilled in the art.
The invention discloses a large-data-volume performance test method for industrial software. As shown in fig. 1, which is a flow diagram of the method, it specifically comprises the steps of deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information; based on preset performance test scenarios and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, collecting the API request information generated by the access with the API request capture tool, and generating a performance test script; deploying a script management center tool, wherein the script management center tool at least comprises a test case maintenance module for importing a plurality of test cases, a functional module for importing the performance test script, and data test scripts of the test cases corresponding to different test scenarios, the script management center parses the imported test cases through the test case module and stores the parsed content into a case maintenance table, and the data test script at least comprises the scripts required for connecting to a database; deploying a test management center tool, wherein the test management center tool at least comprises a message management center module, the message management center module is used for sending message tasks to the script management center and starting the large-data-volume performance test, and after the script management center tool imports the performance test script and the message management center receives a message, a scheduling task is sent to the script management center; the performance test script calls the data test script, the data test script inserts simulation data of a preset data volume into the database based on the state mark of the case maintenance table corresponding to the test case, and the generated simulation data is used for the large-data-volume performance test; the message management center module receives the execution result of the data test script and starts the large-data-volume performance test task; and starting test result collection and analyzing the test results.
The following describes the steps taking a large-data-volume performance test of industrial software used in the pharmaceutical industry as an example.
S1, deploying an API request capture tool, wherein the API request capture tool is used for collecting API request information.
In an embodiment of the present invention, Fiddler is employed as the API request capture tool. Fiddler is downloaded and installed, and is started when the user runs the test, thereby realizing the API request capture tool.
S2, based on a preset performance test scenario and performance test requirements, simulating a plurality of users accessing the corresponding service functions through a browser, and collecting the API request information generated by the access with the API request capture tool so as to generate a performance test script;
A user is simulated to access the system under test, namely the industrial software to be tested, through the browser according to the application scenario of the industrial software and the corresponding performance test requirements. The simulated user sends API requests to the system under test through the browser.
For example, when multiple users simultaneously request a certain service function, Fiddler is installed and started, the simulated users access the system under test through the browser, and the flow is schematically shown in fig. 2. Fiddler intercepts the API requests sent by the simulated users, collects all API request information generated during operation of the Web product, and stores the API request information to form a JMeter format script.
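In the described flow the simulated users work through a browser while Fiddler captures the traffic; purely as an illustration of what concurrent access by several simulated users looks like, the following Python sketch calls a service interface from many users at once. The endpoint URL and the third-party requests package are assumptions and are not part of the browser-plus-Fiddler approach described above.

import concurrent.futures
import requests

SERVICE_URL = "http://system-under-test.example.com/api/process-data"  # hypothetical endpoint of the tested service

def simulated_user(user_id):
    # each simulated user sends an API request to the system under test
    response = requests.get(SERVICE_URL, params={"user": user_id}, timeout=10)
    return response.status_code

# run, for example, 50 virtual users concurrently
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    status_codes = list(pool.map(simulated_user, range(50)))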
In some embodiments, the method further comprises adding the identification of the interface sending the API request and the identification of the testing step to the performance test script: a unique ID is generated according to the timestamp at which the request is sent, a value is produced by intercepting the request path, and the request identification is added according to the ID and the value.
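A minimal sketch of this labelling, assuming the request URL is available as a string; the function name and the exact form of the identification are illustrative only.

import time
from urllib.parse import urlparse

def build_request_identification(url):
    # unique ID generated from the timestamp at which the request is sent
    unique_id = str(int(time.time() * 1000))
    # value produced by intercepting the request path
    path_value = urlparse(url).path.strip("/").replace("/", "_")
    return unique_id + "_" + path_value

# for example: build_request_identification("http://host/lims/process/query")
# might return "1692870000000_lims_process_query"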
S3, deploying a script management center tool, wherein the script management center tool at least comprises a test case maintenance module for importing a plurality of test cases, a functional module for importing the performance test script, and data test scripts of the test cases corresponding to different test scenarios; the script management center parses the imported test cases through the test case module and stores the parsed contents into the case maintenance table, and the data test scripts connect to a database.
The script management center is implemented in python; the python script files are written and then imported into the script management center, which parses the imported test cases and manages them. A schematic of the operation of the script management center tool is shown in fig. 3. The script management center imports the performance test script; the message management center module receives a message and then sends the task to the script management center; the script management center starts the task, executes the performance test script and judges the status mark of the test case; if the status mark of the test case is False, the data test script is called to produce a large amount of test data; after execution is completed, a message is sent to the message management center module and the large-data-volume test is started.
In some embodiments, the script management center has an import button to import performance test scripts, i.e., to import test cases. The script management center maintains and stores the test cases in an Oracle data table, and the table creation statement is as follows:
CREATE TABLE "TEST_CASES" (
    "ID" NUMBER(20,0) NOT NULL ENABLE,
    "CASE_NAME" VARCHAR2(256) NOT NULL ENABLE,
    "PYCASENAME" VARCHAR2(20) NOT NULL ENABLE,
    "OPERATE" VARCHAR2(256) NOT NULL ENABLE,
    "STATUS" NUMBER(1,0) DEFAULT 0,
    "CREATOR" VARCHAR2(64),
    "MODIFY_TIME" TIMESTAMP(6) DEFAULT NULL,
    PRIMARY KEY ("ID")
);
ID: the unique identifier of the script use case table.
CASE_NAME: the custom name of the test case, such as test case 1 or test case 2.
PYCASENAME: the python script file corresponding to the use case.
STATUS: a valid use case is identified by True, an invalid use case by False.
OPERATE: operations are stored in the database as 0, 1 and 2, where 0 represents editing, 1 represents execution and 2 represents deletion.
CREATOR: the creator of the use case.
MODIFY_TIME: the modification time of the use case.
When a certain test case is executed, the corresponding python script file is run, and data can be generated by entering a start value and an end value through the interface, so that different service scenarios can be covered simply by writing several python scripts implementing those scenarios. Writing a python script file first requires creating a use case maintenance directory, for example D:\datatest\case\test_process_type.py; a schematic diagram of the use case maintenance table of this embodiment is shown in FIG. 4.
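The dispatch logic described above can be sketched as follows, assuming the TEST_CASES table shown earlier and the third-party cx_Oracle package; the connection string, the notification stub and the start/end values are illustrative assumptions rather than the actual implementation.

import os
import subprocess
import cx_Oracle

CASE_DIR = r"D:\datatest\case"   # use case maintenance directory

def notify_message_center(case_id, text):
    # placeholder for sending a message to the message management center module
    print("case", case_id, ":", text)

def execute_case(case_id, start_value, end_value):
    conn = cx_Oracle.connect("user/password@host/service")   # hypothetical connection string
    cur = conn.cursor()
    cur.execute("SELECT PYCASENAME, STATUS FROM TEST_CASES WHERE ID = :1", [case_id])
    py_case_name, status = cur.fetchone()
    if not status:   # status is False: performance test data does not exist yet
        # run the corresponding python script file with start and end values to generate data
        subprocess.run(["python", os.path.join(CASE_DIR, py_case_name),
                        str(start_value), str(end_value)], check=True)
        cur.execute("UPDATE TEST_CASES SET STATUS = 1 WHERE ID = :1", [case_id])
        conn.commit()
    cur.close()
    conn.close()
    notify_message_center(case_id, "test data ready, large-data-volume test can start")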
S4, deploying a test management center tool, wherein the test management center at least comprises a message management center module, and the message management center module is used for sending message tasks to the script management center and starting the large-data-volume performance test.
In some embodiments, the test management center tool comprises several modules. The system configuration module is mainly used for configuring the tested server, the operation parameters of the test execution machine and the test environment. The resource file module is mainly used for configuring the test scripts. The task scheduling module is mainly used for configuring the task name, selecting the test script to be executed, and setting the execution time and frequency; after the task is constructed and executed, the performance test is started and the test script is executed. The monitoring module is mainly used for monitoring the running conditions of the tested server and the test execution machine pool and collecting their running data. The data analysis module is used for analyzing and summarizing the performance test result indexes and server resource usage in a Python programming mode. The data display module is used for dynamically displaying, in a visual form and in real time, the performance test result indexes and server resource usage set by the user. The message management center module is mainly used for configuring the relevant notification personnel, notification modes and notification frequencies, and for promptly notifying the relevant responsible persons and the script management center to start tasks through the message middleware. The log management module is mainly used for collecting the log files generated during execution by the test execution machine pool.
The test management center tool is accessible through a Web page.
A plurality of test cases are imported in the case maintenance module of the script management center, one test case corresponding to one performance test script, and the user can directly access the Web interface to import the performance test script into the resource file. In the system settings of the system under test, the number of virtual users for each scenario, the user think time and the test result save path are set and modified in the test parameters; in threshold management, thresholds for the test server's CPU, memory, IO and network card usage, the test error rate threshold, the log level, and the display conditions of the various test result indexes and server resource usage are added. A timing task is configured in task scheduling, a trigger is constructed to execute the timing task, the post-construction operations are configured, and the test report template and mail receiver information are configured; the test report template supports both a default template and user-defined settings, thereby realizing unattended operation.
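An illustrative shape of such a test parameter configuration is sketched below in Python; every key and value here is an assumption used for explanation and not the tool's actual configuration format.

test_parameters = {
    "virtual_users_per_scene": {"query_process_data": 200, "insert_process_data": 100},
    "think_time_seconds": 3,
    "result_save_path": r"D:\perf\results",
    "thresholds": {
        "cpu_percent": 80,        # tested server resource thresholds
        "memory_percent": 85,
        "io_percent": 70,
        "network_percent": 70,
        "error_rate": 0.01,       # test error rate threshold
    },
    "log_level": "INFO",
    "schedule": {"start_time": "02:00", "frequency": "daily"},   # timing task
    "report": {"template": "default", "mail_receivers": ["tester@example.com"]},
}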
Configuring the timing task can trigger the interface to send tasks, that is, the test results can be sent at regular times to the collecting device that gathers the test results.
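A minimal sketch of such a timing task is given below; the collector endpoint and the third-party requests package are assumptions used for illustration.

import time
import requests

COLLECTOR_URL = "http://collector.example.com/api/results"   # hypothetical collecting device endpoint

def send_results_periodically(get_latest_results, interval_seconds=60):
    # periodically push the latest pressure test results to the collecting device
    while True:
        payload = get_latest_results()
        requests.post(COLLECTOR_URL, json=payload, timeout=10)
        time.sleep(interval_seconds)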
S5, the performance test script calls the data test script, the data test script inserts simulation data of the preset data volume into the database based on the state mark of the case maintenance table corresponding to the test case, and the generated simulation data is used for the large-data-volume performance test.
When a certain test case is executed, that is, after a large-data-volume performance test is started, the corresponding python script file is run and data are inserted into the database; the data are written in the form of random variables and inserted cyclically, so that batch test data are generated efficiently. In the process of inserting data, whether data need to be inserted is judged according to the status field of the test case maintenance table: if the data already exist, they do not need to be inserted again; if they do not exist, preparation for inserting them begins.
Taking an Oracle database connected in the data test script as an example, to create one million records in the Oracle database we can define a table object class, initialize the database connection with cx_Oracle and obtain a cursor object capable of executing SQL statements; the result set returned after execution is displayed as a tuple by default.
In some embodiments, for example when creating a time field, we can set a string in time format and automatically subtract one day on each cycle of insertion by writing dt = dt + datetime.timedelta(days=-1); when the time field needs data fixed within a certain time period, the time can be converted into a corresponding timestamp, for example modify_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(1585497600 - random.randrange(1, 1000000))), and the data are then inserted in batches with an insert into <table name> values (<placeholders>) statement through the executemany method.
Another business scenario uses different fields, such as an ID field, where the ID is set to str(i) in the loop, and an input_key field which we set to input_key = 'LIMS:ammonia nitrogen content' + shift_date.
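Putting these pieces together, a minimal data test script might look as follows; it assumes the third-party cx_Oracle package, a hypothetical table TEST_PROCESS_DATA(ID, INPUT_KEY, MODIFY_TIME) and illustrative connection details, and approximates shift_date with the date part of the generated timestamp.

import random
import time
import cx_Oracle

def insert_simulation_data(total=1000000, batch=10000):
    conn = cx_Oracle.connect("user/password@host/service")   # hypothetical connection string
    cur = conn.cursor()
    sql = ("INSERT INTO TEST_PROCESS_DATA (ID, INPUT_KEY, MODIFY_TIME) "
           "VALUES (:1, :2, TO_DATE(:3, 'YYYY-MM-DD HH24:MI:SS'))")
    for start in range(0, total, batch):
        rows = []
        for i in range(start, min(start + batch, total)):
            # timestamp fixed within a chosen time period, as described above
            modify_time = time.strftime('%Y-%m-%d %H:%M:%S',
                                        time.localtime(1585497600 - random.randrange(1, 1000000)))
            input_key = 'LIMS:ammonia nitrogen content' + modify_time[:10]   # stands in for shift_date
            rows.append((str(i), input_key, modify_time))
        cur.executemany(sql, rows)   # batch insert through the executemany method
        conn.commit()
    cur.close()
    conn.close()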
S6, the message management center module receives the execution result of the data test script and starts a performance test task with large data volume;
S7, starting test result collection and analyzing the test results.
When the performance test task is started, the result collection device and the server monitoring device are started, wherein the server monitoring device monitors the hardware resources of the tested server and the result collection device collects the performance test results. In one embodiment of the invention, configuring timing tasks triggers the interface to send tasks, and the test results are sent to the test result collecting device at fixed times. The pressure test results of each server and the server resource usage are collected and stored in a time-series database in time order, each piece of data carrying a timestamp, and the collected test results are integrated and then sent to the analysis device. The analysis device gathers the collected test result data and server resource usage data, analyzes them, and dynamically displays, in a visual form and in real time, the various performance test result indexes and server resource usage set by the user. When the test result error rate or the server resource utilization exceeds the set threshold (the flow is shown in fig. 5), the relevant responsible person is informed in time through the message middleware, and the relevant responsible person can access and check the historical test index data and server resource monitoring data through the Web page for inspection and analysis.
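A minimal sketch of the threshold check is given below, assuming the third-party psutil package for sampling server resources and a placeholder notify() standing in for the message middleware; the threshold values are illustrative.

import time
import psutil

CPU_THRESHOLD = 80.0             # percent, illustrative values from the threshold configuration
MEM_THRESHOLD = 85.0
ERROR_RATE_THRESHOLD = 0.01

def notify(message):
    # placeholder for the message middleware call that alerts the configured responsible person
    print("ALERT:", message)

def monitor_once(current_error_rate):
    sample = {
        "timestamp": int(time.time()),             # every stored record carries a timestamp
        "cpu": psutil.cpu_percent(interval=1),
        "mem": psutil.virtual_memory().percent,
    }
    # here the sample would be written to the time-series database
    if sample["cpu"] > CPU_THRESHOLD or sample["mem"] > MEM_THRESHOLD:
        notify("server resource usage exceeds threshold: " + str(sample))
    if current_error_rate > ERROR_RATE_THRESHOLD:
        notify("test error rate exceeds threshold: " + str(current_error_rate))
    return sample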
The user can thus learn the real-time performance test results immediately and intervene manually in time, avoiding the waste of time and resources between the occurrence of an error and the end of the performance test, and ensuring performance test efficiency.
S8, collecting, storing and filtering logs based on the preset log level, and displaying the logs through Python graphical programming, so that problems can be rapidly located.
In some embodiments, the log management module of the script management center is used for collecting the log information generated by the test execution machine pool; the optional log levels are DEBUG < INFO < WARNING < ERROR < FATAL. Corresponding files are generated by date and stored on the hard disk; log analysis can parse the log files of a given date and search them according to keywords entered by the user, so that quick locating can be realized.
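A minimal sketch of such log handling with the standard logging package is shown below; the file names, rotation policy and search helper are assumptions used for illustration.

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger("executor_pool")
logger.setLevel(logging.INFO)        # preset log level, DEBUG < INFO < WARNING < ERROR < FATAL
handler = TimedRotatingFileHandler("executor.log", when="midnight", backupCount=30)
handler.suffix = "%Y-%m-%d"          # one log file is generated per date and stored on disk
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

def search_log(path, keyword):
    # return the lines of a date file that contain the keyword entered by the user
    with open(path, encoding="utf-8") as f:
        return [line for line in f if keyword in line]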
S9, configuring a mail service center and sending the test report results to different users.
In some embodiments, the method further comprises configuring the mail service center to send test report results to different users.
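A minimal sketch of sending such a report with the standard smtplib and email packages follows; the server address, sender account and receivers are assumptions standing in for the configured mail service center.

import smtplib
from email.mime.text import MIMEText

def send_report(report_html, receivers):
    msg = MIMEText(report_html, "html", "utf-8")
    msg["Subject"] = "Performance test report"
    msg["From"] = "perf-test@example.com"                  # hypothetical sender account
    msg["To"] = ", ".join(receivers)
    with smtplib.SMTP("smtp.example.com", 25) as server:   # hypothetical mail server
        server.login("perf-test@example.com", "password")
        server.send_message(msg)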
The invention provides a large-data-volume performance test system for industrial software, which at least comprises a script management center, a real-time monitoring center and a message management center, wherein the script management center is used for adapting to different service scenarios by running the corresponding scripts and creating test data, the real-time monitoring center is used for acquiring, in real time, the performance test results and server resource information after the scripts are run and sending them to the message management center, and the message management center feeds back the test results based on the preset thresholds, the performance test results and the server resource information.
It should be noted that in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, a single step described in this specification may be described as being split into multiple steps in other embodiments, while multiple steps described in this specification may be described as being combined into a single step in other embodiments.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, with reference to the description of the method embodiment being made in part. The system and system embodiments described above are merely illustrative, and some or all of the modules may be selected according to actual needs to achieve the objectives of the present embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.