A clear trend today is the growth of information flows within companies. It is driven by many factors: a growing number of users, more automated business processes, and an overall increase in the number of business units (branches, trading floors, additional offices, warehouses). Users need continuous access to up-to-date information, because the quality of business decisions, and ultimately the profitability of the company, depends on it. In practice, continuous access is not always achieved. The causes range from software malfunctions and coding bugs to hardware failures and heavy scheduled workloads. In a 1C 8.x system, for example, such workloads include month-end closing, cost price calculation, payroll calculation, and sales and promotion processing.
How do we identify the causes of performance problems and solve them as quickly as possible?
The first step is to collect data about the performance of the information system with appropriate tools. In our case we use the PerfExpert performance monitor, which integrates closely with information systems such as 1C 7.7, 1C 8.1, 1C 8.2, and 1C 8.3 and is used to find bottlenecks ("narrow places") in performance. It collects large volumes of data from various sources: Windows performance counters, MS SQL Server, SQL Server load by process, heavy or suboptimal SQL queries, and load per user session. All of this data then needs to be grouped and ranked by importance.
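To give an idea of what that grouping and ranking step looks like, here is a minimal Python sketch. It assumes the monitoring data has been exported to a hypothetical CSV file (counters.csv with source, metric, and value columns); PerfExpert's actual export format may differ.

```python
import csv
from collections import defaultdict

def rank_metrics(path):
    """Group exported samples by (source, metric) and rank them by total value."""
    totals = defaultdict(float)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Each row: source (e.g. "OS", "MS SQL"), metric name, numeric sample value.
            key = (row["source"], row["metric"])
            totals[key] += float(row["value"])
    # Rank the grouped metrics from heaviest to lightest.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for (source, metric), total in rank_metrics("counters.csv")[:10]:
        print(f"{source:10} {metric:40} {total:12.2f}")
```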
In our practice, analysis of the collected data, and, just as important, its visual presentation, makes it possible to spot patterns of behavior, which in turn lets us find the cause of failures.
The first example: a client complained that the information system hung when the "Expense Invoice" document form was opened. We analyzed the code with the built-in debugger and found no problems. Synthetic testing almost never reproduced the issue, so we wrote a process that opened the form at regular intervals and logged how long each opening took. Fewer than 2% of the openings were unacceptably slow, but as soon as we plotted the data in an Excel chart we noticed an interesting regularity.
Only a small fraction of the form openings were much slower than usual, but those slowdowns recurred with almost exactly the same periodicity. This made it possible to find the cause of the slow performance: a background task that runs every 2 minutes. With this example we would like to show how important it is to work with statistical information correctly and efficiently.
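The same regularity can also be found programmatically rather than by eye. The following Python sketch uses made-up, illustrative data in place of the real log: it takes a list of form-opening timestamps and durations, picks out the abnormally slow ones, and prints the intervals between them; a near-constant interval (here roughly 120 seconds) points to a periodic background task.

```python
from datetime import datetime, timedelta

# Illustrative log: (timestamp, opening duration in seconds).
# In the real case this would come from the test process that opened the form repeatedly.
start = datetime(2017, 3, 1, 10, 0, 0)
log = [(start + timedelta(seconds=5 * i), 0.4) for i in range(200)]
for i in range(0, 200, 24):           # roughly every 120 s one opening is slow
    log[i] = (log[i][0], 6.0)

THRESHOLD = 2.0                        # seconds; anything slower is "unacceptably long"
slow = [ts for ts, dur in log if dur > THRESHOLD]

# Intervals between consecutive slow openings; a stable value suggests a periodic cause.
intervals = [(b - a).total_seconds() for a, b in zip(slow, slow[1:])]
print("slow openings:", len(slow), "of", len(log))
print("intervals between slow openings (s):", intervals[:5], "...")
```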
The second example: analyzing and solving performance problems reported by end users. Users complain about many kinds of operations being slow: posting documents, running reports, data processing, messages about locks, or interface forms of the information system simply hanging. In our analysis we certainly take the users' reports into account, but first of all we rely on the real statistics collected from the information system and the database servers.
For example, an end user complains that report X takes too long to run.
The first instinct is to analyze the information system code in which the report is written. Instead, we recommend first looking at the performance monitoring data and determining whether server resources (usually on the database server) were scarce at the time the report was requested. If there was a shortage of resources, then rather than analyzing the report's implementation code, analyze the reason for the shortage: which SQL commands or queries consumed the server resources. In most cases these SQL queries are not directly related to report X at all; they are external operations such as queries behind screen forms, interface procedures, and online reports, and those are what need to be optimized.
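The "was there a resource shortage while the report ran?" check can be expressed very simply. Below is a minimal Python sketch under stated assumptions: the report's start and end times are known, the CPU counter samples come from the monitoring data, and both the data and the 80% threshold are hypothetical, chosen only for illustration.

```python
from datetime import datetime

def cpu_during(samples, start, end):
    """Average CPU utilisation (%) over the interval in which the report ran.

    `samples` is a list of (timestamp, cpu_percent) pairs from the monitoring data."""
    window = [cpu for ts, cpu in samples if start <= ts <= end]
    return sum(window) / len(window) if window else None

# Hypothetical data: when report X ran and what the database server CPU counter showed.
report_start = datetime(2017, 3, 1, 11, 40, 0)
report_end = datetime(2017, 3, 1, 11, 48, 0)
cpu_samples = [(datetime(2017, 3, 1, 11, m, 0), cpu)
               for m, cpu in [(38, 35), (40, 92), (42, 95), (44, 97),
                              (46, 90), (48, 88), (50, 30)]]

avg = cpu_during(cpu_samples, report_start, report_end)
if avg is None:
    print("No monitoring samples for the report interval")
elif avg > 80:
    print(f"CPU averaged {avg:.0f}% while the report ran: "
          f"find out which SQL queries consumed it, not the report code itself")
else:
    print(f"No CPU shortage (average {avg:.0f}%): analyse the report implementation instead")
```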
To find such SQL queries we use the query trace form in PerfExpert. In our trace, the query in the first line accounts for 13.19% of the total CPU load on the MS SQL database server, and the query in the second line for another 9.42%. If there was not enough CPU for the user operation to run quickly, it makes sense to optimize these first two queries, since together they consume almost 23% of the total CPU load. The same applies to disk and memory: the analysis is done in the same way.
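The ranking itself is just an aggregation of trace rows by query text and a share-of-total calculation. Here is a minimal Python sketch; the trace row structure, the query texts, and the CPU figures are all hypothetical stand-ins, not PerfExpert's real export format.

```python
from collections import defaultdict

def top_queries_by_cpu(trace_rows, top_n=2):
    """Group trace rows by query text and return each query's share of total CPU."""
    cpu_by_query = defaultdict(float)
    for row in trace_rows:
        cpu_by_query[row["query"]] += row["cpu_ms"]
    total = sum(cpu_by_query.values()) or 1.0
    ranked = sorted(cpu_by_query.items(), key=lambda kv: kv[1], reverse=True)
    return [(q, cpu, 100.0 * cpu / total) for q, cpu in ranked[:top_n]]

# Hypothetical trace: two heavy queries plus many small ones.
trace = (
    [{"query": "SELECT ... FROM _Document123 ...", "cpu_ms": 13190},
     {"query": "SELECT ... FROM _InfoRg456 ...",   "cpu_ms": 9420}]
    + [{"query": f"-- smaller query #{i}", "cpu_ms": 860} for i in range(90)]
)

for query, cpu, share in top_queries_by_cpu(trace):
    print(f"{share:5.1f}%  {cpu:8.0f} ms  {query}")
print("top two together, % of total CPU:",
      round(sum(s for _, _, s in top_queries_by_cpu(trace)), 1))
```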
We hope this material helps illustrate the importance of collecting and analyzing statistical information.