
Ad-Hoc Monitoring of Db2 Advanced Log Space Management

Introduction

Advanced Log Space Management (ALSM) can help avoid transaction log full situations in cases where applications rarely commit and hold on to the oldest log file for a long time. You might want to monitor ALSM from time to time. For this purpose, various metrics were introduced with IBM Db2 11.5, for example LOG_EXTRACTION_STATUS in the table function MON_GET_TRANSACTION_LOG. In addition to these metrics, database configuration parameters like LOGPRIMARY are important to get the complete picture of Db2 logging.
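
If you only want a quick check whether log extraction is currently active, a minimal query against MON_GET_TRANSACTION_LOG is sufficient. The following is just a sketch that selects a few of the extraction-related monitor elements:

select MEMBER, LOG_EXTRACTION_STATUS, LOG_EXTRACTION_PROCESSED_BYTES, LOG_EXTRACTION_WRITTEN_BYTES
from table(MON_GET_TRANSACTION_LOG(-1));

A LOG_EXTRACTION_STATUS of 2 means that extraction is active; 0 means not applicable.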

As an SAP customer, you can use the DBA Cockpit to monitor advanced log space management (see also the blog post Using the DBA Cockpit for Monitoring Advanced Log Space Management). However, for ad hoc monitoring, it’s also easy to execute an SQL statement from the command line that provides all the information needed to assess the status of Db2 logging and ALSM.

The SQL Statement

For monitoring ALSM, I created an SQL statement that returns the following result:

NUMBER      PARAMETER                                VALUE                  UNIT
----------- ---------------------------------------- ---------------------- --------------
          1 TOTAL_LOG_AVAILABLE                                      380.26 MB
          2 TOTAL_LOG_USED                                          1174.63 MB
          3 total_log                                               1554.89 MB
          4 log_used                                                  75.54 Pct
          5 LOGFILSIZ                                                156.25 MB
          6 LOGPRIMARY                                                10.00 LogFiles
          7 log_primary                                             1562.50 MB
          8 LOGSECOND                                                  0.00 LogFiles
          9 log_secondary                                              0.00 MB
         10 SEC_LOGS_ALLOCATED                                         0.00 LogFiles
         11 TOT_LOG_USED_TOP                                       12168.68 MB
         12 SEC_LOG_USED_TOP                                       10613.81 MB
         13 log_primary_used_top                                    1554.87 MB
         14 LOG_EXTRACTION_DISK_SPACE_USED_TOTAL                       0.02 MB
         15 LOG_EXTRACTION_DISK_SPACE_USED_TOTAL_TOP                   0.14 MB
         16 LOG_EXTRACTION_PROCESSED_BYTES                         10750.60 MB
         17 LOG_EXTRACTION_WRITTEN_BYTES                               0.00 MB
         18 LOG_EXTRACTION_WRITTEN_BYTES                               2.40 KB
         19 data_extracted                                             0.00 Pct
         20 LOG_EXTRACTION_ROLLBACK_READS                              0.00 LogRecords
         21 LOG_EXTRACTION_CUR_COMMIT_READS                            0.00 LogRecords
         22 FIRST_ACTIVE_LOG                                          78.00 LogFileNumber
         23 CURRENT_ACTIVE_LOG                                        92.00 LogFileNumber
         24 LAST_ACTIVE_LOG                                           92.00 LogFileNumber
         25 LOG_EXTRACTION_STATUS                                      2.00 0=n/a,2=active
         26 MAX_LOG                                                    0.00 Pct
         27 NUM_LOG_SPAN                                               0.00 LogFiles

Parameters in upper case come from the Db2 database configuration or from the table function MON_GET_TRANSACTION_LOG. In addition, the unit of various parameters is converted to MB to make calculations easier.

Parameters in lower case are computed values based on parameters in upper case. Column NUMBER is only used to order the result. Column UNIT specifies the unit of the value. For LOG_EXTRACTION_STATUS, UNIT specifies the meaning of the value. In the above example, LOG_EXTRACTION_STATUS=2 means that ALSM is active.
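
As an illustration of how the computed values are derived, here is a minimal sketch of the calculation behind log_used. Unlike the full statement below, it does not handle special cases such as TOTAL_LOG_AVAILABLE = -1:

-- percentage of used log space; NULLIF avoids a division by zero
select cast( cast( TOTAL_LOG_USED as float ) * 100 /
             nullif( TOTAL_LOG_AVAILABLE + TOTAL_LOG_USED, 0 ) as decimal(20,2) ) as LOG_USED_PCT
from table(MON_GET_TRANSACTION_LOG(-1));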

The scenario

The following diagram shows the scenario that I used to get the result above. In parallel to a frequently committing workload (shown in orange), two transactions NC1 and NC2 were executed. They updated just a few rows, but did not commit for a long time and committed only at the very end. This type of transaction is the primary use case for ALSM.

[Figure: ALSM scenario – frequently committing workload (orange) running in parallel to the long-running transactions NC1 and NC2]

The workload started after I created the database. The current write log was log file number 0. Transaction NC1 updated a couple of rows and stayed open. The workload that was executed in parallel filled up log file after log file.

LOGPRIMARY was set to 10 log files. Without ALSM, the workload would trigger error message SQL0964C “The transaction log for the database is full” after log files 0 to 9 had become full. With ALSM, an extraction process started and extracted the log records of transaction NC1. This enabled Db2 to release log file 0, which held the log records of the oldest open transaction, NC1.
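
To see how much primary log space is configured on your own system, you can compute it from the database configuration, just like the full statement does below. This sketch assumes a single database member and that LOGFILSIZ is given in 4 KB pages:

-- configured primary log space in MB: LOGPRIMARY * LOGFILSIZ * 4 KB
select cast( p.value * f.value * 4 / 1024 as decimal(20,2) ) as LOG_PRIMARY_MB
from ( select cast( VALUE as decimal(20,2) ) as value from SYSIBMADM.DBCFG where name = 'logprimary' ) as p,
     ( select cast( VALUE as decimal(20,2) ) as value from SYSIBMADM.DBCFG where name = 'logfilsiz' ) as f;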

The workload continued, and transaction NC1 committed when the current write log was number 78. A transaction of the same type, NC2, started right after NC1. Transaction NC2 committed when the current write log was number 92.

The result of the monitoring SQL shows the situation just before transaction NC2 committed.

Explanation of the results

As you can see, LOGPRIMARY is set to 10, which results in log_primary = 1562.50 MB. ALSM processed LOG_EXTRACTION_PROCESSED_BYTES = 10750.60 MB but extracted only LOG_EXTRACTION_WRITTEN_BYTES = 2.40 KB. Note that LOG_EXTRACTION_WRITTEN_BYTES appears twice in the result, once in MB and once in KB.

You can also see that LOG_EXTRACTION_STATUS is 2, which means active, and you probably noticed that SEC_LOG_USED_TOP is not 0 although LOGSECOND is 0. This might be confusing, but it is correct if ALSM is turned on.

With ALSM, SEC_LOG_USED_TOP shows the maximum logical (or virtual) log space used on top of LOGPRIMARY. You can now do the following calculation:

SEC_LOG_USED_TOP 10613.81 MB / LOGFILSIZ 156.25 MB ≈ 68 log files. Adding LOGPRIMARY (10 log files) leads to 78 log files, which is exactly the number of log files over which transaction NC1 spanned.
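
If you do not want to do this calculation manually, a small query can derive it directly from SEC_LOG_USED_TOP, LOGFILSIZ, and LOGPRIMARY. This is only a sketch; it assumes a single database member and uses the same unit conventions as the full statement:

with cfg as (
  select ( select cast( VALUE as float ) * 4096 from SYSIBMADM.DBCFG where name = 'logfilsiz' )  as LOGFILSIZ_BYTES,
         ( select cast( VALUE as float )        from SYSIBMADM.DBCFG where name = 'logprimary' ) as LOGPRIMARY
  from SYSIBM.SYSDUMMY1
)
-- number of log files covered by SEC_LOG_USED_TOP, plus the primary log files
select cast( m.SEC_LOG_USED_TOP / c.LOGFILSIZ_BYTES                as decimal(20,2) ) as SEC_LOG_FILES_TOP,
       cast( m.SEC_LOG_USED_TOP / c.LOGFILSIZ_BYTES + c.LOGPRIMARY as decimal(20,2) ) as LOG_FILES_SPANNED
from table(MON_GET_TRANSACTION_LOG(-1)) as m, cfg as c;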

The result also contains MAX_LOG and NUM_LOG_SPAN. If these settings are different from 0, you should revisit them: if NUM_LOG_SPAN is set to a value lower than 80% of LOGPRIMARY + LOGSECOND, ALSM will never extract, because the transaction is canceled before ALSM has a chance to extract.
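
The following sketch performs this check against the database configuration. It assumes a single database member and does not cover the special case LOGSECOND = -1 (infinite logging):

with cfg( name, value ) as (
  select name, cast( VALUE as float )
  from SYSIBMADM.DBCFG
  where name in ( 'logprimary', 'logsecond', 'num_log_span' )
)
select ( select value from cfg where name = 'num_log_span' ) as NUM_LOG_SPAN,
       case
         when ( select value from cfg where name = 'num_log_span' ) > 0
          and ( select value from cfg where name = 'num_log_span' ) <
              0.8 * ( ( select value from cfg where name = 'logprimary' ) +
                      ( select value from cfg where name = 'logsecond' ) )
         then 'NUM_LOG_SPAN may prevent ALSM extraction'
         else 'OK'
       end as NUM_LOG_SPAN_CHECK
from SYSIBM.SYSDUMMY1;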

The monitoring SQL statement

Are you curious which SQL statement I used? Here it is:

with mon_get_transaction_log as ( select * from table(MON_GET_TRANSACTION_LOG(-1)) ),
     dbcfg as ( select * from SYSIBMADM.DBCFG )
select 1 as number, 'TOTAL_LOG_AVAILABLE' as parameter,
case TOTAL_LOG_AVAILABLE when -1 then -1 else
cast( cast( TOTAL_LOG_AVAILABLE as float ) / 1024 / 1024 as decimal(20,2)) end as value, 'MB' as unit
from mon_get_transaction_log
union
select 2, 'TOTAL_LOG_USED',
cast( cast( TOTAL_LOG_USED as float ) / 1024 / 1024 as decimal(20,2)), 'MB'
from mon_get_transaction_log
union
select 3, 'total_log',
cast( cast( TOTAL_LOG_AVAILABLE + TOTAL_LOG_USED as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 4, 'log_used',
case TOTAL_LOG when 0 then 0 else
cast( (TOTAL_LOG_USED*100.) / TOTAL_LOG as  decimal(20,2) ) end, 'Pct'
from (
select TOTAL_LOG_AVAILABLE, TOTAL_LOG_USED, TOTAL_LOG_AVAILABLE + TOTAL_LOG_USED as TOTAL_LOG
from mon_get_transaction_log )
union
select 5, 'LOGFILSIZ',
cast( cast( VALUE as float ) * 4 / 1024 as decimal(20,2) ), 'MB'
from dbcfg where name like 'logfilsiz'
union
select 6, 'LOGPRIMARY',
cast( VALUE as decimal(20,2) ), 'LogFiles'
from dbcfg where name like 'logprimary'
union
select 7, 'log_primary',
cast( s1.value * s2.value * 4 / 1024 as decimal(20,2) ), 'MB'
from
( select cast( VALUE as decimal(20,2) ) as value from dbcfg where name like 'logprimary' ) as s1,
( select cast( VALUE as decimal(20,2) ) as value from dbcfg where name like 'logfilsiz' ) as s2
union
select 8, 'LOGSECOND',
cast( VALUE as decimal(20,2) ), 'LogFiles'
from dbcfg where name like 'logsecond'
union
select 9, 'log_secondary',
case s1.value when -1 then -1  else
cast( s1.value * s2.value * 4 / 1024 as decimal(20,2) ) end, 'MB'
from
( select cast( VALUE as decimal(20,2) ) as value from dbcfg where name like 'logsecond' ) as s1,
( select cast( VALUE as decimal(20,2) ) as value from dbcfg where name like 'logfilsiz' ) as s2
union
select 10, 'SEC_LOGS_ALLOCATED',
cast( SEC_LOGS_ALLOCATED as decimal(20,2) ), 'LogFiles'
from mon_get_transaction_log
union
select 11, 'TOT_LOG_USED_TOP',
cast( cast( TOT_LOG_USED_TOP as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 12, 'SEC_LOG_USED_TOP',
cast( cast( SEC_LOG_USED_TOP as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 13, 'log_primary_used_top',
cast( cast( TOT_LOG_USED_TOP - SEC_LOG_USED_TOP as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 14, 'LOG_EXTRACTION_DISK_SPACE_USED_TOTAL',
cast( cast( LOG_EXTRACTION_DISK_SPACE_USED_TOTAL as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 15, 'LOG_EXTRACTION_DISK_SPACE_USED_TOTAL_TOP',
cast( cast( LOG_EXTRACTION_DISK_SPACE_USED_TOTAL_TOP as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 16, 'LOG_EXTRACTION_PROCESSED_BYTES',
cast( cast( LOG_EXTRACTION_PROCESSED_BYTES as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 17, 'LOG_EXTRACTION_WRITTEN_BYTES',
cast( cast( LOG_EXTRACTION_WRITTEN_BYTES as float ) / 1024 / 1024 as decimal(20,2) ), 'MB'
from mon_get_transaction_log
union
select 18, 'LOG_EXTRACTION_WRITTEN_BYTES',
cast( cast( LOG_EXTRACTION_WRITTEN_BYTES as float ) / 1024 as decimal(20,2) ), 'KB'
from mon_get_transaction_log
union
select 19, 'data_extracted',
case LOG_EXTRACTION_PROCESSED_BYTES
when 0 then 0
else cast( (cast( LOG_EXTRACTION_WRITTEN_BYTES as float )*100) /
cast( LOG_EXTRACTION_PROCESSED_BYTES as float ) as decimal(20,2) ) end, 'Pct'
from mon_get_transaction_log
union
select 20, 'LOG_EXTRACTION_ROLLBACK_READS',
cast( LOG_EXTRACTION_ROLLBACK_READS as decimal(20,2) ), 'LogRecords'
from mon_get_transaction_log
union
select 21, 'LOG_EXTRACTION_CUR_COMMIT_READS',
cast( LOG_EXTRACTION_CUR_COMMIT_READS as decimal(20,2) ), 'LogRecords'
from mon_get_transaction_log
union
select 22, 'FIRST_ACTIVE_LOG',
cast( FIRST_ACTIVE_LOG as decimal(20,2) ), 'LogFileNumber'
from mon_get_transaction_log
union
select 23, 'CURRENT_ACTIVE_LOG',
cast( CURRENT_ACTIVE_LOG as decimal(20,2) ), 'LogFileNumber'
from mon_get_transaction_log
union
select 24, 'LAST_ACTIVE_LOG',
cast( LAST_ACTIVE_LOG as decimal(20,2) ), 'LogFileNumber'
from mon_get_transaction_log
union
select 25, 'LOG_EXTRACTION_LAST_EXTRACTED_LOG',
cast( LOG_EXTRACTION_LAST_EXTRACTED_LOG as decimal(20,2) ), 'LogFileNumber'
from mon_get_transaction_log
union
select 26, 'LOG_EXTRACTION_STATUS',
cast( LOG_EXTRACTION_STATUS as decimal(20,2) ), '0=n/a,2=active'
from mon_get_transaction_log
union
select 27, 'MAX_LOG',
cast( VALUE as decimal(20,2) ), 'Pct'
from dbcfg where name like 'max_log'
union
select 28, 'NUM_LOG_SPAN',
cast( VALUE as decimal(20,2) ), 'LogFiles'
from dbcfg where name like 'num_log_span'
order by 1
;

If you want to use this SQL statement, you can copy it into a file, for example, called monALSM.sql. You can execute the SQL statement using this command: db2 -tvf monALSM.sql.

Summary

This monitoring SQL statement helps database administrators assess the status of Db2 logging and ALSM. If you are missing a parameter, you can easily extend the SQL statement as you like.
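
For example, if you also want to see the application handle that holds the oldest transaction, you could append one more branch to the union just before the final order by. This fragment is only a sketch and assumes that the monitor element APPLID_HOLDING_OLDEST_XACT is available in your Db2 version:

union
select 29, 'APPLID_HOLDING_OLDEST_XACT',
cast( APPLID_HOLDING_OLDEST_XACT as decimal(20,2) ), 'ApplHandle'
from mon_get_transaction_log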

Finally, I’d like to thank everyone who contributed to this blog post, especially my SAP colleague Sabine Reich.

