I don’t know if you are ever asked to compare two AWR periods. AWR period comparison is pretty easy if you have access to the two periods in the same AWR repository. Two periods in the same repository can be compared with
SELECT * FROM TABLE(
   dbms_workload_repository.awr_diff_report_text(
      [db_id],             -- db_id of first period
      [instance id],       -- instance id of first period
      120,                 -- start snapshot id
      121,                 -- end snapshot id
      [db_id of target],   -- db_id of second period
      [instance id],       -- instance id of second period
      122,                 -- start snapshot id
      123));               -- end snapshot id
and it can be run for a single instance as
SELECT * FROM TABLE(
   dbms_workload_repository.awr_diff_report_text(
      (select dbid from v$database),
      1,      -- instance id
      120,    -- start snapshot id
      121,    -- end snapshot id
      (select dbid from v$database),
      1,      -- instance id
      122,    -- start snapshot id
      123));  -- end snapshot id
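If you don’t already have the snapshot ids handy, they can be looked up in the AWR repository. Here is a minimal sketch (the seven day window in the WHERE clause is just an assumption; adjust it to the period you care about):

-- list snapshots with their begin/end times so the
-- start/end snapshot ids above can be picked out
SELECT snap_id, instance_number, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
WHERE  begin_interval_time > sysdate - 7
ORDER  BY snap_id;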
This puts out a bit of a messy but useful report.
Here is an example of the output.
Here is an example of using it from Doug Burns.
A similar, but cleaner and simpler, report that I partially designed can be run from OEM.
But what if someone sends you two AWR reports? How can they be compared? These days I’m receiving at least a couple a week to compare, so I put together a compare script, udiffs.sh. Usage:

udiffs.sh [type] [awr report 1] [awr report 2]

where type is one of

sevt  = system events, i.e. wait events
stats = system statistics
load  = load profile section
init  = init.ora parameters
for example, comparing the load profile sections of two reports:

udiffs.sh load awr1.txt awr2.txt

which lists, for each statistic, the value from the first and second report along with the ratio (value 1 over value 2) and the delta (value 2 minus value 1):
... Statistics requested is load
... 1st report.txt
... 2nd report.txt
============================= load_psec ==============================
Name Ratio 1/2 Value1 Value2 Delta
Physical_reads: : 0.29: 266.20: 905.33: 639.13
Physical_writes: : 0.70: 585.32: 836.75: 251.43
Logons: : 0.86: 1.27: 1.48: 0.21
Logical_reads: : 1.04: 747342.68: 718259.28: -29083.4
Redo_size: : 1.17:3516126.09:2995591.47: -520535
Sorts: : 1.31: 3981.16: 3027.78: -953.38
User_calls: : 1.38: 16476.53: 11948.71: -4527.82
Parses: : 1.39: 4541.51: 3279.06: -1262.45
Executes: : 1.44: 10619.75: 7350.55: -3269.2
Hard_parses: : 1.89: 0.17: 0.09: -0.08
Block_changes: : 2.38: 18936.62: 7942.27: -10994.3
============================= load_ptrx ==============================
Name Ratio 1/2 Value1 Value2 Delta
Logons: : 0.00: 0.00: 0.01: 0.01
Physical_reads: : 0.11: 0.43: 3.94: 3.51
Physical_writes: : 0.26: 0.95: 3.64: 2.69
Logical_reads: : 0.39: 1218.11: 3123.70: 1905.59
Redo_size: : 0.44: 5730.99: 13027.80: 7296.81
Sorts: : 0.49: 6.49: 13.17: 6.68
User_calls: : 0.52: 26.86: 51.96: 25.1
Parses: : 0.52: 7.40: 14.26: 6.86
Executes: : 0.54: 17.31: 31.97: 14.66
Block_changes: : 0.89: 30.87: 34.54: 3.67
Of course, if your AWR report is an HTML file, then the current script won’t work. One workaround is to run the HTML through a text converter like http://www.nirsoft.net/utils/htmlastext.html
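If you do happen to have access to the repository the HTML reports came from, another option (just an idea, not part of the workaround above) is to regenerate the two periods as plain text AWR reports and feed those to the script instead. A minimal sketch, assuming instance 1 and the same snapshot ids as above:

-- regenerate the first period as a plain text AWR report
SELECT * FROM TABLE(
   dbms_workload_repository.awr_report_text(
      (select dbid from v$database),
      1,      -- instance id
      120,    -- start snapshot id
      121));  -- end snapshot id

-- and the second period
SELECT * FROM TABLE(
   dbms_workload_repository.awr_report_text(
      (select dbid from v$database),
      1,      -- instance id
      122,    -- start snapshot id
      123));  -- end snapshot id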
This script was originally written back before statspack and was based on utlstat. If you look closely you will even see that the code was actually modified by Connie Dialeris, aka the writer of statspack. Before Connie put together statspack, she was looking at the usability of my scripts. I had written a couple of scripts, collect.sh and utlstat.sh. The idea of these scripts was to loop continuously, collecting database statistics into flat files. Flat files were used to avoid the extra overhead of inserting the data into the database. The data could then be formatted into a utlstat-like report with utlstat.sh. Instead of writing a diff report that only worked on the raw data, I wrote a diff report that could be used on two different utlstat reports from customers as well as on the raw data. This strategy was lucky because it made it easy to update the diff script for statspack and then AWR.