Introduction
Active session history (ASH), available since 10g, is one of the greatest performance troubleshooting tools available for Oracle databases. And one of the most underutilized, too. In this article I will give some basic guidelines regarding ASH usage and list some of my favorite ASH queries.
ASH is a view (V$ACTIVE_SESSION_HISTORY). It can give most of the information found in trace files, but unlike tracing, it’s always on!
Using ASH is extremely simple, because you don’t have to enable any events or go to the database server filesystem to retrieve the results: you just query the view. However, there are a few basic rules that allow you to use ASH with maximum efficiency and minimize the risk of misinterpreting its output.
This technique is especially useful when you are trying to do either of the following:
- Resolve transient performance problems that may last for only a short period of time, such as why a particular job or session is not responding when the rest of the instance is performing as usual
- Perform scoped or targeted performance analysis by various dimensions or their combinations, such as time, session, module, action, or SQL identifier

Sampled data captured by ASH can be aggregated based on the dimensions in the data, including the following:
- SQL identifier of a SQL statement
- Object number, file number, and block number
- Wait event identifier and parameters
- Session identifier and session serial number
- Module and action name
- Client identifier of the session
- Service hash identifier
Using ASH efficiently
1. Don’t use automatic ASH reports
Using regular SQL to pull information out of ASH is the simplest and most reliable way, and it gives you full control. Using dbms_workload_repository or third-party tools to generate and pre-process ASH data is neither necessary nor helpful.
2. Count samples to obtain timings
Don’t use the TIME_WAITED column to calculate timings: this leads to wrong results, because sampled data is biased towards longer events. Instead, use sample counts. Since ASH samples are collected every second, the number of samples can be used as an estimate of the number of seconds taken by a certain activity (a certain event, time spent in a certain state, etc.). E.g. if you have X samples with event=’db file sequential read’, then you know that the database spent about X seconds on that event.
Note that when counting samples from DBA_HIST_ACTIVE_SESS_HISTORY, one sample represents 10 seconds, not 1 (only one sample out of 10 is kept there, to save space).
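As an illustration of the counting approach, here is a minimal sketch (the 5-minute window is just an example, not from the original text); each sample counts as roughly one second of DB time:

select nvl(event, session_state) event,
       count(*) "est. DB time (s)"
from v$active_session_history
where sample_time > sysdate - 5/1440
group by nvl(event, session_state)
order by count(*) desc;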
3. Be aware of short expiration dates
ASH collects an enormous amount of diagnostic data, and obviously these data cannot be kept indefinitely. V$ACTIVE_SESSION_HISTORY holds about one hour’s worth of diagnostic data. DBA_HIST_ACTIVE_SESS_HISTORY can hold up to several days of history (depending on your AWR retention settings), but it only keeps 1 sample out of 10 (i.e. its time resolution is 10 times worse).
This means that you need to look at ASH as soon as possible (AWR, on the other hand, can wait: if your retention is large enough, the snapshots will be available for weeks). Or better yet, dump the relevant entries from V$ACTIVE_SESSION_HISTORY into a table so that you can analyze them later.
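For example, a simple CTAS is enough to preserve the in-memory samples before they age out (the table name and the one-hour window are placeholders, pick whatever suits your case):

create table my_ash_dump as
select *
from v$active_session_history
where sample_time > sysdate - 1/24;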
It also means that you need to check the overlap between the interval of interest and the interval for which ASH data is available; otherwise you can misinterpret ASH results.
Update: As suggested by Christos in his comment to this post, one can use V$ASH_INFO to check ASH metadata, such as the earliest/latest sample time. I myself simply use:
select min(sample_time) min_sample_time,
       max(sample_time) max_sample_time
from v$active_session_history;
4. Instrument your code to make ASH efficient
One of the strengths of ASH is that it allows you to look at a single session or a group of sessions (pertaining to a certain application or module). This is a big advantage over global tools like AWR, which can only give you aggregated or averaged statistics for the entire instance. However, you need to be able to identify the session(s) of interest. Columns like CLIENT_ID, ACTION and MODULE are very useful for that purpose, but you need to do a little work to make sure that they are properly populated, i.e. call dbms_session.set_identifier and/or dbms_application_info.set_module/set_action inside the application to let the database know which part of the application the current session is working for.
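Here is a minimal sketch of such instrumentation (the module, action and client identifier values are placeholders, not part of the original post):

begin
  -- tell the database which module/step this session is currently working on
  dbms_application_info.set_module(module_name => 'BILLING', action_name => 'INVOICE_RUN');
  -- optionally tag the session with a client/batch identifier
  dbms_session.set_identifier('batch_20160621');
end;
/

Once these calls are in place, the MODULE, ACTION and CLIENT_ID columns of the ASH samples taken for this session are populated, and the queries below can filter on them.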
5. Be aware of the statistical error
Trace files give the duration of various events as measured by the database kernel code. This is subject to instrumentation error, but there is no statistical error. ASH is different, because it is based on sampling, and sampling techniques are statistical by nature: the more samples you have, the better the statistical accuracy. Quantitatively, the statistical error can be estimated using the Poisson distribution: the relative error is roughly the inverse square root of the number of samples. For example, if your period of interest covers 16 seconds, i.e. 16 samples in V$ACTIVE_SESSION_HISTORY, then statistically your results will only be accurate to about 25%.
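If you want to keep an eye on this, a simple sketch (my own addition, not from the original text) is to report the estimated relative error, 100/sqrt(n), next to each event (the 30-minute window is just an example):

select nvl(event, session_state) event,
       count(*) "est. DB time (s)",
       round(100/sqrt(count(*)), 1) "rel. error (%)"
from v$active_session_history
where sample_time > sysdate - 30/1440
group by nvl(event, session_state)
order by count(*) desc;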
Sample ASH queries
Here are some queries to show what one can do with ASH data. I use both the V$ACTIVE_SESSION_HISTORY and DBA_HIST_ACTIVE_SESS_HISTORY views (depending on how recent the time period of interest is, you can use either view). The SQL below can also be used as building blocks for creating other useful queries.
I’ve used 11.2.0.3 for testing, but items 1 through 8 should work on 10g as well.
1. Current CPU usage by this instance
select round(100*num_cpus_in_use/num_cpus_total, 1) cpu_usage_pct
from (select count(*) num_cpus_in_use
      from v$active_session_history
      where sample_time = (select max(sample_time) from v$active_session_history)
      and session_state = 'ON CPU'),
     (select value num_cpus_total
      from v$osstat
      where stat_name = 'NUM_CPUS') num_cpus;
2. Top 5 events for an arbitrary interval
This query returns information similar to “top 5 timed foreground events” in the AWR report, but with ASH you are free to choose any interval you like, rather than being limited by AWR snapshots.
select *
from (select nvl(event, session_state) event,
             round(100*count(*)/(sum(count(1)) over ()), 2) "DB time (%)"
      from v$active_session_history
      where sample_time between to_date(:start_time, 'dd-mon-yyyy hh24:mi:ss')
                        and to_date(:end_time, 'dd-mon-yyyy hh24:mi:ss')
      group by nvl(event, session_state)
      order by count(*) desc)
where rownum <= 5;
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                           % Total
Event                               Waits    Time (s)    Call Time
------------------------------   --------   ---------   ---------
CPU time                                           559       88.80
log file parallel write              2,181          28        4.42
SQL*Net more data from client      516,611          27        4.24
db file parallel write              13,383          13        2.04
db file sequential read                563           2         .27
The following example of this data is taken from the Wait Event section of the AWR report:
                                                   Total Wait   Avg wait      Waits
Event                             Waits  Timeouts    Time (s)       (ms)       /txn
---------------------------   ---------  --------  ----------   --------  ---------
log file parallel write           2,181         0          28         13       41.2
SQL*Net more data from clie     516,611         0          27          0    9,747.4
db file parallel write           13,383         0          13          1      252.5

3. Database activity as a function of time
This query is convenient for plotting database activity, also known as average active sessions (AAS), versus time (e.g. with gnuplot or another similar tool).
select sample_time time,
       count(*) activity
from v$active_session_history
group by sample_time
order by sample_time;
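If one-second resolution is too fine for plotting, a variant of the same idea (a sketch of mine, not from the original post) buckets the samples by minute; dividing the sample count by 60 gives the average number of active sessions during that minute:

select trunc(cast(sample_time as date), 'MI') sample_minute,
       round(count(*)/60, 2) avg_active_sessions
from v$active_session_history
group by trunc(cast(sample_time as date), 'MI')
order by sample_minute;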
4. Wait profile for a single SQL statement
This query is useful when you need to understand what a specific SQL statement was waiting on — I/O, CPU or something else. This information is similar to wait info in trace files.
select nvl(event, session_state) event,
       round(100*count(*)/(sum(count(1)) over ()), 2) "% query time"
from dba_hist_active_sess_history
where sql_id = :sql_id
group by nvl(event, session_state)
order by count(*) desc;
5. Top SQL for a module
This query gives output similar to the contents of the “SQL ordered by elapsed time” section of AWR reports, but it gives you more flexibility: not only can you pick any interval you like, you can also filter only the SQL pertaining to a certain application module. Of course, you can only do that if the application code is properly instrumented (i.e. it calls dbms_application_info.set_module to specify the current module). In this query and below, the :len bind variable sets the size of the displayed SQL text (I normally use :len=100).
select nvl(ash.sql_id, 'N/A') sql_id,
       replace(nvl(txt.sql_text_short, 'N/A'), chr(10), '\n') sql_text,
       count(*) "DB time(s)",
       round(100*count(*)/sum(count(1)) over (), 2) "DB time(%)"
from v$active_session_history ash,
     (select sql_id, dbms_lob.substr(sql_text, :len) sql_text_short
      from v$sql
      union all
      select sql_id, dbms_lob.substr(sql_text, :len) sql_text_short
      from dba_hist_sqltext) txt
where ash.sql_id = txt.sql_id (+)
and module = :module
group by ash.sql_id, txt.sql_text_short
order by count(*) desc;
6. Profiling PL/SQL calls in a module
This query gives output similar to that of PL/SQL profilers, but only for top-level PL/SQL calls. In principle, ASH also contains information about bottom-level PL/SQL calls, but it is not always useful, because the bottom-level PL/SQL columns apparently are not populated while a PL/SQL routine is running SQL.
select p.object_name ||
       case when p.object_name is not null and p.procedure_name is not null then '.' end ||
       p.procedure_name top_plsql_object_name,
       count(*) "DB time (s)"
from v$active_session_history ash,
     dba_procedures p
where ash.plsql_entry_object_id = p.object_id (+)
and ash.plsql_entry_subprogram_id = p.subprogram_id (+)
and module = :module
group by p.object_name, p.procedure_name
order by count(*) desc;
7. Accessed objects
This query gives the breakdown of DB time within the interval of interest by database objects accessed.
select nvl(o.owner ||
           case when o.object_name is not null then '.' end ||
           o.object_name, 'N/A') object_name,
       round(100*count(*)/sum(count(1)) over (), 2) "DB time (%)"
from dba_hist_active_sess_history ash,
     dba_objects o
where ash.current_obj# = o.object_id (+)
and ash.sample_time between to_date(:start_period, 'dd-mon-yyyy hh24:mi:ss')
                    and to_date(:end_period, 'dd-mon-yyyy hh24:mi:ss')
group by o.owner ||
         case when o.object_name is not null then '.' end ||
         o.object_name
order by count(*) desc;
8. Datafile access during last N hours
This query gives the breakdown of DB time by files accessed (useful for troubleshooting I/O problems).
select nvl(f.file_name, 'not in I/O waits'),
       10*count(*) "DB time (s)",
       round(100*count(*)/sum(count(1)) over (), 2) "DB time (%)"
from dba_hist_active_sess_history ash,
     dba_data_files f
where ash.current_file# = f.file_id (+)
and ash.sample_time > sysdate - :hours/24
group by f.file_name
order by count(*) desc;
9. Breakdown of query DB time by plan operations
This query tells where in the plan a certain SQL statement is spending most of its time, which is very useful for troubleshooting SQL retrospectively if SQL real-time monitor information is not available.
select ash.sql_plan_line_id,
       ash.sql_plan_operation,
       ash.sql_plan_options,
       p.object_name,
       round(100*count(*)/sum(count(1)) over (), 2) "% time"
from v$active_session_history ash,
     v$sql_plan p
where ash.sql_id = p.sql_id
and ash.sql_plan_hash_value = p.plan_hash_value
and ash.sql_plan_line_id = p.id
and ash.sql_id = :sql_id
and ash.sql_plan_hash_value = :plan_hash_value
group by ash.sql_plan_line_id, ash.sql_plan_operation, ash.sql_plan_options, p.object_name
order by count(*) desc;
10. Long-running queries at a specific moment in time
select distinct sql_id,
       -- elapsed time as of the sampled moment (sample_time), not as of sysdate
       24*60*60*(cast(sample_time as date) - sql_exec_start) seconds_running
from v$active_session_history
where sample_time = (select max(sample_time)
                     from v$active_session_history
                     where sample_time < to_date(:time, 'dd-mon-yyyy hh24:mi'))
and sql_id is not null
order by seconds_running desc;
11. Time model statistics query
This query gives breakdown of DB time by session state, similar to the time model statistics section of the AWR report.
select status,
       round(100*count(*)/sum(count(1)) over (), 2) "% DB time"
from (select case
               when in_connection_mgmt = 'Y' then 'IN_CONNECTION_MGMT'
               when in_parse = 'Y' then 'IN_PARSE'
               when in_hard_parse = 'Y' then 'IN_HARD_PARSE'
               when in_sql_execution = 'Y' then 'IN_SQL_EXECUTION'
               when in_plsql_execution = 'Y' then 'IN_PLSQL_EXECUTION'
               when in_plsql_compilation = 'Y' then 'IN_PLSQL_COMPILATION'
               when in_plsql_rpc = 'Y' then 'IN_PLSQL_RPC'
               when in_java_execution = 'Y' then 'IN_JAVA_EXECUTION'
               when in_bind = 'Y' then 'IN_BIND'
               when in_cursor_close = 'Y' then 'IN_CURSOR_CLOSE'
               when in_sequence_load = 'Y' then 'IN_SEQUENCE_LOAD'
               else 'N/A'
             end status
      from dba_hist_active_sess_history)
group by status
order by count(*) desc;
12. SQL statements consuming most PGA at a specific time
This query can help troubleshoot out-of-memory errors like ORA-4030 by providing a list of top PGA consumers and size of PGA allocated to them.
select ash.sql_id,
replace(nvl(txt.sql_text_short, 'N/A'),
chr(10), '\n'
) sql_text,
round(ASH.PGA_ALLOCATED/1024/1024) pga_mb
from dba_hist_active_sess_history ash,
(
select sql_id, dbms_lob.substr(sql_text, :len) sql_text_short
from v$sql
union all
select sql_id, dbms_lob.substr(sql_text, :len) sql_text_short
from dba_hist_sqltext
) txt
where ash.sql_id = txt.sql_id (+)
and ash.sql_id is not null
and ash.sample_time = (select max(sample_time)
from dba_hist_active_sess_history
where sample_time < to_date(:time, 'dd-mon-yyyy hh24:mi:ss'))
order by ash.pga_allocated desc;
ASH Report For ORCL/orcl1
DB Name | DB Id | Instance | Inst num | Release | RAC | Host |
---|---|---|---|---|---|---|
ORCL | 1431258748 | orcl1 | 1 | 11.2.0.4.0 | YES | rac1.oracl.com |
CPUs | SGA Size | Buffer Cache | Shared Pool | ASH Buffer Size |
---|---|---|---|---|
1 | 411M (100%) | 108M (26.3%) | 78M (18.9%) | 2.0M (0.5%) |
Sample Time | Data Source | |
---|---|---|
Analysis Begin Time: | 21-Jun-16 16:51:00 | DBA_HIST_ACTIVE_SESS_HISTORY in AWR snapshot 2 |
Analysis End Time: | 21-Jun-16 16:55:00 | DBA_HIST_ACTIVE_SESS_HISTORY in AWR snapshot 2 |
Elapsed Time: | 4.0 (mins) | |
Sample Count: | 14 | |
Average Active Sessions: | 0.58 | |
Avg. Active Session per CPU: | 0.58 | |
Report Target: | None specified |
ASH Report
- Top Events
- Load Profile
- Top SQL
- Top PL/SQL
- Top Java
- Top Call Types
- Top Sessions
- Top Objects/Files/Latches
- Activity Over Time
Top Events
Top User Events
Event | Event Class | % Event | Avg Active Sessions |
---|---|---|---|
CPU + Wait for CPU | CPU | 14.29 | 0.08 |
Top Background Events
Event | Event Class | % Activity | Avg Active Sessions |
---|---|---|---|
CPU + Wait for CPU | CPU | 28.57 | 0.17 |
control file sequential read | System I/O | 28.57 | 0.17 |
ASM file metadata operation | Other | 14.29 | 0.08 |
db file sequential read | User I/O | 14.29 | 0.08 |
Top Cluster Events
No data exists for this section of the report.
Top Event P1/P2/P3 Values
Event | % Event | P1 Value, P2 Value, P3 Value | % Activity | Parameter 1 | Parameter 2 | Parameter 3 |
---|---|---|---|---|---|---|
control file sequential read | 28.57 | "0","35","1" | 21.43 | file# | block# | blocks |
control file sequential read | 28.57 | "0","40","1" | 7.14 | file# | block# | blocks |
ASM file metadata operation | 14.29 | "39","0","0" | 14.29 | msgop | locn | NOT DEFINED |
db file sequential read | 14.29 | "1","54493","1" | 7.14 | file# | block# | blocks |
db file sequential read | 14.29 | "2","38362","1" | 7.14 | file# | block# | blocks |
Load Profile
- Top Service/Module
- Top Client IDs
- Top SQL Command Types
- Top Phases of Execution
- Top Remote Instances
Top Service/Module
Service | Module | % Activity | Action | % Action |
---|---|---|---|---|
SYS$BACKGROUND | UNNAMED | 57.14 | UNNAMED | 57.14 |
SYS$BACKGROUND | MMON_SLAVE | 14.29 | Check SMB Size | 14.29 |
SYS$USERS | DBMS_SCHEDULER | 14.29 | BSLN_MAINTAIN_STATS_JOB | 7.14 |
SYS$USERS | DBMS_SCHEDULER | 14.29 | PURGE_LOG | 7.14 |
SYS$USERS | UNNAMED | 14.29 | UNNAMED | 14.29 |
Top Client IDs
No data exists for this section of the report.
Top SQL Command Types
- 'Distinct SQLIDs' is the count of the distinct number of SQLIDs with the given SQL Command Type found over all the ASH samples in the analysis period
SQL Command Type | Distinct SQLIDs | % Activity | Avg Active Sessions |
---|---|---|---|
SELECT | 5 | 42.86 | 0.25 |
ALTER DATABASE | 0 | 14.29 | 0.08 |
Top Phases of Execution
Phase of Execution | % Activity | Avg Active Sessions |
---|---|---|
Hard Parse | 14.29 | 0.08 |
Parse | 14.29 | 0.08 |
SQL Execution | 14.29 | 0.08 |
Top Remote Instances
No data exists for this section of the report.
Top SQL
- Top SQL with Top Events
- Top SQL with Top Row Sources
- Top SQL using literals
- Top Parsing Module/Action
- Complete List of SQL Text
Top SQL with Top Events
SQL ID | Planhash | Sampled # of Executions | % Activity | Event | % Event | Top Row Source | % RwSrc | SQL Text |
---|---|---|---|---|---|---|---|---|
f6cz4n8y72xdc | 86676974 | 1 | 14.29 | CPU + Wait for CPU | 7.14 | ** Row Source Not Available ** | 7.14 | ** SQL Text Not Available ** |
f6cz4n8y72xdc | 86676974 | 1 | 14.29 | db file sequential read | 7.14 | ** Row Source Not Available ** | 7.14 | |
1vzc8w7hxcayh | 492002213 | 1 | 7.14 | db file sequential read | 7.14 | INDEX - RANGE SCAN | 7.14 | select o.dataobj# from seg$ s,... |
7a00w0f3y6awp | 736819937 | 0 | 7.14 | CPU + Wait for CPU | 7.14 | ** Row Source Not Available ** | 7.14 | ** SQL Text Not Available ** |
9vfvgsk7mtkr4 | 0 | 7.14 | CPU + Wait for CPU | 7.14 | ** Row Source Not Available ** | 7.14 | ** SQL Text Not Available ** | |
f99a23s9aba0z | 1846751226 | 1 | 7.14 | CPU + Wait for CPU | 7.14 | ** Row Source Not Available ** | 7.14 | select job, nvl2(last_date, 1,... |
Top SQL with Top Row Sources
SQL ID | PlanHash | Sampled # of Executions | % Activity | Row Source | % RwSrc | Top Event | % Event | SQL Text |
---|---|---|---|---|---|---|---|---|
f6cz4n8y72xdc | 86676974 | 1 | 14.29 | ** Row Source Not Available ** | 14.29 | CPU + Wait for CPU | 7.14 | ** SQL Text Not Available ** |
1vzc8w7hxcayh | 492002213 | 1 | 7.14 | INDEX - RANGE SCAN | 7.14 | db file sequential read | 7.14 | select o.dataobj# from seg$ s,... |
7a00w0f3y6awp | 736819937 | 0 | 7.14 | ** Row Source Not Available ** | 7.14 | CPU + Wait for CPU | 7.14 | ** SQL Text Not Available ** |
9vfvgsk7mtkr4 | 0 | 7.14 | ** Row Source Not Available ** | 7.14 | CPU + Wait for CPU | 7.14 | ** SQL Text Not Available ** | |
f99a23s9aba0z | 1846751226 | 1 | 7.14 | ** Row Source Not Available ** | 7.14 | CPU + Wait for CPU | 7.14 | select job, nvl2(last_date, 1,... |
Top SQL using literals
No data exists for this section of the report.
Top Parsing Module/Action
Module | Action | % Activity | Event | % Event |
---|---|---|---|---|
DBMS_SCHEDULER | BSLN_MAINTAIN_STATS_JOB | 7.14 | CPU + Wait for CPU | 7.14 |
DBMS_SCHEDULER | PURGE_LOG | 7.14 | CPU + Wait for CPU | 7.14 |
Complete List of SQL Text
Top PL/SQL Procedures
- 'PL/SQL entry subprogram' represents the application's top-level entry point (procedure, function, trigger, package initialization or RPC call) into PL/SQL.
- 'PL/SQL current subprogram' is the PL/SQL subprogram being executed at the point of sampling. If the value is 'SQL', it represents the percentage of time spent executing SQL for the particular PL/SQL entry subprogram.
PLSQL Entry Subprogram | % Activity | PLSQL Current Subprogram | % Current |
---|---|---|---|
SYS.DBMS_SPACE_ADMIN.SEGMENT_NUMBER_BLOCKS | 7.14 | SQL | 7.14 |
SYS.DBMS_SCHEDULER.AUTO_PURGE | 7.14 | SQL | 7.14 |
DBSNMP.BSLN_INTERNAL.MAINTAIN_STATISTICS | 7.14 | SQL | 7.14 |
Top Java Workload
No data exists for this section of the report.
Top Call Types
No data exists for this section of the report.
Top Sessions
Top Sessions
- '# Samples Active' shows the number of ASH samples in which the session was found waiting for that particular event. The percentage shown in this column is calculated with respect to wall clock time and not total database activity.
- 'XIDs' shows the number of distinct transaction IDs sampled in ASH when the session was waiting for that particular event
- For sessions running Parallel Queries, this section will NOT aggregate the PQ slave activity into the session issuing the PQ. Refer to the 'Top Sessions running PQs' section for such statistics.
Sid, Serial# | % Activity | Event | % Event | User | Program | # Samples Active | XIDs |
---|---|---|---|---|---|---|---|
11, 1 | 21.43 | control file sequential read | 21.43 | SYS | oracle@rac1.oracl.com (LMON) | 3/24 [ 13%] | 0 |
20, 1 | 14.29 | CPU + Wait for CPU | 7.14 | SYS | oracle@rac1.oracl.com (CKPT) | 1/24 [ 4%] | 0 |
20, 1 | 14.29 | control file sequential read | 7.14 | SYS | oracle@rac1.oracl.com (CKPT) | 1/24 [ 4%] | 0 |
35, 1 | 14.29 | ASM file metadata operation | 14.29 | SYS | oracle@rac1.oracl.com (O001) | 2/24 [ 8%] | 0 |
60, 3 | 14.29 | CPU + Wait for CPU | 7.14 | SYS | oracle@rac1.oracl.com (M001) | 1/24 [ 4%] | 0 |
60, 3 | 14.29 | db file sequential read | 7.14 | SYS | oracle@rac1.oracl.com (M001) | 1/24 [ 4%] | 0 |
3, 1 | 7.14 | CPU + Wait for CPU | 7.14 | SYS | oracle@rac1.oracl.com (PSP0) | 1/24 [ 4%] | 0 |
Top Blocking Sessions
No data exists for this section of the report.
Top Sessions running PQs
No data exists for this section of the report.
Top Objects/Files/Latches
Top DB Objects
No data exists for this section of the report.
Top DB Files
- With respect to Cluster and User I/O events only.
File ID | % Activity | Event | % Event | File Name | Tablespace |
---|---|---|---|---|---|
2 | 7.14 | db file sequential read | 7.14 | +DATA/orcl/datafile/sysaux.257.903477839 | SYSAUX |
Top Latches
No data exists for this section of the report.
Activity Over Time
- Analysis period is divided into smaller time slots
- Top 3 events are reported in each of those slots
- 'Slot Count' shows the number of ASH samples in that slot
- 'Event Count' shows the number of ASH samples waiting for that event in that slot
- '% Event' is 'Event Count' over all ASH samples in the analysis period
Slot Time (Duration) | Slot Count | Event | Event Count | % Event |
---|---|---|---|---|
16:51:00 (4.0 min) | 14 | CPU + Wait for CPU | 6 | 42.86 |
16:51:00 (4.0 min) | 14 | control file sequential read | 4 | 28.57 |
16:51:00 (4.0 min) | 14 | ASM file metadata operation | 2 | 14.29 |
End of Report