Wednesday, July 8, 2015

Redo Practice

Although "Undo is more complicated than redo" (Oracle Core: Essential Internals for DBAs and Developers), Redo performance is more visible to applications. End user is affected by "log file sync", while DBA is googling Panacea of "log file parallel write". It travels through all layers of computer system, from applications, Oracle, OS (Scheduler, VM, FS), adapter, down to disk (LUNS, RAID) and network.

Following the previous blog, "UNDO Practice", it is time to do the second exercise, sticking to
"The Beginners' Guide to Becoming an Oracle Expert" (Page 5).

All tests are done in Oracle 11.2.0.4.0 (see appended Test Code).

1. Asynchronous Commit


The default PL/SQL commit behavior for nondistributed transactions is BATCH NOWAIT if the COMMIT_LOGGING and COMMIT_WAIT database initialization parameters have not been set (Database PL/SQL Language Reference).

Run:

 exec create_awr;
 exec update_test_tab(1000, 1);
 exec create_awr;


AWR shows:

Statistic                       Total/Waits     per Second      per Trans
redo size                       11,369,340      943,435.40      11,245.64
user commits                    1,011           83.89           1.00
redo synch writes               2               0.17            0.00
redo writes                     971             80.57           0.96
log file sync                   2
log file parallel write         972

Table-1

For 1,000 updates and 1,000 user commits, Oracle needs 971 "redo writes", which trigger almost the same number of "log file parallel write" waits (972), very close to the number of user commits. However, there are only 2 "redo synch writes", and hence only 2 "log file sync" waits. That is the effect of asynchronous commit.

Note that "select for update" shows the similar Redo behaviour as "update" even though there is no real update (see Test Code 2).

This redo optimization is also effective for the Oracle Server-Side Internal Driver JVM.
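
The commit behaviour can also be controlled explicitly, either per commit with the COMMIT WRITE clause or per session with the COMMIT_LOGGING and COMMIT_WAIT parameters. A minimal sketch (standard Oracle syntax; not part of the appended test code):

 -- asynchronous, batched commit (the PL/SQL default described above)
 commit write batch nowait;

 -- synchronous commit, the default for client tools such as SQL*Plus (see next section)
 commit write immediate wait;

 -- session-wide equivalent of the asynchronous behaviour
 alter session set commit_logging = batch;
 alter session set commit_wait    = nowait;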

2. Synchronous Commit


Run "script_1 1000", AWR shows:


Statistic                       Total/Waits     per Second      per Trans
redo size                       11,202,868      3,002,644.87    11,080.98
user commits                    1,011           270.97          1.00
redo synch writes               1,001           268.29          0.99
redo writes                     1,036           277.67          1.02
log file sync                   1,001
log file parallel write         1,034

Table-2

In the case of 1,000 synchronous commits, all 5 statistics (excluding "redo size") are almost the same: each user commit leads to one of each event.

"redo synch write" represents the number of times the redo is forced to disk immediately, usually for a transaction commit.

A client-side JDBC connection has similar behaviour, but it is possible to switch off the default auto-commit with:
   connection.setAutoCommit(false);

3. Piggybacked Commit


Piggybacked commit reduces the number of redo writes caused by commits by grouping redo records from several sessions together (Oracle MOS "WAITEVENT: 'log file sync' Reference Note" (Doc ID 34592.1)).

Run:

 exec create_awr;
 exec update_test_tab_loop(1000, 10);
 exec dbms_lock.sleep(120); 
 -- wait for job finished
 exec create_awr;


AWR shows:


Statistic                       Total/Waits     per Second      per Trans
redo size                       18,138,376      136,033.06      1,808.05
user commits                    10,032          75.24           1.00
redo synch writes               12              0.09            0.00
redo writes                     440             3.30            0.04
log file sync                   12
log file parallel write         439

Table-3

Compared to Table-1, although we make 10,000 updates and user commits, "redo size" is not 10 times that of Table-1 (18,138,376 vs. 11,369,340), and "redo writes" (also "log file parallel write") are even fewer than in Table-1 (440 vs. 971). That is the role piggybacked commit plays.
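
A rough indicator of how much grouping takes place can be read from v$sysstat: the ratio of user commits to physical redo writes. This is only a sketch, since LGWR also writes for reasons other than commits (log buffer filling, the 3-second timeout):

 select max(decode(name, 'user commits', value)) /
        nullif(max(decode(name, 'redo writes', value)), 0) as commits_per_redo_write
   from v$sysstat
  where name in ('user commits', 'redo writes');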

LGWR truss outputs:

  listio64(0x0000000010000004, 0x000000000FFFFFFF, 0x00000000FFFDB4D0, 0x0000000000000002, ...) = 0x0000000000000000
  aio_nwait64(0x0000000000001000, 0x0000000000000002, 0x0FFFFFFFFFFEB4D0, 0x800000000000D032, ...) = 0x0000000000000002
  thread_post_many(7, 0x0FFFFFFFFFFF3488, 0x0FFFFFFFFFFF3480) = 0
  listio64(0x0000000010000004, 0x000000000FFFFFFF, 0x00000000FFFDB4D0, 0x0000000000000002, ...) = 0x0000000000000000
  aio_nwait64(0x0000000000001000, 0x0000000000000002, 0x0FFFFFFFFFFEB4D0, 0x800000000000D032, ...) = 0x0000000000000002
  thread_post_many(4, 0x0FFFFFFFFFFF3488, 0x0FFFFFFFFFFF3480) = 0
  listio64(0x0000000010000004, 0x000000000FFFFFFF, 0x00000000FFFDB4D0, 0x0000000000000002, ...) = 0x0000000000000000
  aio_nwait64(0x0000000000001000, 0x0000000000000002, 0x0FFFFFFFFFFEB4D0, 0x800000000000D032, ...) = 0x0000000000000002
  thread_post_many(6, 0x0FFFFFFFFFFF3488, 0x0FFFFFFFFFFF3480) = 0

We can see LGWR calling "thread_post_many" for several sessions (nthreads is the first parameter of "thread_post_many") after each "aio_nwait64". This means that the redo of multiple threads (Oracle sessions) is written by one single "listio64" and the sessions are posted at the same time.

4. Distributed Transactions


We will look at Oracle-Controlled Distributed Transactions (using Database Link).

Run "script_2 1000", AWR in dblocal (Commit Point Site) shows:


Statistic                       Total/Waits     per Second      per Trans
redo size                       12,135,636      394,513.70      12,003.60
user commits                    1,011           32.87           1.00
redo synch writes               3,003           97.62           2.97
redo writes                     3,038           98.76           3.00
log file sync                   3,002
log file parallel write         3,116
transaction branch allocation   4,012

Table-4


AWR in dbremote shows:

Statistic                       Total/Waits     per Second      per Trans
redo size                       11,315,104      371,681.63      11,191.99
user commits                    1,011           33.21           1.00
redo synch writes               2,003           65.80           1.98
redo writes                     2,035           66.85           2.01
log file sync                   2,002
log file parallel write         2,036
transaction branch allocation   9,013

Table-5

Compared to Table-2 (1,000 synchronous commits in nondistributed transactions), 1,000 updates and 1,000 user commits in distributed transactions demand about 3 times as many redo events on the Commit Point Site, and 2 times as many on the other node. "transaction branch allocation" is about 4 times, respectively 9 times, the number of user commits.

Each "redo synch write" is translated as one UNIX System Call "pwrite", which can be watched by truss or dtrace command in Solaris. Therefore there are 3,003 pwrites in dblocal, and 2,003 pwrites in dbremote.

By dumping Redo Log only for the "commit" command in dblocal and dbremote with:
     ALTER SYSTEM DUMP LOGFILE '<full_path_logfile_name>';

CHANGE vectors in dblocal are:

CLS:117 AFN:3 DBA:0x00c00290 OBJ:4294967295 SCN:0x0853.9628ab8a SEQ:1 OP:5.2 
CLS:118 AFN:3 DBA:0x00c00550 OBJ:4294967295 SCN:0x0853.9628ab8a SEQ:1 OP:5.1 
CLS:118 AFN:3 DBA:0x00c00550 OBJ:4294967295 SCN:0x0853.9628ab8d SEQ:1 OP:5.1 
CLS:117 AFN:3 DBA:0x00c00290 OBJ:4294967295 SCN:0x0853.9628ab8d SEQ:1 OP:5.12 
CLS:117 AFN:3 DBA:0x00c00290 OBJ:4294967295 SCN:0x0853.9628ab8d SEQ:2 OP:5.12 
CLS:117 AFN:3 DBA:0x00c00290 OBJ:4294967295 SCN:0x0853.9628ab8f SEQ:1 OP:5.4 

CHANGE vectors in dbremote are:

CLS:123 AFN:3 DBA:0x00c002b0 OBJ:4294967295 SCN:0x0853.9628ac1d SEQ:1 OP:5.2
CLS:124 AFN:3 DBA:0x00c009ba OBJ:4294967295 SCN:0x0853.9628ac5c SEQ:1 OP:5.1
CLS:124 AFN:3 DBA:0x00c009ba OBJ:4294967295 SCN:0x0853.9628ac6a SEQ:1 OP:5.1
CLS:123 AFN:3 DBA:0x00c002b0 OBJ:4294967295 SCN:0x0853.9628ac6a SEQ:1 OP:5.4

we can see that the "commit" in dblocal includes two additional "OP:5.12" change vectors on the UNDO header (CLS:117), apart from OP:5.2 (update of the transaction table in the undo segment header), OP:5.1 (update of the undo block) and OP:5.4 (commit). Probably "OP:5.12" is specific to the two-phase commit.

As documented in the Oracle Two-Phase Commit Mechanism, the Commit Point Site experiences 3 phases (prepare/commit/forget), whereas the other resource nodes perform only the first 2 phases.

A redo record, also called a redo entry, is made up of a group of change vectors. In the AWR report, we can also check "redo entries".

Each committed transaction has an associated system change number (SCN) to uniquely identify the changes made  by the SQL statements within that transaction.

During the prepare phase, the database determines the highest SCN at all nodes involved in the transaction. The transaction then commits with the high SCN at the commit point site. The commit SCN is then sent to all prepared nodes with the commit decision (Distributed Transactions Concepts).
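
This SCN synchronization can be observed directly. A small sketch comparing the current SCNs of both databases (it reuses the dblinkremote link created in Test Code 2):

 -- after a distributed transaction, both SCNs are aligned to the higher one
 select 'local'  db, current_scn from v$database
 union all
 select 'remote' db, current_scn from v$database@dblinkremote;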

The alert log of dblocal contains the following text the first time the distributed transaction is made:

 Fri Jul 03 13:48:00 2015
 Advanced SCN by 29394 minutes worth to 0x0824.738bd480, by distributed transaction logon, remote DB: DBREMOTE.COM.
  Client info: DB logon user TESTU, machine dbremote, program oracle@dbremote (TNS V1-V3), and OS user oracle
 Fri Jul 03 13:50:04 2015 

The following two queries can help monitor distributed transactions.

 select 'local' db, v.* from v$global_transaction v  union all
 select 'remote' db, v.* from v$global_transaction@dblinkremote v;
 
 select 'local' db, v.* from v$lock v where type in ('TM', 'TX', 'DX')  union all
 select 'remote' db, v.* from v$lock@dblinkremote v where type in ('TM', 'TX', 'DX');

For XA-based distributed transactions (X/Open DTP), similar behaviour can be observed.
Additionally, the AWR "Enqueue Activity" section shows "DX-Distributed Transaction".

5. Distributed Transaction Commit


We can commit the whole distributed transaction in the local db:
   update test_redo set name = 'update_local_1' where id = 1;
   update test_redo@dblinkremote set name = 'update_remote_1' where id = 1;
   commit;

or commit in the remote db:
   update test_redo set name = 'update_local_1' where id = 1;
   update test_redo@dblinkremote set name = 'update_remote_1' where id = 1;
   exec dblinkremote_commit;
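
If a distributed commit fails between the two phases, the transaction can become in-doubt. A hedged side check (nothing should show up during the normal test runs):

 -- in-doubt distributed transactions, to be resolved with
 -- COMMIT FORCE / ROLLBACK FORCE if automatic recovery fails
 select local_tran_id, global_tran_id, state, mixed, commit#
   from dba_2pc_pending;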


6. Distributed Transaction with autonomous_transaction


We can make a more sophisticated test by incorporating an autonomous transaction in the remote DB.
Run "script_3 1000", AWR in dblocal (Commit Point Site) shows:

Statistic                       Total/Waits     per Second      per Trans
redo size                       12,010,796      483,565.34      11,880.11
user commits                    1,011           40.7            1.00
redo synch writes               3,004           120.94          2.97
redo writes                     3,041           122.43          3.01
log file sync                   3,004
log file parallel write         3,042
transaction branch allocation   6,010

Table-6

AWR in dbremote shows:

Statistic                       Total/Waits     per Second      per Trans
redo size                       12,156,052      499,673.30      6,047.79
user commits                    2,010           82.62           1.00
redo synch writes               3,002           123.4           1.49
redo writes                     3,035           124.75          1.51
log file sync                   3,004
log file parallel write         3,034
transaction branch allocation   10,019

Table-7

In dblocal, Table-4 and Table-6 are almost identical.

In dbremote, the numbers in Table-7 are higher than those in Table-5 due to the autonomous transaction. It looks like no piggybacked commit is in effect.

The above test case reflects the complexity of a three-tier architecture, where the middle tier and the data tier synchronize atomic data operations with distributed transactions. With Oracle as the data tier, this can involve autonomous transactions.

7. Distributed Transaction: distributed_lock_timeout


In the remote db, set distributed_lock_timeout by:
       alter system set distributed_lock_timeout=27 scope=spfile;
then restart the remote db:
       startup force;
In the remote db, run:
       update test_redo set name = 'update_remote_0' where id = 1;
and then in the local db, execute:
   update test_redo set name = 'update_local_1' where id = 1;
   update test_redo@dblinkremote set name = 'update_remote_1' where id = 1;
After 27 seconds (waiting on the event "enq: TX - contention"), the local session returns:
   ORA-02049: timeout: distributed transaction waiting for lock
   ORA-02063: preceding line from DBLINKREMOTE


ORA-02049 occurs when the segment modified by the DML is blocked for longer than distributed_lock_timeout, for example if the segment is stored in an Oracle bigfile tablespace and pre-allocation takes more than distributed_lock_timeout seconds (see blog: Oracle bigfile tablespace pre-allocation and session blocking).
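
The current value of the parameter (default 60 seconds; it is static, hence the restart above) can be checked with:

 select name, value, isdefault
   from v$parameter
  where name = 'distributed_lock_timeout';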

Test Setup


Set up two DBs, one named dblocal, the other dbremote.

Run Test Code 1 on both DBs; additionally run Test Code 2 on dblocal.
Note: adapt the database name, user id, and password first.

---------------- Test Code 1 ----------------

drop table test_redo;

create table test_redo (
  id   int primary key using index (create index ind_p on test_redo (id)),
  name varchar2(300));

insert into test_redo
select level, rpad('abc', 100, 'x') y from dual connect by level <= 10000;

exec dbms_stats.gather_table_stats(null, 'TEST_REDO', cascade => true);

create or replace procedure create_awr as
begin
  sys.dbms_workload_repository.create_snapshot('ALL');
end;
/

create or replace procedure db_commit as
begin
  commit;
end;
/

create or replace procedure db_commit_autotrx(i number) as
  pragma autonomous_transaction;
begin
  update test_redo set name = i||'_autonomous_transaction_at_'||localtimestamp
   where id = (10000 - (i - 1));
  commit;
end;
/

---------------- Test Code 2 ----------------

drop database link dblinkremote;
create database link dblinkremote connect to k identified by k using 'dbremote';

create or replace procedure update_test_tab(p_cnt number, p_job number) as
begin
  for i in 1.. p_cnt loop
    update test_redo set name = rpad('abc', 100, i)
     where id = (p_job - 1) * 1000 + mod(i, 1001);
    ---- get similar AWR Redo figures for select for update
    --for c in (select name from test_redo where id = (p_job - 1) * 1000 + mod(i, 1001) for update)
    --loop null; end loop;
    commit;
  end loop;
end;
/

create or replace procedure update_test_tab_loop(p_cnt number, p_job_cnt number) as
  l_job_id pls_integer;
begin
  for i in 1.. p_job_cnt loop
    dbms_job.submit(l_job_id, 'update_test_tab('||p_cnt||', '||i||');');
  end loop;
  commit;
end;
/

create or replace procedure dblinkremote_commit as
begin
  db_commit@dblinkremote;
end;
/

create or replace procedure dblinkremote_commit_autotrx(i number) as
begin
  db_commit_autotrx@dblinkremote(i);
end;
/

---------------- script_1 ----------------

#!/bin/ksh
sqlplus -s testu/testp |&
print -p "exec create_awr;"
print -p "set feedback off"
i=0
while (( i < $1 ))
do
  (( i+=1 ))
  echo $i
  print -p "update test_redo set name = rpad('abc', 100, $i) where id = mod($i, 1001);"
  print -p "commit;"
done
print -p "exec create_awr;"
print -p "exit 0"

---------------- script_2 ----------------

#!/bin/ksh
sqlplus -s testu/testp |&
print -p "exec create_awr;"
print -p "exec create_awr@dblinkremote;"
print -p "set feedback off"
i=0
while (( i < $1 ))
do
  (( i+=1 ))
  echo $i
  print -p "update test_redo set name = rpad('abc', 100, $i) where id = mod($i, 1001);"
  print -p "update test_redo@dblinkremote set name = rpad('abc', 100, $i) where id = mod($i, 1001);"
  print -p "commit;"
done
print -p "exec create_awr;"
print -p "exec create_awr@dblinkremote;"
print -p "exit 0"

---------------- script_3 ----------------

#!/bin/ksh
sqlplus -s testu/testp |&
print -p "exec create_awr;"
print -p "exec create_awr@dblinkremote;"
print -p "set feedback off"
i=0
while (( i < $1 ))
do
  (( i+=1 ))
  echo $i
  print -p "update test_redo set name = rpad('abc', 100, $i) where id = mod($i, 1001);"
  print -p "update test_redo@dblinkremote set name = rpad('abc', 100, $i) where id = mod($i, 1001);"
  print -p "exec dblinkremote_commit_autotrx(mod($i, 1001));"
  print -p "commit;"
done
print -p "exec create_awr;"
print -p "exec create_awr@dblinkremote;"
print -p "exit 0"

Thursday, July 2, 2015

UNDO Practice

An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field.
Niels Bohr

The book Oracle Core: Essential Internals for DBAs and Developers declares that the change vector (the heart of redo and undo) is the most important feature of Oracle (Page 5). While redo data (the after image) is written to be forgotten, undo data (the before image) stays active over the lifetime of the instance. Therefore it is worth doing a couple of exercises on UNDO by following:
"The Beginners' Guide to Becoming an Oracle Expert" (Page 5).

All tests are done in Oracle 11.2.0.4.0 with undo_management=AUTO and db_block_size=8192.

UNDO dump


The first test code practices what one can learn from the book (see the book for more details).

Run Test Code 1; the output looks like:

------------ testhost_ora_25106_block_1.trc ------------
Start dump data blocks tsn: 1172 file#:863 minblk 2082662 maxblk 2082662
scn: 0x0824.e789f0b4 seq: 0x02 flg: 0x04 tail: 0xf0b40602
0x02   0x0078.01c.000006ba  0x00c0054a.03c8.01  ----    1  fsc 0x0000.00000000
bdba: 0x001fc766


------------ testhost_ora_25106_block_2.trc ------------
Start dump data blocks tsn: 1172 file#:863 minblk 2082663 maxblk 2082663
scn: 0x0824.e789f0b8 seq: 0x02 flg: 0x04 tail: 0xf0b80602
0x02   0x0078.01c.000006ba  0x00c0054b.03c8.01  ----    1  fsc 0x0000.00000000
bdba: 0x001fc767


------------ testhost_ora_25106_block_3.trc ------------
Start dump data blocks tsn: 1172 file#:863 minblk 2082659 maxblk 2082659
scn: 0x0824.e789f0bc seq: 0x02 flg: 0x04 tail: 0xf0bc0602
0x02   0x0078.01c.000006ba  0x00c0054c.03c8.01  ----    1  fsc 0x0000.00000000
bdba: 0x001fc763


------------ testhost_ora_25106_undo_1.trc ------------
Start dump data blocks tsn: 2 file#:3 minblk 1354 maxblk 1354
BH (0xc0f6fe38) file#: 3 rdba: 0x00c0054a (3/1354) class: 256 ba: 0xc013a000
scn: 0x0824.e789f0b4 seq: 0x01 flg: 0x04 tail: 0xf0b40201
xid: 0x0078.01c.000006ba  seq: 0x3c8 cnt: 0x1   irb: 0x1   icl: 0x0   flg: 0x0000
rdba: 0x00c00549
op: C  uba: 0x00c00549.03c8.38


------------ testhost_ora_25106_undo_2.trc ------------
Start dump data blocks tsn: 2 file#:3 minblk 1355 maxblk 1355
BH (0xcef64d80) file#: 3 rdba: 0x00c0054b (3/1355) class: 256 ba: 0xce018000
scn: 0x0824.e789f0b8 seq: 0x02 flg: 0x04 tail: 0xf0b80202
xid: 0x0078.01c.000006ba  seq: 0x3c8 cnt: 0x1   irb: 0x1   icl: 0x0   flg: 0x0000
rdba: 0x00c0054a
op: C  uba: 0x00c0054a.03c8.02


------------ testhost_ora_25106_undo_3.trc ------------
Start dump data blocks tsn: 2 file#:3 minblk 1356 maxblk 1356
BH (0xd2fe5930) file#: 3 rdba: 0x00c0054c (3/1356) class: 256 ba: 0xd2d4c000
scn: 0x0824.e789f0bc seq: 0x02 flg: 0x04 tail: 0xf0bc0202
xid: 0x0078.01c.000006ba  seq: 0x3c8 cnt: 0x1   irb: 0x1   icl: 0x0   flg: 0x0000
rdba: 0x00c0054b
op: C  uba: 0x00c0054b.03c8.02


1. The above output shows that all UNDO blocks are linked via the "rdba" field, from last to first
    (0x00c0054c -> 0x00c0054b -> 0x00c0054a).

2. The modified ITL of each data block is also saved in its UNDO block, in the field marked
    "op: C  uba:". So all modified information in the data block is recorded in the before image.

UNDO Size


We can make an UNDO test by updating one block many times, and querying an unmodified block from a different session. The purpose is to try to extend the UNDO length (space) towards infinity.

Open one session and update the first row 1000 times, without committing:

 begin
   for i in 1..1000 loop
     update test_tab set name = rpad('a', 3000, i) where id = '1';
   end loop;
 end;
 /

Open a second session and read the second row:

 select value from v$sysstat where name = 'data blocks consistent reads - undo records applied';
 select count(*) from test_tab t where id = '2';
 select value from v$sysstat where name = 'data blocks consistent reads - undo records applied';

The output:

 7401710432
 1
 7401711432

shows that there are 1000 (7401711432 - 7401710432) "data blocks consistent reads - undo records applied".

Even though the second row is not modified and resides in a different block, the query still needs to apply all the UNDO records to build the consistent read (CR).

(More statistics can be listed by RUNSTATS in Expert Oracle Database Architecture)
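
To isolate the counter for the reading session alone (v$sysstat is instance-wide), the same statistic can also be read from v$mystat, a minimal sketch:

 select sn.name, ms.value
   from v$mystat ms, v$statname sn
  where sn.statistic# = ms.statistic#
    and sn.name = 'data blocks consistent reads - undo records applied';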


UNDO Duration


Another UNDO test is to continuously insert rows without committing, while querying from a few other sessions. The purpose is to try to extend the UNDO duration (time) towards infinity.

A similar code pattern can probably be found in some real applications
(see Test Code 2).

Start the test by:

 exec insert_no_commit_loop(1);
 exec test_select_loop(24);
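
While the jobs are running, the undo consumed by the open insert transactions can be watched as well, for instance with this sketch:

 -- undo blocks and undo records held by each open transaction
 select s.sid, t.used_ublk, t.used_urec, t.start_time
   from v$session s, v$transaction t
  where s.saddr = t.ses_addr;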

From time to time, run the following query; we can see that the elapsed_per_exec of the select statement gradually increases. At the beginning it is a few milliseconds; after a few hours it can reach a couple of minutes.

select executions, disk_reads
      ,rows_processed, round(rows_processed/executions, 2) rows_per_exec
      ,buffer_gets, round(buffer_gets/executions, 2) buffer_per_exec
      ,round(elapsed_time/1e3) elapsed_time, round(elapsed_time/1e3/executions, 2) elapsed_per_exec
      ,v.*
from v$sql v where lower(sql_text) like '%test_tab2%' and v.executions > 0
order by v.executions desc;


Deeper into Buffer Cloning


In the section "Deeper into Buffer Cloning" (Page 244) of Oracle Performance Firefighting (4th Printing, June 2010), there is this text:

The SCN is 12320, which is before our query started at time 12330. Therefore, we do not apply the undo. If we did apply the undo, our CR buffer would represent a version of block 7,678 at time 12320, which is too early!

Probably, even though 12320 is before 12330, we should still apply this undo, since it belongs to an open transaction. In fact, we should reverse all blocks changed by this uncommitted transaction.

Otherwise, the elapsed_per_exec of the above select statement would arrive at a fixpoint (a Nash equilibrium, perhaps?).

On Page 242, the text reads:

If you recall, when a server process locates a desired buffer and discovers a required row has changed since its query began, it must create a back-in-time image of the buffer.

By the same argument, even if a required row was changed before the query began, it should still be reversed if it is not yet committed.


Temporary Table (GTT): UNDO / Redo


As we know, Oracle fulfils a DML in the following 5 steps:
  1. create UNDO Change Vector
  2. create REDO Change Vector
  3. combine both Change Vectors into Redo Buffer, and then write to redo log
  4. write UNDO record into UNDO file
  5. write modified Data into DATA file
(See: Oracle Core: Essential Internals for DBAs and Developers (Page 10); Update Restart and new Active Undo Extent )

In the case of a GTT, applying the principle
  "Oracle redo log never records temporary data"
to prune the above 5 steps, a DML on a GTT is processed in 4 steps:
  1. create UNDO Change Vector
  3. combine the UNDO Change Vectors into the Redo Buffer, and then write to the redo log
  4. write UNDO record into UNDO file
  5. write modified Data into DATA (TEMP) file
When setting 12c temp_undo_enabled=TRUE, it is further shortened to:
  1. create UNDO Change Vector
  4. write UNDO record into TEMP file (remember: TEMP never has redo; see V$TEMPUNDOSTAT)
  5. write modified Data into DATA (TEMP) file
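
A minimal 12c sketch of the last variant (the tests in this post run on 11.2, and the table name test_gtt is made up for illustration):

 alter session set temp_undo_enabled = true;  -- must be set before the first GTT DML in the session

 create global temporary table test_gtt (id int, name varchar2(100))
 on commit preserve rows;

 insert into test_gtt
 select level, rpad('x', 100, 'x') from dual connect by level <= 10000;

 -- redo generated by this session; compare with a run using temp_undo_enabled = false
 select sn.name, ms.value
   from v$mystat ms, v$statname sn
  where sn.statistic# = ms.statistic# and sn.name = 'redo size';

V$TEMPUNDOSTAT then shows the temporary undo written into the TEMP tablespace.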

Test Code 1


drop table test_tab;
create table test_tab (id varchar2(3), scn number, name varchar2(3000), team varchar2(3000));
insert into test_tab values ('1', 1, rpad('a', 3000, 'a'), rpad('x', 3000, 'x'));
insert into test_tab values ('2', 2, rpad('b', 3000, 'b'), rpad('y', 3000, 'y'));
insert into test_tab values ('3', 3, rpad('c', 3000, 'c'), rpad('z', 3000, 'z'));
commit;

set lines 200
column name new_value  hostname
column mysid new_value mysid
column spid new_value  myspid
column dir new_value   mydir

select lower(name) name, sys.dbms_support.mysid mysid from v$database;
select spid from v$session s, v$process p where s.paddr=p.addr and s.sid = sys.dbms_support.mysid;
select '/orabin/app/oracle/admin/'||'&hostname'||'/diag/rdbms/'||'&hostname'||'/'||'&hostname'||'/trace' dir
from dual;

prompt hostname: &hostname, mysid: &mysid, myspid: &myspid, mydir: &mydir

create or replace directory TEST_DUMP_DIR as '&mydir';  
-- see: select * from v$diag_info where name = 'Diag Trace';

drop table dump_text_tab;

create table dump_text_tab (text varchar2(1000)) organization external
(type oracle_loader default directory TEST_DUMP_DIR
 access parameters
  (records delimited by newline CHARACTERSET AL32UTF8
   badfile aaa_db_io:'dump_text_tab.bad'
   logfile aaa_db_io:'dump_text_tab.log'
   discardfile aaa_db_io:'dump_text_tab.dsc'
   fields terminated by ','  OPTIONALLY ENCLOSED BY '"'
   missing field values are null
   reject rows with all null fields
   (text position(1:1000)))
 location ('host_ora_spid_x.trc')) reject limit unlimited;

create or replace procedure dump_read(loc varchar2, type varchar2) as
begin
  dbms_output.put_line('------------ '||loc||' ------------');
  execute immediate q'[alter table dump_text_tab location (']'||loc||q'[')]';
  if type = 'block' then
    for c in (
      select text from dump_text_tab
      where text like 'Start dump data blocks tsn:%'
         or text like 'scn:%'
         or text like '%-    1  fsc%'
         or text like 'bdba:%') loop
      dbms_output.put_line(c.text);
    end loop;
  else
  for c in (
    select text from dump_text_tab
    where text like 'Start dump data blocks tsn:%'
       or text like 'BH (%'
       or text like 'scn:%'
       or text like 'xid:%'
       or text like 'rdba:%'
       or text like 'op: C  uba:%') loop
    dbms_output.put_line(c.text);
  end loop;   
  end if;
end;
/

------------ Block 1 ------------
def id = 1

update test_tab set scn = dbms_flashback.get_system_change_number, name = upper(name), team = upper(team) where id = '&id';
-- dbms_flashback.get_system_change_number call makes test stable.

column file_id new_value  fid
column block_nr new_value blk
column location new_value loc_block_1

select dbms_rowid.rowid_to_absolute_fno(t.rowid, 'K', 'TEST_TAB') file_id
      ,dbms_rowid.rowid_block_number(t.rowid)           block_nr
      ,'&hostname'||'_ora_'||trim('&myspid')||'_block_&id'||'.trc' location
from test_tab t where id = '&id';

prompt fid: &fid, blk: &blk, loc: &loc_block_1

alter session set tracefile_identifier = 'block_&id';
alter system flush buffer_cache;
alter system dump datafile &fid block &blk; 

------------ UNDO 1 ------------
column location new_value loc_undo_1
select ubafil    file_id
      ,ubablk    block_nr
   ,'&hostname'||'_ora_'||trim('&myspid')||'_undo_&id'||'.trc' location
from v$session s, v$transaction x
where s.saddr = x.ses_addr and s.sid = sys.dbms_support.mysid;

alter session set tracefile_identifier = 'undo_&id';
alter system flush buffer_cache;
alter system dump datafile &fid block &blk; 

------------ Block 2 ------------
def id = 2

update test_tab set scn = dbms_flashback.get_system_change_number, name = upper(name), team = upper(team) where id = '&id';

column location new_value loc_block_2
select dbms_rowid.rowid_to_absolute_fno(t.rowid, 'K', 'TEST_TAB') file_id
      ,dbms_rowid.rowid_block_number(t.rowid)           block_nr
      ,'&hostname'||'_ora_'||trim('&myspid')||'_block_&id'||'.trc' location
from test_tab t where id = '&id';

alter session set tracefile_identifier = 'block_&id';
alter system flush buffer_cache;
alter system dump datafile &fid block &blk; 

------------ UNDO 2 ------------
column location new_value loc_undo_2
select ubafil    file_id
      ,ubablk    block_nr
   ,'&hostname'||'_ora_'||trim('&myspid')||'_undo_&id'||'.trc' location
from v$session s, v$transaction x
where s.saddr = x.ses_addr and s.sid = sys.dbms_support.mysid;

alter session set tracefile_identifier = 'undo_&id';
alter system flush buffer_cache;
alter system dump datafile &fid block &blk; 

------------ Block 3 ------------
def id = 3

update test_tab set scn = dbms_flashback.get_system_change_number, name = upper(name), team = upper(team) where id = '&id';

column location new_value loc_block_3
select dbms_rowid.rowid_to_absolute_fno(t.rowid, 'K', 'TEST_TAB') file_id
      ,dbms_rowid.rowid_block_number(t.rowid)           block_nr
      ,'&hostname'||'_ora_'||trim('&myspid')||'_block_&id'||'.trc' location
from test_tab t where id = '&id';

alter session set tracefile_identifier = 'block_&id';
alter system flush buffer_cache;
alter system dump datafile &fid block &blk; 

------------ UNDO 3 ------------
column location new_value loc_undo_3
select ubafil    file_id
      ,ubablk    block_nr
   ,'&hostname'||'_ora_'||trim('&myspid')||'_undo_&id'||'.trc' location
from v$session s, v$transaction x
where s.saddr = x.ses_addr and s.sid = sys.dbms_support.mysid;

alter session set tracefile_identifier = 'undo_&id';
alter system flush buffer_cache;
alter system dump datafile &fid block &blk; 

------------ Block dump read ------------
set serveroutput on
exec dump_read('&loc_block_1', 'block');
exec dump_read('&loc_block_2', 'block');
exec dump_read('&loc_block_3', 'block');

------------ UNDO dump read ------------
exec dump_read('&loc_undo_1', 'undo');
exec dump_read('&loc_undo_2', 'undo');
exec dump_read('&loc_undo_3', 'undo');

Test Code 2


create sequence test_seq;
create table test_tab2 (id number, seq_nr number, cnt number);
create index test_tab2_ind on test_tab2 (id, seq_nr, cnt) compress 1;
create type type_c100 as table of varchar2(100);
/

create or replace procedure insert_no_commit(p_cnt number) as
begin
  insert into test_tab2 select 99, test_seq.nextval, level from dual connect by level <= p_cnt;
  dbms_lock.sleep(0.1);
end;
/

create or replace procedure test_select as
  l_tab   type_c100;
begin
    select rowidtochar(rowid) bulk collect into l_tab
    from   test_tab2 where  id = 99;
    dbms_lock.sleep(0.1);
end;
/

create or replace procedure insert_no_commit_loop(p_job_cnt number)
as
   l_job_id pls_integer;
begin
    for i in 1.. p_job_cnt loop
      dbms_job.submit(l_job_id, 'begin while true loop insert_no_commit(4); end loop; end;');
    end loop;
    commit;
end;   
/

create or replace procedure test_select_loop(p_job_cnt number)
as
   l_job_id pls_integer;
begin
    for i in 1.. p_job_cnt loop
      dbms_job.submit(l_job_id, 'begin while true loop test_select; end loop; end;');
    end loop;
    commit;
end;   
/