Monday, December 23, 2019

Oracle Database and Java Datetime Conversion Differences across Daylight Saving Time (DST)

Following the long discussions in Blog: Oracle Datetime (1) - Concepts, we also observed a difference in Datetime conversion across Daylight Saving Time (DST) between Java and Sql.

For example, at the Paris winter-to-summer DST switch on Sunday, 31 March 2019, clocks were turned forward one hour from 02:00:00 to 03:00:00.
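
The switch date can be confirmed from Java's own timezone data. A minimal sketch (the class name is ours, not part of the tests below):

import java.time.Instant;
import java.time.ZoneId;
import java.time.zone.ZoneOffsetTransition;

public class ShowParisTransition
{
   public static void main(String[] args)
   {
      ZoneId paris = ZoneId.of("Europe/Paris");
      // Next DST transition after 2019-03-01:
      // prints Transition[Gap at 2019-03-31T02:00+01:00 to +02:00]
      ZoneOffsetTransition t = paris.getRules().nextTransition(Instant.parse("2019-03-01T00:00:00Z"));
      System.out.println(t);
   }
}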

In this Blog, we make two tests to demonstrate the difference in Datetime arithmetic across the DST turning point: one in Java, the other in Sql.

Note: All tests are done in Oracle 12c, 18c, 19c and Java 8.


1. Java Test


Create the following Java class:

------------------ Java Code -----------------

import java.time.LocalDateTime;
import java.time.Month;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class TestDST
{
   private static final String DATE_FORMAT = "yyyy-MMMM-dd kk:mm:ss VV O";
   private static final DateTimeFormatter formatter = DateTimeFormatter.ofPattern(DATE_FORMAT);
    
   public static void main(String[] args)
   {
      ZoneId timeZone = ZoneId.of("Europe/Paris");    // timezone
      LocalDateTime localDT = LocalDateTime.of(2019, Month.APRIL, 01, 02, 11, 33);  //2019-04-01 02:11:33
      
      for (int h = 1; h <= 3; h++) {
        localDT = LocalDateTime.of(2019, Month.APRIL, 01, h, 11, 33);  
        ZonedDateTime zDT = localDT.atZone(timeZone);  //Zoned date time
        System.out.println("(" + formatter.format(zDT) + " - 1 Day) = " + formatter.format(zDT.minusDays(1))); 
        System.out.println("(" + formatter.format(zDT) + " - 2 Day) = " + formatter.format(zDT.minusDays(2)));
      }
      System.out.println();
      for (int h = 1; h <= 3; h++) {
        localDT = LocalDateTime.of(2019, Month.MARCH, 30, h, 11, 33);  
        ZonedDateTime zDT = localDT.atZone(timeZone);  //Zoned date time
        System.out.println("(" + formatter.format(zDT) + " + 1 Day) = " + formatter.format(zDT.plusDays(1))); 
        System.out.println("(" + formatter.format(zDT) + " + 2 Day) = " + formatter.format(zDT.plusDays(2)));
      }        
   }
}  
Compile and then run it:

javac -cp . TestDST.java

java -cp . TestDST
Here is the output of the Datetime minus and plus of 1 or 2 days in Java.

$ > java -cp . TestDST

--------------------------------- Datetime minus ---------------------------------
(2019-April-01 01:11:33 Europe/Paris GMT+2 - 1 Day) = 2019-March-31 01:11:33 Europe/Paris GMT+1
(2019-April-01 01:11:33 Europe/Paris GMT+2 - 2 Day) = 2019-March-30 01:11:33 Europe/Paris GMT+1
(2019-April-01 02:11:33 Europe/Paris GMT+2 - 1 Day) = 2019-March-31 03:11:33 Europe/Paris GMT+2
(2019-April-01 02:11:33 Europe/Paris GMT+2 - 2 Day) = 2019-March-30 02:11:33 Europe/Paris GMT+1
(2019-April-01 03:11:33 Europe/Paris GMT+2 - 1 Day) = 2019-March-31 03:11:33 Europe/Paris GMT+2
(2019-April-01 03:11:33 Europe/Paris GMT+2 - 2 Day) = 2019-March-30 03:11:33 Europe/Paris GMT+1

--------------------------------- Datetime plus ---------------------------------
(2019-March-30 01:11:33 Europe/Paris GMT+1 + 1 Day) = 2019-March-31 01:11:33 Europe/Paris GMT+1
(2019-March-30 01:11:33 Europe/Paris GMT+1 + 2 Day) = 2019-April-01 01:11:33 Europe/Paris GMT+2
(2019-March-30 02:11:33 Europe/Paris GMT+1 + 1 Day) = 2019-March-31 03:11:33 Europe/Paris GMT+2
(2019-March-30 02:11:33 Europe/Paris GMT+1 + 2 Day) = 2019-April-01 02:11:33 Europe/Paris GMT+2
(2019-March-30 03:11:33 Europe/Paris GMT+1 + 1 Day) = 2019-March-31 03:11:33 Europe/Paris GMT+2
(2019-March-30 03:11:33 Europe/Paris GMT+1 + 2 Day) = 2019-April-01 03:11:33 Europe/Paris GMT+2


2. Sql Test


Here are the test and output for the identical Datetime minus and plus of 1 or 2 days in Oracle Sql.

Sql > column datetime format a100
Sql > alter session set time_zone = 'Europe/Paris';

Sql > with base as (select timestamp'2019-04-01 01:11:00 Europe/Paris' v from dual)
          ,diff as (select level-1 v from dual connect by level <= 3)
      select '('||(base.v+numtodsinterval(hh.v, 'hour'))||' - '||dd.v||' day) = '||
              ((base.v+numtodsinterval(hh.v, 'hour')) - numtodsinterval(dd.v, 'day')) datetime 
        from base, diff hh, diff dd
      where dd.v > 0
      order by base.v, hh.v, dd.v; 

--------------------------------- Datetime minus ---------------------------------
(01-APR-2019 01:11:00 EUROPE/PARIS - 1 day) = 31-MAR-2019 00:11:00 EUROPE/PARIS
(01-APR-2019 01:11:00 EUROPE/PARIS - 2 day) = 30-MAR-2019 00:11:00 EUROPE/PARIS
(01-APR-2019 02:11:00 EUROPE/PARIS - 1 day) = 31-MAR-2019 01:11:00 EUROPE/PARIS
(01-APR-2019 02:11:00 EUROPE/PARIS - 2 day) = 30-MAR-2019 01:11:00 EUROPE/PARIS
(01-APR-2019 03:11:00 EUROPE/PARIS - 1 day) = 31-MAR-2019 03:11:00 EUROPE/PARIS
(01-APR-2019 03:11:00 EUROPE/PARIS - 2 day) = 30-MAR-2019 02:11:00 EUROPE/PARIS


Sql > with base as (select timestamp'2019-03-30 01:11:00 Europe/Paris' v from dual)
          ,diff as (select level-1 v from dual connect by level <= 3)
      select '('||(base.v+numtodsinterval(hh.v, 'hour'))||' + '||dd.v||' day) = '||
              ((base.v+numtodsinterval(hh.v, 'hour')) + numtodsinterval(dd.v, 'day')) datetime 
        from base, diff hh, diff dd
      where dd.v > 0
      order by base.v, hh.v, dd.v; 

--------------------------------- Datetime plus ---------------------------------
(30-MAR-2019 01:11:00 EUROPE/PARIS + 1 day) = 31-MAR-2019 01:11:00 EUROPE/PARIS
(30-MAR-2019 01:11:00 EUROPE/PARIS + 2 day) = 01-APR-2019 02:11:00 EUROPE/PARIS
(30-MAR-2019 02:11:00 EUROPE/PARIS + 1 day) = 31-MAR-2019 03:11:00 EUROPE/PARIS
(30-MAR-2019 02:11:00 EUROPE/PARIS + 2 day) = 01-APR-2019 03:11:00 EUROPE/PARIS
(30-MAR-2019 03:11:00 EUROPE/PARIS + 1 day) = 31-MAR-2019 04:11:00 EUROPE/PARIS
(30-MAR-2019 03:11:00 EUROPE/PARIS + 2 day) = 01-APR-2019 04:11:00 EUROPE/PARIS


3. Java vs. Sql


From the above output, we can see:


3.1. Java Arithmetic


Java tries to preserve the local Datetime literal as much as possible. For example,
  (2019-April-01 01:11:33 Europe/Paris GMT+2 - 1 Day) = 2019-March-31 01:11:33 Europe/Paris GMT+1
only shifts the calendar day back by one, even though the absolute time
  from 2019-March-31 01:11:33 Europe/Paris GMT+1
  to   2019-April-01 01:11:33 Europe/Paris GMT+2
is 23 hours, not a full day of 24 hours. If the resulting Datetime value does not exist in that timezone, Java rounds it up to the nearest valid higher value (towards positive infinity). For example,
  (2019-April-01 02:11:33 Europe/Paris GMT+2 - 1 Day) = 2019-March-31 03:11:33 Europe/Paris GMT+2
since "2019-March-31 02:11:33 Europe/Paris GMT+2" does not exist.


3.2. Sql Arithmetic


Sql performs a strict 24-hours-per-day calculation. For example,
  (01-APR-2019 01:11:00 EUROPE/PARIS - 1 day) = 31-MAR-2019 00:11:00 EUROPE/PARIS
Even when the literal Datetime value would not exist in that timezone, subtracting one day simply maps to the exact timestamp 24 hours earlier:
  (01-APR-2019 02:11:00 EUROPE/PARIS - 1 day) = 31-MAR-2019 01:11:00 EUROPE/PARIS


3.3. Result Differences


Now we end up with different results between Java and Sql; in the first case they even fall in different time zone offsets (GMT+2 vs. GMT+1).

Java: (2019-April-01 02:11:33 Europe/Paris GMT+2 - 1 Day) = 2019-March-31 03:11:33 Europe/Paris GMT+2
Sql:  (  01-APR-2019 02:11:00 EUROPE/PARIS - 1 day)       =   31-MAR-2019 01:11:00 EUROPE/PARIS

Java: (2019-April-01 01:11:33 Europe/Paris GMT+2 - 2 Day) = 2019-March-30 01:11:33 Europe/Paris GMT+1
Sql:  (  01-APR-2019 01:11:00 EUROPE/PARIS - 2 day)       =   30-MAR-2019 00:11:00 EUROPE/PARIS
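
If the Sql-style strict 24-hour arithmetic is wanted in Java, one option is to subtract a Duration, which operates on the instant time-line, instead of calendar days, which operate on the local time-line. A minimal sketch reusing the Paris example above (the class name is ours):

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TestDSTDuration
{
   public static void main(String[] args)
   {
      ZonedDateTime zDT = LocalDateTime.of(2019, 4, 1, 2, 11, 33)
                                       .atZone(ZoneId.of("Europe/Paris"));  // 2019-04-01 02:11:33 GMT+2
      // Calendar-day arithmetic (local time-line), as in the Java test above:
      // prints 2019-03-31T03:11:33+02:00[Europe/Paris]  (gap resolved forward)
      System.out.println(zDT.minusDays(1));
      // Exact 24-hour arithmetic (instant time-line), matching the Oracle Sql result:
      // prints 2019-03-31T01:11:33+01:00[Europe/Paris]
      System.out.println(zDT.minus(Duration.ofDays(1)));
   }
}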

Oracle 19.4 OracleJVM JAVA_JIT_ENABLED Not Working on AIX

Java run-time compilation (JIT), controlled in Oracle by the JAVA_JIT_ENABLED parameter, was first introduced as a performance upgrade in JDK 1.1.6 and is now a standard facility invoked whenever Java is used. It enables the JVM to exploit the features of the underlying computer hardware and operating system (OS).

On AIX, when calling an Oracle 19.4 Java stored procedure, which runs in the server-side integrated OracleJVM, JIT does not work even when enabled: it behaves the same as if it were disabled. If the Java stored procedure invokes system-dependent natively compiled Java classes (big endian), it falls back to the default OracleJVM-provided Java classes (little endian), hence the performance degradation.

In this Blog, we first demonstrate this behaviour with JarInputStream by importing a signed Jar file with JIT enabled (Oracle Doc: for platforms that support the JIT compiler, the default value of this parameter is true). Then we reveal the bug by investigating the call stacks, and finally present a workaround.

A JAR file is essentially a Zip file that contains an optional META-INF directory. If a manifest is present under the META-INF directory (security signatures) and verify is true (the default), JarInputStream attempts to verify each file of the signed Jar. The JarInputStream and JarOutputStream classes extend ZipInputStream and ZipOutputStream.
(For the JAR file format, see JAR File Specification;
for security providers, see Java Cryptography Architecture (JCA) Reference Guide.)
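
To see the effect of the verify flag in isolation, here is a stripped-down, standalone JarInputStream sketch with no database involved (the class name and timing output are ours; the full OracleJVM and client-side JDBC test code is appended in Sections 8 and 9):

import java.io.FileInputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;

public class JarVerifyDemo
{
   public static void main(String[] args) throws Exception
   {
      boolean verify = args.length > 0 && args[0].equalsIgnoreCase("true");
      // The second constructor argument controls signature verification (default is true)
      try (JarInputStream jis = new JarInputStream(new FileInputStream("/tmp/testJar.jar"), verify)) {
         JarEntry je;
         byte[] buf = new byte[32 * 1024];
         while ((je = jis.getNextJarEntry()) != null) {
            long start = System.currentTimeMillis();
            long bytes = 0;
            int n;
            while ((n = jis.read(buf)) > 0) {   // reading the entry triggers digest verification when verify = true
               bytes += n;
            }
            System.out.println(je.getName() + ": " + bytes + " bytes, "
                               + (System.currentTimeMillis() - start) + " ms");
         }
      }
   }
}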

Update (07 Feb. 2020): The bug should be fixed in Patch 29707582: EC.C COMPILE FAILURE.


1. Test Setup


Here are the complete steps and code to reproduce the issue.

------ 1. Create a Jar file under directory /tmp ------
$> jar -cvf testJar.jar test1.txt

------ 2. List Jar content ------
$>  unzip -l testJar.jar

  Archive:  testJar.jar
    Length      Date    Time    Name
  ---------  ---------- -----   ----
        144  12-01-2019 08:27   META-INF/MANIFEST.MF
        306  12-01-2019 08:27   META-INF/KUNALIAS.SF
       1471  12-01-2019 08:27   META-INF/KUNALIAS.DSA
          0  12-01-2019 08:26   META-INF/
  110947240  12-01-2019 08:23   test1.txt
  ---------                     -------
  110949161                     5 files
       
------ 3. Create a key ------
$> keytool -genkey -alias kunalias1 -keystore kunkey1      

------ 4. Sign Jar file ------
$> jarsigner -keystore kunkey1 -storepass kunkey1 -keypass kunkey1 testJar.jar kunalias1
  
------ 5. Create a DB table to store Jar entries, and a Plsql insert function to be called by Java ------

create table test_blob_tab(jar_name varchar2(100), jar_entry blob, sts timestamp with time zone);

create or replace function insert_blob(p_jar_name varchar2, p_jar_entry blob) return number as
  l_rowcount integer;
begin
  insert into test_blob_tab values (p_jar_name, p_jar_entry, systimestamp);
  l_rowcount := sql%rowcount;
  commit;
  return l_rowcount;
end;
/

------ 6. Create Java stored procedure OracleJVMJarInputStream.java  ------
(see Blog appended code in Section 8 OracleJVM Test Code).
     
------ 7. Publish the Java stored procedure to Plsql  ------

create or replace procedure OracleJVMJarInputStream(p_verify varchar2, p_info varchar2) as language java
name 'OracleJVMJarInputStream.run(java.lang.String, java.lang.String)';
/


2. Test Run


We will make two tests, one with signature verification and one without. The tests show that the run with signature verification takes about 34 minutes, while the other takes about 12 seconds.


2.1. Invoke OracleJVMJarInputStream with signature verify



Sql > set serveroutput on size 50000
Sql > exec dbms_java.set_output(50000); 

Sql > exec OracleJVMJarInputStream('true', 'no');

      ********* getNextJarEntry *********
      ------ NextJarEntry: 1, Name: META-INF/KUNALIAS.SF ------
               getNextEntry ElapsedMills: 2, at: 1576576740110
               Insert DB 1 row, blob size 0 at Sun Dec 01 10:59:00 CET 2019
               readContent ElapsedMills: 20, at: 1576576740130
      ------ NextJarEntry: 2, Name: META-INF/KUNALIAS.DSA ------
               getNextEntry ElapsedMills: 0, at: 1576576740130
               Insert DB 1 row, blob size 0 at Sun Dec 01 10:59:06 CET 2019
               readContent ElapsedMills: 6440, at: 1576576746571
      ------ NextJarEntry: 3, Name: META-INF/ ------
               getNextEntry ElapsedMills: 0, at: 1576576746572
               Insert DB 1 row, blob size 0 at Sun Dec 01 10:59:06 CET 2019
               readContent ElapsedMills: 16, at: 1576576746588
      ------ NextJarEntry: 4, Name: test1.txt ------
               getNextEntry ElapsedMills: 1, at: 1576576746589
               Insert DB 1 row, blob size 110920480 at Sun Dec 01 11:33:01 CET 2019
               readContent ElapsedMills: 2034693, at: 1576578781282
       
      Elapsed: 00:34:01.35


2.2. Invoke OracleJVMJarInputStream without signature verify



Sql > exec OracleJVMJarInputStream('false', 'no');

      ********* getNextJarEntry *********
      ------ NextJarEntry: 1, Name: META-INF/KUNALIAS.SF ------
               getNextEntry ElapsedMills: 2, at: 1576588511208
               Insert DB 1 row, blob size 0 at Sun Dec 01 14:15:11 CET 2019
               readContent ElapsedMills: 18, at: 1576588511226
      ------ NextJarEntry: 2, Name: META-INF/KUNALIAS.DSA ------
               getNextEntry ElapsedMills: 0, at: 1576588511226
               Insert DB 1 row, blob size 0 at Sun Dec 01 14:15:11 CET 2019
               readContent ElapsedMills: 10, at: 1576588511237
      ------ NextJarEntry: 3, Name: META-INF/ ------
               getNextEntry ElapsedMills: 0, at: 1576588511237
               Insert DB 1 row, blob size 0 at Sun Dec 01 14:15:11 CET 2019
               readContent ElapsedMills: 10, at: 1576588511247
      ------ NextJarEntry: 4, Name: test1.txt ------
               getNextEntry ElapsedMills: 0, at: 1576588511248
               Insert DB 1 row, blob size 110920480 at Sun Dec 01 14:15:23 CET 2019
               readContent ElapsedMills: 12271, at: 1576588523519
      
      Elapsed: 00:00:12.44


3. Reasoning


The above two tests show a factor of roughly 170 difference (34 minutes vs. 12 seconds) between Jar import with and without signature verification. For a Jar entry of size n, the cost appears to grow non-linearly with its size (O(n^2) or higher), so when reading large entries from the Jar archive, the performance degradation becomes dramatic.
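
To sanity-check, outside the database, whether SHA-256 digesting itself scales linearly with the input size, a rough micro-benchmark sketch like the following can be used (buffer and data sizes are arbitrary choices, JIT warm-up is ignored, and a standalone JVM does not reproduce the OracleJVM environment):

import java.security.MessageDigest;
import java.util.Random;

public class Sha256Timing
{
   public static void main(String[] args) throws Exception
   {
      byte[] chunk = new byte[64 * 1024];
      new Random(42).nextBytes(chunk);
      // Digest 10 MB, 20 MB, 40 MB, ... and print the elapsed time for each size.
      // If digesting is linear, the times should roughly double from one row to the next.
      for (long totalBytes = 10L << 20; totalBytes <= 160L << 20; totalBytes *= 2) {
         MessageDigest md = MessageDigest.getInstance("SHA-256");
         long start = System.currentTimeMillis();
         for (long done = 0; done < totalBytes; done += chunk.length) {
            md.update(chunk);
         }
         md.digest();
         System.out.println((totalBytes >> 20) + " MB: "
                            + (System.currentTimeMillis() - start) + " ms, provider "
                            + md.getProvider().getName());
      }
   }
}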


3.1. Code Path


Open two Sqlplus sessions. In the first session, with (sid, serial#) = (101, 1010), we run the import with signature verification again:

Sql > set serveroutput on size 50000
Sql > exec dbms_java.set_output(50000); 
Sql > alter session set tracefile_identifier = 'java_dump_1';  
Sql > exec OracleJVMJarInputStream('true', 'no');
In the second session, we take a few Java stack dumps:

Sql > begin
       for i in 1..600 loop
         sys.dbms_java_dump.dump(sys.dbms_java_dump.java_dump_stack, 101, 1010);
         dbms_lock.sleep(0.1);
       end loop;
      end;
      /
If we open the dump file, we can see that most of the dumps have call stacks like:

*** 2019-12-01T13:34:24.768993+01:00
*** Java stack trace for the active thread
  at sun.security.provider.ByteArrayAccess.b2iBig(ByteArrayAccess.java:247)
  at sun.security.provider.ByteArrayAccess.b2iBig64(ByteArrayAccess.java:309)
  at sun.security.provider.ByteArrayAccess.b2iBig(ByteArrayAccess.java:256)
  at sun.security.provider.ByteArrayAccess.b2iBig64(ByteArrayAccess.java:309)
  at sun.security.provider.SHA2.implCompress(SHA2.java:196)
  at sun.security.provider.DigestBase.implCompressMultiBlock(DigestBase.java:141)
  at sun.security.provider.DigestBase.engineUpdate(DigestBase.java:128)
  at java.security.MessageDigest$Delegate.engineUpdate(MessageDigest.java:584)
  at java.security.MessageDigest.update(MessageDigest.java:325)
  at sun.security.util.ManifestEntryVerifier.update(ManifestEntryVerifier.java:173)
  at java.util.jar.JarVerifier.update(JarVerifier.java:227)
  at java.util.jar.JarInputStream.read(JarInputStream.java:212)
  at java.util.zip.ZipInputStream.closeEntry(ZipInputStream.java:140)
  at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:118)
  at java.util.jar.JarInputStream.getNextEntry(JarInputStream.java:142)
  at java.util.jar.JarInputStream.getNextJarEntry(JarInputStream.java:179)
  at OracleJVMJarInputStream.run(OracleJVMJarInputStream:49)


3.2. Java Class: sun.security.provider.ByteArrayAccess


If we look at the Java 8 ByteArrayAccess source delivered in sun.security.provider, its documentation states that the methods are optimized for little-endian architectures and for big-endian architectures such as SPARC.

Following the call stack above, we can see that sun.security.provider.DigestBase calls the big-endian-specific b2iBig64 for the SHA2 algorithm, which is in turn implemented by b2iBig, which contains a while loop.

/**
 * Optimized methods for converting between byte[] and int[]/long[], both for
 * big endian and little endian byte orders.
 *
 * Currently, it includes a default code path plus two optimized code paths.
 * One is for little endian architectures that support full speed int/long
 * access at unaligned addresses (i.e. x86/amd64). The second is for big endian
 * architectures (that only support correctly aligned access), such as SPARC.
 * These are the only platforms we currently support, but other optimized
 * variants could be added as needed.
 */
final class ByteArrayAccess {
...
          /**
           * byte[] to int[] conversion, big endian byte order.
           */
      static void b2iBig(byte[] in, int inOfs, int[] out, int outOfs, int len) {
      ...
             while (inOfs < len) {
       
      
          // Special optimization of b2iBig(in, inOfs, out, 0, 64)
      static void b2iBig64(byte[] in, int inOfs, int[] out) { 
      ...
             b2iBig(in, inOfs, out, 0, 64);


3.3. Security Provider


If we list all Security Providers in OracleJVM:

Sql > exec OracleJVMJarInputStream('true', 'info');

      ===============0. Provider: SUN version 1.8
        ----- 97. Element:MessageDigest.SHA-256 
      ===============1. Provider: SunRsaSign version 1.8
      ===============2. Provider: SunJSSE version 1.8
      ===============3. Provider: SunJCE version 1.8
      ===============4. Provider: SunJGSS version 1.8
      ===============5. Provider: SunSASL version 1.8
      ===============6. Provider: XMLDSig version 1.8
      ===============7. Provider: SunPCSC version 1.8
we can see "MessageDigest.SHA-256" (for SHA2) is provided by "SUN version 1.8". No any IBM Security Provider is listed.


4. Deep into JIT Compiled Native Code


Now we can take a further look at the allocated shared memory segments from UNIX.


4.1. JIT Compiled Native Code


JIT run-time compiled native code is dynamically allocated in shared memory and can be monitored with the ipcs command. If we compare the following ipcs output for shared memory allocated to JIT compiled native code between the non-working Oracle 19.4 and the properly working Oracle 19.3:

---=============== Oracle 19.4 on AIX, JIT not Working ===============---
$> ipcs -ar
T        ID KEY        MODE        OWNER   GROUP CREATOR CGROUP NATTCH SEGSZ  CPID LPID ATIME  DTIME   CTIME  RTFLAGS NAME
Shared Memory:
m         - 0xffffffff --rw-r----- oracle  dba   oracle  dba    0      0 30146690  0 no-entry no-entry 15:28:51  -    /JOEZSHM_testdb194_1_0_1_0_0_2971349679
m         - 0xffffffff --rw-r----- oracle  dba   oracle  dba    0      0 18219016  0 no-entry no-entry 15:28:43  -    /JOEZSHM_testdb194_1_0_0_0_0_1989310950
m         - 0xffffffff --rw-r----- oracle  dba   oracle  dba    0      0 20185178  0 no-entry no-entry 14:23:14  -    /JOEZSHM_testdb194_1_0_1_0_0_2979596769

---=============== Oracle 19.3 on AIX, JIT works ===============---

$> ipcs -ar
T        ID KEY        MODE        OWNER  GROUP CREATOR CGROUP NATTCH SEGSZ    CPID     LPID  ATIME    DTIME   CTIME   RTFLAGS NAME
Shared Memory:
m         - 0xffffffff --rw-rw---- oracle dba   oracle  dba      0    16777216 31588636 0    no-entry no-entry 13:30:56   -    /JOEZSHM_testdb193_1_0_1_0_0_2979282907
m         - 0xffffffff --rw-rw---- oracle dba   oracle  dba      0     8388608 31588636 0    no-entry no-entry  8:10:45   -    /JOEZSHM_testdb193_1_0_1_1_0_2979282907
m         - 0xffffffff --rw-rw---- oracle dba   oracle  dba      0    16777216 34931180 0    no-entry no-entry 12:27:51   -    /JOEZSHM_testdb193_1_0_0_0_0_1989371893
we can see two differences: the first is the permissions, the second is the shared segment size:

  Oracle 19.4 MODE   --rw-r----- 
  Oracle 19.3 MODE   --rw-rw----  
  
  Oracle 19.4 SEGSZ  0 
  Oracle 19.3 SEGSZ  not 0: one is 8388608 (8MB) and two are 16777216 (16MB)
Later we will see that the MZ00 process trace file constantly shows the error message "Unable to allocate code space", probably because of the insufficient permissions and 0-sized shared segments in Oracle 19.4.

Since enabled JIT does not work on AIX (big endian) in Oracle 19.4, it falls back to the default OracleJVM-provided Java classes (little endian), hence the performance degradation.

As a small test, we can also create a shared segment with

  SEGSZ: 1024 bytes, and MODE: "--rw-r-----"
as follows (see IBM Doc: shmget - Allocate shared memory):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/stat.h>   /* S_IRUSR, S_IWUSR, S_IRGRP */

int main()
{
 key_t key;
 int   shmid;

 /* 1024-byte segment, mode 0640 => "--rw-r-----" in ipcs */
 key   = ftok("/usr/testfile2", 2);
 shmid = shmget(key, 1024, IPC_CREAT|S_IRUSR|S_IWUSR|S_IRGRP);
 printf("shmget id: %i\n", shmid);
 return 0;
}

-- compile and run the C code
 
$ > ipcs -ar
T        ID     KEY        MODE        OWNER  GROUP CREATOR CGROUP NATTCH SEGSZ CPID    LPID   ATIME    DTIME  CTIME  RTFLAGS NAME
Shared Memory:
m 329257579     0xffffffff --rw-r----- oracle dba   oracle  dba    0      1024  29229194   0 no-entry no-entry 9:17:54


4.2. Oracle JAVAVM JIT Compiler Slave MZ00


According to the Oracle Doc, the JIT compiler runs as an MMON slave, in a single background process MZ00 per DB instance. While the JIT compiler is running and actively compiling methods, we may see this background process consuming CPU and memory resources equivalent to an active user Java session.

In fact, the above Oracle 19.4 ipcs output shows that CPID (creator process ID) 20185178 is the UNIX process id of MZ00.

Opening the Oracle 19.4 MZ00 trace file aix194db_mz00_20185178.trc, we can see the failure messages (turning on 10046 event trace on MZ00 shows more activity):

Unix process pid: 20185178, image: oracle@aix194db (MZ00)

*** 2019-12-01T22:23:14.257628+01:00
*** SERVICE NAME:(SYS$BACKGROUND) 2019-12-01T22:23:14.257687+01:00
*** MODULE NAME:(MMON_SLAVE) 2019-12-01T22:23:14.257699+01:00
*** ACTION NAME:(JAVAVM JIT slave action) 2019-12-01T22:23:14.257711+01:00


*** 2019-12-01T22:23:14.257754+01:00
JIT running
joez_shm_open_object failed: size = 16777216, extnam = /JOEZSHM_testdb194_1_0_1_0_0_2979596769  flags = 0x34 
joez: Failed loading machine code: Unable to allocate code space
Done compiling java/lang/SecurityManager$1.run

*** 2019-12-01T22:23:26.185994+01:00
joez_shm_open_object failed: size = 16777216, extnam = /JOEZSHM_testdb194_1_0_0_0_0_1989310950  flags = 0x34 
joez: Failed loading machine code: Unable to allocate code space
Done compiling oracle/aurora/rdbms/EnvironmentSpecificImpl.securityManagerImpl
The above output shows that each joez_shm_open_object is identified by "extnam", which may indicate that the Java launcher uses the Java extension path to find classes.

However, if we search the bootstrap classes in rt.jar, we can see the IBM security provider and the IBM SHA2 algorithm:

unzip -l $ORACLE_HOME/jdk/jre/lib/rt.jar  
       0  02-07-2019 17:37   com/ibm/security/bootstrap/
     3339  10-30-2018 16:47   com/ibm/security/bootstrap/SHA2.class    
The above "flags" value "0x34" is ASCII "4", which could signify GroupMode "r--" in above ipcs output.

If we truss the MZ00 JIT slave, it constantly shows "Err#22 EINVAL". According to the AIX Doc on lseek error codes, EINVAL stands for:

The resulting offset would be greater than the maximum offset allowed for the file or device associated with FileDescriptor. The lseek subroutine was used with a file descriptor obtained from a call to the shm_open subroutine.

Since the shared segments created by Oracle 19.4 are 0 sized, MZ00 throws the above offset error.

$> truss -dp 20185178
Sun Dec 01 22:25:31 2019
0.0000:        thread_post(50135221)            = 0
0.0009:        shm_open(0x0FFFFFFFFFFE4590, 258, 504) = 3606
0.0013:        lseek(3606, 0, 1)                Err#22 EINVAL
0.0018:        lseek(3259, 0, 1)                = 701432
0.0072:        kwrite(3259, "\n * * *   2 0 1 9 - 1 2".., 38) = 38
0.0075:        kwrite(3505, " ! 0 e 4 c\n", 6)  = 6
0.0078:        lseek(3259, 0, 1)                = 701470
0.0080:        kwrite(3259, " j o e z _ s h m _ o p e".., 108) = 108
0.0083:        kwrite(3505, " J ? g T y P ~ 0 i 1\n", 11) = 11
0.0086:        kwrite(3259, "\n", 1)            = 1
0.0089:        kwrite(3259, " j o e z :   F a i l e d".., 64) = 64
0.0092:        kwrite(3259, "\n", 1)            = 1
0.0172:        kwrite(3259, " D o n e   c o m p i l i".., 77) = 77
0.0174:        kwrite(3259, "\n", 1)            = 1
0.0178:        kwrite(3259, " C o m p i l i n g   c o".., 119) = 119
0.0181:        kwrite(3259, "\n", 1)            = 1
1.3517:        thread_post(50135221)            = 0
1.3574:        shm_open(0x0FFFFFFFFFFE4590, 258, 504) = 3607
1.3578:        lseek(3607, 0, 1)                Err#22 EINVAL     


5. Reproduce In Lower Oracle Versions


The above observed behaviour shows that JIT enabled behaves the same as JIT disabled in Oracle 19.4. So we can try to disable JIT in Oracle versions before 19.4 on AIX and check whether the behaviour is reproducible (the following tests are done in Oracle 12c, 18c and 19.3 on AIX).


5.1. JIT enabled Test


Take one Oracle 12c DB on AIX with JAVA_JIT_ENABLED set to true and run the same test with signature verification; it takes about 11 seconds, compared to the above 34 minutes in Oracle 19.4.

Sql > set serveroutput on size 50000
Sql > exec dbms_java.set_output(50000); 
Sql > alter session set tracefile_identifier = 'java_dump_1';  
Sql > exec OracleJVMJarInputStream('true', 'no');

  ********* getNextJarEntry *********
  ------ NextJarEntry: 1, Name: META-INF/KUNALIAS.SF ------
           getNextEntry ElapsedMills: 70, at: 1576667113846
           Insert DB 1 row, blob size 0 at Sun Dec 01 12:05:14 CET 2019
           readContent ElapsedMills: 333, at: 1576667114180
  ------ NextJarEntry: 2, Name: META-INF/KUNALIAS.DSA ------
           getNextEntry ElapsedMills: 0, at: 1576667114180
           Insert DB 1 row, blob size 0 at Sun Dec 01 12:05:14 CET 2019
           readContent ElapsedMills: 504, at: 1576667114684
  ------ NextJarEntry: 3, Name: META-INF/ ------
           getNextEntry ElapsedMills: 0, at: 1576667114685
           Insert DB 1 row, blob size 0 at Sun Dec 01 12:05:14 CET 2019
           readContent ElapsedMills: 2, at: 1576667114687
  ------ NextJarEntry: 4, Name: test1.txt ------
           getNextEntry ElapsedMills: 3, at: 1576667114690
           Insert DB 1 row, blob size 110920480 at Sun Dec 01 12:05:24 CET 2019
           readContent ElapsedMills: 10333, at: 1576667125023
  
  Elapsed: 00:00:11.51


5.2. JIT disabled Test


Now we remove all JIT compiled native code from the same Oracle 12c DB, disable JIT, and re-run the above test (for details, see Blog: Remove Stale Native Code Files on AIX).

------ 1. List JIT compiled native code ------ 

$ > ipcs -ar |grep -e JOXSHM_EXT | awk '{cnt+=1; sum+=$10} END {print "Count=",cnt,"Sum=",sum,"Average=",sum/cnt}'
    Count= 1918 Sum= 24076288 Average= 12552.8
      
------ 2. Remove JIT compiled native code ------ 

$ > ipcs -ar |grep -e JOXSHM_EXT | awk ' {name = $17; cmd = "ipcrm -r -m " name; print "Run ", cmd; system( cmd )}'

------ 3. Disable java_jit_enabled ------ 

Sql > alter system set java_jit_enabled = false scope=both;

------ 4. Restart DB ------ 

Sql > startup force

------ 5. Slow Test: 16 minutes ------ 

Sql > exec OracleJVMJarInputStream('true', 'no');
  ********* getNextJarEntry *********
  ------ NextJarEntry: 1, Name: META-INF/KUNALIAS.SF ------
           getNextEntry ElapsedMills: 72, at: 1576667818306
           Insert DB 1 row, blob size 0 at Sun Dec 01 12:16:58 CET 2019
           readContent ElapsedMills: 872, at: 1576667819179
  ------ NextJarEntry: 2, Name: META-INF/KUNALIAS.DSA ------
           getNextEntry ElapsedMills: 1, at: 1576667819180
           Insert DB 1 row, blob size 0 at Sun Dec 01 12:17:02 CET 2019
           readContent ElapsedMills: 3456, at: 1576667822636
  ------ NextJarEntry: 3, Name: META-INF/ ------
           getNextEntry ElapsedMills: 1, at: 1576667822637
           Insert DB 1 row, blob size 0 at Sun Dec 01 12:17:02 CET 2019
           readContent ElapsedMills: 3, at: 1576667822640
  ------ NextJarEntry: 4, Name: test1.txt ------
           getNextEntry ElapsedMills: 37, at: 1576667822677
           Insert DB 1 row, blob size 110920480 at Sun Dec 01 12:33:23 CET 2019
           readContent ElapsedMills: 980996, at: 1576668803674
  
  Elapsed: 00:16:27.43         

------ 6. Dump Java call stack of test session ------ 
    -- Open a second Sqlplus session and take a few Java stack dumps:
  
Sql > begin
       for i in 1..600 loop
         sys.dbms_java_dump.dump(sys.dbms_java_dump.java_dump_stack, 101, 1010);
         dbms_lock.sleep(0.1);
       end loop;
      end;
      /
 
    -- The call stack shows the same sun.security.provider invoked for the SHA2 algorithm:
    --   sun.security.provider.SHA2.implCompress(SHA2.java:193)
    
------ 7. List JIT compiled native code ------ 

$ > ipcs -ar |grep -e JOXSHM_EXT

    -- JIT disabled: no rows returned, no shared memory segments created
The above test in the Oracle 12c DB shows that it takes about 16 minutes with JIT disabled, compared to the above 11 seconds with JIT enabled. With JIT disabled, no shared memory segments are created, and OracleJVM cannot benefit from optimized Java classes that depend on the computer hardware and operating system (OS). Therefore the same behaviour can be reproduced by disabling JIT in Oracle versions lower than 19.4.


5.3. Shared Memory Segment for JIT compiled native code


We also observed changes in JIT compiled Shared Memory Segment naming and size across Oracle versions.

 
--- Oracle 12c JIT compiled Shared Memory Segment naming ---

  /JOXSHM_EXT_420_testdb12c_11534967

--- Oracle 18c JIT compiled Shared Memory Segment naming ---
  /JOEZSHM_testdb18c_1_0_32075_0_0_1164116801
  
--- Oracle 19c JIT compiled Shared Memory Segment naming ---
  /JOEZSHM_testdb194_1_0_1_0_0_3031177879
Up to 18c, there are about 1000 JIT compiled Shared Memory Segments of around 10KB each; probably each one represents one Java class.

From 19c on, there are only a few (fewer than 10) JIT compiled Shared Memory Segments of around 8MB or 16MB each; probably they are now grouped by Java packages.


6. Workaround


As a workaround, we can use a standard Java JVM to import Jar files into the DB over JDBC. Besides fixing the performance problem, one more advantage is that the workaround can be called from any other platform, even a little-endian machine (see the appended code in Section "9. Standard Java JVM Test Code").

Here is one test run, which takes about 25 seconds.

$ > $ORACLE_HOME/jdk/bin/java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:. \
                                  JavaJVMJarInputStream \
                                 "jdbc:oracle:thin:k/s@testdb194:1522:testdb194" true  no

  ********* getNextJarEntry *********
  ------ NextJarEntry: 1, Name: META-INF/KUNALIAS.SF ------
           getNextEntry ElapsedMills: 1, at: 1576670454217
           Insert DB 1 row, blob size 0 at Sun Dec 01 13:00:55 CET 2019
           readContent ElapsedMills: 1116, at: 1576670455334
  ------ NextJarEntry: 2, Name: META-INF/KUNALIAS.DSA ------
           getNextEntry ElapsedMills: 0, at: 1576670455335
           Insert DB 1 row, blob size 0 at Sun Dec 01 13:00:55 CET 2019
           readContent ElapsedMills: 151, at: 1576670455486
  ------ NextJarEntry: 3, Name: META-INF/ ------
           getNextEntry ElapsedMills: 0, at: 1576670455486
           Insert DB 1 row, blob size 0 at Sun Dec 01 13:00:55 CET 2019
           readContent ElapsedMills: 18, at: 1576670455504
  ------ NextJarEntry: 4, Name: test1.txt ------
           getNextEntry ElapsedMills: 0, at: 1576670455504
           Insert DB 1 row, blob size 110920480 at Sun Dec 01 13:01:18 CET 2019
           readContent ElapsedMills: 23346, at: 1576670478850
If we re-run the test with JMX options and monitor it with Java VisualVM:

$ > $ORACLE_HOME/jdk/bin/java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:. \
                                 -Dcom.sun.management.jmxremote.port=1521 \
                                 -Dcom.sun.management.jmxremote.authenticate=false \
                                 -Dcom.sun.management.jmxremote.ssl=false \
                                  JavaJVMJarInputStream \
                                 "jdbc:oracle:thin:k/s@testdb194:1522:testdb194" true  no
From VisualVM, we take a few thread dumps. The call stack shows that the com.ibm.security.bootstrap provider is invoked for the SHA2 algorithm. Here is one thread dump:

2019-12-01 13:36:17
Full thread dump IBM J9 VM (2.9 JRE 1.8.0 AIX ppc64-64-Bit Compressed References 20190124_408237 (JIT enabled, AOT enabled)
OpenJ9   - 9c77d86
OMR      - dad8ba7
IBM      - e2996d1):

"main" - Thread t@1
   java.lang.Thread.State: RUNNABLE
 at com.ibm.security.bootstrap.SHA2.implCompress(SHA2.java:278)
 at com.ibm.security.bootstrap.DigestBase.engineUpdate(DigestBase.java:157)
 at java.security.MessageDigest$Delegate.engineUpdate(MessageDigest.java:595)
 at java.security.MessageDigest.update(MessageDigest.java:336)
 at sun.security.util.ManifestEntryVerifier.update(ManifestEntryVerifier.java:185)
 at java.util.jar.JarVerifier.update(JarVerifier.java:238)
 at java.util.jar.JarInputStream.read(JarInputStream.java:223)
 at java.io.FilterInputStream.read(FilterInputStream.java:118)
 at JavaJVMJarInputStream.jarEntryReadInsertDB(JavaJVMJarInputStream.java:111)
 at JavaJVMJarInputStream.main(JavaJVMJarInputStream.java:88)
If we list all Security Providers by:

$ > $ORACLE_HOME/jdk/bin/java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:. \
                                  JavaJVMJarInputStream \
                                 "jdbc:oracle:thin:k/s@testdb194:1522:testdb194" true info
                                 
 ===============0. Provider: IBMJSSE2 version 1.8
 ===============1. Provider: IBMJCE version 1.8
           ----- 23. Element:MessageDigest.SHA2
 ===============2. Provider: IBMJGSSProvider version 8.0
 ===============3. Provider: IBMCertPath version 1.8
 ===============4. Provider: IBMSASL version 1.8
 ===============5. Provider: IBMXMLCRYPTO version 8.0
 ===============6. Provider: IBMXMLEnc version 8.0
 ===============7. Provider: IBMSPNEGO version 8.0
 ===============8. Provider: SUN version 1.8
           ----- 1. Element:Provider.id info
           ----- 2. Element:Provider.id className
           ----- 3. Element:Policy.JavaPolicy
           ----- 4. Element:Provider.id version
           ----- 5. Element:Provider.id name
we can see "MessageDigest.SHA2" is only provided by "IBMJCE version 1.8". There is one single "SUN version 1.8" Provider, which contains 5 Elements.


7. Related Work


Here are two Blogs about the Java JIT native compiler:
     Remove Stale Native Code Files on AIX
     What the heck are the /dev/shm/JOXSHM_EXT_x files on Linux?

One Blog about both PLSQL and JAVA native compiler code:
     Native Code Files: Ora-7445 [Ioc_pin_shared_executable_object()] Reproducing

One Blog about Linux:
     joez: Failed loading machine code: Unable to allocate code space
    On docker, Ubuntu and Oracle RDBMS

One Blog about SHA2 provider performance in Linux:
    SHA2 calculation extremely slow under unknown conditions

In Oracle, there is one Java-related trace event (ORA-10880). We also tried this event, but no lines were recorded in the trace file.
  ORA-10880: trace Java VM execution

  alter session set events '10880 trace name context forever, level 4294967295'; 
    (where 4294967295 (2^32-1) seems the maximum tracing level with most details)
By the way, besides JIT, the platform-specific IBM JVM also benefits from its Ahead-Of-Time (AOT) compilation.


8. OracleJVM Test Code



create or replace and compile java source named "OracleJVMJarInputStream" as 
import java.io.FileInputStream; 
import java.io.IOException; 
import java.io.InputStream; 
import java.io.OutputStream;
import java.io.BufferedInputStream;
import java.util.Date;
import java.util.Arrays; 
import java.util.jar.JarInputStream; 
import java.util.jar.JarEntry;
import java.util.jar.Manifest;
import java.util.jar.Attributes;
import java.util.zip.ZipEntry; 
import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.CallableStatement;
import java.sql.Blob;
import oracle.jdbc.OracleDriver;
import oracle.jdbc.OracleTypes;
import oracle.jdbc.OraclePreparedStatement;
import oracle.sql.BLOB;
import java.security.Provider;
import java.security.Security;
import java.util.Enumeration;

class OracleJVMJarInputStream
{   
    static String INSERT_DB = "begin :1 := insert_blob(:2, :3); end;";
    
    public static void run(String verifyStr, String infoStr) throws Exception {   
        boolean verify    = verifyStr.equalsIgnoreCase("true")? true:false;
        boolean printInfo = infoStr.equalsIgnoreCase("info")? true:false;
        FileInputStream fis = new FileInputStream("/tmp/testJar.jar"); 
        JarInputStream jis  = new JarInputStream(fis, verify);  // default verify is true
        
        if (printInfo) {
            printManifest(jis);
            printSecurityProviders();
        }    
        
        Connection conn;       
        conn = new OracleDriver().defaultConnection();
        
        long start, finish, elapsed;
        int  seq = 1;
        JarEntry je;
        String entryName;
        start = System.currentTimeMillis();
        
        System.out.println("\n********* getNextJarEntry *********");
        try {
             while ((je = jis.getNextJarEntry()) != null) { 
               finish = System.currentTimeMillis();
               elapsed = finish - start;
               entryName = je.getName();
               System.out.println("------ NextJarEntry: " + seq + ", Name: " + entryName + " ------"); 
               System.out.println("         getNextEntry ElapsedMills: " + elapsed + ", at: " + finish); 
               seq++;
               start = System.currentTimeMillis(); 
               jarEntryReadInsertDB(conn, entryName, jis);
               finish = System.currentTimeMillis();
               elapsed = finish - start;
               System.out.println("         readContent ElapsedMills: " + elapsed + ", at: " + finish); 
               start = System.currentTimeMillis();
             }        
             jis.close(); 
             conn.close();
        } catch (Exception e) {
             e.printStackTrace();
        }        
    }
    
    static void jarEntryReadInsertDB(Connection conn, String jarEntryName, JarInputStream jis) {
        BLOB blob = null;
        byte[] buf;
        OutputStream blobStream;
        int bytesRead;     
        try {
             blob = BLOB.createTemporary(conn, false, BLOB.DURATION_SESSION);
             buf = new byte[blob.getChunkSize()];
             blobStream = blob.getBinaryOutputStream();
             while (jis.available() > 0) {
                 bytesRead = jis.read(buf);
                 if (bytesRead > 0) {           // read() returns -1 at the end of the current entry
                     blobStream.write(buf, 0, bytesRead);
                 }
             }
             blobStream.close();                // flush remaining buffered bytes into the temporary BLOB
             CallableStatement cStmt = conn.prepareCall(INSERT_DB);
             cStmt.registerOutParameter(1, OracleTypes.INTEGER);
             cStmt.setString(2, jarEntryName);
             cStmt.setBlob(3, blob);
             cStmt.execute();
             int insRows = cStmt.getInt(1);
             System.out.println("         Insert DB " + insRows + " row, blob size " + blob.length() + " at " + new Date());
             blob.freeTemporary();
             cStmt.close();
             
             // OraclePreparedStatement pstmt = (OraclePreparedStatement) conn.prepareStatement("insert into test_blob_tab values(?, ?)");
             // pstmt.setString(1, jarEntryName);
             // pstmt.setBLOB(2, blob);
             // pstmt.execute();
             // pstmt.close();
        } catch (Exception e) {
             e.printStackTrace();
        }
    } 

    static void printManifest(JarInputStream jis) {    
        Manifest manifest = jis.getManifest();
        
        if (manifest != null) {
            System.out.println("********* Print Manifest *********");
            System.out.println("Manifest="  +  manifest.toString());
            Attributes mainAttributes = manifest.getMainAttributes();
            System.out.println("Manifest Manifest-Version="  +  mainAttributes.getValue("Manifest-Version"));
            System.out.println("Manifest Created-By="        +  mainAttributes.getValue("Created-By"));
        } else {
            System.out.println("********* No Manifest *********");
        }     
    }
    
    static void printSecurityProviders() throws Exception {
        Provider p[] = Security.getProviders();
        for (int i = 0; i < p.length; i++) {
            System.out.println(" ===============" + i + ". Provider: " + p[i]);
            int j = 1;
            for (Enumeration e = p[i].keys(); e.hasMoreElements();) {
                 System.out.println("\t   ----- " + j + ". Element:" + e.nextElement());
                 j++;
            }
        }
    }       
}
/

create or replace procedure OracleJVMJarInputStream(p_verify varchar2, p_info varchar2) as language java
name 'OracleJVMJarInputStream.run(java.lang.String, java.lang.String)';
/

--- Test Steps ---
Sqlplus > set serveroutput on size 50000
Sqlplus > exec dbms_java.set_output(50000); 

--- Verify true ---
Sqlplus > exec OracleJVMJarInputStream('true', 'no');

--- Verify false ---
Sqlplus > exec OracleJVMJarInputStream('false', 'no');

--- Verify true, print manifest and security providers ---
Sqlplus > exec OracleJVMJarInputStream('true', 'info');


9. Standard Java JVM Test Code


           
import java.io.FileInputStream; 
import java.io.IOException; 
import java.io.InputStream; 
import java.io.OutputStream;
import java.io.BufferedInputStream;
import java.util.Date;
import java.util.Arrays; 
import java.util.Enumeration;
import java.util.jar.JarInputStream; 
import java.util.jar.JarEntry;
import java.util.jar.Manifest;
import java.util.jar.Attributes;
import java.util.zip.ZipEntry; 
import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.CallableStatement;
import java.sql.Blob;
import java.security.Provider;
import java.security.Security;
import oracle.jdbc.OracleDriver;
import oracle.jdbc.OracleTypes;
import oracle.jdbc.OraclePreparedStatement;
import oracle.sql.BLOB;

/* --------------------- DB Setup ---------------------
   create table test_blob_tab(jar_name varchar2(100), jar_entry blob, sts timestamp with time zone);
   
   create or replace function insert_blob(p_jar_name varchar2, p_jar_entry blob) return number as
     l_rowcount integer;
   begin
     insert into test_blob_tab values (p_jar_name, p_jar_entry, systimestamp);
     l_rowcount := sql%rowcount;
     commit;
     return l_rowcount;
   end;
   /
   
   --------------------- Java Compile and Run (JarInputStream verify true or false)---------------------
   $ORACLE_HOME/jdk/bin/javac -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar JavaJVMJarInputStream.java
   $ORACLE_HOME/jdk/bin/java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:. JavaJVMJarInputStream \
                 "jdbc:oracle:thin:k/s@testdb194:1522:testdb194" true  no 
   $ORACLE_HOME/jdk/bin/java -cp $ORACLE_HOME/jdbc/lib/ojdbc8.jar:. JavaJVMJarInputStream \
                 "jdbc:oracle:thin:k/s@testdb194:1522:testdb194" false no
*/

class JavaJVMJarInputStream
{   
    static String INSERT_DB = "begin :1 := insert_blob(:2, :3); end;";
   
    public static void main(String[] args) throws Exception 
    {   
        String jdbcURL    = args[0];
        boolean verify    = args[1].equalsIgnoreCase("true")? true:false;
        boolean printInfo = args[2].equalsIgnoreCase("info")? true:false;
        FileInputStream fis = new FileInputStream("/tmp/testJar.jar"); 
        JarInputStream jis  = new JarInputStream(fis, verify);  // default verify is true
        
        if (printInfo) {
            printManifest(jis);
            printSecurityProviders();
        }
        
        Connection conn = null;
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            conn = DriverManager.getConnection(jdbcURL);
        } catch (Exception e) {
            e.printStackTrace();
            return;
        }                     
  
        long start, finish, elapsed;
        int  seq = 1;
        JarEntry je;
        String entryName;
        start = System.currentTimeMillis();
        
        System.out.println("\n********* getNextJarEntry *********");
        try {
             while ((je = jis.getNextJarEntry()) != null) { 
               finish = System.currentTimeMillis();
               elapsed = finish - start;
               entryName = je.getName();
               System.out.println("------ NextJarEntry: " + seq + ", Name: " + entryName + " ------"); 
               System.out.println("         getNextEntry ElapsedMills: " + elapsed + ", at: " + finish); 
               seq++;
               start = System.currentTimeMillis(); 
               jarEntryReadInsertDB(conn, entryName, jis);
               finish = System.currentTimeMillis();
               elapsed = finish - start;
               System.out.println("         readContent ElapsedMills: " + elapsed + ", at: " + finish); 
               start = System.currentTimeMillis();
             }        
             jis.close(); 
             conn.close();
        } catch (Exception e) {
             e.printStackTrace();
        }        
    }
    
    static void jarEntryReadInsertDB(Connection conn, String jarEntryName, JarInputStream jis) {
        BLOB blob = null;
        byte[] buf;
        OutputStream blobStream;
        int bytesRead;     
        try {
             blob = BLOB.createTemporary(conn, false, BLOB.DURATION_SESSION);
             buf = new byte[blob.getChunkSize()];
             blobStream = blob.getBinaryOutputStream();
             while (jis.available() > 0) {
                 bytesRead = jis.read(buf);
                 if (bytesRead > 0) {           // read() returns -1 at the end of the current entry
                     blobStream.write(buf, 0, bytesRead);
                 }
             }
             blobStream.close();                // flush remaining buffered bytes into the temporary BLOB
             CallableStatement cStmt = conn.prepareCall(INSERT_DB);
             cStmt.registerOutParameter(1, OracleTypes.INTEGER);
             cStmt.setString(2, jarEntryName);
             cStmt.setBlob(3, blob);
             cStmt.execute();
             int insRows = cStmt.getInt(1);
             System.out.println("         Insert DB " + insRows + " row, blob size " + blob.length() + " at " + new Date());
             blob.freeTemporary();
             cStmt.close();
             
             // OraclePreparedStatement pstmt = (OraclePreparedStatement) conn.prepareStatement("insert into test_blob_tab values(?, ?)");
             // pstmt.setString(1, jarEntryName);
             // pstmt.setBLOB(2, blob);
             // pstmt.execute();
             // pstmt.close();
        } catch (Exception e) {
             e.printStackTrace();
        }
    } 

    static void printManifest(JarInputStream jis) {    
        Manifest manifest = jis.getManifest();
        
        if (manifest != null) {
            System.out.println("********* Print Manifest *********");
            System.out.println("Manifest="  +  manifest.toString());
            Attributes mainAttributes = manifest.getMainAttributes();
            System.out.println("Manifest Manifest-Version="  +  mainAttributes.getValue("Manifest-Version"));
            System.out.println("Manifest Created-By="        +  mainAttributes.getValue("Created-By"));
        } else {
            System.out.println("********* No Manifest *********");
        }     
    }
    
    static void printSecurityProviders() throws Exception {
        Provider p[] = Security.getProviders();
        for (int i = 0; i < p.length; i++) {
            System.out.println(" ===============" + i + ". Provider: " + p[i]);
            int j = 1;
            for (Enumeration e = p[i].keys(); e.hasMoreElements();) {
                 System.out.println("\t   ----- " + j + ". Element:" + e.nextElement());
                 j++;
            }
        }
    }       
}     

Thursday, August 22, 2019

Oracle 18 New Wait Event: 'index (re)build lock or pin object'

Oracle 18 introduced a new wait event, 'index (re)build lock or pin object', raised when modifying indexes. For example, one index rebuild triggers 4 occurrences: the first with lock_mode / pin_mode 2, followed by three with mode 3, and only the second wait is on the INDEX namespace (namespace=4):

PARSING IN CURSOR #140684589165328 len=32 dep=0 uid=49 oct=9 lid=49 tim=13635775869273 hv=4012613321 ad='a4693328' sqlid='229yc47rkr7q9'
alter index test_tab#idx rebuild

WAIT #140684589165328: nam='index (re)build lock or pin object' ela= 6 namespace=1 lock_mode=2 pin_mode=2 obj#=-1 tim=13635775869402
WAIT #140684589165328: nam='index (re)build lock or pin object' ela= 7 namespace=4 lock_mode=3 pin_mode=3 obj#=-1 tim=13635775872729
WAIT #140684589165328: nam='index (re)build lock or pin object' ela= 10 namespace=1 lock_mode=3 pin_mode=3 obj#=-1 tim=13635778508162
WAIT #140684589165328: nam='index (re)build lock or pin object' ela= 4 namespace=1 lock_mode=3 pin_mode=3 obj#=-1 tim=13635778508204

  -- Note: namespace=4 is INDEX
The event information is described as:

SQL > select * from v$event_name where name = 'index (re)build lock or pin object';

  EVENT#          : 333
  EVENT_ID        : 3347698104
  NAME            : index (re)build lock or pin object
  PARAMETER1      : namespace
  PARAMETER2      : lock_mode
  PARAMETER3      : pin_mode
  WAIT_CLASS_ID   : 4166625743
  WAIT_CLASS#     : 3
  WAIT_CLASS      : Administrative
  DISPLAY_NAME    : index (re)build lock or pin object
The lock/pin mode seems to refer to the modes documented for v$libcache_locks:

  Lock/pin mode:
      0 - No lock/pin held
      1 - Null mode
      2 - Share mode
      3 - Exclusive mode
Since this wait event reveals metrics on index operations related to the library cache and shared cursors, it can help us understand shared pool mutex activity.

Here is a short demo of the number of waits for different index operations.

SQL > drop table test_tab purge;

SQL > create table test_tab as select 1 x from dual; 

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20000

-------------------------- create index (4 Waits) --------------------------

SQL > create index test_tab#idx on test_tab(x);

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20004

-------------------------- gather index stats (0 Waits) --------------------------

SQL > exec dbms_stats.gather_index_stats('K', 'TEST_TAB#IDX');

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20004   
      
-------------------------- rebuild index (4 Waits) --------------------------

SQL > alter index test_tab#idx rebuild;

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20008
      
-------------------------- rebuild index online (3 Waits) --------------------------

SQL > alter index test_tab#idx rebuild online;

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20011       (Note: rebuild index online increases only 3)
      
-------------------------- rebuild index reverse (4 Waits) --------------------------

SQL > alter index test_tab#idx rebuild reverse;

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20015
      
-------------------------- drop index (0 Waits) --------------------------

SQL > drop index test_tab#idx;

SQL > select total_waits from v$system_event where event = 'index (re)build lock or pin object';

    TOTAL_WAITS
    -----------
          20015

Tuesday, August 6, 2019

PDML Disabled on Nonpartitioned IOT

When a parallel-hinted update of a nonpartitioned IOT (Index-Organized Table) doesn't execute in parallel, the xplan contains a Note:
     PDML disabled because non partitioned or single fragment IOT used
I will try to demonstrate this behavior, but I'm not sure whether any documentation about this restriction exists, and I don't know what "single fragment" means.

Note: All tests are done in Oracle 18c.


1. PDML disabled: single fragment IOT used


Run the following test code:

SQL > drop table test_iot_tab;

SQL> create table test_iot_tab (id number, sid number, 
       constraint test_iot_tab#p primary key(id)
     ) organization index; 

SQL > insert into test_iot_tab select level, -1 from dual connect by level <= 1e5;

SQL > commit;

SQL > exec dbms_stats.gather_table_stats(null, 'TEST_IOT_TAB');

SQL > set serveroutput off

SQL > update /*+ enable_parallel_dml parallel(t 4) */ test_iot_tab t set sid = sys_context('userenv','sid');

  100000 rows updated. 

SQL > select * from table(dbms_xplan.display_cursor);

  -------------------------------------
  SQL_ID  3zsrt91jd3x6s, child number 2
  -------------------------------------
  update /*+ enable_parallel_dml parallel(t 4) */ test_iot_tab t set sid
  = sys_context('userenv','sid')
  
  Plan hash value: 1206014583
  
  -----------------------------------------------------------------------------------
  | Id  | Operation        | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
  -----------------------------------------------------------------------------------
  |   0 | UPDATE STATEMENT |                |       |       |     5 (100)|          |
  |   1 |  UPDATE          | TEST_IOT_TAB   |       |       |            |          |
  |   2 |   INDEX FULL SCAN| TEST_IOT_TAB#P |   100K|   878K|     5   (0)| 00:00:01 |
  -----------------------------------------------------------------------------------
  
  Note
  -----
     - PDML disabled because non partitioned or single fragment IOT used
  
  19 rows selected.

SQL > select status, ptx, xid, xidusn, xidslot, xidsqn, used_ublk, used_urec, ptx_xidusn, ptx_xidslt, ptx_xidsqn, s.* 
       from v$transaction t, v$px_session s
      where t.ses_addr=s.saddr(+);

 STATUS PTX XID              XIDUSN  XIDSLOT XIDSQN USED_UBLK USED_UREC PTX_XIDUSN PTX_XIDSLT PTX_XIDSQN SADDR SID SERIAL# QCSID QCSERIAL# QCINST_ID SERVER_GROUP SERVER_SET SERVER# DEGREE REQ_DEGREE
 ------ --- ---------------- ------  ------- ------ --------- --------- ---------- ---------- ---------- ----- --- ------- ----- --------- --------- ------------ ---------- ------- ------ ----------
 ACTIVE NO  52000600A3810000 82      6       33187  1311      100000    0          0          0

SQL > select * from v$px_process;

  no rows selected

SQL > commit;

SQL > select * from v$pq_tqstat;

  no rows selected
The xplan Note shows that PDML is disabled because a nonpartitioned or single fragment IOT is used.

v$transaction.ptx displays "NO", indicating no parallel transaction (PTX stands for parallel transaction), hence no rows in v$px_session.

Both selects on v$px_process and v$pq_tqstat return no rows.

Note that for PDML, information from v$pq_tqstat is available only after a commit or rollback operation as documented by Oracle V$PQ_TQSTAT.

If v$pq_tqstat returns rows, it is from the previous committed parallel DML, not the current committed serial DML.

v$px_process returns rows from previous and current parallel executions. Rows with a non-null sid and serial# are PX processes currently in use, or recently in use, because a PX server process is shut down if it has not been used within a certain time interval (probably 5 minutes).

Technical Article Understanding Parallel Execution – Part 2 also lists various problems with view V$PQ_TQSTAT.

With the index_ffs hint, we can make the query part of the update run in parallel, but not the DML (UPDATE is not inside PX COORDINATOR in the xplan).

SQL > set serveroutput off

SQL > update /*+ enable_parallel_dml parallel(t 4) index_ffs(t test_iot_tab#p) */ test_iot_tab t set sid = sys_context('userenv','sid');
  100000 rows updated.
  
SQL > select * from table(dbms_xplan.display_cursor);

  -------------------------------------
  SQL_ID  93s3czmfbquu1, child number 0
  -------------------------------------
  update /*+ enable_parallel_dml parallel(t 4) index_ffs(t
  test_iot_tab#p) */ test_iot_tab t set sid = sys_context('userenv','sid')
  
  Plan hash value: 2045397606
  
  ------------------------------------------------------------------------------------------------------------------------
  | Id  | Operation                | Name           | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
  ------------------------------------------------------------------------------------------------------------------------
  |   0 | UPDATE STATEMENT         |                |       |       |    16 (100)|          |        |      |            |
  |   1 |  UPDATE                  | TEST_IOT_TAB   |       |       |            |          |        |      |            |
  |   2 |   PX COORDINATOR         |                |       |       |            |          |        |   |               |
  |   3 |    PX SEND QC (RANDOM)   | :TQ10000       |   100K|   878K|    16   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
  |   4 |     PX BLOCK ITERATOR    |                |   100K|   878K|    16   (0)| 00:00:01 |  Q1,00 | PCWC |            |
  |*  5 |      INDEX FAST FULL SCAN| TEST_IOT_TAB#P |   100K|   878K|    16   (0)| 00:00:01 |  Q1,00 | PCWP |            |
  ------------------------------------------------------------------------------------------------------------------------
  
  Predicate Information (identified by operation id):
  ---------------------------------------------------
  
     5 - access(:Z>=:Z AND :Z<=:Z)
  
  Note
  -----
     - Degree of Parallelism is 4 because of table property
     - PDML disabled because non partitioned or single fragment IOT used
  
SQL > commit;  


2. Update by Package DBMS_PARALLEL_EXECUTE


Since PDML on an IOT is not allowed, and it is not clear what "single fragment IOT" means, one alternative is the package dbms_parallel_execute, introduced in 11gR2, to manually update the table in parallel.

First, we try the most common usage of this package: create_chunks_by_rowid (the Oracle documentation states that index-organized tables are not allowed for create_chunks_by_rowid).

SQL > exec dbms_parallel_execute.drop_task('test_iot_task');

SQL > exec dbms_parallel_execute.create_task (task_name => 'test_iot_task');

SQL> select * from user_parallel_execute_tasks where task_name = 'test_iot_task';

  TASK_NAME      CHUNK_TYPE  STATUS   TABLE_OWNER  TABLE_NAME  NUMBER_COLUMN  TASK_COMMENT  JOB_PREFIX
  -------------  ----------  -------  -----------  ----------  -------------  ------------  ----------
  test_iot_task  UNDECLARED  CREATED

SQL > begin
        dbms_parallel_execute.create_chunks_by_rowid
          (task_name   => 'test_iot_task',
           table_owner => 'K',
           table_name  => 'TEST_IOT_TAB',
           by_row      => true,   -- or false
           chunk_size  => 200);
      end;
      /

      ERROR at line 1:
      ORA-29491: invalid table for chunking
      ORA-06512: at "SYS.DBMS_PARALLEL_EXECUTE", line 25
      ORA-06512: at "SYS.DBMS_PARALLEL_EXECUTE", line 21
      ORA-06512: at "SYS.DBMS_PARALLEL_EXECUTE", line 120
      ORA-06512: at line 2
We got ORA-29491, which says that create_chunks_by_rowid requires physical ROWIDs, but an IOT can only provide logical rowids.
SQL > oerr ora 29491
 29491, 00000, "invalid table for chunking"
 // *Cause:  An attempt was made to chunk a table by ROWID, 
 //          but the table was not a physical table or the table was an IOT.
 // *Action: Use a table which has physical ROWID.
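As a quick cross-check (a minimal sketch, reusing the IOT from the earlier test): the ROWID pseudo-column of an index-organized table returns a logical rowid (UROWID), not a physical one, which is exactly what create_chunks_by_rowid rejects.

-- logical rowid of an IOT (UROWID); there is no physical ROWID to chunk on
select rowid from test_iot_tab where rownum = 1;
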
So we try another chunking method: create_chunks_by_number_col:

SQL > begin
        dbms_parallel_execute.create_chunks_by_number_col
         (task_name    => 'test_iot_task',
          table_owner  => 'K',
          table_name   => 'TEST_IOT_TAB',
          table_column => 'ID',
          chunk_size   => 200);
         end;
         /

      PL/SQL procedure successfully completed.

SQL > select * from user_parallel_execute_tasks where task_name = 'test_iot_task';

  TASK_NAME      CHUNK_TYPE    STATUS   TABLE_OWNER  TABLE_NAME    NUMBER_COLUMN  TASK_COMMENT  JOB_PREFIX
  -------------  ------------  -------  -----------  ------------  -------------  ------------  ----------
  test_iot_task  NUMBER_RANGE  CHUNKED  K            TEST_IOT_TAB  ID
  

SQL > declare
       l_sql_stmt varchar2(32767);
     begin
       l_sql_stmt := q'[update test_iot_tab t 
                        set    t.sid = sys_context('userenv','sid')
                        where id between :start_id and :end_id]';
     
       dbms_parallel_execute.run_task(task_name      => 'test_iot_task',
                                      sql_stmt       => l_sql_stmt,
                                      language_flag  => dbms_sql.native,
                                      parallel_level => 10);
     end;
     /
     
     PL/SQL procedure successfully completed.


SQL > select * from user_parallel_execute_tasks where task_name = 'test_iot_task';

  TASK_NAME      CHUNK_TYPE    STATUS    TABLE_OWNER  TABLE_NAME    NUMBER_COLUMN  TASK_COMMENT  JOB_PREFIX
  -------------  ------------  --------  -----------  ------------  -------------  ------------  ----------
  test_iot_task  NUMBER_RANGE  FINISHED  K            TEST_IOT_TAB  ID                           TASK$_81116
                                             
SQL > select sid, count(*) from test_iot_tab group by sid order by sid;

    SID  COUNT(*)
    ---  --------
     27      7000
    196     15400
    197     13600
    203      6200
    377     12600
    379      4600
    559     10800
    729     13600
    750     10200
    908      6000
    
    10 rows selected.
The above test shows that create_chunks_by_number_col is allowed, and the table was updated by 10 parallel jobs.
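
To verify the chunk-to-job distribution, a minimal sketch against the dictionary view USER_PARALLEL_EXECUTE_CHUNKS (the task name is the one created above):

select job_name, status, count(*) chunks, min(start_id) min_id, max(end_id) max_id
  from user_parallel_execute_chunks
 where task_name = 'test_iot_task'
 group by job_name, status
 order by job_name;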

The drawback of dbms_parallel_execute is that each chunk in each job is committed separately, therefore the whole transaction is not atomic, as documented for dbms_parallel_execute.run_task (a recovery sketch follows the quote):
      This procedure executes the specified statement (sql_stmt) on the chunks in parallel. 
      It commits after processing each chunk. 
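
If the run aborts part-way, some chunks are already committed while others are not. A hedged recovery sketch, assuming the task from above still exists: inspect the chunk states and resume the task so that only unfinished chunks are re-executed (resume_task reuses the stored SQL statement).

-- chunks that did not complete keep their error information
select chunk_id, status, error_code, error_message
  from user_parallel_execute_chunks
 where task_name = 'test_iot_task'
   and status <> 'PROCESSED';

-- re-run only the unfinished chunks
exec dbms_parallel_execute.resume_task('test_iot_task');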


3. Partitioned Index-Organized Tables


Since PDML is disabled on a Nonpartitioned IOT, we can look at the case of a Partitioned IOT.

SQL> drop table test_iot_tab_part;

SQL > create table test_iot_tab_part (part number, id number, sid number, 
       constraint test_iot_tab_part#p primary key(part, id)
     ) organization index
       partition by list (part)
        (partition p1 values (1),  
         partition p2 values (2),
         partition p3 values (3),
         partition p4 values (4)
        );

SQL > insert into test_iot_tab_part select mod(level, 4) + 1 part, level, -1 from dual connect by level <= 1e5;

SQL > commit;

SQL > exec dbms_stats.gather_table_stats(null, 'TEST_IOT_TAB_PART');

SQL > set serveroutput off

SQL > update /*+ enable_parallel_dml parallel(t 4) */ test_iot_tab_part t set sid = sys_context('userenv','sid');

     100000 rows updated. 

SQL > select * from table(dbms_xplan.display_cursor);

  -------------------------------------
  SQL_ID  75bsynpc6h5qq, child number 1
  -------------------------------------
  update /*+ enable_parallel_dml parallel(t 4) */ test_iot_tab_part t set
  sid = sys_context('userenv','sid')
  
  Plan hash value: 4043313740
  
  ---------------------------------------------------------------------------------------------------------------------------------------------
  | Id  | Operation                | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
  ---------------------------------------------------------------------------------------------------------------------------------------------
  |   0 | UPDATE STATEMENT         |                     |       |       |     2 (100)|          |       |       |        |      |            |
  |   1 |  PX COORDINATOR          |                     |       |       |            |          |       |       |        |      |            |
  |   2 |   PX SEND QC (RANDOM)    | :TQ10000            |   100K|  1171K|     2   (0)| 00:00:01 |       |       |  Q1,00 | P->S | QC (RAND)  |
  |   3 |    UPDATE                | TEST_IOT_TAB_PART   |       |       |            |          |       |       |  Q1,00 | PCWP |            |
  |   4 |     PX PARTITION LIST ALL|                     |   100K|  1171K|     2   (0)| 00:00:01 |     1 |     4 |  Q1,00 | PCWC |            |
  |   5 |      INDEX FULL SCAN     | TEST_IOT_TAB_PART#P |   100K|  1171K|     2   (0)| 00:00:01 |     1 |     4 |  Q1,00 | PCWP |            |
  ---------------------------------------------------------------------------------------------------------------------------------------------
  
  Note
  -----
     - Degree of Parallelism is 4 because of table property
  
  22 rows selected.

SQL > select status, ptx, xid, xidusn, xidslot, xidsqn, used_ublk, used_urec, ptx_xidusn, ptx_xidslt, ptx_xidsqn, s.* 
       from v$transaction t, v$px_session s
      where t.ses_addr=s.saddr(+);

  STATUS PTX XID              XIDUSN XIDSLOT XIDSQN USED_UBLK USED_UREC PTX_XIDUSN PTX_XIDSLT PTX_XIDSQN SADDR            SID SERIAL# QCSID QCSERIAL# QCINST_ID SERVER_GROUP SERVER_SET SERVER# DEGREE REQ_DEGREE
  ------ --- ---------------- ------ ------- ------ --------- --------- ---------- ---------- ---------- ---------------  --- ------- ----- --------- --------- ------------ ---------- ------- ------ ----------
  ACTIVE YES 5600120012830000 86     18      33554  331       25001     98         11         21043      00000000B7BFCF38 203 27206   392   53164     1         1            1          1       4      4
  ACTIVE YES 53001300117B0000 83     19      31505  331       25001     98         11         21043      00000000B62F60B8 750 29571   392   53164     1         1            1          2       4      4
  ACTIVE YES 61001B00DC4C0000 97     27      19676  331       25001     98         11         21043      00000000B7BF5B70 206 58371   392   53164     1         1            1          3       4      4
  ACTIVE YES 5E0019009B6D0000 94     25      28059  331       25001     98         11         21043      00000000B7DB8C78 377 55026   392   53164     1         1            1          4       4      4
  ACTIVE YES 62000B0033520000 98     11      21043  1         1         98         11         21043      00000000B7D94990 392 53164   392

SQL > select * from v$px_process;

  SERV STATUS     PID SPID     SID SERIAL# IS_GV
  ---- --------- ---- ------ ----- ------- -----
  P000 IN USE      49 32017    203   27206 FALSE
  P001 IN USE      58 32019    750   29571 FALSE
  P002 IN USE      61 32021    206   58371 FALSE
  P003 IN USE      62 32023    377   55026 FALSE
  
  4 rows selected.

SQL > commit;

SQL > select * from v$pq_tqstat;

  DFO_NUMBER TQ_ID SERVER_TYP NUM_ROWS BYTES  OPEN_TIME AVG_LATENCY WAITS TIMEOUTS PROCES
  ---------- ----- ---------- -------- ----- ---------- ----------- ----- -------- ------
           1     0 Producer          2   286          0           0     1        0 P003  
           1     0 Producer          2   286          0           0     1        0 P002  
           1     0 Producer          2   286          0           0     1        0 P001  
           1     0 Producer          2   286          0           0     1        0 P000  
           1     0 Consumer          8  1144          0           0    15        2 QC    
  
  5 rows selected.
The above test shows that the update on the Partitioned IOT is executed in parallel, i.e. PDML is enabled.

Oracle performs a two-phase commit protocol in PDML (analogous to distributed transactions), as documented in VLDB and Partitioning Guide (Release 12.2) - 8.5.3.5 Transaction Restrictions for Parallel DML:
    To ensure user-level transactional atomicity, the coordinator uses a two-phase commit protocol 
    to commit the changes performed by the parallel process transactions.
Oracle8 Server Migration (A54650_01) - Oracle8 Enhancements has some details:
    Changes to Fixed Views
  
    The following fixed views contain new information about parallel DML:
  
       V$SESSION: This fixed view contains a new column for ENABLE PARALLEL DML mode.
       V$TRANSACTION: The existing column STATUS has two new values, PTX PREPARED and PTX COMMITTED (where PTX stands for parent transaction). 
                     This fixed view also contains new columns (where XID stands for transaction identifier):
             PTX (value YES or NO)
             PTX_XIDUSN
             PTX_XIDSLT
             PTX_XIDSQN
       V$PQ_SESSTAT: This fixed view contains a new row: DML Parallelized.
       V$PQ_SYSSTAT: This fixed view contains a new row: DML Initiated.
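These views allow the session itself to confirm whether the last DML really ran in parallel. A minimal sketch (column and statistic names as found in recent releases):

-- is parallel DML / DDL / query enabled or forced for this session?
select pdml_status, pddl_status, pq_status
  from v$session
 where sid = sys_context('userenv', 'sid');

-- how many statements of this session were parallelized?
select statistic, last_query, session_total
  from v$pq_sesstat
 where statistic in ('DML Parallelized', 'Queries Parallelized');
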
In fact, for the above PDML example, if we suspend one parallel slave (e.g. P002) before issuing the commit in the PX COORDINATOR session, v$transaction.status shows that the COORDINATOR session and P002 have status ACTIVE, while all other PX slaves are marked with status PTX PREPARED.

--- Launch PDML in PX COORDINATOR session

SQL > update /*+ enable_parallel_dml parallel(t 4) */ test_iot_tab_part t set sid = sys_context('userenv','sid');

SQL > select * from v$px_process;

  SERV STATUS           PID SPID                       SID SERIAL# IS_GV
  ---- --------- ---------- ------------------------ ----- ------- -----
  P000 IN USE            49 6950                       206    5492 FALSE
  P001 IN USE            58 6952                       750   22721 FALSE
  P002 IN USE            61 6954                       203   35975 FALSE
  P003 IN USE            62 6956                       377   60192 FALSE
  
  4 rows selected. 


--- Suspend one parallel slave in ORADEBUG session

SQL(oradebug) > oradebug setospid 6954
  Oracle pid: 61, Unix process pid: 6954, image: oracle@testdb (P002)

SQL(oradebug) > oradebug suspend
  Statement processed.


--- return to PX COORDINATOR session, issue commit 

SQL > commit;


--- in a monitor session, show PTX PREPARED phase

SQL (monitor) > select status, ptx, xid, xidusn, xidslot, xidsqn, used_ublk, used_urec, ptx_xidusn, ptx_xidslt, ptx_xidsqn, s.* 
       from v$transaction t, v$px_session s
      where t.ses_addr=s.saddr(+);

  STATUS       PTX XID              XIDUSN XIDSLOT XIDSQN USED_UBLK USED_UREC PTX_XIDUSN PTX_XIDSLT PTX_XIDSQN SADDR            SID SERIAL# QCSID QCSERIAL# QCINST_ID SERVER_GROUP SERVER_SET SERVER# DEGREE REQ_DEGREE
  ------------ --- ---------------- ------ ------- ------ --------- --------- ---------- ---------- ---------- ---------------  --- ------- ----- --------- --------- ------------ ---------- ------- ------ ----------
  PTX PREPARED YES 5C001600566A0000 92     22      27222  331       25001     94         15         28085      00000000B7BF5B70 206 5492    392   53164     1         1            1          1       4      4
  PTX PREPARED YES 5A000400B4680000 90     4       26804  331       25001     94         15         28085      00000000B62F60B8 750 22721   392   53164     1         1            1          2       4      4
  ACTIVE       YES 5500190020780000 85     25      30752  331       25001     94         15         28085      00000000B7BFCF38 203 35975   392   53164     1         1            1          3       4      4
  PTX PREPARED YES 5F000100C0600000 95     1       24768  331       25001     94         15         28085      00000000B7DB8C78 377 60192   392   53164     1         1            1          4       4      4
  ACTIVE       YES 5E000F00B56D0000 94     15      28085  3         3         94         15         28085      00000000B7D94990 392 53164   392
It is not always obvious when a DML does not execute in parallel, as demonstrated by Jonathan Lewis's Blog Quiz Night (March 9, 2017) on an unused CLOB column.

I did the exercise below to understand the Quiz.

SQL > drop table test_unused_lob_col_tab;

SQL > create table test_unused_lob_col_tab (id number, sid number, unused_clob clob);

SQL > alter table test_unused_lob_col_tab set unused (unused_clob);

SQL > select * from user_unused_col_tabs where table_name = 'TEST_UNUSED_LOB_COL_TAB';

  TABLE_NAME                   COUNT
  ----------------------- ----------
  TEST_UNUSED_LOB_COL_TAB          1

-- DBA_TAB_COLUMNS view filters out system-generated hidden columns and invisible columns
SQL > select table_name, column_name, data_type, column_id                                            
      from  dba_tab_columns                                                                                       
      where table_name = 'TEST_UNUSED_LOB_COL_TAB';                                                                
                                                                                                                   
  TABLE_NAME                COLUMN_NAME                    DATA_TYPE   COLUMN_ID                                
  ------------------------- ------------------------------ ---------- ----------                                
  TEST_UNUSED_LOB_COL_TAB   ID                             NUMBER              1                                
  TEST_UNUSED_LOB_COL_TAB   SID                            NUMBER              2      
  
  2 rows selected. 


--DBA_TAB_COLS has 4 different Column IDs:
--  COLUMN_ID          : Sequence number of the column as created; NULL when HIDDEN_COLUMN = 'YES'
--  SEGMENT_COLUMN_ID  : Sequence number of the column in the segment; NULL when VIRTUAL_COLUMN = 'YES'
--  INTERNAL_COLUMN_ID : Internal sequence number of the column; NOT NULL
--  COLLATED_COLUMN_ID : Internal sequence number of the virtual column that generates the collation key (since 18c)

SQL > select column_name, data_type, hidden_column, virtual_column, column_id, segment_column_id, internal_column_id             
    from dba_tab_cols                                                                                          
   where table_name = 'TEST_UNUSED_LOB_COL_TAB';
  
  COLUMN_NAME                DATA_TYPE HIDDEN VIRTUAL COLUMN_ID SEGMENT_COLUMN_ID INTERNAL_COLUMN_ID
  -------------------------- --------- ------ ------- --------- ----------------- ------------------ 
  ID                         NUMBER    NO     NO      1         1                 1
  SID                        NUMBER    NO     NO      2         2                 2
  SYS_C00003_19080610:24:47$ CLOB      YES    NO                3                 3                                                                                       
 
  3 rows selected. 
  

-- Both LOGIC_COLUMN_ID and SEGMENT_COLUMN_ID are displayed.
-- LOGIC_COLUMN_ID is 0 for hidden column.                                                                                                              
SQL > select o.object_name, o.subobject_name, o.object_id, c.name column_name, c.col# logic_column_id, c.segcol# segment_column_id
            ,(case 
                when     bitand(c.property, 32768) = 32768 
                     and bitand(c.property, 1) != 1 
                     and bitand(c.property, 1024) != 1024 
                then 'YES' 
                else 'NO' 
              end)  unused
            ,bitand(c.property, 32768) unused_chk, bitand(c.property, 1) adt_attr, bitand(c.property, 1024) ntab_setid
          --,c.unusablebefore#, c.unusablebeginning#
       from sys.col$ c, dba_objects o
      where c.obj# = o.object_id 
        and object_name = 'TEST_UNUSED_LOB_COL_TAB'
      order by segment_column_id;
  
  OBJECT_NAME             OBJECT_ID COLUMN_NAME                LOGIC_COLUMN_ID   SEGMENT_COLUMN_ID UNUSED UNUSED_CHK ADT_ATTR NTAB_SETID
  ----------------------- --------- -------------------------- ----------------- ----------------- ------ ---------- -------- ----------
  TEST_UNUSED_LOB_COL_TAB 3754451   ID                         1                 1                 NO     0          0        0
  TEST_UNUSED_LOB_COL_TAB 3754451   SID                        2                 2                 NO     0          0        0
  TEST_UNUSED_LOB_COL_TAB 3754451   SYS_C00003_19080610:24:47$ 0                 3                 YES    32768      0        0

    --Note 1: the unused column_name seems to be a concatenation of the column_id with the "set unused" timestamp.
    --        The two columns unusablebefore# and unusablebeginning# in sys.col$ are empty.
  
    --Note 2: a virtual column has only internal_column_id in dba_tab_cols; its segment_column_id is empty.
    --        In the above query of sys.col$, segcol# (segment_column_id) is 0 for a virtual column.

SQL > select dbms_metadata.get_ddl('TABLE', 'TEST_UNUSED_LOB_COL_TAB', 'K') from dual;

       CREATE TABLE "K"."TEST_UNUSED_LOB_COL_TAB" 
         ( "ID"  NUMBER, 
           "SID" NUMBER
         ) SEGMENT CREATION DEFERRED 
       PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
       NOCOMPRESS LOGGING
       TABLESPACE "TEST_USER" 

SQL > set serveroutput off

SQL > update /*+ enable_parallel_dml parallel(t 4) */ test_unused_lob_col_tab t set sid = sys_context('userenv','sid');  

  0 rows updated.

SQL > select * from table(dbms_xplan.display_cursor);

  -------------------------------------
  SQL_ID  82vs0ct3p2nfj, child number 1
  -------------------------------------
  update /*+ enable_parallel_dml parallel(t 4) */ test_unused_lob_col_tab
  t set sid = sys_context('userenv','sid')
  
  Plan hash value: 4086267116
  
  ------------------------------------------------------------------------------------------------------------------------------
  | Id  | Operation             | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
  ------------------------------------------------------------------------------------------------------------------------------
  |   0 | UPDATE STATEMENT      |                         |       |       |     2 (100)|          |        |      |            |
  |   1 |  UPDATE               | TEST_UNUSED_LOB_COL_TAB |       |       |            |          |        |      |            |
  |   2 |   PX COORDINATOR      |                         |       |       |            |          |        |      |            |
  |   3 |    PX SEND QC (RANDOM)| :TQ10000                |    82 |  1066 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
  |   4 |     PX BLOCK ITERATOR |                         |    82 |  1066 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
  |*  5 |      TABLE ACCESS FULL| TEST_UNUSED_LOB_COL_TAB |    82 |  1066 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
  ------------------------------------------------------------------------------------------------------------------------------
  
  Predicate Information (identified by operation id):
  ---------------------------------------------------
  
     5 - access(:Z>=:Z AND :Z<=:Z)
  
  Note
  -----
     - Degree of Parallelism is 4 because of table property
     - PDML disabled because single fragment or non partitioned table used

SQL > alter table test_unused_lob_col_tab drop unused columns;

SQL > select * from user_unused_col_tabs where table_name = 'TEST_UNUSED_LOB_COL_TAB';

  no rows selected

SQL > update /*+ enable_parallel_dml parallel(t 4) */ test_unused_lob_col_tab t set sid = sys_context('userenv','sid');  

  0 rows updated.

SQL > select * from table(dbms_xplan.display_cursor);

  -------------------------------------
  SQL_ID  82vs0ct3p2nfj, child number 1
  -------------------------------------
  update /*+ enable_parallel_dml parallel(t 4) */ test_unused_lob_col_tab
  t set sid = sys_context('userenv','sid')
  
  Plan hash value: 3773939051
  
  ------------------------------------------------------------------------------------------------------------------------------
  | Id  | Operation             | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
  ------------------------------------------------------------------------------------------------------------------------------
  |   0 | UPDATE STATEMENT      |                         |       |       |     2 (100)|          |        |      |            |
  |   1 |  PX COORDINATOR       |                         |       |       |            |          |        |      |            |
  |   2 |   PX SEND QC (RANDOM) | :TQ10000                |    82 |  1066 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
  |   3 |    UPDATE             | TEST_UNUSED_LOB_COL_TAB |       |       |            |          |  Q1,00 | PCWP |            |
  |   4 |     PX BLOCK ITERATOR |                         |    82 |  1066 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
  |*  5 |      TABLE ACCESS FULL| TEST_UNUSED_LOB_COL_TAB |    82 |  1066 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
  ------------------------------------------------------------------------------------------------------------------------------
  
  Predicate Information (identified by operation id):
  ---------------------------------------------------
  
     5 - access(:Z>=:Z AND :Z<=:Z)
  
  Note
  -----
     - Degree of Parallelism is 4 because of table property
If we look carefully at the above xplan Note and that of the Nonpartitioned IOT:
     PDML disabled because single fragment or non partitioned table used
     PDML disabled because non partitioned or single fragment IOT used
they are not exactly the same; the message is probably composed dynamically according to the parsed statement.

By the way, in Oracle 12c, setting a column unused can hit:
  Bug 26965236 : DELETE FROM TSDP_SENSITIVE_DATA$ CAUSING ENQ: TM - CONTENTION WAITS
as discussed in Blog TM lock and no transaction commit.

Wednesday, April 17, 2019

LOB ORA-22924: snapshot too old and Fix

To continue the discussion in Blog: UNDO Practice, this Blog will demonstrate the LOB-specific ORA-01555 UNDO error, in which both the rollback segment number and name are empty:
    ORA-01555: snapshot too old: rollback segment number  with name "" too small
    ORA-22924: snapshot too old

    (ORA-01555 printf format string is:
      01555, 00000, "snapshot too old: rollback segment number %s with name \"%s\" too small")
We will test such "too old" behaviour in two dimensions: one according to space usage (LOB pctversion), the other according to lifetime (LOB retention).

The code examples are in Plsql. After the tests, we also provide a fix.

At the beginning, we tried to find some concrete code examples to reproduce ORA-22924. Googling "Oracle LOB ORA-22924: snapshot too old example" and paging through a dozen returned results came up empty.

Note: All tests are done in 12.1.0.2.0 (12cR1)


1. Test Setup


First we create a table containing one LOB column and fill in some data. The LOB column is stored as basicfile, using pctversion to control the old versions of LOB data. pctversion is set to the special value 0 so that ORA-22924 can be reproduced in each short run. We also tested the pctversion default value of 10; the same ORA-22924 is still reproducible (later we will also test the retention parameter).

---==================== PCTVERSION Test Setup ====================---

drop tablespace test_ts including contents and datafiles;

create tablespace test_ts datafile '/oradb/oradata/testdb/test_dbf.dbf' size 100m online;

drop table tab_lob cascade constraints;

create table tab_lob(id number, mylob clob) 
  lob (mylob) store as basicfile 
  (tablespace  test_ts
   enable      storage in row
   chunk       8192
   pctversion 0
   --pctversion 10        -- default of 10 (%)
   --retention   none
   nocache
   logging)
tablespace test_ts; 

declare
  l_cnt     number := 1e1;
  l_clob    clob   := to_clob(rpad('abc', 10000, 'x'));
begin
  for i in 1..l_cnt loop
    insert into tab_lob values(i, l_clob);
  end loop;
  commit;
end;
/   
Show LOB meta info:

---==================== PCTVERSION Test Meta Info ====================---
 
column table_name format a14;
column column_name format a14;
column segment_name format a28;
column column_name format a14;
column retention_type format a20;
  
select table_name, column_name, segment_name, pctversion, retention, retention_type 
  from dba_lobs where table_name in ('TAB_LOB');  
  
TABLE_NAME COLUMN_NAME SEGMENT_NAME              PCTVERSION  RETENTION RETENTION_TYPE
---------- ----------- ------------------------- ---------- ---------- --------------
TAB_LOB    MYLOB       SYS_LOB0003449207C00002$$          0            NO
  
 
select segment_name, segment_type from dba_segments where tablespace_name in ('TEST_TS');

SEGMENT_NAME                 SEGMENT_TYPE
---------------------------- ------------
TAB_LOB                      TABLE
SYS_IL0003449207C00002$$     LOBINDEX
SYS_LOB0003449207C00002$$    LOBSEGMENT


-- The names of the LOB segment and index are composed from the table OBJECT_ID (3449207) 
-- with prefix "SYS_LOB"/"SYS_IL" and suffix "C00002$$".

select object_name, object_id, object_type from dba_objects 
 where object_name in ('TAB_LOB', 'SYS_LOB0003449207C00002$$', 'SYS_IL0003449207C00002$$')
 order by object_id;

OBJECT_NAME                OBJECT_ID   OBJECT_TYPE
------------------------- ----------   -----------
TAB_LOB                      3449207   TABLE
SYS_LOB0003449207C00002$$    3449208   LOB
SYS_IL0003449207C00002$$     3449209   INDEX
Create 3 procedures for our test.

---==================== PCTVERSION Test Procedures ====================---

create or replace procedure lob_22924_select(p_id number, p_cnt number, p_sleep number) as
  l_clob           clob;
  l_null_check     boolean;
begin
  select mylob into l_clob from tab_lob where id = p_id;
  
  for i in 1..p_cnt loop
   dbms_output.put_line('------- Seq: '||i);
   
   -- getlength, no error
    dbms_output.put_line('LOB length check OK, length = '||dbms_lob.getlength(l_clob));
    
    -- null check, no error
    l_null_check := l_clob is null;
    l_null_check := l_clob is not null;
    dbms_output.put_line('LOB null check OK');
    
    -- content access, throw ORA-22924 under ORA-01555
    dbms_output.put_line('LOB content check, substr ='||dbms_lob.substr(l_clob, 10, 2000));
    
    dbms_lock.sleep(p_sleep);
  end loop;
end;
/

create or replace procedure lob_22924_update(p_id number, p_cnt number, p_sleep number) as
  l_clob      clob;
  l_pad       varchar2(1000) := rpad('abc', 100, 'x');
begin
  select mylob into l_clob from tab_lob where id = p_id;
  
  for i in 1..p_cnt loop
   update tab_lob set mylob = mylob||l_pad where id = p_id;
    commit;
    dbms_lock.sleep(p_sleep);
  end loop;
end;
/

create or replace procedure lob_22924_select_update(p_id number, p_cnt number, p_sleep number) as
  l_clob        clob;
  l_pad         varchar2(1000) := rpad('abc', 100, 'x');
  l_clob_upd    clob := to_clob(rpad('abc', 10000, 'x'));  
  l_null_check  boolean;
begin
  select mylob into l_clob from tab_lob where id = p_id;
  
  for i in 1..p_cnt loop
    dbms_output.put_line('------- Seq: '||i);  
    update tab_lob set mylob = mylob||l_pad where id = p_id;
    commit;
    
    -- getlength, no error
    dbms_output.put_line('LOB length check OK, length = '||dbms_lob.getlength(l_clob));
    
    -- null check, no error
    l_null_check := l_clob is null;
    l_null_check := l_clob is not null;
    dbms_output.put_line('LOB null check OK');
    
    -- content access, throw ORA-22924 under ORA-01555
    dbms_output.put_line('LOB content check, substr ='||dbms_lob.substr(l_clob, 10, 2000));
    
    dbms_lock.sleep(p_sleep);
  end loop;
end;
/


2. Test Run


We will make two different tests to generate ORA-22924: once with two Sqlplus sessions, once with a single session.


2.1. Two Sessions


We open two Sqlplus Sessions. In Session_1, call lob_22924_select to start a query:

-------------- Session_1@T1 select --------------
 
10:22:15 Sql > exec lob_22924_select(3, 1e2, 1);
  ------- Seq: 1
  LOB length check OK, length = 10000
  LOB null check OK
  LOB content check, substr =xxxxxxxxxx
  ------- Seq: 2
  LOB length check OK, length = 10000
  LOB null check OK
  LOB content check, substr =xxxxxxxxxx
  
  ...
  ------- Seq: 7
  LOB length check OK, length = 10000
  LOB null check OK
  BEGIN lob_22924_select(3, 1e2, 1); END;
  
  *
  ERROR at line 1:
  ORA-01555: snapshot too old: rollback segment number  with name "" too small
  ORA-22924: snapshot too old
  ORA-06512: at "SYS.DBMS_LOB", line 1109
  ORA-06512: at "S.LOB_22924_SELECT", line 19
  ORA-06512: at line 1
In Session_2, call lob_22924_update to start a loop update.

-------------- Session_2@T2 update --------------

10:22:24 Sql > exec lob_22924_update(3, 1e4, 0.01);
After a couple of seconds, Session_1 throws errors ORA-01555 and ORA-22924. If we look at procedure lob_22924_select, the error occurs when we access the content by dbms_lob.substr. For dbms_lob.getlength and the LOB null check, there is no such error. Probably both dbms_lob.getlength and the LOB null check use the LOB index, and the LOB index is based on the normal Oracle UNDO mechanism.

Session_1 output shows that the LOB length is always 10000, which implies that the checked LOB locator still points to the originally fetched data and never changes. It acts like a consistent view in a READ ONLY (or SERIALIZABLE) transaction.
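
The analogy can be made concrete with a read-only transaction: it also sees data as of its start, and fails with ORA-01555 once the required undo (or old LOB version) has been overwritten. A minimal sketch, assuming tab_lob from the setup and concurrent updates running in another session:

set transaction read only;
-- repeated reads return the version as of transaction start ...
select dbms_lob.getlength(mylob), dbms_lob.substr(mylob, 10, 2000) from tab_lob where id = 3;
-- ... until the needed old version is gone, then ORA-01555 (and ORA-22924 for LOB content) is raised
commit;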


2.2. One Session


We open one single Sqlplus session (Session_3) and call lob_22924_select_update to start a query, then make updates in the same session:

-------------- Session_3@T3 update --------------

10:28:33 Sql > exec lob_22924_select_update(7, 1e2, 1);
  ------- Seq: 1
  LOB length check OK, length = 10000
  LOB null check OK
  LOB content check, substr =xxxxxxxxxx
  ------- Seq: 2
  LOB length check OK, length = 10000
  LOB null check OK
  LOB content check, substr =xxxxxxxxxx
  
  ...
  ------- Seq: 54
  LOB length check OK, length = 10000
  LOB null check OK
  BEGIN lob_22924_select_update(7, 1e2, 1); END;
  
  *
  ERROR at line 1:
  ORA-01555: snapshot too old: rollback segment number  with name "" too small
  ORA-22924: snapshot too old
  ORA-06512: at "SYS.DBMS_LOB", line 1109
  ORA-06512: at "S.LOB_22924_SELECT_UPDATE", line 23
  ORA-06512: at line 1
After about 50 seconds, it throws error ORA-01555 and ORA-22924.

lob_22924_select_update is a merge of the previous select (lob_22924_select) and update (lob_22924_update). When we run both in one single session, we get the same error. That means even though we update the current LOB version, the old consistent version is still kept in the same session, and any access to its content can hit the ORA-22924 error.


3. Retention Test


A LOB column can also be configured to store old versions of LOB data for a period of time via retention, rather than via a percentage of the LOB tablespace as with pctversion above.

First we change the DDL to use retention, fill data and show meta info:

---==================== RETENTION Test ====================---

alter system set undo_retention = 900;    --(Default 900)

drop table tab_lob cascade constraints;

create table tab_lob(id number, mylob clob) 
  lob (mylob) store as basicfile 
  (tablespace  test_ts
   enable      storage in row
   chunk       8192
   --pctversion 0
   retention   none
   nocache
   logging)
tablespace test_ts; 

declare
  l_cnt     number := 1e1;
 l_clob    clob   := to_clob(rpad('abc', 10000, 'x'));
begin
  for i in 1..l_cnt loop
    insert into tab_lob values(i, l_clob);
  end loop;
  commit;
end;
/   

select table_name, column_name, segment_name, pctversion, retention, retention_type 
  from dba_lobs where table_name in ('TAB_LOB');  
  
TABLE_NAME COLUMN_NAME SEGMENT_NAME              PCTVERSION  RETENTION RETENTION_TYPE
---------- ----------- ------------------------- ---------- ---------- --------------
TAB_LOB    MYLOB       SYS_LOB0003449216C00002$$                   900 YES
We can see that the default retention is picked up from the undo_retention default of 900 seconds. To speed up our test, we can lower it to a short time, for example, 3 seconds:

alter system set undo_retention = 3;  

alter table tab_lob modify lob (mylob) (retention);  
  
select table_name, column_name, segment_name, pctversion, retention, retention_type 
  from dba_lobs where table_name in ('TAB_LOB');  
  
TABLE_NAME COLUMN_NAME SEGMENT_NAME              PCTVERSION  RETENTION RETENTION_TYPE
---------- ----------- ------------------------- ---------- ---------- --------------
TAB_LOB    MYLOB       SYS_LOB0003449216C00002$$                     3 YES
Then we restore the original undo_retention and recompile the invalidated procedures:

alter system set undo_retention = 900; 

alter procedure lob_22924_select compile;
alter procedure lob_22924_select_update compile;
alter procedure lob_22924_update compile;    
Now we can repeat the same tests (two sessions or one session) as for pctversion, and get the same errors.

With retention, it can take a longer time to erase the kept CR copy because of Oracle AUM (Automatic Undo Management). Therefore it needs a higher number of update loops (p_cnt) when calling lob_22924_select_update to hit ORA-22924.

We also noticed that when running several concurrent update sessions (each updating a different row), the error appears quicker and more frequently because all LOB data (from different rows) is stored in the same datafile.
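
A hedged sketch for driving such concurrent updates without opening many terminals: submit a few scheduler jobs, each updating a different row (the job names and loop bounds are illustrative only):

begin
  for j in 1..4 loop
    dbms_scheduler.create_job(
      job_name   => 'LOB_22924_UPD_' || j,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'begin lob_22924_update(' || j || ', 1e4, 0.01); end;',
      enabled    => true,
      auto_drop  => true);   -- job is dropped automatically after completion
  end loop;
end;
/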


4. Fix


The problem with ORA-22924 is that we are retaining a LOB Locator pointing to an old version of the LOB data; if this old version gets too old (overwritten by newer versions), we hit the error when accessing that Locator.

Oracle permanent LOB CR views are implemented by versions (different copies) to conform to the ANSI standard (ACID regime). LOB data does not generate rollback information (undo); only the LOB Index generates undo/redo because it is implemented through the normal Oracle undo/redo mechanism.

However, for Oracle temporary LOBs, CR, undo, and versions are not supported. They are stored in the temporary tablespace and are session-private.
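
Temporary LOB usage can be observed per session, which helps to verify that the fix below really allocates and frees its CR copy. A minimal sketch using v$temporary_lobs (one row per session holding temporary LOBs):

select sid, cache_lobs, nocache_lobs, abstract_lobs
  from v$temporary_lobs
 where sid = sys_context('userenv', 'sid');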

In the fix below, the LOB content is first copied to a local temporary LOB so that we always preserve a CR copy for later access (analogous to the normal CR view as of the point of query start).

Note that dbms_lob.copy is used to create a new CR copy (a new instance) of the LOB content. Direct LOB assignment by Sql or Plsql does not fix ORA-22924: snapshot too old.

create or replace procedure lob_22924_select_fix(p_id number, p_cnt number, p_sleep number) as
  l_clob           clob;
  l_null_check     boolean;
  l_clob_temp      clob;
begin
  -- create temporary lob
  dbms_lob.createtemporary(lob_loc => l_clob_temp, cache => true, dur => dbms_lob.call);
  
  select mylob into l_clob from tab_lob where id = p_id;
  
  -- copy content of Permanent LOB Locator to Temporary LOB Locator (pass by value)
  dbms_lob.copy(dest_lob => l_clob_temp, src_lob => l_clob, amount => dbms_lob.getlength(l_clob));
  
  -- Note: two following approaches not fix ORA-22924: snapshot too old 
  -- because Temporary LOB Locator is overwritten by Permanent LOB Locator
  -- and Temporary LOB Locator points to the Permanent LOB Locator (pass by pointer). 
  --   select mylob into l_clob_temp from tab_lob where id = p_id;   -- not fix ORA-22924 by Sql
  --   l_clob_temp := l_clob;                                        -- not fix ORA-22924 by Plsql
  
  for i in 1..p_cnt loop
    dbms_output.put_line('------- Seq: '||i);
   
    -- getlength, no error
    dbms_output.put_line('LOB length check OK, length = '||dbms_lob.getlength(l_clob_temp));
    
    -- null check, no error
    l_null_check := l_clob_temp is null;
    l_null_check := l_clob_temp is not null;
    dbms_output.put_line('LOB null check OK');
    
    -- content access by local temp copy, not throw ORA-22924 under ORA-01555
    dbms_output.put_line('LOB content check, substr ='||dbms_lob.substr(l_clob_temp, 10, 2000));
    
    dbms_lock.sleep(p_sleep);
  end loop;
  
  -- free temporary lob
  dbms_lob.freetemporary(lob_loc => l_clob_temp);
end;
/
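
To verify the fix, the two-session test can be repeated with the fixed procedure (a sketch of the expected usage, not a captured run):

-- Session_1: read through the session-private temporary CR copy
exec lob_22924_select_fix(3, 1e2, 1);

-- Session_2: concurrent updates on the same row
exec lob_22924_update(3, 1e4, 0.01);

-- Session_1 is expected to complete all iterations without ORA-01555/ORA-22924,
-- because dbms_lob.substr now reads the temporary copy instead of the old permanent LOB version.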