Thursday, March 28, 2013

Shared Pool - SQLA and KGLH0


In the last blog, Shared Pool - Memory Allocation - unbalanced, we saw that the SQLA allocation in Subpool_1 is desperately low:

   Subpool_1 has   1'110 KB for SQLA.
   Subpool_2 has 588'943 KB for SQLA.

So if a statement requires more than 1'110 KB of SQLA in Subpool_1, the request cannot be satisfied.

It is not clear why SQLA in Subpool_1 is so extremely low. One possible guess is that KGLH0 in Subpool_1 is too high, and there is some limit on the total SQL memory usage in each subpool.

In fact, the alert log contains text like:

   Memory Notification: Library Cache Object loaded into SGA
   Heap size 74879K exceeds notification threshold (51200K)

Actually this warning is controlled by the hidden parameter:

   "_kgl_large_heap_warning_threshold"

with default setting 52428800 (52428800/1024 = 51200K), which is documented as:

 maximum heap size before KGL writes warnings to the alert log 

One can also monitor memory consumption by:

 select sql_id, alloc_class, sum(chunk_size)/1024 sum_kb, count(*) chunk_cnt 
 from v$sql_shared_memory 
 --where sql_id = ':sql_id'   -- not working with sql_id
 group by sql_id, alloc_class
 order by sum_kb desc;

As we observed in this incident, the low SQLA caused frequent cursor age-outs and, consequently, reloading/hard parsing, or eventually session dumps with:

   ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","SELECT MAX(XX) FROM...","SQLA","tmp")


   ORA-04031: unable to allocate 48 bytes of shared memory ("shared pool","select yy from tt whe...","TCHK^3fefd486","qcsqlpath: qcsAddSqlPath")

and the dump files contain some lines like:

     Chunk  700000088ff8000 sz=     4096    freeable  "SQLA^8b7ceb5a  "  ds=7000000a88fafc8
where "8b7ceb5a" is the hash value of sql_id (output of dbms_utility.sqlid_to_sqlhash, or mod by 131072 is hash bucket number in library cache).

It seems that the subpool id within the shared pool is determined by:
     mod(mod(to_number('8b7ceb5a', 'xxxxxxxxx'), 131072), 7) + 1
or directly from the sql_id:
     mod(mod(dbms_utility.sqlid_to_sqlhash(:sql_id), 131072), 7) + 1
if all 7 subpools are in use.

That means the subpool for a given SQL statement is chosen based on its sql_id.
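
The formula above can be checked with a little arithmetic outside the database. The following Python sketch mirrors it; note this is an observation-based guess, not documented Oracle behavior:

```python
# Observation-based sketch, not documented Oracle behavior: derive the
# shared pool subpool from the library cache hash value of a cursor.
# "8b7ceb5a" is the hex hash seen in heapdump comments like "SQLA^8b7ceb5a";
# its decimal value is what dbms_utility.sqlid_to_sqlhash(:sql_id) returns.

def subpool_for_hash(hex_hash: str, subpool_count: int = 7) -> int:
    # mod(mod(hash_value, 131072), subpool_count) + 1
    hash_value = int(hex_hash, 16)
    bucket = hash_value % 131072       # library cache hash bucket number
    return bucket % subpool_count + 1  # subpool numbering starts at 1

print(subpool_for_hash("8b7ceb5a"))    # -> 2
```

With 7 subpools active, the formula maps this cursor's hash to Subpool_2.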

So it is worth experimenting a little more with the SQLA area.

1. At first, make a heapdump of shared pool by:

   SQL> oradebug dump heapdump 2

   The trace file contains lines like:
      Chunk  700000088ff8000 sz=     4096    freeable  "SQLA^8b7ceb5a  "  ds=7000000a88fafc8
      Chunk  700000088ff9000 sz=     4096    freeable  "SQLA^8b7ceb5a  "  ds=7000000a88fafc8

2. Then a heapdump_addr dump of SQLA by:

   SQL> oradebug dump heapdump_addr 1 0X7000000a88fafc8

    *** 2013-03-12 10:38:24.072
    Processing Oradebug command 'dump heapdump_addr 1 0X7000000a88fafc8'
    HEAP DUMP heap name="SQLA^8b7ceb5a"  desc=7000000a88fafc8
     extent sz=0xfe8 alt=32767 het=368 rec=0 flg=2 opc=2
     parent=700000000000198 owner=7000000a88fae88 nex=0 xsz=0xfe8 heap=0
     fl2=0x67, nex=0, dsxvers=1, dsxflg=0x0
     dsx first ext=0x8c5e2f90
    EXTENT 0 addr=7000000902996a0
      Chunk  7000000902996b0 sz=     4056    freeable  "TCHK^8b7ceb5a  "  ds=70000008c5e3a98
    EXTENT 85 addr=7000000902ee6a8
      Chunk  7000000902ee6b8 sz=     4056    freeable  "TCHK^8b7ceb5a  "  ds=70000008c5e3a98

3. Finally a heapdump_addr dump of TCHK ("Typecheck heap") by:

   SQL> oradebug dump heapdump_addr 1 0X70000008c5e3a98

    *** 2013-03-12 10:46:42.079
    Processing Oradebug command 'dump heapdump_addr 1 0X70000008c5e3a98'
    HEAP DUMP heap name="TCHK^8b7ceb5a"  desc=70000008c5e3a98
     extent sz=0xfc0 alt=32767 het=32767 rec=0 flg=2 opc=2
     parent=7000000a88fafc8 owner=7000000a88fae88 nex=0 xsz=0xfc0 heap=0
     fl2=0x67, nex=0, dsxvers=1, dsxflg=0x0
     dsx first ext=0x8c5a7b30
    EXTENT 0 addr=7000000902996c8
      Chunk  7000000902996d8 sz=      608    free      "               "
      Chunk  700000090299938 sz=       40    freeable  "chedef : qcuatc"

From the last dump file, we can see that the top 5 memory consumers carry comments like:
  Chunk  700000094527010 sz=      112    freeable  "optdef: qcopCre"
  Chunk  7000000a16d2678 sz=      152    freeable  "opndef: qcopCre"
  Chunk  700000092c9e160 sz=      288    freeable  "kkojnp - infode"
  Chunk  7000000902bea50 sz=       40    freeable  "chedef : qcuatc" 
  Chunk  7000000902ebf58 sz=      184    freeable  "kggec.c.kggfa  "

The following queries can also show memory consumption:

select *
  from v$sql_shared_memory;

select sql_id, typecheck_mem, type_chk_heap, sql_text
  from v$sql
 where typecheck_mem > 0;

 v$sql_shared_memory.HEAP_DESC points to
     ds=7000000a88fafc8 in "SQLA^8b7ceb5a  " 
 v$sql_shared_memory.SUBHEAP_DESC points to
     ds=70000008c5e3a98 in  "TCHK^8b7ceb5a  "

 v$sql.TYPECHECK_MEM gives the total memory of SQLA, which can also be obtained as sum(chunk_size) from v$sql_shared_memory,
so you can pick HEAP_DESC and SUBHEAP_DESC to make the SQLA and TCHK dumps.
As tested, the majority (>90%) of memory in SQLA is consumed by TCHK.


Similar to SQLA, one can find the ds for "KGLH0^8b7ceb5a " in shared pool heapdump:
   Chunk  7000000a1efce68 sz=     4096    freeable  "KGLH0^8b7ceb5a "  ds=7000000a1d9c450
and then make an address dump for "KGLH0^8b7ceb5a " (output aggregated below):

  Total_size #Chunks  Chunk_size,        From_heap,       Chunk_type,  Alloc_reason
  ---------- ------- ------------ ----------------- ----------------- -----------------
        3296    1       3296 ,   KGLH0^8b7ceb5a,             perm,  perm          
        1960    1       1960 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
        1760    1       1760 ,   KGLH0^8b7ceb5a,             perm,  perm          
        1392    2        696 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
        1384    1       1384 ,   KGLH0^8b7ceb5a,             perm,  perm          
        1152    8        144 ,   KGLH0^8b7ceb5a,         freeable,  context chain 
         880    1        880 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
         760    5        152 ,   KGLH0^8b7ceb5a,         freeable,  kgltbtab      
         720    1        720 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
         712    1        712 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
         656    1        656 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
         608    1        608 ,   KGLH0^8b7ceb5a,         freeable,  policy chain  
         416    1        416 ,   KGLH0^8b7ceb5a,             free,                
         376    1        376 ,   KGLH0^8b7ceb5a,             free,                
          80    1         80 ,   KGLH0^8b7ceb5a,             perm,  perm          
          48    1         48 ,   KGLH0^8b7ceb5a,             free,

These chunks seem related to VPD. The following query:
     select * from v$vpd_policy v where sql_id = :sql_id; 
returns 8 rows, which confirms the above 8 "policy chain" chunks and 8 "context chain" chunks.


A library_cache dump ("library_cache level 16") reveals that each child cursor is associated with one KGLH0 and one SQLA. It seems that KGLH0 stores environment information, whereas SQLA stores the parse tree and execution plan. When memory is reclaimed under memory pressure, KGLH0 is kept, whereas SQLA is deallocated. A later re-use of that child cursor then results in a hard parse using the kept KGLH0 info.

Occasionally for certain statements, SQL Trace (10046) shows:

SQL ID: 67kamvx1dz051


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute    233      0.01       0.85          0          0          0           0
Fetch      233      0.00       0.01          0       1494          0          78
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      466      0.02       0.86          0       1494          0          78

Misses in library cache during parse: 0
Misses in library cache during execute: 2

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch: shared pool                             12        0.00          0.03
  latch: row cache objects                        3        0.29          0.79
probably indicates such hard parsing. The wait events "latch: shared pool" and "latch: row cache objects" also provide evidence of hard parsing.

It means that while executing the statement (233 times), the required child cursor could always be found via KGLH0, hence no Parse call during parse; but occasionally the xplan had been evicted (2 times) and had to be newly created during execute.

The shared pool heapdump shows that both SQLA and KGLH0 are allocated in units of 4096 bytes (chunk_size). So this memory is only allocated starting from:
     Bucket 240 size=4096
in the Free List (memory in the Free List is partitioned into 255 buckets, numbered 0 to 254).

A big chunk size generates less fragmentation, but also less efficient memory utilization, since one ORA-04031 says:
    unable to allocate 32 bytes of shared memory ("shared pool","SELECT MAX(XX) FROM...","SQLA","tmp")
so a 32-byte memory demand would be boosted into a 4096-byte allocation.
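
The rounding effect can be shown with trivial arithmetic; this is purely illustrative and not a model of Oracle's real allocator:

```python
# Purely illustrative: if SQLA/KGLH0 chunks are carved in 4096-byte units,
# even a small request consumes a full unit.
def allocated_size(request_bytes: int, unit: int = 4096) -> int:
    # round the request up to the next multiple of the allocation unit
    return ((request_bytes + unit - 1) // unit) * unit

print(allocated_size(32))    # -> 4096
```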

Shared Pool - Memory Allocation - unbalanced

Continuing from my last field experience of ORA-04031 (Shared Pool - KKSSP), I would like to share my further observations of shared pool memory allocation.

At first I expose some figures (facts), then try to discuss the unbalanced memory allocation among subpools, the eventual cause of ORA-04031.


1. Areas (Subheaps)

There are 959 areas (subheaps) in the shared pool dump of this Oracle release.

The query:

   select count(*) from v$sgastat where pool = 'shared pool';

returns only 881, so not all areas are registered in v$sgastat, for example, "kokcd" and "post agent".
(probably you will see 1062 in a newer Oracle release, and 35, 593, 729, 857 in the earlier releases enumerated in the book Oracle Performance Firefighting)

2. Subpools

Subpool sizes can differ by more than 20% (ranging from 2'684'354 KB to 3'355'443 KB, all sizes in KB), and the set of areas in each subpool also differs. There are 959 areas in total, but each subpool holds at most 316 of them.

Name          Subpool_1  Subpool_2  Subpool_3  Subpool_4  Subpool_5  Subpool_6  Subpool_7  Sum
Subpool_Size  2'684'354  3'355'443  3'355'443  2'684'354  3'355'443  2'684'354  3'355'443  21'474'836
Area_Count    295        303        308        306        306        308        316        959

Top 5 Areas (size in KB):

Name                   Subpool_1  Subpool_2  Subpool_3  Subpool_4  Subpool_5  Subpool_6  Subpool_7  Sum
KKSSP                  617'693    563'340    590'296    590'798    577'532    643'504    577'237    4'160'404
db_block_hash_buckets  529'039    535'429    529'039    539'525    535'429    529'044    535'429    3'732'935
KGLH0                  554'532    485'312    464'006    353'634    450'528    346'045    332'455    2'986'514
SQLA                   1'110      588'943    565'317    185'664    574'008    155'498    272'452    2'342'994
free memory            306'562    358'029    353'146    306'659    342'194    325'386    304'535    2'296'513

3. Single Area Allocation

There are 649 areas allocated to only one single subpool. 110 areas are allocated to two subpools, but often one of the two gets merely 24 bytes and the other gets the rest; for example, FileOpenBlock has 24 bytes in Subpool_6 and 510,025,424 bytes in Subpool_7.

  Here are the top 10 (all size in Byte):
Name                    Subpool_1   Subpool_2   Subpool_3   Subpool_4    Subpool_5   Subpool_6   Subpool_7    Sum         ALLOC_COUNT
----------------------- ----------- ----------- ----------- ------------ ----------- ----------- ------------ ----------- -----------
FileOpenBlock                                                                        24          510,025,424  510,025,448           2
enqueue                 39,257,104                                                               24           39,257,128            2
KSK VT POOL                                                 24           19,130,696                           19,130,720            2
ktlbk state objects                             17,971,200                                                    17,971,200            1
Wait History Segment                                                                 15,733,120               15,733,120            1
Global Context                                              11,017,024                                        11,017,024            1
call                    24           10,057,864                                                               10,057,888            2
keswx:plan en                        10,011,896                                                               10,011,896            1
KEWS sesstat values                                         9,802,944                                         9,802,944             1
FileIdentificatonBlock               24         7,711,368                                                     7,711,392             2

4. Unbalanced Areas

The following areas' allocations are not well balanced (difference from the average > 30%).

  The top 10 are (all size in Byte):

Name                 Subpool_1   Subpool_2   Subpool_3   Subpool_4    Subpool_5   Subpool_6   Subpool_7    Sum           ALLOC_COUNT
-------------------- ----------- ----------- ----------- ------------ ----------- ----------- ------------ ------------- -----------
SQLA                 1,110,321   588,943,273 565,317,412 185,664,243  574,008,432 155,498,158 272,452,386  2,342,994,419           7
FileOpenBlock                                                                     24          510,025,424  510,025,448             2
SQLP                 761,808     66,155,040  61,548,864  15,324,936   57,623,952  10,234,544  21,440,800   233,089,944             7
KGLS                 737,792     9,169,960   8,730,128   4,925,272    8,779,216   4,559,760   5,985,728    42,887,856              7
KQR M PO             30,720      799,232     512         255,896      30,568,128  138,680     8,590,256    40,383,424              7
enqueue              39,257,104                                                               24           39,257,128              2
PLDIA                0           7,246,864   8,072,928   4,285,000    7,129,088   3,520,016   5,066,408    35,320,304              7
write state object   4,662,208   4,662,184   48          9,324,368    24          4,662,208   4,662,208    27,973,248              7
KQR L PO                         924,032     21,262,248  117,768                                           22,304,048              3
PRTDS                241,720     4,668,976   4,023,352   2,747,344    5,001,776   2,203,384   1,311,744    20,198,296              7


Based on the above figures, we can see that each subpool holds at most about 1/3 of all areas, and certain areas are allocated to only 1 or 2 subpools. For example,

  • "FileOpenBlock" is in Subpool_6 and Subpool_7, but Subpool_6 has only 24 Bytes.
  • "enqueue" is in Subpool_1 and Subpool_7, but Subpool_7 has only 24 Bytes.
  • "PLDIA" is in Subpool_1 and Subpool_7, but Subpool_1 has 0 Bytes.
  • "Global Context" is only in Subpool_4.

Now we have some more explanation for the "Global Context" contention experienced in the past, since that area lives entirely in one single subpool.

Certain areas are extremely unbalanced among subpools. For example,

  • Subpool_1 has   1'110 KB for SQLA.  
  • Subpool_2 has 588'943 KB for SQLA.

That probably results in the error:

       ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","SELECT MAX(XX) FROM...","SQLA","tmp")

(probably Oracle can improve the error message by including the Subpool number)

Now I wonder if this shared pool architecture is, by design, hard to keep balanced. For static area allocations like "db_block_hash_buckets" it may be OK, but dynamic areas with frequent memory fragmentation put tremendous pressure on memory management.

ORA-04031 is a cumulative outcome of a certain period of ineffective memory usage. Therefore it is hard to predict and reproduce. Quite often it is thrown by a victim session, which is not necessarily the cause of the error.

From what we learned in LISP (CLOS) and Java, which use automatic memory management (garbage collection), we can also say that shared pool memory management poses a remarkable challenge for this technique, and hence has a long way to go before being perfect.

For example, the Oracle shared pool durations "instance", "session", "cursor", and "execution" (vs. LISP garbage collector generations) probably appear as the second number in the heap dump title:
   HEAP DUMP heap name="sga heap(1,0)"
where 1 is the subpool number and 0 is the duration.

If Redo is claimed to be the most critical mechanism in Oracle, the shared pool should probably be declared the most sophisticated one.

BTW, to inspect the number of subpools, run the following query (instead of checking "_kghdsidx_count"):

    select * from v$latch_children where name = 'shared pool';

It returns 7 rows (maximum 7 subpools), but only the active ones are those with higher GETS (or MISSES).

Wednesday, March 27, 2013

Shared Pool - KKSSP

In a recent DB hang caused by ORA-04031, we saw huge allocations of KKSSP, and session dumps with:

     ORA-04031: unable to allocate 256 bytes of shared memory ("shared pool","unknown object","KKSSP^9876","kgllk").

Even the smallest memory chunk of 32 bytes is no longer obtainable:

     ORA-04031: unable to allocate 32 bytes of shared memory
The DB is configured with a 20GB shared pool and a 300GB buffer cache, running as dedicated server with 6000 concurrently logged-in sessions.

Additionally, Automatic Shared Memory Management (ASMM) is disabled and cursor_sharing is EXACT
("_memory_imm_mode_without_autosga" is set to FALSE, that is, immediate mode without sga/memory target is disallowed;
see Oracle Doc ID 1269139.1: SGA Re-Sizes Occurring Despite AMM/ASMM Being Disabled (MEMORY_TARGET/SGA_TARGET=0)).

The whole KKSSP consumption amounted to 4GB just before the crash, i.e. about 700KB per session on average.

A heapdump of shared pool shows that all KKSSP are listed with comment like:

      Chunk  70000009fad14d0 sz=     2136    freeable  "KKSSP^2566     "  ds=7000000A5DA1798

where 2566 is the SID of the login session. So KKSSP is allocated per session and is a session-specific area (subheap) in the shared pool.

This contradicts the common belief that the shared pool is (at least in its majority) shared across all sessions, because these are not negligible memory consumers.

Pick the address marked ds (the heap descriptor), and dig further with a KKSSP address dump:

   SQL> oradebug dump heapdump_addr 2 0X7000000A5DA1798
then aggregate by Tanel's  Heapdump Analyzer:

  Total_size #Chunks  Chunk_size,     From_heap,       Chunk_type,  Alloc_reason
  ---------- ------- ------------ ----------------- --------------- -----------------
      188160  735        256 ,       KKSSP^2566,         freeable,  kgllk         
      181504  709        256 ,       KKSSP^2566,         freeable,  kglpn         
       56320  220        256 ,       KKSSP^2566,         freeable,  KQR ENQ       
       28896  516         56 ,       KKSSP^2566,         freeable,  kglseshtSegs  
       12312    1      12312 ,       KKSSP^2566,         freeable,  kglseshtTable

The above table shows that the top 3 memory consumers are kgllk, kglpn and KQR ENQ. Each single one of them needs 256 bytes. More than half of the memory is allocated to kgllk and kglpn, since the application is coded in PL/SQL packages and types, which require kgllk and kglpn during each call to keep them stateful.
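
The aggregation that Tanel's Heapdump Analyzer performs over such trace files can be sketched in a few lines of Python; the regex and field layout are assumptions based on the chunk lines quoted in this post:

```python
# Minimal sketch of a heapdump aggregator: parse "Chunk ..." lines from a
# heapdump trace and total chunk sizes/counts by (chunk type, alloc reason).
import re
from collections import defaultdict

# matches e.g.: Chunk  70000009fad14d0 sz=     2136    freeable  "kgllk          "
CHUNK_RE = re.compile(r'Chunk\s+\w+\s+sz=\s*(\d+)\s+(\S+)\s+"([^"]*)"')

def aggregate(trace_lines):
    totals = defaultdict(lambda: [0, 0])   # (type, reason) -> [total_size, count]
    for line in trace_lines:
        m = CHUNK_RE.search(line)
        if m:
            size, chunk_type, reason = int(m.group(1)), m.group(2), m.group(3)
            key = (chunk_type, reason.strip())
            totals[key][0] += size
            totals[key][1] += 1
    return dict(totals)

sample = [
    'Chunk  70000009fad14d0 sz=     2136    freeable  "KKSSP^2566     "',
    'Chunk  70000009fad1d28 sz=      256    freeable  "kgllk          "',
    'Chunk  70000009fad1e28 sz=      256    freeable  "kgllk          "',
]
print(aggregate(sample))
```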

To inspect the objects touched by kgllk and kglpn, the following query lists them:

select s.sid, username, logon_time
      ,(select kglnaobj||'('||kglobtyd||')' from x$kglob x
         where x.kglhdadr = v.object_handle and rownum = 1) kobj_name
from v$libcache_locks v, v$session s
where v.holding_session = s.saddr
  and s.sid = 2566;

The above query can also help to debug "library cache pin" and "library cache lock" waits.

select * from x$kglob
 where kglhdadr in (select p1raw from v$session where sid = :blocked_session);

Instead of Heapdump, a direct way to get the memory consumption for a session is:

 select count(*), sum(ksmchsiz) from x$ksmsp where ksmchcom='KKSSP^2566';

Following select can also give the address for KKSSP address Heapdump:

 select ksmchpar from x$ksmsp where ksmchcom='KKSSP^2566' and ksmchcls = 'recr'; 

 (The output looks like 07000000A5DA1798; replace the first 0 with 0X to dump:
  oradebug dump heapdump_addr 2 0X7000000A5DA1798)
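
The address transformation above can be scripted; the helper below and its name are my own, built only on the "replace first 0 with 0X" rule just described:

```python
# Hypothetical helper (my own naming): build the oradebug command from the
# ksmchpar value returned by x$ksmsp, e.g. "07000000A5DA1798".
def heapdump_addr_cmd(ds_addr: str, level: int = 2) -> str:
    # strip the leading zero(s) of the hex address and prefix with 0X
    return "oradebug dump heapdump_addr %d 0X%s" % (level, ds_addr.lstrip("0"))

print(heapdump_addr_cmd("07000000A5DA1798"))
# -> oradebug dump heapdump_addr 2 0X7000000A5DA1798
```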

By the way, the above dump probably finds the lost "KQR ENQ" discussed on page 169 of the book
Oracle Core: Essential Internals for DBAs and Developers:

... when I ran the same query against an instance of there was no entry for KQR ENQ ...

We can see that "KQR ENQ" has now moved into KKSSP.

MOS: ORA-4031 Or Excessive Memory Consumption On KKSSP Due To Parse Failures (Doc ID 2369127.1)
   KKSSP is just a type of internal memory allocation related to child cursors.

I also noticed that high usage of kgllk and kglpn was allied with heavy contention on the KOKC latch (kokc descriptor allocation latch), which is responsible for pinning, unpinning and freeing objects. KOKC is a single latch without children, thus a single point of contention.