Monday, February 23, 2015

Streams Capture CCA Mode

Capture CCA Mode



Summary

In Oracle 11g a new Streams replication feature was added: the Streams capture process can run in combined capture and apply (CCA) mode. When capture runs in this mode it captures the LCRs and sends them directly to the remote apply process. Unfortunately there can be issues with CCA mode where Streams effectively hangs and stops propagating changes.

This document details the symptoms of the issue and the known workarounds to resolve it.

The current capture mode can be determined with

SQL> select CAPTURE_NAME, decode(OPTIMIZATION,0,'Non-CCA','CCA') from gv$streams_capture
  2  /
CAPTURE_TO_NYDB  CCA
CAPTURE_TO_LNDTB CCA


Symptoms/Observations

During maintenance or release activity the capture process or database is restarted. The symptoms of the issue are:

1. first_scn is two to three days in the past whereas the required_checkpoint_scn is up to date.

SQL> set lines 132
SQL> column fcn format a35
SQL> column rcn format a35
SQL> column capture_name format a20
SQL> column first_scn format 99999999999999
SQL> column required_checkpoint_scn format 99999999999999

SQL> select capture_name, first_scn, scn_to_timestamp(first_scn) fcn,
  2         required_checkpoint_scn, scn_to_timestamp(required_checkpoint_scn) rcn
  3  from dba_capture;

CAPTURE_NAME               FIRST_SCN FCN                                 REQUIRED_CHECKPOINT_SCN RCN
-------------------- --------------- ----------------------------------- ----------------------- ----------------------------------
CAPTURE_TO_QNYDB    7420845030624 05-DEC-14 11.00.59.000000000 AM               7423420410445 08-DEC-14 01.43.57.000000000 PM
CAPTURE_TO_QSGDB    7420840301551 05-DEC-14 10.01.04.000000000 AM               7423420469291 08-DEC-14 01.51.35.000000000 PM

2. The capture state is showing as WAITING FOR INACTIVE DEQUEUERS

SQL> select capture_name, rtrim(state) from gv$streams_capture ;


Workarounds

The first step is to set first_scn equal to the required_checkpoint_scn.

SQL> select capture_name, first_scn, required_checkpoint_scn from dba_capture ;

CAPTURE_NAME               FIRST_SCN REQUIRED_CHECKPOINT_SCN
-------------------- --------------- -----------------------
CAPTURE_TO_QNYDB    7420845030624           7423420410445
CAPTURE_TO_QSGDB   7420840301551           7423423914847

Log in as the strmadmin user.

Note the required_checkpoint_scn for the capture that is not working, then run:

begin
  dbms_capture_adm.alter_capture(capture_name => '[capture]',
                                 first_scn    => [required_checkpoint_scn]);
end;
/

exec dbms_capture_adm.stop_capture('[capture_name]')
exec dbms_capture_adm.start_capture('[capture_name]')

At this point restart the capture and confirm it is no longer stuck waiting for inactive dequeuers. If the issue persists, force a capture checkpoint.

execute dbms_capture_adm.set_parameter('[capture]', '_CHECKPOINT_FORCE', 'Y');
stop/start capture


It has been observed that occasionally the above fails to remediate the problem and the capture is still stuck. In this case, disable CCA mode for the capture that is not working. This is done by setting a capture parameter.

exec dbms_capture_adm.set_parameter('[capture_name]', '_DIRECT_APPLY', 'N');
stop/start only the affected capture

exec dbms_capture_adm.stop_capture('[capture_name]')
exec dbms_capture_adm.start_capture('[capture_name]')


Capture should then proceed to capture messages; however, this does not resolve the initial problem…

I believe the nature of the CCA bug is that once the capture has mined past the point at which it was restarted, and has checkpointed so that it no longer needs to re-read from a previous SCN, it will work again. So if forcing a checkpoint and setting first_scn does not work, leave the capture running in non-CCA mode for several hours and then repeat the workaround steps above, switching CCA mode back on as shown below.
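Once the capture has mined past that window, CCA mode can be switched back on. A minimal sketch, assuming the same placeholder capture name as above (this sets _DIRECT_APPLY back to 'Y'; passing NULL instead should reset the parameter to its default):

exec dbms_capture_adm.set_parameter('[capture_name]', '_DIRECT_APPLY', 'Y');
exec dbms_capture_adm.stop_capture('[capture_name]')
exec dbms_capture_adm.start_capture('[capture_name]')

Then check gv$streams_capture again to confirm OPTIMIZATION is reported as CCA.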



Row cache enqueue lock

Row cache enqueue lock 

Question: What are row cache enqueue locks? I see many row cache enqueue locks, and I need to know how to remove this wait event.

The row cache enqueue lock is caused by DDL locking portions of the data dictionary when you create a table or index or run other DDL commands. The row cache lock escalates into the row cache enqueue lock when multiple DDL statements are waiting to execute. The row cache enqueue locks are allocated within the shared_pool_size region of the SGA.

If the lock wait exceeds a predefined threshold, the DDL will abort and cause a trace file to be generated. This trace file will contain the string “WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK”.

Note: Long-running DDL (e.g. alter table xxx move) is far more likely to generate a row cache enqueue lock than a fast-running DDL statement (e.g. truncate table).
The solution to excessive row cache enqueue lock waits is:
1 - Serialize the DDL by running only a small number of concurrent DDL tasks.
2 - If the row cache enqueue locks are for dc_sequences, consider using DBMS_SHARED_POOL.KEEP to pin your sequences (see the sketch below).
Specifically, the row cache enqueue locks are taken on portions of the data dictionary cache, and they tend to be concentrated in dc_objects and dc_tablespaces.
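For point 2, a minimal sketch of pinning a sequence with DBMS_SHARED_POOL.KEEP (the owner and sequence name are placeholders; the DBMS_SHARED_POOL package may need to be installed via dbmspool.sql first):

SQL> exec DBMS_SHARED_POOL.KEEP('SCOTT.MY_SEQ', 'Q');

The 'Q' flag tells the package that the object being kept is a sequence.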
You can run this script to see data dictionary row cache statistics by querying the v$rowcache view:

REM SET FLUSH OFF
REM SET TERM OFF
SET HEAD ON
SET PAGESIZE 59
SET LINESIZE 79
COLUMN parameter FORMAT A20
COLUMN type FORMAT a11
COLUMN percent FORMAT 999.99    HEADING "%";
COLUMN gets FORMAT 999,999      HEADING 'Gets'
COLUMN getmisses FORMAT 999,999 heading 'Get|Misses'
COLUMN count FORMAT 999,999     heading 'Count'
COLUMN usage FORMAT 999,999     HEADING 'Usage'
ttitle "DATA DICTIONARY CACHE STATISTICS"

SPOOL rep_out/ddcache.lis
SELECT
   parameter,
   type,
   gets,
   getmisses,
   ( getmisses / gets * 100) percent,
   count,
   usage
FROM
   v$rowcache
WHERE
   gets > 100 AND
   getmisses > 0
ORDER BY parameter;

SPOOL OFF



Thursday, February 19, 2015

ORA-01157 and ORA-01110 when trying to bring my tablespaces online




SQL> alter tablespace example online;
alter tablespace example online
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5:
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\EXAMPLE01.DBF'


How do I fix this ORA-01157 error?




ORA-01157: cannot identify/lock data file string - see DBWR trace file

Cause: The background process was either unable to find one of the data files or failed to lock it because the file was already in use.

First, check that the data file the database is trying to write to actually exists in the data dictionary, using:




SQL> select file_name, file_id from dba_data_files where file_name like 'FILE_NAME' ;



If it returns a row, the data file is part of the database and was expected to be written to at the time the query executed; if it returns no rows, there is no data file with the given name.


The database will prohibit access to this file, but other files will be unaffected. However, the first instance to open the database will need to access all online data files. An accompanying error from the operating system describes why the file could not be identified.

Action: Have the operating system make the file available to the database, then either open the database or run ALTER SYSTEM CHECK DATAFILES.



ORA-01157 is caused by an identification or locking issue with the database writer (DBWR) background process. During a recovery this can be caused by an unopened data file (i.e. database mounted but not open), a missing file, or a permissions problem on the file (e.g. missing write permission on the files owned by oracle).
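Once the file is visible to the instance again (restored, re-mounted, or permissions corrected), a minimal sketch of the follow-up steps, assuming the EXAMPLE tablespace from the error above:

SQL> ALTER SYSTEM CHECK DATAFILES;
SQL> ALTER TABLESPACE example ONLINE;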





Wednesday, February 11, 2015

Guaranteed Restore Points Oracle 11g

Guaranteed Restore Points Oracle 11g

Guaranteed restore points are a life-saver when it comes to application changes. They ensure that you can rewind the database to a point in time without traditional point-in-time recovery. Guaranteed restore points are basically aliases for SCNs.
A normal restore point assigns a restore point name to an SCN or specific point in time. The control file can maintain a record of thousands of normal restore points with no significant effect on database performance. Normal restore points eventually age out of the control file if not manually deleted, so they require no ongoing maintenance.
SQL> CREATE RESTORE POINT before_upgrade;
Creating a normal restore point eliminates the need to manually record an SCN in advance or determine the correct SCN after the fact by using features such as Flashback Query.
Like a normal restore point, a guaranteed restore point serves as an alias for an SCN in recovery operations. A principal difference is that guaranteed restore points never age out of the control file and must be explicitly dropped. In general, you can use a guaranteed restore point as an alias for an SCN with any command that works with a normal restore point. Except as noted, the information about where and how to use normal restore points applies to guaranteed restore points as well.
A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled. If flashback logging is enabled, then a guaranteed restore point enforces the retention of flashback logs required for Flashback Database to any SCN after the earliest guaranteed restore point.
This example shows how to create a guaranteed restore point:
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
Listing Restore Points
______________________
You can use the LIST command to list either a specific restore point or all restore points known to the RMAN repository. The variations of the command are as follows:
LIST RESTORE POINT restore_point_name;
LIST RESTORE POINT ALL;
RMAN indicates the SCN and time of the restore point, the type of restore point, and the name of the restore point. The following example shows sample output:
RMAN> LIST RESTORE POINT ALL;
using target database control file instead of recovery catalog
SCN              RSP Time  Type       Time      Name
---------------- --------- ---------- --------- ----
341859           28-JUL-06            28-JUL-06 NORMAL_RS
343690           28-JUL-06 GUARANTEED 28-JUL-06 GUARANTEED_RS
To see a list of all currently defined restore points (normal and guaranteed), use the V$RESTORE_POINT control file view, by means of the following query:
SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#,
GUARANTEE_FLASHBACK_DATABASE,STORAGE_SIZE
FROM V$RESTORE_POINT;
Dropping Restore Points
_______________________
When you are satisfied that you do not need an existing restore point, or when you want to create a new restore point with the name of an existing restore point, you can drop the restore point, using the DROP RESTORE POINT SQL*Plus statement. For example:
SQL> DROP RESTORE POINT before_app_upgrade;
Restore point dropped.
The same statement is used to drop both normal and guaranteed restore points.
Note:
Normal restore points eventually age out of the control file, even if not explicitly dropped.
Guaranteed restore points never age out of the control file. They remain until they are explicitly dropped.
Flashing back database till Restore Point
_________________________________________
Login to RMAN and bring the database to mount mode.
FLASHBACK DATABASE TO RESTORE POINT 'before_upgrade';
FLASHBACK DATABASE TO SCN 202381;
Open the database.
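Putting those steps together, a minimal sketch from RMAN (assuming the before_upgrade guaranteed restore point created earlier):

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> FLASHBACK DATABASE TO RESTORE POINT 'before_upgrade';
RMAN> ALTER DATABASE OPEN RESETLOGS;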
I normally create guaranteed restore points before doing a Data Guard switchover. It helps ensure you can go back to the state you started from (in case anything goes wrong).

PSU9 Patch -

Pre-Check
------------
oracle@Server20028933:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied

Patch  13060271     : applied on Tue Oct 29 14:34:42 GMT 2013
Patch  13615767     : applied on Tue Oct 29 14:30:57 GMT 2013
Patch  15947884     : applied on Tue Oct 29 14:23:16 GMT 2013
Patch  13696216     : applied on Wed Apr 18 09:22:21 BST 2012
Patch  13696251     : applied on Wed Apr 18 09:20:51 BST 2012

grid@Server20027095:/DB/oracle/grid/oracle_grid_patches/ [+ASM2] $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013
Patch  13954738     : applied on Tue Aug 21 10:58:54 BST 2012
Patch  13696216     : applied on Wed Apr 18 09:10:19 BST 2012


GRID
--------

stop all databases

as grid

unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME

grid@Server20028932:/DB/oracle/grid/oracle_grid_patches/ [+ASM1] $ORACLE_HOME/OPatch/opatch version
OPatch Version: 11.2.0.3.6

OPatch succeeded.


as oracle

unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
oracle@Server20027094:/DB/oracle/data/oracle_rdbms_patches $ [rdbms1120]$ORACLE_HOME/OPatch/opatch version       
OPatch Version: 11.2.0.3.6

OPatch succeeded.

as root
--------
sudo /DB/GRID/11.2.0.3/bin/crsctl stop crs ( all nodes in cluster )


as grid
--------
/DB/GRID/11.2.0.3/OPatch/opatch rollback -id 13954738 ( first node only )

as root
--------
sudo /DB/GRID/11.2.0.3/bin/crsctl start crs ( all nodes in cluster )


as grid
-------

grid@Server20027094:/DB/oracle/grid/oracle_grid_patches/ [+ASM1]  $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013
Patch  13696216     : applied on Wed Apr 18 09:10:19 BST 2012


as grid
-------

cd /DB/oracle/grid/oracle_grid_patches
unzip p17735354_112030_Linux-x86-64.zip

as root ( one server at a time in cluster )
-------
sudo /tmp/grid_rootpre.sh
/bin/umount /DB/orafs/dbpw
/DB/GRID/11.2.0.3/OPatch/opatch auto /DB/oracle/grid/oracle_grid_patches -oh /DB/GRID/11.2.0.3 -ocmrf /DB/GRID/11.2.0.3/OPatch/ocm/bin/emocmrsp

as grid
-------
grid@Server20027094:/DB/GRID/11.2.0.3/log/Server20027094/DBsd/ [+ASM1] $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  17540582     : applied on Tue Mar 04 13:10:33 GMT 2014
Patch  17592127     : applied on Tue Mar 04 13:07:11 GMT 2014
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013


as oracle
---------
oracle@Server20025086:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13060271     : applied on Mon Nov 25 15:07:55 GMT 2013
Patch  13615767     : applied on Mon Nov 25 15:05:05 GMT 2013
Patch  15947884     : applied on Mon Nov 25 15:01:18 GMT 2013
Patch  13696216     : applied on Wed Apr 18 09:22:21 BST 2012
Patch  13696251     : applied on Wed Apr 18 09:20:51 BST 2012

$ORACLE_HOME/OPatch/opatch rollback -id 15947884

oracle@Server20025089:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13060271     : applied on Mon Nov 25 15:14:56 GMT 2013
Patch  13615767     : applied on Mon Nov 25 15:13:56 GMT 2013
Patch  13696216     : applied on Wed Apr 18 09:22:21 BST 2012
Patch  13696251     : applied on Wed Apr 18 09:20:51 BST 2012
oracle@Server20025089:/DB/oracle/data $ [rdbms1120]

as grid
-------
sudo /tmp/grid_rootpre.sh  
/bin/umount /DB/orafs/dbpw
/DB/GRID/11.2.0.3/OPatch/opatch auto /DB/oracle/grid/oracle_grid_patches -oh /DB/oracle/product/11.2.0.3 -ocmrf /DB/oracle/product/11.2.0.3/OPatch/ocm/bin/emocmrsp


as oracle
---------
oracle@Server20028932:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  17540582     : applied on Thu Mar 13 11:32:22 GMT 2014
Patch  17592127     : applied on Thu Mar 13 11:28:44 GMT 2014
Patch  13615767     : applied on Tue Oct 29 14:30:57 GMT 2013


as grid
-------
cd /DB/oracle/grid/oracle_grid_patches
unzip p13954738_112039_Linux-x86-64.zip

as root
-------
sudo /tmp/grid_rootpre.sh

/DB/GRID/11.2.0.3/OPatch/opatch auto /DB/oracle/grid/oracle_grid_patches -oh /DB/GRID/11.2.0.3 -ocmrf /DB/GRID/11.2.0.3/OPatch/ocm/bin/emocmrsp


grid@Server20027094:/DB/GRID/11.2.0.3/log/Server20027094/DBsd/ [+ASM1] $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13954738     : applied on Tue Mar 04 13:25:27 GMT 2014
Patch  17540582     : applied on Tue Mar 04 13:10:33 GMT 2014
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013


RDBMS
------------

as oracle
----------
oracle@Server20025090:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  17540582     : applied on Sat Mar 08 11:16:00 GMT 2014
Patch  17592127     : applied on Sat Mar 08 11:12:19 GMT 2014
Patch  13615767     : applied on Mon Nov 25 15:13:56 GMT 2013

as oracle
---------
cd /DB/oracle/data/oracle_rdbms_patches
unzip p15947884_112038_Linux-x86-64.zip

cd 15947884

oracle@Server20025089:/DB/oracle/data/oracle_rdbms_patches/15947884 $ [rdbms1120]$ORACLE_HOME/OPatch/opatch apply

oracle@Server20025089:/DB/oracle/data/oracle_rdbms_patches/15947884 $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied

Patch  15947884     : applied on Sat Mar 08 11:49:40 GMT 2014
Patch  17540582     : applied on Sat Mar 08 11:02:25 GMT 2014
Patch  17592127     : applied on Sat Mar 08 10:58:43 GMT 2014
Patch  13615767     : applied on Mon Nov 25 15:13:56 GMT 2013

oracle@Server20025089:/DB/oracle/data/oracle_rdbms_patches/15947884 $ [rdbms1120]

start all the databases

Primary Side Only for each database

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
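To confirm the bundle patch is registered after catbundle.sql, a quick check against the registry history (a sketch; the output will vary per environment):

SQL> select action_time, action, version, id, comments
  2  from dba_registry_history
  3  order by action_time;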

Primary Side Only, once for any database (connect RMAN to the recovery catalog; the command must be entered a second time to confirm the upgrade)

UPGRADE CATALOG;
UPGRADE CATALOG;

Monday, February 9, 2015

Running SQLs


Query to check running SQL statements on the Oracle database.



set lines 400
col USERNAME for a10
col SCHEMANAME for a10
col PROCESS for a10
col BLOCKING_SESSION for 9999999
col SECONDS_IN_WAIT for 9999999
col PROGRAM for a30
col BLOCKING_SESSION_STATUS for a20
col SQL_ID for a15
col OSUSER for a10

SELECT s.sid, s.serial#, s.audsid, s.username, s.command, s.status,
       s.schemaname, s.osuser, s.process, UPPER(s.program) program,
       s.sql_id, s.row_wait_block#, s.row_wait_row#,
       s.blocking_session_status, s.blocking_session,
       s.seconds_in_wait, s.state
FROM   v$session s
WHERE  s.username IS NOT NULL
AND    NVL(s.osuser, 'x') <> 'SYSTEM'
AND    s.type <> 'BACKGROUND'
AND    s.status = 'ACTIVE'
ORDER BY program;
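To also see the text of what each active session is running, a small companion query joining to v$sql can be used (a sketch; adjust the predicates to taste):

col sql_text for a80
SELECT s.sid, s.serial#, s.username, q.sql_text
FROM   v$session s
       JOIN v$sql q ON q.sql_id = s.sql_id
                   AND q.child_number = s.sql_child_number
WHERE  s.username IS NOT NULL
AND    s.type <> 'BACKGROUND'
AND    s.status = 'ACTIVE';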



Saturday, February 7, 2015

SET Remote_listener on Database in one HIT

Set Remote_listener on Database in one Hit. 


hostname1=`hostname`
export ORAENV_ASK=NO

for db in `srvctl config database`
do
  for inst in `srvctl status instance -d $db -n $hostname1 | awk '{print $2}'`
  do
    export ORACLE_SID=$inst
    echo $ORACLE_SID
    . oraenv

    sqlplus / as sysdba <<!!
show parameter db_name
show parameter remote_listener

alter system set remote_listener='(DESCRIPTION=(ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCPS)(HOST=HOST_IP)(PORT=1523)) (ADDRESS=(PROTOCOL=TCPS)(HOST=HOST_IP)(PORT=1523)) (ADDRESS=(PROTOCOL=TCPS)(HOST=Host_IP)(PORT=1523))))' scope=both sid='*';

show parameter remote_listener
!!
  done
done

Wednesday, February 4, 2015

Doing quick checks to verify the liveliness of the RAC

Doing quick checks to verify the liveliness of the RAC

We have gone through this command several times. It resides under the $ORACLE_HOME/crs/bin directory, and there are several commands there to perform various tasks, such as displaying individual resources and starting and stopping those resources. Anyhow, check the help for these CRS utilities to get a complete list of commands.

[oracle@vm02 bin]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.esxrac.db  application    ONLINE    ONLINE    vm02
ora....c1.inst application    ONLINE    ONLINE    vm01
ora....c2.inst application    ONLINE    ONLINE    vm02
ora....serv.cs application    ONLINE    ONLINE    vm02
ora....ac1.srv application    ONLINE    ONLINE    vm01
ora....ac2.srv application    ONLINE    ONLINE    vm02
ora....SM1.asm application    ONLINE    ONLINE    vm01
ora....01.lsnr application    ONLINE    ONLINE    vm01
ora.vm01.gsd   application    ONLINE    ONLINE    vm01
ora.vm01.ons   application    ONLINE    ONLINE    vm01
ora.vm01.vip   application    ONLINE    ONLINE    vm01
ora....SM2.asm application    ONLINE    ONLINE    vm02
ora....02.lsnr application    ONLINE    ONLINE    vm02
ora.vm02.gsd   application    ONLINE    ONLINE    vm02
ora.vm02.ons   application    ONLINE    ONLINE    vm02
ora.vm02.vip   application    ONLINE    ONLINE    vm02


By checking the status of individual nodes and all the necessary applications, we can see that the VIP, GSD, Listener and the ONS daemons are alive.

[oracle@vm02 bin]$ srvctl status nodeapps -n vm01
VIP is running on node: vm01
GSD is running on node: vm01
Listener is running on node: vm01
ONS daemon is running on node: vm01
[oracle@vm02 bin]$ srvctl status nodeapps -n vm02
VIP is running on node: vm02
GSD is running on node: vm02
Listener is running on node: vm02
ONS daemon is running on node: vm02
Now checking the status of the ASM on both nodes...

[oracle@vm02 bin]$ srvctl status asm -n vm01
ASM instance +ASM1 is running on node vm01.
[oracle@vm02 bin]$ srvctl status asm -n vm02
ASM instance +ASM2 is running on node vm02.

What about the database status?

[oracle@vm02 bin]$ srvctl status database -d esxrac

Instance esxrac1 is running on node vm01
Instance esxrac2 is running on node vm02

What would be the status of the service that we created at the end of the database installation?

[oracle@vm02 bin]$ srvctl status service -d esxrac
Service fokeserv is running on instance(s) esxrac2, esxrac1

Cluster Status

[oracle@vm02 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@vm02 bin]$ ssh vm01
The authenticity of host 'vm01 (172.22.107.25)' can't be established.
RSA key fingerprint is c0:0c:75:15:9a:e5:fc:69:5c:0c:e5:c8:94:00:52:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'vm01,172.22.107.25' (RSA) to the list of known hosts.
Enter passphrase for key '/u01/app/oracle/.ssh/id_rsa':
Last login: Fri May  4 12:56:08 2007 from 172.22.206.62
[oracle@vm01 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@vm01 ~]$ exit
logout
Connection to vm01 closed.
[oracle@vm02 bin]$


Querying the RAC

SQL*Plus: Release 10.2.0.1.0 - Production on Fri May 25 10:15:27 2007
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Oracle Label Security, OLAP and Data Mining Scoring Engine options


Here we quickly query the gv$instance cluster view to get the instance names, host names, and status of our instances.

SQL> select instance_name, host_name, archiver, thread#, status
  2  from gv$instance
  3  /
INSTANCE_NAME  HOST_NAME             ARCHIVE  THREAD# STATUS
-------------- --------------------- ------- -------- ------
esxrac1        vm01.wolga.nl         STARTED        1 OPEN
esxrac2        vm02.wolga.nl         STARTED        2 OPEN
Checking the SGA (System Global Area) on one of the nodes (Note: Both nodes have identical setup).


SQL> show sga
Total System Global Area  608174080 bytes
Fixed Size                  1220820 bytes
Variable Size             142610220 bytes
Database Buffers          457179136 bytes
Redo Buffers                7163904 bytes
Getting a list of all our datafiles...

SQL> select file_name, bytes/1024/1024
  2  from dba_data_files
  3  /
FILE_NAME                                           BYTES/1024/1024
-----------------------------------------------     ---------------
+ORADATA/esxrac/datafile/system.259.620732719                  500
+ORADATA/esxrac/datafile/undotbs1.260.620732753                200
+ORADATA/esxrac/datafile/sysaux.261.620732767                  670
+ORADATA/esxrac/datafile/example.263.620732791                 150
+ORADATA/esxrac/datafile/undotbs2.264.620732801                200
+ORADATA/esxrac/datafile/users.265.620732817                     5
6 rows selected.


Getting the status of all the groups, type, membership (if any)...
SQL> select group#, type, member, is_recovery_dest_file
  2  from v$logfile
  3  order by group#
  4  /
GROUP# TYPE    MEMBER                                                   IS_
------ ------- -------------------------------------------------------- ---
     1 ONLINE  +ORADATA/esxrac/onlinelog/group_1.257.620732695          NO
     1 ONLINE  +FLASH_RECO_AREA/esxrac/onlinelog/group_1.257.620732699  YES
     2 ONLINE  +ORADATA/esxrac/onlinelog/group_2.258.620732703          NO
     2 ONLINE  +FLASH_RECO_AREA/esxrac/onlinelog/group_2.258.620732707  YES
     3 ONLINE  +ORADATA/esxrac/onlinelog/group_3.266.620737527          NO
     3 ONLINE  +FLASH_RECO_AREA/esxrac/onlinelog/group_3.259.620737533  YES
     4 ONLINE  +ORADATA/esxrac/onlinelog/group_4.267.620737535          NO
     4 ONLINE  +FLASH_RECO_AREA/esxrac/onlinelog/group_4.260.620737539  YES


Querying the v$asm_diskgroup view...

select group_number, name,allocation_unit_size alloc_unit_size,state,type,total_mb,usable_file_mb
from v$asm_diskgroup;

GROUP_NUMBER NAME                           ALLOC_UNIT_SIZE STATE       TYPE     TOTAL_MB USABLE_FILE_MB
------------ ------------------------------ --------------- ----------- ------ ---------- --------------
           1 FLASH_RECO_AREA                        1048576 CONNECTED   EXTERN      10236           2781
           2 ORADATA                                1048576 CONNECTED   NORMAL      20472           8132

Querying v$asm_disk for our volumes (remember the ones we created first at OS level with asmlib):


select name, path, header_status, total_mb free_mb, trunc(bytes_read/1024/1024) read_mb, trunc(bytes_written/1024/1024) write_mb
from v$asm_disk;
NAME  PATH       HEADER_STATU    FREE_MB    READ_MB   WRITE_MB
----- ---------- ------------ ---------- ---------- ----------
VOL1  ORCL:VOL1  MEMBER             10236     39617      15816
VOL2  ORCL:VOL2  MEMBER             10236      7424      15816
VOL3  ORCL:VOL3  MEMBER             10236      1123      13059


All datafiles in one go:

SQL> select name from v$datafile
  2  union
  3  select name from v$controlfile
  4  union
  5  select name from v$tempfile
  6  union
  7  select member from v$logfile
  8  /

NAME
--------------------------------------------------------------------------------
+FLASH_RECO_AREA/esxrac/controlfile/current.256.620732691
+FLASH_RECO_AREA/esxrac/onlinelog/group_1.257.620732699
+FLASH_RECO_AREA/esxrac/onlinelog/group_2.258.620732707
+FLASH_RECO_AREA/esxrac/onlinelog/group_3.259.620737533
+FLASH_RECO_AREA/esxrac/onlinelog/group_4.260.620737539
+ORADATA/esxrac/controlfile/current.256.620732689
+ORADATA/esxrac/datafile/example.263.620732791
+ORADATA/esxrac/datafile/sysaux.261.620732767
+ORADATA/esxrac/datafile/system.259.620732719
+ORADATA/esxrac/datafile/undotbs1.260.620732753
+ORADATA/esxrac/datafile/undotbs2.264.620732801
+ORADATA/esxrac/datafile/users.265.620732817
+ORADATA/esxrac/onlinelog/group_1.257.620732695
+ORADATA/esxrac/onlinelog/group_2.258.620732703
+ORADATA/esxrac/onlinelog/group_3.266.620737527
+ORADATA/esxrac/onlinelog/group_4.267.620737535
+ORADATA/esxrac/tempfile/temp.262.620732779
17 rows selected.


Listing all the tablespaces...

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  union
  4  select tablespace_name, file_name
  5  from dba_temp_files
  6  /
TABLESPACE_NAME         FILE_NAME
------------------      --------------------------------------------------------------------------------
EXAMPLE                 +ORADATA/esxrac/datafile/example.263.620732791
SYSAUX                  +ORADATA/esxrac/datafile/sysaux.261.620732767
SYSTEM                  +ORADATA/esxrac/datafile/system.259.620732719
TEMP                    +ORADATA/esxrac/tempfile/temp.262.620732779
UNDOTBS1                +ORADATA/esxrac/datafile/undotbs1.260.620732753
UNDOTBS2                +ORADATA/esxrac/datafile/undotbs2.264.620732801
USERS                   +ORADATA/esxrac/datafile/users.265.620732817
7 rows selected.


This query will give you information about the +ASM1 instance's files:


SQL> select group_number, file_number, bytes/1024/1024/1024 GB, type, striped, modification_date
  2   from v$asm_file
  3  where TYPE != 'ARCHIVELOG'
  4  /
GRP_NUM FILE_NUM       GB TYPE            STRIPE MODIFICAT
------- -------- -------- --------------- ------ ---------
      1      256      .01 CONTROLFILE     FINE   04-MAY-07
      1      257      .05 ONLINELOG       FINE   25-MAY-07
      1      258      .05 ONLINELOG       FINE   24-MAY-07
      1      259      .05 ONLINELOG       FINE   24-MAY-07
      1      260      .05 ONLINELOG       FINE   25-MAY-07
      1      261      .00 PARAMETERFILE   COARSE 24-MAY-07
      2      256      .01 CONTROLFILE     FINE   04-MAY-07
      2      257      .05 ONLINELOG       FINE   25-MAY-07
      2      258      .05 ONLINELOG       FINE   24-MAY-07
      2      259      .49 DATAFILE        COARSE 04-MAY-07
      2      260      .20 DATAFILE        COARSE 04-MAY-07
      2      261      .65 DATAFILE        COARSE 23-MAY-07
      2      262      .03 TEMPFILE        COARSE 04-MAY-07
      2      263      .15 DATAFILE        COARSE 04-MAY-07
      2      264      .20 DATAFILE        COARSE 04-MAY-07
      2      265      .00 DATAFILE        COARSE 04-MAY-07
      2      266      .05 ONLINELOG       FINE   24-MAY-07
      2      267      .05 ONLINELOG       FINE   25-MAY-07
18 rows selected.


More detailed information:

SQL> select group_number, file_number, compound_index, incarnation, block_size, bytes/1024/1024/1024 GB, type, striped,
  2  creation_date, modification_date
  3  from v$asm_file
  4  where TYPE != 'ARCHIVELOG'
  5  /
GRP_NUM FILE_NUM COMPOUND_INDEX INCARNATION BLOCK_SIZE       GB TYPE            STRIPE CREATION_ MODIFICAT
------- -------- -------------- ----------- ---------- -------- --------------- ------ --------- ---------
      1      256       16777472   620732691      16384      .01 CONTROLFILE     FINE   24-APR-07 04-MAY-07
      1      257       16777473   620732699        512      .05 ONLINELOG       FINE   24-APR-07 25-MAY-07
      1      258       16777474   620732707        512      .05 ONLINELOG       FINE   24-APR-07 24-MAY-07
      1      259       16777475   620737533        512      .05 ONLINELOG       FINE   24-APR-07 24-MAY-07
      1      260       16777476   620737539        512      .05 ONLINELOG       FINE   24-APR-07 25-MAY-07
      1      261       16777477   620737547        512      .00 PARAMETERFILE   COARSE 24-APR-07 24-MAY-07
      2      256       33554688   620732689      16384      .01 CONTROLFILE     FINE   24-APR-07 04-MAY-07
      2      257       33554689   620732695        512      .05 ONLINELOG       FINE   24-APR-07 25-MAY-07
      2      258       33554690   620732703        512      .05 ONLINELOG       FINE   24-APR-07 24-MAY-07
      2      259       33554691   620732719       8192      .49 DATAFILE        COARSE 24-APR-07 04-MAY-07
      2      260       33554692   620732753       8192      .20 DATAFILE        COARSE 24-APR-07 04-MAY-07
      2      261       33554693   620732767       8192      .65 DATAFILE        COARSE 24-APR-07 23-MAY-07
      2      262       33554694   620732779       8192      .03 TEMPFILE        COARSE 24-APR-07 04-MAY-07
      2      263       33554695   620732791       8192      .15 DATAFILE        COARSE 24-APR-07 04-MAY-07
      2      264       33554696   620732801       8192      .20 DATAFILE        COARSE 24-APR-07 04-MAY-07
      2      265       33554697   620732817       8192      .00 DATAFILE        COARSE 24-APR-07 04-MAY-07
      2      266       33554698   620737527        512      .05 ONLINELOG       FINE   24-APR-07 24-MAY-07
      2      267       33554699   620737535        512      .05 ONLINELOG       FINE   24-APR-07 25-MAY-07
18 rows selected.

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE",
ORA-29283: invalid file operation

When the above errors occur during an export, first check the directories:

SQL> select directory_name, directory_path from dba_directories ;

DIRECTORY_NAME     DIRECTORY_PATH
------------------ ------------------------------
DUMP_DIR       /u02/dump_dir


Two things that hadn't been done were:


1) grant the user the correct permissions to access the logical directory for the export:

SQL> GRANT read, write on directory dump_dir TO username;



2) create the physical directory

[oracle@oel6]$ mkdir -p /u02/dump_dir


After that, our export ran successfully.
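With the grant and the physical directory in place, the export can reference the directory explicitly. A sketch; the username, dumpfile and logfile names are just examples:

[oracle@oel6]$ expdp username/password directory=DUMP_DIR dumpfile=exp_schema.dmp logfile=exp_schema.log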

Tuesday, February 3, 2015

'ORA-16198: LGWR received timedout error from KSR'



It is well known that poor performance on the standby server of a Data Guard pair can affect the performance of the primary database. This post shows an example and how to use the view GV$EVENT_HISTOGRAM to track down such an issue.


The databases were 11.2.0.1 on HP-UX. I had been seeing alerts from OEM stating that the standby was seeing apply lag when applying redo. Looking at the primary database alert log I could see the following entries:




ORA-16198: LGWR received timedout error from KSR
LGWR: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (16198)
LGWR: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Errors in file /app/oracle/diag/rdbms/xxxprd1a/BSMPRD1A/trace/xxxPRD1A_lgwr_24722.trc:
ORA-16198: Timeout incurred on internal channel during remote archival
Error 16198 for archive log file 1 to 'xxxPRD1B'
Destination LOG_ARCHIVE_DEST_2 is UNSYNCHRONIZED



That seemed to correct itself later on, but the timeout error was indicative of a network problem; well, at least that was my original hypothesis.
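As a starting point for the GV$EVENT_HISTOGRAM analysis mentioned above, a hedged example is below; 'LNS wait on SENDREQ' and 'log file parallel write' are just candidate events for LGWR/LNS redo shipping, and event names vary by version:

SQL> select inst_id, event, wait_time_milli, wait_count
  2  from gv$event_histogram
  3  where event in ('LNS wait on SENDREQ', 'log file parallel write')
  4  order by inst_id, event, wait_time_milli;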

Find and Kill Oracle Database Processes at the OS Side


First, list sessions with their OS process numbers:


SET LINESIZE 100
COLUMN spid FORMAT A10
COLUMN username FORMAT A10
COLUMN program FORMAT A45

SELECT s.inst_id,
       s.sid,
       s.serial#,
       p.spid,
       s.username,
       s.program
FROM   gv$session s
       JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE  s.type != 'BACKGROUND' ;



The second query generates KILL SESSION statements for the database side:


select 'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ',@' || s.inst_id || ''' IMMEDIATE;'
FROM   gv$session s
       JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE  s.type != 'BACKGROUND'
AND    s.username = 'USERNAME';
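To finish the job at the OS side (as the title promises), the spid from the first query can be turned into kill commands. This is only a sketch; sanity-check every pid and run the output on the host that owns the instance:

select 'kill -9 ' || p.spid
FROM   gv$session s
       JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE  s.type != 'BACKGROUND'
AND    s.username = 'USERNAME';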




Wednesday, January 28, 2015

Background Processes in Oracle

To maximize performance and accommodate many users, a multiprocess Oracle database system uses background processes. Background processes are the processes running behind the scenes, meant to perform certain maintenance activities or to deal with abnormal conditions arising in the instance. Each background process has a specific purpose and its role is well defined. Background processes consolidate functions that would otherwise be handled by multiple database programs running for each user process. Background processes asynchronously perform I/O and monitor other Oracle database processes to provide increased parallelism for better performance and reliability. A background process is defined as any process that is listed in V$PROCESS and has a non-null value in the pname column. Not all background processes are mandatory for an instance; some are mandatory and some are optional. Mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO.

All other processes are optional and will be invoked only if the particular feature is activated. Oracle background processes are visible as separate operating system processes in Unix/Linux. In Windows, they run as separate threads within the same service. Any issues related to background processes should be monitored and analyzed from the trace files generated and the alert log. Background processes are started automatically when the instance is started.

To find out background processes from the database:
SQL> select SID,PROGRAM from v$session where TYPE='BACKGROUND';

To find out background processes from the OS: $ ps -ef | grep ora_ | grep SID
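Since a background process is defined as one with a non-null pname in V$PROCESS, the same list can also be pulled directly from that view:

SQL> select pname, spid, program from v$process where pname is not null order by pname;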

Mandatory Background Processes in Oracle

If any one of these 6 mandatory background processes is killed or not running, the instance will be aborted.


1) Database Writer (maximum 20) DBW0-DBW9, DBWa-DBWj

Whenever a log switch occurs and a redo log file moves from CURRENT to ACTIVE, Oracle calls DBWn to synchronize all dirty blocks in the database buffer cache to the respective data files, scattered or randomly. The database writer (or dirty buffer writer) process does multi-block writing to disk asynchronously. One DBWn process is adequate for most systems. Multiple database writers can be configured with the initialization parameter DB_WRITER_PROCESSES, depending on the number of CPUs allocated to the instance. Having more than one DBWn only makes sense if each DBWn has been allocated its own list of blocks to write to disk; this is done through the initialization parameter DB_BLOCK_LRU_LATCHES. If this parameter is not set correctly, multiple DB writers can end up contending for the same block list. Multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure processing is accomplished efficiently. DBWn is invoked in the following scenarios:
- When the dirty blocks in the SGA reach a threshold value.
- When the database is shutting down with dirty blocks in the SGA.
- DBWn has a timeout value (3 seconds by default) and wakes up whether or not there are any dirty blocks.
- When a checkpoint is issued.
- When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers.
- When a huge table needs to enter the SGA and Oracle cannot find enough free space, it decides to flush out LRU blocks, which happen to be dirty blocks; before flushing them out, Oracle calls DBWn.
- When an Oracle RAC ping request is made.
- When a table is DROPped or TRUNCATEd.
- When a tablespace goes OFFLINE / READ ONLY / into BEGIN BACKUP.

2) Log Writer (maximum 1) LGWR

LGWR writes redo data from the redo log buffers to the (online) redo log files, sequentially. A redo log file contains changes to any data file; the content of a redo entry is the file id, block id and new content. LGWR is invoked more often than DBWn because log files are small compared to data files (KB vs GB); for every small update we don't want to open huge gigabytes of data files, we write to the log file instead. A redo log file has three stages, CURRENT, ACTIVE and INACTIVE, and this is a cyclic process. A newly created redo log file is in the UNUSED state. When LGWR is writing to a particular redo log file, that file is in CURRENT status. If the file fills up completely, a log switch takes place and LGWR starts writing to the second file (this is the reason every database requires a minimum of 2 redo log groups). The file that has filled up then moves from CURRENT to ACTIVE. The log writer writes synchronously to the redo log groups in a circular fashion. If any damage is identified in a redo log file, the log writer logs an error in the LGWR trace file and the alert log. Sometimes, when additional redo log buffer space is required, LGWR will even write uncommitted redo entries to release the held buffers. LGWR can also use group commits (the redo entries of multiple committed transactions written together) when a database is undergoing heavy write operations. In RAC, each instance has its own LGWR process that maintains that instance's thread of redo logs. LGWR is invoked in the following scenarios:
- Whenever 1/3rd of the redo buffer is filled up.
- Whenever the log writer times out (3 sec).
- Whenever 1 MB of the redo log buffer is filled (which means there is no sense in making the redo log buffer larger than 3 MB).
- Shutting down the database.
- Whenever a checkpoint event occurs.
- When a transaction completes (either committed or rolled back), Oracle calls LGWR to synchronize the log buffers to the redo log files and only then passes the acknowledgement back to the user; the transaction is not guaranteed, even though we said commit, until we receive the acknowledgement. When a transaction is committed, a System Change Number (SCN) is generated and tagged to it. The log writer puts a commit record in the redo log buffer and writes it to disk immediately along with the transaction's redo entries; changes to the actual data blocks are deferred until a convenient time (the fast-commit mechanism).
- When DBWn signals the writing of redo records to disk: all redo records associated with changes in the block buffers must be written to disk first (the write-ahead protocol). While writing dirty buffers, if the DBWn process finds that some redo information has not been written, it signals LGWR to write the information and waits until control is returned.


3) Checkpoint (maximum 1) CKPT

Checkpoint is a background process that triggers the checkpoint event, to synchronize all database files with the checkpoint information. It ensures data consistency and faster database recovery in case of a crash. When a checkpoint occurs it invokes DBWn and updates the headers of all the data files and the control file with the current SCN; this SCN is called the checkpoint SCN. A checkpoint event can occur in the following conditions:
- Whenever the database buffer cache fills up.
- Whenever it times out (3 seconds until 9i, 1 second from 10g).
- A log switch occurs.
- Whenever a manual log switch is done: SQL> ALTER SYSTEM SWITCH LOGFILE;
- A manual checkpoint: SQL> ALTER SYSTEM CHECKPOINT;
- Graceful shutdown of the database.
- Whenever the BEGIN BACKUP command is issued.
- When the time specified by the initialization parameter LOG_CHECKPOINT_TIMEOUT (in seconds) exists between the incremental checkpoint and the tail of the log.
- When the number of OS blocks specified by the initialization parameter LOG_CHECKPOINT_INTERVAL exists between the incremental checkpoint and the tail of the log.
- When the number of buffers specified by the initialization parameter FAST_START_IO_TARGET required to perform roll-forward is reached.
- Oracle 9i onwards, when the time specified by the initialization parameter FAST_START_MTTR_TARGET (in seconds) is reached; it specifies the time required for a crash recovery. The parameter FAST_START_MTTR_TARGET replaces LOG_CHECKPOINT_INTERVAL and FAST_START_IO_TARGET, but these parameters can still be used.


4) System Monitor (maximum 1) SMON

If the database has crashed (power failure) and the next time we restart the database SMON observes that it was not shut down gracefully, some recovery is required, known as instance crash recovery. When performing crash recovery, before the database is completely open, any transaction that was committed but not yet written to the data files is applied from the redo log files to the data files. If SMON observes an uncommitted transaction that has already updated a table in a data file, it is treated as an in-doubt transaction and rolled back with the help of the before image available in the rollback segments. SMON also cleans up temporary segments that are no longer in use. It also coalesces contiguous free extents in dictionary-managed tablespaces that have PCTINCREASE set to a non-zero value. In a RAC environment, the SMON process of one instance can perform instance recovery for other instances that have failed. SMON wakes up about every 5 minutes to perform housekeeping activities.

5) Process Monitor (maximum 1) PMON

If a client has an open transaction that is no longer active (the client session is closed), PMON comes into the picture and that in-doubt transaction is rolled back. PMON is responsible for performing recovery if a user process fails: it rolls back uncommitted transactions, and if the old session locked any resources they are unlocked by PMON. PMON is responsible for cleaning up the database buffer cache and freeing resources that were allocated to a process. PMON also registers information about the instance and dispatcher processes with the Oracle (network) listener. PMON also checks the dispatcher and server processes and restarts them if they have failed. PMON wakes up every 3 seconds to perform housekeeping activities. In RAC, PMON's role as service registration agent is particularly important.

6) Recoverer (maximum 1) RECO [Mandatory from Oracle 10g]

This process is intended for recovery in distributed databases. The distributed transaction recovery process finds pending distributed transactions and resolves them; all in-doubt transactions are recovered by this process in a distributed database setup. RECO connects to the remote database to resolve pending transactions. Pending distributed transactions are two-phase commit transactions involving multiple databases. The database where the transaction started is normally the coordinator; it sends a request to the other databases involved in the two-phase commit asking whether they are ready to commit. If a negative reply is received from one of the other sites, the entire transaction is rolled back; otherwise, the distributed transaction is committed on all sites. However, there is a chance that an error (network related or otherwise) causes the two-phase commit transaction to be left in a pending state (i.e. not committed or rolled back). It is the role of the RECO process to liaise with the coordinator to resolve the pending two-phase commit transaction; RECO will either commit or roll back the transaction.

Optional Background Processes in Oracle

7) Archiver (maximum 10) ARC0-ARC9

The ARCn process is responsible for writing the online redo log files to the configured archive log destination after a log switch has occurred. ARCn is present only if the database is running in archivelog mode and automatic archiving is enabled. The log writer process is responsible for starting multiple ARCn processes when the workload increases. Until ARCn completes copying a redo log file, it is not released to the log writer for overwriting. The number of archiver processes that can be invoked initially is specified by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES (by default 2, max 10).

The actual number of archiver processes in use may vary based on the workload. ARCH processes, running on the primary database, select archived redo logs and send them to the standby database. Archive log files are used for media recovery (in case of a hard disk failure) and for maintaining an Oracle standby database via log shipping. ARCn also archives the standby redo logs applied by the managed recovery process (MRP). In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other instances in the RAC setup should they be needed for recovery.

Coordinated Job Queue Processes (maximum 1000) CJQ0/Jnnn

Job queue processes carry out batch processing. All scheduled jobs are executed by these processes.

The initialization parameter JOB_QUEUE_PROCESSES specifies the maximum number of job processes that can be run concurrently. These processes are useful for refreshing materialized views. CJQ0 is Oracle's dynamic job queue coordinator: it periodically selects jobs (from JOB$) that need to be run, as scheduled by the Oracle job queue, and dynamically spawns job queue slave processes (J000-J999) to run them. These jobs can be PL/SQL statements or procedures on an Oracle instance. The job queue controller process wakes up periodically and checks the job log; if a job is due, it spawns Jnnnn processes to handle it. From Oracle 11g Release 2, DBMS_JOB and DBMS_SCHEDULER work without setting JOB_QUEUE_PROCESSES; prior to 11gR2 the default value is 0, and from 11gR2 the default value is 1000.

Dedicated Server

Dedicated server processes are used when MTS is not used. Each user process gets a dedicated connection to the database. These user processes also handle disk reads from database data files into the database block buffers.

 LISTENER The LISTENER process listens for connection requests on a specified port and passes these requests to either a distributor process if MTS is configured, or to a dedicated process if MTS is not used. The LISTENER process is responsible for load balance and failover in case a RAC instance fails or is overloaded. CALLOUT Listener Used by internal processes to make calls to externally stored procedures. Lock Monitor (maximum 1) LMON Lock monitor manages global locks and resources.

It handles the redistribution of instance locks whenever instances are started or shutdown. Lock monitor also recovers instance lock information prior to the instance recovery process. Lock monitor co-ordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks. Lock Manager Daemon (maximum 10) LMDn LMDn processes manage instance locks that are used to share resources between instances. LMDn processes also handle deadlock detection and remote lock requests. Global Cache Service (LMS) In an Oracle Real Application Clusters environment, this process manages resources and provides inter-instance resource control. Lock processes (maximum 10) LCK0- LCK9 The instance locks that are used to share resources between instances are held by the lock processes. Block Server Process (maximum 10) BSP0-BSP9 Block server Processes have to do with providing a consistent read image of a buffer that is requested by a process of another instance, in certain circumstances.

 Queue Monitor (maximum 10) QMN0-QMN9 This is the advanced queuing time manager process. QMNn monitors the message queues. QMN used to manage Oracle Streams Advanced Queuing. Event Monitor (maximum 1) EMN0/EMON This process is also related to advanced queuing, and is meant for allowing a publish/subscribe style of messaging between applications. Dispatcher (maximum 1000) Dnnn Intended for multi threaded server (MTS) setups. Dispatcher processes listen to and receive requests from connected sessions and places them in the request queue for further processing. Dispatcher processes also pickup outgoing responses from the result queue and transmit them back to the clients. Dnnn are mediators between the client processes and the shared server processes. The maximum number of dispatcher process can be specified using the initialization parameter MAX_DISPATCHERS.

 Shared Server Processes (maximum 1000) Snnn Intended for multi threaded server (MTS) setups. These processes pickup requests from the call request queue, process them and then return the results to a result queue. These user processes also handle disk reads from database datafiles into the database block buffers. The number of shared server processes to be created at instance startup can be specified using the initialization parameter SHARED_SERVERS. Maximum shared server processes can be specified by MAX_SHARED_SERVERS. Parallel Execution/Query Slaves (maximum 1000) Pnnn These processes are used for parallel processing. It can be used for parallel execution of SQL statements or recovery. The Maximum number of parallel processes that can be invoked is specified by the initialization parameter PARALLEL_MAX_SERVERS. Trace Writer (maximum 1) TRWR Trace writer writes trace files from an Oracle internal tracing facility. Input/Output Slaves (maximum 1000) Innn These processes are used to simulate asynchronous I/O on platforms that do not support it. The initialization parameter DBWR_IO_SLAVES is set for this purpose.

Data Guard Monitor (maximum 1) DMON The Data Guard broker process. DMON is started when Data Guard is started. This is broker controller process is the main broker process and is responsible for coordinating all broker actions as well as maintaining the broker configuration files.
This process is enabled/disabled with the DG_BROKER_START parameter. Data Guard Broker Resource Manager RSM0 The RSM process is responsible for handling any SQL commands used by the broker that need to be executed on one of the databases in the configuration. Data Guard NetServer/NetSlave NSVn These are responsible for making contact with the remote database and sending across any work items to the remote database. From 1 to n of these network server processes can exist. NSVn is created when a Data Guard broker configuration is enabled. There can be as many NSVn processes (where n is 0- 9 and A-U) created as there are databases in the Data Guard broker configuration. DRCn These network receiver processes establish the connection from the source database NSVn process. When the broker needs to send something (e.g. data or SQL) between databases, it uses this NSV to DRC connection.

These connections are started as needed. Data Guard Broker Instance Slave Process INSV Performs Data Guard broker communication among instances in an Oracle RAC environment Data Guard Broker Fast Start Failover Pinger Process FSFP Maintains fast-start failover state between the primary and target standby databases. FSFP is created when fast-start failover is enabled. LGWR Network Server process LNS In Data Guard, LNS process performs actual network I/O and waits for each network I/O to complete. Each LNS has a user configurable buffer that is used to accept outbound redo data from the LGWR process. The NET_TIMEOUT attribute is used only when the LGWR process transmits redo data using a LGWR Network Server(LNS) process.

 Managed Recovery Process MRP In Data Guard environment, this managed recovery process will apply archived redo logs to the standby database. Remote File Server process RFS The remote file server process, in Data Guard environment, on the standby database receives archived redo logs from the primary database. Logical Standby Process LSP The logical standby process is the coordinator process for a set of processes that concurrently read, prepare, build, analyze, and apply completed SQL transactions from the archived redo logs. The LSP also maintains metadata in the database. The RFS process communicates with the logical standby process (LSP) to coordinate and record which files arrived. Wakeup Monitor Process (maximum 1) WMON This process was available in older versions of Oracle to alarm other processes that are suspended while waiting for an event to occur. This process is obsolete and has been removed.

Recovery Writer (maximum 1) RVWR This is responsible for writing flashback logs (to FRA). Fetch Archive Log (FAL) Server Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request. Fetch Archive Log (FAL) Client Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence. Data Pump Master Process DMnn Creates and deletes the master table at the time of export and import. Master table contains the job state and object information. Coordinates the Data Pump job tasks performed by Data Pump worker processes and handles client interactions. The Data Pump master (control) process is started during job creation and coordinates all tasks performed by the Data Pump job. It handles all client interactions and communication, establishes all job contexts, and coordinates all worker process activities on behalf of the job. Creates the Worker Process.

Data Pump Worker Process DWnn
It performs the actual heavy-duty work of loading and unloading data, and maintains the information in the master table. The Data Pump worker process is responsible for performing tasks assigned by the Data Pump master process, such as the loading and unloading of metadata and data.

Shadow Process
When a client logs in to an Oracle server, the database creates an Oracle process to service the Data Pump API.

Client Process
The client process calls the Data Pump API.

New Background Processes in Oracle 10g

Memory Manager (maximum 1) MMAN
MMAN dynamically adjusts the sizes of the SGA components like the buffer cache, large pool, shared pool and java pool, and serves as the SGA memory broker. It is a new process added in Oracle 10g as part of automatic shared memory management.

Memory Monitor (maximum 1) MMON
MMON monitors the SGA and performs various manageability-related background tasks. MMON, the Oracle 10g background process, is used to collect statistics for the Automatic Workload Repository (AWR).

Memory Monitor Light (maximum 1) MMNL
A new background process in Oracle 10g. This process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation.

Change Tracking Writer (maximum 1) CTWR
CTWR is useful in RMAN: it provides optimized incremental backups using block change tracking (faster incremental backups) via a file (named the block change tracking file). CTWR (Change Tracking Writer) is the background process responsible for tracking the changed blocks.

ASMB
This process is used to provide information to and from the cluster synchronization services used by ASM to manage the disk resources. It is also used to update statistics and provide a heartbeat mechanism.

Re-Balance RBAL
RBAL is the ASM-related process that performs rebalancing of disk resources controlled by ASM.

Actual Rebalance ARBx
ARBx processes perform the actual rebalance work; their number is configured by ASM_POWER_LIMIT.
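Block change tracking (and hence CTWR) is enabled with a single statement; a sketch, with the tracking file path as a placeholder:

SQL> alter database enable block change tracking using file '/u02/oradata/orcl/bct.chg';
SQL> select status, filename from v$block_change_tracking;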

New Background Processes in Oracle 11g

- ACMS - Atomic Controlfile to Memory Server
- DBRM - Database Resource Manager
- DIA0 - Diagnosability process 0
- DIAG - Diagnosability process
- FBDA - Flashback Data Archiver; the fbda background process captures data asynchronously, every 5 minutes by default, at more frequent intervals based on activity.
- GTX0 - Global Transaction Process 0
- KATE - Konductor (Conductor) of ASM Temporary Errands
- MARK - Mark Allocation unit for Resync Koordinator (coordinator)
- SMCO - Space Manager
- VKTM - Virtual Keeper of TiMe process
- W000 - Space Management Worker Processes
- ABP - Autotask Background Process

Autotask Background Process (ABP)
It translates tasks into jobs for execution by the scheduler and determines the list of jobs that must be created for each maintenance window. It stores task execution history in the SYSAUX tablespace and is spawned by the MMON background process at the start of the maintenance window.

File Monitor (FMON)
The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping data files to physical devices on a storage subsystem, the FMON process is spawned.

Dynamic Intimate Shared Memory (DISM)
By default, Oracle uses intimate shared memory (ISM) instead of standard System V shared memory on the Solaris operating system.

When a shared memory segment is made into an ISM segment, it is mapped using large pages and the memory for the segment is locked (i.e., it cannot be paged out). This greatly reduces the overhead due to process context switches, which improves Oracle's performance linearity under load.