Friday, February 27, 2015

Oracle 11g R2 response file example



After installing the Operating System (in my case usually Red Hat Enterprise Linux or Oracle Enterprise Linux) and configuring all necessary parameters, one has to install the Oracle software. It is usually a good idea to use a response file to do this.

There are a few reasons to use a response file:
  • The installation is reproducible (the most important point)
  • No X server is necessary when using a response file with the Oracle Universal Installer (OUI)
  • The installation is easily scriptable
  • Strictly enforcing the OFA or other policies on all hosts is much easier


So after extracting the archive with the software downloaded from the Oracle website, we usually find an example response file in the response/ folder of the software package. So here is an example of a response file:


oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/home/oracle/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true


Note that this is a very minimalistic response file, where only the software is installed (no database is created). Please refer to the Oracle documentation and to the response file that Oracle provides as part of the software delivery package for the full list of parameters.

To install the software, execute the ./runInstaller script with the parameters "-silent -responseFile <full_path_to_file>". See this post for more information on the syntax.

When I tried to install an 11g database with the -silent and -responseFile options, I received the following error:
[SEVERE] - Email Address Not Specified
Obviously, I needed to specify an e-mail address for My Oracle Support. Let's have a look in the response file:
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
Hmm, in my opinion, I should not need to specify an e-mail address. Or do I?
In a thread on the Oracle Technology Network I then found the solution to my woes:
./runInstaller -silent -responseFile=D:/install/response/db_only_software.rsp
does NOT work! The response file needs to be specified without the equals sign:
./runInstaller -silent -responseFile D:/install/response/db_only_software.rsp
Response File    Description
--------------   ---------------------------------------------------------
db_install.rsp   Silent installation of Oracle Database 11g
dbca.rsp         Silent installation of Database Configuration Assistant
netca.rsp        Silent installation of Oracle Net Configuration Assistant

Thursday, February 26, 2015

Remove Old files in Unix


You can use the command below to remove files of a specific extension that are older than a given time.

Here, find locates the files matching the given condition (in this case the name pattern "DEVDB_12*.trc"),

-mtime +20 matches files last modified more than 20 days ago, relative to the machine's clock,

and -exec rm -f {} \; removes each file that meets the condition.



find . -name "DEVDB_12*.trc" -mtime +20 -exec rm -f {} \;



To remove all files last modified in 2013 (this parses ls output, so use with care):

rm -rf `ls -ltr | grep -i "2013 " | awk '{ print $9}'`
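The backtick trick above breaks on file names containing spaces and depends on the exact ls output format. A hedged alternative using GNU find's -newermt test (availability of -newermt is an assumption about your find version):

# Delete regular files in the current directory last modified during 2013.
# Dry-run first by replacing "-exec rm -f {} \;" with "-print".
find . -maxdepth 1 -type f -newermt "2013-01-01" ! -newermt "2014-01-01" -exec rm -f {} \;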


SQL Trace, TKPROF



SQL_TRACE (10046 trace)

SQL_TRACE is the main method for collecting SQL Execution information in Oracle. It records a wide range of information and statistics that can be used to tune SQL operations.

Enabling SQL_TRACE

The SQL Trace facility can be enabled/disabled for an individual session or at the instance level. If the initialization parameter SQL_TRACE is set to TRUE in the init.ora of an instance, then all sessions will be traced. Note that using this initialization parameter to enable the SQL trace facility for the entire instance can have a severe performance impact.


------------------------------
The quickest way to capture the SQL being processed by a session is to switch on SQL trace or set the 10046 event for a representative period of time. The resulting trace files can be read in their raw state or formatted using the tkprof utility.

------------------------------------------------


Tracing a SQL session


Start session trace

To start a SQL trace for the current session, execute:
ALTER SESSION SET sql_trace = true;
You can also add an identifier to the trace file name for later identification:
ALTER SESSION SET sql_trace = true;
ALTER SESSION SET tracefile_identifier = mysqltrace;
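Since this is the 10046 trace, the same can be achieved with the event syntax, which additionally lets you capture bind values and wait events (level 12 = binds + waits). A short sketch using the standard syntax:

ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the statements to be traced ...
ALTER SESSION SET events '10046 trace name context off';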


Stop session trace

To stop SQL tracing for the current session, execute:
ALTER SESSION SET sql_trace = false;


Tracing other users' sessions

DBAs can use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to trace problematic database sessions. Steps:

  • Get the SID and SERIAL# for the process you want to trace.
SQL> select sid, serial# from sys.v_$session where ...
       SID    SERIAL#
---------- ----------
         8      13607
  • Enable tracing for your selected process:
SQL> ALTER SYSTEM SET timed_statistics = true;
SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);
  • Ask the user to run just enough of the workload to demonstrate the problem.
  • Disable tracing for your selected process:
SQL> execute dbms_system.set_sql_trace_in_session(8,13607, false);
  • Look for trace file in USER_DUMP_DEST:
$ cd /app/oracle/admin/oradba/udump
$ ls -ltr
total 8
-rw-r-----    1 oracle   dba         2764 Mar 30 12:37 ora_9294.trc
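On 10g and later, DBMS_MONITOR is the supported alternative to DBMS_SYSTEM for this task; a hedged sketch reusing the SID and SERIAL# from above:

-- Enable tracing with wait events (binds off) for session 8, serial# 13607
SQL> execute dbms_monitor.session_trace_enable(session_id => 8, serial_num => 13607, waits => TRUE, binds => FALSE);
-- ... reproduce the problem ...
SQL> execute dbms_monitor.session_trace_disable(session_id => 8, serial_num => 13607);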


Tracing an entire database

To enable SQL tracing for the entire database, execute:
ALTER SYSTEM SET sql_trace = true SCOPE=MEMORY;
To stop, execute:
ALTER SYSTEM SET sql_trace = false SCOPE=MEMORY;


Identifying trace files

Trace output is written to the database's UDUMP directory.
The default name for a trace file is INSTANCE_ora_PID_TRACEID.trc where:
  • INSTANCE is the name of the Oracle instance,
  • PID is the operating system process ID (V$PROCESS.SPID); and
  • TRACEID is a character string of your choosing.
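On 11g the trace files live under the diagnostic destination rather than UDUMP; a hedged way to find the current session's trace file there is the V$DIAG_INFO view:

SQL> SELECT value FROM v$diag_info WHERE name = 'Default Trace File';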


Size of trace files

The trace file size is limited by the parameter MAX_DUMP_FILE_SIZE. If you do not specify the K or M suffix, the unit of this parameter is OS blocks.
Be sure this parameter is set to a value high enough for your purpose (e.g. some MB). Of course, this depends on the number and complexity of the statements that have to be run while tracing. If this value is set too low, the dump file size limit may be reached before the crucial statements execute, and the trace file will be closed before the interesting parts can be recorded in it.
On the other hand, when this parameter is set to UNLIMITED (the default value), a traced program that keeps running can make the trace file grow without limit, i.e. until the associated file system or disk is full. A DBA can stop the trace of a session using the DBMS_MONITOR (10g and up), DBMS_SYSTEM or DBMS_SUPPORT package.
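For example, to raise the limit for the session being traced (a sketch; size the value to your own workload):

ALTER SESSION SET max_dump_file_size = '100M';
-- or lift the limit entirely for the duration of the trace:
ALTER SESSION SET max_dump_file_size = unlimited;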


Formatting output

Trace output is quite unreadable. However, Oracle provides a utility, called TKProf, that can be used to format trace output.

==========================================================================

TKPROF Usage


TKPROF allows you to analyse a trace file to determine where time is 
  being spent and what query plans are being used on SQL statements.

   1 - Set TIMED_STATISTICS if required at database level.
  
   2 - Get SQL_TRACE output for the session you wish to monitor
   
   3 - Find the appropriate trace file (In USER_DUMP_DEST, default
       $ORACLE_HOME/rdbms/log on Unix). 
       You can find the most recent trace files on Unix with the command:
         ls -ltr
       This will list the most recent files LAST

   4 - Run tkprof on the trace file thus:

         tkprof tracefile outfile [explain=user/password] [options...]

TKPROF Options
~~~~~~~~~~~~~~
   print=integer    List only the first 'integer' SQL statements.
   insert=filename  List SQL statements and data inside INSERT statements.
   sys=no           TKPROF does not list SQL statements run as user SYS.
   record=filename  Record statements found in the trace file.
   sort=option      Set of zero or more of the following sort options:

     prscnt  number of times parse was called
     prscpu  cpu time parsing
     prsela  elapsed time parsing
     prsdsk  number of disk reads during parse
     prsqry  number of buffers for consistent read during parse
     prscu   number of buffers for current read during parse
     prsmis  number of misses in library cache during parse

     execnt  number of times execute was called
     execpu  cpu time spent executing
     exeela  elapsed time executing
     exedsk  number of disk reads during execute
     exeqry  number of buffers for consistent read during execute
     execu   number of buffers for current read during execute
     exerow  number of rows processed during execute
     exemis  number of library cache misses during execute

     fchcnt  number of times fetch was called
     fchcpu  cpu time spent fetching
     fchela  elapsed time fetching
     fchdsk  number of disk reads during fetch
     fchqry  number of buffers for consistent read during fetch
     fchcu   number of buffers for current read during fetch
     fchrow  number of rows fetched

     userid  userid of user that parsed the cursor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
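Putting the options together, a typical invocation might look like this (the trace and output file names are illustrative, not from the original post):

  tkprof orcl_ora_9294_mytrace.trc report.txt sys=no sort=prsela,exeela,fchela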


TKProf Interpretation
TKProf is an executable that 'parses' Oracle trace files to produce more readable output.
In the default mode, all the information in the TKProf output is derived from the base trace file.

TKProf Structure

TKProf output for an individual cursor has the following structure:

SELECT NULL FROM DUAL

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.00       0.00          0          3          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.00       0.00          0          3          0          1

Misses in library cache during parse: 0
Optimizer goal: FIRST_ROWS
Parsing user id: 271

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DUAL (cr=3 r=0 w=0 time=21 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
Overall the structure is:
  • SQL Statement
  • Parse/Execute/Fetch statistics and timings
  • Library Cache information
  • Row source plan
  • Events waited for by the statement
Parse/Execute/Fetch statistics and timings

This section contains the bulk of the useful timing information for each statement. It can be used in conjunction with the 'Row source plan' and 'Events waited for by the statement' sections to give the full picture. Columns in the Parse/Execute/Fetch table have the following meanings:

call: statistics for each cursor's activity are divided into 3 areas (a total is also calculated):
  • Parse: statistics from parsing the cursor. This includes information for plan generation etc.
  • Execute: statistics for the execution phase of a cursor.
  • Fetch: statistics for actually fetching the rows.
count: number of times each individual activity has been performed on this particular cursor.
cpu: CPU time used by this cursor.
elapsed: elapsed time for this cursor (includes the CPU time).
disk: the number of blocks read from disk. Generally it would be preferable for blocks to be read from the buffer cache rather than from disk.
query: incremented if a buffer is read in consistent mode. A consistent-mode buffer is one that has been generated to give a consistent read snapshot for a long-running transaction.
current: incremented if a buffer is found in the buffer cache that is new enough for the current transaction and is in current mode (and it is not a CR buffer). This applies to buffers that have been read into the cache as well as buffers that already exist in the cache in current mode.
rows: rows retrieved by this step.

Library Cache information

Tracing a statement records some information regarding library cache usage, which TKProf externalises in this section. Most important here is "Misses in library cache during parse:", which shows whether or not a statement is being re-parsed. If a statement is being shared well then you should see a minimal number of misses here (1 or 0 preferably). If sharing is not occurring then high values in this field can indicate it.

Row source plan

This section displays the access path used at execution time for each statement, along with timing and actual row counts returned by each step in the plan. This can be very useful for a number of reasons. Example:

Rows     Row Source Operation
-------  ---------------------------------------------------
[A]   1  TABLE ACCESS FULL DUAL [B] (cr=3 [C] r=0 [D] w=0 [E] time=21 us [F])

  • Row count [A]: the row counts output in this section are the actual number of rows returned at each step in the query execution. These actual counts can be compared with the estimated cardinalities (row counts) from an optimizer explain plan. Any differences may indicate a statistical problem that may result in a poor plan choice.
  • Row Source Operation [B]: shows the operation executed at this step in the plan.
  • IO stats: for each step in the plan, [C] is the consistent reads, [D] is the physical reads and [E] is the writes. These statistics can be useful in identifying steps that read or write a particularly large proportion of the overall data.
  • Timing: [F] shows the cumulative elapsed time for the step and the steps that preceded it. This is very useful when looking for the point in an access path that takes all the time. By looking for the point where the majority of the time originates, it is possible to narrow down a number of problems.
Events waited for by the statement This section displays all wait events that a statement has waited for during the tracing. This section can be very useful when used in conjunction with the statistics and row source information for tracking down the causes of problems associated with long wait times. High numbers of waits or waits with a long total duration may be candidates for investigation dependent on the wait itself.
==============================================================
Potential TKProf Usage Examples

Spotting Relatively High Resource Usage

update ... where ...

call     count   cpu   elapsed   disk       query   current     rows
------- ------ ----- --------- ------ ----------- --------- --------
Parse        1     7       122      0           0         0        0
Execute      1    75       461      5     [H] 297     [I] 3    [J] 1
Fetch        0     0         0      0           0         0        0

This statement is a single execute of an update. [H] shows that this query is visiting 297 buffers to find the rows to update; [I] shows that only 3 buffers are visited performing the update; and [J] shows that only 1 row is updated. Reading 297 buffers to update 1 row is a lot of work and would tend to indicate that the access path being used is not particularly efficient. Perhaps there is an index missing that would improve the access performance?

Spotting Over Parsing

select ...

call      count     cpu   elapsed   disk   query   current    rows
------- ------- ------- --------- ------ ------- --------- -------
Parse     [M] 2 [N] 221       329      0      45         0       0
Execute       3   [O] 9    [P] 17      0       0         0       0
Fetch         3       6         8      0   [L] 4         0   [K] 1

Misses in library cache during parse: 2 [Q]

Here we have a select that we suspect may be a candidate for over parsing. [K] shows that the query has returned 1 row; [L] shows that 4 buffers were read to get this row back. This is fine.
[M] shows that the statement is parsed twice; this is not desirable, especially as the parse CPU usage [N] is high in comparison to the execute figures [O] and [P] (i.e. the elapsed time for execute is 17 seconds, but the statement spends over 300 seconds determining the access path etc. in the parse phase). [Q] shows that these parses are hard parses. If [Q] were 1, then the statement would have had 1 hard parse followed by a soft parse (which just looks up the already parsed detail in the library cache). This is not a particularly bad example in terms of total counts, since the query has only been executed a few times. However, if this pattern is reproduced for each execution, it could be a significant issue. Excessive parsing should be avoided as far as possible by ensuring that code is shared:
  • using bind variables
  • make shared pool large enough to hold query definitions in memory long enough to be reused.
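As a hedged illustration of the bind-variable point (using the classic SCOTT demo table, so the names are assumptions):

-- Literal value: each distinct value creates a new cursor and a hard parse
SELECT ename FROM emp WHERE empno = 7839;

-- Bind variable: one shared cursor is reused across executions
VARIABLE emp_id NUMBER
EXEC :emp_id := 7839
SELECT ename FROM emp WHERE empno = :emp_id;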
Spotting Queries that Execute too Frequently

The following query has a high elapsed time and is a candidate for investigation:

UPDATE ... SET ... WHERE COL = :bind1;

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- --------
Parse        0      0.00       0.00          0          0          0        0
Execute 488719  66476.95   66557.80          1     488729    1970566   488719
Fetch        0      0.00       0.00          0          0          0        0
------- ------  -------- ---------- ---------- ---------- ---------- --------
total   488719  66476.95   66557.80          1     488729    1970566   488719

From the above, the update executes 488,719 times and takes in total ~65,000 seconds to do this.
The majority of the time is spent on CPU. A single row is updated per execution. For each row updated, ~1 buffer is queried, and ~2 million buffers are visited in total to perform the updates. On average the elapsed time is ~0.1 second per execution. A sub-second execution time would normally be acceptable for most queries, but if the query is not scalable and is executed numerous times, then the time can quickly add up to a large number. It would appear that in this case the update may be part of a loop where individual values are passed in and 1 row is updated per value. This structure does not scale with a large number of values, meaning that it can become inefficient. One potential solution is to try to 'batch up' the updates so that multiple rows are updated within the same execution, as shown in the sketch below. As Oracle releases have progressed, a number of optimizations and enhancements have been made to improve the handling of 'batch' operations and to make them more efficient. In this way, code modifications that replace frequently executed, relatively inefficient statements with more scalable operations can have a significant impact.
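A minimal sketch of the 'batch up' idea in PL/SQL, using FORALL so that many rows are updated in a single execution (the table and column names are hypothetical):

DECLARE
  TYPE t_id_tab IS TABLE OF NUMBER;
  l_ids t_id_tab := t_id_tab(101, 102, 103);
BEGIN
  -- One execute handles all values instead of one execute per value
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE orders SET status = 'SHIPPED' WHERE order_id = l_ids(i);
  COMMIT;
END;
/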
============================================================================
SQL trace files contain detailed timing information. By default, Oracle does not track timing, so all timing figures in trace files will show as zero. If you would like to see legitimate timing information, then you need to enable timed statistics. You can do this at the instance level by setting the following parameter in the instance parameter file and restarting the instance:
timed_statistics = true
You can also dynamically enable or disable timed statistics collection at either the instance or the session level with the following commands:
ALTER SYSTEM SET timed_statistics = TRUE|FALSE;
ALTER SESSION SET timed_statistics = TRUE|FALSE;
There is no known way to enable timed statistics collection for an individual session from another session (akin to the SYS.dbms_system.set_sql_trace_in_session built-in).
There is very high overhead associated with enabling SQL trace. Some DBAs believe the performance penalty could be over 25%. Another concern is that enabling SQL trace causes the generation of potentially large trace files. For these reasons, you should use SQL trace sparingly. Only trace what you need to trace and think very carefully before enabling SQL trace at the instance level.
On the other hand, there is little, if any, measurable performance penalty in enabling timed statistics collection. Many DBAs run production databases with timed statistics collection enabled at the system level, so that various system statistics (more than just SQL trace files) will include detailed timing information.

Clone User Accesses



set pages 0 feed off veri off lines 500

accept oldname prompt "Enter user to model new user to (Source Username): "
accept newname prompt "Enter new user name (Target Username): "
-- accept psw prompt "Enter new user's password: "

-- Create user...
select 'create user &&newname identified by values '''||password||''''||
       ' default tablespace '||default_tablespace||
       ' temporary tablespace '||temporary_tablespace||
       ' profile '||profile||';'
from   sys.dba_users
where  username = upper('&&oldname');

-- Grant Roles...
select 'grant '||granted_role||' to &&newname'||
       decode(admin_option, 'YES', ' WITH ADMIN OPTION')||';'
from   sys.dba_role_privs
where  grantee = upper('&&oldname');

-- Grant System Privs...
select 'grant '||privilege||' to &&newname'||
       decode(admin_option, 'YES', ' WITH ADMIN OPTION')||';'
from   sys.dba_sys_privs
where  grantee = upper('&&oldname');

-- Grant Table Privs...
select 'grant '||privilege||' on '||owner||'.'||table_name||' to &&newname;'
from   sys.dba_tab_privs
where  grantee = upper('&&oldname');

-- Grant Column Privs...
select 'grant '||privilege||' on '||owner||'.'||table_name||
       '('||column_name||') to &&newname;'
from   sys.dba_col_privs
where  grantee = upper('&&oldname');

-- Tablespace Quotas... (granted to the new user, not the source user)
select 'alter user &&newname quota '||
       decode(max_bytes, -1, 'UNLIMITED', max_bytes)||
       ' on '||tablespace_name||';'
from   sys.dba_ts_quotas
where  username = upper('&&oldname');

-- Set Default Role...
set serveroutput on
declare
  defroles varchar2(4000);
begin
  for c1 in (select * from sys.dba_role_privs
             where grantee = upper('&&oldname')
               and default_role = 'YES') loop
    if length(defroles) > 0 then
      defroles := defroles||','||c1.granted_role;
    else
      defroles := defroles||c1.granted_role;
    end if;
  end loop;
  dbms_output.put_line('alter user &&newname default role '||defroles||';');
end;
/

set feed on veri on
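One way to run the script: spool the generated DDL to a file, then execute that file against the target database (the file names here are illustrative):

SQL> SPOOL clone_user.sql
SQL> @clone_user_accesses.sql
SQL> SPOOL OFF
SQL> @clone_user.sql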

ORA-12516: TNS: Listener could not find available handler with matching protocol stack (LDAP)

Issue:

We are not able to log in to the DB DEV_CN_ENTERPRISE_APP,
and when we try logging in we get the following error:

Error Message: ORA-12516: TNS: Listener could not find available handler with matching protocol stack
DB: DEV_CN_ENTERPRISE_APP
Schema: CN_ENTERPRISE

Resolution :


Steps

- Check the sessions connected / processes (count); see the query after this list
- Check the database parameters configured for sessions / processes
- Check the service is up and running
- Check the listener value & status in the database and at the OS level
- Check the LDAP entry for the service
- Possibly the cause is different host details in the listener & service configuration
- Remove the old service & add a new LDAP entry with the correct host details (the logical host in RAC)
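For the first two checks, a hedged query against V$RESOURCE_LIMIT shows the current session/process usage versus the configured limits (ORA-12516 is frequently hit when PROCESSES is exhausted):

SQL> select resource_name, current_utilization, max_utilization, limit_value
     from   v$resource_limit
     where  resource_name in ('processes', 'sessions');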

Sample command to add an LDAP entry on Unix:


oracle/local/dba/bin/ogb_ldapadd.pl -d PRODT010_ADHOC_LOPR -h glbl002400-vip.eu.hedani.net,glbl200131550-vip.eu.hedani.net,gbl20013156 -R 

where -d is the service name,

-h is the scan name, for example PRODD2047-scan.eu.hedani.net,

and -R means RAC.

There is also ogb_ldapsearch.pl -s for searching; use * as a wildcard for any string, i.e. ogb_ldapsearch.pl -s PRODT0*. And of course there is ogb_ldapdelete.pl for removing a service from LDAP.





Basic Unix Commands for Oracle DBAs

1) Find whether OS is 64/32 Bit Kernel in UNIX.

uname -a

2) Find free physical memory in UNIX.

free -m

3) Find CPU details in UNIX.

cat /proc/cpuinfo

4) Find files modified within specific time.

find . -mtime -3 (modified less than 3days ago)

5) command used to alter file permissions.

chmod 777 abc.txt

6) Command used to reset the Ownership.

chown oracle:dba abc.txt

7) Command used to set, or reset, a user's login password.

passwd username

8) Kill a specific process in UNIX.

kill -9 processid

9) Command used to display the last given number of lines of a file.

tail -n 100 alert_PROD.log

10) Command used to install an rpm package.

rpm -ivh packagename.rpm

11) Command used to query an rpm package.

rpm -q packagename

12) Command to Check the server up time

uptime

13) Command to check the file versions

strings -a <filename> | grep '$Header'

14) Command to keep the last 'n' days of files and remove the rest.

find . -mtime +n -exec rm  {} \; &

15) Basic commands for the vi editor

i  :- insert before the cursor.

I  :- insert at the beginning of the line.

a  :- append after the cursor.

A  :- append at the end of the line.

o  :- insert a blank line below the cursor.

O  :- insert a blank line above the cursor position.

h  :- move one character to the left.

l  :- move one character to the right.

j  :- move one line down.

k  :- move one line up.

G (Shift+g)  :- go to the end of the file.

:1  :- go to the top of the file.

dd  :- delete the full line.

:q!  :- close the file without saving any changes.

:wq!  :- save the changes and close.

:w  :- save the changes without closing the file.

Monday, February 23, 2015

Streams Capture CCA Mode

Capture CCA Mode



Summary

In Oracle 11g a new Streams replication feature was added: the capture process can run in combined capture and apply (CCA) mode. When capture is running in this mode it directly captures the LCRs and propagates them to the remote apply process. Unfortunately there can be issues with CCA mode where Streams effectively hangs and stops propagating changes.

This document details the symptoms of the issue and the known workarounds to resolve it.

The current capture mode can be determined with

SQL> select CAPTURE_NAME, decode(OPTIMIZATION,0,'Non-CCA','CCA') from gv$streams_capture
  2  /
CAPTURE_TO_NYDB  CCA
CAPTURE_TO_LNDTB CCA


Symptoms/Observations

During a maintenance or release activity the capture process or the database is restarted. The symptoms of the issue are:

1. first_scn is two to three days in the past whereas the required_checkpoint_scn is up to date.

SQL> set lines 132
SQL> column fcn format a35
SQL> column rcn format a35
SQL> column capture_name format a20
SQL> column first_scn format 99999999999999
SQL> column required_checkpoint_scn format 99999999999999

SQL> select capture_name, first_scn, scn_to_timestamp(first_scn) fcn,
  2         required_checkpoint_scn, scn_to_timestamp(required_checkpoint_scn) rcn
  3  from   dba_capture;

CAPTURE_NAME               FIRST_SCN FCN                                 REQUIRED_CHECKPOINT_SCN RCN
-------------------- --------------- ----------------------------------- ----------------------- ----------------------------------
CAPTURE_TO_QNYDB       7420845030624 05-DEC-14 11.00.59.000000000 AM               7423420410445 08-DEC-14 01.43.57.000000000 PM
CAPTURE_TO_QSGDB       7420840301551 05-DEC-14 10.01.04.000000000 AM               7423420469291 08-DEC-14 01.51.35.000000000 PM

2. The capture state is showing as WAITING FOR INACTIVE DEQUEUERS

SQL> select capture_name, rtrim(state) from gv$streams_capture;


Workarounds

The first step is to set first_scn equal to the required_checkpoint_scn.

SQL> select capture_name, first_scn, required_checkpoint_scn from dba_capture ;

CAPTURE_NAME               FIRST_SCN REQUIRED_CHECKPOINT_SCN
-------------------- --------------- -----------------------
CAPTURE_TO_QNYDB    7420845030624           7423420410445
CAPTURE_TO_QSGDB   7420840301551           7423423914847

Login as strmadmin user

Take a note of the required_checkpoint_scn for the capture that is not working. Then:

begin
  dbms_capture_adm.alter_capture(capture_name => '[capture]',
                                 first_scn    => [required_checkpoint_scn]);
end;
/

exec dbms_capture_adm.stop_capture('[capture_name]')
exec dbms_capture_adm.start_capture('[capture_name]')

At this point try to restart capture and confirm it is not stuck waiting for inactive dequeuers. If the issue still persists then a capture checkpoint should be forced:

execute dbms_capture_adm.set_parameter('[capture]', '_CHECKPOINT_FORCE', 'Y');
stop/start capture


It has been observed that occasionally the above fails to remediate and capture is still stuck. In this case we should disable CCA mode for the capture that is not working. This is done by setting a capture parameter.

exec dbms_capture_adm.set_parameter('[capture_name]', '_DIRECT_APPLY', 'N');
stop/start only the affected capture

exec dbms_capture_adm.stop_capture('[capture_name]')
exec dbms_capture_adm.start_capture('[capture_name]')


Capture should then proceed to capture messages; however, this does not resolve our initial problem…

I believe the nature of the CCA bug is that once capture has mined past the time when the capture was restarted, and has checkpointed so that it does not need to re-read from a previous SCN, it will work. So if the checkpoint force and setting first_scn do not work, leave the capture running in non-CCA mode for many hours and then repeat the workaround steps above.



Row cache enqueue lock

Question: What are row cache enqueue locks? I see many row cache enqueue locks and I need to know how to remove this wait event.

The row cache enqueue lock is caused by DDL locking portions of the data dictionary as you create a table, an index or run other DDL commands. The row cache lock escalates into the row cache enqueue lock when multiple DDL statements are waiting to execute. The row cache enqueue locks are allocated within the shared_pool_size region of the SGA.

If the lock wait exceeds a predefined threshold, the DDL will abort and cause a trace file to be generated. This trace file will contain the string "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK".

Note: Long-running DDL (e.g. "alter table xxx move") is far more likely to generate a row cache enqueue lock than a fast-running DDL statement (e.g. truncate table).
The solution to excessive row cache enqueue lock waits is:
1 - Serialize the DDL by running only a small number of concurrent DDL tasks.
2 - If the row cache enqueue locks are for dc_sequences, consider using DBMS_SHARED_POOL.KEEP to pin your sequences.
Specifically, the row cache enqueue lock is within the data dictionary files and they tend to be located in the dc_objects and dc_tablespaces.
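Before looking at the cache statistics, a quick hedged check for sessions currently waiting on the row cache (the 'row cache lock' wait event; its P1 value maps to V$ROWCACHE.CACHE#):

SELECT sid, event, p1 cache_id, seconds_in_wait
FROM   v$session_wait
WHERE  event = 'row cache lock';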
You can run this script to see data dictionary row cache statistics by querying the v$rowcache view:

REM SET FLUSH OFF
REM SET TERM OFF
SET HEAD ON
SET PAGESIZE 59
SET LINESIZE 79
COLUMN parameter FORMAT A20
COLUMN type FORMAT a11
COLUMN percent FORMAT 999.99    HEADING "%";
COLUMN gets FORMAT 999,999      HEADING 'Gets'
COLUMN getmisses FORMAT 999,999 heading 'Get|Misses'
COLUMN count FORMAT 999,999     heading 'Count'
COLUMN usage FORMAT 999,999     HEADING 'Usage'
ttitle "DATA DICTIONARY CACHE STATISTICS"

SPOOL rep_out/ddcache.lis
SELECT
   parameter,
   type,
   gets,
   getmisses,
   ( getmisses / gets * 100) percent,
   count,
   usage
FROM
   v$rowcache
WHERE
   gets > 100 AND
   getmisses > 0
ORDER BY parameter;



Thursday, February 19, 2015

ORA-01157 and ORA-01110 when trying to bring my tablespaces online




SQL> alter tablespace example online;
alter tablespace example online
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\EXAMPLE01.DBF'


How do I fix this ORA-01157 error?




ORA-01157: cannot identify/lock data file string - see DBWR trace file

Cause: The background process was either unable to find one of the data files or failed to lock it because the file was already in use.

First check whether the data file in question is known to the database:

SQL> select file_name, file_id from dba_data_files where file_name like '%EXAMPLE01%';

If the query returns a row, the data file is registered in the data dictionary but could not be locked or written to at the time the error occurred; if it returns no rows, there is no data file with the given name.


The database will prohibit access to this file, but other files will be unaffected. However, the first instance to open the database will need to access all online data files. An accompanying error from the operating system describes why the file could not be identified.

Action: Have operating system make file available to database. Then either open the database or do ALTER SYSTEM CHECK DATAFILES.



The ORA-01157 is caused by a locking issue with the database writer (DBWR) background process. During a recovery, this can be caused by unopened data files (i.e. the database is mounted but not open), a missing file, or a permission problem on the file (e.g. missing write permissions on the files owned by Oracle).
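Once the operating system has made the file available again, a hedged recovery sketch (data file number 5 comes from the error above; media recovery may or may not be needed depending on how long the file was inaccessible):

SQL> ALTER SYSTEM CHECK DATAFILES;
SQL> RECOVER DATAFILE 5;
SQL> ALTER DATABASE DATAFILE 5 ONLINE;
SQL> ALTER TABLESPACE example ONLINE;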





Wednesday, February 11, 2015

Guaranteed Restore Points Oracle 11g

Guaranteed restore points are a life-saver when it comes to application changes. They ensure that you can rewind the database to a point in time without traditional point-in-time recovery. Guaranteed restore points are basically aliases for SCNs.
A normal restore point assigns a restore point name to an SCN or a specific point in time. The control file can maintain a record of thousands of normal restore points with no significant effect on database performance. Normal restore points eventually age out of the control file if not manually deleted, so they require no ongoing maintenance.
SQL> CREATE RESTORE POINT before_upgrade;
Creating a normal restore point eliminates manually recording an SCN in advance or determining the correct SCN after the fact by using features such as Flashback Query.
Like a normal restore point, a guaranteed restore point serves as an alias for an SCN in recovery operations. A principal difference is that guaranteed restore points never age out of the control file and must be explicitly dropped. In general, you can use a guaranteed restore point as an alias for an SCN with any command that works with a normal restore point. Except as noted, the information about where and how to use normal restore points applies to guaranteed restore points as well.
A guaranteed restore point ensures that you can use Flashback Database to rewind a database to its state at the restore point SCN, even if the generation of flashback logs is disabled. If flashback logging is enabled, then a guaranteed restore point enforces the retention of flashback logs required for Flashback Database to any SCN after the earliest guaranteed restore point.
This example shows how to create a guaranteed restore point:
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
Listing Restore Points
______________________
You can use the LIST command to list either a specific restore point or all restore points known to the RMAN repository. The variations of the command are as follows:
LIST RESTORE POINT restore_point_name;
LIST RESTORE POINT ALL;
RMAN indicates the SCN and time of the restore point, the type of restore point, and the name of the restore point. The following example shows sample output:
RMAN> LIST RESTORE POINT ALL;
using target database control file instead of recovery catalog
SCN              RSP Time  Type       Time      Name
---------------- --------- ---------- --------- -------------
341859           28-JUL-06            28-JUL-06 NORMAL_RS
343690           28-JUL-06 GUARANTEED 28-JUL-06 GUARANTEED_RS
To see a list of all currently defined restore points (normal and guaranteed), use the V$RESTORE_POINT control file view, by means of the following query:
SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#,
  2         GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE
  3  FROM V$RESTORE_POINT;
Dropping Restore Points
_______________________
When you are satisfied that you do not need an existing restore point, or when you want to create a new restore point with the name of an existing restore point, you can drop the restore point, using the DROP RESTORE POINT SQL*Plus statement. For example:
SQL> DROP RESTORE POINT before_app_upgrade;
Restore point dropped.
The same statement is used to drop both normal and guaranteed restore points.
Note:
Normal restore points eventually age out of the control file, even if not explicitly dropped.
Guaranteed restore points never age out of the control file. They remain until they are explicitly dropped.
Flashing back database till Restore Point
_________________________________________
Login to RMAN and bring the database to mount mode.
FLASHBACK DATABASE TO RESTORE POINT 'before_upgrade';
FLASHBACK DATABASE TO SCN 202381;
Open the database.
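Putting the steps together, a minimal RMAN sequence might look like this (a sketch; after Flashback Database the database must be opened with RESETLOGS):

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> FLASHBACK DATABASE TO RESTORE POINT 'before_upgrade';
RMAN> ALTER DATABASE OPEN RESETLOGS;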
I normally create guaranteed restore points before doing a Data Guard switchover. It helps to ensure you can go back to the state before you started (in case anything goes wrong).

PSU9 Patch

Pre-Check
------------
oracle@Server20028933:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied

Patch  13060271     : applied on Tue Oct 29 14:34:42 GMT 2013
Patch  13615767     : applied on Tue Oct 29 14:30:57 GMT 2013
Patch  15947884     : applied on Tue Oct 29 14:23:16 GMT 2013
Patch  13696216     : applied on Wed Apr 18 09:22:21 BST 2012
Patch  13696251     : applied on Wed Apr 18 09:20:51 BST 2012

grid@Server20027095:/DB/oracle/grid/oracle_grid_patches/ [+ASM2] $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013
Patch  13954738     : applied on Tue Aug 21 10:58:54 BST 2012
Patch  13696216     : applied on Wed Apr 18 09:10:19 BST 2012


GRID
--------

stop all databases

as grid

unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME

grid@Server20028932:/DB/oracle/grid/oracle_grid_patches/ [+ASM1] $ORACLE_HOME/OPatch/opatch version
OPatch Version: 11.2.0.3.6

OPatch succeeded.


as oracle

unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
oracle@Server20027094:/DB/oracle/data/oracle_rdbms_patches $ [rdbms1120]$ORACLE_HOME/OPatch/opatch version       
OPatch Version: 11.2.0.3.6

OPatch succeeded.

as root
--------
sudo /DB/GRID/11.2.0.3/bin/crsctl stop crs ( all nodes in cluster )


as grid
--------
/DB/GRID/11.2.0.3/OPatch/opatch rollback -id 13954738 ( first node only )

as root
--------
sudo /DB/GRID/11.2.0.3/bin/crsctl start crs ( all nodes in cluster )


as grid
-------

grid@Server20027094:/DB/oracle/grid/oracle_grid_patches/ [+ASM1]  $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013
Patch  13696216     : applied on Wed Apr 18 09:10:19 BST 2012


as grid
-------

cd /DB/oracle/grid/oracle_grid_patches
unzip p17735354_112030_Linux-x86-64.zip

as root ( one server at a time in cluster )
-------
sudo /tmp/grid_rootpre.sh
/bin/umount /DB/orafs/dbpw
/DB/GRID/11.2.0.3/OPatch/opatch auto /DB/oracle/grid/oracle_grid_patches -oh /DB/GRID/11.2.0.3 -ocmrf /DB/GRID/11.2.0.3/OPatch/ocm/bin/emocmrsp

as grid
-------
grid@Server20027094:/DB/GRID/11.2.0.3/log/Server20027094/DBsd/ [+ASM1] $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  17540582     : applied on Tue Mar 04 13:10:33 GMT 2014
Patch  17592127     : applied on Tue Mar 04 13:07:11 GMT 2014
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013


as oracle
---------
oracle@Server20025086:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13060271     : applied on Mon Nov 25 15:07:55 GMT 2013
Patch  13615767     : applied on Mon Nov 25 15:05:05 GMT 2013
Patch  15947884     : applied on Mon Nov 25 15:01:18 GMT 2013
Patch  13696216     : applied on Wed Apr 18 09:22:21 BST 2012
Patch  13696251     : applied on Wed Apr 18 09:20:51 BST 2012

$ORACLE_HOME/OPatch/opatch rollback -id 15947884

oracle@Server20025089:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13060271     : applied on Mon Nov 25 15:14:56 GMT 2013
Patch  13615767     : applied on Mon Nov 25 15:13:56 GMT 2013
Patch  13696216     : applied on Wed Apr 18 09:22:21 BST 2012
Patch  13696251     : applied on Wed Apr 18 09:20:51 BST 2012
oracle@Server20025089:/DB/oracle/data $ [rdbms1120]

as grid
-------
sudo /tmp/grid_rootpre.sh  
/bin/umount /DB/orafs/dbpw
/DB/GRID/11.2.0.3/OPatch/opatch auto /DB/oracle/grid/oracle_grid_patches -oh /DB/oracle/product/11.2.0.3 -ocmrf /DB/oracle/product/11.2.0.3/OPatch/ocm/bin/emocmrsp


as oracle
---------
oracle@Server20028932:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  17540582     : applied on Thu Mar 13 11:32:22 GMT 2014
Patch  17592127     : applied on Thu Mar 13 11:28:44 GMT 2014
Patch  13615767     : applied on Tue Oct 29 14:30:57 GMT 2013


as grid
-------
cd /DB/oracle/grid/oracle_grid_patches
unzip p13954738_112039_Linux-x86-64.zip

as root
-------
sudo /tmp/grid_rootpre.sh

/DB/GRID/11.2.0.3/OPatch/opatch auto /DB/oracle/grid/oracle_grid_patches -oh /DB/GRID/11.2.0.3 -ocmrf /DB/GRID/11.2.0.3/OPatch/ocm/bin/emocmrsp


grid@Server20027094:/DB/GRID/11.2.0.3/log/Server20027094/DBsd/ [+ASM1] $ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  13954738     : applied on Tue Mar 04 13:25:27 GMT 2014
Patch  17540582     : applied on Tue Mar 04 13:10:33 GMT 2014
Patch  13614429     : applied on Tue Jul 09 17:19:32 BST 2013


RDBMS
------------

as oracle
----------
oracle@Server20025090:/DB/oracle/data $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied
Patch  17540582     : applied on Sat Mar 08 11:16:00 GMT 2014
Patch  17592127     : applied on Sat Mar 08 11:12:19 GMT 2014
Patch  13615767     : applied on Mon Nov 25 15:13:56 GMT 2013

as oracle
---------
cd /DB/oracle/data/oracle_rdbms_patches
unzip p15947884_112038_Linux-x86-64.zip

cd 15947884

oracle@Server20025089:/DB/oracle/data/oracle_rdbms_patches/15947884 $ [rdbms1120]$ORACLE_HOME/OPatch/opatch apply

oracle@Server20025089:/DB/oracle/data/oracle_rdbms_patches/15947884 $ [rdbms1120]$ORACLE_HOME/OPatch/opatch lsinventory | grep applied

Patch  15947884     : applied on Sat Mar 08 11:49:40 GMT 2014
Patch  17540582     : applied on Sat Mar 08 11:02:25 GMT 2014
Patch  17592127     : applied on Sat Mar 08 10:58:43 GMT 2014
Patch  13615767     : applied on Mon Nov 25 15:13:56 GMT 2013

oracle@Server20025089:/DB/oracle/data/oracle_rdbms_patches/15947884 $ [rdbms1120]

Start all the databases.

Primary Side Only for each database

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

Primary Side Only, once for any database (RMAN prompts you to enter the command a second time to confirm):

UPGRADE CATALOG;
UPGRADE CATALOG;