Sunday, December 21, 2008
Oracle Installation Step-Throughs
Simple instructions for installing Oracle on Linux
http://www.mike-devlin.com/oracle/oracle_install.htm
How to Install and Configure Oracle on Linux
Installation gotchas for Oracle Linux, Oracle RAC 10gR2 and VMware ESX 3.0.1
Installing Oracle 10g Release 2 Clusterware on a 2-node Windows 2003 Enterprise Edition Server
TAF Transparent App Failover in non-RAC standby database instances
Nice article on using Transparent Application Failover in non-RAC standby instances
http://www.freelists.org/post/oracle-l/TAF-and-node-failover-non-RAC-oracle-102,15
Transparent Application Failover in non-RAC
http://www.freelists.org/post/oracle-l/TAF-and-node-failover-non-RAC-oracle-102,15
Transparent Application Failover in non-RAC
Wednesday, December 10, 2008
Start Oracle when OS server reboots
http://www.orafaq.com/wiki/Unix_FAQ
Make sure the entry for your database SID in the ORATAB file ends in a capital "Y". Eg:
# $ORACLE_SID:$ORACLE_HOME:[N|Y]
#
ORCL:/u01/app/oracle/product/8.0.5:Y
#
The scripts for starting and stopping databases are: $ORACLE_HOME/bin/dbstart and dbshut. SQL*Net (or Net8) is started with the lsnrctl command. Add the following entries to your /etc/rc2.d/S99dbstart (or equivalent - for example HP-UX uses /sbin/rc2.d/*) file:
su - oracle -c "/path/to/$ORACLE_HOME/bin/dbstart" # Start DB's
su - oracle -c "/path/to/$ORACLE_HOME/bin/lsnrctl start" # Start listener
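Putting the pieces together, a minimal sketch of such a boot script (the ORACLE_HOME path is an assumption; adjust to your installation):
#!/bin/ksh
# hypothetical /etc/rc2.d/S99dbstart - starts databases flagged Y in oratab, then the listener
ORACLE_HOME=/u01/app/oracle/product/8.0.5   # assumed path
su - oracle -c "$ORACLE_HOME/bin/dbstart"
su - oracle -c "$ORACLE_HOME/bin/lsnrctl start"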
Return value from sqlplus to shell variable
http://www.orafaq.com/wiki/Unix_FAQ
For returning a single value, use this:
yourvalue=`sqlplus -s $LOGON_NAME/$LOGON_PASSWD << EOF
set heading off;
select col2 from table_a where col1 = 10;
exit;
EOF`
For returning multiple values, use this:
sqlplus -s /nolog |& # Open a pipe to SQL*Plus
print -p -- 'connect user/password@instance'
print -p -- 'set feed off pause off pages 0 head off veri off line 500'
print -p -- 'set term off time off'
print -p -- "set sqlprompt "
print -p -- "select sysdate from dual;"
read -p SYSDATE
print -p -- "select user from dual;"
read -p USER
print -p -- "select global_name from global_name;"
read -p GLOBAL_NAME
print -p -- exit
echo SYSDATE: $SYSDATE
echo USER: $USER
echo GLOBAL_NAME: $GLOBAL_NAME
For passing an error value back to the shell, use the following:
http://www.sap-img.com/oracle-database/passing-values-back-to-unix-script-from-sqlplus.htm
Write the following code in the SQL script:
set echo on
whenever sqlerror exit 1
whenever oserror exit 2
--your code--
exit 0
Write the following code in the shell script to receive the value:
sqlplus -s user/password @yourscript.sql   # call the SQL script
ret_val=$?
Now ret_val will be 0 if the SQL script succeeded, and 1 or 2 otherwise.
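Putting the pieces together, a minimal end-to-end sketch (script name, credentials and the query are illustrative):
-- check_db.sql
whenever sqlerror exit 1
whenever oserror exit 2
select count(*) from user_tables;
exit 0
#!/bin/ksh
# caller script - runs the SQL script and tests its exit status
sqlplus -s scott/tiger @check_db.sql > /dev/null
ret_val=$?
if [ $ret_val -ne 0 ]
then
echo "SQL script failed with exit code $ret_val"
fi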
Friday, October 31, 2008
Oracle dbms_redefinition
Oracle tables and indexes may be rebuilt online using the DBMS_REDEFINITION utility package. Exceptions are tables on which materialized views are defined.
When making drastic changes to a table's structure, CAN_REDEF_TABLE may be used to find out whether online rebuilding is possible.
The following is a useful article:
Oracle dbms_redefinition
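A minimal sketch of the feasibility check (schema and table names are illustrative):
SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');
-- completes silently if EMP can be redefined online; raises an error explaining why if it cannot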
Thursday, October 2, 2008
DB Link creation format
Just for reference: even if there is no tnsnames.ora entry, we can use the complete connect string.
create database link MY_DBLINK connect to schema_user identified by pswd using '(DESCRIPTION =(ADDRESS_LIST =(ADDRESS =(COMMUNITY = tcp.world)(PROTOCOL = TCP)(Host = myserver.unix.myorg.org)(Port = 1521)))(CONNECT_DATA = (SID = MYDB_SID)))'
With tns entry: create database link MY_DBLINK connect to schema_user identified by pswd using 'TNS_CONNECT_ENTRY'
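A quick way to verify the link afterwards (generic test query):
SQL> select sysdate from dual@MY_DBLINK;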
Thursday, September 25, 2008
ORA-01034: ORACLE not available error
While logging into the database through SQL*Plus (9.2.0.5.0), the following error occurred.
SQL> conn / as sysdba
Connected to an idle instance.
SQL> select 1 from dual;
select 1 from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
ORA-01034: ORACLE not available error
ORA-27101: shared memory realm does not exist
HP-UX Error: 2: No such file or directory
The ORACLE_SID environment variable was checked and found OK.
It was found that the TWO_TASK parameter was not set.
It was set to the proper SID.
After that, connecting to the database succeeded.
http://www.orafaq.com/wiki/ORA-01034
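A quick sketch for checking the relevant environment from the shell before connecting (the SID value is illustrative):
$ echo $ORACLE_SID
$ echo $TWO_TASK
$ export TWO_TASK=MYDB_SID   # hypothetical value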
Wednesday, August 13, 2008
Scripts for disabling and enabling constraints
Running the following statements in SQL*Plus will create two SQL files with statements for disabling and enabling the FK constraints in the current schema.
set echo off
set verify off
set feedback off
set heading off
set pagesize 0
set linesize 1000
spool disable_constraints_script.sql
select 'alter table '||table_name||' disable constraint '||constraint_name||';' from user_constraints where constraint_type in ('R');
spool enable_constraints_script.sql
select 'alter table '||table_name||' modify constraint '||constraint_name||' enable novalidate;' from user_constraints where constraint_type in ('R');
spool off
Tips on day-to-day communications and use of language
Useful tips on day-to-day grammar in communication / correspondence
Monday, August 4, 2008
Counting rows in Partition with millions of rows
If the number of rows in a partitioned table runs into the millions, use the following to minimize the time taken to return the row count:
SQL> set timing on
SQL> select /*+ full(m) parallel(m,16) */ count(*) from MY_TABLE partition (MY_TABLE_PR108) M;
COUNT(*)
----------
209204953
Elapsed: 00:06:07.02
SQL> alter table MY_TABLE truncate partition MY_TABLE_PR108;
Table truncated.
Elapsed: 00:00:12.20
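If reasonably fresh optimizer statistics exist, an approximate count can also be read from the dictionary without scanning the partition at all (NUM_ROWS is only as current as the last statistics gathering):
SQL> select partition_name, num_rows from user_tab_partitions where table_name = 'MY_TABLE';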
Friday, August 1, 2008
Parameter file for exp & imp
A ready reference to exp & imp parfile
$exp parfile=ExportParfile.par
$imp parfile=ImportParfile.par
ExportParfile.par
OWNER=VSTREFWORK
FEEDBACK=10000
COMPRESS=N
STATISTICS=NONE
RECORDLENGTH=65535
BUFFER=10248000
USERID=system/password
LOG=myexp.log
FILE=myexp.dmp
DIRECT=Y (speeds up the export for FULL=Y; cannot be used with QUERY)
ImportParfile.par
FROMUSER=VSTREFWORK
TOUSER=QATREF18
file=my_exp_file.dmp
log=my_import_log.log
grants=n
commit=y
buffer=10240000
ignore=y
statistics=none
USERID=system/password
Use SHOW=Y (imp only) for running a mock import, i.e. only creating a log file without actually importing.
Use help for further details
exp HELP=Y
imp HELP=Y
Thursday, July 31, 2008
Unix File System configuration
/etc/fstab
stores the filesystem mount point information.
A backup of this file comes in handy if we need to restore and mount the filesystems.
# System /etc/fstab file. Static information about the file systems
# See fstab(4) and sam(1M) for further details on configuring devices.
The first column is the device, the second its mount point, the third the filesystem type, followed by mount options, dump options and filesystem check options.
The Unix administrator has to maintain this file correctly.
It is used by the mount, umount and fsck commands.
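A sample entry matching the columns above (device, mount point and options are illustrative, HP-UX style):
/dev/vg01/lvol1  /u01  vxfs  delaylog  0  2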
Friday, July 25, 2008
Enable SQL trace for remote session
If the users provide the remote client's process ID, running the SQL below (copy and paste into the TOAD SQL editor and execute) will produce the statements to run for enabling/disabling session trace, along with the full path of the trace file.
Pre-requisites:
1) SELECT privilege on V$SESSION, V$PROCESS views (required for running this sql)
2) EXECUTE privilege on DBMS_SYSTEM or DBMS_MONITOR package.
3) Write permission to the directory where we want to store the trace output file, i.e. udump
Note: this assumes the user is connected from a Unix client (app server / computer) to an Oracle DB server on Unix.
For Windows-based client computers, the client process ID is a combination of Process:Thread.
select 'Level 1 Trace' "For Trace Level",'execute sys.DBMS_SYSTEM.Set_Sql_Trace_In_Session(' || s.sid || ', ' || s.serial# || ', TRUE)' "Command for Starting Trace" ,
'execute sys.DBMS_SYSTEM.Set_Sql_Trace_In_Session(' || s.sid || ', ' || s.serial# || ', FALSE)' "Command for Stopping Trace",
s.program, s.username , c.value || '/' || lower(d.value)|| '_ora_' ||p.spid||'.trc' "TRACE FILE"
from
V$SESSION s, V$PROCESS p,v$parameter c, v$parameter d where
p.addr=s.paddr and s.process=:RemoteClientProcessID and
c.name = 'user_dump_dest' and d.name = 'db_name'
union
select 'Level 1 Trace (another stmt...)' "For Trace Level",'execute sys.DBMS_MONITOR.session_trace_enable(' || s.sid || ', ' || s.serial# || ', waits=>FALSE, binds=>FALSE)' "Command for Starting Trace" ,
'execute sys.DBMS_MONITOR.session_trace_disable(' || s.sid || ', ' || s.serial# || ')' "Command for Stopping Trace",
s.program, s.username ,
c.value || '/' || lower(d.value)|| '_ora_' ||p.spid||'.trc' "TRACE FILE"
from
V$SESSION s, V$PROCESS p,v$parameter c, v$parameter d where
p.addr=s.paddr and s.process=:RemoteClientProcessID and
c.name = 'user_dump_dest' and d.name = 'db_name'
union
select 'Level 4 ; with Bind values' "For Trace Level",'execute sys.DBMS_MONITOR.session_trace_enable(' || s.sid || ', ' || s.serial# || ', waits=>FALSE, binds=>TRUE)' "Command for Starting Trace" ,
'execute sys.DBMS_MONITOR.session_trace_disable(' || s.sid || ', ' || s.serial# || ')' "Command for Stopping Trace",
s.program, s.username ,
c.value || '/' || lower(d.value)|| '_ora_' ||p.spid||'.trc' "TRACE FILE"
from
V$SESSION s, V$PROCESS p,v$parameter c, v$parameter d where
p.addr=s.paddr and s.process=:RemoteClientProcessID and
c.name = 'user_dump_dest' and d.name = 'db_name'
union
select 'Level 8 ; with bind and wait values' "For Trace Level",'execute sys.DBMS_MONITOR.session_trace_enable(' || s.sid || ', ' || s.serial# || ', waits=>TRUE, binds=>TRUE)' "Command for Starting Trace" ,
'execute sys.DBMS_MONITOR.session_trace_disable(' || s.sid || ', ' || s.serial# || ')' "Command for Stopping Trace",
s.program, s.username ,
c.value || '/' || lower(d.value)|| '_ora_' ||p.spid||'.trc' "TRACE FILE"
from
V$SESSION s, V$PROCESS p,v$parameter c, v$parameter d where
p.addr=s.paddr and s.process=:RemoteClientProcessID and
c.name = 'user_dump_dest' and d.name = 'db_name';
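Once tracing has been stopped, the raw trace file can be formatted with tkprof; a typical invocation (file names are illustrative):
$ tkprof mydb_ora_12345.trc trace_report.txt sys=no sort=exeela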
Thursday, July 3, 2008
Unix VI find & replace
To search the whole file and replace a string:
:%s/search_string/replacement_string/g
http://unix.t-a-y-l-o-r.com/VBsr.html
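Two common variations of the same command (standard vi syntax):
:%s/search_string/replacement_string/gc (prompts for confirmation at each match)
:10,20s/search_string/replacement_string/g (restricts the change to lines 10-20)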
Wednesday, April 16, 2008
Script to permanently change hostname, IP address and DNS client configuration
http://www.sun.com/bigadmin/scripts/submittedScripts/chhostname.ksh.txt
Sunday, April 13, 2008
Broken snapshots - repair
exec dbms_job.broken(218, FALSE, NEXT_DAY(SYSDATE, 'SUNDAY'));
commit; -- DO NOT FORGET TO COMMIT
This will make the broken job active again, setting the next scheduled run date.
If you want to execute the job immediately to refresh the snapshot, use dbms_job.run(218)
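To list which jobs are currently marked broken (standard dictionary view):
SQL> select job, broken, failures, what from dba_jobs where broken = 'Y';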
Export GRANTS on tables a particular user has made
#!/bin/ksh
sqlplus -silent /nolog<<eof
connect / as sysdba
set serveroutput on
SET HEAD OFF
SET PAGESIZE 0
SET FEEDBACK OFF
SPOOL /tmp/privs_exp.sql
SELECT 'GRANT ' || PRIVILEGE || ' ON ' ||OWNER||'.'||TABLE_NAME|| ' TO '||GRANTEE|| DECODE(GRANTABLE, 'YES' , ' WITH GRANT OPTION', ' ' )||';' FROM DBA_TAB_PRIVS WHERE GRANTOR IN ('NP_TRIAL', 'NAVEEN', 'TOAD');
SPOOL OFF
exit;
eof
Tuesday, March 18, 2008
Upgrades and Patches - best practice
Reference
http://searchoracle.techtarget.com/tip/0,289483,sid41_gci1252556_tax306964,00.html
1) Always test the upgrade procedure and the post-upgrade installation in a test environment.
2) Use a separate ORACLE_HOME: install the existing version into it temporarily (with a temporary modification of the inventory directory location in the oraInst.loc file). With a brief outage, point the running database to this new home, then reinstall the binaries in the original home and edit oraInst.loc back. While the database is still running from the other home, install the patch on those binaries. Finally, take another brief outage and start the database with the upgraded installation.
3) When going for an OS upgrade, if the current Oracle version is not supported on the new OS, first upgrade Oracle to a supported version, then upgrade the OS. Ensure the binary links are recreated.
4) Set AUDIT_TRAIL = DB
5) Create separate user for client install.
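For reference, oraInst.loc (under /etc or /var/opt/oracle depending on platform) contains just two entries; the values shown are illustrative:
inventory_loc=/u01/app/oraInventory
inst_group=oinstall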
Monday, March 17, 2008
RMAN commands - Examples
CONFIGURE DEFAULT DEVICE TYPE TO sbt;
CONFIGURE ARCHIVELOG DELETION POLICY TO
BACKED UP 2 TIMES
TO DEVICE TYPE sbt;
The following DELETE command deletes all archived redo logs on disk if they are not needed to meet the configured deletion policy, which specifies that logs must be backed up twice to tape (sample output included):
RMAN> DELETE ARCHIVELOG ALL;
---------
This example deletes backups and copies that are not needed to recover the database to an arbitrary SCN within the last week. RMAN also deletes archived redo logs that are no longer needed.
DELETE NOPROMPT OBSOLETE RECOVERY WINDOW OF 7 DAYS;
---------
This example uses a configured sbt channel to check the media manager for expired backups of the tablespace users that are more than one month old and removes their recovery catalog records.
CROSSCHECK BACKUPSET OF TABLESPACE users
DEVICE TYPE sbt COMPLETED BEFORE 'SYSDATE-31';
DELETE NOPROMPT EXPIRED BACKUPSET OF TABLESPACE users
DEVICE TYPE sbt COMPLETED BEFORE 'SYSDATE-31';
---------
The following example attempts to delete the backup set copy with tag weekly_bkup:
RMAN> DELETE NOPROMPT BACKUPSET TAG weekly_bkup;
RMAN displays a warning because the repository shows the backup set as available, but the object is not actually available on the media:
The following command forces RMAN to delete the backup set:
RMAN> DELETE FORCE NOPROMPT BACKUPSET TAG weekly_bkup;
RMAN Scripts - Creating and running from Catalog Database
CONNECT TARGET SYS/password@prod
CONNECT CATALOG rman/password@catdb
CREATE SCRIPT backup_whole
COMMENT "backup whole database and archived redo logs"
{
BACKUP
INCREMENTAL LEVEL 0 TAG backup_whole
FORMAT "/disk2/backup/%U"
DATABASE PLUS ARCHIVELOG;
}
RUN { EXECUTE SCRIPT backup_whole; }
For scripts using substitution variables:
CREATE SCRIPT backup_df { BACKUP DATAFILE &1 TAG &2.1 FORMAT '/disk1/&3_%U'; }
RUN { EXECUTE SCRIPT backup_df USING 3 test_backup df3; }
-------------
RMAN> LIST SCRIPT NAMES;
List of Stored Scripts in Recovery Catalog
RMAN> DELETE GLOBAL SCRIPT global_backup_db;
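To inspect the contents of a stored script later (standard RMAN command):
RMAN> PRINT SCRIPT backup_whole;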
RMAN Recovery Catalog Database creation
SQL> CONNECT SYS/password@catdb AS SYSDBA
SQL> CREATE USER catowner IDENTIFIED BY oracle
2 DEFAULT TABLESPACE cattbs
3 QUOTA UNLIMITED ON cattbs;
SQL> GRANT recovery_catalog_owner TO catowner;
SQL> EXIT
RMAN> CONNECT CATALOG catowner/password@catdb
RMAN> CREATE CATALOG;
RMAN> CONNECT TARGET SYS/password@prod1
RMAN> REGISTER DATABASE; ---this will sync all already existing RMAN data in Target db's control files with the catalog database
RMAN> EXIT
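If the catalog later falls out of step with the target's control file (for example, backups were run while the catalog was unavailable), it can be resynchronized manually:
RMAN> RESYNC CATALOG;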
Dataguard - Physical Standby Role Transitions : Switchover & Failover
Refer to previous posting creating-dataguard-physical-standby-in.html
To conduct a switchover to the standby so that maintenance can be carried out on the primary, follow these steps:
1) Ensure temporary files exist on the standby database that match the temporary files on the primary database
2) Check the switchover status of both primary and standby using SELECT SWITCHOVER_STATUS FROM V$DATABASE;
3) Check the lag on all standbys using SELECT * FROM V$DATAGUARD_STATS; and choose the one that will require the minimum time for the transition.
4) If a standby database currently running in maximum protection mode will be involved in the failover, first place it in maximum performance mode by issuing the following statement on the standby database:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
5) Primary : ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
6) Primary : SHUTDOWN IMMEDIATE;
7) Primary : STARTUP MOUNT; ---Now the former primary has become a standby
8) Standby : ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
9) Standby : ALTER DATABASE OPEN; if it was ever opened in read-only mode, then SHUTDOWN IMMEDIATE and STARTUP
10) Check the log transport and apply by running ALTER SYSTEM SWITCH LOGFILE; on the new primary
The following steps are to be used for failover in case of a problem with the primary database:
1) Check the archive log gaps: SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
2) The archive logs missing on the standby may be physically copied to the standby file system and registered using ALTER DATABASE REGISTER PHYSICAL LOGFILE 'filespec1'; repeat until the above query returns no rows.
3) Initiate failover on the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;
4) Finish the failover using ALTER DATABASE OPEN; if it was ever opened in read-only mode, then SHUTDOWN IMMEDIATE and STARTUP
The target physical standby database has now undergone a transition to the primary role, while the previous primary is no longer a participant in Data Guard; it needs to be recreated from the current primary.
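After either transition, the new role of each database can be verified with a standard query:
SQL> SELECT DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;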
Thursday, March 13, 2008
Renaming / Moving Data Files, Control Files, and Online Redo Logs
http://www.idevelopment.info/data/Oracle/DBA_tips/Database_Administration/DBA_35.shtml
Only applies to datafiles whose tablespaces do not include SYSTEM, ROLLBACK or TEMPORARY segments
SQL> shutdown immediate
or SQL> alter tablespace INDX offline;
SQL> !mv /u05/app/oradata/ORA920/indx01.dbf /u06/app/oradata/ORA920/indx01.dbf
SQL> startup mount - if shutdown immediate was done
For both datafiles and redolog files:
SQL> alter database rename file '/u05/app/oradata/ORA920/indx01.dbf' to '/u06/app/oradata/ORA920/indx01.dbf';
Do not disconnect after this step. Stay logged in and proceed to open the database!
SQL> alter database open;
SQL> exit
For renaming controlfiles,
1) Edit the control_files parameter in the init.ora file (create pfile from spfile; if running in spfile mode)
or : alter system set control_files='/oracle/control1.ctl','/oracle2/control2.ctl' scope=spfile;
2) Shutdown instance
3) OS move of controlfiles as required
4) Startup using pfile
STARTUP PFILE=C:\Oracle\Admin\SID\PFile\init.ora;
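To confirm the new file locations after either procedure (standard views):
SQL> select name from v$controlfile;
SQL> select name from v$datafile;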
Tuesday, March 11, 2008
ROWID format
ROWID format: block ID, row in block, file ID.
ROWIDs must be entered as formatted hexadecimal strings using only numbers and the characters A through F. A typical ROWID format is '000001F8.0001.0006'.
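The components can also be decoded programmatically with the DBMS_ROWID package (standard functions; the table name is illustrative):
SQL> select dbms_rowid.rowid_relative_fno(rowid) file_id,
  2         dbms_rowid.rowid_block_number(rowid) block_id,
  3         dbms_rowid.rowid_row_number(rowid) row_num
  4  from emp where rownum = 1;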
Monday, March 10, 2008
Unix Commands - Syntax
Some basic unix command usage examples
To delete files (with extension .dbf) in a folder which are older than 7 days. This may be put inside a shell script for executing as a cron job (see the crontab sketch after this list).
#!/bin/ksh
$ find /u02/ora/archives -type f -mtime +7 -name \*.dbf -print -exec rm {} \;
Example usages link
To delete ALL files owned by a user:
$ find . -user USERID | awk '{print "rm "$NF}' | ksh
'find' command links
To Copy files matching certain criteria to another directory (just to practice grep, printf and awk features...)
$ ls -altr | grep "Jul 3" | grep -v ^d | awk '{printf("cp %s ./new_dir/%s\n",$9,$9)}' | sh
$ for mystatement in `cat myfile.dat | awk '{print $1}'`
do
sqlplus -s '/ as sysdba' <<EOF!
$mystatement;
EOF!
done
$ chown oracle:dba * : to change owner and group of all files in current dir
$ ps -ef | grep 234 | awk '{print $2}' | xargs kill
$ ps -ef | egrep 'pmon|smon|lgwr|dbw|arc' | grep -v grep
$ ls -altr
$ ptrace [processid]
$ pfiles [processid]
$ ptree [processid]
$ truss -p [processid]
$ top
$ sort -t: +2n -3 /etc/passwd
-t : separator
+2n : begin - nth occurance of separator
-3 : end
$ who | sort +4n
$ vmstat [interval] 6 (e.g. vmstat 5 6)
$ mpstat - for multiprocessor
$ sar
$ iostat
$ uname -a
$ at 8:15
$ uucp
$ du -h : Disk usage
$ bdf . : to find the disk free space on a particular mount volume (in %)
$ df : to find the disk free space on a particular mount volume
$ tail -100 filename.ext
$ tnsping INSTNAME
$ which top
$ bc : calculating arithmetic from the command line
$ id -a : user id and group id
$ su - : switch to root user i.e. superuser
$ tar -cvvf tarfile.tar *.* for creating tarfile
$ tar -xvvf tarfile.tar for extracting from existing tarfile
$ tar -xvvzf tarfile.tar.gz for uncompress
$ rcp -r user@sourcehostname:sourcefileordir user@desthostname:destfileordir
$ gzip filename for compressing the file replaces the file with filename.gz
$ gzip -dvf zipfile.gz for uncompressing a .gz file
Useful link : Shell Programming
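The crontab entry mentioned for the archive-purge example might look like this (script path and schedule are illustrative; edit with crontab -e):
0 2 * * * /home/oracle/scripts/purge_archives.ksh > /tmp/purge_archives.log 2>&1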
Saturday, March 8, 2008
DATAGUARD - Physical Standby on same machine
Creating Dataguard Physical Standby (in the same machine)
1) Primary - Enable Force Logging
2) Primary - Ensure Archivelog mode
3) Primary - Make pfile from spfile
4) Primary - Incorporate Changes in pfile
5) Primary - Create Standby Redo file Group & files (check sufficient MAXLOGFILES and MAXLOGMEMBERS values)
6) Primary - Create password file (if not there)
7) listener.ora - Add entries for additional listener
8) tnsnames.ora - Add entries for additional services
9) sqlnet.ora - Include SQLNET.EXPIRE_TIME=2 DEFAULT_SDU_SIZE=32767
10) Primary - Shutdown immediate (for backup) or use RMAN online backup copy
11) Primary - Backup & Copy all datafiles to Standby file system location
12) Primary - Create a Backup Controlfile for use with the standby and copy the file to standby
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/boston.ctl';
13) Primary - Open database
SQL> ALTER DATABASE OPEN;
14) Standby - Copy the password file (or create a new one with same sys password as in Primary) and pfile from primary
15) Standby - Edit the pfile and change db_unique_name, control_files, log_archive_dest_n, db_file_name_convert, log_file_name_convert, fal_server, fal_client (see the pfile sketch after this post)
16) Standby - Startup mount
17) Standby - Start managed recovery:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
--
Check the log archiving and transfer by running the following on both Primary & Standby after executing ALTER SYSTEM SWITCH LOGFILE; in Primary
--
SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;
--
SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
--
SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
FROM V$ARCHIVE_DEST_STATUS
WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
--
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
(SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
LOCAL WHERE
LOCAL.SEQUENCE# NOT IN
(SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
THREAD# = LOCAL.THREAD#);
Recovery Steps
Fault Tolerance
http://www.itk.ilstu.edu/docs/oracle/server.101/b10726/appconfig.htm#g635861
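A sketch of the standby-side pfile overrides from step 15, assuming a primary named chicago and a standby named boston (all names and paths are illustrative):
db_unique_name=boston
control_files='/u01/oradata/boston/standby.ctl'
db_file_name_convert='/u01/oradata/chicago/','/u01/oradata/boston/'
log_file_name_convert='/u01/oradata/chicago/','/u01/oradata/boston/'
log_archive_dest_1='LOCATION=/u01/arch/boston VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
fal_server=chicago
fal_client=boston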
Wednesday, March 5, 2008
Compressing and splitting large exports in Unix
In the export parfile, point FILE at the named pipe:
file=compress_pipe
Create pipes, one for compression and another for splitting :
rm -f compress_pipe
rm -f split_pipe
mknod compress_pipe p
mknod split_pipe p
chmod g+w compress_pipe
chmod g+w split_pipe
split -b500m split_pipe &   # produces pieces xaa, xab, xac, ...
compress < compress_pipe > split_pipe &
exp parfile=export_tab.par file=compress_pipe > exp_tab.list 2>&1 &
Execute the reverse for import :
cat xaa xab xac xad > join_pipe &
uncompress -c join_pipe > uncompress_pipe &
imp file=uncompress_pipe
Migration from one OS Platform to another (with different endian formats)
A transportable tablespace allows you to quickly move a subset of an Oracle database from one Oracle database to another. Beginning with the Oracle 10g database, a tablespace can always be transported to a database with the same or higher compatibility setting, whether the target database is on the same or a different platform.
First check whether the tablespace is self-contained, i.e. all dependencies are within the tablespace itself.
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('research, another_ts', TRUE);
SQL> SELECT * FROM transport_set_violations;
Find the Oracle internal name and endian for the host & target OS using
SQL> select name, platform_id,platform_name from v$database;
SQL> SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
2 FROM V$TRANSPORTABLE_PLATFORM;
The following CONVERT ... TO PLATFORM is required only if the host and target OS have different endian formats:
% rman TARGET /
RMAN> CONVERT TABLESPACE research
2> TO PLATFORM 'Microsoft Windows NT'
3> FORMAT='/tmp/oracle/transport_windows/%U'
4> PARALLELISM = 4;
To preserve the filenames from the host, use the following instead of FORMAT:
db_file_name_convert '/usr/oradata/dw10/dw10','/home/oracle/rman_bkups'
Copy the converted datafiles (binary FTP) research_file.dbf to the target machine
Use the Export utility to create the file of metadata information after making the tablespace ReadOnly
alter tablespace research read only;
exp tablespaces=research transport_tablespace=y file=exp_ts_research.dmp
If Data Pump is to be used:
expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_ts_research.dmp \
LOGFILE=exp_ts_research.log TRANSPORT_TABLESPACES=research, another_ts \
TRANSPORT_FULL_CHECK=Y
The metadata information file should be moved from its location and the converted datafiles in directory as in FORMAT to their respective target directories on the destination
Then plug the tablespace(s) into the new database with the Import utility
imp tablespaces=research transport_tablespace=y file=exp_ts_research.dmp datafiles='research_file.dbf'
Importing using Data Pump:
impdp system/manager DIRECTORY=my_dir2 DUMPFILE=exp_ts_research.dmp
LOGFILE=imp_tbsp.log TRANSPORT_DATAFILES=\('research_file.dbf',
'another_ts_file.dbf'\) REMAP_SCHEMA=\(np:np\)
TRANSPORT_FULL_CHECK=Y
If the busy OLTP source database cannot spare processor overhead for the conversion, the conversion may be carried out at the target database. Note that CONVERT DATAFILE can only be used when connected as TARGET to the destination database, converting datafile copies on the destination platform:
RMAN> convert
2> datafile '/usr/oradata/dw10/dw10/users01.dbf'
3> format '/home/oracle/rman_bkups/%N_%f';
Note: CONVERT DATABASE may be used on read only database as target only if the endian of the destination database and the source are the same.
Using External Tables, in 10g fast transport of selective column data (to Datawarehouses) is possible
create table trans_dump
organization external
(
type oracle_datapump
default directory dump_dir
location ('trans_dump_1.dmp','trans_dump_2.dmp')
)
parallel 4
as
select * from trans
/
The files generated are readable across platforms (like an export dump).
Further readings on Transportable tablespaces
http://www.rampant-books.com/art_otn_transportable_tablespace_tricks.htm
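Once the datafiles have been copied off, remember to return the source tablespace to read write:
SQL> alter tablespace research read write;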
Index becoming invalid
Indexes become invalid whenever a DBA task shifts ROWID values:
1) Table partition maintenance - Alter commands (move, split or truncate partition) will shift ROWID's, making the index invalid and unusable.
2) CTAS maintenance - Table reorganization with "alter table move" or an online table reorganization (using the dbms_redefinition package) will shift ROWIDs, creating unusable indexes.
3) Oracle imports - An Oracle import (imp utility) with the skip_unusable_indexes=y parameter
4) SQL*Loader (sqlldr utility) - Using direct path loads (e.g. skip_index_maintenance) will cause invalid and unusable indexes.
ALTER INDEX myindex REBUILD ONLINE TABLESPACE newtablespace;
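To find the affected indexes before rebuilding (standard dictionary views; partitioned indexes report their status per partition):
SQL> select index_name, status from user_indexes where status = 'UNUSABLE';
SQL> select index_name, partition_name, status from user_ind_partitions where status = 'UNUSABLE';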
Thursday, February 28, 2008
SQL Execution Plan
The optimizer creates the execution plan according to the statistics available on the database objects.
Different types of internal joins
- nested loop joins (Both Small tables & Optimizer mode: First Rows)
- hash joins (Big Tables, Join type: Equality, Optimizer mode: All Rows)
- sort merge join (Big Tables, Join type: Inequality, Optimizer mode: All Rows)
- star transformation joins
In the plan, the steps at the deepest level are executed first, and their results are returned to the step at the next level up to work on.
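To generate and display a plan (standard EXPLAIN PLAN / DBMS_XPLAN approach; the query is illustrative):
SQL> EXPLAIN PLAN FOR select * from emp where deptno = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);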
DBMS_STATS for CBO - replaced ANALYZE
Nice article explaining the transition to DBMS_STATS for CBO from old ANALYZE
http://www.dba-oracle.com/oracle_tips_dbms_stats1.htm
DBMS_STATS.GATHER_TABLE_STATS
exec dbms_stats.gather_table_stats( -
ownname => 'PERFSTAT', -
tabname => 'STATS$SNAPSHOT', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size skewonly', -
cascade => true, -
degree => 7 -
)
Options: (for gather_schema_stats)
gather - reanalyzes the whole schema
gather empty - only analyzes tables that have no existing statistics
gather stale - only reanalyzes tables with more than 10% modifications (inserts, updates, deletes)
gather auto - reanalyzes objects which currently have no statistics and objects with stale statistics
method_opt:
'for all indexed columns size skewonly' - examines the distribution of values for every column within every index for histogram
'for all columns size repeat' - only reanalyze indexes with existing histograms
'for columns size auto' - used when invoked using alter table xxx monitoring; data distribution and the access manner / load on column
DBMS_STATS.GATHER_SYSTEM_STATS
execute dbms_stats.gather_system_stats('Start');
-- one hour delay during high workload
execute dbms_stats.gather_system_stats('Stop');
The statistics collected are (sys.aux_stats$)
No Workload (NW) stats:
CPUSPEEDNW - CPU speed
IOSEEKTIM - The I/O seek time in milliseconds
IOTFRSPEED - I/O transfer speed in milliseconds
Workload-related stats:
SREADTIM - Single block read time in milliseconds
MREADTIM - Multiblock read time in ms
CPUSPEED - CPU speed
MBRC - Average blocks read per multiblock read (see db_file_multiblock_read_count)
MAXTHR - Maximum I/O throughput (for OPQ only)
SLAVETHR - OPQ Factotum (slave) throughput (OPQ only)
Statistics collected during different workloads may be stored and used for influencing the optimizer plan as per load times
/* e.g. activate the DAY statistics each day at 7:00 am */
DECLARE
I NUMBER;
BEGIN
DBMS_JOB.SUBMIT (I, 'DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => ''mystats'', statown => ''SYSTEM'', statid => ''DAY'');', trunc(sysdate) + 1 + 7/24, 'sysdate + 1');
END;
/
/* e.g. activate the NIGHT statistics each day at 9:00 pm */
DECLARE
I NUMBER;
BEGIN
DBMS_JOB.SUBMIT (I, 'DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => ''mystats'', statown => ''SYSTEM'', statid => ''NIGHT'');', trunc(sysdate) + 1 + 21/24, 'sysdate + 1');
END;
/
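For the jobs above to work, the 'mystats' statistics table must first be created and populated; a sketch of capturing the DAY workload (interval is in minutes):
execute DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stattab => 'mystats');
execute DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60, stattab => 'mystats', statid => 'DAY');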
*** ********************************************************
*** Initialize the OLTP System Statistics for the CBO
*** ********************************************************
1. Delete any existing system statistics from dictionary:
SQL> execute DBMS_STATS.DELETE_SYSTEM_STATS;
PL/SQL procedure successfully completed.
2. Transfer the OLTP statistics from OLTP_STATS table to the dictionary tables:
SQL> execute DBMS_STATS.IMPORT_SYSTEM_STATS(stattab => 'OLTP_stats', statid => 'OLTP', statown => 'SYS');
PL/SQL procedure successfully completed.
3. All system statistics are now visible in the data dictionary table:
SQL> select * from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
-------------------- ------------------ ---------- --------------
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 08-09-2001 16:40
SYSSTATS_INFO DSTOP 08-09-2001 16:42
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN SREADTIM 7.581
SYSSTATS_MAIN MREADTIM 56.842
SYSSTATS_MAIN CPUSPEED 117
SYSSTATS_MAIN MBRC 9
where
=> sreadtim : wait time to read single block, in milliseconds
=> mreadtim : wait time to read a multiblock, in milliseconds
=> cpuspeed : cycles per second, in millions
-------------------------------------------------------------------
SQL> exec runStats_pkg.rs_start;
PL/SQL procedure successfully completed.
SQL> select * from t1 where owner='PUBLIC';
OWNER OBJECT_NAME
------------------------------ ------------------------------
PUBLIC DUAL
PUBLIC SYSTEM_PRIVILEGE_MAP
PUBLIC TABLE_PRIVILEGE_MAP
PUBLIC STMT_AUDIT_OPTION_MAP
PUBLIC MAP_OBJECT
PUBLIC DBMS_STANDARD
PUBLIC V$MAP_LIBRARY
...
2765 rows selected.
SQL> exec runStats_pkg.rs_middle;
PL/SQL procedure successfully completed.
SQL> select /*+index(t2 inx_t2) */ * from t2 where owner='PUBLIC';
OWNER OBJECT_NAME
------------------------------ ------------------------------
PUBLIC DUAL
PUBLIC SYSTEM_PRIVILEGE_MAP
PUBLIC TABLE_PRIVILEGE_MAP
PUBLIC STMT_AUDIT_OPTION_MAP
PUBLIC MAP_OBJECT
PUBLIC DBMS_STANDARD
PUBLIC V$MAP_LIBRARY
...
2765 rows selected.
SQL> exec runStats_pkg.rs_stop;
Run1 ran in 304 hsecs
Run2 ran in 331 hsecs
run 1 ran in 91,84% of the time
Name Run1 Run2 Diff
STAT...table scans (short tabl 1 0 -1
LATCH.resmgr:schema config 0 1 1
LATCH.job_queue_processes para 0 1 1
LATCH.undo global data 7 6 -1
LATCH.resmgr:actses active lis 0 1 1
STAT...shared hash latch upgra 0 1 1
STAT...session cursor cache co 1 0 -1
STAT...heap block compress 6 7 1
STAT...index scans kdiixs1 0 1 1
LATCH.In memory undo latch 0 2 2
LATCH.library cache lock 4 6 2
LATCH.redo allocation 13 16 3
STAT...CPU used by this sessio 15 12 -3
STAT...calls to get snapshot s 5 2 -3
STAT...active txn count during 4 8 4
STAT...cleanout - number of kt 4 8 4
STAT...redo entries 9 13 4
STAT...calls to kcmgcs 4 8 4
LATCH.messages 18 22 4
STAT...consistent gets - exami 4 9 5
LATCH.library cache pin 38 44 6
STAT...recursive cpu usage 13 7 -6
LATCH.library cache 48 55 7
LATCH.active service list 2 10 8
STAT...db block gets 17 29 12
STAT...db block gets from cach 17 29 12
STAT...consistent changes 17 29 12
STAT...db block changes 26 42 16
STAT...bytes received via SQL* 3,188 3,209 21
STAT...DB time 36 15 -21
STAT...CPU used when call star 34 12 -22
STAT...Elapsed Time 311 335 24
LATCH.simulator hash latch 11 44 33
LATCH.simulator lru latch 11 44 33
LATCH.JS queue state obj latch 0 36 36
LATCH.enqueues 2 78 76
LATCH.enqueue hash chains 2 79 77
STAT...no work - consistent re 206 397 191
STAT...consistent gets 218 415 197
STAT...consistent gets from ca 218 415 197
STAT...table scan blocks gotte 206 0 -206
STAT...session logical reads 235 444 209
STAT...undo change vector size 2,132 2,420 288
LATCH.cache buffers chains 550 933 383
STAT...buffer is not pinned co 0 576 576
STAT...redo size 2,780 3,400 620
STAT...table fetch by rowid 0 2,765 2,765
STAT...buffer is pinned count 0 5,139 5,139
STAT...table scan rows gotten 51,831 0 -51,831
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
1,290 1,962 672 65.75%
PL/SQL procedure successfully completed.
Wednesday, February 27, 2008
ASH - Active Session History
Unlike AWR, ASH collects session performance statistics in memory every second.
Can be queried from V$ACTIVE_SESSION_HISTORY view
MMON flushes the ASH buffer to AWR DBA_HIST_ACTIVE_SESS_HISTORY
To find out how many sessions waited for some event,
select session_id||','||session_serial# SID, n.name, wait_time, time_waited
from v$active_session_history a, v$event_name n
where n.event# = a.event#;
- CURRENT_OBJ#, can then be joined with DBA_OBJECTS to get the segments in question
- SQL_ID, can be joined with V$SQL to get SQL Statement that caused the wait
- CLIENT_ID, which was set for any client application session using DBMS_SESSION.SET_IDENTIFIER may be used to identify the client.
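For example, to see which segments sessions were waiting on, join on CURRENT_OBJ# as described above (a sketch):
select o.owner, o.object_name, count(*) waits
from v$active_session_history a, dba_objects o
where a.current_obj# = o.object_id
group by o.owner, o.object_name
order by 3 desc;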
ADDM - Automatic Data Diagnostics Monitor
ADDM's goal is to reduce the value of DB time.
Prerequisite: exec dbms_advisor.set_default_task_parameter('ADDM', 'DBIO_EXPECTED', 20000); (the response time expected by Oracle from the disk I/O system; defaults to 10 milliseconds)
select sum(value) "DB time" from v$sess_time_model
where stat_name='DB time';
Analysis findings are ranked and categorised into
- PROBLEM
- SYMPTOM
- INFORMATION
Set pages 1000
Set lines 75
Select a.execution_end, b.type, b.impact, d.rank, d.type,
'Message : '||b.message MESSAGE,
'Command To correct: '||c.command COMMAND,
'Action Message : '||c.message ACTION_MESSAGE
From dba_advisor_tasks a, dba_advisor_findings b,
Dba_advisor_actions c, dba_advisor_recommendations d
Where a.owner=b.owner and a.task_id=b.task_id
And b.task_id=d.task_id and b.finding_id=d.finding_id
And a.task_id=c.task_id and d.rec_id=c.rec_Id
And a.task_name like 'ADDM%' and a.status='COMPLETED'
Order by b.impact, d.rank;
@addmrpt.sql - for detailed report
Another article - http://www.rampant-books.com/art_floss_addm.htm
AWR - Automatic Workload Repository reports
statistics_level parameter should be TYPICAL or ALL - for Automatic collection
BASIC - for manual snapshots
dba_hist_snapshot - for existing snapshots
dba_hist_baseline - for viewing baseline settings
awrrpt.sql - for Statspack style report (either text or html format)
awrrpti.sql - for single instance in RAC
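These report scripts live under $ORACLE_HOME/rdbms/admin and are run from SQL*Plus, e.g.:
SQL> @?/rdbms/admin/awrrpt.sql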
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
low_snap_id => 22,
high_snap_id => 32);
END;
BEGIN
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
retention => 43200, -- Minutes (= 30 Days). Current value retained if NULL.
interval => 30); -- Minutes. Current value retained if NULL.
END;
Managing baselines:
BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_baseline (
baseline_name => 'batch baseline',
cascade => FALSE); -- Deletes associated snapshots if TRUE.
END;
This example creates a baseline (named 'batch baseline') between snapshots 105 and 107 for the local database:
EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => 105,
end_snap_id => 107,
baseline_name => 'batch baseline');
The following views are affected by AWR snapshots
v$active_session_history - Displays the active session history (ASH) sampled every second.
v$metric - Displays metric information.
v$metricname - Displays the metrics associated with each metric group.
v$metric_history - Displays historical metrics.
v$metricgroup - Displays all metrics groups.
dba_hist_active_sess_history - Displays the history contents of the active session history.
dba_hist_baseline - Displays baseline information.
dba_hist_database_instance - Displays database environment information.
dba_hist_snapshot - Displays snapshot information.
dba_hist_sql_plan - Displays SQL execution plans.
dba_hist_wr_control - Displays AWR settings.
Another interesting article on AWR, Time Model, Active Session History, Baseline, Snapshots is in http://www.rampant-books.com/art_nanda_awr.htm
Statspack - How to
SQL> CONN perfstat/perfstat
Connected.
SQL> EXEC STATSPACK.snap;
PL/SQL procedure successfully completed.
@spauto.sql - for taking snapshots automatically every hour
@sppurge.sql - for deleting unwanted snapshots
@spreport - for report generation
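The snapshot level can also be raised when more detail is wanted. A small sketch, assuming the PERFSTAT schema is installed (level 7 adds segment-level statistics):
SQL> EXEC STATSPACK.snap(i_snap_level => 7);                       -- one-off snapshot at level 7
SQL> EXEC STATSPACK.modify_statspack_parameter(i_snap_level => 7); -- make level 7 the default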
Data Pump - Export Import
Found this spool useful as a one-stop reference for an actual export/import done using the Data Pump utility
http://www.databasejournal.com/img/jsc_DataPump_Listing_1.html#List0101
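The command-line clients are expdp/impdp, but the same job can be driven from PL/SQL. A minimal server-side sketch using the DBMS_DATAPUMP API - the schema SCOTT, the dump file name and the DATA_PUMP_DIR directory object are assumptions for illustration:
DECLARE
  l_handle NUMBER;
BEGIN
  -- open a schema-mode export job
  l_handle := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  -- dump file name and directory object are hypothetical
  DBMS_DATAPUMP.add_file(handle => l_handle, filename => 'scott.dmp',
                         directory => 'DATA_PUMP_DIR');
  -- restrict the job to one schema (SCOTT is an example)
  DBMS_DATAPUMP.metadata_filter(handle => l_handle, name => 'SCHEMA_EXPR',
                                value => '= ''SCOTT''');
  DBMS_DATAPUMP.start_job(l_handle);
  DBMS_DATAPUMP.detach(l_handle);
END;
/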
Tuesday, February 26, 2008
X11 X-Windows Forwarding through SSH
Xming is a display server which can be used to display X Window applications forwarded through SSH.
Assume the Unix server has the SSH service enabled. To bring up a Unix display window on your local MS Windows PC, you need two pieces of software: an SSH client (PuTTY) and an X11 display server (Xming).
Download & install both from
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
http://www.straightrunning.com/XmingNotes/#head-121
Xming setting
Create a Windows shortcut:
"C:\Program Files\Xming\Xming.exe" :0 -clipboard -multiwindow
Execute it.
Putty setting
Connection > SSH > X11 > check "Enable X11 forwarding" > X display location: localhost:0.0
Log in to the Unix server.
$ export DISPLAY=localhost:10.0
To check whether X11 forwarding is functioning
$ /usr/openwin/bin/xclock
Monday, February 25, 2008
Buffer Cache Hit Ratio
SQL> column hit_ratio format 990.99
SQL> select (1-(sum(decode(name,'physical reads',value,0)) /
(sum(decode(name,'db block gets',value,0)) +
sum(decode(name,'consistent gets',value,0))))) as hit_ratio
from v$sysstat;
HIT_RATIO
---------
0.88
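The same ratio can also be read per buffer pool, which is useful when KEEP/RECYCLE pools are configured. A small sketch - v$buffer_pool_statistics shows one row per configured pool, and the query assumes each pool has seen some logical I/O (otherwise the denominator is zero):
select name,
       round(1 - (physical_reads / (db_block_gets + consistent_gets)), 4) as hit_ratio
from   v$buffer_pool_statistics;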
DBA - Career
Read a white paper on DBA as a career. Found it concise and interesting
http://www.peasland.net/Grow%20That%20DBA%20Career.zip
Tuesday, February 19, 2008
Paging vs. Swapping
Paging refers to paging out individual pages of memory. Swapping is when an entire process is swapped out.
Paging is normal. Swapping happens when the resource load is heavy and the entire process is written out to disk. If you're swapping a lot, you might want to look into things like adding memory.
See article
Friday, February 15, 2008
AUTOTRACE - Execution Plan for SQL statement
Note: Using ANALYZE for optimizer statistics is obsolete. Use DBMS_STATS for the CBO - Cost Based Optimizer
STEPS FOR ENABLING AUTOTRACE
SQL> @utlxplan
SQL> create public synonym plan_table for plan_table
SQL> grant all on plan_table to public
exit sqlplus and cd $ORACLE_HOME/sqlplus/admin
log into sqlplus as SYS
SQL> @plustrce
SQL> grant plustrace to public
SQL> set autotrace traceonly explain
SQL> select * from np_trial.test1 where name like '%AHE%';
Execution Plan
----------------------------------------------------------
Plan hash value: 4122059633
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5075 | 55825 | 262 (1)| 00:00:04 |
|* 1 | TABLE ACCESS FULL| TEST1 | 5075 | 55825 | 262 (1)| 00:00:04 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("NAME" LIKE '%AHE%')
SQL> set autotrace OFF
WITHOUT AUTOTRACE
SQL> EXPLAIN PLAN FOR select * from np_trial.test1 where name like '%AHE%';
SQL> @?/rdbms/admin/utlxpls
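Equivalently, on 9iR2 and later the same formatted plan can be pulled straight from the DBMS_XPLAN package:
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);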
##or - use the following manual query of plan_table##
COLUMN "SQL" FORMAT a56
SELECT lpad(' ',2*level)||operation||''
||options ||' '||object_name||
decode(OBJECT_TYPE, '', '',
'('||object_type||')') "SQL",
cost "Cost", cardinality "Num Rows"
FROM plan_table WHERE plan_id = (select max(plan_id) from plan_table)
CONNECT BY prior id = parent_id
START WITH id = 0;
Wait Events Drilldown to user
Found this code in Natalka Roshak's blog:
SQL> Select wait_class, sum(time_waited), sum(time_waited)/sum(total_waits)
Sum_Waits
From v$system_wait_class
Group by wait_class
Order by 3 desc;
WAIT_CLASS SUM(TIME_WAITED) SUM_WAITS
------------------------- ---------------- ----------
Idle 206281236 341.202664
Configuration 3226 65.8367347
Concurrency 1096 7.55862069
Other 10177 7.48859455
Administrative 11 5.5
Commit 834 5.27848101
User I/O 14777 .937984004
System I/O 23950 .243530429
Application 7 .02991453
Network 1 .000126279
SQL> Select a.event, a.total_waits, a.time_waited, a.average_wait
From v$system_event a, v$event_name b, v$system_wait_class c
Where a.event_id=b.event_id
And b.wait_class#=c.wait_class#
And c.wait_class in ('Application','Concurrency')
order by average_wait desc;
EVENT TOTAL_WAITS TIME_WAITED AVERAGE_WAIT
------------------------------ ----------- ----------- ------------
enq: TX - row lock contention 106 21011 30
library cache load lock 20 186 9.31
os thread startup 114 897 7.87
latch: library cache 3 9 2.98
latch: shared pool 5 4 .76
library cache pin 1 0 .13
SQL*Net break/reset to client 162 6 .04
latch: row cache objects 1 0 .04
enq: RO - fast object reuse 72 1 .01
cursor: mutex S 1 0 0
Since 'enq: TX - row lock contention' shows the highest average wait, drill down further to the session level:
SQL> select a.sid, a.event, a.total_waits, a.time_waited, a.average_wait
from v$session_event a, v$session b
where time_waited > 0
and a.sid=b.sid
and b.username is not NULL
and a.event='enq: TX - row lock contention';
To see whether someone is still blocking the session
SQL> Select blocking_session, sid, serial#, wait_class,
seconds_in_wait
From v$session
where blocking_session is not NULL
order by blocking_session;
To identify who the user is
SQL> select sid, serial#, username, osuser, machine from v$session
where username is not NULL;
Another script, for finding which session is blocking which:
http://www.remote-dba.cc/oracle_tips_hung.htm
SQL> select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 ;
Thursday, February 14, 2008
Oracle 10g to 9i - Data migration from higher version to lower
Use the following link:
http://dbataj.blogspot.com/2007/04/imp-00010.html
Also check the endian format of the source and target platforms:
SQL> SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
2 FROM V$TRANSPORTABLE_PLATFORM;
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ------------------------------ --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
3 HP-UX (64-bit) Big
4 HP-UX IA (64-bit) Big
5 HP Tru64 UNIX Little
6 AIX-Based Systems (64-bit) Big
7 Microsoft Windows NT Little
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
Operating System 64 bits or 32 bits
Refer to this link
http://oraforms.blogspot.com/2007/08/64-bit-or-32-bit-cpu-for-unix.html
Wednesday, February 6, 2008
Data Buffer Cache Advice
SQL> alter system set db_cache_advice =on;
After a sufficient period:
column c1 heading 'Cache Size (m)' format 999,999,999,999
column c2 heading 'Buffers' format 999,999,999
column c3 heading 'Estd PhysRead Factor' format 999.90
column c4 heading 'Estd Phys Reads' format 999,999,999
select
   size_for_estimate          c1,
   buffers_for_estimate       c2,
   estd_physical_read_factor  c3,
   estd_physical_reads        c4
from
   v$db_cache_advice
where
   name = 'DEFAULT'
and
   block_size = (SELECT value FROM V$PARAMETER WHERE name = 'db_block_size')
and
   advice_status = 'ON';
Tuesday, February 5, 2008
High Water Mark of a Table (HWM)
Use the DBMS_SPACE package and calculate HWM = TOTAL_BLOCKS - UNUSED_BLOCKS - 1 (see the sketch after the queries below)
SELECT BLOCKS
FROM DBA_SEGMENTS
WHERE OWNER=UPPER(owner) AND SEGMENT_NAME = UPPER(table);
ANALYZE TABLE owner.table ESTIMATE STATISTICS;
SELECT EMPTY_BLOCKS
FROM DBA_TABLES
WHERE OWNER=UPPER(owner) AND TABLE_NAME = UPPER(table);
HWM = (query result 1) - (query result 2) - 1
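A minimal sketch of the DBMS_SPACE method mentioned above - the owner SCOTT and table EMP are hypothetical placeholders:
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_file_id        NUMBER;
  l_block_id       NUMBER;
  l_last_used      NUMBER;
BEGIN
  DBMS_SPACE.unused_space(
    segment_owner             => 'SCOTT',   -- hypothetical owner
    segment_name              => 'EMP',     -- hypothetical table
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_used);
  DBMS_OUTPUT.put_line('HWM (blocks) = ' || (l_total_blocks - l_unused_blocks - 1));
END;
/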
Controlfiles - Multiplexing
When the instance is running, use the following command to change the controlfile locations, then bounce the database. Before copying one control file to another location (for multiplexing), the database should be shut down.
SQL> alter system set control_files='/oracle/data02/SDW/ora_control02.ctl' scope=spfile;
System altered.
SQL> startup force
ORACLE instance started.
When more than one file was mentioned inside a single quoted string, this error was encountered:
SQL> alter system set control_files='/oracle/data02/SDW/ora_control02.ctl,/oracle/data01/SDW/ora_control01.ctl' scope=spfile;
ORA-00205: error in identifying control file, check alert log for more info
The cause was a wrong statement - each file name must be quoted separately, with the comma outside the single quotes:
SQL> alter system set control_files='/oracle/data02/SDW/ora_control02.ctl','/oracle/data01/SDW/ora_control01.ctl' scope=spfile;
If a recreation of a lost control file is ever required, the following details are essential:
SELECT NAME FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE;
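For reference, a hedged skeleton of the recreation statement built from those details - every name, size and MAX value below is hypothetical and must come from your own database (the easiest way to generate the real statement is ALTER DATABASE BACKUP CONTROLFILE TO TRACE):
CREATE CONTROLFILE REUSE DATABASE "SDW" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
    GROUP 1 '/oracle/data01/SDW/redo01.log' SIZE 50M,   -- hypothetical
    GROUP 2 '/oracle/data01/SDW/redo02.log' SIZE 50M    -- hypothetical
DATAFILE
    '/oracle/data01/SDW/system01.dbf',                  -- hypothetical
    '/oracle/data01/SDW/users01.dbf'                    -- hypothetical
CHARACTER SET WE8ISO8859P1;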
Wednesday, January 30, 2008
Trace analyzer - reading made easy; compared to tkprof
The following article on how to use TKPROF is useful:
http://www.oracleutilities.com/OSUtil/tkprof.html
Also have a look at the article on Trace Analyzer:
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
DBMS_SESSION.SET_SQL_TRACE(TRUE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,true);
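For wait and bind detail beyond plain SQL_TRACE, the well-known 10046 event can be set in the session (level 8 adds waits, level 12 adds waits and binds):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the statements to be traced ...
ALTER SESSION SET EVENTS '10046 trace name context off';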
select c.value || '\' || lower(d.value)|| '_ora_' ||
to_char(a.spid, 'fm00000')|| '.trc' "TRACE FILE"
from v$process a, v$session b, v$parameter c, v$parameter d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
and d.name = 'db_name';
TRACE FILE
---------------------------------------------------------------
C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc
user_dump_dest is where the trace files are generated
To disable tracing
ALTER SESSION SET SQL_TRACE = FALSE;
DBMS_SESSION.SET_SQL_TRACE(FALSE);
DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid,serial#,false);
tkprof ORCL92_ora_3043.trc output.txt insert=tkprof.sql record=Allsql.sql
TRCSESS - in 10g
For analyzing and reporting on more than one trace file at a go.
Prerequisite: dbms_session.set_identifier('client_id') and dbms_application_info.set_module('module', 'action')
*** ACTION NAME:(DAVE_ACTION) 2006-01-10 20:06:22.396
*** MODULE NAME:(DAVE_MODULE) 2006-01-10 20:06:22.396
*** SERVICE NAME:(SYS$USERS) 2006-01-10 20:06:22.396
*** SESSION ID:(148.49932) 2006-01-10 20:06:22.386
*** CLIENT ID:(HERE IS THE CLIENTID) 2006-01-11 07:57:45.135
The following will create a single trace report from all trace files in the cur dir.
$ trcsess output=all.trc module=SQL*Plus *.trc
ALTER TABLE ... MOVE - Negative result
We know that ALTER TABLE ... MOVE compacts and repacks the data in a table that has seen too many DELETE operations. The following link shows instances where the impact is the opposite, and the reason why.
http://www.freelists.org/archives/oracle-l/07-2004/msg01670.html
Tuesday, January 29, 2008
crontab files location
The directory where the cron daemon looks for crontab files:
/var/spool/cron/crontabs/{username}
Note: Do not edit directly. Needs root login for editing. Always use crontab -e
/usr/lib/cron/cron.allow - if this file exists, a username must be listed in it for that user to use crontab
/usr/lib/cron/cron.deny - if this file exists, any username listed in it is blocked from using crontab
The following error condition encountered. How to correct?
-bash-3.00$ crontab -e
sh: /bin/vics: not found
The crontab file was not changed.
Set the following environment variable to vi (if required, provide the complete path)
export VISUAL='vi'
The following also gives the same effect.
export EDITOR='vi'
Finding Unix OS Block Size
Block Size for different Operating Systems
512 bytes - Solaris, AIX, Windows NT/2000,
Linux, Irix, DG/UX OpenVMS, NetWare, UnixWare, DYNIX/ptx
1024 bytes - HP-UX, Tru64 Unix
2048 bytes - SCO Unix, Reliant Unix
4096 bytes - MVS, MPE/ix
The following commands give the block size
select max(lebsz) from sys.x$kccle;
(Note: The log block size is the unit for the setting of the log_checkpoint_interval, _log_io_size and max_dump_file_size parameters)
$ fstyp -v /dev/dsk/c1d0s6 | grep devbsize
$ df /opt/oracle
/opt/oracle (/dev/dsk/c1d0s6 ):42448742 blocks 6413020 files
DO NOT USE THE FOLLOWING:
df -g | grep 'block size'
(Note: the available & used free blocks given by the command df -g are based on the OS block size -see list above ; also use the first 2 methods to find- not the BLOCK SIZE value returned by df -g )
Monday, January 28, 2008
Finding Unix RAM size
RAM display command for different Dialects of UNIX
DEC-UNIX uerf -r 300 | grep -i mem
Solaris prtconf | grep -i mem
AIX lsdev -C | grep mem
Linux free
HP/UX swapinfo -tm
PGA_AGGREGATE_TARGET
Useful info about PGA sizing from http://www.dba-oracle.com/oracle_tips_sorts.htm
Oracle recommends that the value of pga_aggregate_target be set to the amount of remaining memory (less a 10 percent overhead for other UNIX tasks) on the UNIX server after the instance has been started.
The total work area cannot exceed 200 megabytes of RAM because of the default setting for _pga_max_size.
No RAM sort may use more than 5% of pga_aggregate_target or _pga_max_size, whichever is smaller. This means that no task may use more than 10 megabytes for sorting or hash joins.
The following settings would increase the default sizes for large sorts and hash joins.
pga_aggregate_target = 1000m
_pga_max_size = 1000m
_smm_px_max_size = 333m
With these hidden parameters set we see a 5x size increase for parallel queries and sorts:
A RAM sort or hash join may now have up to 50 megabytes (5% of pga_aggregate_target), a 5x increase.
Parallel queries may now have up to 330 megabytes of RAM (30% of pga_aggregate_target), such that a DEGREE=4 parallel query would have 83 megabytes (333 MB / 4).
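Before resorting to hidden parameters, it is worth checking what Oracle itself predicts for other target sizes; the PGA advisory (populated whenever pga_aggregate_target is set) can be queried like this:
select round(pga_target_for_estimate/1024/1024) as target_mb,
       estd_pga_cache_hit_percentage           as est_hit_pct,
       estd_overalloc_count
from   v$pga_target_advice
order  by pga_target_for_estimate;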
ORA-1653: unable to extend table
I was looking at my alert log in the 'bdump' directory. The following error is reported many times.
ORA-1653: unable to extend table SYS.WRI$_ADV_PARAMETERS by 128 in tablespace SYSAUX
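The error means the SYSAUX tablespace cannot allocate another extent. A quick check of free space, and one possible fix - the datafile path below is hypothetical and should be taken from dba_data_files:
select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
from   dba_free_space
where  tablespace_name = 'SYSAUX'
group  by tablespace_name;

-- one possible fix (datafile name is hypothetical)
alter database datafile '/oracle/data01/SDW/sysaux01.dbf'
  autoextend on next 100m maxsize 4g;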
Joint Query - Involving tables from tablespaces of different Block Sizes
There was a discussion about the behaviour of SQL queries involving tablespaces of different block sizes.
We know, there are different data buffer cache areas in SGA for each of the non-default blocksize tablespaces. So the data fetched from different tablespaces are loaded into their corresponding buffer caches.
Does a join query have any influence on this? i.e. because we require different columns from tablespaces of different block sizes, does all the data get loaded into only the default buffer cache?
The answer is NO, because row data are staged in the buffer caches only to be further processed elsewhere,
i.e. in the SQL work area of the PGA's private SQL area.
(needs more update...)
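For context, this is how a tablespace with a non-default block size and its dedicated buffer cache are created - the cache parameter must be set before the tablespace can be created; the sizes and file name are hypothetical:
ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE ts_16k
  DATAFILE '/oracle/data01/SDW/ts16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;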
Sunday, January 27, 2008
Why - this blog?
It is my personal observation that most of the time we ourselves do not recollect hard-earned experience (gained through lots of searching, research and failures/successes). It is painful to have to repeat the same tiresome process just because we couldn't remember what we did last time to successfully come out of a similar situation.
Also, it is embarrassing when someone asks about the same (how I handled it last time) and I am not able to narrate it to them. Still worse - they might think I am not willing to share.
Having notes in files on my personal computer again has not helped all the time.
This blog is a trial to check how I fare in achieving what I want - to SHARE - with others & with myself.