Wednesday, January 30, 2008

Trace analyzer - reading made easy; compared to tkprof

The following article on how to use TKPROF is useful:
http://www.oracleutilities.com/OSUtil/tkprof.html
Also have a look at this article covering SQL trace, TRCSESS and TKPROF in 10g:

http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php

-- Instance-wide: enable timed statistics
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
-- Current session
ALTER SESSION SET SQL_TRACE = TRUE;
EXEC DBMS_SESSION.SET_SQL_TRACE(TRUE);
-- Another session, identified by its SID and SERIAL#
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, true);
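
To find the SID and SERIAL# to pass to SET_SQL_TRACE_IN_SESSION, a quick lookup against V$SESSION helps (SCOTT is just a placeholder username):

SELECT sid, serial#, username, status
  FROM v$session
 WHERE username = 'SCOTT';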


-- Returns the current session's trace file name (Windows-style path shown;
-- on Unix the separator is '/')
select c.value || '\' || lower(d.value) || '_ora_' ||
       to_char(a.spid, 'fm00000') || '.trc' "TRACE FILE"
from   v$process a, v$session b, v$parameter c, v$parameter d
where  a.addr = b.paddr
and    b.audsid = userenv('sessionid')
and    c.name = 'user_dump_dest'
and    d.name = 'db_name';


TRACE FILE
---------------------------------------------------------------
C:\oracle9i\admin\ORCL92\udump\ORCL92_ora_03164.trc

The user_dump_dest parameter points to the directory where the trace files are generated.

To disable tracing

ALTER SESSION SET SQL_TRACE = FALSE;
EXEC DBMS_SESSION.SET_SQL_TRACE(FALSE);
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(sid, serial#, false);


tkprof ORCL92_ora_3043.trc output.txt insert=tkprof.sql record=Allsql.sql
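
To focus on the most expensive statements, TKPROF can also sort the report; a sketch using the standard sort keys (fchela = elapsed fetch time, exeela = elapsed execute time) and printing only the top ten statements:

tkprof ORCL92_ora_3043.trc worst.txt sort=fchela,exeela print=10 sys=no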

TRCSESS - in 10g
TRCSESS consolidates more than one trace file into a single output, which can then be analysed and reported on in one go.

Prerequisite: dbms_session.set_identifier('client_id') and dbms_application_info.set_module('module', 'action') must have been called, so that the trace files carry tags such as:
*** ACTION NAME:(DAVE_ACTION) 2006-01-10 20:06:22.396
*** MODULE NAME:(DAVE_MODULE) 2006-01-10 20:06:22.396
*** SERVICE NAME:(SYS$USERS) 2006-01-10 20:06:22.396
*** SESSION ID:(148.49932) 2006-01-10 20:06:22.386
*** CLIENT ID:(HERE IS THE CLIENTID) 2006-01-11 07:57:45.135
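
A minimal sketch of tagging a session before tracing, reusing the tag values from the excerpt above, so that trcsess can later filter on them:

EXEC DBMS_SESSION.SET_IDENTIFIER('HERE IS THE CLIENTID');
EXEC DBMS_APPLICATION_INFO.SET_MODULE('DAVE_MODULE', 'DAVE_ACTION');
ALTER SESSION SET SQL_TRACE = TRUE;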

The following creates a single consolidated trace file from all trace files in the current directory.

$ trcsess output=all.trc module="SQL*Plus" *.trc
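
The consolidated file can then be fed to TKPROF as usual (the report name is just an example):

$ tkprof all.trc all_report.txt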

Query the Table Creation Script

SELECT DBMS_METADATA.GET_DDL('TABLE','YOUR_TABLE_NAME') FROM DUAL;
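
In SQL*Plus the returned CLOB is truncated at the default LONG display size, so raise it first; a minimal sketch:

SET LONG 1000000
SELECT DBMS_METADATA.GET_DDL('TABLE','YOUR_TABLE_NAME') FROM DUAL;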

ALTER TABLE ... MOVE - Negative result

We know that ALTER TABLE ... MOVE compacts and repacks the data in a table that has seen many DELETE operations. The following link shows cases where the effect was the opposite, and explains why.

http://www.freelists.org/archives/oracle-l/07-2004/msg01670.html
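
For completeness, a minimal sketch of the operation itself (table and index names are hypothetical); note that a move renders the table's indexes UNUSABLE, so they must be rebuilt afterwards:

ALTER TABLE my_table MOVE;
ALTER INDEX my_table_pk REBUILD;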

Tuesday, January 29, 2008

crontab files location

The cron daemon looks for per-user crontab files in:
/var/spool/cron/crontabs/{username}
Note: Do not edit these files directly (root access is needed anyway). Always use crontab -e.
/usr/lib/cron/cron.allow - if this file exists, a user must be listed in it to be able to use crontab
/usr/lib/cron/cron.deny - if this file exists, users listed in it are blocked from using crontab

See the man page for details.
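
For reference, a sketch of a crontab entry (the script path is hypothetical); the five fields are minute, hour, day of month, month and day of week:

0 2 * * * /home/oracle/scripts/nightly_job.sh > /tmp/nightly_job.log 2>&1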

The following error condition was encountered. How can it be corrected?

-bash-3.00$ crontab -e
sh: /bin/vics: not found
The crontab file was not changed.

Set the following environment variable to vi (provide the complete path if required):
export VISUAL='vi'

The following has the same effect.
export EDITOR='vi'

Finding Unix OS Block Size

Block size for different operating systems:
512 bytes  - Solaris, AIX, Windows NT/2000, Linux, Irix, DG/UX, OpenVMS, NetWare, UnixWare, DYNIX/ptx
1024 bytes - HP-UX, Tru64 UNIX
2048 bytes - SCO UNIX, Reliant UNIX
4096 bytes - MVS, MPE/iX


The following commands give the block size

-- Redo log block size, which equals the OS block size on most platforms (X$ tables are visible only to SYS)
select max(lebsz) from sys.x$kccle;
(Note: the log block size is the unit in which the log_checkpoint_interval, _log_io_size and max_dump_file_size parameters are expressed.)

$ fstyp -v /dev/dsk/c1d0s6  | grep devbsize
$ df /opt/oracle
/opt/oracle (/dev/dsk/c1d0s6 ):42448742 blocks 6413020 files
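
On Linux with ext2/ext3 (an addition to the commands above; the device name is hypothetical), tune2fs can report the filesystem block size:

$ tune2fs -l /dev/sda1 | grep 'Block size'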


DO NOT USE THE FOLLOWING:
df -g | grep 'block size'

(Note: the available and used block counts reported by df -g are expressed in the OS block size - see the list above - but the 'block size' value that df -g itself prints is not the OS block size; use the first two methods instead.)



Monday, January 28, 2008

Finding Unix RAM size

RAM display commands for the different dialects of UNIX:
DEC-UNIX : uerf -r 300 | grep -i mem
Solaris  : prtconf | grep -i mem
AIX      : lsdev -C | grep mem
Linux    : free
HP/UX    : swapinfo -tm
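
On Linux, /proc/meminfo can also be read directly (an alternative to free, not in the original list):

$ grep MemTotal /proc/meminfo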

PGA_AGGREGATE_TARGET

Useful info about PGA sizing from http://www.dba-oracle.com/oracle_tips_sorts.htm

Oracle recommends that the value of pga_aggregate_target be set to the amount of remaining memory (less a 10 percent overhead for other UNIX tasks) on the UNIX server after the instance has been started.


The total work area cannot exceed 200 megabytes of RAM because of the default setting for _pga_max_size.
No RAM sort may use more than 5% of pga_aggregate_target or _pga_max_size, whichever is smaller. This means that no task may use more than 10 megabytes for sorting or hash joins.

The following settings would increase the default sizes for large sorts and hash joins.
pga_aggregate_target = 1000m
_pga_max_size = 1000m
_smm_px_max_size = 333m
With these hidden parameters set we see a 5x size increase for parallel queries and sorts:
A RAM sort or hash join may now have up to 50 megabytes (5% of pga_aggregate_target), a 5x increase.
Parallel queries may now have up to 333 megabytes of RAM (one-third of pga_aggregate_target), such that a DEGREE=4 parallel query would have 83 megabytes (333 meg/4).
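
A sketch for checking how the instance actually uses its PGA target, via V$PGASTAT:

SELECT name, ROUND(value/1024/1024) AS mb
  FROM v$pgastat
 WHERE name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');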

ORA-1653: unable to extend table

I was looking at my alert log in the 'bdump' directory. The following error was reported many times.
ORA-1653: unable to extend table SYS.WRI$_ADV_PARAMETERS by 128 in tablespace SYSAUX
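
The usual remedy is to give SYSAUX room to grow; a sketch with a hypothetical datafile path and size:

ALTER TABLESPACE sysaux ADD DATAFILE '/u01/oradata/orcl/sysaux02.dbf'
  SIZE 500M AUTOEXTEND ON NEXT 100M;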

Join Query - Involving tables from tablespaces of different Block Sizes

There was a discussion about the behaviour of SQL queries involving more than one tablespace, where the tablespaces have different block sizes.

We know there is a separate data buffer cache area in the SGA for each non-default block size, so data fetched from the different tablespaces is loaded into the corresponding buffer caches.
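
As background, a minimal sketch (sizes and names are hypothetical) of how such a setup arises - a non-default block size tablespace needs a matching buffer cache configured first:

ALTER SYSTEM SET db_16k_cache_size = 64M;
CREATE TABLESPACE ts_16k DATAFILE '/u01/oradata/orcl/ts16k01.dbf'
  SIZE 100M BLOCKSIZE 16K;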

Does the join query have any influence on this? i.e. because we require columns from tablespaces of different block sizes, does all the data get loaded into only the default buffer cache?
The answer is NO - row data is staged in the respective buffer caches and is only processed further elsewhere,
i.e. in the SQL work area of the PGA's private SQL area.
(needs more update...)

Sunday, January 27, 2008

Why - this blog?

It is my personal observation that most of the time we ourselves do not recollect hard-earned experience (gained through lots of searching, research and failures/successes). It is painful to have to repeat the same tiresome process just because we couldn't remember what we did last time to come out of a similar situation successfully.

Also, it is embarrassing when someone asks me about it (how I handled it last time) and I am not able to tell them. Still worse - they might think I am not willing to share.

Keeping notes in files on my personal computer has not always helped either.

This blog is a trial to see how I fare in achieving what I want - to SHARE - with others and with myself.