Tuesday, June 10, 2008

CHANGING DATABASE CHARACTER SET

This article describes how one can change the character set of a database.
It should be done with extreme caution, after noting the following
limitations.

The database character set should only be changed if the characters in the
code page of the originating database correspond to the same characters in
the target character set, e.g. if the database was created with the
character set US7ASCII and it is to be updated to WE8ISO8859P1. Since these
two sets use the same encoding for the first 128 characters (values 0-127),
changing the character set from US7ASCII to WE8ISO8859P1 will display every
character in that range identically before and after the change. In
addition, in this particular example, if any characters have been entered
with the 8th bit set, then updating the database character set to the 8-bit
set will allow that 8th bit to be displayed. You must not change the
character set from one encoding scheme to another encoding scheme where the
code pages do not correspond; doing so will completely scramble your
database. In addition, if CASE*Designer diagrams are stored in the
database, this method must not be used. Contact Worldwide Support for
further details.

Before attempting to run any of the scripts below, you must take a full
cold backup of your database. In addition, the procedure must be thoroughly
tested before attempting this on a production instance.

Here's a SQL*Plus script that allows a database's character set to be
changed to a different encoding scheme without having to rebuild the database.
set echo off
set verify off

rem The data dictionary table that records the database
rem character set is sys.props$
rem
rem SQL> describe sys.props$
rem Name                            Null?    Type
rem ------------------------------- -------- ----
rem NAME                            NOT NULL VARCHAR2(30)
rem VALUE$                                   VARCHAR2(2000)
rem COMMENT$                                 VARCHAR2(2000)

rem For example:
rem
rem SQL> column c1 format a30
rem SQL> select name c1, value$ c1 from sys.props$;

rem C1 C1
rem ------------------------------ ------------------------------
rem DICT.BASE 2
rem NLS_LANGUAGE AMERICAN
rem NLS_TERRITORY AMERICA
rem NLS_CURRENCY $
rem NLS_ISO_CURRENCY AMERICA
rem NLS_NUMERIC_CHARACTERS .,
rem NLS_DATE_FORMAT DD-MON-YY
rem NLS_DATE_LANGUAGE AMERICAN
rem NLS_CHARACTERSET WE8DEC
rem NLS_SORT BINARY
rem GLOBAL_DB_NAME NLSV7.WORLD

rem NLS_CHARACTERSET can be changed by updating its value, for example:

rem update sys.props$
rem set value$ = 'WE8ISO8859P1'
rem where name = 'NLS_CHARACTERSET';

rem The database has to be shutdown and restarted before the change
rem becomes effective.

rem It is very important to specify the character set name correctly.
rem IMPORTANT NOTE
rem =============
rem If NLS_CHARACTERSET is updated to an invalid value, it will not then
rem be possible to restart the database once it has been shutdown.
rem To recover, it will be necessary to re-create the database, since it
rem cannot be restarted to correct the invalid NLS_CHARACTERSET entry.

rem The character set name should be in uppercase.
rem The new value is not effective until the database has been shutdown and
rem restarted.
rem
rem A suggested procedure is as follows, and can be done by running this
rem script from SQL*Plus when logged into the SYSTEM account.
rem
rem USAGE : SQL> start ch_db.sql <new_character_set>
rem
rem where <new_character_set> is the desired database character set
rem

Prompt First check that the character set name is valid.

set echo on

select convert('a','&1','us7ascii') from dual;

set echo off

prompt If this select statement returns error ORA-01482, then the
prompt specified character set name is not valid for this installation.
prompt Abort the procedure now with Control-c

prompt To continue, press return
accept ans CHAR

Prompt Check the current value of database character set.

column c1 format a30
select name c1, value$ c1 from sys.props$
where name = 'NLS_CHARACTERSET';

prompt To continue, press return

Prompt Update to new character set

update sys.props$
set value$ = upper('&1')
where name = 'NLS_CHARACTERSET';

set echo off

prompt To continue, press return
accept ans CHAR

Prompt Check the new value of database character set

select name c1, value$ c1 from sys.props$
where name = 'NLS_CHARACTERSET';

Prompt If the value is updated as required, press return to continue and
Prompt then manually type COMMIT; to commit the change. Then shutdown and
Prompt restart the database.
Prompt
Prompt If the value is not updated as required, press return to continue and
Prompt then manually type ROLLBACK; to discard the change.

prompt To continue, press return
accept ans CHAR

Wednesday, June 4, 2008

Setting Listener Password

One of the biggest loopholes an Oracle database installation can have is an Oracle listener without a password. In the client/server days this could perhaps be overlooked, because we knew who our users were; in the web era we often do not, so it has become very important to secure the Oracle listener.

Setting the password for the listener can be done in the following three ways:

1. Editing the listener.ora file and setting the password in it.
2. Using LSNRCTL utility.
3. Through Oracle Graphical tools such as Net Manager, Oracle Enterprise Manager and so on.

In this blog entry I will concentrate mainly on the first two.

With the first method, we edit the listener.ora file and add the following line:

PASSWORDS_LISTENER = oraclebrains

and then restart the listener. The drawback of this method is that the password is stored in plain text, without encryption.

With the second method, we run the LSNRCTL utility and issue the following command:

LSNRCTL> change_password

It will prompt you for the old password; if one is set, type it in, otherwise just press Enter.

It will then prompt you for the new password you want to set.

Finally, it will prompt you to re-enter the new password for confirmation.

After this, the password is changed for the running instance of the listener.

To make the change apply to all future instances of the listener, we need to save the configuration as follows:

LSNRCTL> set password

LSNRCTL> save_config

Once these steps are completed, opening the listener.ora file shows the same line we added in the first method, but with the password in encrypted form.

PASSWORDS_LISTENER = 56ETERY34373WWEGW27362

Understanding Asynchronous COMMIT

Before Oracle 10g Release 2, the life cycle of a transaction was as follows:

- The transaction starts when the user issues the first DML statement.
- While the user issues DML, Oracle generates the corresponding redo entries and writes them to a buffer in memory.
- The transaction ends when the user issues a COMMIT (explicit or implicit); Oracle immediately writes the buffered redo to disk.

The main point to understand here is that once COMMIT is issued, Oracle does not return control to the user until the redo entries for that committed transaction have been physically written to disk.

This way the application can be 100% sure that once the transaction is committed, it is protected against instance failure, and the committed data can be recovered by "rolling forward" the log files.

In other words, this is a synchronous COMMIT.

Now Asynchronous COMMIT:

In Oracle 10g Release 2, with very high-speed transaction environments in mind, Oracle has introduced two options:

1. One is to return immediately after the COMMIT is issued, rather than waiting for the log activity to complete.
2. Another option batches multiple transactions together in memory before writing to the disk.

The full syntax of the new WRITE clause is:

COMMIT [WRITE [IMMEDIATE | BATCH] [WAIT | NOWAIT] ]

By default, if no WRITE clause is specified, a normal COMMIT is equivalent to:

COMMIT WRITE IMMEDIATE WAIT;

To specify that the transaction should be written individually but that the application should not wait for it, specify:

COMMIT WRITE IMMEDIATE NOWAIT;

or just

COMMIT WRITE NOWAIT;

Likewise, to specify that Oracle’s log writer process (LGWR) is allowed to batch multiple transactions together before writing, specify:

COMMIT WRITE BATCH NOWAIT;

The option can also be set system-wide by modifying the commit_write instance parameter. Multiple options are separated by commas, as:

ALTER SYSTEM SET commit_write = 'BATCH,NOWAIT';
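As a sketch of where this pays off, consider a high-volume logging loop (the audit_log table here is hypothetical). Each commit returns immediately and lets LGWR batch the redo writes; the trade-off is that the last few rows may be lost on instance failure:

```sql
-- Hypothetical example: per-row durability is not critical here.
begin
  for i in 1 .. 100000 loop
    insert into audit_log (id, logged_at) values (i, sysdate);
    commit write batch nowait; -- returns without waiting for the redo write
  end loop;
end;
/
```

Never use this for transactions whose loss would matter (e.g. financial postings); the default COMMIT WRITE IMMEDIATE WAIT remains the safe choice there.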

Enabling Faster Incremental Backups in Oracle 10g

In this blog entry let me introduce you to a new capability in Oracle 10g called Block Change Tracking, which can help DBAs do faster incremental backups via RMAN (Recovery Manager).

Once this new capability is enabled, Oracle starts recording which blocks have been modified since the last backup and stores this information in a block change tracking file.

Later, during a backup, RMAN uses this file to determine which blocks need to be backed up.

Logically, as we can see, this improves performance because RMAN does not have to scan the whole datafile to detect which blocks have changed and need backup.

This may not have a noticeable impact on smaller databases, but it will definitely have a big impact on larger ones, since RMAN no longer has to scan the full datafile and only needs to query the block change tracking file.

To create the Change Tracking file and enable Change Tracking, make sure the database is either open or mounted. Then issue following SQL:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING
FILE 'c:\oracle\oradata\rkt\block_track\block_track_file01.log';

Database altered.

Once you issue the SQL above, Oracle creates a new background process, the block change tracking writer (CTWR), which is responsible for writing data to the block change tracking file.

To verify the status of block change tracking, use following SQL:

SQL> SELECT filename, status, bytes FROM v$block_change_tracking;

FILENAME
--------------------------------------------------------
STATUS     BYTES
---------- ----------
C:\ORACLE\ORADATA\RKT\BLOCK_TRACK\BLOCK_TRACK_FILE01.LOG
ENABLED          1024

To disable block change tracking, use following SQL:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
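Once tracking is enabled, ordinary RMAN incremental backups pick it up automatically; no extra syntax is needed. For example:

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```

The level 1 backup reads only the blocks recorded in the tracking file since the level 0 baseline.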


Creating a UNIQUE index on a table with existing non-unique values
Monday, May 7th, 2007
Today I came up against the challenge of creating a UNIQUE index on a table with existing non-unique values. After a lot of R&D I came up with the following very interesting solution:

I am using scott schema to show my solution.

SQL> create table test as select deptno from emp;

Table created.

SQL> select deptno from test;

DEPTNO
———-
20
30
30
20
30
30
10
20
10
30
20

DEPTNO
———-
30
20
10

14 rows selected.

SQL> alter table test add constraint unique_index unique(deptno) disable;

Table altered.

SQL> create index unique_index on test(deptno);

Index created.

SQL> alter table test enable novalidate constraint unique_index;

Table altered.

SQL> insert into test values (20);
insert into test values (20)
*
ERROR at line 1:
ORA-00001: unique constraint (SCOTT.UNIQUE_INDEX) violated

Oracle Histograms

About Oracle Histograms

Histograms may help the Oracle optimizer decide whether to use an index versus a full-table scan (where index values are skewed), or help it determine the fastest table join order. To determine the best join order, the WHERE clause of the query can be inspected along with the execution plan of the original query. If the cardinality of the table is too high, histograms on the most selective column in the WHERE clause will tip off the optimizer and change the table join order.

Most Oracle experts recommend scheduled re-analysis only for highly dynamic databases, and most shops save one very deep sample (with histograms), storing the statistics with the dbms_stats.export_schema_stats procedure. The only exceptions are highly volatile systems (e.g. lab research systems) where a table is huge one day and small the next.

For periodic re-analysis, many shops use the table "monitoring" option together with method_opt "auto" once they are confident that all histograms are in place.

Oracle histogram statistics can be created when you have a highly skewed index, where some values have a disproportionate number of rows. In the real world this is quite rare, and one of the most common mistakes with the CBO is the unnecessary introduction of histograms into the CBO statistics. As a general rule, histograms are used when a column's values warrant a change to the execution plan.
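For illustration, a histogram on a single skewed column could be gathered as follows (the ORDERS table and its STATUS column are hypothetical examples):

```sql
-- Sketch: gather a histogram on one skewed column only
begin
  dbms_stats.gather_table_stats(
    ownname    => 'SCOTT',
    tabname    => 'ORDERS',
    method_opt => 'for columns status size 10' -- up to 10 histogram buckets
  );
end;
/
```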

If you need to reanalyze your statistics, the reanalyze task will be less resource-intensive with the repeat option. The repeat option will only re-gather histograms for columns that already have them, and will not search for other histogram opportunities. This is the way to reanalyze your statistics on a regular basis.

--**************************************************************
-- REPEAT OPTION - Only re-analyze histograms for columns
-- that already have histograms
--
-- Following the initial analysis, the weekly analysis
-- job will use the "repeat" option. The repeat option
-- tells dbms_stats to re-gather histograms only for
-- columns that already have them.
--**************************************************************
begin
dbms_stats.gather_schema_stats(
ownname => 'SCOTT',
estimate_percent => dbms_stats.auto_sample_size,
method_opt => 'for all columns size repeat',
degree => 7
);
end;
/

Find histograms for foreign key columns - Many DBAs forget that the CBO must have foreign-key histograms in order to determine the optimal table join order (i.e. the ORDERED hint).

Fix the cause, not the symptom - For example, whenever I see a sub-optimal order for table joins, I resist the temptation to add the ORDERED hint, and instead create histograms on the foreign keys of the join to force the CBO to make the best decision.

For new features, explore the Oracle 10g automatic histogram collection mechanism, which interrogates v$sql_plan to see where foreign keys are used. It claims to generate histograms when appropriate, all automatically.

This is one reason that the ORDERED hint is so popular, but it has been shown that having liberal column histograms on the table columns can often aid the optimizer in making better execution plans.

In sum, histograms are not just for non-unique column values that are unevenly distributed (skewed), and several noted DBAs have suggested that more liberal use of histograms will aid the CBO in making better decisions. The dbms_stats "auto" feature detects and builds column histograms, but it has the shortcoming of being too conservative in some cases.

Savvy DBAs are now experimenting with broad-brush histograms for all indexed columns. I first heard of this technique from Jeff Maresh (noted data warehouse consultant), who told me that he has taken to creating 10-bucket histograms for all data warehouse table columns. I heard this advice again at the IOUG conference from Arup Nanda (noted author and DBA of the year) and from Mike Ault.

They are abandoning the use of the "auto" option and manually creating 20-bucket histograms across the board, and they claim that it can make a huge difference for databases with lots of multi-table joins in the SQL.

I’ve not tried this technique yet, but when three experts make the assertion, I believe that there may be something to the new technique. The only downside, of course, is the time required to gather the column histograms and a small amount of additional storage in the data dictionary.

One exciting feature of dbms_stats is the ability to automatically look for columns that should have histograms, and create the histograms. Multi-bucket histograms add a huge parsing overhead to SQL statements, and histograms should ONLY be used when the SQL will choose a different execution plan based upon the column value.

Oracle's V$ Views

v$archive_dest
Shows all archived redo log destinations. Use this view to find out to which places archived redo logs are copied: select dest_id, destination from v$archive_dest. These values correspond to the init parameter log_archive_dest_n.
v$archive_dest_status
This view allows you to find the status and errors for each of the defined archive destinations.
v$archived_log
Displays successfully archived redo logs. On a standby database, it shows the received logs.
v$archive_gap
Lists the sequence numbers of the archived logs that are known to be missing for each thread on a (physical?) standby database (highest gap only).
v$archive_processes
This view provides information on the archive processes. It can be used to find out if an ARCH process is active or not.
v$controlfile
Displays the location and status of each controlfile in the database.
v$controlfile_record_section
See sections in a controlfile.
v$bh
This dynamic view has an entry for each block in the database buffer cache. The column status can be:
free
This block is not in use
xcur
Block held exclusively by this instance
scur
Block held in cache, shared with other instance
cr
Block for consistent read
read
Block being read from disk
mrec
Block in media recovery mode
irec
Block in instance (crash) recovery mode

v$buffer_pool
See buffer pools. This view's column id can be joined with x$kcbwds.indx. See also x$kcbwbpd.
v$buffer_pool_statistics
v$database
This view lets you access database information. For example, you can check (using log_mode) whether or not the database is in archivelog mode:
ADPDB>select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG
checkpoint_change# records the SCN of the last checkpoint. switchover_status can be used to determine whether it is possible to perform a switchover operation; it is only available for physical standby databases. It can be:
NOT ALLOWED,
SESSIONS ACTIVE,
SWITCHOVER PENDING,
SWITCHOVER LATENT,
TO PRIMARY,
TO STANDBY or
RECOVERY NEEDED.
See protection modes in data guard for the columns protection_mode and protection_level. database_role determines whether a database is a primary, a logical standby, or a physical standby database. force_logging tells whether a database is in force logging mode or not.
v$datafile
This view contains an entry for each datafile of the database. This view can be used to find out which datafiles must be backed up in a cold backup: select name from v$datafile
v$datafile_header
Various information about datafile headers. For example, if you're interested in when a file's last checkpoint took place:
select name, checkpoint_change#, to_char(checkpoint_time, 'DD.MM.YYYY HH24:MI:SS') from v$datafile_header
v$dataguard_status
Shows error messages in a data guard environment.
v$db_object_cache
This view displays objects that are cached (pinned) in the library cache. See also dbms_shared_pool.
v$enqueue_stat
If there are a lot of enqueue waits "in" v$session_event or v$system_event, v$enqueue_stat allows you to break those enqueues down into enqueue classes. For each such class, the gets, waits, failures and the cumulative sum of waited time can be found.
For a list of enqueue types, refer to enqueue types in x$ksqst. The column cum_wait_time stems from x$ksqst.ksqstwtim.
v$eventmetric
This view is new in Oracle 10g and allows improved timing and statistics.
v$event_name
Contains a record for each wait event.
v$filemetric
This view is new in Oracle 10g and allows improved timing and statistics.
v$filestat
v$fixed_table
This view contains the names of all V$, X$ and GV$ tables. In Oracle 8.1.7, there are 185 different v$ views:
ORA81> select count(*) from v$fixed_table where name like 'V$%';

COUNT(*)
----------
185
If you want to know which x$ tables there are, do a select name from v$fixed_table where name like 'X$%';
v$fixed_view_definition
Contains the definition, in its attribute view_definition, of the views of v$fixed_table.
v$flash_recovery_area_usage
See also v$recovery_file_dest
v$instance
instance_role can be used to determine whether an instance is an active (primary) instance or a secondary instance (in a standby environment). dbms_utility.db_version can be used to retrieve the same version as the field version in v$instance.
v$latch
Oracle collects statistics on the activity of all latches and stores them in this view. gets is the number of successful willing-to-wait requests for a latch. Similarly, misses is how many times a process didn't successfully request a latch. spin_gets is the number of times a latch was obtained after spinning at least once. sleeps indicates how many times a willing-to-wait process slept. waiters_woken tells how often a sleeping process was 'disturbed'.
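For example, to see which latches suffer the most contention, the columns described above can be queried directly:

```sql
-- Latches ordered by how often processes had to sleep on them
select name, gets, misses, spin_gets, sleeps
from v$latch
order by sleeps desc;
```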
v$librarycache
v$lock
This view stores all information relating to locks in the database. The interesting columns in this view are sid (identifying the session holding or acquiring the lock), type, and the lmode/request pair. Important possible values of type are TM (DML or Table Lock), TX (Transaction), MR (Media Recovery), and ST (Disk Space Transaction). Exactly one of the lmode/request pair is either 0 or 1 while the other indicates the lock mode. If lmode is not 0 or 1, then the session has acquired the lock, while it is waiting to acquire the lock if request is other than 0 or 1. The possible values for lmode and request are:
1: null,
2: Row Share (SS),
3: Row Exclusive (SX),
4: Share (S),
5: Share Row Exclusive (SSX) and
6: Exclusive(X)
If the lock type is TM, the column id1 is the object's id, and the name of the object can then be queried like so: select name from sys.obj$ where obj# = id1. A lock type of JI indicates that a materialized view is being refreshed. See also x$kgllk.
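Because a waiter requests the same resource (id1, id2) that the blocker holds, blocking situations can be sketched with a self-join on v$lock (the column block is 1 for a session that blocks others):

```sql
-- Who is blocking whom
select h.sid holder_sid, w.sid waiter_sid, w.type, w.id1, w.id2
from v$lock h, v$lock w
where h.block = 1
and w.request > 0
and h.id1 = w.id1
and h.id2 = w.id2;
```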
v$locked_object
Who is locking what:
select
a.oracle_username,
a.os_user_name,
a.locked_mode,
b.object_name,
b.object_type
from
v$locked_object a, dba_objects b
where
a.object_id = b.object_id;
v$log
Contains information on each log group. See also online redo log. Common values for the status column are:
UNUSED:
Oracle8 has never written to this group,
CURRENT:
This is the active group.
ACTIVE:
Oracle has written to this log before, it is needed for instance recovery.
The active log is the one with the current log sequence number
INACTIVE:
Oracle has written to this log before; it is not needed for instance recovery.

v$logfile
This view can be queried to find the filenames, group numbers and states of redo log files. For example, to find all files of group 2, use select member from v$logfile where group# = 2
v$logmnr_contents
See dbms_logmnr.
v$log_history
This view contains an entry for each log switch that occurred. The column first_time indicates the time of the first entry. On physical standby databases, this view shows applied logs.
v$logstdby
Can be used to verify that archived redo logs are being applied to standby databases.
v$managed_standby
Monitors the progress of a standby database in managed recovery mode; more exactly, it displays information about the activities of the log transport and log apply services.
select process, pid, status, client_process, group# "Stdby Redo Log Gr", block# from v$managed_standby;
client_process: the corresponding primary database process. If LGWR log transmission is chosen, one row should have client_process=LGWR. If ARCH transmission is chosen, one row should have ARCH.
v$mystat
This view records statistical data about the session that accesses it. Join statistic# with v$statname. See also recording statistics with oracle.
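For example, to read a single statistic for one's own session:

```sql
-- CPU time consumed by the current session, in centiseconds
select n.name, s.value
from v$mystat s, v$statname n
where s.statistic# = n.statistic#
and n.name = 'CPU used by this session';
```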
v$nls_parameters
The NLS parameters that are in effect for the session querying this view. The view NLS_SESSION_PARAMETERS is based on v$nls_parameters. See also v$nls_valid_values.
v$nls_valid_values
This view can be used to obtain valid values for NLS parameters such as
supported character sets
languages
territories
sorting orders

v$object_usage
v$object_usage gathers information about used (accessed) indexes when an index is monitored using alter index ... monitoring usage. See On verifying if an index is used.
v$open_cursor
v$option
This view lets you see which options are installed in the server. See also dba_registry.
v$parameter
Lists the name-value pairs of the init.ora file (or their default, if not in the init.ora). For example, if you need to know what your block size is:
select value from v$parameter where name = 'db_block_size'
The columns isses_modifiable and issys_modifiable can be used to determine whether a parameter can be changed at session level using alter session or at system level using alter system. A parameter is modifiable at session level if isses_modifiable = 'TRUE'. A parameter is modifiable at system level if issys_modifiable = 'DEFERRED' or issys_modifiable = 'IMMEDIATE'. However, if a parameter with issys_modifiable = 'DEFERRED' is changed at system level, it only affects sessions started after changing the parameter; additionally, the alter system set ... deferred option must be used. There are also some undocumented (or hidden) parameters.
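For example, to check how a given parameter may be changed:

```sql
select name, isses_modifiable, issys_modifiable
from v$parameter
where name = 'nls_date_format';
```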
v$pgastat
See also pga. Thanks to Oleg who notified me of a typo (v$pgastat instead of v$pga_stat).
v$process
Join v$process's addr with v$session's paddr. The column traceid is equal to the value used in alter session set ...
v$pwfile_users
Lists all users who have been granted sysdba or sysoper privileges. See adding user to a password file.
v$recover_file
Useful to find out which datafiles need recovery. Join with v$datafile to see filenames instead of numbers.
v$recovery_file_dest
See also v$flash_recovery_area_usage
v$reserved_words
This view can be consulted if one is in doubt whether a particular word is a reserved word (for example when writing PL/SQL code or assigning a password to a user). Until 9i (is this correct?), the view only consists of two columns: keyword and length. From 10g onwards (?), it also has the columns reserved, res_type, res_attr, res_semi and duplicate. Each of these new columns can only be 'Y' (meaning: yes) or 'N' (meaning: no). See also reserved words in SQL and reserved words in PL/SQL.
v$resource_limit
v$rollname
The names of online rollback segments. This view's usn field can be joined with v$rollstat's usn field and with v$transaction's xidusn field. v$transaction can be used to track undo by session.
v$rollstat
Statistics for rollback segments.
v$session
The column audsid can be joined with sys_context('userenv','SESSIONID') to find out which session is one's own; alternatively, dbms_support.mysid can be used. The fields module and action of v$session can be set with dbms_application_info.set_module (see v$session_longops for an example); the field client_info can be set with dbms_application_info.set_client_info. Join sid with v$sesstat if you want to get some statistical information for a particular session. A record in v$session contains sid and serial#; these numbers can be used to kill a session (alter system kill session). A client can set some information in client_info. For example, RMAN-related sessions can be found with
.... where client_info like 'rman%';
What a session is waiting for can be queried with v$session_wait. However, with Oracle 10g this is no longer necessary, as v$session_wait's information is exposed within v$session as well. See also sessions.
v$sessmetric
This view is new in Oracle 10g and allows improved timing and statistics.
v$session_event
This view is similar to v$system_event; however, it breaks the waits down to the currently connected sessions. v$session_event also has the column max_wait, which shows the maximum time waited for a wait event.
v$session_longops
Use v$session_longops if you have a long-running PL/SQL procedure and want to give feedback on how far it has progressed. If the following procedure is run, it reports its progress in v$session_longops. The procedure also sets the module attribute in v$session, which makes it possible to find the sid and serial# of the session.
create table f(g number);

create or replace procedure long_proc as
rindex pls_integer := dbms_application_info.set_session_longops_nohint;
slno pls_integer;
-- Name of task
op_name varchar2(64) := 'long_proc';

target pls_integer := 0; -- ie. The object being worked on
context pls_integer; -- Any info
sofar number; -- how far proceeded
totalwork number := 1000000; -- finished when sofar=totalwork

-- desc of target
target_desc varchar2(32) := 'A long running procedure';

units varchar2(32) := 'inserts'; -- unit of sofar and totalwork
begin

dbms_application_info.set_module('long_proc',null);

dbms_application_info.set_session_longops (
rindex,
slno);

for sofar in 0..totalwork loop

insert into f values (sofar);

if mod(sofar,1000) = 0 then
dbms_application_info.set_session_longops (
rindex,
slno,
op_name,
target,
context,
sofar,
totalwork,
target_desc,
units);

end if;

end loop;
end long_proc;
/
If the procedure long_proc is run, you can issue the following query to get feedback on its progress:
select time_remaining,sofar,elapsed_seconds
from v$session_longops l, v$session s
where l.sid=s.sid and l.serial# = s.serial# and s.module='long_proc';
v$session_wait
This view shows which wait event each session is waiting for, or what the last event was that it waited for.
In contrast, v$session_event lists the cumulative history of events waited for in a session. The columns P1, P2 and P3 are parameters that are dependent on the event. Since 10g, v$session_wait's information is also exposed within v$session.
v$session_wait_history
This view is new in Oracle 10g and allows improved timing and statistics.
v$sesstat
This view is similar to v$mystat except that it shows cumulated statistics for all sessions. Join sid with v$session and join statistic# with v$statname. v$sesstat is also similar to v$sysstat, except that v$sysstat accumulates the statistics as soon as a session terminates.
v$sga
Shows how much memory the shared global area uses. Selecting * from v$sga is roughly the same as typing show sga in SQL*Plus, with the exception that the latter also shows the total.
v$sgastat
Showing free space in the sga:
select * from v$sgastat where name = 'free memory'
v$sga_dynamic_components
Information about SGA resize operations since startup. This view can also be used to find out the granule size of SGA components.
v$sga_resize_ops
v$sort_usage
See temporary tablespaces
v$sort_segment
See Temporary Tablespaces
v$spparameter
Returns the values for the spfile.
v$sql
v$sql is similar to v$sqlarea; the main difference is that v$sql drills down to select * from x$kglob, whereas v$sqlarea drills down to an aggregating select (sum) from x$kglob.
v$sqlarea
Join v$sqlarea's address with v$session's sql_address. Find the SQL-text of currently running SQL statements:
select sql_text from v$sqlarea where users_executing > 0;
The field version_count indicates how many versions an sql statement has.
v$sqltext
v$sql_plan
variable addr varchar2(20)
variable hash number
variable child number

exec :addr := '&sqladdr'; :hash := &hashvalue; :child := &childno;

select lpad(' ', 2*(level-1))||operation||' '||
decode(id, 0, 'Cost = '||position) "OPERATION",
options, object_name
from v$sql_plan
start with (address = :addr
and hash_value = :hash
and child_number = :child
and id=0 )
connect by prior id = parent_id
and prior address = address
and prior hash_value = hash_value
and prior child_number = child_number
order by id, position ;
In order to find valid values for sqladdr, hashvalue and childno, this SQL statement can be used:
select sql_text,address,hash_value,child_number from v$sql where users_executing > 0;
v$sql_text_with_newlines
This view can be used to reconstruct the entire text of each session's current SQL statement. Use the following statement to do that:
set serveroutput on size 1000000

declare
v_stmt varchar2(16000);
v_sql_text v$sqltext_with_newlines.sql_text%type;
v_sid v$session.sid%type;
begin
for r in (
select
sql_text,s.sid
from
v$sqltext_with_newlines t,
v$session s
where
s.sql_address=t.address
order by s.sid, piece) loop

v_sid := nvl(v_sid,r.sid);

if v_sid <> r.sid then
dbms_output.put_line(v_sid);
put_line(v_stmt,100);
v_sid := r.sid;
v_stmt := r.sql_text;
else
v_stmt := v_stmt || r.sql_text;
end if;

end loop;
dbms_output.put_line(v_sid);
put_line(v_stmt,100);

end;
/
Thanks to Sarmad Zafar who notified me of an error in this PL/SQL block. Note: put_line is not dbms_output.put_line but a user-defined procedure that splits long lines into smaller pieces; it can be used to prevent ORU-10028.
v$sql_bind_data
Join cursor_num with cno of v$sql_cursor.
v$sql_bind_capture
New with Oracle 10g. This view captures bind variables for all sessions and is faster than setting event 10046 on level 4.
v$sql_cursor
Join parent_handle with address of v$sql or v$sqlarea.
v$sql_workarea
v$sql_workarea can be joined with v$sqlarea on address and hash_value, and it can be joined with v$sql on address, hash_value and child_number.
v$standby_log
v$statname
Use this view to get decoded names for the statistic# field of v$mystat, v$sysstat and v$sesstat.
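A typical decode for the current session might look like this (a sketch):

```sql
-- Decoded, non-zero statistics for the current session
select n.name, m.value
from   v$mystat m, v$statname n
where  m.statistic# = n.statistic#
and    m.value > 0;
```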
v$sysaux_occupants
v$sysaux_occupants doesn't exist in Oracle versions prior to Oracle 10g. See occupants in the sysaux tablespace.
v$sysmetric
This view is new in Oracle 10g and allows improved timing and statistics.
v$sysmetric_history
This view is new in Oracle 10g and allows improved timing and statistics.
v$sysstat
v$sysstat is similar to v$sesstat. While v$sesstat displays statistics for the current session, v$sysstat displays the cumulated statistics since startup of the database. For example, it is possible to find out the CPU time (name = 'CPU used by this session'). This view is (among others) used to calculate the Hit Ratio.
v$system_event
This view displays the count (total_waits) of all wait events since startup of the instance. If timed_statistics is set to true, the sum of the wait times for all events is also displayed in the column time_waited. The unit of time_waited is one hundredth of a second. Since 10g, an additional column (time_waited_micro) measures wait times in millionths of a second. total_waits where event='buffer busy waits' is equal to the sum of count in v$waitstat. v$enqueue_stat can be used to break down waits on the enqueue wait event. While this view totals all events in an instance, v$session_event breaks them down to all currently connected sessions.
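A common use is ranking the top wait events since startup (a sketch; the time columns are only populated when timed_statistics is true):

```sql
-- Top wait events since instance startup, by total time waited
select event, total_waits, time_waited
from   v$system_event
order by time_waited desc;
```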
v$undostat
See undo tablespaces
v$tempfile
v$tempseg_usage
v$tempseg_usage is a public synonym for v$sort_usage.
v$tempstat
v$thread
The Oracle SID can be retrieved through select instance from v$thread
v$timer
This view has only one column (hsecs) which counts hundredths of seconds. Whenever it overflows four bytes, it starts again at 0.
v$transaction
Important fields of v$transaction are used_ublk and used_urec. They tell how many blocks and records the undo for a transaction consists of. In order to find out the name of the corresponding rollback segment, join the xidusn field with the usn field of v$rollname. This is demonstrated in Transactions generate undo.
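The join described above can be sketched like this:

```sql
-- Undo usage per active transaction, with the rollback segment name
select r.name, t.used_ublk, t.used_urec
from   v$transaction t, v$rollname r
where  t.xidusn = r.usn;
```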
v$timezone_names
See also timezones for some values of tzabbrev.
v$transportable_platform
Which platforms are supported for cross platform transportable tablespaces.
v$version
Use this view to find out what version you actually work on: select * from v$version;
BANNER
----------------------------------------------------------------
Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
PL/SQL Release 8.1.7.0.0 - Production
CORE 8.1.7.0.0 Production
TNS for 32-bit Windows: Version 8.1.7.0.0 - Production
NLSRTL Version 3.4.1.0.0 - Production
v$waitstat
The sum of count in v$waitstat equals total_waits in v$system_event where event='buffer busy waits'.

Oracle DBA Checklist

I. Daily Procedures

A. Verify all instances are up
Make sure the database is available. Log into each instance and run daily reports or test scripts. Some sites may wish to automate this.
Optional implementation: use Oracle Enterprise Manager's 'probe' event.
B. Look for any new alert log entries
• Connect to each managed system.
• Use 'telnet' or comparable program.
• For each managed instance, go to the background dump destination, usually $ORACLE_BASE/<SID>/bdump. Make sure to look under each managed database's SID.
• At the prompt, use the Unix 'tail' command to see the alert_<SID>.log, or otherwise examine the most recent entries in the file.
• If any ORA- errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in <location>.
C. Verify DBSNMP is running
1. Log on to each managed machine to check for the 'dbsnmp' process.
For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP. (Some sites have this disabled on purpose; if this is the case, remove this item from your list, or change it to "verify that DBSNMP is NOT running".)
D. Verify success of database backup
E. Verify success of database archiving to tape
F. Verify enough resources for acceptable performance
1. Verify free space in tablespaces.
For each instance, verify that enough free space exists in each tablespace to handle the day's expected growth. As of <date>, the minimum free space for <tablespace> is <amount>. When incoming data is stable, and average daily growth can be calculated, then the minimum free space should be at least <x> days' worth of data growth.
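A quick way to check free space per tablespace (a sketch; run as a user with access to the DBA views):

```sql
-- Free space per tablespace, in MB
select tablespace_name, round(sum(bytes)/1024/1024) mb_free
from   dba_free_space
group by tablespace_name
order by mb_free;
```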

RDBMS

Databases and Database Management Systems
Let's start from basics. What is a database? In very general terms, a database is a collection of related data. Notice the word related; this implies that the collection of letters on this page does not by itself constitute a database. But if we think of them as a collection of letters arranged to form words, then they can be conceptualised as data in a database. Using similar reasoning, we can also say that a tome such as a telephone directory is also a database. It is a database first, because it is a collection of letters that form words and second, because it is an alphabetical listing of people's names, their addresses and their telephone numbers. How we think of a database depends on what use we want to make of the information that it contains.

So far, we have talked about a database in its broadest sense. This very general definition is not what most people mean when they talk about a database. In this electronic age, the word database has become synonymous with the term "computerised database". Collins English Dictionary describes a database as "A store of a large amount of information, esp. in a form that can be handled by a computer." In this book, we will be dealing only with computerised databases. In keeping with popular trend though, we will be using the word database to refer to a computerised database.

[Edit section] Storage Requirements
A database (computerised) by itself, is not much use. The data is stored electronically on the computer's disk in a format which we humans cannot read or understand directly. What we need is some way of accessing this data and converting it into a form which we do understand. This is the job of the database management system or DBMS for short.

[Edit section] Management Systems
A DBMS is essentially a suite of programs that act as the interface between the human operator and the data held in the database. Using the DBMS, it is possible to retrieve useful information, update or delete obsolete information and add new information to the database. As well as data entry and retrieval, the DBMS plays an important role in maintaining the overall integrity of the data in the database. The simplest example of this is ensuring that the values entered into the database conform to the data types that are specified. For example, in the telephone book database, the DBMS might have to ensure that each phone number entered conforms to a set format of XXX-XXXXXXX where X represents an integer.
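In an Oracle database, the phone-format rule above could be enforced declaratively (a sketch using a check constraint; regexp_like requires 10g or later, and the table and column names are made up for illustration):

```sql
-- Hypothetical phone_book table enforcing the XXX-XXXXXXX format
create table phone_book (
  name  varchar2(60),
  phone varchar2(11)
        check (regexp_like(phone, '^[0-9]{3}-[0-9]{7}$'))
);
```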

[Edit section] 2. The Database as a Collection of Tables
[Edit section] Relational Databases and SQL
In the early days of computerised databases, all large database systems conformed to either the network data model or the hierarchical data model. We will not be discussing the technical details of these models except to say that they are quite complex and not very flexible. One of the main drawbacks of these databases was that in order to retrieve information, the user had to have an idea of where in the database the data was stored. This meant that data processing and information retrieval was a technical job which was beyond the ability of the average office manager. In those days life was simple: data processing staff were expected to prepare the annual, monthly or weekly reports, and managers were expected to formulate and implement day to day business strategy according to the information contained in the reports. Computer literate executives were rare, and DP staff with business sense were even rarer. This was the state of affairs before the advent of relational databases.

[Edit section] History
The relational data model was introduced in 1970 by E. F. Codd, a research fellow working for IBM, in his article 'A Relational Model of Data for Large Shared Data Banks'. The relational database model represented the database as a collection of tables which related to one another.

Unlike network and hierarchical databases, the relational database is quite intuitive to use, with data organised into tables, columns and rows. The table is a list of rows e.g. names and telephone numbers. It is similar to how we might go about the task of jotting down the phone numbers of some of our friends, in the back of our diary for example.

The relational data model consists of a number of intuitive concepts for storing any type of data in a database, along with a number of functions to manipulate the information.

[Edit section] Relational Data Modeling
The relational data model as proposed by Codd provided the basic concepts for a new database management system, the relational database management system (RDBMS). Soon after the relational model was defined, a number of relational database languages were developed and used for instructing the RDBMS, Structured Query Language (SQL) being one of them.

[Edit section] SQL Language
The SQL language is so inextricably tied to relational database theory that it is impossible to discuss it without also discussing the relational data model. The next two sections briefly describe some of the concepts of this model.

Script to Generate Statspack report for a day

-- generateReportScript.sql

spool DayReports.sql

set heading off pages 9999

select 'define begin_snap='||snap_id||';'||chr(10)||
       'define end_snap='||to_number(snap_id + 1)||';'||chr(10)||
       'define report_name=sp_'||to_char(snap_time,'ddmmhh24mi')||'.txt'||chr(10)||
       '@?/rdbms/admin/spreport;'||chr(10)
from stats$snapshot
where trunc(snap_time) = trunc(sysdate -1)
order by snap_time asc;

spool off

I don’t have an understanding of analytic functions yet, so the above script crudely just adds 1 to the snap_id to get the end_snap for a report; but if you’ve set up your site to take snapshots every 15 minutes, the script works fine, producing the following script:

-- DayReports.sql

define begin_snap=1621;
define end_snap=1622;
define report_name=sp_03060600.txt
@?/rdbms/admin/spreport;

etc…

which when run as the PERFSTAT user produces the required files

sp_03060600.txt
sp_03060615.txt
sp_03060630.txt
sp_03060645.txt
sp_03060700.txt
sp_03060715.txt
sp_03060730.txt
sp_03060745.txt
etc…

which can then be grepped as required!

dbms_xplan in 10g

If you’re using 9i and haven’t learned about the dbms_xplan package, then you should take a good look at it right away. It’s (usually) a much better way of getting execution plans from your system than writing your own queries against the plan_table.

If you’ve been using dbms_xplan, and upgraded from 9i to 10g, make sure that you look at the new features - there are some things which are really useful, and this note is about just one of them.

dbms_xplan.display_cursor()

In 10g, Oracle introduced a new function in dbms_xplan that can read the in-memory execution plan (v$sql_plan and associated structures). The function is display_cursor, and it takes three optional parameters: the sql_id and child_number of the SQL statement you want to see the plan for, and a formatting string.

The best bit of this function shows up when you look at the script that generates it ($ORACLE_HOME/rdbms/admin/dbmsxpln.sql) to find out how to use the format parameter. Here’s a “live” demo:


set serveroutput off

select
/*+
gather_plan_statistics
ordered use_nl(t1) index(t1)
*/
count(t1.n2), count(t2.n2)
from
t2, t1
where
t2.n2 = 45
and t1.n1 = t2.n1
;

COUNT(T1.N2) COUNT(T2.N2)
------------ ------------
225 225

select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID 98cw5a9c0pw33, child number 0
-------------------------------------
select /*+ gather_plan_statistics ordered use_nl(t1) index(t1) */ count(t1.n2),
count(t2.n2) from t2, t1 where t2.n2 = 45 and t1.n1 = t2.n1

Plan hash value: 3795562434

-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------------
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 146 |
| 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 15 | 225 |00:00:00.01 | 146 |
| 3 | NESTED LOOPS | | 1 | 225 | 241 |00:00:00.02 | 116 |
|* 4 | TABLE ACCESS FULL | T2 | 1 | 15 | 15 |00:00:00.01 | 99 |
|* 5 | INDEX RANGE SCAN | T_I1 | 15 | 15 | 225 |00:00:00.01 | 17 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - filter("T2"."N2"=45)
5 - access("T1"."N1"="T2"."N1")

If you want to call this function, you need access to several of the dynamic performance views - v$session, v$sql, v$sql_plan and v$sql_plan_statistics_all seem to cover all the options between them; and v$sql_plan_statistics_all is the most useful one.
As you can see in the query above, I’ve not supplied an sql_id or child_number, and I’ve used the format ‘ALLSTATS LAST’. By default display_cursor reports the last statement I executed, and the effect of ‘ALLSTATS LAST’ is to report the actual execution stats alongside the predicted row counts.

If you’ve ever got into Wolfgang Breitling’s “Tuning by Cardinality Feedback”, you’ve now got a tool that can make it a lot easier.

A couple of odd notes - you’ll see that I set serveroutput off. If serveroutput is on when you call this function, the last statement you will have run will be the (hidden) call to dbms_output that follows your execution of any other statement - so you won’t get the plan and statistics.

The hint /*+ gather_plan_statistics */ enables collection of rowsource execution statistics for the statement. It’s a “nearly undocumented” hint, in that the documentation seems only to exist in odd places like the script that generates the package, and as part of the output of the function call if the statistics weren’t enabled when you try to report statistics.

You have to be a little careful comparing the actual and estimated row counts. They are not supposed to be the same in all cases. The estimated values are estimates for each execution of a rowsource, the actual values are the cumulative counts. So, for example, it is perfectly reasonable in line 5 to see E-rows = 15 and A-rows = 225, because line 5 starts 15 times: so 225 Actual rows = 15 starts * 15 estimated rows per start.

Oracle Capacity Planner

Oracle Capacity Planner is an Oracle Enterprise Manager application that allows you to collect different types of performance statistics and record that data in an historical database. You can then use Oracle Capacity Planner to analyze the historical data to plan future capacity.


--------------------------------------------------------------------------------
Note:
In releases prior to 9i, the Agent data gathering service (also known as the Oracle Data Gatherer) was used to collect data for presentation in real-time performance monitoring utilities and historical data. The Intelligent Agent and the data gathering service were separate services that were started and stopped independently of each other.

With 9i, the data collection capabilities are integrated into the 9i Intelligent Agent. Therefore, there is no separate service that needs to be started. Furthermore, when you stop the 9i Intelligent Agent, the data gathering capabilities are disabled and not available.

--------------------------------------------------------------------------------




Oracle Capacity Planner uses the Intelligent Agent (Agent) to collect its historical data. To collect most types of data, you must install the Agent on the system (or systems) where you want to collect data. See the Oracle Intelligent Agent User's Guide for information on managing the Agent.

If you are using Oracle Capacity Planner to collect concurrent manager data, the Agent can be installed on another system.

The types of performance data that Oracle Capacity Planner can collect on a system depend on the products that are installed. Table 4-1 shows the types of data that Oracle Capacity Planner can collect when various Oracle products are installed. The number of targets that appear in the Capacity Planner navigator tree will also vary, depending on the number of applications you have installed.

Table 4-1 Data That Capacity Planner Can Collect When Various Oracle Products are Installed
Oracle Product                           Oracle Concurrent  Oracle Database  Node (Operating   SAP R/3      Microsoft SQL
                                         Manager Data       Data             System) Data      System Data  Server Data
---------------------------------------  -----------------  ---------------  ----------------  -----------  -------------
Diagnostics Pack                         No                 Yes              Yes               No           Yes
Management Pack for Oracle Applications  Yes                No               Yes               No           No
Management Pack for SAP R/3              No                 No               Yes               Yes          No



If additional products from Oracle are installed, Oracle Capacity Planner may be able to collect additional types of data not shown in Table 4-1.

How Capacity Planner Collects and Stores Performance Data
The following list identifies how Capacity Planner uses the Agent to collect performance data:

The Agent collects the data at an interval you specify from within Capacity Planner.

The Agent stores the data in an efficient binary form.

At a specified loader interval, the data is read from this binary file and stored in a readable form in the Oracle Capacity Planner historical database.

The Agent also aggregates the data by time, creating summary statistics for the data that is collected at different time intervals (hour, day, week, and so on).

Over time, the Agent purges the data in the historical database, keeping only as much data as you specify.

Purging the historical data allows you to have access to a current view of recent performance data for your environment and to manage the amount of stored data to avoid wasting disk space.

Accessing Historical Data Through an Intermediate Host
Oracle Performance Manager relies on the Agent to monitor a target (such as a node, a database, or a concurrent manager). Usually, the collection of the data is done on the system where the target is located. However, for some types of targets Oracle Performance Manager can also monitor data on a system by using the Agent on another system (an intermediate host).

This intermediate host could be the client system where the Oracle Performance Manager is running, or any other system on the network on which the Agent is available.

There are three ways to set the location of the Agent. The method you use depends on how you connect to the target. Consider the following scenarios:

You connect to a database that has been discovered using the Enterprise Manager console and you are running Oracle Capacity Planner while connected to the Oracle Management Server.

Capacity Planner attempts to connect to the Agent on the host where the target is located. If this connection fails or cannot be attempted because no preferred credentials have been set for the database, you will be prompted with a dialog box containing logon credentials for the database. You can use the Connection Details dialog box to specify the location of the Agent that will be used to collect performance data.



You click the Add Service button in the toolbar panel and manually add the target to the navigator.

Capacity Planner displays a Logon dialog box. This dialog box also contains a field to specify the location of the Agent that will be used to collect performance data.

You select a target in the Capacity Planner navigator tree and choose Set Connection Details from the File menu.

Capacity Planner displays the Database Logon dialog box for the selected target. This dialog box also contains a field to specify the location of the Agent that will be used to collect performance data.

Usage Scenarios for Oracle Capacity Planner
Oracle Capacity Planner can perform tasks such as the following:

Track the increase in the amount of used space on a disk over time and estimate when 95 percent of the space on the disk will be consumed

Track the increase in the total I/O rate for a host over time, then estimate what the total I/O rate for the host will be at some future point in time

Determine thresholds to set in Performance Manager by evaluating historical data gathered in Capacity Planner

Provide a predefined set of data that measures performance (Oracle Recommended Collection)

Publish historical data to a website where administrators and system administrators can view data that is periodically updated

Display a chart that shows the number of forms sessions per user and per application; estimate how the numbers will change over a designated period of time.

Starting Oracle Capacity Planner
You can start Oracle Capacity Planner in any of the following ways:



On the Oracle Enterprise Manager console, click the Application Management drawer, then click the Capacity Planner icon.

On the Oracle Enterprise Manager console, select a Concurrent Manager target, click the right mouse button, point at Related Tools in the context menu, then click Capacity Planner.

On the Tools menu of the Oracle Enterprise Manager console, point to Application Management, then click Oracle Capacity Planner.

From the UNIX command line, change directory to the ORACLE_HOME/bin directory (where ORACLE_HOME represents the home directory in which the Management Pack for Oracle Applications is installed), then type the following command:

./oemapp cp


From the Start menu, point to Programs, then to the Oracle Home where Oracle Enterprise Manager is installed, then to Oracle Applications, then click Capacity Planner, which displays the Capacity Planner Login dialog box.

Enter the appropriate connect information for your Oracle Enterprise Manager repository, then click OK.

Using Help
When you are using Oracle Capacity Planner, you can display context-sensitive help topics. You can also display conceptual topics and task topics.

To obtain context-sensitive help for an Oracle Capacity Planner panel, window, dialog box, or property page, select an item in the panel, window, dialog box, or property page, then on the Help menu, choose Window.

To obtain context-sensitive help for a class, chart, or data item on a property page, select the class, chart, or data item on the property page, then take one of the following actions:

Press F1.

Click the Help icon on the toolbar.

On the Help menu, choose Selected Item.

If a help topic is not available for a class, chart, or data item, a No Help Available message is displayed when you request help.


--------------------------------------------------------------------------------
Note:
Not all the charts described in the appendix are available from within Oracle Capacity Planner.

--------------------------------------------------------------------------------




To view the Contents page for the Oracle Capacity Planner help, on the Help menu, choose Contents. On the Contents page, you can:

View a list of the conceptual help topics by double-clicking the Conceptual Topics book on the help system's Contents page.

Conceptual topics provide conceptual or overview information of which you should be aware before performing a particular Oracle Capacity Planner task. Conceptual topics do not contain step-by-step information for performing a task.

View a list of the task help topics by double-clicking the Task Topics book on the help system's Contents page.

Task topics provide step-by-step instructions for accomplishing an Oracle Capacity Planner task. They do not provide conceptual information that you should be aware of before performing the task.

You can access a glossary of Oracle Capacity Planner terms by double-clicking the Oracle Capacity Planner Glossary book on the Contents page.

Using Oracle Capacity Planner
You may find it helpful to think of using Oracle Capacity Planner as a two-step process:

Setting up the collection of historical data

See "Setting Up a Collection" for a detailed description of how to collect historical data.

Analyzing the collected data

See "Analyzing Collected Data" for a detailed description of how to analyze collected data.

Setting Up a Collection
To set up a historical data collection with Oracle Capacity Planner, you:

Select (and connect to) the targets for which you want to collect historical data.

Define the frequency of collection samples.

Define the location of the historical database and the frequency at which collected data should be loaded in the historical database.

Define the amount of data to be kept in the Oracle Capacity Planner database at each time interval.

Select the classes of data to collect, based on what is available in the Oracle Capacity Planner navigator.

Figure 4-1 shows the steps to follow when setting up a collection. For more information on each step, see the following sections.

Figure 4-1 Steps for Setting Up a Collection


Selecting Targets
You can select one or more targets from which you want to collect historical performance statistics. For example, when you have the Management Pack for Oracle Applications installed, the target types are typically Nodes and Concurrent Managers.

The procedure for selecting a target depends upon whether or not the server resides on a node that is known to the Enterprise Manager main console. A node is known to the console when it has been discovered with the Enterprise Manager console Discover Nodes command.

Selecting Targets on Discovered Nodes
To select a target on a node that has been discovered by the Enterprise Manager console:

Click the Collection tab of the Oracle Capacity Planner navigator panel, which displays the Oracle Capacity Planner collection view.

Double-click one of the target type folders.

The Oracle Capacity Planner navigator tree displays the targets of the selected type that are known to the Oracle Enterprise Manager main console.



Select the target from which you want to collect performance data and click the Connect button in the Capacity Planner toolbar, or click the plus sign (+) next to the target in the navigator.

When you connect to a target in the Oracle Capacity Planner navigator panel, you may be prompted for credentials if they are required to connect to that target. If a logon dialog box appears, click Help for information on the individual fields on the dialog box.

After you connect to a target, the available collection classes for that target appear in the navigator and Oracle Capacity Planner displays the Target property sheet for the selected target. For example, Figure 4-2 shows the classes that appear when you connect to a Windows NT node target.

Figure 4-2 Selecting a Target in the Oracle Capacity Planner Window


Selecting Targets on Undiscovered Nodes
If you double-click a target type and the target you are looking for does not appear in the target type folder, you can do one of the following:

Use the Enterprise Manager console to discover the node where the target resides.

This is the preferred method for adding a target to the Capacity Planner navigator panel. When you discover a node with the Enterprise Manager console, Capacity Planner saves key information about the target, such as the preferred credentials, location of the Agent, and historical database information.

Add the target manually to the Capacity Planner navigator panel.

When you add a target manually, the target is added for the current session of Capacity Planner only. You can start gathering data for the target, but the next time you start Capacity Planner, the target will not appear in the navigator panel. To modify collection settings or analyze data collected for the target, you will have to add the target again the next time you start Capacity Planner.

To add a target manually:

Select the target type for the target you want to add.



Click the Add Service button in the Capacity Planner toolbar.

Capacity Planner displays a message to remind you that targets you add manually will be available for the current session of Capacity Planner only.

Click Yes.

Capacity Planner displays a logon dialog box for the target. Click Help if you need information about any of the specific fields on the dialog box.

After you fill in the logon credentials, click OK to connect to the target.

After you connect to a target, the available collection classes for that target appear in the navigator and Oracle Capacity Planner displays the Target property sheet for the selected target (Figure 4-2).

About the Target Property Sheet
When you connect to a target, Oracle Capacity Planner displays the Target property sheet, which contains the following property pages:

Collection Options page

Storage Options page

Purge Options page

The collection, load, and purge settings you select on these pages apply to all the classes in the selected target. You cannot change these settings for a selected class; you can change them only for a selected target.

Defining the Frequency of Collection Samples
When you first connect to a target, Capacity Planner displays the Collection Options property page. The values you select on this page determine how often the Agent will collect information about the selected target and save it to disk. On the Collection Options property page, you can set the frequency of collection samples using either of these two methods:

By selecting the number of data samples to collect for a specified time interval

For example, to collect information 3 times every hour, select Samples per Time Interval, enter 3 in the text field, and choose Hours from the drop-down list.



By specifying the length of time between collection samples

For example, to collect information every 20 minutes, select Time Between Samples, enter 20 in the text field, and choose Minutes from the drop-down list.




--------------------------------------------------------------------------------
Note:
To configure the settings for collecting data, the Agent that will collect data for a target must be running. Normally, the Agent is located on the host where the target is located.

--------------------------------------------------------------------------------




Defining the Location of the Historical Database
After you set the frequency of collection samples, use the Storage Options tab to specify the location of the Oracle Capacity Planner historical database. This setting defines where Capacity Planner will save information collected for the selected target. When data is saved in the historical database, you can later analyze the data or use it to generate trend analyses.

By default, collected data is saved in your Oracle Enterprise Manager repository. However, you can specify an alternate database for the historical data if you do not want to store historical data for the selected target in the repository.

To save the data collected for a target in a location other than the Oracle Enterprise Manager repository:

From the Storage Options tab, select Store historical data in alternate location.

Enter the credentials for the database that will store the collection data for the target.

Enter the appropriate user name, password, and target for the Oracle Capacity Planner historical database. This connection information is used by the Agent when it loads the collected data at the specified loader interval. Therefore, the target specified for the Oracle Capacity Planner historical database must be known and accessible from the host where the Agent is running.

For example, if you specify MY_DB.WORLD as the historical database and your Agent is running on host MYNODE, then database MY_DB.WORLD must be reachable via Net8 or SQL*Net from host MYNODE (MY_DB.WORLD must exist in the TNSNAMES.ORA file for host MYNODE, for example).
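A hypothetical TNSNAMES.ORA entry on MYNODE that would make MY_DB.WORLD resolvable might look like the following (the host name, port, and SID are placeholder values for illustration):

```
MY_DB.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mynode-db)(PORT = 1521))
    (CONNECT_DATA = (SID = MYDB))
  )
```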

Defining the Frequency of Loading Collection Data
After you define how often data will be collected and where the data will be stored, you can define how often the data will be loaded into the historical database.

You can determine how frequently the Oracle Capacity Planner loader program loads collection data for the target using either of these two methods:

Using the Per Time Interval field to specify the number of loader passes for a specified time interval

Using the Time Between field to specify the length of time between loader passes


--------------------------------------------------------------------------------
Note:
Although data can be collected at short time intervals (such as 5, 10, or 15 minutes), collected data is not available to you for analysis until it has been loaded into the Oracle Capacity Planner historical database. Therefore, the load interval you specify controls when the most recently collected data will be visible to you.

--------------------------------------------------------------------------------




The collection frequency you set determines the lowest time interval for which Oracle Capacity Planner can analyze data. For example, if you want to be able to analyze data from hourly intervals, you need to set a collection frequency of at least once an hour.

After you define the frequency of loading collection data, you can set a time for the next loader pass by using the Next Transfer Occurs at field:

Click the value that you want to change in the date and time box.

Click the up or down arrow to select a higher or lower value.


Text description of the illustration cp_next_transfer_field.gif

If the current time is later than the time displayed for the Next Transfer Occurs at setting, click Refresh View to update the display to show the actual time of the next loader pass.

Note that the values that you select on the Storage Options property page apply to all classes of data collected on the selected target.

Understanding When the Loader Runs
The loader runs at the regular time interval specified on the Storage Options property page. The time specified in the Next Transfer Occurs at field controls when the loader runs. For example, if you want the loader to run once a day at 2:00 a.m., then set the transfer interval to once per day, and set the next transfer time to the next day at 2:00 a.m. The loader will run for the first time at 2:00 a.m. the next day, and then will run once a day, each day at 2:00 a.m.

You can also use the Next Transfer Occurs at field to force the loader to run as soon as possible: simply set the next transfer time to a time earlier than the current time.
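The scheduling behavior described above can be sketched as a simplified model (this is illustrative code, not Capacity Planner itself): the loader fires at the stored next-transfer time and then advances that time by the transfer interval, and a next-transfer time in the past makes the loader eligible to run immediately.

```python
from datetime import datetime, timedelta

def next_loader_runs(next_transfer, interval, now, count=3):
    """Return the next `count` loader run times, given the stored
    'Next Transfer Occurs at' time and the transfer interval.
    A next-transfer time earlier than `now` means "run as soon as possible"."""
    t = next_transfer if next_transfer >= now else now
    runs = []
    for _ in range(count):
        runs.append(t)
        t += interval
    return runs

# Daily loads at 2:00 a.m.: set the interval to one day and the next
# transfer time to 2:00 a.m. the following day.
now = datetime(2001, 5, 1, 14, 0)
first = datetime(2001, 5, 2, 2, 0)
for run in next_loader_runs(first, timedelta(days=1), now):
    print(run)
```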

Understanding How Data Aggregation Works
As it loads data into the database, Oracle Capacity Planner automatically aggregates the data. During aggregation, Oracle Capacity Planner examines the collection samples for a given time interval, calculates an average value for that interval, then uses that average value for calculations at the next larger time interval.

On each loader pass, Oracle Capacity Planner examines the time stamps of all collected data to determine whether or not the threshold for a set interval (hour, day, week, month, or year) has been crossed since the previous loader pass. If so, Oracle Capacity Planner performs an aggregation of collected data from the next smaller interval to the interval threshold just passed. Table 4-2 shows the larger time interval to which data at each time interval is aggregated.

Table 4-2 Aggregation of Data from Smaller to Larger Time Intervals
Data at This Interval    Aggregates to This Interval
---------------------    ---------------------------
Minutes                  Hours
Hours                    Days
Days                     Weeks
Weeks                    Months
Months                   Years



An example may help clarify how Oracle Capacity Planner aggregates collection data. Suppose Oracle Capacity Planner is collecting a data sample every 10 minutes to capture the percentage of used space on a particular disk, and the loader runs once an hour. When the Oracle Capacity Planner loader runs and transfers data into the Oracle Capacity Planner database every hour, it examines the values for the percentage of used disk space in the samples collected during the previous hour, then it computes an average percentage for the samples. That average percentage is then used as the average value for the hour.

At the first loader pass after midnight, Oracle Capacity Planner uses the hourly averages since midnight the previous day to compute the day's percentage of used disk space.

At the first loader pass after midnight on Saturday, Oracle Capacity Planner examines the daily averages of used disk space since midnight the previous Saturday and computes an average percentage for the week.

At the first loader pass after midnight on the last calendar day of the month, Oracle Capacity Planner examines the weekly averages since midnight on the last calendar day of the previous month and computes an average percentage for the month.

Finally, at the first loader pass after midnight on December 31, Oracle Capacity Planner examines the last 12 monthly averages to compute a yearly average.
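The aggregation in this example can be sketched in a few lines (an illustrative model only): the value for each larger interval is the average of the values from the next smaller interval.

```python
def aggregate(samples):
    """Average the samples from one interval to produce the value
    for the next larger interval (e.g. six 10-minute samples -> one hour)."""
    return sum(samples) / len(samples)

# Six 10-minute disk-usage samples (percent of space used) from one hour:
ten_minute = [71.0, 71.5, 72.0, 72.5, 73.0, 74.0]
hourly = aggregate(ten_minute)
print(hourly)   # -> approximately 72.33

# In the same way, 24 hourly values average into one daily value,
# daily values into a weekly value, and so on up Table 4-2.
```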

Defining the Amount of Data to Keep
The final step in setting up a target for historical data collection is defining the amount of data to keep.

When you select a target, you can click the Purge Options tab to display the Purge Options property page. On this page, you specify the amount of data you want to store in the Oracle Capacity Planner historical database for each type of data (for example, hour data, day data, and week data). You can accept the default Purge Options property page values or choose different values.

The values set on the Purge Options property page apply to all classes of data collected on the selected target.

The values you specify in the data retention table (the Enter the amount of data you want to keep table on the Purge Options page) determine how far back in time you can go to analyze data for the different types of data.

Suppose, for example, you always want to be able to view hour data for the previous six weeks. In this case, for the Hours Data Type row in the data retention table, you would enter 6 in the Timespan column and Weeks in the To Keep column. This tells Oracle Capacity Planner that you want to keep hour data for a 6-week period. Oracle Capacity Planner calculates 6 weeks from the current time, and the Purge Data Older Than column shows the time span for which hour data is available. Table 4-3 shows the Hours row in the data retention table with this data entered.

Table 4-3 Specifying How Much Data to Keep for a Data Type
Data Type    Timespan    To Keep    Purge Data Older Than
---------    --------    -------    ---------------------
Hours        6           Weeks      (current date minus 6 weeks)



For each row in the data retention table, you can choose how long to keep data samples for that type of data by entering the desired values in the Timespan and To Keep columns.
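The Purge Data Older Than value is simply the current time minus the retention timespan. A sketch of the calculation (hypothetical code, not part of the tool):

```python
from datetime import datetime, timedelta

def purge_cutoff(now, timespan, unit):
    """Compute the 'Purge Data Older Than' time from a retention setting.
    Hours, days, and weeks map cleanly onto fixed-length timedeltas."""
    units = {"Hours": timedelta(hours=1),
             "Days": timedelta(days=1),
             "Weeks": timedelta(weeks=1)}
    return now - timespan * units[unit]

# Keep hour data for 6 weeks: anything older than this cutoff is purged.
now = datetime(2001, 5, 1, 0, 0)
print(purge_cutoff(now, 6, "Weeks"))   # -> 2001-03-20 00:00:00
```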

Applying and Reverting the Collection Settings for a Target
When you finish setting all the options on the Target property sheet, do the following to apply your collection settings:



Click Apply.

If you changed the location of the historical database on the Storage Options page, Capacity Planner displays a message box reminding you that any information stored in the previous location will not be available for analysis.

If you don't want to be reminded about this implication the next time you change the database location, select Don't show this message again.

Click OK.

Capacity Planner displays a message stating that the options were set successfully.

Click Continue to close the message box and return to the Capacity Planner main window.

When you apply your collection settings, Capacity Planner sends the settings to the Agent for the selected target. Note that you cannot select data to be collected until these settings have been applied.



At any time during the process of modifying the settings on the Target property sheet, you may revert the settings to their last saved (applied) state by clicking Revert. However, after you apply the settings, they cannot be reverted to their previous settings using the Revert button; they must be modified manually.

Selecting the Classes of Data to Collect
After you set the collection and loading options for a target, you can then select the classes of performance data that you want to collect for that target:

In collection view, select and connect to the target.

For more information, see "Selecting Targets".

Click the plus sign (+) next to the target to show the classes of data you can collect.

Select one of the classes for the target.

Figure 4-3 shows a typical collection class for a node target on Windows NT and the data sources and data items available for the class.

Figure 4-3 Selecting Data Sources and Data Items for a Collection Class


Text description of the illustration cp_selected_class.gif
The following list explains classes, data items, and data sources in more detail:

Classes

A class is a container for a particular category of data. For example, a node (or operating system) target might include the following classes:

System

Processor

Memory

When you select a target in the navigator, Capacity Planner displays the names of the classes below the target in the tree view.

You can collect data from none, some, or all displayed classes for a target. A class may also be a container for other classes, to provide logical groupings of the performance data at higher levels. For example, an I/O class may be a container for other more specific I/O classes.

Data items

In a class, each of the related statistics is referred to as a data item. For example, a LogicalDisk class might include the following data items:

Percent Free Space

Free Megabytes

Current Disk Queue Length

Disk Writes per Second

A class can contain one or more data items. When you select a class in the navigator, the names of the data items and data sources associated with the data items are displayed on the Collection Details property page. Select the data items that you are interested in collecting.



A green check mark appears next to the name of each data item that is selected for collection. You can toggle the state of all data items between collected and not collected by clicking the Data Items column header that contains the check mark.

Data sources

A data source is a description of where Oracle Capacity Planner can collect the data items for a class. A class can contain one or more data sources. When you select a class in the navigator, the names of the data items and data sources associated with the class are displayed on the Collection Details property page.

The type of data in the class determines the data sources for the class. For example, in a LogicalDisk class, the data sources might be the individual disks for the operating system (for example, disk1, disk2, and so on). In a CPU Utilization class, the sources are the individual CPUs for the host (for example, cpu1, cpu2, and so on). You can collect data items from one or more data sources.



When you select a data source, a green check mark appears next to the name of the data source on the Collection Details property page. You can toggle the state of all data sources between collected and not collected by clicking the Data Sources column header that contains the check mark.

Starting Data Collection
After you select the data items that you want to collect from the available data sources on the Collection Details property page, click Apply to confirm your selections, or click Revert to cancel the selections.



When you click Apply for a class of data, Capacity Planner marks the icon for the selected class and the selected target with a green dot in the Collection view of the navigator panel. A green dot on a collection class icon indicates that one or more data sources are selected for the class. A green dot on a target icon indicates that Capacity Planner is collecting data for one or more classes in the target.

For example, in Figure 4-4, Capacity Planner is actively collecting data for the Processor, Memory, and Cache classes for node ANG-PC. Capacity Planner is not collecting data for the System collection class.

Figure 4-4 Green Dots on a Target or Class Indicate Capacity Planner is Collecting Data


Text description of the illustration cp_green_dots.gif
Stopping Data Collection
To stop collecting data for one or more data items or data sources for a collection class, follow these steps:

Select the collection class in the navigator.

On the Collection Details page, click the green check marks for the data items or data sources you no longer want collected. The check marks disappear.

Click Apply.

To stop collecting data for all data items and data sources for a collection class, follow these steps:

Select the collection class in the navigator.



On the Collection Details page, click Stop Collection.

Capacity Planner displays a message box stating that collections for the class have stopped.

Click OK to close the message box and return to the Capacity Planner main window.

When no data sources are selected for collection for a class of data, the icon for that class in the navigator does not display a green dot.

To stop collecting data for all classes in a target:

In collection view, select and connect to the target.

For more information, see "Selecting Targets".



Click Stop Collection on the Target property sheet.

Capacity Planner displays a confirmation box.

Click Yes to stop collecting data for all classes in the target; click No to cancel the operation.

Generating a Collection Report
A collection report provides a summary of the collection settings for one or more targets. Capacity Planner displays the collection settings in your Web browser. In the browser, you can easily review the settings currently applied to the target or targets you selected.

To display a collection report for one or more targets:

In collection view, on the File menu, choose Collection Report.

Capacity Planner displays the first screen of the Collection Report Wizard.

Use the Collection Report Wizard to specify the target or targets whose collection settings you want to include in the report and to generate and view the report.

At any time while using the wizard, click Help for more information.

When you click Finish on the last screen of the Collection Report Wizard, Capacity Planner saves the report to an HTML file and displays the report in your Web browser.

Copying Collection Settings
After you set up collection settings for one service, you can quickly copy those settings to another similar target.

To copy the collections settings from one target to another:

In collection view, select a target whose collection settings you want to copy.

On the File menu, choose Copy Collection Settings, or right-click the target and choose Copy Collection Settings from the context menu.

Capacity Planner displays the first screen of the Copy Collection Settings Wizard.

Use the Copy Collection Settings Wizard to specify the destination targets to which you would like to copy the collection settings for the selected target.

From the wizard, you can also view the progress of the copy operation. At any time while you are using the Copy Collection Settings Wizard, click Help for more information.


--------------------------------------------------------------------------------
Note:
Any destination target you select must be running an Agent of the same type and version as the target from which you are copying your settings.

--------------------------------------------------------------------------------




Troubleshooting Collection and Load Problems
During the collection and load processes, the Agent may encounter errors. You can determine whether data is not being collected or loaded by using either of these methods:

Browsing the analysis data in the Oracle Capacity Planner navigator. See "Browsing the Collected Data" for more information.

Accessing the historical database using SQL. See "Understanding the Oracle Capacity Planner Historical Database" for more information.

If data is not being collected or loaded, you should check the status of the Agent on the host from which data is being collected. If the Agent is running, check the log file ($ORACLE_HOME/network/log/dbsnmp.log) for errors related to the collection or loading of data. See the Oracle Intelligent Agent User's Guide for additional details on the configuration of the Agent.

Analyzing Collected Data
After collected data is loaded into the Oracle Capacity Planner historical database, you can perform a variety of analyses on it. To access the analysis features of Oracle Capacity Planner, click the Analysis tab (Figure 4-5) at the bottom of the Oracle Capacity Planner navigator tree view. This displays the Oracle Capacity Planner analysis view.

Figure 4-5 Clicking the Analysis Tab


Text description of the illustration cp_analysis_tab.gif
Connecting to an Oracle Capacity Planner Historical Database
When you expand the Historical Database folder at the root of the navigator tree in analysis view, you are prompted for the logon credentials for an Oracle Capacity Planner historical database.



Text description of the illustration cp_root_analysis.gif
After you connect to a historical database, the folder expands to show the targets for which data is available in that historical database.



Text description of the illustration cp_apps_analysis.gif
If you store data in multiple historical databases, you must disconnect from one historical database before you can connect to a different historical database.

To disconnect from the current historical database:

Click the Historical Database folder at the root of the navigator tree.



Click the Disconnect button in the Capacity Planner toolbar panel.

To connect to a different historical database, expand the Historical Database folder again and enter the credentials for the new historical database in the resulting logon dialog box.

Organization of Data in the Navigator
When you expand a target in the navigator under the Historical Database folder, two forms of data are displayed below the target:

Classes of data

The classes displayed for the target are the collection classes for which data is available in the Oracle Capacity Planner historical database. You can select and view data from these classes by selecting a class in the navigator, specifying the selection criteria, and then clicking Show New Chart. These operations are the first step in creating an analysis.

Analyses

Below each target in the navigator is an Analyses folder, which contains predefined and user-defined analyses. Predefined analyses are provided for some targets. You can create user-defined analyses by selecting a class of data, displaying an analysis of the data, and then saving the analysis. You can also create a user-defined analysis by using an existing analysis as a template, making modifications to the analysis, and then saving the analysis under a new name.

Browsing the Collected Data
To browse the collected data for a particular target:

In analysis view, expand the Historical Database folder.

You are prompted for the credentials for the historical database that contains the collected data that you want to analyze.

Enter your logon credentials and connect to the historical database.

On the navigator panel, expand the folder for the target whose collected data you want to analyze.

The navigator displays the following:

The names of the classes for which collected data exists in the historical database for the target

An Analyses folder, which contains predefined analyses and any user-defined analyses you have created for the target

Viewing an Analysis for a Class of Data
To view an analysis chart for a selected class of data:

On the navigator panel, click the name of a class whose collected data you want to analyze.

Capacity Planner displays the property page for the class (Figure 4-6).

To specify the selection criteria for the class of data you want to analyze, see "Specifying Selection Criteria for a Class of Data" for instructions.



Click Show New Chart.

Capacity Planner displays the Analysis window, which contains two or more pages. The Chart page displays the chart. The remaining pages are Details pages. A separate Details page exists for each class from which one or more data items are included in the analysis.

For information about the Analysis window, choose Window from the Analysis window Help menu. For information about modifying the analysis or changing the way the data is displayed, see "Working with Analyses".

Saving a Class Analysis as a User-Defined Analysis
After you view an analysis for a class of data, you can save it as a user-defined analysis to view later.

To save the analysis:



While viewing the analysis chart, click Save Chart on the analysis window toolbar.

Click Yes in the message box.

Capacity Planner places the new user-defined analysis in the Analyses folder of the navigator panel.

Figure 4-6 Oracle Capacity Planner Analysis View


Text description of the illustration cp_analysis_view.gif
Specifying Selection Criteria for a Class of Data
When you select a class in the navigator panel in analysis view, Capacity Planner displays a property sheet that contains options available for selecting data from the Oracle Capacity Planner historical database for that class.

Before you display a chart of the collected data, you must:

Select the time aggregation level from the Time Aggregation drop-down list.

For example, select Hours if you want to analyze hourly data, Days if you want to analyze daily data, and so on.

In the Data Sources section, select one or more data sources whose data items you want to analyze.

In the Data Items section, select one or more data items you want to analyze.



A check mark appears next to each data item that is selected for analysis.

After you specify the selection criteria, there are two ways to display the data in an analysis chart:



To add the selected data to a new analysis chart, click Show New Chart, which displays the new chart.



To add the selected data to an analysis chart that is already displayed in the Analysis window, click Add to Chart, then in the Add Data to Analysis dialog box, select the name of the previously-displayed analysis chart. Note that only charts that are currently opened are displayed in the Add Data to Analysis dialog box.

Viewing and Modifying User-Defined and Predefined Analyses
You can view and modify both user-defined and predefined analyses. However, once modifications are made to a predefined analysis, the only way to save those changes is to save the analysis to a new name. This creates a new user-defined analysis and preserves the original predefined analysis.


--------------------------------------------------------------------------------
Note:
Not all targets provide predefined analyses.

--------------------------------------------------------------------------------




To view an existing analysis, which makes it available for modification, follow these steps:

Expand the Analyses folder for the target, which displays the existing predefined and user-defined analyses for the target.



A bar chart icon identifies a predefined analysis. Not all targets provide predefined analyses.



A bar chart and person icon identifies a user-defined analysis. You create a user-defined analysis by displaying an analysis for a class of data or displaying a predefined analysis, and then saving the analysis.


--------------------------------------------------------------------------------
Note:
You can limit the list of displayed analyses to only user-defined analyses by removing the check mark from the View Predefined Analyses option on the Analysis menu. When you choose the View Predefined Analyses option on the Analysis menu again, the check mark reappears next to the menu option, and predefined analyses are displayed again.

--------------------------------------------------------------------------------




Select a predefined analysis or user-defined analysis in the navigator panel.

The Analysis Summary property page for the selected analysis is displayed in the detail panel. This property page displays the names of the targets and classes whose data is used in the analysis.

To view the data sources and data items in the analysis, click the plus sign (+) next to the target icons in the Service/Source column.

To view the chart for the analysis, click Show.

Note that if you are opening a predefined analysis, Capacity Planner displays the Select Data Sources for Analysis dialog box, because a predefined analysis requires you to specify the data sources available at your installation. Select the data sources you want to include in the predefined analysis and click OK.

After you have opened an analysis, you can modify the contents and how they are presented. See "Working with Analyses" for more information.

You can also create a copy of an analysis, rename an analysis, or delete an analysis, as explained in the next three sections.

Creating a Copy of an Analysis
You can create an analysis that is an exact copy of an existing analysis, but the copy has a different name. To do this:

In analysis view, locate and expand the target that contains the analysis you want to copy.

Expand the Analyses folder.

Select the analysis you want to copy.

On the Analysis menu, choose Create Like.

In the Create Analysis Like dialog box, enter a unique name for the new analysis that you are creating and click OK.

Capacity Planner creates the new copy of the analysis in the Analyses folder for the target.

Renaming a User-Defined Analysis

--------------------------------------------------------------------------------
Note:
You cannot rename a predefined analysis. This restriction ensures that the original predefined analysis charts are not overwritten.

However, after you make changes to a predefined analysis, you can save the modified version of the predefined analysis as a user-defined chart with a new name. For more information, see "Saving the Results of a Predefined Analysis".

--------------------------------------------------------------------------------




To rename a user-defined chart:

In analysis view, locate and expand the target that contains the analysis you want to rename.

Expand the Analyses folder.

Select the user-defined analysis you want to rename.



A bar-chart-and-person icon identifies user-defined analyses.

On the Analysis menu, choose Rename.

In the Rename Analysis dialog box, enter a unique name for the analysis you are renaming and click OK.

Deleting a User-Defined Analysis

--------------------------------------------------------------------------------
Note:
You cannot delete a predefined analysis.

--------------------------------------------------------------------------------




To delete a user-defined analysis:

In analysis view, locate and expand the target that contains the analysis you want to delete.

Expand the Analyses folder.



A bar-chart-and-person icon identifies the user-defined analyses in the folder.

Select the user-defined analysis that you want to delete.

On the Analysis menu, choose Delete.

In the message dialog box that prompts you to confirm the deletion, click OK.

Working with Analyses
After you have opened an analysis chart, either by selecting and showing an existing analysis or by browsing collected data to create a new analysis, the data included in that analysis is shown in the Oracle Capacity Planner Analysis window.

The Analysis window contains two or more pages. The first page is called the Chart page and it displays the analysis chart. The remaining page or pages are Details pages. A separate Details page exists for each class from which one or more data items are included in the analysis. Each Details page shows the:

Data sources for the data items plotted in the chart

Time aggregation level of the plotted data items

Value of each plotted data item each time the data item was collected during the time range for the analysis

The time range for the collected data appears along the bottom of each chart, with the beginning of the time range at the left and the end of the time range at the right.

Two types of lines can appear on a chart. Each solid line plots the values of a selected data item during the specified time range. Each dashed line plots the result of a trend analysis to time or trend analysis to value. See "What Is a Trend Analysis?" for more information about performing a trend analysis.

The rest of this section describes other features of the Analysis window.

Viewing the Value of a Data Point
To view the value for a data point (that is, one plotted point of a data item) on a line in an analysis chart, position the cursor over the data point. If there is a point beneath the cursor, Capacity Planner displays the date, time, and value for that data point in a box above the point (see Figure 4-7).

Figure 4-7 Viewing the Value of a Data Point


Text description of the illustration cp_cursor_over_data_point.gif
Selecting a Line
When you view an analysis chart, you can perform certain operations after selecting a line in the chart. Most of the available operations are started by clicking a toolbar button on the analysis window after a chart line is selected.

To select a line, click a data point on the line. When a line is selected, the description of the line is displayed in the status bar at the bottom of the window, and the appropriate toolbar buttons are enabled.

Operations available from the toolbar are also available on a context menu. To display the context menu, click the right mouse button on a data point in a line in the analysis chart.

To be sure which line is currently selected (before clicking a toolbar button), read the description of the line in the status bar.

Adding a Class of Data to an Analysis
After you open an analysis, you may want to add data from other classes to the analysis. You can use this feature to compare two different classes of data.

To add a class of data to an analysis that is currently displayed in the Analysis window:

Confirm that Capacity Planner is in analysis view.

In the navigator, select the class of data that you want to add.

Specify the selection criteria (as described in "Specifying Selection Criteria for a Class of Data") for the class.



Click Add to Chart.

In the Add Data to Analysis dialog box, select the name of the chart to which you want to add the data.

You can add data items from multiple classes to an analysis, and you can select different aggregation levels and date ranges for each class.

However, data items within a single class must share the same aggregation level and date range. If you try to add more data items from the same class to an existing chart, Capacity Planner warns you that the chart already contains data from that class. If you click OK, Capacity Planner overwrites the chart with the currently selected data items, aggregation level, and date range.

Modifying Selection Criteria from the Analysis Window
From the analysis window, you can modify the selection criteria associated with any of the classes of data included in the analysis. Modify the selection criteria using one of these methods:



Select a line in the analysis chart and click the Select Items toolbar button.

Click the right mouse button on a data point in the line and choose Select Items from the context menu.

Click the Data tab in the analysis window (as opposed to the Chart tab) and click the Select Items toolbar button.

These actions display the Select Items dialog box, which contains the property sheet for the class of data associated with the selected line. This property sheet contains the selection criteria for that class of data, which you can modify and apply to the current analysis. See "Specifying Selection Criteria for a Class of Data" for more information.

After you modify selection criteria, you can click either Apply or OK in the Select Items dialog box. If you click Apply, the changes made to the selection criteria are applied to the analysis and the updated analysis is shown; the Select Items dialog box remains displayed, so you can make further modifications to the same selection criteria if you want. If you click OK, Capacity Planner applies the modifications to the analysis and closes the Select Items dialog box.

Removing a Line from a Chart
You can remove a line from an analysis chart by either of these methods:

Select the line by clicking the right mouse button, then choose Remove Line from the context menu.



Select the line, then click the Remove Line button on the analysis window toolbar.

These actions remove the data for the selected data item from the chart, but not from the historical database.

Excluding a Data Point from a Chart
You can exclude a data point (that is, one plotted point of a data item) from a chart. To exclude a data point, click the right mouse button on the data point and choose Exclude Point from the context menu.

Excluding a data point is useful if you know that the value of a particular data point is skewed. For example, if you know that a particular CPU-intensive program was not run at the usual time, you can remove the value for the % Processor Time data item for that time from the chart. The selected data point is removed from the chart, but the data for the data point is not removed from the historical database.

After you exclude one or more data points from a line, a red X appears in the status bar when you select the line.



To later redisplay excluded data points for a selected line, click View Exclusions on the Analysis window toolbar, or click the right mouse button on the line and select View Exclusions from the context menu.

These actions display the Data Point Exclusions dialog box, which displays the current list of excluded points for the selected line. You can toggle the check mark for each data point in the list. A check mark indicates that the data point is excluded from the analysis. To display an excluded data point in the analysis again, clear the check mark.

When you click OK, Capacity Planner closes the Data Point Exclusions dialog box and updates the chart to include the points removed from the list of exclusions.


--------------------------------------------------------------------------------
Note:
Data point exclusions are not saved with an analysis.

--------------------------------------------------------------------------------




Showing or Hiding the Chart Legend


You can control whether the legend is displayed in an analysis chart by toggling the Show/Hide Legend button on the analysis window toolbar. When the legend is not shown, you can identify a line by clicking it, which displays the description of the line in the analysis window status bar.

Zooming and Scrolling to View More or Less Chart Data
Depending on the date range and the number of data items and data sources you are analyzing, you may want to zoom in, zoom out, or scroll the contents of the chart window.



To view more detail or less detail, click the Zoom In and Zoom Out buttons on the Analysis Chart window toolbar. You can also zoom in by pressing the Page Down key on your keyboard or the numeric 3 on your keypad, and you can zoom out by pressing the Page Up key or the numeric 9 on your keypad.

You can also zoom in on a very specific part of the chart by dragging a rectangle around the area you want to magnify (Figure 4-8). Release the mouse button to zoom in on the selected area. You can zoom back out by using the Zoom Out button, or by pressing the Home or End key on your keyboard.

Figure 4-8 Zooming in on a Selected Area of the Chart


Text description of the illustration cp_zoom_rect.gif
Often, the chart data will extend beyond the width of the Analysis Chart window. You can view the data that extends beyond the edges of the window by resizing the Analysis Chart window or by scrolling to the right or left.

To scroll the Analysis chart window:

Press and drag the pointer on the black scroll bar below the chart (Figure 4-9).

When the portion of the chart you want to view is displayed in the Analysis Chart window, release the mouse button.

Optionally, you can use the arrow keys on your keyboard to scroll left and right. Press the left arrow to scroll left and the right arrow to scroll right. If you experience a conflict with the Tab function using the left and right arrow keys, you can use the up and down arrow keys on your keyboard to scroll. Press the up arrow to scroll left (or the numeric 8 on your keypad), or the down arrow key to scroll right (or the numeric 2 on your keypad).

Figure 4-9 Scroll Bar in the Analysis Chart Window


Text description of the illustration cp_scroll_chart.gif
Auto-Scaling Data
If an analysis contains data from different classes where the scale of the data is significantly different, you may want to apply scaling to the data. Scaling normalizes the data so that all lines have a maximum value of no greater than 100. Very large data items are scaled down, and very small data items are scaled up.



To apply scaling, click Toggle Auto Scaling on the analysis window toolbar. When scaling is applied to a chart, data lines with very different scales may be visually correlated. When scaling is enabled, Capacity Planner displays the label Scaled Data along the Y-axis of the chart. If you select a line from the chart by clicking it, the scaling factor for that line is included in the status bar.
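As a rough sketch of the kind of normalization described above, each line can be divided by its own peak value so that its maximum becomes 100. The function name and the exact scaling rule here are illustrative assumptions, not Capacity Planner's actual algorithm:

```python
def auto_scale(series):
    """Normalize each data series so that its largest value is 100.

    `series` maps a line label to a list of numeric samples. Returns the
    scaled series and the scaling factor applied to each line (the factor
    Capacity Planner shows in the status bar plays a similar role).
    """
    scaled, factors = {}, {}
    for label, values in series.items():
        peak = max((abs(v) for v in values), default=0.0)
        # Large series are scaled down, small series are scaled up;
        # an all-zero series is left unchanged.
        factor = 100.0 / peak if peak else 1.0
        factors[label] = factor
        scaled[label] = [v * factor for v in values]
    return scaled, factors
```

For example, a line peaking at 4000 I/O operations per second and a line peaking at 80 percent CPU would both be drawn with a maximum of 100, with factors 0.025 and 1.25 respectively.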

What Is a Trend Analysis?
A trend analysis infers or estimates future values by projecting and extending known values. Oracle Capacity Planner can perform the following types of trend analyses:

Trend analysis to a point in time

In a trend analysis to a point in time, you provide a goal time for a data item. Oracle Capacity Planner uses the specified values in the historical database to project the value of the data item at the goal time.

A trend analysis to a point in time is used to answer questions such as, "How full will this disk be at the end of the year?"

Trend analysis to a value

In a trend analysis to a value, you provide a goal value for a data item. Oracle Capacity Planner uses the specified values in the historical database to project the time at which the data item will reach the goal value.

A trend analysis to a value is used to answer questions such as, "When will our CPU utilization reach 95 percent?"

By default, the Capacity Planner Trend Analysis Wizard uses the critical threshold for the metric (if one is defined) when you perform a trend to a value.
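The two kinds of trend analysis described above can be illustrated with a simple least-squares line fitted to historical samples. The choice of linear regression and the function names are assumptions for illustration only; this section does not document Capacity Planner's actual projection method:

```python
def fit_trend(times, values):
    """Ordinary least-squares line through (time, value) samples."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return slope, mv - slope * mt  # (slope, intercept)

def value_at(times, values, goal_time):
    """Trend to a point in time: projected value at goal_time."""
    slope, intercept = fit_trend(times, values)
    return slope * goal_time + intercept

def time_of(times, values, goal_value):
    """Trend to a value: time at which the trend line reaches goal_value."""
    slope, intercept = fit_trend(times, values)
    return (goal_value - intercept) / slope
```

With samples growing steadily from 10 to 40 over four periods, `value_at` answers "how full at the end of the year?" and `time_of` answers "when do we reach 95 percent?".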

Specifying and Performing a Trend Analysis
To specify and perform a trend analysis, follow these steps:

In the Analysis window, either:



Select a line by clicking a data point in the line, then click the Trend Analysis Wizard button on the toolbar.

Click the right mouse button on a data point in a line, then select Trend Analysis Wizard from the context menu.

Both of these methods display the Trend Analysis Wizard.

Capacity Planner displays the Trend Analysis Wizard Welcome panel, which identifies the selected data on which you can perform the trend analysis. You can select the option that prevents the Welcome panel from displaying in subsequent uses of the wizard. Click Next to move to the next wizard panel.

In the Date Range panel, select a date range to use as a basis for the trend analysis:

Click Use only the visible date range (as currently zoomed in the chart) to use the range of dates currently visible in the analysis chart window (not including any data that extends beyond the edge of the chart window).

This option is enabled only if you zoomed in to see more detail in the chart. For more information, see "Zooming and Scrolling to View More or Less Chart Data" .

Click Use the date range selected for the chart to use the date range currently used for all the data in the chart (even data that extends beyond the edge of the chart window).

Click Select a new chart range to select a different date range. You can designate a recent period of time by choosing Use a recent period of time and then selecting the time increment and the number of units of time to use, or you can specify a new start and end time by choosing Use a new start and end time.

Click Next to move to the next wizard panel.

In the Trend Analysis Type panel of the wizard, specify which type of trend analysis you want to perform.

To perform a trend analysis to a value, click Estimate when the data item will reach the following value and enter the numeric value for which you want to determine the date and time.

To perform a trend analysis to a future date and time, click Estimate what the value of the data item will be at the following date and enter the date and time for which you want to determine a value.

Click Next to move to the next wizard panel.

The Summary panel displays the results of the trend analysis. Click Finish to add the trend line to the chart.

Choose Customize Annotation from the View menu to display the Annotate Trend dialog box and make changes to the annotation. Alternatively, you can click the right mouse button on the annotation created by the Trend Analysis Wizard and then choose Customize Annotation from the context menu.

Use the Annotate Trend dialog box to specify whether Capacity Planner will annotate the trend analysis (see Figure 4-10) and the appearance of the trend analysis line in the chart window. If you need information about the fields in the Annotate Trend dialog box, click Help.

Figure 4-10 Annotation for a Trend Analysis


Text description of the illustration cp_trent_annotation.gif
Click OK to close the Annotate Trend dialog box.


--------------------------------------------------------------------------------
Note:
To move the annotation box in an analysis chart, click the box, hold the mouse button down, then move the mouse to a different position on the chart. Release the mouse button when you have reached the position in the chart where you want the annotation box displayed.

--------------------------------------------------------------------------------




Saving Changes to a User-Defined or Predefined Analysis
You can save the changes you make to an analysis. Oracle Capacity Planner determines whether you are saving a user-defined analysis or a predefined analysis, and the steps for saving your changes differ depending on the type of analysis.

Saving the Results of a User-Defined Analysis
To save changes to a user-defined analysis:



While viewing the analysis, click Save Chart on the analysis window toolbar.

Click Yes in the message box.

Your changes are saved in the Analyses folder for the selected target using the current name of the analysis or data class. To save the analysis under a different name, choose Save As from the Analysis window File menu.

Saving the Results of a Predefined Analysis
If you modify a predefined analysis, you must provide a new name for the analysis when you save the results of the analysis. When you do this, you create a new user-defined analysis. Saving the analysis with a new name ensures that the original predefined analysis is not overwritten.

To save the results of a predefined analysis:



While viewing the analysis, click Save Chart on the analysis window toolbar.

Click Yes in the dialog box that is displayed.

In the Save Analysis As dialog box, type a unique name for the user-defined analysis that will be created.

Click OK in the Save Analysis As dialog box.

Capacity Planner places the new user-defined chart in the Analyses folder of the navigator panel.

Printing an Analysis Chart


To print an analysis chart, click the Print Chart button in the Analysis Chart toolbar while the chart is displayed.

The size of the chart on the screen determines the size of the chart in the chart printout. For example, when you print a chart that is 5 inches high and 7 inches wide on your screen, the chart will be 5 inches high and 7 inches wide in the printout.

Generating a Report for an Analysis Chart


To generate an HTML report for an analysis chart:

Click the Report Chart button while the chart is displayed.

Capacity Planner displays a dialog box that tells you the name of the HTML file used to display the report and where the file will be saved.

Make a note of the file name and location.

Click Yes to preview the chart in your Web browser, or click No if you do not want to preview the report.

You can view the report later using a Web browser. If you have experience with Web publishing tools and techniques, you can also share the report with your co-workers by publishing the HTML file and its associated graphic files on a Web server.

Understanding the Oracle Capacity Planner Historical Database
The logical structure of the Oracle Capacity Planner historical database schema was designed to facilitate querying of capacity planning data by tools other than Oracle Capacity Planner, such as Microsoft Excel. This section contains a brief description of the design of the Oracle Capacity Planner historical database schema. It is not intended to be a comprehensive description of all tables and columns in the schema.


--------------------------------------------------------------------------------
Caution:
Performing any modification to the Oracle Capacity Planner historical database schema (tables, columns, or indexes) or to the data itself is not recommended and is not supported. Such modifications may render your Oracle Capacity Planner historical database schema unusable as a target to load or retrieve capacity planning data.

--------------------------------------------------------------------------------




Understanding the Service Reference Table (vp_service_ref)
The service reference table contains a list of all targets for which data has been stored in the Oracle Capacity Planner historical database. The HOST column contains the name of the host where the Agent that loaded data for the specified target (the NAME column) is located. The SVC_ID column is used to look up entries in the class reference table, described in the next section.

Understanding the Class Reference Table (vp_class_ref)
The class reference table contains a list of all collection classes for which data has been stored in the Oracle Capacity Planner historical database, for all targets. To see the list of classes for a particular target, restrict a query to rows whose SVC_ID column equals the SVC_ID value for that target in the service reference table, described in the previous section. Such a query produces a list of all classes for which data exists in the database for that target.
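The lookup described above can be sketched with an in-memory sqlite3 database standing in for the historical database. Only the columns named in this section and in Example 4-1 are modeled, and the target name and sample rows are made up for illustration; in practice you would run an equivalent query against the repository database itself:

```python
import sqlite3

# Stand-in for the historical database schema (illustrative columns only).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE vp_service_ref (host TEXT, name TEXT, svc_id INTEGER);
CREATE TABLE vp_class_ref  (cls_id INTEGER, mod_id INTEGER, svc_id INTEGER,
                            is_uds INTEGER, container_class_label TEXT);
INSERT INTO vp_service_ref VALUES ('nlsun1', 'orcl.world', 2);
INSERT INTO vp_class_ref   VALUES (8, 2, 2, 0, 'I/O');
INSERT INTO vp_class_ref   VALUES (9, 2, 2, 0, 'Logical Disk');
""")

# List every collection class stored for one target: restrict vp_class_ref
# to the SVC_ID found for that target in vp_service_ref.
rows = db.execute("""
    SELECT c.container_class_label
    FROM vp_class_ref c
    JOIN vp_service_ref s ON s.svc_id = c.svc_id
    WHERE s.name = 'orcl.world'
    ORDER BY c.cls_id
""").fetchall()
print([label for (label,) in rows])
```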

Understanding the Data Tables
Each class of data in the class reference table has six data tables that contain the data for that collection class. The base table contains the raw data as it is collected, at the collection frequency, and therefore contains no statistical aggregations. The five aggregate tables contain aggregate statistics for each data item collected for that class, one table for each time aggregate: hours, days, weeks, months, and years.

Data Table Names
Data tables are named according to the first four columns of the class reference table (in reverse order) plus the aggregate. Example 4-1 shows an entry for a collection class in the class reference table.

Example 4-1 Collection Class Entry in the Class Reference Table
CLS_ID MOD_ID SVC_ID IS_UDS CONTAINER_CLASS_LABEL
---------- ---------- ---------- ---------- -------------------------------
8 2 2 0 I/O


The collection class in Example 4-1 would have the data tables shown in Example 4-2.

Example 4-2 Data Tables for a Collection Class
VP_0_2_2_8_BASE
VP_0_2_2_8_HOUR
VP_0_2_2_8_DAY
VP_0_2_2_8_WEEK
VP_0_2_2_8_MONTH
VP_0_2_2_8_YEAR
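The naming rule above (the first four class-reference columns in reverse order, followed by the aggregate) can be sketched as a small helper; the function name is hypothetical:

```python
AGGREGATES = ("BASE", "HOUR", "DAY", "WEEK", "MONTH", "YEAR")

def data_table_names(cls_id, mod_id, svc_id, is_uds):
    """Build the six data-table names for one class-reference row.

    The first four columns of vp_class_ref appear in reverse order
    (IS_UDS, SVC_ID, MOD_ID, CLS_ID), followed by the aggregate name.
    """
    prefix = f"VP_{is_uds}_{svc_id}_{mod_id}_{cls_id}"
    return [f"{prefix}_{agg}" for agg in AGGREGATES]
```

Applied to the row in Example 4-1 (CLS_ID 8, MOD_ID 2, SVC_ID 2, IS_UDS 0), this reproduces the names in Example 4-2.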
Data Table Contents
The first two columns of each data table are always the Timestamp column and the Data Source Name column. The Data Source Name column is essentially the class name, for example, Logical Disk. In aggregate tables (tables other than BASE), the third column is an accuracy calculation, which is not supported in this version of Oracle Capacity Planner.

All subsequent columns contain the values for the data items collected for that class, as they are actually named. In aggregate tables, these columns are prefixed with a two-character abbreviation for the aggregation statistic that the column contains. Table 4-4 shows these abbreviations and their meanings.

Table 4-4 Prefixes Used for Column Names in Aggregate Tables

Prefix   Meaning
------   ------------------
AV       Average
SM       Sum
MN       Minimum
MX       Maximum
SD       Standard deviation
VR       Variance
CT       Count


These statistics are calculated for the set of data in that aggregate from the next lowest level. For example, a row in the DAY aggregate table contains statistics calculated for all entries in the HOUR table for that day.
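The statistics behind the column prefixes in Table 4-4 can be sketched as follows. Whether Capacity Planner uses population or sample formulas for standard deviation and variance is not stated here, so population statistics are an assumption:

```python
import statistics

# One aggregation function per column prefix from Table 4-4.
PREFIXES = {
    "AV": statistics.mean,       # Average
    "SM": sum,                   # Sum
    "MN": min,                   # Minimum
    "MX": max,                   # Maximum
    "SD": statistics.pstdev,     # Standard deviation (population: assumption)
    "VR": statistics.pvariance,  # Variance (population: assumption)
    "CT": len,                   # Count
}

def aggregate(values):
    """Compute one aggregate row from the next lowest level's samples,
    e.g. a DAY row from that day's HOUR values, keyed by column prefix."""
    return {prefix: fn(values) for prefix, fn in PREFIXES.items()}
```

For example, a DAY row for a data item whose HOUR values were 2, 4, 6, and 8 would contain AV 5, SM 20, MN 2, MX 8, and CT 4.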