Wednesday, July 6, 2011

How to open Oracle Apps/Forms from your Server/LINUX Machine

Steps to open OAPPS Forms from your LINUX Machine.

I did the following steps and I can now successfully open OAPPS forms in Linux.

I installed Linux 5 and then Apps R12.1.1, but did not install Mozilla Firefox separately. Rather, I used the default Mozilla that gets installed with Linux.

Steps I did:

1. Checked the Java plugin version in $CONTEXT_FILE

[applmgr@jeddah ~]$ grep plugin $CONTEXT_FILE

(the output shows the plugin version, 1.6.0_07, and the plugin type, jdk)

2. Downloaded exactly the same version of the plugin.

Link to download:
http://java.sun.com/products/archive/j2se/6u7/index.html

-- Under the heading "JDK 6u7 includes the JVM technology", click on "Download JRE" for the J2SE Runtime Environment (JRE)
-- Select your specific platform (Linux, as I did)

-- This will take you to the available files
-- Click on "jdk-6u7-linux-i586-rpm.bin" to download it to your machine

3. WinSCP the file to your Linux machine at some location.

4. Change the file permissions:
"chmod 755 jdk-6u7-linux-i586-rpm.bin"

5. Execute this file as:

[root@jeddah ~]# ./jdk-6u7-linux-i586-rpm.bin (you will be shown the license agreement; press Enter to page through it and, when prompted to accept the license, enter Yes)

-- This will install JRE 1.6.0_07 on the Linux machine under /usr/java, and the plugin is installed at
/usr/java/jre1.6.0_07/plugin/i386/ns7/libjavaplugin_oji.so

6. Configure JRE Plug-in in browser
-- Go to the default Mozilla Firefox plug-in directory, which is:
[root@jeddah ~]# cd /usr/lib/mozilla/plugins


Create the soft link:
ln -s /usr/java/jre1.6.0_07/plugin/i386/ns7/libjavaplugin_oji.so ./libjavaplugin_oji.so
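
To sanity-check the setup (a minimal sketch; the paths assume the default install locations used above), confirm that the symlink resolves to the plugin library and that the new JRE runs:

[root@jeddah ~]# ls -l /usr/lib/mozilla/plugins/libjavaplugin_oji.so
[root@jeddah ~]# /usr/java/jre1.6.0_07/bin/java -version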

THAT's ALL.

Now, open a fresh Firefox window and log in to access your FORMS.... HAVE FUN!!

regards
MD Aijaz

Wednesday, June 15, 2011

RMAN Configuration and Implementation

This document discusses RMAN features and implementation steps. Here I used GOLD as the catalog database and PROD as my target database.

GOLD is a normal core Oracle Database Version 11.1.0.7.0
PROD is an Oracle Application Database
Database Version: 11.1.0.7.0
Application Version: R12, 12.1.1

Creating recovery catalog: @GOLD

Create Tablespace:
create tablespace rman_ts datafile '/u01/kamal/oradata/GOLD/rman_ts.dbf' size 50m;

User Creation:
create user rman_ts identified by rman_ts default tablespace rman_ts quota unlimited on rman_ts;

Giving required grants to the user:
grant connect,resource to rman_ts;
grant recovery_catalog_owner to rman_ts;

Creating catalog:

Exit from the SQL prompt and issue this command from the OS prompt:
rman catalog rman_ts/rman_ts
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jun 6 23:10:56 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to recovery catalog database

RMAN> create catalog tablespace rman_ts;

recovery catalog created

RMAN> exit
Recovery Manager complete.


Registering target database:

Now, on the Target Database i.e; PROD run:

[oracle@jeddah ~]$ rman target system/manager

Recovery Manager: Release 11.1.0.7.0 - Production on Mon Jun 13 15:37:19 2011

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: PROD (DBID=165212483)

RMAN> connect catalog rman_ts/rman_ts@GOLD

connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN> exit
Recovery Manager complete.

How to check whether the database is registered:
Connect to the catalog database as the catalog user and query RC_DATABASE. This will list all the databases that are registered in the recovery catalog.

SQL> sho user
USER is "RMAN_TS"
SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
GOLD.JEDDAH.COM.SA

SQL> select * from rc_database;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
         1          2  165212483 PROD            5.9651E+12 09-JUN-11

With this, your RMAN configuration is done; you can now start taking backups.
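
Optionally, before taking backups you may want to set a few persistent RMAN settings. The following is only a sketch; the redundancy value and the control file autobackup location are assumptions, so adjust them to your environment:

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/RMAN_BACKUP/cf_%F';
RMAN> SHOW ALL;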


SQL> sho user
USER is "SYS"
SQL> startup
ORACLE instance started.

Total System Global Area 1071333376 bytes
Fixed Size 1318172 bytes
Variable Size 427819748 bytes
Database Buffers 629145600 bytes
Redo Buffers 13049856 bytes
Database mounted.
Database opened.
SQL>
SQL>
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/oracle/db/tech_st/11.1.0/dbs/arch
Oldest online log sequence     3
Next log sequence to archive   4
Current log sequence           4
SQL>
SQL>
SQL> !ls -ltr /u01/oracle/db/tech_st/11.1.0/dbs/arch
total 926212
-rw-r----- 1 oracle dba 947508736 Jun 13 17:32 1_3_753374150.dbf

SQL> alter system switch logfile;

System altered.

SQL> !ls -ltr /u01/oracle/db/tech_st/11.1.0/dbs/arch
total 926660
-rw-r----- 1 oracle dba 947508736 Jun 13 17:32 1_3_753374150.dbf
-rw-r----- 1 oracle dba 452096 Jun 13 17:35 1_4_753374150.dbf

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@jeddah ~]$
[oracle@jeddah ~]$


[oracle@jeddah ~]$ rman target system/manager

Recovery Manager: Release 11.1.0.7.0 - Production on Mon Jun 13 17:37:17 2011

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: PROD (DBID=165212483)

RMAN> connect catalog rman_ts/rman_ts@GOLD

connected to recovery catalog database

RMAN> RUN {
ALLOCATE CHANNEL c1 TYPE DISK;
backup
format '/u01/RMAN_BACKUP/df_%d_%s_%t.bkp'
database;
}

allocated channel: c1
channel c1: SID=362 device type=DISK

Starting backup at 13-JUN-11
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00401 name=/u01/oracle/db/apps_st/data/a_txn_data02.dbf
input datafile file number=00406 name=/u01/oracle/db/apps_st/data/a_txn_ind05.dbf
input datafile file number=00353 name=/u01/oracle/db/apps_st/data/system08.dbf
input datafile file number=00379 name=/u01/oracle/db/apps_st/data/undo01.dbf
input datafile file number=00403 name=/u01/oracle/db/apps_st/data/a_txn_ind02.dbf
input datafile file number=00394 name=/u01/oracle/db/apps_st/data/a_ref01.dbf
input datafile file number=00400 name=/u01/oracle/db/apps_st/data/a_media01.dbf
input datafile file number=00402 name=/u01/oracle/db/apps_st/data/a_txn_data03.dbf
input datafile file number=00392 name=/u01/oracle/db/apps_st/data/a_txn_data01.dbf
input datafile file number=00013 name=/u01/oracle/db/apps_st/data/system12.dbf
input datafile file number=00407 name=/u01/oracle/db/apps_st/data/a_ref02.dbf
input datafile file number=00396 name=/u01/oracle/db/apps_st/data/a_summ01.dbf
input datafile file number=00404 name=/u01/oracle/db/apps_st/data/a_txn_ind03.dbf
input datafile file number=00393 name=/u01/oracle/db/apps_st/data/a_txn_ind01.dbf
input datafile file number=00405 name=/u01/oracle/db/apps_st/data/a_txn_ind04.dbf
input datafile file number=00395 name=/u01/oracle/db/apps_st/data/a_int01.dbf
input datafile file number=00003 name=/u01/oracle/db/apps_st/data/system03.dbf
input datafile file number=00002 name=/u01/oracle/db/apps_st/data/system02.dbf
input datafile file number=00004 name=/u01/oracle/db/apps_st/data/system04.dbf
input datafile file number=00001 name=/u01/oracle/db/apps_st/data/system01.dbf
input datafile file number=00354 name=/u01/oracle/db/apps_st/data/system11.dbf
input datafile file number=00005 name=/u01/oracle/db/apps_st/data/system05.dbf
input datafile file number=00015 name=/u01/oracle/db/apps_st/data/a_txn_ind06.dbf
input datafile file number=00014 name=/u01/oracle/db/apps_st/data/a_txn_data04.dbf
input datafile file number=00398 name=/u01/oracle/db/apps_st/data/a_archive01.dbf
input datafile file number=00295 name=/u01/oracle/db/apps_st/data/system06.dbf
input datafile file number=00351 name=/u01/oracle/db/apps_st/data/system07.dbf
input datafile file number=00288 name=/u01/oracle/db/apps_st/data/system10.dbf
input datafile file number=00352 name=/u01/oracle/db/apps_st/data/system09.dbf
input datafile file number=00008 name=/u01/oracle/db/apps_st/data/a_queue02.dbf
input datafile file number=00012 name=/u01/oracle/db/apps_st/data/apps_ts_tools01.dbf
input datafile file number=00399 name=/u01/oracle/db/apps_st/data/a_queue01.dbf
input datafile file number=00011 name=/u01/oracle/db/apps_st/data/sysaux01.dbf
input datafile file number=00016 name=/u01/oracle/db/apps_st/data/a_ref03.dbf
input datafile file number=00018 name=/u01/oracle/db/apps_st/data/sysaux02.dbf
input datafile file number=00314 name=/u01/oracle/db/apps_st/data/portal01.dbf
input datafile file number=00397 name=/u01/oracle/db/apps_st/data/a_nolog01.dbf
input datafile file number=00020 name=/u01/oracle/db/apps_st/data/undo2.dbf
input datafile file number=00006 name=/u01/oracle/db/apps_st/data/ctxd01.dbf
input datafile file number=00010 name=/u01/oracle/db/apps_st/data/olap.dbf
input datafile file number=00017 name=/u01/oracle/db/apps_st/data/a_int02.dbf
input datafile file number=00009 name=/u01/oracle/db/apps_st/data/odm.dbf
input datafile file number=00007 name=/u01/oracle/db/apps_st/data/owad01.dbf
input datafile file number=00019 name=/u01/oracle/db/apps_st/data/olap01.dbf
channel c1: starting piece 1 at 13-JUN-11
channel c1: finished piece 1 at 13-JUN-11
piece handle=/u01/RMAN_BACKUP/df_PROD_3_753730668.bkp tag=TAG20110613T173747 comment=NONE
channel c1: backup set complete, elapsed time: 00:37:23
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current control file in backup set
channel c1: starting piece 1 at 13-JUN-11
channel c1: finished piece 1 at 13-JUN-11
piece handle=/u01/RMAN_BACKUP/df_PROD_4_753732911.bkp tag=TAG20110613T173747 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:03
Finished backup at 13-JUN-11
released channel: c1

RMAN>
RMAN> exit
Recovery Manager complete.
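
Once the backup completes, a few follow-up commands (a hedged sketch, run while connected to the target and catalog as above) can be used to verify and maintain it; the archive log format below simply follows the same naming convention used earlier, and DELETE INPUT removes the archived logs from disk once they are safely in the backup:

RMAN> LIST BACKUP SUMMARY;
RMAN> CROSSCHECK BACKUP;
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/u01/RMAN_BACKUP/al_%d_%s_%t.bkp' DELETE INPUT;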

Sunday, January 2, 2011

RAC: Real Application Cluster

What is RAC?
RAC stands for Real Application Clusters. It is a clustering solution from Oracle Corporation that ensures high availability of databases by providing instance failover and media failover features.

Oracle RAC software components:-

Oracle RAC is composed of two or more database instances. They are composed of memory structures and background processes, the same as a single-instance database. Oracle RAC instances use two services, GES (Global Enqueue Service) and GCS (Global Cache Service), which enable Cache Fusion.
Oracle RAC instances also include the following background processes:
ACMS—Atomic Controlfile to Memory Service (ACMS)
GTX0-j—Global Transaction Process
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
RMSn—Oracle RAC Management Processes (RMSn)
RSMN—Remote Slave Monitor
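
The instances that make up a RAC database can be listed from any node; a minimal sketch using the GV$INSTANCE view:

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;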

Friday, March 27, 2009

Setup Oracle Database Archivelog mode

A short musical video tutorial showing how easily you can configure an Oracle database to use archivelog mode. Using archivelog mode is highly recommended for any production database and allows RMAN hot backups (online backups). You can also use RMAN without archivelog mode, but you'll have to shut down the database to back it up (cold backup).

Let's System of a Down :) Beware... it is highly advisable to TURN OFF the sound if you don't like hard rock :P

Switching to archivelog mode can be done easily with the following statements:

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database archivelog;

SQL> alter database open;

If your database version is prior to 10g, then after the shutdown and before opening the database you must set LOG_ARCHIVE_START = TRUE in your init.ora and/or spfile. After that, log in as "sys" and you can check it with:

SQL> archive log list;

SQL> select log_mode from v$database;

Note: In this case, we need to do a quick restart of the database to switch it to archivelog mode, so please ensure you have the required outage window.

How can one improve Import/ Export performance?

EXPORT:
1) Set the BUFFER parameter to a high value (e.g. 2Mb -- entered as an integer "2000000")
2) Set the RECORDLENGTH parameter to a high value (e.g. 64Kb -- entered as an integer "64000")
3) Use DIRECT=yes (direct mode export)
4) Stop unnecessary applications to free-up resources for your job.
5) If you run multiple export sessions, ensure they write to different physical disks.
6) DO NOT export to an NFS mounted filesystem. It will take forever. (A sample export command follows this list.)
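
As an illustration only, an export using the parameters above might look like the following; the username, owner and file names are placeholders, and note that BUFFER affects conventional-path exports only (it is ignored when DIRECT=y):

exp system/manager OWNER=scott FILE=exp_scott.dmp LOG=exp_scott.log BUFFER=2000000 RECORDLENGTH=64000 DIRECT=y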

IMPORT:
1) Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
2) Place the file to be imported on a separate physical disk from the oracle data files
3) Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
4) Set the LOG_BUFFER to a big value and restart oracle.
5) Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
6) Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as your biggest table (I think?)
7) Use COMMIT=N in the import parameter file if you can afford it.
8) Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics.
9) Remember to run the indexfile created previously (see the sample commands after this list).
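
A corresponding import could first generate the index file and then load the data without indexes; this is a sketch only, with placeholder user and file names:

imp system/manager FILE=exp_scott.dmp FROMUSER=scott TOUSER=scott INDEXFILE=scott_indexes.sql
imp system/manager FILE=exp_scott.dmp FROMUSER=scott TOUSER=scott INDEXES=n IGNORE=y COMMIT=n STATISTICS=none LOG=imp_scott.log

Afterwards, edit scott_indexes.sql to supply the schema passwords and run it to build the indexes.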

Common Import/ Export Problems

ORA-00001: Unique constraint (...) violated
You are importing duplicate rows. Use IGNORE=YES to skip tables that already exist (imp will give an error if the object is re-created).

ORA-01555: Snapshot too old
Ask your users to STOP working while you are exporting or try using parameter CONSISTENT=NO

ORA-01562: Failed to extend rollback segment
Create bigger rollback segments or set parameter COMMIT=Y while importing

IMP-00015: Statement failed ... object already exists...
Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.

Enabling ARCHIVELOG Mode

This is a good review of enabling archive log - an article coming from cuddletech(http://www.cuddletech.com/articles/oracle/node58.html)

Most of the High Availability features of Oracle require you to enable ARCHIVELOG mode for your database. When you enable this mode, redo logs are archived instead of overwritten. The archive logs are stored in a separate place and can usually be backed up regularly by your standard filesystem backup system (NetBackup or whatever). Archive logs are utilized by RMAN, Data Guard, Flashback and many others.

If you're going to enable archivelog mode on a real database that's important to you, I would recommend shutting down the database and doing a cold backup just in case. Keeping a "final noarchivelog mode backup" seems to be a good and accepted practice.


Enabling archivelog mode is simple: just connect to your database in mounted but closed mode (startup mount) and alter the database. But if you don't tune a little you'll run into problems down the road, so let's specify some parameters too. Namely, consider LOG_ARCHIVE_DEST.

Let's start by checking the current archive mode.

SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE

------------

NOARCHIVELOG

So we're in NOARCHIVELOG mode and we need to change that. We can use an ALTER statement, but that won't be permanent, so let's just update the pfile directly. The pfile should be in either $ORACLE_BASE/admin/SID/pfile or $ORACLE_HOME/admin/SID/pfile. I'll add the following lines to the end of the file:

#############################

# Archive Log Destinations -benr (10/15/04)

############################

log_archive_dest_1='location=/u02/oradata/cuddle/archive'

log_archive_start=TRUE

Note that we're not actually required to specify the location of the log destination, but if you don't, it'll end up in strange places (in my test it went to $ORACLE_HOME/dbs, making a mess). You can specify as many as 10 different archive log destinations by using the parameters log_archive_dest_1 through log_archive_dest_10. Remember, if you run out of space in your archive log destination the database will shut down!
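
If your instance uses an spfile rather than a pfile, the same destination can be set with ALTER SYSTEM; a sketch using the path from above:

SQL> ALTER SYSTEM SET log_archive_dest_1='location=/u02/oradata/cuddle/archive' SCOPE=spfile;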

Now we can startup the database in mount mode and put it in archivelog mode.

[oracle@vixen pfile]$sqlplus sys/passwd as sysdba;

SQL*Plus: Release 10.1.0.2.0 - Production on Fri Oct 15 16:00:58 2004

Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount

ORACLE instance started.

Total System Global Area 184549376 bytes

Fixed Size 1300928 bytes

Variable Size 157820480 bytes

Database Buffers 25165824 bytes

Redo Buffers 262144 bytes

Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

You can see here that we put the database in ARCHIVELOG mode by using the SQL statement "alter database archivelog", but Oracle won't let us do this unless the instance is mounted but not open. To make the change we shut down the instance, and then start it up again, this time with the "mount" option, which will mount the database but not open it. Then we can enable ARCHIVELOG mode and open the database fully with the "alter database open" statement.

There are several system views that can provide us with information regarding archives, such as:

V$DATABASE - Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG mode and whether MANUAL (archiving mode) has been specified.

V$ARCHIVED_LOG - Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.

V$ARCHIVE_DEST - Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.

V$ARCHIVE_PROCESSES - Displays information about the state of the various archive processes for an instance.

V$BACKUP_REDOLOG - Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG contains similar information.

V$LOG - Displays all redo log groups for the database and indicates which need to be archived.

V$LOG_HISTORY - Contains log history information such as which logs have been archived and the SCN range for each archived log.

Using these views we can verify that we are in fact in ARCHIVELOG mode:

SQL> select log_mode from v$database;

LOG_MODE

------------

ARCHIVELOG

SQL> select DEST_NAME,STATUS,DESTINATION from V$ARCHIVE_DEST;
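
A couple of additional sketch queries against the views listed above can round out the check:

SQL> select group#, sequence#, archived, status from v$log;

SQL> select name, completion_time from v$archived_log;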

Sunday, November 30, 2008

Oracle Database Architecture

Oracle Database Architecture

Oracle is an RDBMS (Relational Database Management System). The Oracle database architecture can be described in terms of logical and physical structures. The advantage of separating the logical and physical structure is that the physical storage structure can be changed without affecting the logical structure.

Logical Structure

The logical structure for Oracle RDBMS consists of the following elements:
· Tablespace
· Schema

Tablespace:
An Oracle database consists of one or more logical portions called tablespaces. A tablespace is a logical grouping of related data. A database administrator can use tablespaces to do the following:
· Control disk space allocation for database data.
· Assign specific space quotas for database users.
· Perform partial database backup or recovery operations.
· Allocate data storage across devices to improve performance.

Each database has at least one tablespace, the SYSTEM tablespace. As part of the process of creating the database, Oracle automatically creates the SYSTEM tablespace. Although a small database can fit within the SYSTEM tablespace, it is recommended to create a separate tablespace for user data. Oracle uses the SYSTEM tablespace to store information like the data dictionary. The data dictionary stores metadata (data about data), such as table access permissions, information about keys, etc.
Data is stored in the database in the form of files called datafiles. Each tablespace is a collection of one or more datafiles. The space in a datafile is organized into data blocks, extents and segments.
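
For example, the space-control tasks listed above map onto statements like these (a sketch with hypothetical tablespace, path and user names):

create tablespace app_data datafile '/u01/oradata/app_data01.dbf' size 100m;
alter user scott quota 50m on app_data;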

Data Blocks:
At the finest level of granularity, an ORACLE database's data is stored in data blocks (also called logical blocks, ORACLE blocks, or pages). An ORACLE database uses and allocates free database space in ORACLE data blocks.

Extents:
The next level of logical database space is called an extent. An extent is a specific number of contiguous data blocks that are allocated for storing a specific type of information.

Segments:
The level of logical database storage above an extent is called a segment. A segment is a set of extents that have been allocated for a specific type of data structure, and all are stored in the same tablespace. For example, each table's data is stored in its own data segment, while each index's data is stored in its own index segment. ORACLE allocates space for segments in extents. Therefore, when the existing extents of a segment are full, ORACLE allocates another extent for that segment. Because extents are allocated as needed, the extents of a segment may or may not be contiguous on disk, and may or may not span files. An Oracle database can use four types of segments:

· Data segment--Stores user data within the database.
· Index segment--Stores indexes.
· Rollback segment--Stores rollback information. This information is used when data must be rolled back.
· Temporary segment--Created when a SQL statement needs a temporary work area; these segments are destroyed when the SQL statement is finished. These segments are used during various database operations, such as sorts.
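
The segment/extent/block hierarchy can be observed through the data dictionary; a sketch query (the owner name is a placeholder):

SQL> select segment_name, segment_type, tablespace_name, extents, blocks from dba_segments where owner = 'SCOTT';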

Schema:
The database schema is a collection of logical-structure objects, known as schema objects, that define how you see the database's data. A schema also defines a level of access for users. All the logical objects in Oracle are grouped into schemas. A schema is a logical grouping of objects such as:

· Tables
· Clusters
· Indexes
· Views
· Stored procedures
· Triggers
· Sequences
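
A quick way to see what a schema contains is a dictionary query such as this sketch:

SQL> select object_type, count(*) from user_objects group by object_type;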

Physical Structure:

The physical layer of the database consists of three types of files:
1. One or more Datafiles
2. Two or more redo log files
3. One or more control files



Datafiles (.dbf files):
Datafiles store the information contained in the database. One can have as few as one data file or as many as hundreds of datafiles. The information for a single table can span many datafiles or many tables can share a set of datafiles. Spreading tablespaces over many datafiles can have a significant positive effect on performance. The number of datafiles that can be configured is limited by the Oracle parameter MAXDATAFILES.

Data is stored in the Oracle database in the form of files called datafiles. Each tablespace is a collection of one or more datafiles, and each datafile consists of data blocks, extents and segments. Some systems put a limit on the number of datafiles that can be used, so efficient definition of tablespaces and datafiles is important.
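
The datafiles and their tablespaces can be listed with a sketch query like:

SQL> select tablespace_name, file_name, bytes/1024/1024 as size_mb from dba_data_files;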

Redo Log Files (.rdo & .arc):
Oracle maintains logs of all the transactions against the database. These transactions are recorded in files called Online Redo Log Files (redo logs). The main purpose of the redo log files is to hold the information needed for recovery in the event of a system failure; the redo log stores a record of all changes made to the database. The redo log files must perform well and be protected against hardware failures (through software or hardware fault tolerance). If redo log information is lost, one cannot recover the system. When a transaction occurs in the database, it is entered in the redo log buffer, while the data blocks affected by the transaction are not immediately written to disk. An Oracle database has at least two (usually three or more) redo log files. Oracle writes to the redo log files in a cyclical order: after the first log file is filled, it writes to the second log file, and so on until the last one is filled. When all the redo log files have been filled, it returns to the first log file and begins overwriting its contents with new transaction data. Note that if the database is running in ARCHIVELOG mode, the database will make a copy of each online redo log file before overwriting it.
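
The redo log groups and their member files can be seen with sketch queries such as:

SQL> select group#, sequence#, bytes, members, status from v$log;

SQL> select group#, member from v$logfile;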

Control Files (.ctl):
Control files record control information about all of the files within the database. They maintain internal consistency and guide recovery operations. Control files contain information used to start an instance, such as the location of the datafiles and redo log files; Oracle needs this information to start the database instance. Control files must be protected, and Oracle provides a mechanism for storing multiple copies of them. These multiple copies are stored on separate disks to minimize the potential damage due to disk failure. The names of the database's control files are specified via the CONTROL_FILES initialization parameter.
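
The current control file copies can be listed with either of these sketch commands:

SQL> select name from v$controlfile;

SQL> show parameter control_files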


Oracle Instance and Database:
To access the data in the database, Oracle uses a set of background processes that are shared by every user, along with memory structures that store the most recently queried data from the database.

A database instance (also known as a server) is a set of memory structures and background processes that access a set of database files. It is possible for a single database to be accessed by multiple instances, which is known as Real Application Clusters. The parameters that determine the size and composition of an Oracle instance are stored in an initialization file called init.ora or in spfile.ora. The initialization parameter file is read during instance startup and may be modified by the DBA. Any modification made will not take effect until the next startup.

System Global Area (SGA):
Oracle uses an area of shared memory called the system global area (SGA) and a private memory area for each process called the program global area (PGA). The SGA is a shared memory region that contains data and control information for one Oracle instance. Oracle allocates the SGA when an instance starts and de-allocates it when the instance shuts down. Every instance has its own SGA. The SGA should be as large as possible (within the limits of available memory) to increase system performance and reduce disk I/O.

The information stored in the SGA is divided into three main memory structures:

1) Database buffers cache
2) Redo log buffers
3) Shared pool
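
The sizes of these areas can be checked from SQL*Plus; a sketch (V$SGAINFO is available in 10g and later):

SQL> show sga

SQL> select name, bytes/1024/1024 as mb from v$sgainfo;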

Database buffers cache:
The database buffer cache stores the most recently used blocks of data. The set of database buffers in an instance is the database buffer cache. The buffer cache contains modified as well as unmodified blocks. Because the most recently and most frequently used data is kept in memory, it improves system performance by reducing disk I/O operations.

Redo log buffers:
The redo log buffer stores redo entries, which form a log of changes made to the database. The redo entries stored in the redo log buffer are written to an online redo log. An online redo log is a set of two or more files that record all the changes made to Oracle datafiles and control files.

Shared pool:
The shared pool caches various constructs that can be shared among users, for example the shared SQL area. SQL statements are cached so that they can be reused, and stored procedures can be cached for faster access. Note that in previous versions an "out of memory" error occurred if the shared pool was full; in Oracle 10g this does not happen, as it provides automatic shared memory tuning.

Program global area:
The PGA is a memory buffer that contains data and control information for a server process (a process that services a client's requests). The PGA is a non-shared area of memory created by Oracle when a server process is started, and the information it holds depends on the Oracle configuration. The basic difference between the SGA and the PGA is that the PGA cannot be shared between multiple processes; it is used only for the requirements of a particular process, whereas the SGA is used by the whole instance and is shared.

Processes:
The relationships between the database's physical and memory structures are maintained and enforced by the background processes. These are the database's own background processes, and their number may vary depending on your database configuration. Trace files are only created when there is a problem.

Some of the Background Processes are:

SMON:
SMON stands for System Monitor and this database background process performs instance recovery at the start of the database. In a multiple instance system, SMON of one instance can also perform instance recovery for other instances that have failed. SMON also cleans up temporary segments that are no longer in use and recovers dead transactions skipped during crash and instance recovery because of file-read or offline errors. It coalesces i.e. combines contiguous free extents into larger free extents.

PMON:
PMON stands for the Process Monitor and this database background process cleans up failed user processes. PMON is responsible for releasing the lock i.e. cleaning up the cache and freeing resources that the process was using. Its effect can be seen when a process holding a lock is killed.

DBWR:
The DBWR (Database Writer) background process is responsible for managing the contents of the data block buffer cache and the dictionary cache. DBWR performs batch writes of changed blocks. Since Oracle uses write-ahead logging, DBWR does not need to write blocks when a transaction commits. In the most common case, DBWR writes only when more data needs to be read into the system global area and too few database buffers are free. The least recently used data is written to the datafiles first. Although there is only one SMON and one PMON process running per database instance, one can have multiple DBWR processes running at the same time. Note that the number of DBWR processes is set via the DB_WRITER_PROCESSES initialization parameter.

LGWR:
The LGWR (Log Writer) background process manages the writing of the contents of the redo log buffer to the online redo log files. LGWR writes the log entries in batches. The redo log buffer entries always contain the most up-to-date status of the database. Note that LGWR is the only process that writes to the online redo log files and the only one that directly reads the redo log buffer during normal database operation.

Archiver (ARCH):
The Archiver process reads the redo log files once Oracle has filled them and writes a copy of the used redo log files to the specified archive log destination(s). For most databases, ARCH has no effect on the overall system performance. On some large database sites, however, archiving can have an impact on system performance. We can specify up to ten ARCn processes for each database instance; LGWR will start additional archivers as needed, based on the load, up to the limit specified by the initialization parameter LOG_ARCHIVE_MAX_PROCESSES.

Checkpoint process (CKPT):
All modified information in the database buffer cache in the SGA is written to the datafiles by the database writer process (DBWR); this event is a checkpoint. The checkpoint process is responsible for signaling DBWR at checkpoints and updating all of the datafile headers and control files of the database.

Recoverer (RECO):
The recoverer process automatically cleans up failed or suspended distributed transactions.
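
The background processes actually running in an instance can be listed with a sketch query like this (the paddr filter restricts the output to started processes):

SQL> select name, description from v$bgprocess where paddr <> '00' order by name;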