GoldenGate


GoldenGate is platform independent because its trail files use a canonical, platform-independent binary format.

GoldenGate is fast because it does not ship whole log files; only the data written to the trail files is pumped, and trail files contain only committed transactions. (By contrast, with Data Guard the FAL process has to wait for archived logs.)

Ansible (software)


https://en.wikipedia.org/wiki/Ansible_(software)

Ansible is open-source software that automates software provisioning, configuration management, and application deployment. Ansible connects via SSH, remote PowerShell, or other remote APIs.
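As a minimal illustration of the SSH-based model, an ad-hoc ping of every host in a hypothetical inventory file named hosts (file name assumed):

# contacts each host in the inventory over SSH and reports pong on success
$ ansible all -i hosts -m ping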

kfed – kernel file editor / ASM metadata editor


kfed is an undocumented ASM utility used to read and modify ASM metadata blocks. It is a standalone utility, independent of any ASM instance, so it can be used with either mounted or dismounted disk groups.
 
kfed is available from Oracle Database 11g Release 1 onward and can be used to read and write ASM metadata, in particular disk headers. kfed in write mode is powerful but potentially destructive in the wrong hands. Invoked with the -help option, kfed displays its online help.
 
The most commonly used kfed feature is its ability to fix corrupt ASM metadata. The kfed binary is located in the $ORACLE_HOME/bin directory. With the kfed read command we can read a single ASM metadata block. The syntax is:

$ kfed read [aun=ii aus=jj blkn=kk dev=]asm_disk_name

Where the command line parameters are

aun – Allocation Unit (AU) number to read from. Default is AU0, or the very beginning of the ASM disk.
aus – AU size. Default is 1048576 (1MB). Specify the aus when reading from a disk group with non-default AU size.
blkn – block number to read. Default is block 0, or the very first block of the AU.
dev – ASM disk or device name. Note that the keyword dev can be omitted, but the ASM disk name is mandatory.
Use kfed to read the ASM disk header block:

# displays online help for the utility
$ kfed -help

# reads the disk header to stdout
$ kfed op=read dev=/dev/mapper/itstor741_11p1

# reads the specified AU and block into file /tmp/a
$ kfed op=read dev=/dev/mapper/itstor741_11p1 aun=3 blkn=3 text=/tmp/a

# writes from /tmp/a into the specified AU and block
# block checksum is computed and written together with data
$ kfed op=write dev=/dev/mapper/itstor741_11p1 aun=3 blkn=3 text=/tmp/a

Question :

As you said, the Voting Disk & OCR reside in ASM disk groups, but as per the startup sequence OCSSD starts before ASM. How is that possible?

How does OCSSD start if the voting disk & OCR reside in ASM disk groups?

You might wonder how CSSD, which is required to start the clustered ASM instance, can be started if the voting disks are stored in ASM. This sounds like a chicken-and-egg problem: without access to the voting disks there is no CSS, hence the node cannot join the cluster; but without being part of the cluster, CSSD cannot start the ASM instance. To solve this problem, the ASM disk headers gained new metadata in 11.2: you can use kfed to read the header of an ASM disk containing a voting disk. The kfdhdb.vfstart and kfdhdb.vfend fields tell CSS where to find the voting file. This does not require the ASM instance to be up. Once the voting disks are located, CSS can access them and the node joins the cluster.
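For illustration, the voting-file pointers can be read straight off the disk header with kfed; the device name below is hypothetical:

# non-zero kfdhdb.vfstart / kfdhdb.vfend indicate this disk holds a voting file
$ kfed op=read dev=/dev/mapper/asmdisk1p1 | grep -E 'kfdhdb.vfstart|kfdhdb.vfend'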

Source – Internet

Comprehensive Study & Notes: Parameter conversion (imp/exp & impdp/expdp)


  1. exp|imp file -> expdp|impdp dumpfile, but only filename, not path
  2. exp|imp log -> expdp|impdp logfile, but only filename, not path
  3. exp compress -> no equivalent; some still mistakenly believe ‘exp compress=y’ compresses the dump file (it actually controls consolidating extents into the initial extent on import); note: expdp compression CAN really compress, on metadata only in 10g and all data in 11g, and it may compress better than gzip
  4. exp|imp rows -> expdp|impdp content
  5. exp owner -> expdp schemas
  6. exp consistent=y -> expdp flashback_{time|scn} (see the section starting with “exp consistent=y can be simulated”)
  7. imp indexfile -> impdp content or sqlfile
  8. imp fromuser touser -> impdp remap_schema (in its presence, schemas parameter, if used, points to the original or source schema, not the remapped one)
  9. imp ignore -> impdp table_exists_action for tables (for tables only, unlike imp ignore for any object, which has no exact equivalent in impdp)
    1. table_exists_action=skip: This says to ignore the data in the import file and leave the existing table untouched.  This is the default and it is not a valid argument if you set content=data_only.
    2. table_exists_action=append:  This says to append the export data onto the existing table, leaving the existing rows and adding the new rows from the dmp file.  Of course, the number and types of the data columns must match to use the append option.  Just like the append hint, Oracle will not re-use any space on the freelists and the high-water mark for the table will be raised to accommodate the incoming rows.
    3. table_exists_action=truncate:  This says to truncate the existing table rows, leaving the table definition and replacing the rows from the expdp dmp file being imported.  To use this option you must not have any referential integrity (constraints) on the target table.  You use the table_exists_action=truncate when the existing table columns match the import table columns.  The truncate option cannot be used over a db link or with a cluster table.
    4. table_exists_action=replace: This says to delete the whole table and replace both the table definition and rows from the import dmp file.  To use this option you must not have any referential integrity (constraints) on the target table.  You use the table_exists_action=replace when the existing table columns do not match the import table columns.
  10. exp|imp statistics=none -> expdp|impdp exclude=statistics (the documentation is wrong in saying “A parameter comparable to STATISTICS is not needed. Statistics are always saved for tables.”). Before 11gR2 it is very beneficial to exclude stats, or you contaminate the shared pool with lots of literal SQLs like INSERT INTO “SYS”.”IMPDP_STATS”… during impdp (Bug 7185178, fixed in 11gR2). If you forgot, right after impdp purge those SQLs with grant select on sys.impdp_stats to dba followed by a revoke, or flush the shared pool if you don’t mind the impact. Note: it seems impdp exclude=(table_)statistics is available in 10g but not 11g; expdp is fine. In practice expdp always takes less time than impdp, and impdp spends much of that extra time importing the collected statistics into the new schema. Excluding statistics at export time (expdp … exclude=statistics) makes the subsequent impdp much faster. The same exclude=statistics can also be used with impdp, but do not use it there if you already excluded statistics during expdp; it will throw “ORA-39168: Object path STATISTICS was not found.”
  11. imp resumable -> not needed; impdp does it by default, and it shows on screen “ORA-39171: Job is experiencing a resumable wait”, except during the final index creation phase.
  12. imp resumable_timeout -> none. If the 2-hour default is not suitable, after impdp starts, exec dbms_resumable.set_session_timeout to set it (this also works around Bug 6140308, where the DB-level setting is ignored, and Bug 2968139, where parallel sessions do not pick up the alter session setting).
  13. exp|imp recordlength -> not supported by expdp|impdp
  14. exp|imp trace=y -> the undocumented SQL trace parameter accepts many more options in Data Pump than just y or n in exp|imp, and can do more than SQL trace in Data Pump, but requires higher privilege. Level 1 SQL trace can use a value such as 400301. See Note:286496.1.
  15. NLS_LANG for exp|imp -> the environment variable is not needed because expdp|impdp uses the database nls_characterset parameter on the server. But see Note:227332.1 Sec.7.
  16. exp|imp through a pipe, usually to compress the dumpfile or to pass it to ssh and send the dump across servers without an intermediate dump file -> no equivalent; expdp|impdp does not work with pipes or any sequential-access devices (Ref: Note:463336.1). Expdp compression does real compression already, and the dbms_datapump package allows cross-server Data Pump export and import.
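A minimal sketch tying several of these mappings together; the schema, directory object, and file names are hypothetical, and flashback_time=systimestamp assumes 11g (older releases need the TO_TIMESTAMP syntax):

# consistent schema export with statistics excluded (items 5, 6 and 10)
$ expdp system schemas=scott directory=DPUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log exclude=statistics flashback_time=systimestamp

# import into another schema, truncating any pre-existing tables (items 8 and 9)
$ impdp system directory=DPUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log remap_schema=scott:scott2 table_exists_action=truncate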

 

Source : http://yong321.freeshell.org & others.

Resumable space allocation


 Source: Internet
Long-running operations or batch processes can fail due to insufficient space. When a job fails, the DBA has to allocate space and restart the job manually, and Oracle then has to run the job from scratch. Oracle 9i introduced resumable space allocation to resolve this kind of issue.
With the resumable space allocation feature, whenever a job hits a space error it is suspended until the DBA fixes the issue, and the job is then resumed instead of restarting from scratch.
In Oracle 9i, resumable space allocation must be turned on at the session level using the ALTER SESSION ENABLE RESUMABLE statement. Oracle 10g introduced a new parameter, RESUMABLE_TIMEOUT, with which the feature can be turned on at the database level. The default for this parameter is 0, which means the resumable timeout is not enabled.
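A minimal sketch of both approaches (the 7200-second timeout is arbitrary):

SQL> alter session enable resumable timeout 7200;

Session altered.

SQL> alter system set resumable_timeout=7200;

System altered.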

ORA-16014 – Error in archiving, no available destination


A common mistake in 10g when enabling archivelog mode is to use the flash recovery area default settings as the repository for the archived redo logs. The consequences typically show up 20-30 days after archivelog mode has been enabled.

By default the Oracle 10g Database Configuration Assistant (DBCA) sets the flash recovery area size to 2GB (the db_recovery_file_dest_size parameter), and when we enable archivelog mode the database uses the default db_recovery_file_dest as the destination for archivelogs unless we set another directory.

A typical small 10g database generates on average 100MB of archivelog files daily, sometimes even on weekend days when application DML is almost zero. Why does this happen? The reason is the Automatic Workload Repository (AWR) and the Automatic Database Diagnostic Monitor (ADDM). AWR collects detailed performance-related metrics from the database at regular intervals, known as snapshots. After each snapshot is taken, ADDM is invoked to analyze the data and metrics derived from the difference between snapshots and recommend necessary actions. This generates lots of DML and consequently lots of redo data.

So, after some days, the flash recovery area reaches the default 2GB, because the database is archiving there with default settings. When this happens the database is unable to archive, because the flash recovery area is full. This happens even if we have lots of free disk space!

In our alert log file we’ll see something like this:

ORA-16038: log 1 sequence# 3144 cannot be archived

ORA-19809: limit exceeded for recovery files

ORA-00312: online log string thread string: '/u01/app/oracle/oradata/mydb/redo01.log'

Sat Oct 11 10:43:56 2008

ARCH: Archival stopped, error occurred. Will continue retrying

Sat Oct 11 10:43:56 2008

ORACLE Instance neo – Archival Error

Oracle keeps track of everything placed in the flash recovery area in the RMAN repository/controlfile. If it determines that there is not sufficient space in the recovery file destination, as set by db_recovery_file_dest_size, archiving will fail. Just deleting old backups and archivelogs from disk is not sufficient, because it is the RMAN repository/controlfile that holds the space-used information.
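A quick way to see the space accounting that the controlfile holds is to query the recovery area views (both exist in 10g and later):

SQL> select name, space_limit, space_used, space_reclaimable, number_of_files from v$recovery_file_dest;

SQL> select * from v$flash_recovery_area_usage;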

How do we fix this? If we google, we find lots of places saying to execute “delete archivelog all” in RMAN. This is the fast and easy, but dirty, solution: we do not want to delete all our precious archivelogs and endanger the ability to do future database recovery. The fix can be done with any of the following solutions:

Solution 1 – Delete unwanted archivelog files at the OS level, then crosscheck the archivelogs so the controlfile marks them as deleted, and delete the expired entries.

[oracle@app oracle]$ cd /u01/app/oracle/flash_recovery_area/mydb/
[oracle@app oracle]$ rm archivelogs*

[oracle@app oracle]$ rman target /

connected to target database: MYDB (DBID=1649570311)

RMAN> crosscheck archivelog all;

RMAN> delete expired archivelog all;

Solution 2 – Use RMAN to back up and then delete your archivelogs; this is a much better solution.


[oracle@app oracle]$ rman target /

connected to target database: MYDB (DBID=1649570311)

RMAN> backup archivelog until logseq = 3144 delete all input;

or

RMAN> backup archivelog until time 'sysdate-15' delete all input;

Solution 3 – Dynamically increase (without restarting the database) the db_recovery_file_dest_size parameter.

[oracle@app oracle]$ sqlplus "/ as sysdba"

SQL> alter system set db_recovery_file_dest_size=4G;

Then, if needed (usually with solutions 1 and 2), just shut down and start up your database and you will have it running again. We may have to shutdown abort if the database does not shut down normally.

To avoid the same problem in the future, when the archivelog volume reaches db_recovery_file_dest_size, we should set the following parameters to provide an alternate destination for archiving the redo logs.

log_archive_dest_1='LOCATION=use_db_recovery_file_dest NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2'

log_archive_dest_2='LOCATION=/other_destination_for_archiving'

log_archive_dest_state_1='enable'

log_archive_dest_state_2='alternate'

db_recovery_file_dest='/u01/app/oracle/flash_recovery_area/mydb/'

db_recovery_file_dest_size=2G

Have a nice loving day

Source : Internet

How Rows Are Stored in ORACLE


The format of a row is: row header; column_length – value; column_length – value; column_length – value; …

The length of a field is stored as an attribute in the row.

If the column name is “LAST_NAME” and the column is defined as CHAR(20), it is stored as :20:Morgan--------------: (the dashes standing in for the 14 blank-padding characters).

If the column name is “LAST_NAME” and the column is defined as VARCHAR2(20), it is stored as :6:Morgan:

Oracle starts adding rows from the end of the block towards the block header. In this way, the block header can grow if required.

To see the actual block and row as stored by Oracle use the DBMS_ROWID built-in package.

(The DBMS_ROWID package lets you create ROWIDs and obtain information about ROWIDs from PL/SQL programs and SQL statements. You can find the data block number, the object number, and other ROWID components without writing code to interpret the base-64 character external ROWID. Some of its functions, such as ROWID_TO_EXTENDED, are intended for upgrading from Oracle database version 7 to Oracle database version 8.X.)
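For illustration, DBMS_ROWID can decode where rows physically live; the table name emp below is hypothetical:

SQL> select dbms_rowid.rowid_relative_fno(rowid) file_no,
            dbms_rowid.rowid_block_number(rowid) block_no,
            dbms_rowid.rowid_row_number(rowid) row_no
     from emp where rownum <= 3;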

 

Performance Tuning – SGA_TARGET


SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing.

Parameter description:

SGA_TARGET

Property          Description

Parameter type    Big integer

Syntax            SGA_TARGET = integer [K | M | G]

Default value     0 (SGA autotuning is disabled)

Modifiable        ALTER SYSTEM

Range of values   64 MB to operating system-dependent

Basic             Yes

SGA_TARGET provides the following:

  • Single parameter for total SGA size
  • Automatically sizes SGA components
  • Memory is transferred to where most needed
  • Uses workload information
  • Uses internal advisory predictions
  • STATISTICS_LEVEL must be set to TYPICAL or ALL

By using this one parameter we no longer need to set the individual SGA parameters such as:

  • DB_CACHE_SIZE (DEFAULT buffer pool)
  • SHARED_POOL_SIZE (Shared Pool)
  • LARGE_POOL_SIZE (Large Pool)
  • JAVA_POOL_SIZE (Java Pool)

Enable SGA_TARGET

SQL> show parameter sga_target

NAME                                 TYPE        VALUE

———————————— ———– ——————————

sga_target                           big integer 0

SQL> show parameter sga_max_size

NAME                                 TYPE        VALUE

———————————— ———– ——————————

sga_max_size                         big integer 600M

As we can see, automatic SGA tuning is not enabled, so we can enable it by setting the SGA_TARGET parameter.

SQL> alter system set sga_target=500m;

System altered.

SQL> show parameter sga_target

NAME                                 TYPE        VALUE

———————————— ———– ——-

sga_target                           big integer  500M

Resize SGA_TARGET

  • SGA_TARGET is dynamic
  • Can be increased till SGA_MAX_SIZE
  • Can be reduced till some component reaches minimum size
  • Change in value of SGA_TARGET affects only automatically sized components

SQL> show parameter sga_max_size

NAME                                 TYPE        VALUE

———————————— ———– ———–

sga_max_size                         big integer 600M

SQL> show parameter sga_target

NAME                                 TYPE        VALUE

———————————— ———– ———–

sga_target                           big integer 500M

We can resize it only up to 600M; if we try to increase it beyond 600M we will get an error.

SQL> alter system set sga_target=605m;

alter system set sga_target=605m

*

ERROR at line 1:

ORA-02097: parameter cannot be modified because specified value is invalid

ORA-00823: Specified value of sga_target greater than sga_max_size

For that we must first increase the SGA_MAX_SIZE parameter value, and we must restart the instance because it is a static parameter.

SQL> alter system set sga_max_size=956m scope=spfile;

System altered.

SQL> startup force

ORACLE instance started.

Total System Global Area 1000189952 bytes

Fixed Size                  1337492 bytes

Variable Size             624953196 bytes

Database Buffers          369098752 bytes

Redo Buffers                4800512 bytes

Database mounted.

Database opened.

SQL> show parameter sga_max_size

NAME                                 TYPE        VALUE

———————————— ———– ——–

sga_max_size                         big integer 956M

SQL> alter system set sga_target=900m;

System altered.
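To see how the automatically tuned components are sized at any moment, we can query v$sga_dynamic_components (output varies by instance):

SQL> select component, current_size/1024/1024 mb from v$sga_dynamic_components where current_size > 0;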

Be careful when using a 32-bit system: sga_max_size should be less than 4GB or the Oracle instance will fail to start up. Also, on 32-bit systems it is observed that total SGA + total PGA should be less than the actual physical memory available.
It appears that on 32-bit Windows, the maximum size for (SGA + PGA + Oracle memory for connections) must be < 2000 MB.
On 32-bit Windows with the “/3GB” and “/PAE” options (Windows 2003 Server and later), the maximum size for (SGA + PGA + Oracle memory for connections) must be < 3000 MB.

Disable SGA_TARGET

We can disable automatic SGA tuning by setting the SGA_TARGET parameter to 0.

SQL> alter system set sga_target=0;

System altered.

 

Source : Internet

Understand LAST_CALL_ET


 

The LAST_CALL_ET column in V$SESSION is sometimes misunderstood. Is it the elapsed time of the last call? What is the “last call”? Is it the last SQL statement executed? Or is it the last parse/execute/fetch call?

The documentation states:

If the session STATUS is currently ACTIVE, then the value represents the elapsed time in seconds since the session has become active.

If the session STATUS is currently INACTIVE, then the value represents the elapsed time in seconds since the session has become inactive.

(The 11.2 documentation is no different:

If the session STATUS is currently ACTIVE, then the value represents the elapsed time (in seconds) since the session has become active.

If the session STATUS is currently INACTIVE, then the value represents the elapsed time (in seconds) since the session has become inactive.)
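For illustration, a typical use is spotting sessions that have been active (or idle) for a long time; the one-hour threshold below is arbitrary:

SQL> select sid, serial#, username, status, last_call_et
     from v$session
     where username is not null and last_call_et > 3600;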

 

ORACLE RAC Basics


Oracle Clusterware (Cluster Ready Services in 10g / Cluster Manager in 9i) – provides the infrastructure that binds multiple nodes so that they operate as a single server. Clusterware monitors all components, such as instances and listeners. There are two important components in Oracle Clusterware: the Voting Disk and the OCR (Oracle Cluster Registry).

Voting Disk – a file that resides on shared storage and manages cluster membership. The voting disk is used to reassign cluster ownership between the nodes in case of failure.

OCR (Oracle Cluster Registry) – resides on shared storage and maintains information about the cluster configuration and the cluster database, such as which database instances run on which nodes and which services run on which database.
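Both can be inspected from the command line (11.2-style commands):

# lists the configured voting disks
$ crsctl query css votedisk

# verifies OCR integrity and shows where it is stored
$ ocrcheck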

CRS Resource – anything that Oracle Clusterware manages is classified as a CRS resource: database, instance, service, listener, VIP address, and so on.

Cluster-Aware Storage – a storage solution for Oracle RAC, such as raw devices, OCFS, or ASM.

Interconnect – the private network that connects all the servers in the cluster. The interconnect uses a switch that only the nodes in the cluster can access, and the instances in the cluster communicate with each other over it.

Cache Fusion – the diskless cache-coherency mechanism in Oracle RAC that ships copies of data blocks directly from the memory cache of the instance holding the block to the instance requesting it. Cache Fusion presents a single, cluster-wide buffer cache across the interconnect.

In a single-node Oracle database, an instance looking for a data block first checks its cache; if the block is not in the cache, it reads the block from disk into the cache and returns it to the client.

In a RAC database there are remote caches as well, so an instance should look not only in its local cache but also in the remote caches (the caches on the remote instances). If the block is in the local cache, it is returned from there; if not, instead of going to disk the instance first checks, via the interconnect, whether the block is available in a remote cache.

This is because accessing a data block from a remote cache is faster than accessing it from disk.


Cache Fusion Model
The Cache Fusion model depends on three services:
– Global Resource Directory (GRD)
– Global Cache Service (GCS)
– Global Enqueue Service (GES)

SSH User Equivalency – means assigning the same properties (username, user ID, group, group ID, and the same password) to the operating system user (installing and owning the RAC database) across all nodes in the cluster.

CVU (Cluster Verification Utility) – a utility to verify that the system meets all the criteria for an Oracle Clusterware installation.
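For example, a pre-installation check of two hypothetical nodes:

$ runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose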

 

Storage Options for RAC

  • CFS (Cluster File System) – Easy to manage but only available on some platforms.  Does not address striping and mirroring.
  • RAW – Available on all platforms but difficult to manage. Does not address striping and mirroring.
  • NFS – Easy to manage but only available on some platforms.  Does not address striping and mirroring.
  • ASM (Automatic Storage Management) – Easy to manage, available on ALL platforms, and DOES address striping and mirroring.

CFS (Cluster Filesystems)

  • The idea of CFS is basically to share filesystems between nodes.
  • Easy to manage since you are dealing with regular files.
  • CFS is configured on shared storage.  Each node must have access to the storage in which the CFS is mounted.
  • NOT available on all platforms.  Supported CFS solutions currently:
  • OCFS on Linux and Windows (Oracle)
  • DBE/AC CFS (Veritas)
  • GPFS (AIX)
  • Tru64 CFS (HP Tru64)
  • Solaris QFS

RAW (character devices)

  • Hard to manage since you are dealing with character devices and not regular files.
  • Adding and resizing datafiles is not trivial.
  • On some operating systems, volume groups need to be deactivated before LVs can be manipulated or added.

NFS (Network Filesystem)

  • NOT available on all platforms.  Supported NFS solutions currently:
  • Network Appliance
  • Redhat Linux
  • Fujitsu Primecluster
  • Solaris Suncluster

ASM

  • Stripes files rather than logical volumes.
  • Enables online disk reconfiguration and dynamic rebalancing.
  • Provides adjustable rebalancing speed.
  • Provides redundancy on a file basis.
  • Supports only Oracle files.
  • Is cluster aware.
  • Is automatically installed as part of the base code set.

Source : Internet