Oracle DBA - RajaBaskar Blog

How to recreate the AWR Repository?


Last week I checked the SYSAUX tablespace and found it growing abnormally. I checked v$sysaux_occupants: the AWR repository was occupying 23 GB. Our AWR retention period is 60 days, but AWR was holding more than 60 days of data in the repository. For several reasons, the purge operation was not working. Refer to the article below.

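Before touching anything, it is worth confirming which component is consuming SYSAUX. A minimal sketch of the check I ran (standard v$sysaux_occupants columns; SM/AWR is the AWR repository):

SQL> select occupant_name, occupant_desc,
            round(space_usage_kbytes/1024/1024,2) as space_gb
     from v$sysaux_occupants
     order by space_usage_kbytes desc;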

I dropped 4 snapshots and that worked fine, but it took a long time. When I tried to drop a large number of snapshots, the session hung.

I referred to Metalink and other notes; they suggested dropping and recreating the Automatic Workload Repository.

How can we drop and recreate the AWR?

My database is running 11.2.0.2 on a Linux server.

Steps:

1.    Disable AWR statistics gathering.

STATISTICS_LEVEL = BASIC disables the collection of many of the important statistics.

2.    If SGA_TARGET is not zero, temporarily change the init parameters below while recreating the AWR.

      Take a backup of the existing pfile or spfile. If the database is running on an spfile, create a pfile from the spfile and keep it as the backup.

shared_pool_size =
db_cache_size =
java_pool_size =
large_pool_size =
sga_target=0
statistics_level=basic
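A short sketch of taking that backup and making the temporary changes (the pfile path is an example, not from the original post; the manual pool sizes above must be given explicit values once SGA_TARGET is 0):

SQL> create pfile='/u01/backup/init_before_awr.ora' from spfile;
SQL> alter system set sga_target=0 scope=spfile;
SQL> alter system set statistics_level=basic scope=spfile;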

If the database is running on a RAC node, also set CLUSTER_DATABASE=FALSE.


3.    Take a list of the invalid objects in the database.

4.    Shut down and start the database in RESTRICT mode.
 
SQL> Shut immediate

SQL> Startup RESTRICT

5.    DROP and RECREATE the AWR objects


@?/rdbms/admin/catnoawr.sql

alter system flush shared_pool;

@?/rdbms/admin/catawrtb.sql


-- If the database is running on Oracle 11g, we must also run the SQL below.

@?/rdbms/admin/execsvrm.sql


6.    Compile the invalid objects using utlrp.sql.

7.    Shutdown the database

8.    Start the database using the backed-up init parameter file (i.e., with the original values from before the AWR drop and recreate).


I hope this article helped you. Suggestions are welcome.

Best Regards
RajaBaskar Thangaraj

1/20/2012 - Oracle SCN bug announced in 11g may cause DBs to become unstable and unrecoverable


On 1/20/2012, an Oracle SCN bug in 11g was announced that may cause databases to become unstable and unrecoverable. There are some issues in older DBs too, but not as severe.

ISSUE Details:
While taking a manual physical hot backup, SCN growth can cause the DB to become unstable and unrecoverable.

Oracle 11g evidently has a major bug in how the SCN is treated and incremented during backups.
Oracle doesn't clearly state whether the issue also affects RMAN backups. The 11g "ALTER DATABASE BEGIN BACKUP" command is messing up the way the SCN is incremented and can artificially raise it above a "soft" limit built into the RDBMS.
At that point the DB becomes unstable and possibly unrecoverable. This issue first appeared in the press on 1/20/2012, but it is getting a lot of attention since some companies have already encountered it. Evidently there are other problems that can also cause SCN sequence issues in 11g and earlier Oracle DBs, but in most cases these never cause trouble. The backup command bug in 11g was causing the SCN to increase by "billions".

Affected Environment:

Versions 9.2.0.8 – 11.2.0.3

Per the articles:
“Oracle released a patch to fix the arbitrary SCN growth rate bug in the hot backup code before InfoWorld began researching this story. The backup bug is listed as 12371955: "High SCN growth rate from
ALTER DATABASE BEGIN BACKUP in 11g."
If you have not already done so, Oracle recommends that you install this patch immediately. (The hot backup bug is confined to Oracle Database 11g releases 11.1.0.7 and 11.2.0.2.)

The SCN is a moving line that cannot be crossed. The line moves up by 16,384 every second; as long as the SCN growth rate is slower, all should be well.
(Note: Oracle has provided a script that allows customers to identify which databases are at risk. The script is referenced in support document 1393363.1.)”
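As a rough illustration of that "moving line" (this is not Oracle's official check; use the scnhealthcheck.sql script from note 1393363.1 for a real assessment), the soft limit is commonly described as 16,384 SCNs per second elapsed since 1988-01-01, and the current SCN must stay below it:

SQL> select current_scn,
            (sysdate - to_date('1988-01-01','YYYY-MM-DD')) * 86400 * 16384 as scn_soft_limit
     from v$database;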

These articles also noted that Oracle's January CPU release includes a series of patches that remove various methods of increasing the SCN and implement a new method of "inoculation" for Oracle databases. These patches were also put in the Oracle 10g CPUs, because some of the SCN growth issues may be present in that and older versions of Oracle. But only the 11g version had the backup bug that was causing most of the problem.
Support Documents:

Oracle documents dealing with this issue (all updated on 1/19/2012 or later)

System Change Number (SCN), Headroom, Security and Patch Information [ID 1376995.1]
Installing, Executing and Interpreting output from the "scnhealthcheck.sql" script [ID 1393363.1]
Evidence to collect when reporting "high SCN rate" issues to Oracle Support [ID 1388639.1]

Oracle provides SCN health check scripts from Versions 9.2.0.8 to 11.2.0.3


Output of SCN health check scripts:
SQL> @scnhealthcheck.sql
--------------------------------------------------------------
ScnHealthCheck
--------------------------------------------------------------
Current Date: 2012/02/03 07:56:28
Current SCN:  10394068544046
Version:      11.1.0.7.0
--------------------------------------------------------------
Result: A - SCN Headroom is good
Apply the latest recommended patches
based on your maintenance schedule
AND set _external_scn_rejection_threshold_hours=24 after apply.
For further information review MOS document id 1393363.1

Take the appropriate action as indicated by the "Result".

Refer to the note below:



I hope this article helped you. Suggestions are welcome.

How Dell Migrated from SUSE Linux to Oracle Linux


Today I saw a nice article on www.oracle.com that I would like to share with you all.

From this article:

In June of 2010, Dell made the decision to migrate 1,700 systems from SUSE Linux to Oracle Linux, while leaving the hardware and application layers unchanged. Standardization across the Linux platforms helped make this large-scale conversion possible. The majority of the site-specific operating system and application configuration could simply be backed up and restored directly on the new operating system. Configuration changes were minimal and most could be automated, easing the administration effort required and helping achieve a reliable and consistent transition procedure.

More Details about this migration:


Reference: www.oracle.com

Oracle table fragmentation causing performance issue



Issue Description:

Recently I faced an issue where one of the jobs was running very slowly in production. Due to this delay, another job accessing the same objects was causing blocking locks on the database.

1.    Almost a year ago, we deployed a new enhancement in the prod database. Then we suddenly faced this performance issue.
2.    Job A runs every hour; it copies the data from a few tables into another DB via a DB link and then deletes the copied records from the source database.
3.    We had locked known-good statistics for these tables based on performance testing.

Impact:

1.    Job A deletes records from the tables every hour. The large volume of deletions causes fragmentation in the tables, and if the tables are fragmented, the corresponding indexes are fragmented as well.

2.    The table statistics were locked, so I couldn't find exact current details about these tables, and I couldn't gather fresh stats for them either.

How to check if tables are fragmented?

select t.owner,
       t.table_name,
       t.avg_row_len,
       t.last_analyzed,
       s.bytes/1024/1024 as segment_size_mb
from   dba_tables t,
       dba_segments s
where  t.table_name = s.segment_name
and    t.owner = s.owner
and    s.segment_type = 'TABLE'
and    t.owner = '&owner';

(Note the last predicate is qualified as t.owner: an unqualified "owner" is ambiguous here and raises ORA-00918, since both views have an OWNER column.)

 
The stats on the tables below were locked, so I manually counted the records in each table and used the counts for the "Original" columns.


OWNER       TABLE_NAME   AVG_ROW_LEN   LAST_ANALYZED          SEGMENT SIZE (MB)   ORIGINAL ROWS   ORIGINAL SPACE SIZE (MB)
RAJA_TEST   Table1       302           08/20/2010 07:57:00    8.000               0               0.000
RAJA_TEST   Table2       120           08/20/2010 07:57:04    259.000             4369            0.650
RAJA_TEST   Table3       104           08/20/2010 07:57:09    145.000             442             0.057
RAJA_TEST   Table4       148           08/20/2010 07:57:23    0.125               0               0.000
RAJA_TEST   Table5       147           08/20/2010 07:57:27    0.125               0               0.000
RAJA_TEST   Table6       0             08/20/2010 07:57:28    0.125               0               0.000
RAJA_TEST   Table7       0             08/20/2010 07:57:28    0.125               0               0.000
RAJA_TEST   Table8       154           10/09/2010 14:55:31    20.000              4509            0.861
RAJA_TEST   Table9       143           08/20/2010 07:57:53    130.000             25058           4.442
RAJA_TEST   Table10      152           10/09/2010 13:40:58    59.000              0               0.000
RAJA_TEST   Table11      158           10/09/2010 13:43:36    0.125               0               0.000
RAJA_TEST   Table12      0             08/20/2010 07:57:59    0.125               0               0.000
                                       Total:                 ~622 MB                             ~6 MB


These tables need only ~6 MB, but they occupied ~622 MB.


How to calculate the actual space requirement?

Actual Space = (Num of rows in a table) * (Avg_row_len) + ((Num of rows in a table) * (Avg_row_len)* 0.3)


Explanation:

(Num of rows in a table) * (Avg_row_len) gives the actual space required for the table data.

Oracle's rule of thumb says (actual space required for the table + 30%) gives the realistic space requirement for the table.

Note: whenever we create a segment, Oracle initially allocates 0.125 MB (128 KB) to it, which is why the empty tables above still show 0.125 MB.
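With unlocked, current statistics, the same formula can be applied straight from the dictionary. A hedged sketch (num_rows here comes from optimizer stats, which is exactly what was unavailable in my case because the stats were locked):

select t.owner,
       t.table_name,
       round((t.num_rows * t.avg_row_len * 1.3)/1024/1024, 3) as estimated_space_mb,
       round(s.bytes/1024/1024, 3) as allocated_mb
from   dba_tables t,
       dba_segments s
where  t.table_name = s.segment_name
and    t.owner = s.owner
and    s.segment_type = 'TABLE'
and    t.owner = '&owner';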


Temporary Solution:

We have several methods to fix the fragmentation (i.e., reset the HWM):

1.    Export/import method
2.    Online redefinition method
3.    CTAS method (CREATE TABLE ... AS SELECT)
4.    Move the table segment (see the sketch below)
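For reference, a minimal sketch of method 4 (the table and index names are placeholders); a move leaves the indexes UNUSABLE, so they must be rebuilt afterwards:

alter table raja_test.table2 move;
alter index raja_test.table2_idx rebuild;
-- If the stats are locked, unlock them before regathering:
-- exec dbms_stats.unlock_table_stats('RAJA_TEST','TABLE2');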


I suggested the method below:

1)    Hold the jobs
2)    Take a backup of the listed tables using the exp/expdp utility
3)    Truncate the tables
4)    Import the tables from the backup
5)    Release the jobs


Permanent Solution:

1.    Convert the tables to daily-partitioned tables.
2.    Instead of deleting records from the tables every hour, the job drops the previous day's partition once a day. Dropping a partition avoids the fragmentation (a sketch follows).
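A hedged sketch of the daily maintenance under that design (the table and partition names are hypothetical):

alter table raja_test.table2 drop partition p_20100819 update indexes;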



I hope this article helped you. Suggestions are welcome.

Oracle AWR report generate privileges


Recently I got a request: a performance team member needed privileges to generate AWR and ADDM reports.

I have provided the below privileges to PERFUSR.

Privileges:

GRANT SELECT ON SYS.V_$DATABASE TO PERFUSR;

GRANT SELECT ON SYS.V_$INSTANCE TO PERFUSR;

GRANT SELECT ON SYS.DBA_HIST_DATABASE_INSTANCE TO PERFUSR;

GRANT SELECT ON SYS.DBA_HIST_SNAPSHOT TO PERFUSR;

GRANT ADVISOR TO PERFUSR;

GRANT EXECUTE ON SYS.DBMS_WORKLOAD_REPOSITORY TO PERFUSR;

I tested and was able to generate the AWR and ADDM reports using the above account.


But the performance team doesn't have an OS account to log in to the DB server, so they couldn't generate the AWR report there.

I also checked whether it is possible to generate the AWR/ADDM report using the Oracle client software; we couldn't generate the AWR report that way.

So I provided the solution below:

Select
DBID,
INSTANCE_NUMBER,
SNAP_ID,
BEGIN_INTERVAL_TIME,
END_INTERVAL_TIME
From
Dba_hist_snapshot order by 5 desc;

Use the above query to get the low/high snap IDs, the DBID, and the instance number.


Select output from table (dbms_workload_repository.awr_report_html (DBID, INSTANCE_NUMBER, LOW_SNAP_ID,HIGH_SNAP_ID,8));

l_options = 8 displays the ADDM-specific portions of the report. These sections include the Buffer Pool Advice, Shared Pool Advice, PGA Target Advice, and Wait Class sections.

Generate the AWR report on HTML format using below query:


set heading off;
set feedback off;
set linesize 1500;
Prompt Getting html format report:
spool awr_report_html.html;
select output from table(dbms_workload_repository.awr_report_html(345725311, 1,22492,22493,8));
spool off;

I hope this article helped you. Suggestions are welcome.



ORA-00406: COMPATIBLE parameter needs to be 10.0.0.0.0 or greater

Recently I worked on a UAT database migration from one server to a new server. When I got the request, I checked the space requirements and the other checklist details on the new server, and everything looked good.

At the scheduled time, I started the RMAN online backup with the compression option, because I had limited space in the backup location and no other file system to place the RMAN backup on.

When the RMAN backup started, I got the error below.


RMAN-03009: failure of backup command on db_ch2 channel at 02/29/2012 06:22:28
ORA-00406: COMPATIBLE parameter needs to be 10.0.0.0.0 or greater
ORA-00722: Feature "Backup Compression"
continuing other job steps, job failed will not be re-run
released channel: db_ch1
released channel: db_ch2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on db_ch1 channel at 02/29/2012 06:22:28
ORA-00406: COMPATIBLE parameter needs to be 10.0.0.0.0 or greater
ORA-00722: Feature "Backup Compression"


I checked the COMPATIBLE parameter: it was 9.2.0, but the database was running 10.2.0.4.
We needed to change the COMPATIBLE parameter to 10.2.0.
I checked with the application team to get approval for a quick database recycle, but they were doing some crucial tests and didn't allow a DB recycle. After one week I got the approval, changed the init parameter, and recycled the database. So while checking the prerequisites we should cover everything; if we miss anything, it will create a headache :-(
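A minimal sketch of the change itself (COMPATIBLE is a static parameter, so a restart is required, and raising it is effectively one-way):

SQL> alter system set compatible='10.2.0' scope=spfile;
SQL> shutdown immediate
SQL> startup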

While bringing up the database, I got the error below.

ORA-32004: obsolete and/or deprecated parameter(s) specified
ORA-19905: log_archive_format must contain %s, %t and %r

SQL> !cat initDBUAT1.ora | grep log_archive_format
log_archive_format = _%t_%s.log

With COMPATIBLE set to 10.0 or higher, the archive log format must include %r (the resetlogs ID), so I changed the log_archive_format init parameter and started the database.

SQL> !cat initDBUAT1.ora | grep log_archive_format
log_archive_format = _%t_%s_%r.log

I hope this article helped you. Suggestions are welcome.

Best Regards
RajaBaskar Thangaraj

opatch util Cleanup or how to cleanup the oracle home


I have faced, several times, an Oracle Home whose size had grown very large. I cleaned some unnecessary trace/log files in the OH under $ORACLE_HOME/dbs. Often, old patches occupy a lot of space under $ORACLE_HOME/.patch_storage/.

On Oracle 9i, we had to manually remove these old patch directories. From Oracle 10g onwards, we can follow the method below for OH cleanup.
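A quick, hedged way to see where the space is going before cleaning anything (plain du against the standard locations mentioned above):

$ du -sh $ORACLE_HOME/.patch_storage
$ du -sh $ORACLE_HOME/dbs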

How to clean up the oracle home?

Before applying a PSU patch, I planned to take an Oracle Home backup. I checked the Oracle Home size and found it very large.

Oracle 10gR2 (10.2.0.x) has a feature to clean up the Oracle Home:
the "opatch util -help" command will show a lot of options.

$ opatch util Cleanup
Invoking OPatch 10.2.0.5.1
 Oracle Interim Patch Installer version 10.2.0.5.1
Copyright (c) 2010, Oracle Corporation.  All rights reserved.
 UTIL session
 Oracle Home       : /u01/oracle/product/10.2.0
Central Inventory : /u01/oracle/product/10.2.0/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 10.2.0.5.1
OUI version       : 10.2.0.4.0
OUI location      : /u01/oracle/product/10.2.0/oui
Log file location : /u01/oracle/product/10.2.0/cfgtoollogs/opatch/opatch2012-03-20_09-17-38AM.log
 Patch history file: /u01/oracle/product/10.2.0/cfgtoollogs/opatch/opatch_history.txt
 Invoking utility "cleanup"
OPatch will clean up 'restore.sh,make.txt' files and 'rac,scratch,backup' directories.
You will be still able to rollback patches after this cleanup.
Do you want to proceed? [y|n]
Y
User Responded with: Y
Size of directory "/u01/oracle/product/10.2.0/.patch_storage" before cleanup is 1222290337 bytes.
Size of directory "/u01/oracle/product/10.2.0/.patch_storage" after cleanup is 474926793 bytes.
 UtilSession: Backup area for restore has been cleaned up. For a complete list of files/directories
deleted, Please refer log file.
 OPatch succeeded.

What happens internally while running opatch util Cleanup?

It deletes the old patch-related files from the $ORACLE_HOME/.patch_storage directories.
I saw some details on a forum that I would like to share here.

Why should we keep the $ORACLE_HOME/.patch_storage directory files?

1.    When we apply Oracle interim patches, the OPatch utility stores information in $ORACLE_HOME/.patch_storage. Inside this directory there is a separate subdirectory for each patch applied to the Oracle Home. These hold the CPU/PSU patch information, and when rolling a patch back from the Oracle Home, OPatch gets the information from the .patch_storage directory.

2.    When we remove/roll back a conflicting patch from the Oracle Home, Oracle saves copies of all the files that were replaced by the new patch in the $ORACLE_HOME/.patch_storage/<patch_id> directory before the new patch is applied.

3.    When we apply a patch, we make changes to the inventory, and there is a chance we may corrupt it. From 10gR2 onwards, while applying a patch, OPatch creates a snapshot of the inventory and stores it in the $ORACLE_HOME/.patch_storage/<patch_id> directory. The $ORACLE_HOME/.patch_storage/<patch_id>/restore.sh script that comes with OPatch removes any changes that were made to the inventory after the application of the patch.


I hope this article helped you. Suggestions are welcome.




Drop Database in Oracle 9i/10g/11g, or how to remove/drop/decommission an Oracle 9i/10g/11g database


How to remove/drop/decommission an Oracle 9i or 10g database?


Up to Oracle 9i, we need to manually clean up the database's physical files at the OS level. This is error-prone, and there is a chance of accidentally deleting files that belong to some other Oracle database. So be careful before deleting the database's physical files.

Steps:

1)     Get approval from the Business/Customer.

2)     Send a notification to the Business/Customer that we are going to decommission the database.

3)     Raise a request to the Storage/Tape team to keep the backup for, e.g., a 2-year retention period. The retention period may vary based on your business needs and SLA.

4)     Take the list of datafiles, control files, redo log files and parameter files (see the query sketch after this list).

5)     Shut down the database.

6)     Shut down the listener.

7)     Take a complete database backup and make sure the backup is valid.

8)     Make sure the Storage team takes the tape backup with the right retention period, and get a sign-off mail from the Storage/Tape team.

9)     Remove the monitoring job entries from crontab, and remove any monitoring jobs running from third-party tools.

10)  Remove the archive log files, datafiles, control files, redo log files, trace and dump files, backup files, and the respective DB directories. Be careful before deleting the physical files.

11)  Remove the backup schedule job details.

12)  Remove the database entry from the oratab file.

13)  Send the final notification to the Application/Customer that the DB was decommissioned, and share the tape retention details with the Business/Customer.

14)  Remove the database details from the inventory sheet (DB registry/Server registry).
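A minimal sketch of the queries behind step 4, capturing the file inventory before shutdown (standard v$ views):

SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;
SQL> select value from v$parameter where name = 'spfile';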

From Oracle 10g onwards, Oracle makes the physical file removal very simple with the DROP DATABASE command. Please follow the steps below for Oracle 10g and 11g databases.

 Steps:

1)     Get approval from the Business/Customer.

2)     Send a notification to the Business/Customer that we are going to decommission the database.

3)     Raise a request to the Storage/Tape team to keep the backup for, e.g., a 2-year retention period. The retention period may vary based on your business needs and SLA.

4)     Take the list of data files, control files, redo log files and parameter files.

5)     Shut down the database.

6)     Shut down the listener.

7)     Take a complete database backup and make sure the backup is valid.

8)     Make sure the Storage team takes the tape backup with the right retention period, and get a sign-off mail from the Storage/Tape team.

9)     Remove the monitoring job entries from crontab, and remove any monitoring jobs running from third-party tools.

10)   Start the database in restricted mode and issue the DROP DATABASE command.

                        SQL> conn / as sysdba
Connected to an idle instance.

SQL> startup restrict mount

ORACLE instance started.
 Total System Global Area 3221225472 bytes
Fixed Size                  2044072 bytes
Variable Size            1291849560 bytes
Database Buffers         1912602624 bytes
Redo Buffers               14729216 bytes
Database mounted.

SQL>  drop database;
 Database dropped.

 Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining Scoring Engine
and Real Application Testing options

[/fisc/oracle]oracle@:TEST> ps -ef|grep pmon
  Oracle 25411 26322   0 03:07:07 pts/8       0:00 grep pmon

11)  Remove the archive log files, trace and dump files, backup files, and the respective DB directories. Be careful before deleting the physical files.

12)  Remove the backup schedule job details.

13)  Remove the database entry from the oratab file.

14)  Send the final notification to the Application/Customer that the DB was decommissioned, and share the tape retention details with the Business/Customer.

15)  Remove the database details from the inventory sheet (DB registry/Server registry).



I hope this article helped you. Suggestions are welcome.

Best Regards
RajaBaskar Thangaraj



Version parameter in oracle expdp and impdp



Data migration using expdp/impdp without any version conflict

I frequently get data migration requests from one database to another, across all Oracle versions and OS platforms.

Whenever we migrate data using expdp/impdp or exp/imp, we should use the lower-version export binary (Oracle Home) for the expdp/exp.

My requirement:

Copy one schema from 11g database to 10g database.

Example:

When we export an 11g database schema and import it into a 10g database, we should use the 10g binary for the export. Every time, this is a challenge for me, and for anyone.

If I export the 11g data using the Oracle 11g binary, the import into the 10g database is not supported.

I had a few options:

1)     Use another ORACLE_HOME (10g) for the export, if one is available on the same server. Only the 11g database was running on my database server.
2)     Install the 10g Oracle software on the same server and start the export.
3)     Copy the data using a DB link... :(

I planned to install the 10g software on the DB server, so I contacted the core DBA team and explained why we needed the 10g software. A friend from the core team said, "Raja, recently I heard about the VERSION parameter. I'm not that familiar with it, but you can check it out."

I tested that parameter successfully in my test environment and then used it in production. This parameter will always help us avoid unnecessary issues.


Test Case:

Source Database:

Oracle Version: 11.1.0.7

SQL> create user rb identified by rb;

User created.

SQL> grant connect,resource,dba to rb;

Grant succeeded.

SQL> connect rb/rb
Connected.
SQL> create table rb_objects as select * from dba_objects;

Table created.

SQL> select count(*) from rb.rb_objects;

  COUNT(*)
----------
    101933


Target Database:

Oracle Version: 10.2.0.3


SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      10.2.0.3.0


SQL> create user rb identified by rb;

User created.

SQL> grant connect,resource,dba to rb;

Grant succeeded.

SQL> create directory TEST_DIR as '/u01/backup/export/dump';

Directory created.

SQL>  grant read,write on directory TEST_DIR to rb;

Grant succeeded.


Source Database 11g schema export :

$expdp dumpfile=exp_rb_11g.dmp logfile=exp_rb_11g.log directory=TEST_DIR schemas=RB

Export: Release 11.1.0.7.0 - 64bit Production on Sunday, 22 April, 2012 1:31:06

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: rb
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "RB"."SYS_EXPORT_SCHEMA_01":  rb/******** dumpfile=exp_rb_11g.dmp logfile=exp_rb_11g.log directory=TEST_DIR schemas=RB
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "RB"."RB_OBJECTS"                           9.956 MB  101933 rows
Master table "RB"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for RB.SYS_EXPORT_SCHEMA_01 is:
/u01/backup/export/dump/exp_rb_11g.dmp
Job "RB"."SYS_EXPORT_SCHEMA_01" successfully completed at 01:35:45

Import the 11g schema data in 10g database :


$ impdp dumpfile=exp_rb_11g.dmp logfile=imp_rb_11g_10g.log directory=TEST_DIR schemas=RB

Import: Release 10.2.0.4.0 - 64bit Production on Sunday, 22 April, 2012 1:37:31

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: rb
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 2.1 in dump file "
/u01/backup/export/dump/exp_rb_11g.dmp"

Source Database 11g schema export with version parameter:


How does the VERSION parameter solve this issue?

We should set the VERSION parameter to the target database's (wherever we import) COMPATIBLE parameter value.

$ expdp dumpfile=exp_rb_11g_with_version%U.dmp logfile=exp_rb_11g_with_version.log directory=TEST_DIR schemas=RB parallel=4 version=10.2.0.3.0

Export: Release 11.1.0.7.0 - 64bit Production on Sunday, 22 April, 2012 1:43:42

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: rb
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "RB"."SYS_EXPORT_SCHEMA_01":  rb/******** dumpfile=exp_rb_11g_with_version%U.dmp logfile=exp_rb_11g_with_version.log directory=TEST_DIR schemas=RB parallel=4 version=10.2.0.3.0
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
. . exported "RB"."RB_OBJECTS"                           9.956 MB  101933 rows
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Master table "RB"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for RB.SYS_EXPORT_SCHEMA_01 is:
/u01/backup/export/dump/exp_rb_11g_with_version01.dmp
/u01/backup/export/dump/exp_rb_11g_with_version02.dmp
Job "RB"."SYS_EXPORT_SCHEMA_01" successfully completed at 01:46:31


During this import we hit a tablespace mapping issue; please ignore it for now (it is fixed with REMAP_TABLESPACE in the next run).

$impdp dumpfile=exp_rb_11g_with_version%U.dmp logfile=imp_rb_11g_10g_with_version.log directory=TEST_DIR schemas=RB parallel=4 version=10.2.0.3.0

Import: Release 10.2.0.4.0 - 64bit Production on Sunday, 22 April, 2012 1:50:43

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: rb
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "RB"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "RB"."SYS_IMPORT_SCHEMA_01":  rb/******** dumpfile=exp_rb_11g_with_version%U.dmp logfile=imp_rb_11g_10g_with_version.log directory=TEST_DIR schemas=RB parallel=4 version=10.2.0.3.0
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"RB" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE failed to create with error:
ORA-00959: tablespace 'USERS' does not exist
Failing sql is:
CREATE TABLE "RB"."RB_OBJECTS" ("OWNER" VARCHAR2(30), "OBJECT_NAME" VARCHAR2(128), "SUBOBJECT_NAME" VARCHAR2(30), "OBJECT_ID" NUMBER, "DATA_OBJECT_ID" NUMBER, "OBJECT_TYPE" VARCHAR2(19), "CREATED" DATE, "LAST_DDL_TIME" DATE, "TIMESTAMP" VARCHAR2(19), "STATUS" VARCHAR2(7), "TEMPORARY" VARCHAR2(1), "GENERATED" VARCHAR2(1), "SECONDARY" VARCHAR2(1), "NAMESPACE" NUMBER, "EDITION_NAME" VARCHAR2
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Job "RB"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 01:50:57


Import the 11g schema data into 10g using the VERSION parameter and REMAP_TABLESPACE:

impdp dumpfile=exp_rb_11g_with_version%U.dmp logfile=imp_rb_11g_10g_with_version.log directory=TEST_DIR schemas=RB parallel=4 version=10.2.0.3.0 remap_tablespace=USERS:AIAD1_TBL_01

Import: Release 10.2.0.4.0 - 64bit Production on Sunday, 22 April, 2012 1:54:41

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: rb
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "RB"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "RB"."SYS_IMPORT_SCHEMA_01":  rb/******** dumpfile=exp_rb_11g_with_version%U.dmp logfile=imp_rb_11g_10g_with_version.log directory=TEST_DIR schemas=RB parallel=4 version=10.2.0.3.0 remap_tablespace=USERS:AIAD1_TBL_01
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"RB" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "RB"."RB_OBJECTS"                           9.956 MB  101933 rows
Job "RB"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 01:54:57


If you have any issue or get any error during the import, just import the object structure first (CONTENT=METADATA_ONLY parameter). Then you can import the data separately (CONTENT=DATA_ONLY parameter), as sketched below.
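A hedged sketch of that two-pass import, reusing the same dump files and parameters as above:

$ impdp dumpfile=exp_rb_11g_with_version%U.dmp directory=TEST_DIR schemas=RB version=10.2.0.3.0 content=METADATA_ONLY
$ impdp dumpfile=exp_rb_11g_with_version%U.dmp directory=TEST_DIR schemas=RB version=10.2.0.3.0 content=DATA_ONLY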


I hope this article helped you. Suggestions are welcome.

Best Regards
RajaBaskar Thangaraj
rajabaskar.t@gmail.com
 

How to apply a PSU patch, and prerequisite details

PSU112 – PSU patch on 11.1.0.7


Download the respective patch from www.oracle.metalink.com. Unzip the patch software; it contains a Readme.htm file. Please go through the Readme.htm file and follow the instructions.

Prerequisite


1)     Make sure the correct ORACLE_HOME is set in the Oracle inventory (oraInventory).

2)     Check the OPatch utility version.

            $cd $ORACLE_HOME

            The OPatch directory contains the OPatch utility. The OPatch version should be 11.1.0.8 (the current version here was 11.1.0.6.2).

            If the OPatch version requirement is not satisfied, you can download the OPatch utility from www.oracle.metalink.com.

How can we change the OPatch version in the ORACLE_HOME?

            Move the old OPatch directory to another name.

            $/u01/oracle/product/11.1 > mv OPatch OPatch.11.1.0.6.2

            Download the new OPatch and unzip it. Copy the OPatch directory into the OH.

            $/fisc/oracle/PSU111 > cp -pr OPatch $ORACLE_HOME
           
            $/fisc/oracle/PSU111 > opatch version
           
            Invoking OPatch 11.1.0.8.3

            OPatch Version: 11.1.0.8.3

            OPatch succeeded.


3)     Check the environment variables.
     
      $export PATH=$PATH:/usr/ccs/bin


4)     Check whether we can apply the PSU112 patch to this OH.

    Unzip the patch software: unzip p13343461_11107_SOLARIS.zip

                 $opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./13343461

Invoking OPatch 11.1.0.8.3

Oracle Interim Patch Installer version 11.1.0.8.3
Copyright (c) 2010, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/oracle/product/11.1
Central Inventory : /u01/oracle/oraInventory
   from           : /var/opt/oracle/oraInst.loc
OPatch version    : 11.1.0.8.3
OUI version       : 11.1.0.7.0
OUI location      : /u01/oracle/product/11.1/oui
Log file location : /u01/oracle/product/11.1/cfgtoollogs/opatch/opatch2012-03-29_08-15-23AM.log

Patch history file: /u01/oracle/product/11.1/cfgtoollogs/opatch/opatch_history.txt

Invoking prereq "checkconflictagainstohwithdetail"

ZOP-47: The patch(es) has supersets with other patches installed in the Oracle Home (or) among themselves.

Prereq "checkConflictAgainstOHWithDetail" failed.

Summary of Conflict Analysis:

Patches that can be applied now without any conflicts are :
13343461

Following patches are not required, as they are subset of the patches in Oracle Home or subset of the patches in the given list :
10248531

Following patches will be rolled back from Oracle Home on application of the patches in the given list :
10248531

Conflicts/Supersets for each patch are:

Patch : 13343461

        Bug Superset of 10248531
        Super set bugs are:
        7378322,  9068088,  7207654,  8865718,  7835247,  7648406,  8348481,  9054253,  6851110,  7206858,  9744252,  7497788,  8974548,  9956713,  7331867,  8251486,  6434104,  8851675,  8211920,  9352179,  7013124,  7643188,  7135702,  7254221,  7529174,  7036284,  7196532,  8847439,  7515779,  7705669,  9434549,  8402548,  9637033,  8608377,  7119382,  7510766,  9001453,  8364676,  9066130,  7424804,  7628387,  7408621,  7426336,  7553884,  8856696,  6843972,  7694979,  8565708,  6972189,  6598432,  6768362,  6501490,  8836375,  8216875,  9841005,  7527650,  7719143,  8402551,  7454752,  8290478,  7412296,  7484261,  8402555,  7719148,  8284633,  8318050,  10009222,  7185113,  7307821,  7639602,  8539335,  8613137,  9711859,  7650993,  8940197,  9714832,  6970731,  9011088,  10009229,  7589862,  9170608,  7131291,  8263441,  9109536,  7586451,  6709070,  7446163,  8199266,  7460818,  9114072,  6618461,  7451927,  7373196,  8230457,  7336031,  7420394,  8413059,  9399090,  8402562,  8914979,  9713537,  8402637,  7393804,  8876094,  6196748,  7627743,  8834425,  7348847,  9311909,  8408887,  7183523,  8531282,  7441663,  7329252,  9145541,  8622613,  10249534,  7720494,  7719668,  8645846,  8539923,  8577450,  9458811,  8534338,  9272086,  8339404,  8517426,  8236851,  9458814,  7602341,  9458816,  7036453,  7610362,  8855553,  7384419,  7690421,  9458819,  8419383,  8855559,  8543737,  7447559,  10336518,  8247855,  10009241,  6772911,  7626014,  9654987,  10009173,  10009246,  7175513,  7417614,  9143376,  8649055,  7341598,  7706138,  7292503,  8603465,  8243648,  8367827,  8365141,  8342923,  7350127,  8462173,  8549480,  9027691,  7356443,  7593835,  8483871,  9485429,  8242410,  7044551,  7572069,  7639121,  9275072,  8855565,  9458829,  8825048,  7253531,  8328853,  10336525,  7585314,  8341623,  8409848,  6851669,  6988517,  7318276,  8257122,  7013817,  8860821,  9830111,  7309458,  8450529,  8306933,  8306934,  9166322,  6840740,  9458831,  6981690,  8304329,  8281906,  7480809,  8339352,  8855570,  7340448,  8499600,  7393258,  8588540,  8790767,  8855575,  6599920,  7630416,  7426959,  8855577,  6980601,  8342506,  8717461,  6452375,  8607693,  6407486,  7653579,  7416901,  7281382,  9341448,  8599477,  7535429,  8582594,  7475055,  8217795,  7409110,  7432514,  9655014,  8362693,  8764031,  7332001,  7707103,  9084111,  9702142,  7436152,  7680907,  9702143,  10336548,  8481935,  7013835,  7345543,  10046072,  7708340,  8499043,  8361398,  6784747,  7524944,  7496908,  7662620,  8224083,  7385253,  10127716,  9189647,  7225720,  7417140,  6941717,  7122161,  8898852,  8399549,  8354686,  7377810,  7477246,  8363210,  6798650,  7299153,  8213302,  9118620,  6856345,  8909984,  8755082,  9118622,  9311954,  9209238,  8702276,  7484102,  7661251,  7497640,  8771916,  6991626,  7630874,  7133740,  9368549,  7311909,  7614692,  5552232,  9229631,  7022234,  7432601,  8650719,  7213937,  7352414,  7462112,  8248911,  7516867,  8199107,  7296258,  7662491,  8856478,  9369783,  10336560,  7041254,  6812439,  10336565,  6870937,  7828187,  7219752,  9952228,  7263842,  8287680,  6900214,  8981059,  8870559,  6882739,  9488887,  8546332,  8813366,  8990527,  9242411,  8296070,  7500792,  8352304,  7508788,  8352309,  7538000,  9165206,  8565359,  8834636,  8324760,  7652888,  10336577,  7330434,  7113299,  7307972,  8775066,  8487273,  7462709,  7486595,  7643632,  9032717,  8650661,  8658581,  8244217,  7499911,  7515145,  6734871,  7411865,  10094989,  7202451,  7276960,  7432556,  
8360192,  8890026,  10019218,  7522002,  7613481,  7022905,  7452373,  8674263,  8221425,  7694273,  9074535,  8763922,  6770443,  9188010,  7438445,  7334226,  9197917,  8284438,  8803762,  7494333,  7318049,  7834195,  8250643,  8214576,  6679303,  8815639,  8277580,  7172752,  7326645,  7171015,  7715244,  7516536,  7675269,  7461921,  9241202,  8490879,  6980597,  7499353,  7646055,  8301559,  9255542,  8268330,  7357609,  8496830,  7462589,  6647480,  10248531,  8416414,  7830065,  6858062,  7189645,  7203349,  8476517,  8702535,  7436280,  8669679,  9768907,  8268775,  9363145,  6955744,  7366290,  7506785,  9952269,  8433693,  6977167,  10426994,  7702085,  8570572,  6522654,  8348464,  7185872,  8625762,  8737065,  6059178,  7257038,  9774756,  8833297,  8595043,  8542307,  7606362,  8578132,  8332021,  7330611,  8226397,  8502963,  7628866,  7278231,  6903819,  8285404,  9399991,  8391256,  7676737,  9231605,  9620202,  9776431,  6798427,  7138523,  6971433,  7258928,  7829321,  10169304,  7311601,  8433270,  7511040,  7434194,  8315482,  8369094,  8563941,  8563942,  8563943,  8563944,  8563945,  7716219,  8563946,  7345904,  9246245,  8563947,  7556778,  8563948,  8220734,  6870994,  7597354,  7523787,  9135679,  7710260,  7697360,  9210925

OPatch succeeded.


We can apply this patch without issue.


5)     Take the OH backup before applying this patch.

          $cd $ORACLE_HOME

          $tar -cvf /u01/oracle/ora_bin_bkup/11.1_b4psu112.tar 11.1.0

          Note: If the Oracle Home size is very large, you can clean up some old patch details from the OH.

How to clean up?



6)     Take the invalid objects list on the respective databases.

            select owner, object_name, object_type from dba_objects where status='INVALID';
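A small sketch for keeping that list around to diff after patching (the spool file path is an example):

SQL> spool /tmp/invalid_before_psu.lst
SQL> select owner, object_name, object_type
     from dba_objects
     where status='INVALID'
     order by owner, object_name;
SQL> spool off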

###  Now we are going to apply the patch. All prerequisites are completed and look good.  ###

           
7)     Ask the application team to bring down all the applications that belong to the database.

                        ##########  Once they are done, we will start the patching ##########

8)     Stop the LISTENER

$lsnrctl stop LISTENER

9)     Shut down the database

                ###  Please make sure the databases and the respective listeners are down  ###


10)  Apply the PSU Patch

##  Please make sure you are applying to the right ORACLE_HOME  ##

Patch Location: /u01/oracle/PSU111/11g/13343461

$opatch apply

Note: verify the log file.

11)  Start the database and run the statements below.

SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @?/rdbms/admin/catbundle.sql psu apply

  Compile the invalid objects:

SQL> @?/rdbms/admin/utlrp.sql

            To check whether view recompilation has already been performed for the database, execute the following statement:
                               SELECT * FROM registry$history WHERE id = '6452863';
            If it returns no rows, we need to recompile the views as shown below; otherwise there is no need.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> @?/cpu/view_recompile/view_recompile_jan2008cpu.sql
SQL> SHUTDOWN
SQL> STARTUP

12)  Check the invalid objects list and compare it with the report taken earlier.

13)  Start the listener.

14)  Check and monitor the alert logs for the next few hours.

15)  Ask the application team to bring up their applications.


I hope this article helped you. Suggestions are welcome.

Can we rebuild the Oracle BITMAP index using REBUILD ONLINE option?

Recently the development team came up with a new plan for the index rebuild maintenance activity: their job will rebuild the indexes instead of a DBA running the rebuilds manually.

What will their code do?

It creates a procedure at the database level. This procedure generates a dynamic index rebuild script for indexes with BLEVEL > 2.

Note: we frequently rebuild indexes with BLEVEL > 2 to avoid fragmentation.

I suggested to the developer to use PARALLEL 4 (degree value) during the rebuild for performance (to rebuild the indexes faster), and to reset the degree to the default value once the rebuild is done.

I also asked them to use the REBUILD ONLINE option for all indexes except bitmap indexes, because (I believed) we can't rebuild a bitmap index online.

After testing in a non-production region, they reported: "We are able to rebuild a bitmap index online." I had done bitmap index rebuilds several times using only the plain REBUILD option, so I wasn't sure and tested it again myself.

Finally I learned that from Oracle 10g onwards we can rebuild a bitmap index using REBUILD ONLINE.

I feel a bit ashamed now: for the last 3 years I used only the offline REBUILD option for bitmap indexes, even in production.
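For reference, a minimal sketch of the rebuild I suggested (reusing the test index below; the NOPARALLEL at the end resets the degree to the default):

SQL> alter index AM.AM_objects_idx rebuild online parallel 4;
SQL> alter index AM.AM_objects_idx noparallel;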

On Oracle 11g:

 SQL> create table AM.AM_objects as select * from dba_objects;
 Table created.

 SQL> create bitmap index AM.AM_objects_idx on AM.AM_objects(object_id);
 Index created.

 SQL> alter index AM.AM_objects_idx rebuild online;
 Index altered.

 On Oracle 10g:

 SQL> create table AM.AM_objects as select * from dba_objects; 
Table created.

SQL> create bitmap index AM.AM_objects_idx on AM.AM_objects(object_id);
 Index created.

SQL> alter index AM.AM_objects_idx rebuild online;
 Index altered.

 On Oracle 9i:

 SQL> create table AM.AM_objects as select * from dba_objects;
 Table created.

SQL> create bitmap index AM.AM_objects_idx on AM.AM_objects(object_id);
 Index created.
  
SQL>  alter index AM.AM_objects_idx rebuild online;
alter index AM.AM_objects_idx rebuild online
*
ERROR at line 1:
ORA-08108: may not build or rebuild this type of index online

I Hope this article helped you. Suggestions are welcome.


Thanks

“ORA-12829: Deadlock - itls occupied by siblings at block 456123 of file 506”

I got the error below:
“ORA-12829: Deadlock - itls occupied by siblings at block 456123 of file 506”

I have checked few sites ...

ORA-12829: Deadlock - itls occupied by siblings at block string of file string

Cause : parallel statement failed because all itls in the current block are occupied by siblings of the same transaction.

Action : increase MAXTRANS of the block or reduce the degree of parallelism for the statement. Reexecute the statement. Report suspicious events in trace file to Oracle Support Services if error persists.

I didn't want to jump straight into changing the MAXTRANS value in production.

Based on that, I provided the suggestions below, which I would like to share with you all.

Too many ITL slots in block 456123 were being held at the same time by sibling parallel slaves of the same transaction.

That is why we were getting the error "ORA-12829: Deadlock - itls occupied by siblings at block 456123 of file 506".

I checked that table ("xxxxx table"), and no one else was accessing it at that point.



In the update statement, the dev team had used a PARALLEL hint, and I suggested the steps below:

1) Rerun the job (no one else is accessing the table right now).

2) If you get the same error again, reduce the parallel hint value, or run the update statement without the parallel hint.

The dev team tested:

1) They reran the job and got the same error.

2) They removed the parallel hint, reran the job, and got the same error again.

Root cause:

I then checked the table and found that PARALLEL was enabled on the table itself as the default degree.

So I suggested using a NOPARALLEL hint while running the update statement (/*+ noparallel("xxxxx table") */). They used the NOPARALLEL hint and ran the update statement successfully.
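A hedged sketch of the check and of the table-level fix (the owner and table names are placeholders):

SQL> select owner, table_name, degree
     from dba_tables
     where table_name = 'XXXXX';

SQL> alter table owner_name.xxxxx noparallel;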


I hope this article helped you. Suggestions are welcome.

Version parameter in oracle expdp and impdp - Part II


Recently I wrote the article "Version parameter in oracle expdp and impdp".

KannanDreams asked the question below:

Will this issue occur even when we migrate data from Oracle 11.2.0 to 11.1.0?

Thank you, Kannan, for writing in.

I tested the scenario below in my test environment, and I would like to share it with you all.

The plan: take an 11.2.0.2 schema export and import it into 11.1.0.7.

DB version: 11.2.0.2


SQL> create user AM identified by AM;

User created.

SQL> grant connect,resource,dba to AM;

Grant succeeded.

SQL> create table AM.AM_objects  as select * from dba_objects;

Table created.

Export from 11.2.0.2 database without version parameter:

$  expdp dumpfile=exp_AM_11g.dmp logfile=exp_AM_11g.log directory=DP schemas=AM

Export: Release 11.2.0.2.0 - Production on Thu Apr 26 08:24:08 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  /******** AS SYSDBA dumpfile=exp_AM_11g.dmp logfile=exp_AM_11g.log directory=DP schemas=AM
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 7.476 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "AM"."AM_OBJECTS"                           5.930 MB   61495 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
  /u01/backup/export/dump/dp/exp_AM_11g.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 08:26:48


Import into 11.1.0.7 database:

$  impdp dumpfile=exp_AM_11g.dmp logfile=imp_AM_11g_10g.log directory=EXP_DIR1 schemas=AM

Import: Release 11.1.0.7.0 - 64bit Production on Thursday, 26 April, 2012 8:32:06

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: AM
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 3.1 in dump file "/u01/backup/export/dump/dp/exp_AM_11g.dmp"

Export from 11.2.0.2 database with version parameter:


$ expdp dumpfile=exp_AM_11g.dmp logfile=exp_AM_11g.log directory=DP schemas=AM  version=11.1.0 REUSE_DUMPFILES=y

Export: Release 11.2.0.2.0 - Production on Thu Apr 26 08:44:18 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Username: AM
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "AM"."SYS_EXPORT_SCHEMA_01":  AM/******** dumpfile=exp_AM_11g.dmp logfile=exp_AM_11g.log directory=DP schemas=AM version=11.1.0 REUSE_DUMPFILES=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 7.476 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "AM"."AM_OBJECTS"                           5.930 MB   61495 rows
Master table "AM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for AM.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/backup/dp/exp_AM_11g.dmp
Job "AM"."SYS_EXPORT_SCHEMA_01" successfully completed at 08:44:42


Import into 11.1.0.7 database:


$ impdp dumpfile=exp_AM_11g.dmp logfile=imp_AM_11g_10g.log directory=EXP_DIR1 schemas=AM

Import: Release 11.1.0.7.0 - 64bit Production on Thursday, 26 April, 2012 8:49:42

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Username: AM
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "AM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "AM"."SYS_IMPORT_SCHEMA_01":  AM/******** dumpfile=exp_AM_11g.dmp logfile=imp_AM_11g_10g.log directory=EXP_DIR1 schemas=AM
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"AM" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "AM"."AM_OBJECTS"                           5.930 MB   61495 rows
Job "AM"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 08:51:06


Conclusion:

Will this issue occur even when we migrate data from Oracle 11.2.0 to 11.1.0?

Yes. Without the VERSION parameter, we can't migrate data using expdp/impdp from 11.2.0 to 11.1.0.

Note: I have a few more thoughts on the same issue.

The VERSION parameter may internally work based on the COMPATIBLE parameter.
e.g.: the source database is running 11.2.0.2, but its COMPATIBLE parameter is 11.1.0.

Suppose we take an export without the VERSION parameter from that 11.2.0.2 database and import it into an 11.1.0.7 database whose COMPATIBLE is 11.1.0.

Both source and target database COMPATIBLE parameters are then the same (11.1.0). We need to check whether expdp/impdp works in this scenario without the VERSION parameter.

I will test and keep you posted asap.


I hope this article helped you. Suggestions are welcome.
Thanks !!!



Why my indexes not being imported using impdp?

Last week I faced a fragmentation issue.
I followed the steps below, knowing that during an import impdp creates indexes very fast compared with manual index creation.

Steps:

1. Export the table
2. Truncate the table
3. Drop the indexes (the table was very large, so importing with the indexes in place would cause a performance issue; that is why I dropped them)
4. Import the table

After the table was imported, I checked the index status: the index had not been created.
So I manually created the indexes, and that took some time.

Why didn't impdp import the indexes?

I did some test cases and found the root cause.
===========================================================================
-- Create some test table and index
SQL> create table AM_TEST as select * from dba_objects;
Table created.

SQL> select count(*) from AM_TEST; --60935 rows
SQL> create index obj_idx_AM_test on AM_TEST(object_id);
Index created.

--To take the test table Export
$ cat exp_Arul_AM_TEST_Jun22.par

userid=Arul/Arul
DIRECTORY=DP
DUMPFILE=exp_Arul_tables_AM_TEST%u.dmp
LOGFILE=exp_Arul_tables_AM_TEST.log
PARALLEL=4
ESTIMATE=STATISTICS
JOB_NAME=EXP_AM_Arul
compression=ALL
TABLES=(Arul.AM_TEST)

Take the table backup
$ expdp parfile=exp_Arul_AM_TEST_Jun22.par
Export: Release 11.2.0.2.0 - Production on Fri Jun 22 14:17:19 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "Arul"."EXP_AM_Arul":  Arul/******** parfile=exp_Arul_AM_TEST_Jun22.par
Estimate in progress using STATISTICS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "Arul"."AM_TEST"                             7 MB
Total estimation using STATISTICS method: 7 MB
. . exported "Arul"."AM_TEST"                         647.9 KB   60935 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Master table "Arul"."EXP_AM_Arul" successfully loaded/unloaded
******************************************************************************
Dump file set for Arul.EXP_AM_Arul is:
  /U01/backup/exp_Arul_tables_AM_TEST01.dmp
  /U01/backup/exp_Arul_tables_AM_TEST02.dmp
Job "Arul"."EXP_AM_Arul" successfully completed at 14:17:29

--Truncated the table and dropped the indexes
--Import the table
$cat imp_467302_AM_TEST_Jun22.par
userid=Arul/Arul
DIRECTORY=DP
DUMPFILE=exp_Arul_tables_AM_TEST%u.dmp
LOGFILE=imp_Arul_tables_AM_TEST.log
PARALLEL=4
JOB_NAME=EXP_AM_Arul
TABLES=(Arul.AM_TEST)
TABLE_EXISTS_ACTION=APPEND

After importing the table, the data was loaded but the index was not there; I had reproduced the same issue.

SQL> select count(*) from Arul.AM_test;             --60935 rows

SQL> select owner,index_name,table_name from dba_indexes where table_name='AM_TEST';

no rows selected

So I truncated the table again and re-imported (just changing the TABLE_EXISTS_ACTION parameter from APPEND to REPLACE).

$cat imp_467302_AM_TEST_Jun22.par
userid=Arul/Arul
DIRECTORY=DP
DUMPFILE=exp_Arul_tables_AM_TEST%u.dmp
LOGFILE=imp_Arul_tables_AM_TEST.log
PARALLEL=4
JOB_NAME=EXP_AM_Arul
TABLES=(Arul.AM_TEST)
TABLE_EXISTS_ACTION=REPLACE

--Import the table

$ impdp parfile=imp_467302_AM_TEST_Jun22.par

Import: Release 11.2.0.2.0 - Production on Fri Jun 22 15:51:22 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "Arul"."EXP_AM_Arul" successfully loaded/unloaded
Starting "Arul"."EXP_AM_Arul":  Arul/******** parfile=imp_467302_AM_TEST_Jun22.par
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "Arul"."AM_TEST"                         647.9 KB   60935 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "Arul"."EXP_AM_Arul" successfully completed at 15:51:23


SQL> select owner,index_name,table_name from dba_indexes where table_name='AM_TEST';
OWNER                          INDEX_NAME                     TABLE_NAME
------------------------------ ------------------------------ ------------------------------
ARUL                        OBJ_IDX_AM_TEST                AM_TEST

Cause:

During the import we used the TABLE_EXISTS_ACTION=APPEND parameter. With APPEND, Data Pump loads only the data into the existing table and skips dependent objects such as indexes.
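
To confirm the cause, a quick dictionary check (a minimal sketch reusing the example objects above) shows the data present but the index gone after the APPEND import:

SQL> select object_name, object_type, status
     from   dba_objects
     where  owner = 'ARUL'
     and    object_name in ('AM_TEST', 'OBJ_IDX_AM_TEST');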

Solution:

1. Instead of truncating, drop the table and then import it. (or)
2. Use TABLE_EXISTS_ACTION=REPLACE during the import (internally it drops and recreates the table).
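
If only the index DDL needs to be recovered from the existing dump, one possible approach (a sketch only; the parameter file and SQL file names here are hypothetical) is the Data Pump SQLFILE option, which writes the DDL to a script without importing anything:

$ cat imp_index_ddl_only.par
userid=Arul/Arul
DIRECTORY=DP
DUMPFILE=exp_Arul_tables_AM_TEST%u.dmp
SQLFILE=am_test_index_ddl.sql
INCLUDE=INDEX
TABLES=(Arul.AM_TEST)

$ impdp parfile=imp_index_ddl_only.par

The generated am_test_index_ddl.sql can then be reviewed and run manually to recreate the missing index.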


I am not certain, but it appears TABLE_EXISTS_ACTION=APPEND does not import any dependent DDL (index and constraint statements), statistics, or grants…
Need to dig some more :-)



I hope this article helped you. Suggestions are welcome.

Thanks!!!


Standby Statspack Installation on oracle 11g database


Recently I tried to generate an AWR report for a particular timeframe on an Active Data Guard database; I ran awrrpt.sql and it failed. Only then did I learn that Oracle does not support AWR/Statspack on an Active Data Guard standby.
I checked Oracle Metalink, and it suggested installing standby statspack for Active Data Guard.
Let's see how in this article.

Active DataGuard Statspack Overview:

Oracle does not support the AWR and Statspack features on an Active Data Guard database because of its read-only nature. At the same time, it supports the STANDBY STATSPACK feature, for which a different set of scripts must be installed in the primary database.
How does it work internally?

Using the sbcreate.sql script, we create the standby statspack repository and the STDBYPERF schema on the primary database.
The STDBYPERF schema has objects (tables, views and procedures) that look the same as the PERFSTAT schema, plus a DB LINK between the primary and standby databases. When we execute a standby snapshot on the primary database, this DB LINK fetches the performance data from the standby database's PERFSTAT schema into the STDBYPERF schema. We can then generate the standby statspack report on the primary database itself.
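
As a quick sanity check after the installation, the repository objects and the database link can be verified from the primary (a minimal sketch; run as a DBA user):

SQL> select owner, db_link, host from dba_db_links where owner = 'STDBYPERF';

SQL> select object_type, count(*)
     from   dba_objects
     where  owner = 'STDBYPERF'
     group  by object_type;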


(Architecture diagram: http://1.bp.blogspot.com/-JPdc8OgvEmU/UAMBdeqekhI/AAAAAAAADLg/Q3SGTbWb8xA/s400/standby_statspack_arch.png)


Standby Statspack Prerequisite:

1.       Normal Statspack must be installed on the primary database.
2.       Create separate tablespaces for both the PERFSTAT and STDBYPERF schemas.
3.       Check the GLOBAL_NAMES init parameter value.
If GLOBAL_NAMES=true, the sbcreate.sql script will fail while creating the standby repository because of the global-name check on the database link.

To resolve this we need to set the GLOBAL_NAMES value to false at the session level.

Note: Setting GLOBAL_NAMES=false at the instance level is not advisable.

Oracle Bug:

This is Oracle bug 11899453, fixed in 12.1.

Temporary Solution:

We need to run the sbcreate.sql and sbaddins.sql scripts separately. (sbcreate.sql internally calls sbaddins.sql, and with GLOBAL_NAMES=true the metadata cannot be fetched from the standby database to the primary database via the database link.)

1.    sbcreate.sql creates the STDBYPERF user and its repository objects.
2.    sbaddins.sql adds the standby instance information to the standby statspack repository on the primary database and creates the private database link between the primary (STDBYPERF) and DR (PERFSTAT) databases.
3.    Whenever we take a standby statspack snapshot, this database link copies the performance data from the Active Data Guard instance to the standby statspack repository.

4.    Add the standby TNSNAMES entry on the primary database server.

Installation Steps:

Normal Statspack installation steps: (Primary Database)
1.       Create separate tablespace PERFSTAT_DATA with 500 MB.
2.       Run the spcreate.sql script on primary database.

@?/rdbms/admin/spcreate.sql

  Enter the perfstat user password:perfstat
  Enter the default tablespace: perfstat_data
  Enter the temp tablespace: ESTT01

Standby Statspack installation steps: (Primary Database)

1      Run the sbcreate.sql script on the primary database.
@?/rdbms/admin/sbcreate.sql
Enter value for stdbyuser_password: STDBYPERF
Enter value for default_tablespace:PERFSTAT_DATA
Enter value for temporary_tablespace: TEMP
The following standby instances (TNS_NAME alias) have been configured
for data collection
…………………..
……………………….
=== END OF LIST ===


THE INSTANCE YOU ARE GOING TO ADD MUST BE ACCESSIBLE AND OPEN READ ONLY

Do you want to continue (y/n) ?
Enter value for key: n
begin
*
ERROR at line 1:
ORA-20101: Install failed - Aborted by user
ORA-06512: at line 3


Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

2   Check the GLOBAL_NAMES parameter value
SQL> conn / as sysdba
Connected.

SQL> show parameter global_names

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
global_names                         boolean     TRUE


3  Login as STDBYPERF user

SQL> conn stdbyperf
Enter password:
Connected.

To resolve this issue we need to set the GLOBAL_NAMES value to false at the session level.

SQL> alter session set global_names=false;

Session altered.

SQL> @?/rdbms/admin/sbaddins.sql

The following standby instances (TNS_NAME alias) have been configured
for data collection

=== END OF LIST ===


THE INSTANCE YOU ARE GOING TO ADD MUST BE ACCESSIBLE AND OPEN READ ONLY

Do you want to continue (y/n) ?
Enter value for key: y
You entered: y


Enter the TNS ALIAS that connects to the standby database instance
-----------------------------------------------------------------
Make sure the alias connects to only one instance (without load balancing).
Enter value for tns_alias: test_dr_AM_HOST
You entered: test_dr_AM_HOST


Enter the PERFSTAT user's password of the standby database
---------------------------------------------------------
Performance data will be fetched from the standby database via
database link. We will connect to user PERFSTAT.
Enter value for perfstat_password: perfstat
You entered: perfstat

... Creating database link

... Selecting database unique name

Database
------------------------------
test_dr_chn

... Selecting instance name

Instance
------------
test_dr1

... Creating package

Creating Package STATSPACK_test_dr_chn_test_dr1..
No errors.
Creating Package Body STATSPACK_test_dr_chn_test_dr1..
No errors.


Note: Using the STATSPACK_test_dr_chn_test_dr1 package, we take the standby statspack snapshot.

How to take standby snapshot manually?

Note: The standby statspack snapshot and report must be taken as the STDBYPERF user.
If GLOBAL_NAMES=TRUE at the instance level, set GLOBAL_NAMES=FALSE at the session level.

Login as STDBYPERF user

SQL> alter session set global_names=false;

Session altered.

SQL> execute STATSPACK_test_dr_chn_test_dr1.snap(10);

PL/SQL procedure successfully completed.
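
To avoid taking every snapshot by hand, one option (a sketch only, assuming the job is created while connected as STDBYPERF; the job name and hourly interval are arbitrary choices) is a scheduler job that sets GLOBAL_NAMES=false in its own session before calling the snap procedure:

SQL> begin
       dbms_scheduler.create_job(
         job_name        => 'STDBY_SP_SNAP',   -- assumed job name
         job_type        => 'PLSQL_BLOCK',
         job_action      => 'begin execute immediate ''alter session set global_names=false''; statspack_test_dr_chn_test_dr1.snap(10); end;',
         repeat_interval => 'FREQ=HOURLY',     -- assumed interval
         enabled         => TRUE);
     end;
     /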


What happens if we take the standby statspack snapshot as a user other than STDBYPERF?

Login as SYS user

SQL> alter session set global_names=false;

Session altered.

SQL> execute stdbyperf.STATSPACK_test_dr_chn_test_dr1.snap;
BEGIN stdbyperf.STATSPACK_test_dr_chn_test_dr1.snap; END;

*
ERROR at line 1:
ORA-02085: database link STDBY_LINK_TEST_DR_AM_HOST connects to TEST_DR
ORA-06512: at "STDBYPERF.STATSPACK_TEST_DR_CHN_TEST_DR1", line 59
ORA-06512: at "STDBYPERF.STATSPACK_TEST_DR_CHN_TEST_DR1", line 5445
ORA-06512: at line 1

After we got the above error, standby statspack snapshots stopped working. Let's try now as STDBYPERF:

SQL> conn stdbyperf
Enter password:
Connected.

SQL> alter session set global_names=false;

Session altered.

SQL> execute STATSPACK_test_dr_chn_test_dr1.snap;
BEGIN STATSPACK_test_dr_chn_test_dr1.snap; END;

*
ERROR at line 1:
ORA-01400: cannot insert NULL into
("STDBYPERF"."STATS$STATSPACK_PARAMETER"."DB_UNIQUE_NAME")
ORA-06512: at "STDBYPERF.STATSPACK_TEST_DR_CHN_TEST_DR1", line 382
ORA-01403: no data found
ORA-06512: at "STDBYPERF.STATSPACK_TEST_DR_CHN_TEST_DR1", line 4127
ORA-06512: at "STDBYPERF.STATSPACK_TEST_DR_CHN_TEST_DR1", line 101
ORA-06512: at line 1


Conclusion:

If we try to take the snapshot as a user other than STDBYPERF, some metadata gets updated in the standby repository tables and messes up the standby repository. After that we can no longer take standby snapshots, and we have to drop and recreate the standby statspack on the primary database using the sbdrop.sql and sbcreate.sql scripts.
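
If the repository does get into this state, the recovery path looks roughly like this (a sketch; run on the primary, then repeat the sbaddins.sql steps shown earlier):

SQL> conn / as sysdba
SQL> @?/rdbms/admin/sbdrop.sql      -- drops the STDBYPERF schema and its repository
SQL> @?/rdbms/admin/sbcreate.sql    -- recreates the repository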



How to generate the standby statspack report?

Login as STDBYPERF user only

SQL> @?/rdbms/admin/sbreport.sql

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DB Unique Name                 Instance Name
------------------------------ ----------------
test_dr_chn                    test_dr1

Enter the DATABASE UNIQUE NAME of the standby database to report
Enter value for db_unique_name: test_dr_chn
You entered: test_dr_chn

Enter the INSTANCE NAME of the standby database instance to report
Enter value for inst_name: test_dr1
You entered: test_dr1


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.



Listing all Completed Snapshots

                                          Snap
Instance       Snap Id   Snap Started    Level Comment
------------ --------- ----------------- ----- --------------------
test_dr1             1 06 Jul 2012 20:47    10
                     2 06 Jul 2012 21:16    10



Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 2
End   Snapshot Id specified: 2


Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is sb_test_dr_chn_test_dr1_1_2.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: sb_test_dr_chn_test_dr1_1_2.txt

Thanks !!!





ERROR OGG-01038 Oracle GoldenGate Capture for Oracle, extract_rpm.prm: Cannot fetch required data from table due to missing key columns


Recently I ran a test case on my desktop for a GoldenGate backout plan.

Change details:

1.     Databases are replicated using GoldenGate.
2.     Create some tables/indexes on both source and target.
3.     Add a column to existing tables on both source and target.
4.     Configure these changes in GoldenGate (unidirectional - new tables / existing column changes).

The backout plan is:

1.     Drop the tables/indexes on both source and target.
2.     Drop the column that was recently added on both source and target.
3.     Remove these changes from GoldenGate.

Complete backout plan steps

1.     Take the invalid objects list from both source and target.
2.     Make sure there are no active sessions connected to the tables whose columns are being changed.
3.     Make sure there is no replication delay between the source and target databases.
4.     Take a backup of the GG definition files/parameter files on the source and target databases.
5.     Bring down GoldenGate on both source and target.
6.     Execute the DB changes on both source and target.
7.     Make sure there are no invalid objects in the databases.
8.     Add the new table details to the *.prm files / extract files on the source; add the new table details to the target *.prm files also.
9.     Generate the GG definition files using the *.prm files on the source database.
10.  Copy the GG definition (*.def) file from source to target.
11.  Start GG on both sides.

I completed the DB backout and GG configuration changes successfully. But when I started the GG extract processes (step 11), they failed…

I checked the ggserr.log file and found the following error entries:

2012-07-27 18:40:35  ERROR   OGG-01038  Oracle GoldenGate Capture for Oracle, extract_1.prm:  Cannot fetch required data from table AM.XXXXXXXX due to missing key columns.
2012-07-27 18:40:35  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, extract_1.prm:  PROCESS ABENDING.

I had dropped some columns from AM.XXXXXXXX as part of the backout plan.

I tried many times to start GG. I read some forums that suggested checking the table structure and the primary key / unique key on both source and target databases, and dropping/recreating the supplemental log; I tried all of this, but it didn't help.

After two hours of attempts, I noticed that when GG and its processes started for the first time, they replicated a few records from the source to the target database. After that, GG was unable to extract/replicate the AM.XXXXXX table from the archive log files.

I confirmed that DML statements on AM.XXXXXXXX alone caused this issue.

So I skipped extraction/replication of the particular archive log file and finally started GG:

ggsci> alter extract extract_1, extseqno 7855, extrba 0

Archive log sequence 7854 contained the mismatched data, so we started the extract from sequence 7855.
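
Before repositioning an extract like this, it is worth confirming its current read checkpoint; a short GGSCI sketch (group name taken from the example above):

ggsci> info extract extract_1, showch      -- shows the current read checkpoint (log seqno / RBA)
ggsci> alter extract extract_1, extseqno 7855, extrba 0
ggsci> start extract extract_1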

Root cause:

Example:

1.             We stopped GoldenGate on both source and target.
2.             I am going to drop 3 columns from a table that has 10 columns in total.
3.             Before I dropped these 3 columns, a few DML statements ran on that table; they were not replicated to the target database because GG was shut down.
4.             I dropped the columns from the table and started GG.
5.             GG extracted and replicated the data from the archive log files for the other tables.
6.             While extracting/replicating our table's data, GG could not proceed due to the column mismatch.
(In the archive log files the table's records have 10 columns, while at extract/replicate time the table has only 7 columns, so GG failed.)
7.             I skipped that archive file in the GG extract and then started GG successfully.
8.             Using a diff script, we can sync up the data between the source and target databases (a minimal sketch follows).
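
For step 8, a minimal reconciliation sketch (assuming a database link named SRC_LINK from the target back to the source; the link name is an assumption) that flags rows present on one side only:

-- rows on the source but missing on the target
select * from AM.XXXXXXXX@SRC_LINK
minus
select * from AM.XXXXXXXX;

-- rows on the target but missing on the source
select * from AM.XXXXXXXX
minus
select * from AM.XXXXXXXX@SRC_LINK;

(MINUS works only when all compared columns are of comparable types, i.e. no LONG/LOB columns.)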



I hope this article helped you. Suggestions are welcome.
 
Thanks !!!








Database cloning using physical dataguard database data

Normally every DBA has done several database clonings; this one uses a somewhat different, new approach that I followed out of necessity. I would like to share it with you all.

Database architecture:

·         Both the source and its DR run in different data centers.
·         Create the new database on server 2 using source1 database data.


Pre-requisite:

·         The primary database (source1) total size is ~1.8 TB, with 1.0 TB used.
·         On the target server I have only ~900 GB of space available.

    I did not have enough storage space on the target server (server2) for the database cloning, and I could not wait for additional storage.

To reduce the Database size:

I planned to resize the data files that had free space on the source1 database (server 1):

1.     Resized the data files and freed up space on the source database (source1), reclaiming 500 GB.
2.     The same space was reclaimed on the target server's DR database (source1-DR database: server2).
3.     The current source1 database total size is 1.3 TB, and the target server has 1.4 TB of free space.

If I used an RMAN backup, there was no space to keep the backup on the target server (server 2), and copying ~2 TB of data files between servers in different data centers using RMAN active database cloning (from server 1 to server 2) would be difficult.

So I decided to clone the database using the DR database data, which resides on server 2. Our requirement was also to create the new database on the same
server (server 2). Using OS copy commands, I copied the datafiles and created the database in a short time, and no extra file system was needed for this cloning.

High level plan:

1.     SOURCE1 PRIMARY database force logging enabled – Checked and looks good
2.     Validate the datafiles on SOURCE1-DR  database using RMAN Validate command (Make sure there is no block corruption on SOURCE1 DR database)
3.     Disable the archive purge cronjob on both SOURCE1 and SOURCE1-DR databases
4.     Stop the MRP process on the SOURCE1-DR database for datafile consistency (the SCN should not change).

                        alter database recover managed standby database cancel;

                        $ ps -ef|grep mrp

                        oracle 13763     1   0   Mar 10 ?         104:06 ora_mrp0_SOURCE1

5.     Install a separate ORACLE_HOME for the new database.
6.     Create the parameter file and the respective directories for the new database.
7.     Use OS copy commands to copy the datafiles from the SOURCE1-DR database to the new database's datafile location.
8.     Once the datafiles are copied, start the new database in NOMOUNT stage.
9.     Create the control file for the new database using the SOURCE1-DR database control file trace.
10.  Recover the new database using the archive logs generated on SOURCE1 PRIMARY after the MRP process was stopped on the SOURCE1-DR database.
11.  Ensure the SCN timestamp is the same on all datafiles of the new database.
12.  Open the new database in READ WRITE mode.
13.  Start the MRP processes on SOURCE1-DR database

                        alter database recover managed standby database disconnect from session;

14.  Shut down the NEW database.
15.  Using the NID utility, change the DBID of the new database (a condensed sketch of steps 8-12 and 15 follows this list).
16.  Start the database.
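
A condensed SQL sketch of steps 8-12 and 15 (the pfile and script names are assumptions; the CREATE CONTROLFILE statement comes from the SOURCE1-DR control file trace, edited for the new database name and file locations):

SQL> startup nomount pfile='initNEWDB.ora'        -- assumed pfile name
SQL> @cr_control_newdb.sql                        -- hypothetical script holding the edited CREATE CONTROLFILE ... RESETLOGS statement
SQL> recover database using backup controlfile until cancel;   -- apply the archive logs generated after MRP was stopped
SQL> alter database open resetlogs;
SQL> shutdown immediate
SQL> startup mount
$ nid target=sys                                  -- changes the DBID; prompts for the SYS password and confirmation
-- after NID completes, open the database with RESETLOGS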

I hope this article helped you. Suggestions are welcome.
Thanks !!!

Streams capture: waiting for archive log - Wait Event #2

I found the “Streams capture: waiting for archive log” wait event in an AWR report.

Streams capture: waiting for archive log

Event                                      Waits   %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
-----------------------------------------  ------  ----------  -------------------  -------------  ---------
Streams capture: waiting for archive log   14,988       33.35                8,990            600       0.06

Details

This is an idle wait, so it is not a performance problem. It is just the capture process saying it has nothing to do because it is waiting for another archive log to become available. It is not wasting any system resources or affecting performance. No action is necessary for this wait.

How do I know whether an event is an "idle" event?

break on wait_class skip 1
column event_name format a40
column wait_class format a20

select
   wait_class,
   event_name
from
   dba_hist_event_name
where
   wait_class = 'Idle'
order by
   wait_class,
   event_name;
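
To check a single event directly, the same information is available in v$event_name:

SQL> select name, wait_class
     from   v$event_name
     where  name = 'Streams capture: waiting for archive log';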

About read by other session wait event:

 I hope this article helped you. I am expecting your suggestions/feedback.
 

Thanks !!!

how to add a column in goldengate?


How do we add a column to a table that is under GoldenGate replication?

Replication type : unidirectional

High level Steps:

·         Stop the GG on both source and target databases
·         Check the invalid objects list on both source and target databases
·         Add the column in tables on both source and target databases
·         Check the invalid objects list on both source and target databases
·         Configure these changes on GoldenGate
·         Start the GG and verify the data flow

Steps:

1.      Shutdown the GoldenGate on both source and target databases

Source Database

GGSCI > info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT_TAR     00:00:00      00:00:06
EXTRACT     RUNNING     PUMP_TAR    00:00:00      00:00:03

Before shutting down GoldenGate, make sure there is no latency between the source and target databases.


GGSCI > STOP EXT_TAR

Sending STOP request to EXTRACT EXT_TAR ...
Request processed.

GGSCI > STOP PUMP_TAR

Sending STOP request to EXTRACT PUMP_TAR ...
Request processed.


GGSCI > info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXT_TAR     00:00:00      00:00:38
EXTRACT     STOPPED     PUMP_TAR    00:00:00      00:00:28


Target Database

GGSCI > info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
REPLICAT     RUNNING     REP_TAR     00:00:00      00:00:06

Before shutting down GoldenGate, make sure there is no latency between the source and target databases.

                                                                                                                                                       
GGSCI > STOP REP_TAR

Sending STOP request to REPLICAT REP_TAR ...
Request processed.

GGSCI > info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     STOPPED
REPLICAT     STOPPED     REP_TAR     00:00:00      00:00:38

2.     Check the invalid objects list on both source and target databases.

Select owner,object_name,object_type,status,last_ddl_time from dba_objects where status='INVALID' order by 1,2;

3.     Add the columns on both source and target databases.

Alter table RB.TEST add (address varchar2(255));

Before adding the column, make sure there are no concurrent (active) sessions accessing the table RB.TEST (see the GGSCI sketch below for a supplemental-logging check as well):

select sid,serial#,username,status,terminal,logon_time,command from v$session where sid in (select /*+ rule */ sid from v$access where object in ('TEST') and owner='RB') and status='ACTIVE';

select * from v$sess_io where sid in (select sid from v$session where sid in (select /*+ rule */ sid from v$access where object in ('TEST') and owner='RB') and status='ACTIVE');
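
Since the table is replicated, it may also be worth re-checking supplemental logging after the DDL; a hedged GGSCI sketch (the credentials are placeholders):

GGSCI > dblogin userid gguser, password ggpassword
GGSCI > info trandata RB.TEST
GGSCI > add trandata RB.TEST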

4.     Check the invalid objects list and compiled the invalid objects on both source and target databases.

Make sure no new objects became INVALID after the DDL changes. If any did, compile them.
Select owner,object_name,object_type,status,last_ddl_time from dba_objects where status='INVALID' order by 1,2;

5.     Configure these changes in GoldenGate

DEFGEN utility → generate the table/column mapping definition file using the *.prm file on the source database, and copy the definition file to the target database.

            Backup of *.prm files and *.def files

1)     Take a backup of the *.prm and *.def files on the source and target databases (the *.prm and *.def files are in different locations):

cp ./dirprm/defgen_tar.prm ./dirprm/defgen_tar.prm.08052012
mv ./dirdef/defgen_tar.def ./dirdef/defgen_tar.def.08052012

(Move the *.def file to another name; otherwise, when we run the defgen utility, we get a "file already exists" error and the definition generation abends.)
2)     Generate the definition files with new table column mapping

defgen paramfile ./dirprm/defgen_tar.prm 

defgen_tar.prm - parameter file containing the source database table details and the target definition-file settings.

                                    defgen will generate the definition file at ./dirdef/defgen_tar.def

                                    Copy the file to the target server's definition location.
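
For reference, a minimal defgen parameter file sketch (the login credentials are placeholders; the table name follows the example above):

-- ./dirprm/defgen_tar.prm
defsfile ./dirdef/defgen_tar.def
userid gguser, password ggpassword
table RB.TEST;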

6      Start GoldenGate on both source and target databases

Source Database
                                                                                                                                                       
GGSCI > START EXT_TAR

Sending START request to EXTRACT EXT_TAR ...
Request processed.

GGSCI > START PUMP_TAR

Sending START request to EXTRACT PUMP_TAR ...
Request processed.


GGSCI > info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     START     EXT_TAR     00:29:21      00:00:28
EXTRACT     START     PUMP_TAR    00:00:00      00:00:38


Target Database
Start the Manager and REP processes

GGSCI > info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
REPLICAT     RUNNING     REP_TAR     00:00:00      00:00:06


Once GG is started on both databases, check the ggserr.log for any errors.

Use the command below to check how many transactions occurred on each table.

GGSCI > stats REP_TAR table *, reportdetail, daily

Reference: Oracle documentation

 Thanks !!!





Time to celebrate - Aug 2012 :-)

My blog's visitor count crossed 150,000, and I just want to celebrate this moment.
Last August we crossed 50,000 viewers over 3 years (May 2008 to August 2011). Reaching another 100,000 viewers within a single year is really good, and I am personally very happy with what I am doing.

I hope that by next August (2013) the overall blog visitor count will reach around 500,000.

I really thank everyone who has visited my blog and provided suggestions/comments.

This blog has built good relationships with all of you whom I have never met. Whenever I publish articles on my blog, I gain a lot of confidence. I am trying to write more articles, and good ones, this year also :-)

I hope your support will continue in the future.
Thanks !!!
