Upgrades Complete

Over the past two weekends we've upgraded our Fusion Middleware stack and our E-Business Suite.  To say I put in quite a few extra hours would be an understatement.  The Fusion upgrade (to 11.1.3) went pretty smoothly, other than the fact I was up sick the previous night.  I didn't get off the couch until an hour before the upgrade was to start!  I have to say, working while you're sick sucks.

Last weekend we upgraded our EBS environment: 12.0.6 to 12.1.2, 10g DB to 11gR2, and migrating to a 64bit server.  I started at 2pm on Friday and finished at 6am Monday morning.  During that time I slept for a grand total of 9 hours.  I only hit one major issue during the upgrade, which required opening a P1 SR with Oracle.  It took about 3 hrs to resolve; it would have been quicker, but my analyst went to lunch!

The other issue was performance.  Our DEV server crashed and burned a couple of months ago, so the networking guys gave us a loaner.  It didn't even cross my mind to check the specs and compare it to our production hardware before I did a dry run to work out the timings.  I managed to shave 18 hrs off the upgrade, which I assumed was because of a few changes such as applying the patches with nocompiledb,nocompilejsp,noautocfg.  I expected the 12.1.1 patch to finish around 2am on Saturday morning.  2am came and went with the patch still running...  3am, 5am, 7am, finally at 8am it finished!

I couldn't find any significant performance issues, so I looked into the server itself.  It turns out the loaner server is twice as powerful as our prod box!  I should have realized it during my dry run, but I was so busy I didn't have time to investigate.  With 2 major upgrades back to back, the last few weeks have been crazy.

Well, it's all done now; the environment has gone live and I haven't heard of any major issues.  I did have issues cloning and will put up a post on that shortly.


FMW 11g – Upgrading Sun JDK

Recently I upgraded a Fusion Middleware environment.  As part of this, Java had to be upgraded, since the version I was on, 1.6.0_16, is not certified for the new release.

I checked the Fusion documentation and the WebLogic documentation, and about the only thing I could find was:

Upgrading Sun JDK in the Oracle Home Directory

Basically it says to install the new JDK version in the same location as the existing JDK.  That wouldn't be a problem, except that in my infinite wisdom I installed it under the directory name jdk1.6.0_16.  Putting a new version, 1.6.0_22, in that directory would be confusing and isn't a best practice.  If I had my time back I would have named the directory jdk1.6 and this wouldn't be an issue.
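One way to avoid this problem in the future (a sketch of the idea only, not something from Oracle's docs; /tmp/jdk-demo below stands in for a real install base like /var/u01/app/oracle/product) is to install each JDK under its versioned name and point a version-neutral symlink at the current one:

```shell
# Install each JDK under its real, versioned directory name,
# then point a generic symlink at the current one. WebLogic
# config then only ever references the jdk1.6 path.
BASE=/tmp/jdk-demo          # stand-in for your install base
mkdir -p $BASE/jdk1.6.0_22
ln -sfn $BASE/jdk1.6.0_22 $BASE/jdk1.6

# Upgrading later is just re-pointing the link:
readlink $BASE/jdk1.6
```

With this layout, upgrading the JDK never requires touching the WebLogic configuration files at all.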

So how do I fix it?  I searched all the WebLogic domain configuration files, startup scripts and the console, but could only find references to the JDK directory in two files:
  1. One reference in $WLS_HOME/common/bin/
  2. Two references in $WEBLOGIC_DOMAIN_HOME/bin/

     SUN_JAVA_HOME="/var/u01/app/oracle/product/jdk1.6"
     JAVA_HOME="/var/u01/app/oracle/product/jdk1.6"

     Note: If you have multiple WebLogic domains you will need to change each domain's file.
I bounced the domains and checked the start logs to verify that the new Java home was being used.  For example, grepping the AdminServer log for my domain:

grep  jdk1.6 /u01/app/oracle/product/fmw11g/user_projects/domains/SOAdomain/servers/AdminServer/logs/AdminServer.log

now shows references to the new JDK home.  A few of the matching lines:

java.runtime.version = 1.6.0_22-b04
java.version = 1.6.0_22

You can grep it for your old version as well to make sure it isn’t being referenced.

I also opened an SR to make sure that there weren't other files that should be modified.  Oracle Support didn't say there were, but did say that I could re-open the SR if I encountered any issues.  Not sure if that's a good thing or not.  ;)


OBIEE: Unable to Login. Access Prohibited.

In a TEST environment today, users reported they were unable to log in.

The NQServer log showed:

2010-11-03 17:03:00
     [nQSError: 13011] Query for Initialization Block 'Authentication' has failed.

At first I thought it was an issue talking to our IDM server, since we had just set up that integration today, but that was quickly ruled out.  Finally I found a post on Oracle's forums where a user had accidentally denied access to dashboards in the Privilege Administration screen.

A forum member suggested appending some parameters to the URL which brings you directly to the Privilege Administration screen:


Much to my relief it worked!   For some reason access to Dashboards was not permitted:


I'm not sure why, but I enabled it for everyone and passed it back to the OBIEE administrator to reset the privileges.  At the time he was importing a catalog.  I'm not very familiar with that part of OBIEE, so I'm not sure if that was the culprit.


TEMP Space issues while installing WebLogic?

Trying to install WebLogic today and I hit the following:


By default the installer extracts files to /tmp with a directory name in the format of bea<numbers|timestamp?>.tmp and uses 918M of space on Linux. 

Usually you can use the OS environment variable TMPDIR to specify another temporary directory, but since the installer is a Java program it's ignored.

Instead you have to pass the java.io.tmpdir property to Java when you launch the installer, for example:

$JAVA_HOME/bin/java -Djava.io.tmpdir=/u01/tmp -jar wls1032_generic.jar


Forget to configure your application server as an Administration Instance?

I've installed Oracle Application Server quite a few times, and today I had to perform another install.  Afterwards I tried to log in to the console but was greeted by a lovely message:

Oops! This link appears to be broken.

The apache error logs showed:

File does not exist: /u01/app/oracle/product/apps1013/Apache/Apache/htdocs/em/console

The access log:

"GET /em/console HTTP/1.1" 404 342

It turns out, while installing the application server I forgot to select a check box which designates the server as an administration instance.


In a single instance environment, such as the one I am setting up, you need to select this option to be able to manage the instance.   If this was a node in a cluster, you should only designate one instance as the administration instance.

So what do you do if you forgot to select it, like I did?  Nothing to worry about; you just need to edit two files:

1. Open $ORACLE_HOME/j2ee/home/config/server.xml

Change start="false" to start="true" on the ascontrol line.  It should look like this:

<application name="ascontrol" path="../../home/applications/ascontrol.ear" parent="system" start="true" />

2. Edit $ORACLE_HOME/j2ee/home/config/default-web-site.xml

Add ohs-routing="true" to the ascontrol line.  It should look like this:

<web-app application="ascontrol" name="ascontrol" load-on-startup="true" root="/em" ohs-routing="true" />
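If you have several instances to fix, the two edits can be scripted with sed.  Here's a sketch run against scratch copies of the files (the /tmp/ascontrol-demo directory and the sed patterns are mine; always back up the real files under $ORACLE_HOME/j2ee/home/config first):

```shell
# Scratch copies standing in for $ORACLE_HOME/j2ee/home/config
CFG=/tmp/ascontrol-demo
mkdir -p $CFG
cat > $CFG/server.xml <<'EOF'
<application name="ascontrol" path="../../home/applications/ascontrol.ear" parent="system" start="false" />
EOF
cat > $CFG/default-web-site.xml <<'EOF'
<web-app application="ascontrol" name="ascontrol" load-on-startup="true" root="/em" />
EOF

# server.xml: flip start="false" to start="true" on the ascontrol line
sed -i '/application name="ascontrol"/s/start="false"/start="true"/' $CFG/server.xml

# default-web-site.xml: add ohs-routing="true" before the closing "/>"
sed -i '/web-app application="ascontrol"/s|/>|ohs-routing="true" />|' $CFG/default-web-site.xml

grep ascontrol $CFG/server.xml $CFG/default-web-site.xml
```

Point CFG at the real config directory to apply the same edits in place.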

Now you just need to restart the application server:

$ORACLE_HOME/opmn/bin/opmnctl stopall;
$ORACLE_HOME/opmn/bin/opmnctl startall;

and if you go to http://servername:port/em/console (substituting your HTTP port), the Application Server Control should now work.


R12.1.2 Relinking issues after 64bit Migration

We have a large R12 upgrade coming up on the horizon: upgrading to R12.1.2 and 11gR2, and migrating to 64bit.  As part of the post-64bit-migration steps you have to relink the application programs.  A number of modules wouldn't relink though:


g++: /u01/TEST/apps/apps_st/appl/sht/12.0.0/lib/ilog/6.2/libschedule.a: No such file or directory
g++: /u01/TEST/apps/apps_st/appl/sht/12.0.0/lib/ilog/6.2/libsolveriim.a: No such file or directory
g++: /u01/TEST/apps/apps_st/appl/sht/12.0.0/lib/ilog/6.2/libconcertext.a: No such file or directory
g++: /u01/TEST/apps/apps_st/appl/sht/12.0.0/lib/ilog/6.2/libsolver.a: No such file or directory
g++: /u01/TEST/apps/apps_st/appl/sht/12.0.0/lib/ilog/6.2/libconcert.a: No such file or directory
make: *** [/u01/TEST/apps/apps_st/appl/eng/12.0.0/bin/ENCACN] Error 1
Done with link of eng executable 'ENCACN' on Tue Oct 19 15:32:10 EDT 2010

Relink of module "ENCACN" failed.

I opened an SR with Oracle but continued to research the problem.  I didn't see anything on Metalink for R12, and Google gave the same result.  I searched the entire filesystem to see if the libraries existed elsewhere, but no luck.
Then I noticed a zip file in /u01/TEST/apps/apps_st/appl/sht/12.0.0/lib/ and listed the contents to see what was in it.  Here is a snip:

[oravis@myserver lib]$ unzip -l
$$Header: 120.5 2006/10/02 17:00  juliang ship                       $
Length     Date   Time    Name
--------    ----   ----    ----
0  05-02-06 02:12   ilog/
0  09-27-06 18:09   ilog/6.2/LINUX/
2  09-27-06 18:08   ilog/6.2/LINUX/libconcert.a
2  09-27-06 18:08   ilog/6.2/LINUX/libconcertext.a
8  09-27-06 18:08   ilog/6.2/LINUX/libcplex.a
2  09-27-06 18:08   ilog/6.2/LINUX/libilocplex.a
8  09-27-06 18:08   ilog/6.2/LINUX/libschedule.a
0  09-27-06 18:08   ilog/6.2/LINUX/libsolver.a
0  09-27-06 18:08   ilog/6.2/LINUX/libsolverfloat.a
0  09-27-06 18:08   ilog/6.2/LINUX/libsolveriim.a
--------                   -------
665793708                   72 files

These are exactly the libraries adrelink was looking for.  I extracted the files, changed to the LINUX directory, moved the files up to the parent directory, relinked (via adadmin), and it completed successfully.
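The fix boils down to one move: the zip extracts into ilog/6.2/LINUX, but adrelink looks for the .a files directly under ilog/6.2.  Here's the move step sketched in a scratch directory with empty stand-in files (/tmp/ilog-demo stands in for the real sht/12.0.0/lib directory):

```shell
# Scratch layout mimicking what the zip extracts
LIB=/tmp/ilog-demo
mkdir -p $LIB/ilog/6.2/LINUX
touch $LIB/ilog/6.2/LINUX/libconcert.a $LIB/ilog/6.2/LINUX/libsolver.a  # stand-ins

# The actual fix: move the platform libraries up one level,
# from ilog/6.2/LINUX to ilog/6.2 where adrelink expects them
mv $LIB/ilog/6.2/LINUX/*.a $LIB/ilog/6.2/

ls $LIB/ilog/6.2
```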


Looking back on the problem, it's strange I didn't encounter it before.  This environment is a copy of an existing R12.1.2 32bit environment, and the only thing I'm changing is migrating it to 64bit, so I should have hit these relinking issues before.  I checked the source environment and the libraries are indeed there!

The zip file contains the libraries for a number of platforms: AIX, HP-UX, etc.  So the only thing I can think of is that since I'm doing a platform migration, these libraries get removed to make sure the proper ones are used; however, the process doesn't then extract the correct libraries from the zip file.


#oow10 – Day 4 (Wednesday) Summary

I started the day with a quick walk around the exhibition hall in Moscone South.  Before long I found my way to the demo grounds and noticed a sign for 12.1 Upgrade Best Practices.  We are in the process of an upgrade to 12.1.2, so I stopped by and asked a few questions.  Before I knew it I had missed my first session!  I have to say, one of the best things about OpenWorld is the ability to meet the people behind the scenes at Oracle.

The first session I went to was S319052, Applications Performance Panel.  I attended one of Ahmed Alomari's EBS performance sessions back when he was still at Oracle; to this day I would say it was one of the best EBS sessions I have attended.  This session, as the title says, was a panel, and although there was some good advice, fielding a lot of questions slowed the pace down quite a bit.

The next session I attended was S314886 - Oracle Database 11g Upgrade Essentials for Oracle E-Business Suite Environments.   The last session I attended was S313251 - Stats with Confidence.  I would recommend both of these sessions.  Elke Phelps did a great job of listing 11g features as they pertain to EBS, including a step by step guide of a recent upgrade she performed.

The stats session by Arup Nanda talked about the distribution of data and how it affects statistics and performance as it changes over time.  He continued with some new 11g features such as Pending Statistics: how to use them in a private session to determine if they will have a negative effect on performance, and if so, how to reverse the changes.

Larry’s keynote was also on Wednesday and as usual it drew a packed crowd…  Unfortunately it seemed to be a repeat of his Sunday night keynote.  I left early so I could attend my last session but as it happens, the session was in one of the rooms broadcasting the keynote so it was delayed.


I have to agree with Larry and his views on Cloud Computing. is an application on the web, not cloud computing.   However, the best definition of cloud computing had to be the street interview where the guy said that it was invented a number of years ago by the airlines.  Internet access at 35k ft, over the clouds. ;)

The appreciation event was also that evening, so I had to rush back to the hotel and get ready.  This year my wife took the opportunity to do some shopping in San Francisco and she is a huge Black Eyed Peas fan.   It wouldn’t have been wise for me to be late.    I was actually looking forward to Berlin.  The last time they played at OpenWorld I arrived just as they were finishing their last song.  I’m glad I was able to finally see a full set.

I have to say though, out of all the OpenWorld appreciation events I have attended, it's safe to say that the Black Eyed Peas drew the largest response.  It was an awesome concert.  Shortly after the Steve Miller Band came on stage we decided to head out.  I would have liked to stay a little longer but the boss was getting tired.  Amateur!

I took quite a few pictures; a few are below.  Not bad for sitting all the way back in the stands with my old Canon S3.



#oow10 – Day 3 (Tuesday) Summary

I have to apologize for the delay in publishing my thoughts on OpenWorld this year.  I was pretty busy, and each night I pretty much collapsed once I arrived back at the hotel.  I'll try to get these out over the next couple of days.


The day started with a keynote by Tom Kilroy and I have to say I really enjoyed it.  In retrospect, IMHO, it was probably the best one of the week.  Mr. Kilroy talked about how connected our world is becoming, the number of devices connected to the internet and the amount of traffic being generated.  Since the internet 'began', almost 150 exabytes of data had traversed the net; 2010 alone adds another 175 exabytes, bringing the total to 325 exabytes.  With an estimated 10 billion new devices connected by 2015 and the explosive popularity of video, that amount of data is going to be staggering.  Mr. Kilroy continued by talking about how to structure that data, making it more relevant and less time-consuming to find what you want.

Thomas Kurian was next on stage where he talked about cloud computing, which seemed to be the primary focus of many vendors both in keynotes and the exhibition floor.   I remember years ago Oracle talked about using many cheap x86 computers in a grid computing environment instead of huge SMP systems but I guess the reality is that those systems are hard to maintain.  Now Oracle has Exalogic + Exadata, where a few high end, tightly integrated reliable systems deliver outstanding performance and scalability.  

Thomas Kurian also talked about systems management and how you can use Business Performance Indicators (BPIs) to get a better view of how your system is performing than CPU, disk, etc.  During the demo, the root cause of a performance issue was determined to be a CPU bottleneck.  I find this interesting because I would have been paged shortly after the CPU maxed out if that were my environment; the BPI approach is pretty much the opposite of how most DBAs work today.

The first session I attended was Explaining the Explain Plan (S316955), and I have to say I really enjoyed it.  So much material was covered that I couldn't possibly do it justice in a few lines, so you should definitely download this presentation.  You can also follow the Optimizer team's blog.  A second presentation that afternoon, S317019, built on the material covered here, but unfortunately it was full.  A friend of mine attended and said it was good as well.

In a nutshell this presentation covered what an execution plan is, how to generate it, and definitions around what makes a good plan: cost, performance, cardinality, etc.  It covered causes of incorrect cardinality estimates by the optimizer and their solutions; access paths, how the optimizer chooses which one to use, and situations where it can choose the wrong path; and join types, causes of incorrect joins, and situations where join orders are wrong.  To drive the points home she included examples and asked the audience questions.

My next session was Tuning All Layers of the Oracle E-Business Suite Environment S317108.  This session was very good as well and was broken up into the various layers such as database tier, applications tier, concurrent manager, etc.   What I liked about this session was that they talked about some common performance problems, their causes and suggestions on how to resolve them or how to gather the right information to send to Oracle support.

Another good session on Tuesday was Oracle Fusion Middleware 11g: Architecting for Continuous Availability (S317391).  It talked about how to reduce the impact of both planned and unplanned outages, and how to upgrade your deployed applications and apply minor WebLogic patches with no downtime.  A good review of HA features was in there as well.  Since I am new to WebLogic this provided me with a good overview, but for those with experience there may not be much here for you.

The last session I attended was SQL Tuning for Smarties, Dummies, and Everyone in Between (S317295).  Arup Nanda talked about the typical challenges DBAs face with SQL tuning, from 'queries from hell' to dealing with data that evolves over time.  Jagan Athreya continued the presentation by talking about the new features of 11g and 11gR2.

One of the features that caught my eye was the ability in 11gR2 to save all the metadata related to a particular SQL statement as an interactive report.  It looks like it contains all the information you would need to identify who is executing it, bind variables, explain plan and metrics.   

Another 11gR2 feature is the ability to monitor PL/SQL, so you can figure out where PL/SQL blocks are spending most of their time.  The session continued with the common problems that cause SQL to go bad (optimizer stats, application issues, cursor sharing, etc.) and how these new database features can help you.

Overall,  Tuesday had some great sessions.


#OOW10 – Day 1 and 2 Summary

I've barely had a chance to sit down and collect my thoughts.  A short while ago I returned from my first NFL game.  It was a crazy experience; it's insane the way fans get into the game.  I think I high-fived more people in a single quarter than at all the NHL games I've ever been to.

So let's recap a bit.  Saturday I arrived shortly after lunch and checked into the hotel.  Our room was next to the elevator.  I always thought people chose rooms away from the elevator because of noise from foot traffic; it didn't cross my mind that the elevator passing by would cause an entire wall to vibrate!  Luckily they were able to switch me to another room the next day.

Sunday I attended a number of user group sessions and in general found them to be very useful.  At the OAUG SysAdmin SIG session I found out about the Mismanaged Session Cookie bug affecting EBS users; for more information you can check out Steven Chan's blog.  We are in the process of upgrading to R12 and the version of Java we are using is affected.  The first thing on my plate when I get back to the office is to downgrade.

I really enjoyed Resolving the Free Buffer Waits Event, and it's too bad he had to rush through the material.  Craig was very entertaining to listen to and the hour went by very quickly.

Today I didn't find the sessions as useful, which at least one other attendee noticed as well.  That's not to say they were bad, just not as good. ;)  After the keynote, my first session was Using Oracle VM to Support Test and Development in Oracle E-Business Suite.  I've been supporting a couple of R12 environments running on Oracle VM for a few months now, but I was disappointed not to hear anything I didn't already know.  About the only useful tidbit was Note 464754.1, FAQ: Certified Software on Oracle VM.  Basically a trump card if Oracle Support says to reproduce an issue on physical hardware.

My next session was Managing Customizations in Oracle E-Business Suite.  The session didn't really talk about managing customizations at all, which was disappointing; it focused mostly on environment and change control strategy.  Our environment doesn't have a lot of customizations, and I was hoping to hear how large EBS implementations support them.

I've been following Richard Foote's blog and it was nice to finally see him in person.  I think it's safe to say that he knows his indexes!  A lot of useful information in this session, and I'm definitely going to have to download the presentation once it's available.

The last session I attended was Managing Oracle WebLogic Server: New Features and Best Practices.  Based on the session description I was expecting to hear how to troubleshoot problems in WebLogic; however, it was basically a discussion of using the WebLogic management pack for Enterprise Manager.  For me that wasn't a bad thing, since we have purchased it but haven't installed it yet.  Now that I've seen what it can do, I am anxious to get it up and running.  The session didn't really contain any tips though; it would have been nice to see some common WebLogic Server support issues and the steps used to diagnose and resolve them.

As with each OpenWorld there has been a slew of announcements.  The ones that caught my eye were the upcoming release of Solaris 11 and the Unbreakable Linux kernel.  There were plenty of hardware announcements as well.  I listened with interest but didn't really take notes, since I work for a small company and we would never need something so powerful.

So far the only really annoying thing I have noticed is the number of people talking loudly during the keynotes, as well as the number of people who support mission critical environments but can't seem to figure out how to put their phones on vibrate!

So that's it… Day 3 starts tomorrow!


Agenda for OpenWorld 2010

I'm happy to say that this year I will be attending OpenWorld as part of the blogger program.  My first OpenWorld was back in 1998 (I believe), held in Los Angeles.  I've attended 5 times since then, and every year I pick up a lot of good information and have a lot of fun.

Today I reviewed my agenda and (hopefully) finalized it.  In a few slots there are multiple sessions I would like to see.  It's often hard to choose one or the other, so I pick whichever one is more applicable to my day-to-day job.  I can always download the presentations for the others afterwards.

Most of the sessions I will be attending are Fusion and E-Business Suite related.  We are in the process of upgrading our Fusion stack and moving E-Business Suite to 12.1.2 on Oracle 11g.

I leave Saturday morning and if everything goes according to plan I should be in San Francisco shortly after lunch.   

S318417 OAUG SysAdmin SIG
S318375 OAUG EBS Applications Technology SIG
S318617 IOUG: Resolving the Free Buffer Waits Event
S314960 Performance/Capacity Trend Analysis with Automatic Workload Repository in 11g
S316387 Using Oracle VM to Support Test and Development in Oracle E-Business Suite
S315658 Managing Customizations in Oracle E-Business Suite
S319069 A Detailed Analysis of Indexing New Features in Oracle Database 11g R1 and R2
S318126 An Oracle E-Business Suite Integration Primer: Technologies and Use Cases
S317063 Managing Oracle WebLogic Server: New Features and Best Practices
S316955 Explaining the Explain Plan: Interpreting Execution Plans for SQL Statements
S317108 Tuning All Layers of the Oracle E-Business Suite Environment
S318968 Oracle Fusion Middleware Management
S317391 Oracle Fusion Middleware 11g: Architecting for Continuous Availability
S317295 SQL Tuning for Smarties, Dummies, and Everyone in Between
S318119 Oracle E-Business Suite Technology Certification Primer and Roadmap
S319052 Applications Performance Panel
S314886 Oracle Database 11g Upgrade Essentials for Oracle E-Business Suite Environments
S313251 Stats with Confidence
S317114 What Else Can I Do with System and Session Performance Data?
S318130 Personalize, Customize, and Extend Oracle E-Business Suite User Interface
S316789 Manage the Security of Your Oracle Database, Middleware, and Applications
S317066 Deep Java Diagnostics and Performance Tuning: Expert Tips and Techniques
S318121 Oracle E-Business Suite Applications Technology: Diagnostics and Troubleshooting

EBS: DB Upgrade to 11gR2 – Autoconfig Fails

Today I hit an issue upgrading our database to 11gR2.

The main metalink note which details the steps needed is: Interoperability Notes EBS R12 with Database 11gR2 [ID 1058763.1]

Step 22 involves implementing autoconfig in the new database home.  However, when I ran $ORACLE_HOME/appsutil/bin/ it would fail.

Checking the logfile, I found it was failing with ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

Metalink has an article which discusses the issue:

ORA-12504 When Using (HOSTNAME) Method For 11G Client/Database [ID 556996.1]

The note goes into a fair bit of detail about why this error is happening and how to resolve it.  In a nutshell, 11g expects the service name to be specified in the connect string.  If one isn’t specified then it uses the default service name specified at the listener level.   If the listener is not configured with a default then an error is thrown: 

ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

Prior to 11g, if you didn't specify the service name, then the connect string alias was used instead.  In the case of the following connection string, VIS is the alias:

sqlplus apps/pass@VIS

The solution is to configure the listener with a default service name using the DEFAULT_SERVICE_listener_name parameter.  I added the following to my listener.ora ifile, reloaded the listener and re-ran it successfully.  Note: if you add it directly to the listener.ora file and not the ifile, the change will be lost when you run AutoConfig.
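The exact lines I added didn't survive in this post, but as an illustration only (assuming an EBS database listener named VIS, since EBS listeners usually take the SID's name, and a service also named VIS; substitute your own listener name and service) the ifile entry would look something like:

```
DEFAULT_SERVICE_VIS = VIS
```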


Since this is an EBS environment I always search to make sure there are no known issues, but I was surprised to find nothing.  I'm wondering if I missed something in the upgrade steps.  Have you upgraded to 11g?  Did you hit this issue?


I hit the following error trying to install SOA on a Linux x86-64 environment.  It seems a few people have hit this issue, so I thought I would post the solution I found.  It's not much of a "fix", but while researching the problem I noticed some questionable suggestions, including deleting files.  Who knows, they may have opened an SR and that's what they were told to do.  Regardless, here is what worked for me.

On the Select Domain Source screen:


I would get the following error when trying to select "Oracle SOA Suite":


The solution I found was just to select the products you want from the bottom up: Oracle JRF first, Oracle WSM Policy Manager second, etc.  Looking at the error above you can see that there are multiple dependencies, one being Oracle WSM Policy Manager.  If you select them first, you don't get the error.

If you're using Windows, try running the config.cmd script as an Administrator.


Web Home Upgrade – OPMN is failing to start

As part of our upgrade from 12.0.6 to 12.1.2, the Web Oracle Home (10.1.3) needed to be upgraded.  I encountered the following issue:


I actually encountered this same error while upgrading two cloned environments, and each time the cause was different.  The easiest thing to check is that your ORACLE_CONFIG_HOME is pointing to the $INST_TOP/ora/10.1.3 directory.  If that's ok and this is a cloned environment, rebuild your inventory.  For a complete guide on how to rebuild an R12 inventory, check Metalink note How to create, update or rebuild the Central Inventory for Applications R12 [ID 742477.1].

If that doesn’t work, try creating a clean inventory: How to Create a Clean oraInventory in Release 12 [ID 834894.1]

Both of those notes talk about running the $IAS_ORACLE_HOME/appsutil/clone/ script.  I also hit an error running it:

NON-COMPLIANT: /u03/VIS/apps/tech_st/10.1.3/oraInst.loc does not point to an inventory inside the current ORACLE_HOME
Rapid Clone only supports oraInst.loc at that location if its content points to an inventory inside the same ORACLE_HOME
Please make the necessary changes to the following file:

The weird thing here is that I compared the oraInst.loc files from my first cloned environment upgrade and they are identical.  In both environments the oraInst.loc file is pointing to:


For the script I found OUICLI.PL Fails when Running adcfgclone with R12 if Global Inventory does not exist [ID 458653.1], which instructs you to create an oraInventory directory inside the 10.1.3 ORACLE_HOME, then modify the 10.1.3 ORACLE_HOME/oraInst.loc file to point to that directory.  Afterwards, re-run the cloning script.

I didn't run it; instead I re-ran the script and tried to apply the patch again.  This time the OPMN failing to start error did not occur.

Note 458653.1 says the cause is that the global inventory wasn't found under the default locations, or there were permission issues.  I verified that neither was the case here.

I believe the problem stems from the fact that I didn't totally clean this environment before cloning.  There was an existing oraInventory from a previous clone, along with some other Oracle products that are no longer used.

Hardware Partitioning with Oracle VM

I didn’t realize that Oracle VM could do hardware partitioning until the question was raised on Oracle-L and I looked into it.   I use Oracle VM for some Fusion Middleware and R12 environments but nothing overly complex.

I checked the Oracle VM FAQ document and one of the questions was, "How does partitioning relate to software licensing?"  The answer points to the following document, which talks about software vs. hardware partitioning:

That document refers to the following article which explains how to configure hardware partitioning:

The information can also be found on Oracle’s Wiki:


Why are scripts needlessly complex?

Tonight I'm restoring an environment to another server as a test.  The source server is supported by another group, so they use their own scripts to back up the database.  Before I started the restore I took a look at the scripts to see how the database is being backed up, file locations, etc., and what I found utterly shocked me.

To back up one database there were at least 6 scripts scheduled in cron, with a combined line count of almost 600 lines!  I couldn't believe it.  I try to keep things as simple as possible.  For example, here is one of my backup scripts:

#!/bin/sh
. /home/oracle/.bash_profile
. /usr/local/bin/oraenv << END
ORCL
END

cd /home/oracle/scripts
logfile=/home/oracle/scripts/log/rman_ORCL_LVL0.log.`date '+%d%m%y'`

rman target / nocatalog CMDFILE /home/oracle/scripts/rman_ORCL_LVL0.sql LOG $logfile
status=$?

if [ $status -gt 0 ] ; then
   mailx -s "[BACKUP][FAILED] ORCL LVL0" <<!
`cat $logfile`
!
else
   mailx -s "[BACKUP][SUCCESS] ORCL LVL0" <<!
`cat $logfile`
!
fi

echo "Backup files removed (4+ days OLD):"
echo `find /u03/backup/ORCL -mtime +4 -print`
find /u03/backup/ORCL -type f -mtime +4 -exec rm -f {} \;

echo "Archive logs removed (2+ days OLD):"
echo `find /u03/archive/ORCL -mtime +2 -print`
find /u03/archive/ORCL -type f -mtime +2 -exec rm -f {} \;

It doesn't get much simpler than that.  I send emails on both SUCCESS and FAILURE of the backup because I've seen cron stop working before.  I have filters in my mail client to separate them into different folders.  Each day I check for failed backups, and periodically I check the success folder to make sure my backups are working properly.  The rman_ORCL_LVL0.sql file basically contains:

   backup incremental level 0 database
   tag 'ORCL_LVL0';

I know there are exceptions, but in this case these databases are small and have simple backup requirements.  Yeah, I could spend a bit of time putting in variables to make it more generic, but I don’t manage hundreds of databases so that’s not a huge concern.
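For what it’s worth, making the script above generic doesn’t have to add much length either.  Here is a rough sketch of a parameterized version; all paths, the recipient address, and the dry-run option are hypothetical additions, not part of my actual script:

```shell
#!/bin/sh
# Sketch of a parameterized version of the backup script.
# Paths and the mail recipient are hypothetical -- adjust for your environment.
rman_backup() {
    SID=$1
    LEVEL=$2
    DRYRUN=$3

    SCRIPTDIR=/home/oracle/scripts
    BACKUPDIR=/u03/backup/$SID
    ARCHDIR=/u03/archive/$SID
    MAILTO=dba@example.com    # hypothetical alert address
    logfile=$SCRIPTDIR/log/rman_${SID}_LVL${LEVEL}.log.`date '+%d%m%y'`

    RMAN_CMD="rman target / nocatalog cmdfile $SCRIPTDIR/rman_${SID}_LVL${LEVEL}.sql log $logfile"

    if [ "$DRYRUN" = "dryrun" ] ; then
        # Print what would run instead of executing it
        echo "$RMAN_CMD"
        return 0
    fi

    $RMAN_CMD
    if [ $? -gt 0 ] ; then
        subject="[BACKUP][FAILED] $SID LVL$LEVEL"
    else
        subject="[BACKUP][SUCCESS] $SID LVL$LEVEL"
    fi
    mailx -s "$subject" $MAILTO < $logfile

    # Retention cleanup, same as the original script
    find $BACKUPDIR -type f -mtime +4 -exec rm -f {} \;
    find $ARCHDIR -type f -mtime +2 -exec rm -f {} \;
}

# Dry run: show the rman command that would be executed for ORCL level 0
rman_backup ORCL 0 dryrun
```

Still short enough to read at 3am, and one script covers every database on the box.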

As for the original 600 lines of scripts… yeah, it’s very generic, handles a multitude of scenarios, and probably even does your laundry, but IMHO it’s worthless if it can’t be quickly understood.  The last thing I want to be doing at 3am is trying to figure out someone else’s scripts because I was the lucky person on call.


iPad, iPhone and Webcache

I’ve been using an iPhone since February, and over that time I have used a bunch of websites running on Oracle, such as Grid Control and custom applications deployed on WebLogic and Oracle Application Server, all of which work fine.   I wasn’t asked to test our applications, but I happened to be in situations where my iPhone was quickly available.

I was kind of surprised when I was told one of our applications didn’t work properly on the iPhone or iPad.   When a user tried to access the site a Safari error would pop up:

“Safari can’t open the page <website> because Safari can’t establish a secure connection to the server <website>”

We also verified the problem existed on the desktop with Safari 5.

A quick search on Google turned up a ton of hits; none of them solved the problem, but they provided us with some ideas to try out.   We contacted our SSL certificate provider, and they sent us another chain certificate and a different root certificate which they said would resolve any issues we were having with Apple products.

We loaded them into Oracle Wallet Manager, but unfortunately that didn’t work either.  Back to square one.

A coworker enabled tracing in Webcache and noticed the following in the logfile:

[17/Jun/2010:10:48:46 -0400] [warning 11904] [ecid: 104838150494,0] SSL handshake fails NZE-29048

Since it caused an obvious error on the client side, I was surprised to see this logged only as a warning.  It just goes to show that if you’re trying to troubleshoot a problem, always enable tracing.

This error led me to Metalink note:

Internet Explorer Fails To Connect To Web Cache Via SSL If SSLV2.0 Is Unchecked - NZE-29048 [ID 342626.1]

By default Webcache sets SSLENABLED to SSLV3_V2H, which only supports SSL v2.0 and SSL v3.0, not TLSv1.  In the Metalink note they did an Ethereal sniff and found that IE tries to use TLSv1, but since it’s not supported by default, it can’t connect. I also found a post in the Safari forum of Apple Discussions which describes basically the same problem.

In our staging environment I tried the fix, changing SSLENABLED in $ORACLE_HOME/webcache/webcache.xml from SSLV3_V2H to SSL, restarted Webcache, and success!
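For reference, the change amounts to editing one attribute on the SSL LISTEN element in webcache.xml.  A rough sketch only (the element is abbreviated here, the port value is hypothetical, and the exact layout may differ between Webcache versions):

```xml
<!-- $ORACLE_HOME/webcache/webcache.xml (abbreviated sketch) -->
<LISTEN IPADDR="ANY" PORT="443" PORTTYPE="NORM" SSLENABLED="SSL">
  <!-- previously SSLENABLED="SSLV3_V2H"; wallet and other child elements unchanged -->
</LISTEN>
```

A restart of Webcache is required for the change to take effect.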

What made this problem slightly more confusing to troubleshoot was that we have another environment with the same version of Webcache.  We could access that application from our iPhone/iPad successfully.  The only difference is in that case SSL isn’t handled by Webcache but by an SSL accelerator. 

Why are R12 patches so large?

While performing patch analysis for some one-off patches I noticed quite a size difference between codelines (i.e. R12.AP.A vs. R12.AP.B).   For example, take a look at Payables patch 8733916.   The version compatible with AP.A is 35.4MB, while the AP.B version is only 9MB.

A coworker forwarded me a note from Metalink which describes the issue: 

Release 12: Why are One-Off Patches so Large? [ID 841218.1]

The note has an interesting chart which shows how many files are applied from a patch based on the codelevel.   Basically, a one-off patch contains all the files necessary to fix the problem on a base R12 release.   So to summarize the note: the more up to date you keep your EBS environment, the less work adpatch has to do in order to apply patches.

Background info:  adpatch performs a number of tasks when applying a patch.  One of them is to compare the versions of files supplied by the patch against those in your EBS environment.  Only newer files will be copied.
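Conceptually, that version check boils down to comparing dotted version strings component by component.  This is just an illustration of the idea (not Oracle’s actual code), using GNU sort’s version-sort option:

```shell
#!/bin/sh
# Illustration only: decide whether a patch's file version is newer than
# the installed one, the way a patch driver conceptually would.
# Returns 0 ("copy the file") when the patch version is strictly newer.
is_newer() {
    patchver=$1
    instver=$2
    [ "$patchver" = "$instver" ] && return 1
    # sort -V orders dotted version strings numerically, component by component
    highest=`printf '%s\n%s\n' "$patchver" "$instver" | sort -V | tail -1`
    [ "$highest" = "$patchver" ]
}

if is_newer 120.10 120.2 ; then echo copy; else echo skip; fi   # prints "copy"
if is_newer 120.2 120.10 ; then echo copy; else echo skip; fi   # prints "skip"
```

Note the second case: a plain string comparison would get 120.2 vs 120.10 wrong, which is why component-wise comparison matters.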

In my case, with patch 8733916, it’s basically the same situation.  R12.0 was released, I believe, back in January 2007, while R12.1 was released in May 2009.  So if a bug affects both versions, it’s not surprising that the patch would be smaller for R12.1, since it’s more up to date.   Now, I would assume this is highly dependent on the bug, but I haven’t yet seen a patch for R12.1 that is larger than its R12.0 counterpart.


Fusion Middleware and 11g DB Password Expiry

As a few DBAs have noticed, the 11g database has password expiry enabled by default.  This is not entirely a bad thing; I am in favor of this move.  However, if you’re not aware of this change it can cause you some problems with your Fusion Middleware (FMW) 11g environment.

Developers contacted me with the following error:

ORA-28001: the password has expired.  

Originally I didn’t even think of the repository accounts being an issue.  I assumed it was a password policy in Oracle Internet Directory (OID), or in WebLogic accounts they had created for deploying applications.     After those were verified, the only thing left was the database.

A quick look at dba_users showed a couple of accounts already expired or in grace status:

USERNAME                       ACCOUNT_STATUS                   LOCK_DATE EXPIRY_DA
------------------------------ -------------------------------- --------- ---------
DCM                            EXPIRED                                    12-MAY-10
ORASSO_PS                      EXPIRED                                    10-MAY-10
DEV_PORTAL                     EXPIRED                                    10-MAY-10
ODSSM                          EXPIRED                                    10-MAY-10
ORASSO                         EXPIRED(GRACE)                             20-MAY-10

You can view the password policy of the database default profile by looking at dba_profiles:

SQL> select profile, resource_name, resource_type, limit from dba_profiles where resource_type = 'PASSWORD';

PROFILE                        RESOURCE_NAME                    RESOURCE LIMIT
------------------------------ -------------------------------- -------- ---------------------
DEFAULT                        FAILED_LOGIN_ATTEMPTS            PASSWORD 10
DEFAULT                        PASSWORD_LIFE_TIME               PASSWORD 180
DEFAULT                        PASSWORD_REUSE_TIME              PASSWORD UNLIMITED
DEFAULT                        PASSWORD_REUSE_MAX               PASSWORD UNLIMITED
DEFAULT                        PASSWORD_LOCK_TIME               PASSWORD 1
DEFAULT                        PASSWORD_GRACE_TIME              PASSWORD 7

I personally do not like to have password expiry set up for database-level application accounts.   In most cases the passwords for these accounts can’t be changed without downtime, so it’s best to have a policy where once a quarter (or whatever your corporate standard is) you manually change these passwords.

Since individual end users do not have their own database-level accounts, I modified the default profile.    If this is not the case for your server, you may want to create a new profile for application users so that you can have separate password policies.

The command to alter the default profile is:

SQL> alter profile default limit PASSWORD_LIFE_TIME unlimited FAILED_LOGIN_ATTEMPTS unlimited;

The next task was to re-enable the expired accounts.   To do this, the password for each account needs to be changed manually, and I would recommend reusing the same password.  One thing I still need to do is investigate password changes for FMW accounts and see if there are any dependencies.   NOTE:  if PASSWORD_REUSE_MAX is not set to UNLIMITED, you may not be able to reuse the previous password.

Think back to the Fusion Middleware installation: you were prompted to create passwords for a number of repository accounts.  If any of these accounts have expired, either issue:

alter user <username> identified by <password>;

or log in as each user and you’ll be prompted for a new password.  As noted above, use the previous password.

You may notice, or find out the hard way, that you don’t have the passwords for some of these accounts.    If you take a look at the DBA_USERS query above you’ll notice the ORASSO, ORASSO_PS, and DCM users.    When these accounts are created they are assigned random passwords.   Use ldapsearch, changing the OrclResourceName parameter for each account you need the password for:

[oracle@myserver ~]$ ldapsearch -b "orclReferenceName=<SID>.world,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext" -D cn=orcladmin -h <OID Server> -p 3060 -q OrclResourceName=ORASSO

Please enter bind password:
OrclResourceName=ORASSO,orclReferenceName=<SID>.world,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext

The current password is held in the orclpasswordattribute attribute. Now you can reset the passwords for these accounts as you did with the others above.
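If you are scripting this for several accounts, the attribute can be pulled out of saved ldapsearch output with a bit of awk.  A sketch; the sample output and password below are hypothetical, and the separator (`=` vs `: `) depends on which ldapsearch flags you used, so this handles both:

```shell
#!/bin/sh
# Sketch: extract orclpasswordattribute from captured ldapsearch output.
extract_pw() {
    # Reads ldapsearch output on stdin, prints the password value.
    # Splits on '=' or ':' (optionally followed by spaces); the DN line
    # is skipped because its first field is not the attribute name.
    awk -F'[=:][ ]*' 'tolower($1) == "orclpasswordattribute" { print $2 }'
}

# Hypothetical captured output:
extract_pw <<'EOF'
OrclResourceName=ORASSO,orclReferenceName=ORCL.world,cn=IAS Infrastructure Databases,cn=IAS,cn=Products,cn=OracleContext
orclpasswordattribute=s3cretpw
EOF
```

Running the example prints the hypothetical password value, s3cretpw.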


OID 11g: Viewing and Setting the Password Policy via ODSM and the Command Line

We are about to go live with our new 11g Fusion Middleware environment and wanted to setup the password policy for user accounts before they logged in for the first time.

I logged in to Oracle Directory Services Manager (ODSM), which by default resides at http://server:7005/odsm.   The first screen is informational, showing relevant version numbers and some statistics.


To change password policy options, click the Security tab, then click Password Policy.


Next you need to determine the correct policy to modify.  The easiest way is probably to look at the Distinguished Name which has the proper domain component values (i.e. dc=yourserver, dc=com).


There are a number of options you can set for your password policy and the values you choose will be dependent on your corporate standards.   To get help for any particular option click on it and a context sensitive dialog box will appear with more information.


Once you have made all your changes, click the Apply button.  This is where I ran into trouble; I was presented with the following error:


I searched Google and Metalink but didn’t find any solutions, so I decided to try the command line method.

Log in to the server which hosts your Identity Management domain and initialize your environment.   Once the environment is properly set, ldapsearch and ldapmodify should be in your path.

To view the password policy, use the ldapsearch utility:

ldapsearch -D "cn=orcladmin" -w <orcladmin_pass> -h <OID_Host> -p 3060 -b "cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,dc=mydomain,dc=com" -s sub "(objectclass=*)" "*"


To modify the password policy use ldapmodify and pass it a file containing the options you’d like to change:

ldapmodify -p 3060 -D cn=orcladmin -w password < PolicyMod.txt

In the PolicyMod.txt file below I am modifying the minimum password length and the number of failures allowed before an account is locked:

dn: cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,dc=myserver,dc=com
changetype: modify
replace: pwdminlength
pwdminlength: 8

dn: cn=default,cn=pwdPolicies,cn=Common,cn=Products,cn=OracleContext,dc=myserver,dc=com
changetype: modify
replace: pwdmaxfailure
pwdmaxfailure: 5 

So now you are familiar with two methods of changing password policy settings.

Refreshing VS Cloning an e-Business Suite Environment

Just a quick note on refreshing vs cloning, what each of them means and when you should perform them.

What is Refreshing?

A refresh is where the data in the target environment is synchronized with a copy of production. This is done by taking a copy of the production database and restoring it to the target environment.

What is Cloning?

Cloning means that an identical copy of production has been taken and restored to the target environment. This is done by copying both the production database and all of the application files.

When should you Clone or Refresh?

There are a couple of scenarios when cloning should be performed:

1. Building a new environment.
2. Patches or other configuration changes have been made to the target environment so that it is now out of sync.
3. Beginning of a development cycle. Before major development efforts take place, it's wise to re-clone dev and test environments so that you're 100% positive the environments are in sync.

There is only one scenario in which you should refresh an environment:

1. You're 100% confident that the environments are in sync and you need an updated copy of the production data in order to reproduce issues.

Technically, if proper change control processes are being followed, test and production environments should be identical. So in the case of test, you should be able to get away with performing refreshes. However, to ease concerns and for comfort levels, test environments are usually re-cloned at the beginning of new development cycles as well.

If I have missed any scenarios, feel free to comment.

Related Articles:
R12 Cloning with RMAN

This is going to take awhile..

I haven't been able to update the blog recently...  I've barely had time to eat at work since Xmas and I don't foresee that changing for the next few weeks.  On my list is to build our new test and production Oracle Fusion Middleware environments, install a VISION instance, put together an R12.1.2 upgrade analysis, and clone an environment and upgrade it.   Plus all the keep-the-lights-on, day-to-day issues that crop up.    I'm hoping, though, to have a new series of R12 posts coming out in the next few weeks.

Off topic: I have a 1TB external drive built by Comstar.   When choosing an external drive, the first piece of information I wanted to find out was what kind of hard drives were in it.   I can't remember the brand name offhand now (since it was 2 years ago) but I believe it was either a Western Digital or a Seagate.   The reviews for the drive were good enough for me to buy it.

About a year in, I noticed that I had trouble reading certain files and the drive would get very slow at times.   I'm not sure if the drive is failing or it has somehow become corrupted.  Within Windows you can configure external drives so that you either have to safely remove them or can quickly disconnect them.    The trade-off is performance: write caching is disabled if you want to be able to quickly disconnect.   Even though I had this option enabled, each time I tried to dismount the drive Windows would complain.   I would check Task Manager, but nothing would be accessing files on the drive, so I would just power it off.   I'm wondering if this caused corruption over time.

This past weekend I bought a 1TB LaCie drive and am in the process of moving all my files off the old drive.   After all my files have (hopefully) been moved, I'll take a closer look at the old drive and see if it's recoverable.  It may take awhile though!