Tuesday, December 6, 2016

Securing cleartext FactoryProperties credentials in the Oracle JMS Adapter using Oracle Wallet

Are you installing an Oracle SOA Suite 11g or 12c cluster? Then you're no doubt well acquainted with the Enterprise Deployment Guide (EDG). One of the steps involves configuring high availability for the Oracle JMS Adapter.

This requires adding an entry similar to the following:
This is what it looks like in the Oracle WebLogic Server 12c Administration Console:

As you can see in the screenshot, the password for the "weblogic" user is unfortunately in cleartext.

Securing FactoryProperties Credentials with Oracle Wallet

1. Create a wallet.

java -jar $ORACLE_HOME/wlserver/server/lib/wljmsra.rar create $JAVA_HOME/jre/lib/security

2. This creates an Oracle Wallet with the file name cwallet.sso under the $JAVA_HOME/jre/lib/security directory.

3. Create an alias for your property. This is a name-value pair property and will have a name of "weblogicPwdAlias" and a value of "welcome1".

java -jar $ORACLE_HOME/wlserver/server/lib/wljmsra.rar add weblogicPwdAlias welcome1

4. List the aliases in the Oracle Wallet to confirm all is good.

java -jar $ORACLE_HOME/wlserver/server/lib/wljmsra.rar dump $JAVA_HOME/jre/lib/security

5. On the WebLogic Server Administration Console, click on Deployments.

6. Navigate to Deployments > JmsAdapter > Configuration > Outbound Connection Pools.

7. Expand oracle.tip.adapter.jms.IJmsConnectionFactory.

8. Click on eis/wls/Queue.

9. Add the following FactoryProperties property. Make note of java.naming.security.credentials (which is now the alias) and weblogic.jms.walletDir (which is the path to cwallet.sso).
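Since the screenshot is not reproduced here, below is a sketch of what the full FactoryProperties value might look like. The host names, port, and wallet path are hypothetical; the key point is that java.naming.security.credentials now holds the alias instead of the cleartext password, and weblogic.jms.walletDir points to the directory containing cwallet.sso:

```
java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;
java.naming.provider.url=t3://soahost1:8001,soahost2:8001;
java.naming.security.principal=weblogic;
java.naming.security.credentials=weblogicPwdAlias;
weblogic.jms.walletDir=/u01/app/oracle/jdk/jre/lib/security
```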


10. Click on Save.

11. On the Save Deployment Plan page, enter the Path (e.g., /u01/app/oracle/middleware/products/fmw1221/user_projects/applications/soa_domain/dp/JmsAdapterPlan.xml).

12. Click on OK.

13. Click on Save.

14. Click on Activate Changes.

Applicable Versions:
  • Oracle WebLogic Server 12c (12.1.x)


Thursday, November 10, 2016

Oracle SOA Suite and Oracle Managed File Transfer High Availability Installation

Have you purchased the best Oracle SOA Suite 12c administration book on the market? If not, then what are you waiting for! (Check it out on Amazon.com.)

Fig. 1: One of the best books ever written

The authors are Ahmed Aboulnaga, Harold Dost, and Arun Pareek.

Chapter 12 is titled Clustering and High Availability. It is based on Oracle's Enterprise Deployment Guide (EDG), but with much simpler and more straightforward instructions, allowing you to quickly install a two-node cluster without having to read through hundreds and hundreds of pages.

The book was published back in November 2015 when Oracle SOA Suite 12.1.3 had just come out.

We've just finished our new and improved instructions on installing a fully functioning, production quality two-node cluster for the new release... and it includes MFT.

Want a copy of this document?
Fig. 2 The two-node cluster architecture

If you're able to provide proof of purchase, we can share with you these instructions that have been used to install production environments for multiple customers. Don't spend days sifting through the Oracle documentation. Purchase our outstanding book, and get yourself a document that will save you a tremendous amount of effort.

The table of contents is roughly as follows:

We're here to help. :)

Applicable Versions:
  • Oracle SOA Suite 12c
  • Oracle Service Bus (OSB) 12c
  • Oracle Web Services Manager (WSM) 12c
  • Oracle Managed File Transfer (MFT) 12c
  • Oracle HTTP Server (OHS) 12c


Tuesday, November 8, 2016

Getting "Cannot retrieve repository metadata (repomd.xml) for repository: public_ol6_UEK_latest"


If you try to install an Oracle Linux package via yum, you might get the following error:

[root@soa-training ~]# yum install telnet

Loaded plugins: refresh-packagekit, ulninfo
Setting up Install Process
http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/x86_64/repodata/repomd.xml: [Errno 14] PYCURL ERROR 47 - "Maximum (5) redirects followed"
Trying other mirror.
Error: Cannot retrieve repository metadata (repomd.xml) for repository: public_ol6_UEK_latest. Please verify its path and try again


1. Try again tomorrow (seriously!).

Thursday, November 3, 2016

Do you need ESS when using core Oracle SOA Suite 12c functions?

Are you new to Oracle Enterprise Scheduler (ESS)? Did you know that ESS is included, at no extra charge, with your Oracle SOA Suite 12c license?

ESS is a first-class, no-frills, simple-to-use scheduling service that gets the job done and is a welcome addition to the Oracle SOA Suite 12c platform. It provides the ability to run different job types according to a preconfigured schedule, including Java, PL/SQL, binary scripts, web services, and EJBs distributed across the nodes in an Oracle WebLogic Server cluster.

Now, do you know which components within Oracle SOA Suite take advantage of ESS? They are:
  • Auto Purge
  • Error Notification Rules
  • Activating and deactivating Inbound Adapter Endpoints
  • Bulk Fault Recovery (the Error Hospital)
For more information, check out our excellent book Oracle SOA Suite 12c Administrator's Guide (shameless marketing, I know!). We introduce and explain ESS concepts, terminology, scheduling, compatibility issues, purging, tuning, and so much more.

Enjoy this snippet from Chapter 11!

Applicable Versions:
  • Oracle SOA Suite 12c
  • Oracle Enterprise Scheduler (ESS) 12c


Tuesday, November 1, 2016

Using DynamicServerList to control routing from Oracle HTTP Server to Oracle WebLogic Server

The point of this blog post is simply to explain that when you configure httpd.conf in Oracle HTTP Server (OHS) or Apache, even if you explicitly specify a single WebLogic host and port to route to (Scenario #2), OHS will still route to all nodes of your WebLogic cluster.

What you need to use is the DynamicServerList parameter.

Check out the three scenarios below.

Scenario #1: Load balancing between OHS and WebLogic using 'WebLogicCluster'

Your OHS/Apache configuration may look like this:
<Location /myapp>
  SetHandler weblogic-handler
  WebLogicCluster dev1.raastech.com:8011,dev2.raastech.com:8011
  WLProxySSLPassThrough ON
</Location>

Thus, requests coming in to OHS will be routed across all nodes of your WebLogic cluster as shown:

Scenario #2: Attempting to route to a single WebLogic server using 'WebLogicHost' and 'WebLogicPort'

Your OHS/Apache configuration may look like this:
<Location /myapp>
  SetHandler weblogic-handler
  WebLogicHost dev1.raastech.com
  WebLogicPort 8011
  WLProxySSLPassThrough ON
</Location>

Even though the configuration explicitly states routing to a single WebLogicHost and WebLogicPort, requests coming in to OHS will still be routed across all nodes of your WebLogic cluster as shown:

Scenario #3: Routing to a single WebLogic server using 'DynamicServerList'

Your OHS/Apache configuration may look like this:
<Location /myapp>
  SetHandler weblogic-handler
  WebLogicHost dev1.raastech.com
  WebLogicPort 8011
  DynamicServerList OFF
  WLProxySSLPassThrough ON
</Location>

Simply adding the DynamicServerList parameter solves the problem: all requests routed to OHS will now successfully go to the single WebLogic host/port as shown:

Applicable Versions:
  • Oracle HTTP Server 11g/12c
  • Oracle WebLogic Server 11g/12c


Sunday, October 23, 2016

Our upcoming presentation on Oracle Compute Cloud vs. AWS EC2 at MOUS

Ahmed Aboulnaga, Technical Director at Raastech, will be presenting at the upcoming Michigan Oracle Users Summit (MOUS) on November 2.

He will demonstrate live the provisioning of an Oracle Compute Cloud instance and an AWS EC2 instance and provide a comprehensive comparison of both.

Details of the session are in the following table. Hope to see you there!

Schoolcraft College – VisTaTech Center
November 2, 2016
2:20pm - 3:15pm
Presentation Title:
Oracle Compute Cloud Service versus Amazon Web Services EC2 – A Hands-On Review
We have extensive experience provisioning and utilizing Amazon Web Services (AWS) EC2 servers for our corporate production systems. When launching one of our new products last year, we had intended to leverage the Oracle Compute Cloud Service, but it was not generally available at the time. We have since migrated from AWS to Oracle Compute Cloud. In this session, we will walk through the provisioning process of each in a live demo and share our opinions of both services.

Monday, October 17, 2016

How to read the Oracle Fusion Middleware Supported System Configuration spreadsheet (aka Certification Matrix)

I won't lie. The first time you look at the Oracle Fusion Middleware Supported System Configurations spreadsheet, it's confusing as heck.

Also referred to as the Oracle Fusion Middleware Certification Matrix, this spreadsheet lists the supported combinations of Oracle Fusion Middleware configurations - the architecture, the OS, and the JDK. It is ultimately intended to help you install a fully certified and supportable version of the product.

In this blog post, I will attempt to simplify reading this for those who have never seen it before.

Why is the Certification Matrix important?

I had a customer who experienced startup issues with Oracle SOA Suite 11g once. They called me at midnight on a Saturday night. After spending an hour trying to decipher a cryptic error in the logs, I refocused my attention on the installed software and its versions.

It turned out that an admin had changed the JDK to the latest version without telling anyone. After reverting to the "certified" version, it started up just fine!

And this was only a JDK change from one update to another (e.g., 70 to 90)!

Step #1: Go to the Oracle Fusion Middleware Supported System Configurations Page

Step #2: Locate the version you're looking for, and download the Excel spreadsheet

For example, click on the xls link beside System Requirements and Supported Platforms for Oracle Fusion Middleware 12c to download the Excel spreadsheet.

Step #3: Open the Excel spreadsheet

Once you open the spreadsheet, the first worksheet titled "Menu" appears. Here, you can confirm the version (in the first red row).

As you can see in the highlighted red square, this page says that this current Certification Matrix includes and applies to "SOA Product Line".

Click on the "System" worksheet.

Here, you notice "ALL", which means all the products in Oracle Fusion Middleware (as shown in the above screenshot).

Then simply scroll down to the row for the combination of Release, Processor, OS Version, and JDK Version that you want to verify.

For example, I plan on installing the product on Red Hat Enterprise Linux 7, so here I can see that:

  • FMW is supported on Red Hat Enterprise Linux 7 Update 0 or higher
  • The processor is 64-bit
  • It requires JDK 1.8.0_77 or higher, and the JDK must be 64-bit
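To guard against the kind of uncertified JDK drift described in the story earlier, a small pre-startup check can help. This is only a sketch: the certified version string, the sample `java -version` output, and the exact-match comparison are all assumptions (the matrix actually says "1.8.0_77 or higher"), so adapt it to your own environment.

```shell
# Sketch: warn if the running JDK differs from the certified version.
# CERTIFIED and the SAMPLE version line below are examples only.
CERTIFIED="1.8.0_77"
# On a real server you would capture: java -version 2>&1 | head -1
SAMPLE='java version "1.8.0_91"'
# Extract the quoted version string
CURRENT=$(echo "$SAMPLE" | sed 's/.*"\(.*\)".*/\1/')
if [ "$CURRENT" != "$CERTIFIED" ]; then
  echo "WARNING: JDK $CURRENT does not match certified $CERTIFIED"
fi
```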

That's it!

The ultimate goal of the Certification Matrix is to find the combination of Oracle product, OS, architecture (32-bit or 64-bit), and JDK version that are supported with each other. Oracle Support may deny SRs on environments that are not "certified".


Saturday, October 15, 2016

Where to download Oracle JDeveloper 12c for SOA and OSB development

Oracle likes to confuse you sometimes.

Are you looking to download Oracle JDeveloper 12c to do your SOA and OSB development?

You might struggle to find the correct version. Essentially, you need to download Oracle SOA Suite 12.2.1 QuickStart. Do not download Oracle JDeveloper 12c or Oracle JDeveloper Studio Edition.


Download the two files from here: 
Extract them, then run the command:
java -jar fmw_12.



Applicable Versions:
    • Oracle JDeveloper 12c for SOA Suite and Service Bus

    Friday, October 14, 2016

    Recap: DevOps Days 2016

    The presentations across both days varied in preparedness and quality, as is expected to some degree, but overall I gained value from all of them. Today's Ignites were a little less interesting than Day 1's, and overall today had a much more technical focus. The conference as a whole left me with a better feeling at the end than many more expensive conferences have, and I think a large part of that has to do with the smaller group atmosphere. There were many more chances to bump into people you had met the day before, learn from them, and vice versa.

    Some of my favorite talks from the event were What the Military Taught Me About DevOps, Enter the Trough of Disillusionment, Debugging TLS/SSL, and Migrating Legacy Infrastructure to Kubernetes. The military talk was given by Chris Short, a veteran of the US Air Force, who spoke about his experiences in the military: the failures he encountered and the framework the military uses, which prepared him to manage business successfully, enabling him and his teams to recover from failures with plans for both the near and far future. He closed by mentioning that hiring veterans can be very advantageous for all involved due to their focus, goal-oriented nature, and ability to think quickly on their feet. While I'm not a veteran myself, I believe this to be a very cogent point, and one I personally support. Another talk that was also less technology oriented was Enter the Trough of Disillusionment.

    For those unfamiliar with the term, it comes from Gartner and describes the hype cycle of expectations around particular concepts or technologies. In this case it was about DevOps, and how we are currently at the point of peak expectations. The talk dove into what actions we can take, as we head toward the trough, to continue making progress even through a potential reining in of funding and more sullen attitudes towards DevOps as a concept.

    The next talk, and probably the best put together, was Debugging SSL/TLS by John Downey. I have a decent amount of experience with setting up TLS on servers and creating certificates, and some of my favorite parts were honestly the history around TLS. John also mentioned the various vulnerabilities that still exist and some great tools, including one called sslyze, which I look forward to using should the need arise. Another tool I am looking forward to using is Kubernetes.

    The last presentation I'll talk about was Migrating Legacy Infrastructure to Kubernetes. Kubernetes, for the uninitiated, is a tool used to wrangle fleets of containers. While I have never gotten a chance to use it, Brandon Dimcheff had a great story about his experiences with the tool. He also suggested an Open Space discussion about it, of which I took full advantage.

    DevOps Days Detroit 2016 has come to a close, and in typical conference fashion, attendees slowly drained from the venue until only a few remained. I look forward to coming back next year, listening to many more good presentations, having some great discussions, and, who knows, maybe being one of those with the privilege to speak in front of the group.

    Thursday, October 13, 2016

    First Impressions and Recap: DevOpsDays Detroit Day 1

    I have to say that I was pleasantly surprised by the quality of the experience I had at DevOpsDays Detroit 2016. While I didn't quite have the easiest time getting into the event, once there, I was treated to a number of good talks and a wildly different format from what I had experienced previously.

    One of the first things that struck me was that the vendor area was smack dab at the entrance: get your badge, and the first thing you see is all the vendors that sponsored the event. For an event entering its second year, I have to say I was amazed. I assume this had partly to do with the well-established network already out there, but even so, it's great to see sponsors take so quickly to the event.

    The next thing that took me a little off guard, even though I kind of knew it would be the case, was that nearly everything was in one room. It maintains focus, which is good, though I suppose it doesn't give you much option. Either way, I was a fan. On that same note, there were a few "Ignite" sessions, which were 5-minute rapid-fire talks covering topics including Women in Technology and Infrastructure as Code.

    The final section, which I thought proved to be the most valuable, was the Open Space discussions. The way it works is explained in the link, but in short, the whole group is given the ability to propose topics. Once a topic has been suggested, it is voted on, and then, based on a somewhat subjective though seemingly accurate sizing, topics are divided into different rooms and time slots.

    Two of them functioned very well. We had a pretty continuous conversation, and I was able to pick up a few tidbits while providing some tips of my own. The last one, however, was not so successful. I was definitely not an expert on the particular topic, which happened to be Effective Postmortems, and the group in this case was smaller than the rest, with most sounding like they were in the same boat. This is an inevitability in this sort of process. However, I still think I and others were able to draw some value out of it, and even if it hadn't turned out well, 2 out of 3 is actually pretty good for a conference.

    Open Space Discussion Schedule
    One thing that I'm not sure is an issue or not: some mentioned that the topics did not feel very "DevOpsy." I didn't get a qualification on that, but I think it's largely due to the fact that not all topics were highly technical, which makes sense given that the target audience is somewhat broad.

    Either way, I look forward to Day 2, which will be starting soon, and I hope to meet even more people today and hear some more great talks.

    Tuesday, October 11, 2016

    OTN Appreciation Day : The Oracle Universal Installer (OUI)

    Are you old enough to remember rolling up and rolling down the window in your car?

    Are you old enough to remember the time you had to not only wait to develop film, but also wait until you've first taken all 20 pictures in your camera roll?

    Are you old enough to remember the sound a modem made when dialing in? And the frustration you felt when someone lifted up the other phone in the house?

    Then you're probably old enough to remember life before the Oracle Universal Installer (OUI).  

    Back in the late 1990s, Oracle released the Oracle Universal Installer in an attempt to standardize the installation process across its many products. Now you at least have a consistent look-and-feel with an easy-to-navigate GUI to help in the installation of too many Oracle products to count.

    For you young folks out there, that's why it's extremely easy to install a number of Oracle products on your Windows laptops today.

    Some cool things about the Oracle Universal Installer:
    • Ability to install and deinstall
    • Globalization and National Language Support (NLS)
    • Support for multiple Oracle Homes
    • The ability to easily create response files for unattended "silent" installations

    This blog post was written in support of the OTN Appreciation Day on October 11, 2016, graciously spearheaded by the one and only Tim Hall.

    Wednesday, October 5, 2016

    Default 'root' passwords for Oracle Compute Cloud instances

    When you create an Oracle Compute Cloud instance:
    • The default username is opc

    To log in as root on an Oracle Linux instance:
    sudo su - 
    To log in as root on an Oracle Solaris x86 instance:
    su - 
    (Default password is "solaris_opc".)

    Tuesday, October 4, 2016

    Setting up SSH Certificates for Authentication

    If you've used AWS, you know that by default EC2 instances are set up with a key pair, and you are provided the SSH key file so that you can log in. If you're familiar with the login command, then you can skip this next little section and move straight to the Setup section.


    When logging in with SSH keys, there are a couple of things to be aware of. Ideally, you shouldn't need to enter a password once you are connected to the server. This does not necessarily mean that you won't still need a password: many times you will want to protect the key itself, and to do that you should set a passphrase on the key on your local machine.

    In order to specify a key you can use the following command:

    ssh -i ~/path/to/file user@example.com

    You don't necessarily need to specify the file if it exists in your ~/.ssh directory. To make sure that it is being used, the -v option can be added:

    ssh -v user@example.com

    Assuming you connect without any issues, you may not need any options besides the user and server.


    To get started, begin with the key you want to use; that way, once you're connected, you can simply add the necessary file.

    Key Creation

    To create a key the below command can be used:

    ssh-keygen -t rsa -b 4096 -C "user@example.com" -f ~/.ssh/<key_name>

    This creates a 4096-bit RSA key. -C attaches a comment to the key; a lot of the time this will be your personal email if it's for a personal login, or potentially the name of the server or service that will act as the authenticating client. -f specifies the output file.

    Note: Two files are actually created as a result of this command. One will be named exactly what is specified, while the other will have .pub appended to the end. The first file is the private key and the second contains the public key.
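As a quick illustration (assuming ssh-keygen is installed on your local machine), generating a throwaway key into a temporary directory shows the two files side by side. The -N "" flag skips the passphrase here purely for the demo; as discussed below, you should normally set one.

```shell
# Sketch: generate a disposable key pair in a temp dir and list the
# two resulting files (the private key, and the public key with .pub).
DIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -C "user@example.com" -f "$DIR/demo_key" -N "" -q
ls "$DIR"
```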

    As mentioned before it is a good idea to have a passphrase on your keys so you will be prompted for one and a confirmation will also be required.

    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:

    Once you have entered a passphrase, something similar to the following will display:

    The key fingerprint is:
    SHA256:usnNCQV8Vt6Ti/rrW3+4/IA+IibgRrJIpfuHtP53glA something@example.com
    The key's randomart image is:
    +---[RSA 4096]----+
    |          .      |
    |     .   o . .   |
    |      o o . +    |
    |   . E +   . o   |
    |  o .   S . .    |
    | o.oo  o .   .   |
    |..o=+.+ .   o .. |
    |  o+o.=o*o=+..++.|

    Step 1 Complete!

    SSHD Configuration

    Now that you have your key, log in to your target server. Keep in mind you will likely need root access to do this. Once you have adequate permissions, find your sshd configuration file. For the purposes of this tutorial I am using Oracle Linux 7, so the configuration file is located at /etc/ssh/sshd_config.

    Verify that the following two settings are configured:

    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys

    While this next setting isn't related specifically to key setup, it is recommended (just make sure that you have a non-root user on this server before setting it):

    PermitRootLogin no

    Once your settings are configured, restart the service using:

    systemctl restart sshd.service

    This system is Oracle Linux 7, which uses systemd. If you're more familiar with init.d, this provides a little bit of information about the differences.

    Adding your Public Key

    Now that your ssh service is restarted, you will need to reconnect to the server using your password. Once connected, make sure you are using the user you want to associate with the key.

    1. On your local machine get ready to copy the contents of your .pub file. I usually use the cat command to print it to my console.
    2. On your target machine, open or create the file ~/.ssh/authorized_keys. If it already exists with a key inside, start a new line. Each key should have its own line and should only take up a single line.
    3. Copy the contents of the .pub file on your local machine to the clipboard.
    4. Paste the value into your ~/.ssh/authorized_keys file and save it.
    5. Close the connection, then reconnect specifying your private key file.
    6. Follow the steps from the Login section.
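Steps 2 through 4 above amount to appending one line to authorized_keys. A scripted sketch is below; the key string is a placeholder for your real .pub contents, and the permissions shown (700 on the directory, 600 on the file) are what sshd typically expects. On many systems, ssh-copy-id automates all of this in one command.

```shell
# Sketch: append a public key to authorized_keys with safe permissions.
# PUBKEY is a placeholder for the real contents of your .pub file.
PUBKEY="ssh-rsa AAAAB3NzaC1yc2EexampleOnly user@example.com"
SSH_DIR="$HOME/.ssh"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
touch "$SSH_DIR/authorized_keys" && chmod 600 "$SSH_DIR/authorized_keys"
echo "$PUBKEY" >> "$SSH_DIR/authorized_keys"   # each key on its own line
```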

    Finishing up

    If you still want to allow password logins you are done! Congratulations!

    If you want to take away password logins, then set the sshd configuration:

    PasswordAuthentication no

    Restart sshd and you're good to go!

    Thanks for reading!

    Saturday, October 1, 2016

    Disable Root SSH Login on Red Hat / Oracle Linux 7

    Any Linux server should be configured to disable root login via SSH. This is one of many security best practices.

    To do so:

    1. Login as root to the server.

    2. Edit the SSH config file:
    [root@soahost1 ~]# vi /etc/ssh/sshd_config
    3. Make the following change to the file:
    OLD: #PermitRootLogin yes 
    NEW: PermitRootLogin no
    4. (Linux 5/6) Restart the SSH service:
    [root@soahost1 ~]# /etc/init.d/sshd restart
    5. (Linux 7) Restart the SSH service:
    [root@soahost1 ~]# systemctl restart sshd.service
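The edit in step 3 can also be scripted. The sketch below applies the same change to a stand-in copy of the file so it can be run safely anywhere; on a real server you would target /etc/ssh/sshd_config (after backing it up), as root.

```shell
# Sketch: force PermitRootLogin to "no", uncommenting it if needed.
CONF=$(mktemp)                          # stand-in for /etc/ssh/sshd_config
echo '#PermitRootLogin yes' > "$CONF"   # sample line as shipped by default
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$CONF"
cat "$CONF"
```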

    Applicable Versions:
    • Red Hat / Oracle Linux 5+, 6+
    • Red Hat / Oracle Linux 7+

    Friday, September 23, 2016

    Getting "error code = -1" when installing Oracle Fusion Middleware 12c R2


    When running any of the Oracle Fusion Middleware 12c installers, we received the following error:
    [oracle@soahost1 ~]# ./fmw_12. 
    ** Failed to extract files from /u01/temp/fmw_12.; error code = -1.


    1. Delete everything in your /tmp folder (as the oracle user) and try again:
    [oracle@soahost1 ~]# rm -rf /tmp/*

    Applicable Versions:
    • Oracle Fusion Middleware 12c

    Wednesday, September 14, 2016

    Mounting a storage volume on an Oracle Compute Cloud Linux instance

    If you've provisioned an Oracle Compute Cloud instance (Linux) and have already created a storage volume for it during the wizard-based installation through the My Services Console, then you need to mount your volume on your Linux box:

    1. Find out your mounted volumes.
    [root@d6c1c9 ~]# df -m
    Filesystem     1M-blocks  Used Available Use% Mounted on
    /dev/xvdb3         17522  5347     11283  33% /
    tmpfs               7392     0      7392   0% /dev/shm

    /dev/xvdb1           477   121       327  27% /boot 
    You'll notice that on a fresh installation, your 3 main mount points are on the /dev/xvdb device:
    • /boot is on /dev/xvdb1
    • /dev/shm is on /dev/xvdb2 (although not clear here)
    • / is on /dev/xvdb3 

    2. View the list of your devices.
    [root@d6c1c9 ~]# ls /dev/xvd*
    /dev/xvdb  /dev/xvdb1  /dev/xvdb2  /dev/xvdb3  /dev/xvdc
    Here you will notice a new device called /dev/xvdc which is not used in your previous step. This is likely your unused storage volume.

    3. Create a filesystem. Note that this will trash all data in this device.
    [root@d6c1c9 ~]# mkfs -t ext4 /dev/xvdc
    mke2fs 1.43-WIP (20-Jun-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    19660800 inodes, 78643200 blocks
    3932160 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    2400 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
            4096000, 7962624, 11239424, 20480000, 23887872, 71663616

    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    4. Now create a directory, mount it, and change ownership of it.
    [root@d6c1c9 ~]# mkdir /u01
    [root@d6c1c9 ~]# mount /dev/xvdc /u01
    [root@d6c1c9 ~]# chown oracle:oinstall /u01
    5. Now you'll see your file system mounted and ready to use.
    [root@d6c1c9 ~]# df -m
    Filesystem     1M-blocks  Used Available Use% Mounted on
    /dev/xvdb3         17522  5347     11283  33% /
    tmpfs               7392     0      7392   0% /dev/shm
    /dev/xvdb1           477   121       327  27% /boot
    /dev/xvdc         302380   191    286830   1% /u01
    6. Don't forget to add an entry in /etc/fstab so that your file system is mounted on server reboot.
    [root@d6c1c9 ~]# echo "/dev/xvdc /u01 ext4 defaults,nofail 0 2" >> /etc/fstab
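As a small sanity check before the next reboot, you can verify the new entry has the six fields fstab expects (device, mount point, filesystem type, options, dump, pass). A sketch:

```shell
# Sketch: confirm the fstab line has exactly six whitespace-separated fields.
ENTRY="/dev/xvdc /u01 ext4 defaults,nofail 0 2"
FIELDS=$(echo "$ENTRY" | awk '{print NF}')
[ "$FIELDS" -eq 6 ] && echo "fstab entry looks well-formed"
```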

    Applicable Versions:
    • Oracle Compute Cloud (2016)


    Wednesday, September 7, 2016

    Git Hooks: Unwanted Code Lines

    When working with source control, there are often files that you don't want to commit. This could be for cleanliness' sake, such as IDE files that get placed into a directory, or it could be security-related files like keystores or password files. Luckily, there's a very simple solution in the form of the .gitignore file: simply add the file name pattern to it and, voila, the file no longer appears. However, what if you want to prevent certain lines of a file from being committed?

    A simple case may be when you are initially coding something that requires passwords in the code itself. Maybe you're starting an interface that goes out to some remote system, but you don't have a system in place to manage the passwords in a smart way yet. First you want to get the interface built, and then augment it with smarter practices. Additionally, you want the ability to commit your progress along the way, but at the same time you want to make sure that these credentials don't make their way into the repo, since that could be a security risk. In your file you have some line that looks like this:

    private final String userName = "someusername";

    So how do you prevent this? Git Hooks, and more specifically the pre-commit hook. Git Hooks can be pretty powerful, but unfortunately on the client side they more or less have to be populated manually, and on a per-repo basis. Nonetheless, they can still be useful.

    After customizing a script that I forked on GitHub, I copied the pre-commit script into my .git/hooks folder and away it goes. Now all I need to do in the future is add the NOCOMMIT keyword to my files, and it will prevent them from being committed.

    private final String userName = "someusername"; // NOCOMMIT

    Now if the keyword is found a message similar to this will appear:

    Checking modified file: path/to/violating/File.java [NOCOMMIT]
    NOCOMMIT found in file: path/to/violating/File.java 

    These errors were found in try-to-commit files: 
    private final String userName = "someusername"; // NOCOMMIT

    Can't commit, fix errors first.
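The heart of such a hook can be sketched as a small shell function; this is a simplified stand-in for the forked script, not the original. In the real hook (saved as .git/hooks/pre-commit and made executable), you would feed it the staged changes, e.g. git diff --cached | check_nocommit || exit 1.

```shell
# Sketch: fail if the text on stdin contains the NOCOMMIT keyword.
check_nocommit() {
  if grep -n 'NOCOMMIT' ; then     # -n prints the offending line number
    echo "Can't commit, fix errors first."
    return 1
  fi
  return 0
}

# Example: a staged line that should be blocked
printf '%s\n' 'private final String pwd = "x"; // NOCOMMIT' | check_nocommit
echo "exit status: $?"
```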

    This is far from a flawless implementation, but 90% of the time it reduces the headaches associated with needing to roll back commits, rebase, and all that fun required to be rid of these tainted commits. I figure if it works for me, then it can work for most people.

    Happy Committing!

    Monday, September 5, 2016

    Getting "The SOA debugger is not enabled" when starting up SOA Suite 12c VM


    If you've used one of the Oracle SOA Suite 12c pre-built VirtualBox VMs, you may run into this error when starting up the SOA server: 
    ####<Sep 1, 2016 9:35:44 AM PDT> <Error> <oracle.soa.bpel.system> <soa-training.oracle.com> <AdminServer> <[ACTIVE] ExecuteThread: '12' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <BEA1-284BB8C5B2E77D9EAAB8> <a0d52a41-3cf7-42ad-b3d3-ee6757d67af6-00000dca> <1472747744786> <BEA-000000> <cube deliveryCannot activate block.
    failure to activate the block "BpPrc0" for the instance "30008"; exception reported is: The SOA debugger is not enabled.
    This error contained the exceptions thrown by the underlying routing system.
    Contact Oracle Support Services.  Provide the error message, the composite source and the exception trace in the log files (with logging level set to debug mode).
    , Cikey=30008, FlowId=30004, InvokeMessageGuid=21272c1b-7062-11e6-9fea-0800277d1b86, ComponentDN=default/InsertEmployee!1.0*soa_6c1bde3c-4ccc-4ab7-9bfe-e172d8e90c97/InsertEmpBPEL
    java.lang.IllegalStateException: The SOA debugger is not enabled.
    at oracle.integration.fabric.debug.server.SOADebugger.getInstance(SOADebugger.java:50)
    at oracle.integration.fabric.debug.server.DebugAgentImpl.getProxy(DebugAgentImpl.java:38)
    at oracle.integration.fabric.debug.server.DebugAgentImpl.enterFrame(DebugAgentImpl.java:84)
    at com.collaxa.cube.engine.debugger2.DebugService.pushAndStep(DebugService.java:324)
    at com.collaxa.cube.engine.debugger2.DebugService.enterFrame(DebugService.java:379)
    at com.collaxa.cube.engine.ext.bpel.v2.blocks.BPEL2ProcessBlock.activate(BPEL2ProcessBlock.java:232)
    at com.collaxa.cube.engine.CubeEngine.invokeMethod(CubeEngine.java:932)
    at com.collaxa.cube.engine.CubeEngine._createAndInvoke(CubeEngine.java:722)
    at com.collaxa.cube.engine.CubeEngine.createAndInvoke(CubeEngine.java:586)
    at com.collaxa.cube.engine.delivery.DeliveryService.handleInvoke(DeliveryService.java:634)
    at com.collaxa.cube.engine.ejb.impl.CubeDeliveryBean.handleInvoke(CubeDeliveryBean.java:343)
    at com.collaxa.cube.engine.ejb.impl.bpel.BPELDeliveryBean_5k948i_ICubeDeliveryLocalBeanImpl.__WL_invoke(Unknown Source)
    at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
    at com.collaxa.cube.engine.ejb.impl.bpel.BPELDeliveryBean_5k948i_ICubeDeliveryLocalBeanImpl.handleInvoke(Unknown Source)
    at com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessageHandler.handle(InvokeInstanceMessageHandler.java:57)
    at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:153)
    at com.collaxa.cube.engine.dispatch.BaseDispatchTask.process(BaseDispatchTask.java:132)
    at com.collaxa.cube.engine.dispatch.BaseDispatchTask.run(BaseDispatchTask.java:90)
    at com.collaxa.cube.engine.dispatch.WMExecutor$W.run(WMExecutor.java:239)
    at weblogic.work.j2ee.J2EEWorkManager$WorkWithListener.run(J2EEWorkManager.java:184)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:311)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:263)


    1. Restart the VM.

    Applicable Versions:
    • Oracle SOA Suite 12c VM