Monday, April 21, 2014

COLLABORATE14: A week in review

The week of COLLABORATE, organized by some of the most dedicated people in the community, was filled with presentations of varying quality. The event took place at the Venetian Palazzo and Sands Expo Center in Las Vegas, Nevada. While the events, people, presentations, and slot machines were too many to count, I am going to highlight just a few of my experiences on my first trip to Las Vegas.

RAC ATTACK!
My first and probably favorite of all the presentations was "RAC Attack!" Provided by the group at racattack.org, this session was a hands-on, interactive help session for learning how to set up a two-node RAC database. As a disclaimer, I don't profess or market myself as a DBA, but I do like to have somewhat of an understanding of the systems that I use, even if my usage of them only touches the surface. Setting up this cluster provided good insight into what DBAs work with on a daily basis.

The group provided help wherever it was needed, but the instructions they had laid out in the first place were very descriptive and helpful. They tried to have us emulate a setup where we wouldn't have direct administrative access, which is typical in most settings: oftentimes a Unix/Linux administrator will have done a lot of the up-front work, with approved programs installed, appropriate ports opened, and the necessary sudoers access provided. To avoid needing to emulate the Linux admin role in addition to the DBA role, all security was uprooted: the firewall was disabled and SELinux was disabled.

The group incentivized people to get engaged by awarding attendees with T-shirts and, at the end, providing those who had made it the furthest with prizes. Overall I would call it a rewarding experience: it helped me understand what my colleagues in the DBA role deal with on a daily basis and provided good insight into some of the inner workings of the Oracle Database product.
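For the curious, a quick way to confirm that both nodes of a freshly built cluster are alive is to query the GV$ views, which aggregate V$ data across all instances. A minimal sketch (any account with access to the GV$ views will do):

SELECT inst_id, instance_name, host_name, status
  FROM gv$instance
 ORDER BY inst_id;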

More updates will appear here as time progresses.



Tuesday, April 15, 2014

My failed Google Glass experience

At the Collaborate 14 conference, @hdost lent me his Google Glass to try out for a bit.

AHMED: Okay Glass. Call John Doe. (I tried someone I knew was in his address book.)

GOOGLE GLASS: (Nothing happened.)

AHMED: (I see a menu that is now giving me options of what to say.)

AHMED: Okay Glass. Make a call.

AHMED: Hey Harold, it says "Mom" here in the dropdown.

GOOGLE GLASS: (Starts dialing Mom.)

AHMED: Cancel!

AHMED: Google Glass cancel!

AHMED: Okay Glass, cancel call!!!

Fortunately, Harold quickly ended the call from his phone. It would appear I need a little more practice before trying it out in the real world!

I thought I was cool until I accidentally started dialing Harold's mom


Monday, April 14, 2014

Harold Dost III now an Oracle ACE Associate

Harold Dost III, Senior Consultant at Raastech, is now an Oracle ACE Associate!


Becoming a member of the Oracle ACE Program highlights an individual's excellence and technical proficiency. Harold joins an elite group of about 460 individuals, including Raastech's Ahmed Aboulnaga (Oracle ACE), who are recognized as Oracle enthusiasts and advocates. Harold's impressive credentials, 6+ years of experience working in the Oracle community, and enthusiasm to contribute at higher levels are qualities that validate his merit in becoming an Oracle ACE Associate. Congratulations, Harold!

Anyone in the Oracle Technology and Applications communities is eligible to apply for consideration, or nominate someone, for one of the following tiers: Oracle ACE Associate, Oracle ACE, or Oracle ACE Director.

Harold can be found on Twitter at @hdost.


Oracle ACE Program:
http://www.oracle.com/technetwork/community/oracle-ace/index.html


"Crazy or Courageous?": Impressive C-level presentation on the importance of branding and selling a project

Last week, while attending Collaborate 14 in Las Vegas, one of the sessions I attended was Crazy or Courageous? Lessons Learned From Making it Happen by Patrick Ott from Amway. Patrick shared his experience during the first global implementation of Oracle E-Business Suite at Amway, across 31 European markets. This was a C-level presentation targeting managers, especially those outside of IT.

Patrick Ott, Operations Director, Amway

* Disclaimer: Raastech, the company I work for, currently supports Amway in a consulting capacity, and I have personally met Patrick in passing several times over the course of the project, but I have not directly worked with him.

Patrick talked about the challenges of selling the Oracle E-Business Suite solution to company executives, employees, and their customers. The job was made more challenging after what was considered an unsuccessful rollout of a similar solution several years earlier.

Here are a few personal takeaways from the presentation.

Selling to Upper Management

To get their attention, you have to convince the executives and the board that they're either sitting on a gold mine... or about to fall off a cliff.

The Spinning Plates Example

The project management team sometimes felt like the guy trying to keep spinning plates balanced, always jumping back and forth and, whenever a plate appeared about to collapse, bringing it back on track.



A Single Dashboard Slide

Project status updates to the executives were kept under 5 slides. The main project dashboard was in fact a single slide, depicted very simply in easy-to-understand graphs, obviously at the expense of detail. By doing so, it allowed management to drill deeper into each of the status areas by asking questions. This level of interactivity could not have been achieved going through a 50-slide project update.

Branding a Project

How can you get everyone to feel passionate and proud about their involvement in a project? The same way people pay ridiculous amounts for Starbucks coffee instead of the generic brand: it's partly about branding. The rebranding of the "ATLAS" project, as it's called, was not by itself what made the project successful, but it was one aspect of convincing the executives, board, employees, and customers that things were different this time... which they were.

Old Logo
New Logo

Overall, it was a very good presentation by a very competent presenter who clearly understands the challenges it takes to make the rollout of a global enterprise project successful. His examples were impressive and completely relatable, highlighting how project success is not always about technology, but about people as well.


Sunday, April 13, 2014

Two minor problems with Collaborate 14


Note: I paid full price to attend Collaborate 14 and it was absolutely worth the cost of admission.

It was an excellent conference: excellent venue, much-improved presentations, and extremely well organized overall. I highly recommend it, and most definitely recommend it over Oracle OpenWorld.

With that being said, I highlight below the two biggest problems I personally found with the conference.

Problem #1: Limited exhibition booth time

Collaborate 14 is a 5-day conference, running from April 7 to April 11. The exhibition hall was only open for 2 of those days, and only from 10:45am to 3:15pm, or 4.5 hours each day. Take out the hour dedicated to lunch each day, and there were essentially only 7 hours to roam the vendor booths. That's 7 hours total for a 5-day conference!

Bad planning? Intentional? Either way it doesn't matter. Personally I was disappointed and the vendors I talked to weren't too happy either. I would have liked to spend more time socializing and engaging with the vendors, as well as spend more time at the Oracle stands.

Ahmed roaming the exhibition halls, happy at something, though he's not sure what exactly.

Problem #2: Late publishing of agenda causes presenter planning challenges

Many presenters were unsure what day they were presenting, so they were unsure when to book their travel and opted to stay for the entire 5 days, not by choice. Granted, as an organizer I would love/encourage/plead with presenters to stick around for the entire conference, but it's unfair to those who may have other commitments. Give them the choice, I say.


Don't read too much into these criticisms. I merely raise them to highlight areas of improvement and they reflect my opinion alone (although a lot of people I met agreed or shared similar concerns). This was an excellent conference and I will surely be back next year. I had a blast, I learned a lot, and I met some old and new colleagues. Hats off to the IOUG, OAUG, and Quest for once again organizing a great conference.

Saturday, April 12, 2014

4 under-recognized presentations at Collaborate 14

I attended many, many sessions last week at Collaborate 14. Big data. OEM Grid Control. Cloud. SOA. Engineered systems. Many were great. Few were disappointing. Attendance among the presentations was mixed: some were a full house while others had only a few in attendance.

In today's blog post, I highlight four presentations in particular which were extremely impressive but lacked the turnout that they deserved.

Some presentations were early in the morning, clearly a major disadvantage for a conference taking place in Las Vegas. Others were later in the afternoon, a time when people were just getting ready to enjoy the evening and the great weather. Some of the presentation titles could definitely have been made more attractive (judge for yourself below). And in one case, the name of the well-known presenter was not in the agenda on the mobile app. Marketing also has something to do with it, and perhaps the presenters spreading the word a bit in advance might have helped.

These four presentations were all standout presentations, and I give each of them a 10 out of 10. Let me explain why.


An Alternative to Exadata for Large Scale ERP Deployments
Cliff Burgess, Director of Information Technology, Gentex Corporation

Cliff talks about their experience at Gentex and why they upgraded the commodity hardware running Oracle E-Business Suite R12 instead of moving to Exadata. Typically you don't find too many anti-Exadata presentations out there, so it was refreshing to see a different perspective.

What I learned:
  • Exadata is not just hardware, it's also software, so don't forget about the ongoing support cost.
  • Who administers Exadata? The Oracle DBA? The system administrator? The network admin? The storage admin? Training is clearly an issue.
  • Though Oracle sells Exadata as a means to stop finger-pointing among the various administrators, this clearly was not a factor for Gentex.
  • On their commodity hardware, Gentex increased their CPUs by 50% but their RAM by 1000%. This was clearly to get as much power as possible while controlling Oracle licensing costs, since the database is licensed per core.
  • To minimize licensing cost, Gentex went with the highest-end CPUs available at the time.
  • Given enough time and effort, you may be able to show that Exadata's performance is not drastically better than commodity hardware's for OLTP-based transactions, something that Gentex confirmed themselves through an extensive POC.
It was a great presentation with good insight into how Gentex saved $2 million by not moving to Exadata, yet still resolved the performance issues in their E-Business Suite R12 environment.


Fusion Middleware-Heart of Fusion Applications, Tips and Tricks to Maintain, Install a Successful Fusion Application Install Base
Manoj Machiwal, Consulting Director, Jade Global

Manoj talks about what it takes to install Oracle Fusion Applications, an extremely new topic area. I'll be honest with you: I didn't have high hopes for this presentation, but as it progressed, I realized it was a hidden gem.

What I learned:
  • Fusion Apps requires a lot of the Fusion Middleware infrastructure, such as the application server, identity management, and integration products.
  • Other Fusion Middleware products such as OBIEE and WebCenter Portal are optional.
  • Users are now stored in an external directory (i.e., the concept of FND_USERS no longer exists).
  • The Fusion Apps Vision instance requires 8 CPUs, 220 GB of memory, and 2 TB of disk!

Well done Manoj. Sorry I had to leave the presentation a little early, but what I saw was impressive.


Real-World Cloud & On-premise ERP Integration Simplified with Oracle SOA Suite
Vikas Anand, Senior Product Director, Oracle

Vikas talks about cloud integration and walks through a demo of a two-way integration between Salesforce.com and E-Business Suite using the new Salesforce Adapter. I attended this presentation at the last Oracle OpenWorld conference, but this one had a few new interesting twists. If you're interested in knowing why I think highly of this presentation, see my review of that OpenWorld presentation.

What's new that I learned this time around:
  • Session management to the external service provider (e.g., Salesforce.com) is fully handled by the adapter (i.e., the Salesforce Adapter).
  • The adapter supports the ability to provide a response interface for Salesforce events to invoke.
  • The BMC Software use case, on how their CIO gave a directive to move the majority of their services to the cloud, was interesting (and scary!).


I know it doesn't seem like much, but remember, the majority of the content was similar to what was presented last October at OpenWorld, so check out my last review to find out more.


Human Task and ADF: How-to
Harold Dost III, Senior Consultant, Raastech

Harold presents a live demo of creating an ADF-based form to handle Human Tasks in Oracle SOA Suite 11g. Unfortunately, both the presentation title and the abstract should have had some reference to Oracle SOA Suite 11g, as Human Task is one of the many components of that suite.

What I learned:
  • You do not have to rely on the awful BPM Worklist to be the UI that users navigate to in order to manage workflow actions.
  • Custom-developed ADF workflow management forms can be hosted externally or embedded within the BPM Worklist.
  • Seeing a live (working) demo and walkthrough is always welcome and enhances the understanding.


This is a presentation that's mostly geared towards Oracle SOA Suite developers or those who rely on Human Task for workflow purposes. Since it's an area I specialize in, it is of particular interest to me.


There you have it. Four under-recognized yet excellent presentations at Collaborate 14, and ones that I'm extremely glad I attended.


Thursday, April 10, 2014

Ahmed Aboulnaga interviewed in Oracle Magazine

Ahmed Aboulnaga, Technical Director at Raastech, appeared in the March/April issue of Oracle's flagship magazine. With a subscriber base of around 550,000, Oracle Magazine provides unique viewpoints on business and technology. The Peer-to-Peer article on page 27, written by Blair Campbell and titled "All for One", features Ahmed (Oracle ACE) alongside Lakshmi Sampath (Oracle ACE) from Dell and Bjoern Rost (Oracle ACE Director) from Portrix Systems, and highlights the different ways groups can motivate and inspire.

Ahmed, who has 18 years of experience working with Oracle products, discusses why whiteboards are his favorite tool for fostering teamwork and collaboration, where he would like to see Oracle go in the future, and more. Below is a link to the current issue.
Link to current issue:

To subscribe to Oracle Magazine:

Link to digital edition (subscription required):

Wednesday, January 8, 2014

ATLRUG Presentation: Solr and Sunspot

Tonight I presented at the Atlanta Ruby Users Group, more affectionately known as ATLRUG. I have been attending for a while, and I finally took the initiative to give a presentation. The presentation itself was about using the Solr server with the Sunspot gem. I would like to thank everyone who was there, and thank you all for the positive feedback. For anyone looking for a copy of the slides, here they are.


Monday, December 23, 2013

Recap of Raastech's Fusion Middleware and Cloud Presentations @ MOUS

On November 13, 2013, Raastech gave two different presentations at the 6th Annual MOUS Conference at Schoolcraft College in Livonia, MI. The Michigan Oracle Users Summit (MOUS) holds an annual conference with approximately 300 attendees and represents four regional user groups: MI-OAUG, SEMOP, BTU, and Hyperion.


Best Practices for Infrastructure Tuning of Oracle Fusion Middleware Components

Arun Reddy, Technical Director at Raastech, explained common infrastructure best practices for installations, patching, administration, deployments, and security.

During the presentation he offered information about planning your environment based on key business factors, implementing the best practices, automating tasks, securing environments, and the need for a backup and recovery plan.

His presentation can be downloaded here.



Cloud Concepts - Everything you Wanted to Know but Were Afraid to Ask

Ahmed Aboulnaga, Technical Director at Raastech, and Javier Mendez, Principal Consultant at Raastech, teamed up to deliver their professedly "introductory" presentation on cloud concepts.

Ahmed began by describing the confusion and hype associated with Cloud Computing. He also explained how virtualization laid the foundation for the cloud, and the difference between the two. Javier explained the purpose and consumers of Cloud services. He discussed the differences in various service and deployment models.

The presentation offered a great assessment of the pros and cons of cloud concepts.

Their presentation can be downloaded here.



Sunday, December 22, 2013

High quality Facebook photos are not really high quality

Friends, colleagues, and the rest of the Internet seem to believe that uploading "high quality" images to Facebook means they are truly saved in high quality. This is not the case. In Facebook terms, "high quality" is simply Facebook's way of telling us "it's higher quality than what we had before."

There are 3 things you should be aware of to understand the impact of uploading high quality photos to Facebook.


Facebook has a maximum resolution of 2048x2048.

Facebook "high quality" images are maxed out at 2048x2048. For example, if you upload an image or photo that is 4340x3568 in size, Facebook will reduce it to 2048x1684 even if the "high quality" checkbox is checked.


Facebook compression reduces image size.

For example, if you upload an image that is already compressed to 310,773 bytes, Facebook further compresses it to 88,717 bytes (roughly 29% of the original size), even with the "high quality" checkbox checked. This is advantageous in that it reduces the amount of disk space required by Facebook as well as the bandwidth needed to download such images, but it has a direct effect on the quality of the image.


Facebook compression reduces image quality.

Take a look at this photo. The one on the left is the original and the one on the right is the uploaded Facebook version. Note that both the color quality and detail have been reduced on the Facebook version, leading to a photo that is generally poorer in quality.



Here is the same image, zoomed in onto the shirt collar. The one on the left is the original photo while the one on the right is the uploaded Facebook version (recall that this photo is uploaded with the "high quality" setting). The reduction in detail and the effects of compression are obvious.



Verdict?

Facebook is great for sharing photos with your mom or uploading selfies that you won't care much about a year from now. Facebook currently reports that 350 million photos are uploaded every day, so the limit on image size as well as the aggressive compression routines are understandable.

But under no circumstance should you use Facebook to backup your photos. And under no circumstance should you share photos on Facebook for the purpose of professional collaboration.


Tuesday, November 26, 2013

Troubleshooting 12c Cloud Control Performance Issues

Oracle 12c Cloud Control certainly has some great features that we are now taking advantage of. Specifically, the Oracle Fusion Middleware plug-ins have been of great service to our project. Like many others, however, we experienced a huge drop in performance between 11g Grid Control and 12c Cloud Control after our upgrade. The issues we had were specific to navigating the Database Instance and Performance pages. It was not uncommon to see the database and performance pages below hang for 20 minutes. To make matters worse, the entire application would hang if a couple of the DBAs were on at the same time.

What's the problem? I logged into the WebLogic Server Administration Console where our 12c Cloud Control is deployed and looked at the threads (Servers > EMGC_OMS1 > Monitoring > Threads). I noticed there were many stuck threads when we were accessing the DB Instance and Performance pages. I did a thread dump and saw a lot of the following:
"[STUCK] ExecuteThread: '13' for queue: 'weblogic.kernel.Default (self-tuning)'" waiting for lock oracle.jdbc.driver.T4CConnection@6e72e5f2 BLOCKED
If the other threads weren't stuck, they would be soon. The only other threads running were executing SQL queries on the database. The next obvious step was to check the database. I ran an AWR report for the time the application was hanging and found this:

Physical Reads:    4,217,944
Executions:        1
Reads per Exec:    4,217,944.00
%Total:            4.79
Elapsed Time (s):  453.99
%CPU:              73.93
%IO:               25.85
SQL Id:            f58fb5n0yvr7c
SQL Module:        EM Realtime Connection
SQL Text:          select 'uptime' stat_type, rou...

1 execution running for over 7 minutes from EM? That is not good!
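If you would rather confirm this from SQL than from the AWR report itself, the AWR history tables can be queried directly. A minimal sketch (querying the DBA_HIST views assumes the same Diagnostics Pack licensing that AWR requires; elapsed time is stored in microseconds):

SELECT snap_id,
       executions_delta,
       ROUND (elapsed_time_delta / 1000000, 1) elapsed_seconds,
       disk_reads_delta
  FROM dba_hist_sqlstat
 WHERE sql_id = 'f58fb5n0yvr7c'
 ORDER BY snap_id DESC;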

Here is the full SQL (formatted):

SELECT 'uptime' stat_type,
       ROUND ( (SYSDATE - startup_time) * 24) v1,
       NULL v2,
       version v3,
       NULL v4
  FROM v$instance
UNION ALL
SELECT 'total_sga',
       SUM (VALUE) / 1024 / 1024 v1,
       NULL v2,
       NULL v3,
       NULL v4
  FROM v$sga
UNION ALL
SELECT 'storage',
       SUM (NVL (f.total_gb, 0) - NVL (s.used_gb, 0)) v1,
       NULL v2,
       NULL v3,
       NULL v4
  FROM dba_tablespaces t,
       (  SELECT tablespace_name,
                 SUM (NVL (bytes, 0)) / (1024 * 1024 * 1024) total_gb
            FROM dba_data_files
        GROUP BY tablespace_name) f,
       (  SELECT tablespace_name,
                 SUM (NVL (bytes, 0)) / (1024 * 1024 * 1024) used_gb
            FROM dba_segments
        GROUP BY tablespace_name) s
 WHERE     t.tablespace_name = f.tablespace_name(+)
       AND t.tablespace_name = s.tablespace_name(+)
       AND t.contents != 'UNDO'
       AND NOT (t.extent_management = 'LOCAL' AND t.contents = 'TEMPORARY')
UNION ALL
SELECT 'sysmetric',
       SUM (
          CASE
             WHEN metric_name = 'Average Active Sessions' THEN VALUE
             ELSE 0
          END)
          v1,
       SUM (CASE WHEN metric_name = 'Session Count' THEN VALUE ELSE 0 END) v2,
       NULL v3,
       NULL v4
  FROM v$sysmetric
 WHERE     GROUP_ID = 2
       AND metric_name IN ('Average Active Sessions', 'Session Count')
UNION ALL
  SELECT 'addm_findings',
         COUNT (*) v1,
         f.task_id v2,
         NULL v3,
         NULL v4
    FROM dba_advisor_findings f
   WHERE     f.task_id =
                (WITH snaps
                      AS (SELECT /*+ NO_MERGE */
                                MAX (s.snap_id) AS end_snap,
                                 MAX (v.dbid) AS dbid
                            FROM DBA_HIST_SNAPSHOT s, V$DATABASE v
                           WHERE s.dbid = v.dbid)
                 SELECT MAX (t.task_id) AS task_id
                   FROM dba_addm_tasks t, snaps s, dba_addm_instances i
                  WHERE     t.dbid = s.dbid
                        AND t.begin_snap_id = s.end_snap - 1
                        AND t.end_snap_id = s.end_snap
                        AND t.how_created = 'AUTO'
                        AND t.requested_analysis = 'INSTANCE'
                        AND t.task_id = i.task_id
                        AND i.instance_number =
                               SYS_CONTEXT ('USERENV', 'INSTANCE'))
         AND f.TYPE NOT IN ('INFORMATION', 'WARNING')
         AND f.parent = 0
         AND (f.filtered IS NULL OR f.filtered <> 'Y')
GROUP BY f.task_id


I ran the query using TOAD and found that it took between 9 and 18 minutes to complete, depending on the database instance. After looking at the execution plan, I saw why: there was a NESTED LOOP over a 136,000-row result set. The nested loop itself was okay, processing each row at about 8ms... but 136,000 times? Something wasn't right. After some searching, I stumbled on a note from Oracle Support (Doc ID 1528334.1). It seemed to describe our scenario perfectly; however, it did not resolve our issue and the query performance did not improve.

Interestingly, the problem was not happening in our OMS repository database. When I took a look at the same query's execution plan there, I noticed that instead of the nested loop there was a hash join of two full table scans, which seemed far more efficient. Why weren't they using the same plan? As a test, I decided to copy the existing plan from our repository and propagate it to some of our other database instances using the SQLT Diagnostic Tool, which you can download from Oracle Support (SQLT Diagnostic Tool, Doc ID 215187.1).

To create the SQL profile from the repository instance, use the following script after downloading SQLT: coe_xfr_sql_profile.sql

START coe_xfr_sql_profile.sql [SQL_ID] [PLAN_HASH_VALUE];

This will create a script similar to this one (coe_xfr_sql_profile_&&sql_id._&&plan_hash_value..sql), which you can then run on your database instances. In our case, the following SQL_ID and PLAN_HASH_VALUE were used: f58fb5n0yvr7c and 2336039161. Once generated, I ran the SQL script on some of the other instances and tested the results.
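For the curious, the script that coe_xfr_sql_profile.sql generates essentially boils down to a DBMS_SQLTUNE.IMPORT_SQL_PROFILE call that pins the good plan's outline hints to the SQL text. Below is a heavily abbreviated sketch, not the actual generated script: the real one embeds the full SQL text and the complete set of outline hints captured from the repository's plan, and the hint shown here is only a placeholder.

DECLARE
   sql_txt   CLOB := q'[select 'uptime' stat_type, ...]';   -- placeholder; the real script embeds the full SQL text
BEGIN
   DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
      sql_text      => sql_txt,
      profile       => SYS.SQLPROF_ATTR (
                          q'[BEGIN_OUTLINE_DATA]',
                          q'[FULL(@"SEL$1" "T"@"SEL$1")]',   -- placeholder outline hint
                          q'[END_OUTLINE_DATA]'),
      name          => 'coe_f58fb5n0yvr7c_2336039161',
      description   => 'Plan 2336039161 copied from the OMS repository',
      replace       => TRUE,
      force_match   => FALSE);
END;
/

Once applied, you can verify that the optimizer picked it up: the SQL_PROFILE column in v$sql is populated for any cursor built using the profile.

SELECT sql_id, plan_hash_value, sql_profile
  FROM v$sql
 WHERE sql_id = 'f58fb5n0yvr7c';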

Old plan: 13 minutes
New plan: 9 seconds

HUGE difference. Now, to test out the application...



Wow, the DB Instance and Performance pages actually came up (in about 10 seconds or so). The concurrency issues were gone as well! All was well again. I logged in to our Admin Console and noticed there were no more stuck threads or warnings. Everything was working great... but I could do better. I like things to go FAST, and the few seconds of waiting just bothered me. I'm not necessarily recommending the next steps: if the SQL profile alone solved your issue, wonderful! I'm glad it helped.

From the explain plan, I could see that the bulk of the remaining work was in the part of the query that reads the dba_segments view. The offending SQL?

SELECT tablespace_name, SUM (NVL (bytes, 0)) / (1024 * 1024 * 1024) used_gb
FROM dba_segments
GROUP BY tablespace_name


What if we create a local version of dba_segments as a materialized view? Why not? The DBSNMP schema is the one running the query, so let's give it a try. As the SYS user, run the following:

GRANT SELECT ON sys_dba_segs TO DBSNMP;

GRANT EXECUTE ON DBMS_SPACE_ADMIN TO DBSNMP;

CREATE MATERIALIZED VIEW DBSNMP.DBA_SEGMENTS
(
   OWNER,
   SEGMENT_NAME,
   PARTITION_NAME,
   SEGMENT_TYPE,
   SEGMENT_SUBTYPE,
   TABLESPACE_NAME,
   HEADER_FILE,
   HEADER_BLOCK,
   BYTES,
   BLOCKS,
   EXTENTS,
   INITIAL_EXTENT,
   NEXT_EXTENT,
   MIN_EXTENTS,
   MAX_EXTENTS,
   MAX_SIZE,
   RETENTION,
   MINRETENTION,
   PCT_INCREASE,
   FREELISTS,
   FREELIST_GROUPS,
   RELATIVE_FNO,
   BUFFER_POOL,
   FLASH_CACHE,
   CELL_FLASH_CACHE
)
AS
   SELECT owner,
          segment_name,
          partition_name,
          segment_type,
          segment_subtype,
          tablespace_name,
          header_file,
          header_block,
            DECODE (BITAND (segment_flags, 131072),
                    131072, blocks,
                    (DECODE (BITAND (segment_flags, 1),
                             1, DBMS_SPACE_ADMIN.segment_number_blocks (
                                   tablespace_id,
                                   relative_fno,
                                   header_block,
                                   segment_type_id,
                                   buffer_pool_id,
                                   segment_flags,
                                   segment_objd,
                                   blocks),
                             blocks)))
          * blocksize,
          DECODE (BITAND (segment_flags, 131072),
                  131072, blocks,
                  (DECODE (BITAND (segment_flags, 1),
                           1, DBMS_SPACE_ADMIN.segment_number_blocks (
                                 tablespace_id,
                                 relative_fno,
                                 header_block,
                                 segment_type_id,
                                 buffer_pool_id,
                                 segment_flags,
                                 segment_objd,
                                 blocks),
                           blocks))),
          DECODE (BITAND (segment_flags, 131072),
                  131072, extents,
                  (DECODE (BITAND (segment_flags, 1),
                           1, DBMS_SPACE_ADMIN.segment_number_extents (
                                 tablespace_id,
                                 relative_fno,
                                 header_block,
                                 segment_type_id,
                                 buffer_pool_id,
                                 segment_flags,
                                 segment_objd,
                                 extents),
                           extents))),
          initial_extent,
          next_extent,
          min_extents,
          max_extents,
          max_size,
          retention,
          minretention,
          pct_increase,
          freelists,
          freelist_groups,
          relative_fno,
          DECODE (buffer_pool_id,  1, 'KEEP',  2, 'RECYCLE',  'DEFAULT'),
          DECODE (flash_cache,  1, 'KEEP',  2, 'NONE',  'DEFAULT'),
          DECODE (cell_flash_cache,  1, 'KEEP',  2, 'NONE',  'DEFAULT')
     FROM sys.sys_dba_segs;


Want to know how fast it runs now?

Old plan: 9 seconds
New plan: 284 ms

And the application? Faster than ever! Don't let your MV get stale, though: make sure you create a job to refresh it at whatever frequency you see fit, along the lines of the sketch below. I hope this helps.
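A minimal sketch of such a job using DBMS_SCHEDULER follows. The job name and the six-hour interval are arbitrary placeholders, and a complete refresh ('C') is used since the MV was created without a refresh clause:

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name          => 'DBSNMP.REFRESH_DBA_SEGMENTS_MV',   -- hypothetical job name
      job_type          => 'PLSQL_BLOCK',
      job_action        => 'BEGIN DBMS_MVIEW.REFRESH(''DBSNMP.DBA_SEGMENTS'', ''C''); END;',
      start_date        => SYSTIMESTAMP,
      repeat_interval   => 'FREQ=HOURLY;INTERVAL=6',
      enabled           => TRUE);
END;
/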






Tuesday, November 19, 2013

Oracle XML DB

In a past project I did some extensive work with Oracle XML DB. There were many "lessons learned" that I think may be valuable to others, and I'll share them here in a series of posts. I'll start by going over the product briefly and follow up with some examples in later posts.

What is XML DB? 

It is an extension of Oracle Database that comes with every installation by default. It provides native XML support (storage and retrieval) through a suite of XML functions and procedures. XML schemas can be registered and transformed, and data can be manipulated using hybrid SQL and XPath queries. Essentially, it provides you with all of the Oracle Database technology you are used to, plus the ability to incorporate the flexibility and transportability of XML. Oracle explains it best:

XML/SQL Duality
A key objective of Oracle XML DB is to provide XML/SQL duality. This means that the XML programmer can leverage the power of the relational model when working with XML content and the SQL programmer can leverage the flexibility of XML when working with relational content. This provides application developers with maximum flexibility, allowing them to use the most appropriate tools for a particular business problem.
What are some key benefits? 

Unification!

  • Unification of Data and Content
  • Transparent XML and SQL interoperability
  • Exploiting Database Capabilities
    • Indexing and Search
    • Updates and Transaction Processing
    • Managing Relationships
    • Multiple Views of Data
    • Performance and Scalability
  • Exploiting XML Capabilities
    • Structure Independence
    • Storage Independence
    • Ease of Presentation
    • Ease of Interchange (B2B data exchange)
Faster, faster, faster.
  • All your data and content in one place. No need to have a separate XML repository/processing layer. 
(Image from the Oracle documentation.)

Integration and Migration.
  • Connect to other databases, files, etc. 
  • Uniform SQL/XML queries over data integrated from multiple sources. 
  • Facilitates migrating non-XML data to XML


How do we leverage these capabilities?

This is the best part: it is available with Oracle 9.2 and higher (see the small taste below). Look for my next post on getting started, where I'll go over registering an XML schema, creating XMLType tables, XML queries and text search capabilities, schema evolution, and more.
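To give a small taste of what that hybrid SQL/XPath duality looks like in practice, here is a minimal sketch using a hypothetical purchase-order table (EXTRACTVALUE and EXISTSNODE are the classic XML DB query functions of this era):

-- A table with an XMLType column, and one stored document.
CREATE TABLE po_docs
(
   id    NUMBER PRIMARY KEY,
   doc   XMLTYPE
);

INSERT INTO po_docs
     VALUES (1, XMLTYPE ('<po><customer>Acme</customer><total>150</total></po>'));

-- Hybrid SQL/XPath retrieval: relational projection over XML content.
SELECT p.id, EXTRACTVALUE (p.doc, '/po/customer') customer
  FROM po_docs p
 WHERE EXISTSNODE (p.doc, '/po[total > 100]') = 1;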

Tuesday, October 29, 2013

Provisioning Amazon AWS Servers

These days, services like Amazon Web Services (AWS) have made it very simple and affordable to provision scalable virtual private servers for development, testing, and even production environments. So simple, in fact, that I'm going to show you just how.
Let's start by navigating to the AWS homepage. From there, click on the "Get Started for Free" button. Free, of course, is relative to what you're doing; I plan on provisioning RHEL servers that require a little more "juice" as well as features like Elastic IPs (more on this later). When we click on the get started button, we will be asked to create an account. Enter your login credentials and click Continue.


We will then be prompted for billing information. Enter that as well and Continue. Based on the server type and add-ons we choose, we will be billed accordingly (e.g., $0.12 per hour while the server is up).



Next, we'll select our support plan. I personally do not see the need for a support plan for my use, but you might. Select the appropriate plan and Continue.



 We are now logged in to our AWS account. Click on the "AWS Management Console" link on the left to get started launching an instance. 


 We will be using the EC2 (Virtual Servers in the Cloud) service under the "Compute & Networking" category. 

Once we reach the EC2 Dashboard we will be presented with many options for managing and monitoring our EC2 instances. Amazon really makes it easy for us by providing that layer of abstraction between us and the nitty-gritty server administration tasks. We will go over some of those tools later. Click on "Launch Instance" to get going.


Finally, the good stuff! Amazon will now walk us through the steps of choosing and configuring our servers. The first step we're presented with is choosing an Amazon Machine Image (AMI). Per Amazon, AMIs are templates that contain the software configuration required to launch an instance. For our purposes, we're going to select the Red Hat Enterprise Linux 6.4 AMI.



Once you select your AMI, you are presented with instance types. Let's pick a good medium-sized instance (m1.medium).







If you're planning to launch more than one instance, you can do that here; we're going to choose 3 instances and accept the defaults for the rest. For those who have an unsteady mouse, I would select "Enable Termination Protection" so you don't accidentally terminate your instance (losing all your work). The Virtual Private Cloud option is useful if you want complete control over your networking environment: VPCs allow you to select your own set of IP address ranges and create subnets for different server functions, so you can create public-facing subnets for your web servers and private-facing subnets for your database and application servers. If you want that flexibility, create a new VPC and configure it accordingly. For now, we're going to accept the defaults with the exception of the number of instances (I happen to need 3).



Next, we will add storage based on our needs. I happen to need about 25GB per instance. Click Continue. 






For the security configuration, let's set up an SSH TCP firewall rule. If you know the IPs that should be allowed to connect to your server, you can specify those as well.






 Review Instance and LAUNCH!






After launching, you will be asked to create a new key pair. The key pair is essential for private key authentication from your machine to the AWS instance. Download the key and store it somewhere safe. DO NOT LOSE IT! We will use the .pem file to generate a .ppk file later to log in over SSH with PuTTY.



 Your instances should now be running. 









In order to connect to the instances, you will need to download putty.exe and puttygen.exe. Let's start with puttygen.exe. Click on the "Load" button and locate the .pem file you downloaded a minute ago (you will need to select All Files in the drop-down). Open the file and click OK to dismiss the confirmation dialog box. Click on "Save Private Key" and PuTTYgen will save it with a .ppk extension. Then open putty.exe and, under Connection > SSH > Auth, browse for the private key file. Copy and paste the instance's public DNS name as the host, specify ec2-user as the user, and connect.

 


That's it. You're connected to your new AWS instance. I hope you found this useful. If not, leave questions below and I will try to answer them. I'll go over setting up VNC and a desktop environment in a later post for those who need a graphical interface for easier software installations on their instances.





Wednesday, October 23, 2013

Upgrading to OS X with Full-Disk Encryption

Today Apple released the latest version of OS X, named Mavericks. Surprisingly, I was able to get the download finished in no time, which is a nice change of pace compared to most opening-day downloads. Unfortunately, when I got to the point of installing the operating system, it got partway through the process and flashed an error, "Unable to create recovery partition," directing me to http://apple.com/support/no-recovery. The page noted that a recovery partition is not required but may be needed for certain features to work. My assumption was that the issue was with my full-disk encryption.

What I needed to do was install Mavericks from a thumb drive. After I flashed the installer from the App Store onto the thumb drive, everything installed without a problem. Below I have included links so that anyone else looking to upgrade their Mac to Mavericks is ready to go.


http://www.macworld.com/article/2056561/how-to-make-a-bootable-mavericks-install-drive.html
http://www.macworld.com/article/2055589/how-to-format-a-startup-drive-for-a-mac.html

Wednesday, October 16, 2013

Using Semaphores In BPEL

I came upon a situation in which I needed to make a bunch of asynchronous calls to long-running worker BPEL processes. However, I didn't want to rely on the connection pools to be the gatekeepers, for fear of starving other processes that require more real-time responses and might time out. As a result, I was looking for a way to limit the number of asynchronous calls I made at one time.

My first thought was to use something built into BPEL, like "targets" and "sources". I was using a parallel forEach activity and knew these had some flow-control properties. Unfortunately, they only seem to apply when you need one particular activity to be performed before another. I needed something a little more powerful; I needed a semaphore.

The only problem with using a semaphore is that it is a foreign concept to SOA Suite. Much of the handling of concurrency and threading is hidden inside the individual composites and the overarching BPEL and Mediator engines. We wanted to stay within SOA Suite but gain a little more control over our process. Fortunately, BPEL has the ability to include Java packages and execute them through Java activities, so I wrote a little Java code to create a singleton wrapper around a semaphore.

SemaphoreSingleton Code
package test;

import java.util.concurrent.Semaphore;

public class SemaphoreSingleton {
    private static SemaphoreSingleton instance;
    private final Semaphore sem;

    private SemaphoreSingleton(int threads) {
        // Fair semaphore so permits are handed out in request order.
        sem = new Semaphore(threads, true);
    }

    public static synchronized SemaphoreSingleton getSharedSemaphore() {
        return getSharedSemaphore(1);
    }

    // Synchronized so concurrent BPEL engine threads cannot create two instances.
    public static synchronized SemaphoreSingleton getSharedSemaphore(int threads) {
        if (instance == null) {
            instance = new SemaphoreSingleton(threads);
        }
        return instance;
    }

    public boolean acquire() {
        // Non-blocking: returns false immediately if no permit is available.
        return sem.tryAcquire();
    }

    public void release() {
        sem.release();
    }
}

Once I had my class written, all I needed to do was make sure that the class was imported into the BPEL process and then use it in the Java activities. Below you will notice that there is an input for the number of threads; we wanted to be able to tweak it over time to deal with future loads. This was accomplished using a global variable whose thread count is derived from a number of input parameters.

Class Imports
...
  <import location="test.SemaphoreSingleton" importType="http://schemas.oracle.com/bpel/extension/java"/>
  <import location="java.lang.Integer" importType="http://schemas.oracle.com/bpel/extension/java"/>
  <partnerLinks>

Lock Java Activity
int threads = ((Integer)getVariableData("threads")).intValue();
SemaphoreSingleton impl = SemaphoreSingleton.getSharedSemaphore(threads);   
  
setVariableData("haveLock",impl.acquire());

Release Java Activity
SemaphoreSingleton impl = SemaphoreSingleton.getSharedSemaphore(); 
impl.release();

One additional thing to note is the use of tryAcquire() in the Java class. The reason for using it instead of acquire() or acquireUninterruptibly() is that a blocking acquire would halt the entire BPEL process while it spins its wheels waiting for a permit. The trade-off of only trying the acquire is that it needs to be retried until a lock is actually obtained, which can be seen in the picture below. To determine whether a lock was attained, there is a scoped boolean variable called haveLock.


I hope this helps anyone looking to have a little more control over their asynchronous BPEL processes. Please leave comments and questions below.