Tuesday, October 26, 2010

Oracle XE user for TIBCO Admin Domain repository

If you elect to use TIBCO EMS as the transport of the administration domain, you need to use a database as the domain repository. If you plan to use Oracle XE Universal as that database, you might be looking for a reference on which privileges to grant to the user of the schema for the TIBCO Administrator domain. The following information might help.

Assuming the user you want to create is called "tibpg", run the following SQL commands to create the user and grant the privileges. You should then be able to specify this user in the DomainUtility and create your domain without problems.
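The SQL itself did not survive in this copy of the post. A minimal sketch that typically suffices for an Administrator domain repository user on Oracle XE follows; the exact grant list and the password are assumptions, so adjust them to your environment.

```sql
-- Create the domain repository user (password shown is a placeholder)
CREATE USER tibpg IDENTIFIED BY tibpg
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

-- Privileges commonly needed by a repository schema:
-- log in, and create the tables/indexes the domain utility generates
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW,
      CREATE SEQUENCE, CREATE PROCEDURE TO tibpg;
```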





Thursday, October 14, 2010

Summary of FileGateway project

I realised that some of the previous articles describing the FileGateway are a bit haphazard, unorganised at best. So here is a summary that I hope will clear up the mess I have introduced.

Finally we have completed a conceptual FileGateway based on some of the design patterns described in "SOA Design Patterns" by Thomas Erl.

The patterns involved are:

  • File Gateway
  • Data Model Transformation
  • Data Format Transformation
  • Part of Service Broker (707)
  • Part of Enterprise Service Bus (704)

Figure 1: Conceptual Architecture of FileGateway

Here is a summary of articles to follow:

Project Artifacts:

Wednesday, October 13, 2010

Consuming TIBCO EMS Topic Message from Oracle SOA 11g Application

The previous article describes how FileGateway publishes its data transformation completion event. We will see in this article how another application consumes this event.

In this example, we will create a simple Oracle SOA 11g Application that contains a JMS listener and a file writer.

The use case
JMS Message arrives in FGW.FILEREADY topic.

1. JMS adapter receives message.
2. Mediator component maps the input (JMS body) to the input of a File Adapter.
3. File adapter writes the JMS message body into a local file.
4. End of use case.

The Steps
1. Create an empty Oracle SOA 11g Application. Name it DataLoader. Create a new project called WaitFile within DataLoader. Ensure the fileInfo.xsd created in the previous article is included in this project.

Figure 1: Empty Oracle SOA Application in JDeveloper, with fileInfo.xsd

2. Configure the JMS Adapter
Double click on the composite.xml to open the composite editor. From the "Component Palette" on the right, drag the "JMS Adapter" into the "Exposed Services" pane of the composite editor.

Configure the following:
  • In Step 2 - Service name: WaitFile
  • In Step 3 - Select Third Party JMS Provider
  • In Step 4 - JMS Connection JNDI Name: eis/tibjmsDirect/Topic
  • In Step 5 - Select Define from operation and schema (specified later)
  • In Step 6 - Operation Type: Consume Message, Operation Name: Consume_Message
  • In Step 7 - Destination Name: FGW.FILEREADY. This is the topic name defined in the TIBCO EMS.
  • In Step 8 - Browse for the fileInfo element in the fileInfo.xsd file

Figure 2: Browse Message Schema

  • In Step 9 - Click Finish

    3. Configure the File Adapter
From the "Component Palette" on the right, drag the "File Adapter" into the "External References" pane of the composite editor.

    Configure the following:
    • In Step 2 - Service name: WriteFileInfo
    • In Step 3 - Select Define from operation and schema (specified later)
    • In Step 4 - Operation Type: Write File, Operation Name: Write
• In Step 5 - Directory for Outgoing File (physical path): C:\. File Naming Convention (po_%SEQ%.txt): fileInfo_%SEQ%.txt
• In Step 6 - Browse for the fileInfo element in the fileInfo.xsd file
    • In Step 7 - Click Finish

    4. Configure the Mediator
    From the "Component Palette" on the right, drag the "Mediator" into the "Components" pane of the composite editor.

    Configure the following:
    • Name: Mediator1
    • Template: Define Interface Later
    • Click the OK button

    5. Wiring the service, component and reference in the composite.

    Figure 3: Wired service, component and reference

    6. Configure the Mediator
Double-click the "Mediator1" component to bring up the mapper. Create a new mapper file and accept the default file name.

    Figure 4: Create new transformation map

    Drag to map the entire source node (imp1:fileInfo) to target node (imp1:fileInfo)

    Figure 5: Mapped source and target

    7. Configure the JMS Module in Oracle SOA Admin Server console
    Make sure your SOA Admin server is started for your domain. Point your browser to http://localhost:7001/console assuming your server runs locally.

Under the "Domain Structure" menu, click on the "Deployments" menu. Locate "JmsAdapter" in the Deployments table. Click on "JmsAdapter" and its settings page is displayed.

    Figure 6: Domain Structure -> Deployments

    Instead of creating a new JMS connection pool, we will just modify the existing pool, for the sake of simplicity. You can always create your own connection pool dedicated to consuming messages from the FileGateway.

    Click on the Configuration tab, then the Outbound Connection Pools sub-tab. Then click on the "eis/tibjmsDirect/Topic"

    Figure 7: Outbound Connection Pool table

    In the "Properties" tab of "eis/tibjmsDirect/Topic", enter the required parameter values for FactoryProperties, Password and Username, then click "Save"

    Figure 8: Connection pool property configuration

    You need to re-deploy the JmsAdapter for the change to be effective. The changes you have made will be stored in a deployment plan.

    Back to the "Deployments" menu. Check the checkbox to the left of "JmsAdapter" and click the "Update" button at the top of the "Deployments" table. Re-deploy the application, create a new plan if necessary.

    Figure 9: Update JmsAdapter deployment

    Figure 10: Deployment plan

    Figure 11: Deployment plan

    8. Deploy the WaitFile SOA 11g application into the soa_server1. (assuming you have created a soa managed server)

    Figure 12: WaitFile composite deployed

    9. Start TIBCO EMS server (make sure you have carried out the steps described in this article). Run FileGateway in TIBCO Designer test mode. A JMS message should be sent out as soon as the data transformation is completed.

    10. Inspect the execution trail and instance information in the dashboard of WaitFile composite.

    Figure 13: Dashboard

    11. Inspect the File adapter output file in the output directory.
We configured the file adapter to write to C:\, so you should find a file there named according to the fileInfo_%SEQ%.txt convention. Open the file and inspect the content; you should get something like this.


Voila, that is a proof of concept of how to connect to and consume JMS messages from a TIBCO EMS topic in an Oracle SOA 11g Application.

    Download the Oracle SOA 11g Application source code (JDeveloper project) here.


    Tuesday, October 12, 2010

    A TIBCO BusinessWorks-based file gateway – Part 3

    Publishing an event to TIBCO EMS topic from BusinessWorks.

    We will look at the mechanism to publish the file processing completion events from the FileGateway. The following use case describes the flow.

    Trigger: FileGateway finished writing the XML file to the outbound folder.

    1. FileGateway publishes event to FGW.FILEREADY topic.
    2. End of use case.

    What we need

    A target topic
    We have configured the TIBCO EMS to address part of this requirement. Refer to this article for details of setting up the topic, user and access control list.

    An xml schema for event message
    This schema is used by both the producer and the consumer of the event message. The TIBCO BusinessWorks JMS Publish activity will publish the message according to this schema and the consumer will reference this schema when parsing the message.

    As usual, create the schema using your favorite xml tool. Here is the completed schema.

    <?xml version="1.0" encoding="UTF-8"?>
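The schema listing did not survive in this copy of the post. Based on the elements referenced later in this article (fileName, fileSize, fileDateTime, transport, and a repeatable property element), a plausible reconstruction of fileInfo.xsd looks like the following; the namespace, element order and types are assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            elementFormDefault="qualified">
  <xsd:element name="fileInfo">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="fileName" type="xsd:string"/>
        <xsd:element name="fileSize" type="xsd:long"/>
        <xsd:element name="fileDateTime" type="xsd:dateTime"/>
        <xsd:element name="transport" type="xsd:string"/>
        <!-- repeatable name/value pairs for ftp url, credentials, etc. -->
        <xsd:element name="property" minOccurs="0" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="name" type="xsd:string"/>
              <xsd:element name="value" type="xsd:string"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```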

    Import the xml schema into the TIBCO Designer project.

    If you haven't been following the series of articles on FileGateway, you can learn about the FileGateway in the following posts.

    You may also simply download and examine the baselined TIBCO Designer project from here.

    A JMS Publish activity in the BusinessWorks process
    Here are the steps to add a publish JMS activity into the process
    1. Create a JMS Connection shared configuration.
Under the PollEmpsCSV folder, create a JMS Connection. Uncheck "Use JNDI for Connection Factory", as our consumers will connect directly without obtaining the resource handle from JNDI. Enter the username and password of the fgwuser.

    Start the TIBCO EMS server and click on the "Test Connection" button to ensure correct configuration.

    2. Add a JMS Topic Publisher activity.
    In the PollDumpCompletion process, add a JMS Topic Publisher and name it as "PublishEvent". In the configuration tab, enter "FGW.FILEREADY" in the Destination Topic field. In the "Message Type" dropdown, select "XML Text".

    Create a transition from WriteXMLFile to PublishEvent, and from PublishEvent to End activities.

    With PublishEvent activity selected, go to the "Input Editor" tab. Click on the "+" icon and select "XML Element Reference" from the "Content" dropdown. Browse to the schema fileInfo.xsd and select the element "fileInfo"

    Go to the "Input" tab, map the fileName, fileSize and fileDateTime elements of the activity input to the corresponding fields of $WriteXMLFile process data.

Duplicate the "property" element of the fileInfo instance to carry additional information, such as the username and password of the ftp server where the output file is located, the url of the ftp server and so forth. The value of the "transport" element indicates how the output file is hosted; possible values are ftp, http, https or even a "RESTful file server"?! Well, the password is in plain text, not an ideal implementation, but that is a separate exercise!

Here we are, done. A FileGateway that broadcasts the file conversion event to interested parties. The output JMS body of the event is as follows.

    <?xml version="1.0" encoding="UTF-8"?>
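The sample body was lost from this copy of the post. An illustrative instance, consistent with the fields described above (the element layout and every value are made up), might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<fileInfo>
  <fileName>ConsumerXXX-20101012-0001.xml</fileName>
  <fileSize>52341</fileSize>
  <fileDateTime>2010-10-12T10:15:30+10:00</fileDateTime>
  <transport>ftp</transport>
  <property>
    <name>ftpUrl</name>
    <value>ftp://fgw-host/outbound/</value>
  </property>
</fileInfo>
```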

Download the updated FileGateway here.

    In the next article, we will be looking at testing/consuming the publish feature from Oracle SOA Suite.


    Thursday, October 7, 2010

    Configuring TIBCO EMS for FileGateway to broadcast completion event

In our next article, we will add a new feature to the FileGateway. This article describes the steps to set up the EMS for that purpose.

    We need:
    • a secured topic
    • an EMS user
    • authorization enabled EMS
    • access control list (acl)

    Before you start

    Start EMS server.
    - simply execute tibemsd.exe in the bin folder of your TIBCO EMS installation (windows). The default EMS_HOME for win32 installation is c:\tibco\ems\5.1 for TIBCO EMS 5.1.x.

    %EMS_HOME%\bin\tibemsd.exe -config fullpath_to_your_tibemsd.conf_file

    Launch EMS admin console.
    In the same directory of tibemsd.exe, execute the tibemsadmin.exe


    Connect to EMS server

    - In the TIBCO EMS Administration console, enter the command "connect". Assuming you have not changed the admin password, login as admin with no password.

    Creating a topic and securing it

We will create an EMS topic called FGW.FILEREADY to which the FileGateway will publish its file completion events. Just to add a little security, we will secure this topic by allowing only authorized consumers to subscribe, effectively blocking anonymous consumers.

    Enter the following commands into the admin console.

    create topic FGW.FILEREADY secure
    To see the newly created topic in the console, enter the following command.

    show topics
    Note the '+' sign under the column 'S', it indicates the topic is secured.

    Enable EMS authorization

The 'secure' property of an EMS topic or queue only comes into effect if server authorization is enabled. To enable authorization on the EMS server, enter the following command at the admin console.

    set server authorization=enabled
    Authorization can also be turned on via the tibemsd.conf file.
    authorization = enabled
    Server restart is required if this method is used.

    Creating an EMS user

To access secured topics, the JMS consumer needs to provide credentials when subscribing. For that reason we will create a user called "fgwuser" with the password "fgwuser".

    Enter the following commands into the admin console.

    create user fgwuser "FileGateway User" password=fgwuser
    Use the following command to list the created user.

    show user fgwuser

    Configure the access control list (acl)

    The consumer of FGW.FILEREADY topic needs at least the 'subscribe' privilege in order to subscribe to the topic. If the consumer intends to become a durable subscriber, it also needs to be given the 'durable' privilege. Note that in our scenario, the consumer is not allowed to publish to this topic, hence the absence of 'publish' privilege.

    Enter the following command into the admin console.

    grant topic FGW.FILEREADY fgwuser subscribe, durable
    To inspect the privileges assigned to fgwuser, use the following commands

    showacl topic FGW.FILEREADY


    showacl user fgwuser


    By now we have configured/created the following:
    • A secured EMS topic called FGW.FILEREADY
    • An EMS user called fgwuser
    • Access control on fgwuser
    • EMS server authorization = enabled

    We will update the FileGateway to publish file completion events to this topic in our next article.

    Cheers, happy publishing...

    Tuesday, October 5, 2010

    Oracle RCU on Oracle XE Universal

    OK, this post is going to be quick.

I am setting up my other notebook to run Oracle SOA Suite, starting with installing a copy of Oracle XE Universal. All went well; I need a small-footprint database on this old notebook.

    To use OracleXE as the DB for SOA Suite, I only need to modify the following parameters.

    alter system set open_cursors=500 scope=spfile;
    alter system set processes=500 scope=spfile;

(this also depends on which components you want to install; the values can be lower)

I then downloaded Oracle RCU (ofm_rcu_win32_11.) from Oracle and tried to install it with all options selected. To my dismay, I kept bumping into a problem saying the JVM is not installed.

    2010-10-05 21:34:56.846 ERROR rcu: oracle.sysman.assistants.rcu.backend.task.ActualTask::run: RCU Operation Failed
    oracle.sysman.assistants.common.task.TaskExecutionException: RCU-6083:Failed - Check prerequisites requirement for selected component:OIM
    Please refer to RCU log at Z:\Oracle\ofm_rcu_win32_11.\rcuHome\rcu\log\logdir.2010-10-05_19-24\rcu.log for details.
    Error: JVM is not installed on the Database.

    Google is your friend, isn't that right? So I found the following thread on the Oracle forum.


No solution as yet. Some people went back to an earlier RCU version and installed successfully. I searched further and found that OracleXE does not come with the DBMS_JAVA package, which is only available in Oracle SE and EE. In theory, one could install the DBMS_JAVA package into XE, but that would breach the licence. So I ended up unselecting the offending components, and everything went well, with a few warnings. Here is the list of components that are unlikely to install successfully into OracleXE if you are going with RCU:

    1) Identity Management
    2) Portal and BI

When installing the other components, I also encountered problems with the Enterprise Scheduler Service, and the following errors were recorded in oraess.log. I have ignored all these warnings and will fix them if they surface.

Executing SQL statement:

begin
  execute immediate 'grant select on dba_subscr_registrations to DEV_ORAESS';
exception
  when others then
    if sqlcode = -942 then
      null;  -- swallow ORA-00942 (table or view does not exist)
    end if;
end;
    JDBC SQLException - ErrorCode: 942SQLState:42000 Message: ORA-00942: table or view does not exist
    ORA-06512: at line 7
    JDBC SQLException handled by error handler

The reason is that OracleXE does not come with the DBA_SUBSCR_REGISTRATIONS view.

    Information about this table is here.


Other than that, all was good. I can use the OracleXE Universal database with RCU, with a few exceptions.


    Wednesday, September 22, 2010

    Free random fake name generators for testing FileGateway

I came across a site that lets you generate records of fake names along with other general fields. You can generate 50,000 records at a time, for free. Visit here for more information.
    With 50,000 records we can test the performance and collect realistic performance data via Hawk.
    I have modified the Data Format configuration as well as the Mapper activity to accommodate this change.
    The Data Format describing the input csv file.
<?xml version="1.0" encoding="UTF-8"?>
<xsd:element xmlns:xsd="http://www.w3.org/2001/XMLSchema" name="emp">
    <xsd:complexType>
        <xsd:sequence>
            <xsd:element name="empNum" type="xsd:int"/>
            <xsd:element name="gender" type="xsd:string"/>
            <xsd:element name="firstname" type="xsd:string"/>
            <xsd:element name="midinit" type="xsd:string"/>
            <xsd:element name="lastname" type="xsd:string"/>
            <xsd:element name="stradd" type="xsd:string"/>
            <xsd:element name="city" type="xsd:string"/>
            <xsd:element name="state" type="xsd:string"/>
            <xsd:element name="zip" type="xsd:int"/>
            <xsd:element name="cntry" type="xsd:string"/>
            <xsd:element name="email" type="xsd:string"/>
            <xsd:element name="phone" type="xsd:string"/>
            <xsd:element name="mommaiden" type="xsd:string"/>
            <xsd:element name="dob" type="xsd:string"/>
            <xsd:element name="cctype" type="xsd:string"/>
            <xsd:element name="ccnum" type="xsd:long"/>
            <xsd:element name="cvv2" type="xsd:string"/>
            <xsd:element name="ccexp" type="xsd:string"/>
            <xsd:element name="nationalid" type="xsd:string"/>
            <xsd:element name="ups" type="xsd:string"/>
            <xsd:element name="job" type="xsd:string" minOccurs="0"/>
            <xsd:element name="domain" type="xsd:string"/>
        </xsd:sequence>
    </xsd:complexType>
</xsd:element>
Here is the updated BusinessWorks project file. Just to be safe, I will let you obtain your own copies of fake names rather than distributing something that I am not sure I should.
Well, nothing beats putting your hands to the task, so try 'em out!
    Cheers, happy SOA'ing!
Note 1: When you generate your own list of fake names, please ensure that you select all fields except the password field. They have been mapped in the process definition.
Note 2: You also need to modify the designer.tra file to allow a larger heap size. Locate the "tibco.env.HEAP_SIZE" variable in designer.tra and assign it a value of 1024M:
    tibco.env.HEAP_SIZE 1024M
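If you would rather not download data from the site, you can also synthesize a compatible CSV locally. The sketch below is hypothetical (the column order follows the Data Format above, and every value is obviously fake):

```python
import csv
import random
import string


def fake_row(i):
    """Build one fake employee record matching the 22 Data Format columns."""
    word = lambda n: "".join(random.choices(string.ascii_lowercase, k=n))
    return [
        i,                                        # empNum
        random.choice(["male", "female"]),        # gender
        word(6).capitalize(),                     # firstname
        random.choice(string.ascii_uppercase),    # midinit
        word(8).capitalize(),                     # lastname
        f"{random.randint(1, 999)} {word(7).capitalize()} St",  # stradd
        word(9).capitalize(),                     # city
        "VIC",                                    # state
        random.randint(3000, 3999),               # zip
        "AU",                                     # cntry
        f"{word(5)}@example.com",                 # email
        f"03 9{random.randint(100, 999)} {random.randint(1000, 9999)}",  # phone
        word(8).capitalize(),                     # mommaiden
        f"{random.randint(1, 28)}/{random.randint(1, 12)}/19{random.randint(40, 90)}",  # dob
        "Visa",                                   # cctype
        random.randint(4_000_000_000_000_000, 4_999_999_999_999_999),  # ccnum
        f"{random.randint(100, 999)}",            # cvv2
        f"{random.randint(1, 12)}/2012",          # ccexp
        f"{random.randint(100_000_000, 999_999_999)}",  # nationalid
        f"1Z {word(3).upper()} {random.randint(10000, 99999)}",  # ups
        word(10),                                 # job
        f"{word(7)}.com",                         # domain
    ]


def write_fake_csv(path, count):
    """Write `count` fake records as a headerless CSV, one row per employee."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(1, count + 1):
            writer.writerow(fake_row(i))


if __name__ == "__main__":
    write_fake_csv("emps.csv", 50_000)
```

Drop the generated emps.csv (plus its semaphore file) into the inbound folder as usual.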

    Wednesday, September 1, 2010

    A TIBCO BusinessWorks-based file gateway – Part 2

In the previous article, we went through the steps to create a TIBCO BW process that parses a flat CSV file and transforms the data format from CSV to XML. Along the way we also transformed the data model, concatenating the firstname and lastname fields into a single element specified in employee.xsd.
    This article adds a file poller process with Render XML activity and sets up a few global variables for configurations. At the end of this article, we will have something like this.

    For now, the following are configurable:
    • Inbound directory: DirInboundFolder (default to 'c:\fgw\inbound\')
    • Outbound directory: DirOutboundFolder (default to 'c:\fgw\outbound\')
    • Semaphore file extension: InboundSemaphoreExt (default to 'done')
    • Semaphore extension separator: InboundSemaphoreExtSeparator (default to '.')
    • Search pattern: InboundSemaphorePattern (default to '*')
This article assumes that you already know how to perform basic tasks in TIBCO Designer.
    1) Add a new folder under the FileGateway root folder.
    2) Add a new process definition with a name PollEmpsCSV
    3) Double-click the newly created process definition. Add a File Poller activity into the process. Note that the original ‘Start’ activity is now replaced by the File Poller activity.
    4) Configure the File Poller activity.

    In the Configuration tab, paste the following string into the File Name field:


    The global variable name surrounded by double % signs will be resolved during run time.
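The exact string was lost from this copy of the post. Given the global variables listed above, a value along these lines would reproduce the intent (this is an assumption, not the original value):

```
%%DirInboundFolder%%%%InboundSemaphorePattern%%%%InboundSemaphoreExtSeparator%%%%InboundSemaphoreExt%%
```

With the defaults, this resolves to c:\fgw\inbound\*.done at run time.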

    In the Advanced tab:

    5) Expand the ParseAndTransform folder in the Project Panel. Drag the 'ParseAndTransform' process into the PollEmpsCSV process. Wire the PollDumpCompletion activity, ParseAndTransform process and the End activity in the following order.
6) An input parameter has already been defined earlier in the 'Start' activity of the ParseAndTransform process. This parameter holds the fully qualified name of the file to be parsed. Since the Poller activity polls for the semaphore file, we need to construct the data filename from it. This step assumes that the semaphore filename, minus the extension, is the name of the file to be processed.

    In the input tab of ParseAndTransform sub-process, click on the pencil icon to bring up the XPath editor.


    Paste the following XPath formula into the formula pane.

    tib:substring-before-last($PollDumpCompletion/ns:EventSourceOuputNoContentClass/fileInfo/fullName,string-length($_globalVariables/pfx:GlobalVariables/InboundSemaphoreExtSeparator) + string-length($_globalVariables/pfx:GlobalVariables/InboundSemaphoreExt))

The above XPath formula returns the filename required by the ParseEmpsCSV activity inside the sub-process. If the semaphore filename is, say, emps.csv.done, then the input filename to the parser activity will be emps.csv.
There are different ways to implement a semaphore. One alternative is to rename the source file once the transfer/file creation by the legacy system has completed. A semaphore is needed when writing a large file takes time to complete; it prevents the process from reading an incomplete file.
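To make the derivation concrete, here is the same semaphore-to-filename logic sketched in plain Python (illustrative only; in the process it is done by the XPath formula):

```python
def data_filename(semaphore_name, separator=".", extension="done"):
    """Strip the semaphore suffix (separator + extension) from the filename.

    With the default global variables, the data file is the semaphore
    filename minus the trailing '.done'.
    """
    suffix = separator + extension
    if semaphore_name.endswith(suffix):
        return semaphore_name[: -len(suffix)]
    return semaphore_name  # not a semaphore file; leave unchanged


print(data_filename("c:/fgw/inbound/emps.csv.done"))  # c:/fgw/inbound/emps.csv
```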

7) The ParseAndTransform sub-process provides 2 output elements in the output schema defined in its 'End' activity; refer to step 13 of part 1 of this article.
Add a Render XML activity followed by a Write File activity. The Write File activity will use the filename specified in the 'outputFileName' element of the ParseAndTransform output schema.

    8) For the sake of simplicity, add a 'Catch' activity that catches all unhandled exceptions.

    The final process looks like this.

    Download the entire project here and play with it!
There is a lot of work to be done for this gateway to be useful. The following considerations come to mind.

    • The semaphore file cleanup mechanism
    • The separation of write file activity into a sub-process
    • Error logging and resolutions
    • The 'execute-only-once' restriction
    • Refactoring of global variables, shared variables, and other considerations such as implementation of checkpoints for fault tolerance and load balancing.

    Happy testing.

    Tuesday, August 31, 2010

    A TIBCO BusinessWorks-based file gateway – Part 1

    Revision 0.1:

    Change 1:
    The following changes have been made to the ParseAndTransform process.
- Added 2 output parameters to the process. They are configured as the input parameters of the 'End' activity of the process.
> empColl - This element contains a copy of the empAccumulator process variable.
> outputFileName - This element contains the output filename to be used by the downstream file writer activity. The rationale for this design decision is that when a new data model output is required, the developer will create a new sub-process and deploy it into the BW engine. Each ParseAndTransform variant, responsible for producing a different data model based on a different xsd, will cause a different XML file name to be written by the downstream writer activity.

Example: one ParseAndTransform sub-process will have its output written to a file called ConsumerXXX-UniqueID.xml; another will have its output written to a file called ConsumerYYY-UniqueID.xml. These output filenames will eventually be published to interested parties.

    Change 2:
EmployeeDataFormat (Data Format) has been updated to contain a 'Complex Element' rather than an 'XML Element Reference', to demonstrate the difference between the data model of the original input file and the desired output file (employees xsd). The Data Format defines the CSV file to contain the following fields:



Been busy with work and doing heaps of catching up with the TIBCO SOA platform. Came back from the TIBCO NOW seminar in Melbourne with a good picture of what to expect from the TIBCO product roadmap. I have to say TIBCO's platform is nimble. They really know how to do integration and have learned a lot from their customers in specific verticals. But that is from a helicopter view. OK, let's start.
This article is part of a series of articles that aim to describe a crude 'reference implementation' of the File Gateway pattern in the book "SOA Design Patterns" by Thomas Erl.

This article explains the steps to create a file gateway that performs the following tasks.

    1) A legacy system writes a batch flat-file into an inbound folder
2) The parser-transformer component polls for and retrieves the flat file
    3) The parser-transformer parses the data and performs data format transformation
    4) The parser-transformer optionally performs data model transformation
    5) The parser-transformer writes file to an outbound folder
    6) The parser-transformer optionally broadcasts/triggers an event that a file is ready for consumption

    To help you visualise what we are about to build, refer to this end-state.

    The final product
Due to the breadth and depth of this topic, this article is split into 4 parts. The first part talks about how to create a TIBCO BW process definition to parse and transform data models.

    In the second part we will be extending the parser-transformer process to write the resultant file into the outbound folder. We will also implement four more global variables to enable configuration of inbound file path, outbound file path, semaphore file extension as well as the file pattern to watch.

    The third part of this article will describe the steps to create a simple file poller process and invocation of the parser-transformer process.

    The fourth part of this article will look at testing and deployment of this gateway into the TIBCO BW infrastructure, some performance tuning and monitoring using TIBCO Hawk agents.

    In the roadmap, one would hope to have/be able to interact with the following capabilities:

    1) Publish transformation completion events to the interested parties (typically the consumers, can also be the legacy providers provided they are capable of listening to events)
    2) Pluggable architecture of schema specific parsers/transformation engines, effectively supporting one source multiple target use.
    3) Load balancing via TIBCO infrastructures
    4) Service call-back (pattern)
    5) Protocol bridging (pattern)
    6) Schema Centralisation (pattern)

    We will discuss the appropriateness of item 4, 5 & 6 when time permits.
The writing of this article is unrehearsed. It may, and will, contain errors and inaccuracies, both technical and organisational. Your comments and corrections are very welcome.

    Here goes the first part.

Begin with the end in mind… here is what we will get at the end of this article.


1) Create the following folder structure in the file system. This file system is a location for exchanging inbound and outbound files. It can be a folder on an ftp server or a mappable folder on a NAS.


    The inbound folder is for incoming files, usually file dumps performed by the legacy system. Corresponding semaphore files will also transiently exist in this folder.

    The outbound folder, on the other hand, holds the parsed and transformed files for the targeted consumers.
2) Create an XML schema that defines the data structure to be handled by the parser/transformer process. We created the schema using Oracle JDeveloper, based on the EMP table from the SCOTT schema. I have to admit that this tool, like many other commercial XML tools, provides a better user experience through the adoption of a widely accepted visual paradigm.


    3) Create a new TIBCO Designer project.

Launch the TIBCO Designer. Create a new empty project and name it FileGateway.


    4) Create a new folder to contain our BusinessWorks process. To create a new folder, right click on the root node in the Project Panel of the TIBCO Designer, select the ‘New Folder’ menu item, and a new folder with a default name ‘Folder’ will be added. Rename the folder to ‘ParseAndTransform’ directly in the Configuration Panel.


    5) Import the schema of the data model output expected from this process. This schema will be referenced multiple times throughout the entire process definition. Notably in the definition of Data Format, Parse Data and other activities.
    To import the schema we have created in Step 2, make sure the ParseAndTransform folder is selected. Under the Project menu, select ‘Import Resources from File, Folder, URL…’.


    In the ‘Import Resource of File…’ dialogue box, select the ‘File (.xsd, .xslt, .wsdl, .*)’ as the Format.


    In the ‘File:’ field, click on the binocular icon to browse for your schema file. Our file is named employees.xsd.
    You should now have a schema appearing on the Designer Panel.


    Double-click the schema icon, you can inspect the schema through TIBCO Designer’s schema editor. Click the ‘Overview’ button on top to see the schema overview. We are not going to make any changes through this editor throughout this project.


6) Our parser will need to know how to parse the CSV. This step involves defining a resource called 'Data Format'. Just as one would do when importing a CSV file into MS Excel, we will define the format, the delimiter character, the EOL character and other characteristics of the input flat file.
Back in the ParseAndTransform folder, in the Designer Panel, right-click and select the 'Add Resource' → Parse → 'Data Format' sub-menu item.


    Rename the activity to ‘EmployeeDataFormat’ and click ‘Apply’.


    In the configuration panel, click on the 'Data Format' tab, we will specify the content type as 'Complex Element'.
    Click on the ‘+’ button, a default ‘root’ element will be added. Rename it to 'emp'. Define the children elements as shown in the picture below.


    Click the ‘Apply’ button to commit your changes.
Before we proceed further, let's look at what we have done.

    We have
- Created an XML schema using our preferred XML authoring tool. We called that schema 'employees.xsd'. It contains 2 complex types and 2 elements.
- Created an empty TIBCO Designer project called FileGateway.
- Created a new folder called ParseAndTransform in the FileGateway project.
- Imported the 'employees.xsd' schema into the ParseAndTransform folder.
- Created a Data Format that references employees.xsd; we called this Data Format 'EmployeeDataFormat'.

    In the next step we will create a process definition that will perform the following tasks.
    - Takes an input that contains filename information from an external process.
    - Parses the inbound file (Parse Data activity)
    - Constructs the employees collection from the parsed records (Mapper activity)
    - Updates a process variable that acts as accumulator (Assign activity)

    This process definition will be built with the following capabilities:
    - Configurable number of records to be grouped for resource optimisation/tuning

    7) Add a new process under the ParseAndTransform folder.
    Click on the ParseAndTransform node in the Project Panel. On the Designer Panel, right-click and select the Process → Process Definition sub-menu item.


    A new process definition with a default name ‘Process Definition’ will be added. Rename the process definition to ‘ParseAndTransform’ directly in the Configuration Panel.


    Define a process variable to act as an accumulator of employee records.


    8) Up to this stage, we have an empty process definition. This process will not poll the file system; the polling will be performed by the parent process, or even a separate component, depending on our design. Instead, this process takes an input that specifies the fully qualified name of the file to be parsed. In this step, we need to specify the input parameter, and we will define it at the ‘Start’ activity of the process definition.
    Double click on the ParseAndTransform process definition icon in the Designer Panel.


    Click on the ‘Start’ activity icon, in the Configuration Panel, click on the ‘Output Editor’ tab.

    Click the ‘+’ sign under the ‘Output Editor’ tab and name that parameter as ‘inboundFileName’. Specify the Content as ‘Element of Type’ and the Type as ‘String’. Click the ‘Apply’ button to make the changes effective.

    Note: This tab is called ‘Output Editor’ because it allows one to specify the output that will be available to all downstream activities. This ‘output’ parameter will appear as an ‘input’ parameter when the entire process is referenced from another activity or process. We will see how this works in the coming steps.

    9) Define a global variable called CHUNK_SIZE of type integer. This variable will be referenced in the ‘Group’ for grouping of records for processing. The value of this variable can be configured in the TIBCO Administrator console during deployment and could be useful for performance tuning.

    To define a global variable, click on the Global Variables panel. Click on the pencil icon to open the Advanced Editor. Click the ‘abc’ icon at the bottom of the Global Variables pop-up window and name that variable as ‘CHUNK_SIZE’ of type Integer. Assign it a default value of ‘3’.


    We shall see the newly created variable appear in the Global Variables list together with other pre-defined variables.

    10) Add the necessary activities into the process definition.

    Add the ‘Parse Data’ activity.

    Double click on the ParseAndTransform process definition, in the Designer Panel, right-click on the white space area to insert a Parse Data activity.


    Rename the Parse Data activity to ‘ParseEmpsCSV’. In the configuration tab of the configuration panel for the ParseEmpsCSV activity, click on the binocular icon to select the Data Format created earlier.


    Click OK. Back in the configuration tab, ensure the ‘Input Type’ field is specified as ‘File’. Click ‘Apply’ and save the project.

    11) Wire the ‘Start’ and ‘ParseEmpsCSV’ activities together.
    Right-click on the ‘Start’ activity and select the ‘Create Single Transition’ menu item. Note that the cursor changes to a hairline ‘+’.


    Drag a line from ‘Start’ to ‘ParseEmpsCSV’ activities to represent the direction of the transition. At this stage, there is no configuration required for this transition line.


    Click on the ParseEmpsCSV activity to continue working on it. In the configuration tab, click on the binocular icon to pick the required Data Format.


    Click OK. In the Input Type, select ‘File’ from the dropdown box.


    Click on the input tab and map the input fields with the required process data. Two values are required. The first is the filename, which will be mapped to the ‘inboundFileName’ of the Start activity. The second input, noOfRecords, will be mapped to the CHUNK_SIZE global variable defined earlier.


    Enter a literal 0 into the SkipHeaderCharacters field, as our CSV data starts from the very first character (there is no header to skip).

    The next activity in the process will be the Mapper activity. This activity performs 2 functions in our process. First, it acts as a mapper, mapping the output of the Parse Data activity into the final form. Second, and equally important, it performs the ‘accumulation’ of the employee records in every iteration of the group (to be discussed later) it operates in.

    12) Add a Mapper activity, name it as BuildEmpsColl.
    Right click on the Designer Panel to add a Mapper activity located under ‘General Activities’ menu.


    Create a transition from ParseEmpsCSV into BuildEmpsColl.
    In the input editor tab, define the input parameter that this activity will take.


    In the input tab, we need to map 2 inputs from the process data into one employee collection. One is the output of the ParseEmpsCSV activity; the other is the empAccumulator process variable. The parsed result from every iteration will be accumulated in the empAccumulator process variable.

    Right-click on the emp element in the Activity Input pane and select ‘Duplicate’.


    To map the process data, expand $empAccumulator in the process data pane and drag the child node (empColl) into the first copy of emp in the Activity Input pane.


    In the dialogue box, select ‘Make a copy of each ‘emp’’


    Repeat the same for the output of ParseEmpsCSV activity.


    In the dialogue box, select the 'For each…' option.


    Click the 'Next' button.


    Note that only 2 fields are automatically mapped. This is because the source (flat) file has a different data model, and even the field names are not all the same. We will need to map the rest manually.


    Mapper is a powerful activity that can perform many transformation tasks. In the above example, the incoming CSV file contains 2 columns for a name, i.e. first name and last name. The output (the xsd schema) has only one element called ename, so we need to concatenate the two fields into one. This, of course, depends on the requirements. Refer to the TIBCO documentation for more details and examples.
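    As an illustration only (the actual mapping is done graphically in the Mapper, typically with the XPath concat() function), the name concatenation could be sketched in Python like this, with hypothetical field names:

```python
def map_emp(record):
    """Map one parsed CSV record to the target 'emp' element,
    concatenating first and last name into the single 'ename' field."""
    return {
        "ename": record["firstName"] + " " + record["lastName"],
        # ...the remaining fields are copied or renamed as required
    }

print(map_emp({"firstName": "John", "lastName": "Smith"})["ename"])  # -> John Smith
```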

    Now we need to update the accumulator with this new value. We use the 'Assign' activity for that.

    13) Add an 'Assign' activity to the process. The Assign activity is under the 'General Activities' palette. Name the activity 'UpdateProcessVar'. The process variable to update is empAccumulator. Create the required transitions, including the one that transitions to the 'End' activity.
    In the input tab, map the entire empColl into the process variable.


    When asked, select 'Make a copy of each empColl'.
    Up to now, our process definition looks like this.


    We are getting there. The next step is to wrap all the activities into a group so that iteration can be performed over groups of records. This approach is optional, but becomes visibly important when the file being processed is large, such as 1GB or even 2GB.

    14) Group the activities. Select the activities that we want to group: ParseEmpsCSV, BuildEmpsColl and UpdateProcessVar. Multi-select can be performed by holding down the Control key while selecting. Click the ‘Create a group’ icon on the toolbar.


    15) Configure the group. Select the created group, in the configuration tab, select ‘Repeat-Until-True’ as the group action. Enter ‘i’ as the index name and use the following XPath expression as the condition to stop the iteration.

    $ParseEmpsCSV/Output/EOF = string((true()))  

    The expression will cause the iteration to exit when the parser (ParseEmpsCSV) reaches the end of the input file, i.e. when its EOF output becomes true.
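    The control flow of the group can be sketched in Python as follows. This is only an illustration of the Repeat-Until-True semantics, not how TIBCO implements it; CHUNK_SIZE mirrors the global variable from step 9.

```python
CHUNK_SIZE = 3  # mirrors the CHUNK_SIZE global variable

def parse_and_transform(records):
    """Consume the input in chunks of CHUNK_SIZE, accumulating the
    mapped records, and stop once the 'parser' hits end of input."""
    emp_accumulator = []                       # the empAccumulator process variable
    pos = 0
    while True:                                # Repeat-Until-True group
        chunk = records[pos:pos + CHUNK_SIZE]  # ParseEmpsCSV: parse up to N records
        emp_accumulator.extend(chunk)          # BuildEmpsColl + UpdateProcessVar
        pos += CHUNK_SIZE
        if pos >= len(records):                # exit condition: EOF reached
            break
    return emp_accumulator

print(len(parse_and_transform(list(range(7)))))  # -> 7
```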


    16) Define the process output at the 'End' activity.
    Click on the 'End' activity. In the 'Input Editor' tab, add the following elements under a default root schema called 'group'. You can choose another name for this root schema; I will just accept the default name 'group'.


    In the 'Input' tab, drag the $empAccumulator process variable into the empColl element in the Activity Input pane. This will be our process output, which will be used by the downstream writer to create the XML file.

    Paste the following XPath formula into the 'outputFileName' element.

    concat('EMP-OUT00A-', tib:format-dateTime('yyyyMMdd-HH-mm-ss', current-dateTime()),".XML") 

    The first part of the output filename is hardcoded for the sake of simplicity. This is not critical as the filename convention is specific to this particular implementation of ParseAndTransform process.
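    For reference, the XPath formula above is equivalent to this Python expression; the 'EMP-OUT00A-' prefix is just the convention chosen in this article.

```python
from datetime import datetime

# Mirrors concat('EMP-OUT00A-',
#                tib:format-dateTime('yyyyMMdd-HH-mm-ss', current-dateTime()),
#                '.XML')
output_file_name = "EMP-OUT00A-" + datetime.now().strftime("%Y%m%d-%H-%M-%S") + ".XML"
print(output_file_name)  # e.g. EMP-OUT00A-20101014-09-30-00.XML
```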


    Well, that is part 1.

    Until then, happy SOA’ing.

    Another example of parsing CSV with header and footer is now available here.