
Learn how to migrate JDBC Lookups from SAP Process Orchestration to Cloud Integration

This tutorial is organized into three steps: the first covers SAP Process Orchestration, and the other two present distinct options, each with a different process, for the Cloud Integration capability of SAP Integration Suite.
You will learn
  • How to use the JDBC Lookup function on SAP Process Orchestration
  • How to create an alternative JDBC Lookup on Cloud Integration
João Henriques (joaohqssap), March 27, 2024
Created by joaohqssap on February 14, 2024
Contributors: joaohqssap

Prerequisites

  • The SAP Process Orchestration package must allow JDBC connections
  • The database is accessible from both SAP Process Orchestration and Cloud Integration
  • A JDBC Driver and a JDBC Data Source have been created for connecting to the database

In this sample scenario we move data from system A to system B and enrich it with additional content data.

JDBC Lookup is a method used to enrich a specific set of data, retrieved from a given data source, with additional inputs. It can be implemented on both platforms, SAP Process Orchestration and Cloud Integration, in three distinct ways: one dedicated function in SAP Process Orchestration and two approaches built in Cloud Integration. The main difference between them is the number of messages sent: SAP Process Orchestration sends one SQL statement per key to look up, whereas the Cloud Integration solutions send a single SQL statement containing all the required keys. Cloud Integration has no single dedicated JDBC Lookup function, but the same result can be achieved with either of the two simple integration flows shown later in this tutorial.
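To make the difference concrete, suppose we enrich notification rows with contact data from a hypothetical CONTACTS table keyed by ID (table and field names are assumptions for this example). The two platforms would query the database roughly as follows:

```sql
-- SAP Process Orchestration (JDBC Lookup function): one statement per key
SELECT EMAIL, TELEPHONE FROM CONTACTS WHERE ID = 1001;
SELECT EMAIL, TELEPHONE FROM CONTACTS WHERE ID = 1002;

-- Cloud Integration approaches: a single statement covering all keys
SELECT EMAIL, TELEPHONE FROM CONTACTS WHERE ID = 1001 OR ID = 1002;
```

For a large number of keys, the single batched statement saves one database round trip per key.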

  • Step 1

    SAP Process Orchestration provides an out-of-the-box function for this purpose, called JDBC Lookup. To use it, follow these steps:

    1. Create a Business Component and a Communication Channel of type JDBC Receiver, and fill in the information for the Database Connection.

      JDBC Receiver
    2. Create an External Definition with category: ‘dbtab’ and messages: ‘Do Not Extract’, as shown in the image below.

      External Definition
    3. Select the Communication Channel previously created and the desired fields from the Data Source.

      Select Database
    4. Create a Message Mapping and in ‘Signature’ create a parameter with (‘Adapter’, ‘JDBC’, ‘Import’).

      MM Parameter
    5. Still in the Message Mapping, under ‘Definition’ - ‘Conversions’, select the ‘JDBC Lookup’ function; refer to SAP’s documentation for a deeper explanation of the JDBC Lookup function.

      JDBC Lookup
    6. Create an Operation Mapping, assign the Service Interface and Message Mapping, and add a parameter in the same way as in the Message Mapping (‘Adapter’, ‘JDBC’, ‘Import’).

      OM Parameter

    For this method to work, make sure the package allows JDBC connections.

    Note: Step 1 is done in the Integration Builder, while the remaining steps are done in the Enterprise Services Repository (ESR).

  • Step 2

    The first approach to bridging the gap left by the absence of a JDBC Lookup function in Cloud Integration uses XSLT Mapping. It can be built with the following steps (the complete integration flow is shown in the last step):

    1. Retrieve the data from the data source.

    2. Use a Sequential Multicast to create two branches, one to carry the initial message and one to operate the JDBC Lookup.

      Two Branches
    3. Using a “Content Modifier”, set the parameters that will define the SQL statement built in the next step. The table below lists each field with its description and guidelines:

      | Field Name        | Property Name | Description                                                                    | Guidelines                       | Necessity |
      |-------------------|---------------|--------------------------------------------------------------------------------|----------------------------------|-----------|
      | Access Parameters | access        | All the fields needed to access the database                                   | All separated by commas          | Mandatory |
      | Key Parameters    | key           | Fields used as keys to fetch the data                                          | All separated by commas          | Mandatory |
      | SQL Action        | action        | Action to perform in the SQL statement                                         | Only one, in capital letters     | Mandatory |
      | Table Name        | table         | Name of the table to access                                                    | Only one, in capital letters     | Mandatory |
      | SP Parameters     | spParams      | Parameters to fill for the EXECUTE action; only relevant for stored procedures | All separated by commas          | Optional  |
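      As an illustration, for a hypothetical contacts scenario (table CONTACTS, key field ID, lookup fields EMAIL and TELEPHONE — all names are assumptions for this example), the Content Modifier properties could be set as:

      ```
      access = EMAIL,TELEPHONE
      key    = ID
      action = SELECT
      table  = CONTACTS
      ```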
    4. Define a dynamic SQL statement using the property values set in the previous step. The Groovy script shown below can be applied in all scenarios:

      Groovy
      import com.sap.gateway.ip.core.customdev.util.Message;
      import java.util.HashMap;
      import groovy.xml.*; 
      
      def Message processData(Message message) {
      
          def accessParam = message.getProperty("access")
          def dbTableName = message.getProperty("table")
          def sqlAction = message.getProperty("action") 
          def keyParam = message.getProperty("key")
          def spParams = message.getProperty("spParams") 
      
      
          def bodyString = message.getBody(java.lang.String) as String;
      
          def oXML = new XmlParser().parseText(bodyString);
      
          Writer writer = new StringWriter();
      
          def indentPrinter = new IndentPrinter(writer, ' ');
      
          def builder = new MarkupBuilder(indentPrinter);
      
          /*Create the SQL Statement with a builder function, with a separate route for the 'EXECUTE' action.
          Define a Statement with the table name provided, followed by the action.
          If the access and key parameters are filled, they are processed one by one, separated by commas.*/
      
          builder.'root' {
              if (spParams && sqlAction == "EXECUTE"){
                  oXML.row.each{ item ->
                      'Statement' {
                          "${dbTableName}"('action': "${sqlAction}") { 
                              'table' "${dbTableName}"
                              def arrSPFields = spParams.tokenize(",");
                              arrSPFields.each{ SPField ->
                                  def SPValue = item."${SPField}".text();
                                  "${SPField}" "${SPValue}"
                              }
                          }   
                      }
                  }            
              }else{ 
                  'Statement' {
                      "${dbTableName}"('action': "${sqlAction}") { 
                          'table' "${dbTableName}"
      
                          if (accessParam){ 
                              'access' {
                                  def arrFields = accessParam.tokenize(",");
                                  arrFields.each{ field ->
                                      "${field}" ''
                                  }
                              }
                          }
      
                          if (keyParam){ 
                              oXML.row.each{ item -> 
                                  "key" { 
                                      def arrKeyFields = keyParam.tokenize(",");
                                      arrKeyFields.each{ keyField ->
                                          def keyValue = item."${keyField}".text();
                                          "${keyField}" "${keyValue}"
                                      }
                                  }
                              } 
                          }
                      }   
                  }
              }
          }
      
          message.setBody(writer.toString());
      
          return message;
      }
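
      For illustration, with the hypothetical property values access = EMAIL,TELEPHONE, key = ID, action = SELECT, and table = CONTACTS, and an input body containing two rows with IDs 1001 and 1002, the script would produce a statement in the JDBC receiver adapter's XML SQL format along these lines:

      ```xml
      <!-- Sketch of the generated statement; multiple key elements are combined with OR -->
      <root>
        <Statement>
          <CONTACTS action='SELECT'>
            <table>CONTACTS</table>
            <access>
              <EMAIL></EMAIL>
              <TELEPHONE></TELEPHONE>
            </access>
            <key>
              <ID>1001</ID>
            </key>
            <key>
              <ID>1002</ID>
            </key>
          </CONTACTS>
        </Statement>
      </root>
      ```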
      
    5. Join both branches and gather the messages into a single XML file.

      Join Branches
    6. Combine the messages through XSLT Mapping. Sample script below:

      XML
      <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output omit-xml-declaration="yes" indent="yes"/>
      
          <!-- Create a key for the lookup.
               Its 'match' attribute selects the collection of nodes (the response from the JDBC query) in which the lookup takes place.
               Its 'use' attribute is the field inside that collection. -->
      
          <xsl:key name="contacts-lookup" match="Statement_response/row" use="ID"/>
      
          <!-- For every row in CustomerNotification:
               1. Copy the original contents.
               2. Use the 'key' function (passing the key name and the value to find) to look up the matching contacts, and add copies of their EMAIL and TELEPHONE nodes to the current row. -->
      
          <xsl:template match="CustomerNotification/row">
              <row>
                  <xsl:copy-of select="node()" copy-namespaces="no"/>
                  <xsl:for-each select="key('contacts-lookup', ID)">
                      <xsl:copy-of select="EMAIL" copy-namespaces="no"/>
                      <xsl:copy-of select="TELEPHONE" copy-namespaces="no"/>
                  </xsl:for-each>
              </row>
          </xsl:template>
      
          <xsl:template match="/">
              <CustomerNotification>
                  <root>
                      <xsl:apply-templates select="//CustomerNotification/row"/>
                  </root>
              </CustomerNotification>
          </xsl:template>
      </xsl:stylesheet>
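
      Continuing the hypothetical contacts example, after the gather step the XSLT receives both the original CustomerNotification rows and the Statement_response rows returned by the JDBC query, and emits enriched rows such as:

      ```xml
      <!-- Illustrative output: each original row enriched with the matching contact fields -->
      <CustomerNotification>
          <root>
              <row>
                  <ID>1001</ID>
                  <NAME>Jane Doe</NAME>
                  <EMAIL>jane.doe@example.com</EMAIL>
                  <TELEPHONE>+1-555-0100</TELEPHONE>
              </row>
          </root>
      </CustomerNotification>
      ```

      Here the NAME field stands in for whatever fields the original rows carry.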
      
      

      Disclaimer: the XSLT mapping must be adapted to each scenario (node names, key fields, and copied fields).

    7. Send the final message.

      Final XSLT iFlow
  • Step 3

    The second approach created for Cloud Integration uses Process Direct to combine the messages via a Content Enricher instead of an XSLT Mapping script. The following steps explain how this is done (the complete integration flow is shown in the last step):

    1. Retrieve the data from the data source.

    2. Insert a Content Enricher connected to a Receiver via Process Direct, in the direction shown in the image below. Give a name to the “Connection Detail” in the Process Direct configuration.

      Process Direct
    3. In the Content Enricher, set the “Aggregation Algorithm” to “Enrich”, and fill the “Path to Node” and “Key Element” fields with the correct node paths and key fields from the two messages.

      Content Enricher
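
      For the hypothetical contacts scenario, the Content Enricher could be configured as follows (the node paths and key names are assumptions and must match your actual message structures):

      ```
      Aggregation Algorithm           : Enrich
      Path to Node (original message) : CustomerNotification/row
      Key Element  (original message) : ID
      Path to Node (lookup message)   : Statement_response/row
      Key Element  (lookup message)   : ID
      ```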
    4. Add an Integration Process and connect it with a Sender via Process Direct using the same connection detail as the one created in the previous step.

      Integration Process
    5. Follow steps 3 and 4 from Step 2, “Create the JDBC Lookup function on Cloud Integration using XSLT Mapping”, to create the SQL statement, and insert those steps in the newly added Integration Process.

    6. Use a Message Mapping to transform the message received from the Content Enricher into the intended format.

      Message Mapping
    7. Send the final message.

      Final PD iFlow
  • Step 4

    Which fields are MANDATORY to define the SQL Statement in SAP Cloud Integration?
