
Learn how to bring SAP Concur data to SAP Datasphere

You will learn
  • How to create Database Users/Open SQL Schemas in SAP Datasphere’s database.
  • How to register an external system’s IP address in the SAP Datasphere IP allowlist.
  • How to update the existing integration flow with an SAP Datasphere data source.
  • How to model SAP Concur data using a graphical view.
Lalit Mohan Sharma (alphageek7443), January 23, 2024
Created by
alphageek7443
January 17, 2024
Contributors
alphageek7443

Prerequisites

SAP Datasphere is a flexible, scalable, and cost-effective solution for analyzing data; with integrated tools, data governance, and self-service analytics, you can easily cleanse, enrich, and visualize your data. It provides a large set of default connections to access data from a wide range of sources, in the cloud as well as on-premises, from SAP as well as from non-SAP sources or partner tools.

To extend connectivity beyond SAP Datasphere’s standard remote connectivity and cover additional data sources, you can create a connection to a partner tool and generate an Open SQL schema in SAP Datasphere. When the source pushes data via the created database user with an Open SQL schema, you can directly import a table in the Data Builder, which is then deployed as a local table.

Architecture
  • Step 1
    1. To create a Space, click on the Space Management tab at the bottom left, then click the Create button at the top right.
    2. Enter a name for your Space. The Space ID will auto-populate. In this example, let’s call your Space DEV SPACE. Click on Create, and you’ve successfully created your Space in SAP Datasphere.
    3. We need this space to create a dedicated schema that will be used to store the Concur expense report data. To do so, navigate to Database Access and choose the Database Users section.
    4. To create database users that connect external tools to SAP Datasphere, click Create in the Database Users section.

    5. Provide a Database User Name Suffix for your schema, and enable both read and write access. Click Create to close.
    6. Once created, deploy the space to activate it. Click the small information icon next to your user to open its details.
    7. Take note of the Database User Name, Host Name, Port, and Password. To see the password, you have to request one by clicking the button. Make sure to note this password down, as you won’t be able to retrieve it again afterwards.
    8. Finally, open the Database Explorer and log in with the credentials we just noted down.
    9. In the Explorer, open a new SQL window (use the small SQL icon on the left) and verify that you are connected to the correct schema. To store the Concur expense report data, we need to create a new table.

      We will create a table to store expense reports with an ID, name, currency code, country, total, and all other fields required for analyzing the data.

      You can use the sample code below and hit Execute (the green play button) to run your statement.

    10. This is the SQL code you can use to create the "SPEND_ANALYSIS#EXTERNAL"."REPORTS" table:

      SQL
      CREATE COLUMN TABLE "SPEND_ANALYSIS#EXTERNAL"."REPORTS"(
      "CREATED" LONGDATE DEFAULT CURRENT_TIMESTAMP,
      "ID" NVARCHAR(25),
      "NAME" NVARCHAR(50),
      "CURRENCYCODE" NVARCHAR(5),
      "COUNTRY" NVARCHAR(20),
      "RECEIPTSRECEIVED" BOOLEAN,
      "OWNERNAME" NVARCHAR(50),
      "OWNERLOGINID" NVARCHAR(50),
      "PAYMENTSTATUSNAME" NVARCHAR(100),
      "TOTAL" DOUBLE,
      PRIMARY KEY(
      	"ID"
      )
      )
      
  • Step 2

    SAP Datasphere uses an IP allowlist concept where external systems have to be registered with their IP address before they can connect. In our case, SAP Integration Suite is the external system.

    1. First, you need to find out where your cloud integration tenant is hosted. You can see that in your integration tenant’s URL; in this example it shows us10-001.
    2. Now look up the range of potential egress IP addresses this tenant can use via the documentation on the SAP Help portal: Regions and API Endpoints Available for the Cloud Foundry Environment. This is the list of IP addresses you have to register in SAP Datasphere.
    3. In the SAP Datasphere menu, click on System and then Configuration. Open the IP Allowlist menu and select Trusted IPs. Now click Add for each of the addresses to add them to your allowlist.

    SAP Datasphere is now ready to store your data.
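    Allowlist entries are CIDR ranges. As a quick sanity check, you can verify that a given egress IP actually falls inside a documented range before registering it. A minimal sketch (the range and addresses below are reserved documentation examples, not real SAP egress IPs):

    ```java
    public class CidrCheck {
        // Convert a dotted-quad IPv4 string to a 32-bit int.
        static int toInt(String ip) {
            String[] o = ip.split("\\.");
            return (Integer.parseInt(o[0]) << 24) | (Integer.parseInt(o[1]) << 16)
                 | (Integer.parseInt(o[2]) << 8) | Integer.parseInt(o[3]);
        }

        // True if `address` lies within the CIDR range, e.g. "203.0.113.0/24".
        static boolean inRange(String cidr, String address) {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
            return (toInt(parts[0]) & mask) == (toInt(address) & mask);
        }

        public static void main(String[] args) {
            System.out.println(inRange("203.0.113.0/24", "203.0.113.42"));  // true
            System.out.println(inRange("203.0.113.0/24", "198.51.100.7"));  // false
        }
    }
    ```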

  • Step 3

    To connect an integration flow to the SAP Datasphere data source, you need to provide the credentials; hence a JDBC data source artifact is required. This artifact stores the access data for the database (as defined and generated in the previous steps).
    During the integration flow design, you point the JDBC adapter to the alias of the JDBC data source. No additional configuration is then required in the integration flow.

    1. For this, in the SAP Integration Suite tenant, navigate to Monitor > Integrations (which you can do by clicking Overview in the breadcrumbs on top), then choose JDBC Material.
    2. Click Add to create a new data source, and provide the SAP Datasphere schema credentials noted down earlier.
      Make sure to change the Database Type to SAP HANA Cloud, and use the correct format for the JDBC URL field: **jdbc:sap://{hostname}:{port}**.
    3. Click on Deploy to store this credential.
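    The JDBC URL is simply the Host Name and Port you noted down in Step 1, prefixed with the HANA scheme. A tiny sketch of the format (the host below is a made-up placeholder, not a real tenant):

    ```java
    public class JdbcUrl {
        // Build a HANA Cloud JDBC URL in the format jdbc:sap://{hostname}:{port}.
        static String hanaUrl(String hostname, int port) {
            return "jdbc:sap://" + hostname + ":" + port;
        }

        public static void main(String[] args) {
            // Placeholder host; substitute the Host Name and Port from your database user details.
            System.out.println(hanaUrl("example.hana.prod-us10.hanacloud.ondemand.com", 443));
        }
    }
    ```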
  • Step 4

    In the previous tutorial, we already used pre-packaged integration content, which helps us get started with minimal integration effort. Now we are going to extend that same integration package.

    1. To get started, go to your SAP Integration Suite tenant, navigate to Design > Integrations, and select the integration package you created in the previous tutorial.
    2. Switch to the Artifacts tab, and select the Consume SAP Concur API integration flow.
    3. Then select Edit to start editing the integration flow.
    4. Select the Groovy Script 1 integration flow step and select the script file from the Processing tab.
    5. In the script editor, specify the script according to the requirements of your scenario. For an overview of the classes and interfaces supported by the Script step, see SDK API. For more information on how to use the dedicated interfaces and methods for specific use cases, refer to Script Use Cases.

    This is the Groovy script you can use to insert the data into the "SPEND_ANALYSIS#EXTERNAL"."REPORTS" table, which we created in the SAP Datasphere database. Copy and paste it.

    Groovy

    import com.sap.gateway.ip.core.customdev.util.Message
    import java.util.Arrays

    def Message processData(Message message) {
        def body = message.getBody(java.lang.String) as String
        def xml = new XmlSlurper().parseText(body)
        List paramList = new ArrayList()
        xml.Items.Report.each {
            // The parameter order must match the column order of the INSERT statement below.
            paramList.add(Arrays.asList(it.ID.text(), it.Name.text(), it.CurrencyCode.text(),
                it.Country.text(), it.ReceiptsReceived.text(), it.OwnerName.text(),
                it.OwnerLoginID.text(), it.PaymentStatusName.text(), it.Total.text()))
        }
        message.setHeader("CamelSqlParameters", paramList)
        message.setBody('INSERT INTO "SPEND_ANALYSIS#EXTERNAL"."REPORTS"(ID,NAME,CURRENCYCODE,COUNTRY,RECEIPTSRECEIVED,' +
            'OWNERNAME,OWNERLOGINID,PAYMENTSTATUSNAME,TOTAL) VALUES(?,?,?,?,?,?,?,?,?)')
        return message
    }

    When you’ve finished the definition of your script, click OK.
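    To make the script’s transformation concrete, here is a plain-Java equivalent of the same step: it parses an Items/Report payload into one parameter row per report, in the column order of the INSERT statement. This is an illustrative sketch with made-up sample data; in the iflow, the Groovy script above is what actually runs.

    ```java
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class ReportParams {
        // Parse <Items><Report>...</Report></Items> into one parameter row per Report,
        // ordered to match: ID, NAME, CURRENCYCODE, COUNTRY, RECEIPTSRECEIVED,
        // OWNERNAME, OWNERLOGINID, PAYMENTSTATUSNAME, TOTAL.
        static List<List<String>> toParams(String xml) {
            try {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
                NodeList reports = doc.getElementsByTagName("Report");
                List<List<String>> rows = new ArrayList<>();
                for (int i = 0; i < reports.getLength(); i++) {
                    Element r = (Element) reports.item(i);
                    rows.add(Arrays.asList(
                        text(r, "ID"), text(r, "Name"), text(r, "CurrencyCode"),
                        text(r, "Country"), text(r, "ReceiptsReceived"),
                        text(r, "OwnerName"), text(r, "OwnerLoginID"),
                        text(r, "PaymentStatusName"), text(r, "Total")));
                }
                return rows;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        private static String text(Element parent, String tag) {
            NodeList n = parent.getElementsByTagName(tag);
            return n.getLength() == 0 ? "" : n.item(0).getTextContent();
        }

        public static void main(String[] args) {
            String sample = "<Items><Report><ID>R1</ID><Name>Trip</Name>"
                    + "<CurrencyCode>USD</CurrencyCode><Country>US</Country>"
                    + "<ReceiptsReceived>true</ReceiptsReceived><OwnerName>Jane</OwnerName>"
                    + "<OwnerLoginID>jane@acme</OwnerLoginID>"
                    + "<PaymentStatusName>Paid</PaymentStatusName><Total>42.5</Total></Report></Items>";
            System.out.println(toParams(sample));
        }
    }
    ```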

  • Step 5

    We have already discussed that the JDBC (Java Database Connectivity) adapter enables you to connect SAP Cloud Integration to cloud or on-premise databases and execute SQL operations on them.

    To connect to the database, we need a data source under Manage JDBC Material > JDBC Data Source, which you already created in the previous steps. With these prerequisites done, we can create an iflow that pushes data to the SQL database.

    1. You use Receiver elements to model remote systems that are connected to your integration flow. Search for and select Receiver in the integration palette and drop it onto the canvas.
    2. To establish communication between integration flows, you can use the Request Reply integration pattern with a JDBC adapter. Search for and select Request Reply in the integration palette and drop it into the Integration Process.
    3. Select and connect the Request Reply integration step to the Receiver.
    4. This opens the adapter list; from the available adapters, select JDBC in the Adapter Type dialog.
    5. Under the Connection tab, enter the name of the JDBC data source you created in the previous steps, and check Batch Mode; this enables you to process collection queries in a single request. For more details, see Batch Payload and Operation.
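    In batch mode, the adapter executes one parameterized INSERT with a placeholder per column, once per row in CamelSqlParameters. A small illustrative sketch of how such a statement is shaped from a column list (this is not part of the iflow; in the scenario the statement is set by the Groovy script):

    ```java
    import java.util.Collections;
    import java.util.List;

    public class InsertBuilder {
        // Build "INSERT INTO <table>(c1,c2,...) VALUES(?,?,...)" for a batch insert.
        static String insertFor(String table, List<String> columns) {
            String cols = String.join(",", columns);
            String marks = String.join(",", Collections.nCopies(columns.size(), "?"));
            return "INSERT INTO " + table + "(" + cols + ") VALUES(" + marks + ")";
        }

        public static void main(String[] args) {
            System.out.println(insertFor("\"SPEND_ANALYSIS#EXTERNAL\".\"REPORTS\"",
                    List.of("ID", "NAME", "TOTAL")));
            // INSERT INTO "SPEND_ANALYSIS#EXTERNAL"."REPORTS"(ID,NAME,TOTAL) VALUES(?,?,?)
        }
    }
    ```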
  • Step 6
    1. Select Save and then Deploy to deploy the integration flow to the SAP Integration Suite tenant. This triggers the deployment of your integration flow.
    2. Navigate to the Monitor > Integrations tab to check the status of the deployed integration flow. Since the integration flow was designed as a Run Immediately timer-based integration, it will be executed as soon as it is deployed. Select the Completed Messages tile to view the successfully executed integration flows. In case of any failure, it will appear under Failed Messages.
    3. Check the status of the newly created integration flow from Monitor Message Processing. If the status changes to Completed, go to the Attachments tab and click the Response Payload attachment.
    4. Now open the SAP Datasphere Database Explorer and log in with your credentials. In the explorer, search for your table, right-click it, and select Open Data to check the records.
  • Step 7

    The graphical view builder in SAP Datasphere makes it simple to develop data views. You can proceed logically without needing to be familiar with SQL statements.
    In the graphical view builder, you have many resources to model your data, combine it from many sources, and assign business semantics that make your output easier to understand.

    1. In your SAP Datasphere tenant, on the welcome page, select Data Builder in the left-side menu and select the space where you want to model your data.
    2. Then click on **New Graphical View**.
    3. Your data is under Sources; select Sources on the top right-hand side of the screen.
    4. To start building your model, click and drag the REPORTS table onto the canvas.


  • Step 8

    Now you can easily transform or drill down into your expense report data by adding filters, joins, and projections as shown below.


    You have successfully completed the last tutorial of this tutorial group. You have seen how to integrate data via the Database Users/Open SQL Schema approach to store SAP Concur data in SAP Datasphere using SAP Integration Suite, and how to use it in the Data Builder to model and enrich business expense data.
