
Leveraging Orchestration Capabilities to Enhance Responses

In this tutorial, we will explore optional orchestration capabilities available in the Gen AI Hub, such as Data Masking, Translation, and Content Filtering.
You will learn
  • Inference of GenAI models using orchestration together with the Data Masking, Translation, and Content Filtering features
Smita Naik | July 10, 2025

Prerequisites

  1. BTP Account
    Set up your SAP Business Technology Platform (BTP) account.
    Create a BTP Account
  2. For SAP Developers or Employees
    Internal SAP stakeholders should refer to the following documentation: How to create BTP Account For Internal SAP Employee, SAP AI Core Internal Documentation
  3. For External Developers, Customers, or Partners
    Follow this tutorial to set up your environment and entitlements: External Developer Setup Tutorial, SAP AI Core External Documentation
  4. Create BTP Instance and Service Key for SAP AI Core
    Follow the steps to create an instance and generate a service key for SAP AI Core:
    Create Service Key and Instance
  5. AI Core Setup Guide
    Step-by-step guide to set up and get started with SAP AI Core:
    AI Core Setup Tutorial
  6. An Extended SAP AI Core service plan is required, as the Generative AI Hub is not available in the Free or Standard tiers. For more details, refer to
    SAP AI Core Service Plans
  7. Orchestration Deployment:
    Refer to the tutorial on the basic consumption of GenAI models using orchestration, and ensure that at least one orchestration deployment is ready to be consumed during this process.
  8. Basic Knowledge:
    Familiarity with the orchestration workflow is recommended.
  • Step 1

    This tutorial builds on the foundational orchestration concepts introduced in the beginner’s tutorial and focuses on enhancing GenAI responses using orchestration modules such as data masking, translation and content filtering.

    Previously, in the beginner’s tutorials, we used a resume processing use case to illustrate how to create an orchestration workflow to consume models and easily switch between models using the harmonized API. In this tutorial, we use a sentiment analysis use case to demonstrate how optional orchestration modules such as Data Masking, Translation, and Content Filtering can be applied to protect sensitive information, translate multilingual support requests, and filter out undesirable or non-compliant content, thereby enhancing the quality, safety, and compliance of generative AI outputs.

    Data masking in SAP AI Core allows you to anonymize or pseudonymize personal or confidential data before sending it to the generative AI model.
    🔗 Learn more about Data Masking in SAP AI Core

    Translation in SAP GenAI Orchestration enables automatic language conversion of inputs and outputs during LLM processing.
    🔗 Learn more about Translation in SAP AI Core

    Content filtering helps identify and block inappropriate, offensive, or non-compliant input and output content within an orchestration workflow.
    🔗 Learn more about Content Filtering in SAP AI Core

    In this tutorial, we focus specifically on data masking, translation, and content filtering. Other orchestration modules, such as grounding, are also available in SAP AI Core and are covered in separate tutorials.

    You will learn how to:

    • Integrate data masking within the orchestration flow to safeguard personal or confidential information.
    • Apply content filtering to identify and restrict inappropriate or non-compliant responses.
    • Use relevant SAP AI Core features and configurations to support these capabilities.

    By the end of this tutorial, you will:

    • Understand how to design a secure and controlled orchestration pipeline suitable for enterprise-grade GenAI applications.

    • Know how to implement the solution using SAP AI Launchpad, the Python SDK, Java, JavaScript, and Bruno.

  • Step 2

    In this tutorial, we will build upon the orchestration framework introduced in the beginner’s tutorial. The focus will shift from basic orchestration to leveraging optional modules to enhance data privacy and refine response quality. These enhancements include:

    **Data Masking**: Hiding sensitive information like phone numbers, organizational details, or personal identifiers.

    **Content Filtering**: Screening for categories such as hate speech, self-harm, explicit content, and violence to ensure safe and relevant responses.

    **Translation**: Automatically converts input and/or output text between source and target languages to support multilingual processing.
    
    • Here, we use a sentiment analysis use case, where orchestration is enhanced by incorporating data masking, translation, and content filtering. These additions help improve data privacy, security, and response quality.
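
    At the configuration level, these optional modules plug in alongside the mandatory templating and model (LLM) modules. The outline below is purely illustrative of that structure; the field names are assumptions, not the exact orchestration schema, which the following steps configure concretely.

```python
# Purely illustrative outline of how the optional modules sit alongside the
# mandatory templating and LLM modules; field names are assumptions, not the
# exact orchestration configuration schema.
orchestration_modules = {
    "templating": "...",         # mandatory: prompt template with placeholders
    "llm": "...",                # mandatory: model name, version, parameters
    "data_masking": "...",       # optional: anonymization / pseudonymization
    "translation": "...",        # optional: input and/or output translation
    "content_filtering": "...",  # optional: input and/or output filters
}
```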
  • Step 3

    The templating module is a mandatory step in orchestration. It allows you to define dynamic inputs using placeholders, construct structured prompts, and generate a final query that will be passed to the model configuration module.

    In this step, we create a template that defines how the sentiment analysis prompt will be structured using message components:

    system: Defines assistant behavior and task.

    user: Provides the support request input.
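
    For the Python SDK option, a minimal version of this template might look like the sketch below. The module paths and class names (`Template`, `SystemMessage`, `UserMessage`, `TemplateValue`) follow the generative-ai-hub-sdk, and the placeholder name `text` is chosen for this example; verify both against your installed SDK version.

```python
# Minimal sketch of the templating step with the generative-ai-hub-sdk (Python).
# Module paths, class names, and the "text" placeholder are assumptions for this
# example; check your installed SDK version for the exact API.
from gen_ai_hub.orchestration.models.message import SystemMessage, UserMessage
from gen_ai_hub.orchestration.models.template import Template, TemplateValue

template = Template(
    messages=[
        # system: defines assistant behavior and the sentiment analysis task
        SystemMessage(
            "You are a sentiment analysis assistant. Classify the sentiment of the "
            "customer support request as positive, neutral, or negative, and explain why."
        ),
        # user: provides the support request via the {{?text}} placeholder
        UserMessage("Support request: {{?text}}"),
    ],
    # optional default for the placeholder; it is overridden at run time
    defaults=[TemplateValue(name="text", value="The service was great, thank you!")],
)
```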

  • Step 4

    The Data Masking Module ensures data privacy by anonymizing or pseudonymizing sensitive information before it is processed.

    **Anonymization**: Irreversibly replaces personal identifiers with placeholders (e.g., MASKED_ENTITY).

    **Pseudonymization**: Substitutes identifiers with reversible tokens (e.g., MASKED_ENTITY_ID).
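
    As a rough Python SDK sketch (module paths, class names, and the available entity values are assumptions to verify against your SDK version), a masking configuration could look like this:

```python
# Sketch of a data masking configuration, assuming the SDK's SAP Data Privacy
# Integration provider. Module paths, class names, and the available entity
# values are assumptions; verify them against your installed SDK version.
from gen_ai_hub.orchestration.models.data_masking import DataMasking
from gen_ai_hub.orchestration.models.sap_data_privacy_integration import (
    MaskingMethod,
    ProfileEntity,
    SAPDataPrivacyIntegration,
)

data_masking = DataMasking(
    providers=[
        SAPDataPrivacyIntegration(
            # ANONYMIZATION irreversibly replaces entities with placeholders;
            # use MaskingMethod.PSEUDONYMIZATION for reversible tokens instead.
            method=MaskingMethod.ANONYMIZATION,
            # entity types to mask before the text reaches the model
            entities=[
                ProfileEntity.PERSON,
                ProfileEntity.PHONE,
                ProfileEntity.ORG,
            ],
        )
    ]
)
```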
    
  • Step 5

    The Translation Module enables multilingual processing by translating content sent to and received from the generative AI model. This is especially useful when the user input or model output is not in the default language expected by the LLM.

    • The module uses SAP’s Document Translation service.

    • The target language is mandatory.

    • If the source language is not specified, it is detected automatically.
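
    As a rough illustration of the shape of such a configuration, the sketch below expresses the translation settings as plain Python dictionaries. The key names (`sap_document_translation`, `source_language`, `target_language`) are assumptions based on the orchestration configuration format; consult the official documentation for the exact schema.

```python
# Illustrative sketch only: the rough shape of input/output translation settings.
# Key names are assumptions; refer to the SAP AI Core orchestration documentation
# for the exact schema supported by your service version.
input_translation_config = {
    "type": "sap_document_translation",  # SAP Document Translation service
    "config": {
        "source_language": "de-DE",      # optional: omit to auto-detect the source
        "target_language": "en-US",      # mandatory: language sent to the LLM
    },
}

output_translation_config = {
    "type": "sap_document_translation",
    "config": {
        "target_language": "de-DE",      # translate the model response back
    },
}
```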

  • Step 6

    The Content Filtering Module allows screening of both input and output content to remove inappropriate or unwanted elements such as hate speech or violent content. This ensures that sentiment analysis is performed on safe and relevant inputs, and the responses generated are also safe for consumption.
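
    A minimal Python SDK sketch, assuming the SDK exposes an `AzureContentFilter` class with per-category thresholds (names, scale, and the way filters attach to the configuration are assumptions to verify against your SDK version), could look like this:

```python
# Sketch of a content filter definition, assuming the SDK exposes an
# AzureContentFilter with per-category thresholds (names and scale are
# assumptions; check your SDK version). Lower thresholds filter more strictly.
from gen_ai_hub.orchestration.models.azure_content_filter import AzureContentFilter

strict_filter = AzureContentFilter(hate=0, sexual=0, self_harm=0, violence=0)

# The same (or different) filters can then be attached to the orchestration
# configuration for both the user input and the model output, for example via
# input/output filtering settings on OrchestrationConfig (assumed parameter names).
```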

  • Step 7

    This step runs the orchestration pipeline for each selected LLM using the provided input text for sentiment analysis. It captures and stores the model-generated responses, enabling comparison of output quality across models.
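
    A condensed Python SDK sketch of this loop is shown below. It reuses the `template` and `data_masking` objects from the earlier steps; the model names, the deployment URL placeholder, and the response access path are assumptions to adapt to your environment and SDK version.

```python
# Sketch of running the orchestration pipeline for several models and collecting
# the responses for comparison. It reuses `template` and `data_masking` from the
# previous steps; model names, the deployment URL placeholder, and the response
# access path are assumptions to adapt to your environment and SDK version.
from gen_ai_hub.orchestration.models.config import OrchestrationConfig
from gen_ai_hub.orchestration.models.llm import LLM
from gen_ai_hub.orchestration.models.template import TemplateValue
from gen_ai_hub.orchestration.service import OrchestrationService

ORCHESTRATION_DEPLOYMENT_URL = "<your orchestration deployment URL>"
models = ["gpt-4o", "anthropic--claude-3.5-sonnet"]  # example model names
support_request = (
    "My order arrived damaged and nobody answers the hotline. "
    "Please call me back at +49 170 0000000."
)

responses = {}
for model_name in models:
    config = OrchestrationConfig(
        template=template,          # templating module from Step 3
        llm=LLM(name=model_name),   # model configuration for this run
        data_masking=data_masking,  # data masking module from Step 4
        # translation and content filtering are attached analogously if configured
    )
    service = OrchestrationService(api_url=ORCHESTRATION_DEPLOYMENT_URL, config=config)
    result = service.run(template_values=[TemplateValue(name="text", value=support_request)])
    responses[model_name] = result.orchestration_result.choices[0].message.content

for model_name, answer in responses.items():
    print(f"--- {model_name} ---\n{answer}\n")
```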

  • Step 8

    Once the orchestration completes, you can observe that the output is now more refined, with sensitive information masked and inappropriate content filtered. This demonstrates the power of modules like data masking and content filtering to enhance privacy and ensure response quality.

    While this tutorial used a sentiment analysis use case, the same principles can be applied to other use cases. You can customize the Data Masking and Content Filtering settings based on your specific requirements to handle sensitive or categorized data effectively.

    By incorporating these optional modules, you can tailor your responses to meet organizational data security policies and ensure safe, reliable output for diverse scenarios.
