Deploy Your CAP Application to Kyma
- How to add the Authorization and Trust Management service (XSUAA) to your project
- How to build Docker images for your CAP service and database deployer
- How to push the Docker images to your container registry
- How to deploy your app to your Kyma cluster
Prerequisites
- Add Helm Chart
- You have created a DB secret as specified in Step 3: Setup SAP HANA Cloud of Set Up SAP HANA Cloud for Kyma.
Step 1

The next step is to add the Authorization and Trust Management service (XSUAA), which enables user login, authorization, and authentication checks. Add the following snippet to the `chart/values.yaml` file:

```yaml
srv:
  ...
xsuaa:
  serviceOfferingName: xsuaa
  servicePlanName: application
  parameters:
    xsappname: cpapp
    tenant-mode: dedicated
    role-collections:
      - description: Manage Risks
        name: RiskManager
        role-template-references:
          - '$XSAPPNAME.RiskManager'
      - description: View Risks
        name: RiskViewer
        role-template-references:
          - '$XSAPPNAME.RiskViewer'
```
The configuration for XSUAA is read from the `xs-security.json` file that was created in the tutorial Prepare User Authentication and Authorization (XSUAA) Setup. In the `config` element, values can be added or overwritten. The value `xsappname` is overwritten with a space-dependent value, because the name has to be unique within a subaccount. This allows multiple deployments of this tutorial in different spaces of the same subaccount, for example, by different members of a team who want to try it out without creating a new subaccount for each person. For a productive application, `xsappname` should be set explicitly to the desired value. Alternatively, role collections can be assigned manually in the SAP BTP cockpit.

Additional Documentation: See section Assigning Role Collections in the SAP BTP documentation for more details.
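The space-dependent renaming can be pictured with a tiny shell sketch. The suffix scheme shown here is hypothetical, purely to illustrate why the resulting name stays unique when several people deploy into one subaccount:

```shell
# Illustrative only: a namespace/space-dependent suffix (hypothetical scheme)
# keeps the effective xsappname unique within the subaccount.
XSAPPNAME_BASE=cpapp
NAMESPACE=risk-management   # your Kyma namespace
echo "${XSAPPNAME_BASE}-${NAMESPACE}"
```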
Step 2
Let's first set an environment variable for the container registry in your terminal. The variable lasts only for the current terminal session, and it serves as a shorter alternative to the full container registry URL when building and pushing the Docker images later in this tutorial. Open a terminal and run the following command:

```shell
CONTAINER_REGISTRY=<your-container-registry>
```
Looking for `<your-container-registry>`?

The value for `<your-container-registry>` is the same as the Docker server URL and the path used for `docker login`. You can quickly check it by running the following command in your terminal:

```shell
cat ~/.docker/config.json
```
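To see how the variable is used, here is a minimal sketch with a hypothetical registry value (`registry.example.com/myuser`); the image names built later in this tutorial are simply derived from it:

```shell
# Hypothetical registry value; replace with the server/path from `docker login`.
CONTAINER_REGISTRY=registry.example.com/myuser
# The image names used later in this tutorial are derived from it:
echo "$CONTAINER_REGISTRY/cpapp-srv"
echo "$CONTAINER_REGISTRY/cpapp-hana-deployer"
```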
Step 3
NPM uses a file called `package-lock.json` to remember which versions of packages were installed. NPM installs the same versions and ignores any updates in minor releases that aren't explicitly specified in the `package.json` file. Maintaining this consistency is important for production applications. For the purposes of this tutorial, you'll be using the latest versions of the packages.

- Remove `node_modules` and `package-lock.json` from your project folder because they can cause errors later when building the CAP service:

  ```shell
  rm -rf node_modules package-lock.json
  ```
- Execute the following command in your project folder:

  ```shell
  cds build --production
  ```

  You should get an output like `[cds] - build completed in XXX ms`.
- (Optional) Run the following command to remove the test data:

  ```shell
  rm -rf gen/db/data
  ```

  Although the app will work with the test data, test data should usually be removed before deployment.

  Test files should never be deployed to an SAP HANA database as table data, because a later change to a data file can cause the deletion of all data in the affected database table. You can find more details in Step 6: Exclude CSV files from deployment of Deploy Your Multi-Target Application (MTA).
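Before deploying, you may want to double-check that no CSV test files remain in the build output. A minimal, self-contained sketch of that check, using a scratch directory rather than your real project:

```shell
# Self-contained sketch using a scratch directory (not your real project):
mkdir -p /tmp/cds-build-demo/gen/db/data
touch /tmp/cds-build-demo/gen/db/data/Risks.csv
rm -rf /tmp/cds-build-demo/gen/db/data          # same idea as the step above
find /tmp/cds-build-demo/gen/db -name '*.csv'   # prints nothing if clean
```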
Step 4
Run the following command:

```shell
pack build $CONTAINER_REGISTRY/cpapp-srv --path gen/srv \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base \
  --env BP_NODE_RUN_SCRIPTS=""
```

You should get an output like:

```
Successfully built image <Container Registry>/cpapp-srv
```
Step 5
Run the following command:

```shell
pack build $CONTAINER_REGISTRY/cpapp-hana-deployer --path gen/db \
  --buildpack gcr.io/paketo-buildpacks/nodejs \
  --builder paketobuildpacks/builder:base \
  --env BP_NODE_RUN_SCRIPTS=""
```

You should get an output like:

```
Successfully built image <Container Registry>/cpapp-hana-deployer
```
Step 6
Now that we've built the Docker images, let's push them to the container registry.

Make sure you're logged in to your container registry:

```shell
docker login
```

Push the images to the container registry:

```shell
docker push $CONTAINER_REGISTRY/cpapp-srv
docker push $CONTAINER_REGISTRY/cpapp-hana-deployer
```
Step 7
Deploy your app:

```shell
helm upgrade cpapp ./chart --install
```

In case you get an error message about the CPU limits, increase the values for CPU in the file `chart/values.yaml`:

```yaml
global:
  ...
  resources:
    limits:
      cpu: 100m
      ephemeral-storage: 1G
      memory: 500M
    requests:
      cpu: 100m
      ...
```
When the deployment is done, copy the app URL and paste it into a new browser window. Now you can access the CAP server.

If the error message `No healthy upstream.` is shown, wait a few seconds and try again.

When you choose the Mitigation or Risk service entity, you will see an error message: the service expects a so-called JWT (JSON Web Token) in the HTTP `Authorization` header that contains the required authentication and authorization information to access the service. In the next tutorial, you will deploy the SAP Fiori UIs so that you can access your UIs from SAP Build Work Zone, standard edition, which will trigger the authentication flow to provide the required token to access the service.

List the installed helm charts:
```shell
helm list
```

The installed helm chart should be displayed:

```
NAME    NAMESPACE        REVISION  UPDATED                   STATUS    CHART        APP VERSION
cpapp   risk-management  5         yyyy-mm-dd time timezone  deployed  cpapp-1.0.0  1.0.0
```
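The JWT requirement described above can be sketched as a plain HTTP header. This is illustrative only, with a placeholder token and a hypothetical service URL; a real token comes from the XSUAA authentication flow set up in the next tutorial:

```shell
# Placeholder token for illustration; a real JWT comes from the XSUAA
# authentication flow (e.g. via the approuter deployed in a later tutorial).
JWT="eyJhbGciOi...example"
AUTH_HEADER="Authorization: Bearer $JWT"
echo "$AUTH_HEADER"
# A real request would then look like (hypothetical service URL):
# curl -H "$AUTH_HEADER" "https://srv-cpapp-<namespace>.<cluster-domain>/service/risk/Risks"
```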
Step 8
The Helm chart starts a deployment with the CAP service and a job that deploys the database content. After successful execution, the job is deleted. In case you encounter an error during the deployment process, follow the instructions in the sections below to troubleshoot.
Step 9
On macOS, if you get the error `ERROR: failed to build: failed to fetch builder image '<DOCKER-IMAGE>': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?` when running the `pack build` command, try updating your container management app, as detailed in section Install a container management app of Prepare Your Kyma Development Environment.

Step 10
- Run the following command to check your database deployment:

  ```shell
  kubectl get jobs
  ```

  If the job fails or is still in progress, it's displayed as incomplete (completions `0/1`), as in this example:

  ```
  NAME                  COMPLETIONS   DURATION   AGE
  cpapp-hana-deployer   0/1           3m7s       3m7s
  ```
- In case the job is not completed, you can check the deployer's logs. Let's first print the pods:

  ```shell
  kubectl get pods
  ```

  You should see a list of pods in `Error` status because of failed deployment attempts:

  ```
  NAME                        READY   STATUS   RESTARTS   AGE
  cpapp-hana-deployer-6s7fl   0/1     Error    0          6m16s
  cpapp-hana-deployer-n5fnq   0/1     Error    0          7m46s
  cpapp-hana-deployer-plfmh   0/1     Error    0          7m16s
  cpapp-hana-deployer-z2nxh   0/1     Error    0          8m8s
  cpapp-hana-deployer-zc9c2   0/1     Error    0          6m56s
  ```
- Pick one of the pods and check its logs. For example:

  ```shell
  kubectl logs cpapp-hana-deployer-6s7fl
  ```

  The logs give you more details about the deployment, including error code and description.
- With the `describe` command, you can inspect the state of the pod even further:

  ```shell
  kubectl describe pod cpapp-hana-deployer-6s7fl
  ```

  The `describe` command returns a handy list of the pod's parameters, including `Name`, `Namespace`, `Service Account`, `Status`, `IP`, `Containers`, and `Events`, among others.
You can use the `logs` and `describe` commands as described above to inspect the pods. You can find further information about debugging pods in the Kubernetes documentation.

Step 11
If you see the error `Connection failed (RTE:[89013] Socket closed by peer`, it's possible that your SAP HANA Cloud instance doesn't allow your Kyma cluster's IP address. You can find more info in SAP HANA Database Connections.

To specify trusted source IP addresses for your SAP HANA Cloud instance:
- Get your Kyma cluster's outbound IP address with the following command:

  ```shell
  kubectl run -it --rm --restart=Never --image alpine/curl nat-ip-probe \
    --overrides='{ "apiVersion": "v1", "metadata": {"annotations": { "sidecar.istio.io/inject":"false" } } }' \
    -- curl https://httpbin.org/ip
  ```

  The command creates a temporary container that returns your Kyma cluster's outbound IP address and then deletes itself. It takes a few seconds to execute and prints a JSON object with the IP address.
- Go to your Cloud Foundry space where you already have the SAP HANA Cloud service instance.
- Choose SAP HANA Cloud in the left-hand pane.
- Choose Manage SAP HANA Cloud in the upper right corner.
- Sign in with your SAP BTP cockpit username and email. You should see your SAP HANA Cloud instance in the SAP HANA Cloud Central cockpit.
- Choose Manage Configuration from the Actions menu for your SAP HANA Cloud instance.
- Change the Allowed Connections selection to `Allow specific IP addresses and IP ranges (in addition to BTP)` and add your Kyma cluster's outbound IP address.
- Your SAP HANA Cloud instance automatically restarts when you save your changes. Once the instance is running, try to deploy your app again:

  ```shell
  helm upgrade cpapp ./chart --install
  ```
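The IP probe in the first step prints a small JSON object. Here is a minimal sketch of pulling the IP value out of such a response (sample value shown, not your real address):

```shell
# Illustrative response shape from https://httpbin.org/ip (sample value):
RESPONSE='{ "origin": "203.0.113.42" }'
# Extract the IP to paste into the SAP HANA Cloud allowlist:
IP=$(echo "$RESPONSE" | sed -n 's/.*"origin": *"\([^"]*\)".*/\1/p')
echo "$IP"
```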
Step 12
If the deployment was successful, you should see the running CAP service in the list of pods:

```shell
kubectl get pods
```

```
NAME                         READY   STATUS    RESTARTS   AGE
cpapp-srv-84964965cd-5mwtm   2/2     Running   0          13m
```

Your service is made externally available using the `VirtualService` resource from Istio. You can check your externally exposed hostname:

```shell
kubectl get virtualservice
```

It should look like this:

```
NAME              GATEWAYS                                         HOSTS                                                         AGE
cpapp-srv-bsbj8   ["kyma-gateway.kyma-system.svc.cluster.local"]   ["srv-cpapp-risk-management.c-abc.stage.kyma.ondemand.com"]   2d15h
```