Apache Kafka Python Sample: Consuming and Writing Data into SAP HANA Cloud, data lake Files
Requires Customer/Partner License
This tutorial demonstrates how to set up an Apache Kafka instance locally and create Python scripts that produce data and consume it into SAP HANA Cloud, data lake Files based on user specifications.
You will learn
- How to create a Kafka producer script in Python.
- How to create a Kafka consumer script in Python.
- How to integrate with SAP HANA Cloud, data lake Files and write data into it.
Prerequisites
- Apache Kafka installed.
- Python installed.
- Basic knowledge of Python and Kafka.
- Access to SAP Business Technology Platform (BTP) and a provisioned data lake instance (HDLF) with configured certificates. Refer to the SAP HANA Cloud Data Lake Setup for data lake setup and this tutorial for configuring certificates.
- A running local Kafka server at localhost:9092.
Apache Kafka is a distributed event streaming platform capable of handling trillions of events a day. In this tutorial, you will use a local Kafka instance and create Python scripts to consume and write data into SAP HANA Cloud, data lake Files. Specifically, you will send random temperature and timestamp data, which will be processed and stored in the data lake in CSV format.
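As a preview of the producer side, the sketch below generates random temperature readings with timestamps and publishes them to a Kafka topic. It assumes the kafka-python package and a broker at localhost:9092 (as listed in the prerequisites); the topic name temperature-data is illustrative, not prescribed by this tutorial.

```python
import json
import random
import time

def make_reading():
    """Build one random temperature reading with a timestamp."""
    return {
        "temperature": round(random.uniform(15.0, 35.0), 2),
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    }

if __name__ == "__main__":
    # Assumes the kafka-python package (pip install kafka-python)
    # and a broker running at localhost:9092.
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    for _ in range(10):
        # "temperature-data" is an illustrative topic name.
        producer.send("temperature-data", value=make_reading())
        time.sleep(1)
    producer.flush()
```

Each message is serialized as JSON so the consumer can deserialize it back into a dictionary before converting the batch to CSV.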

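On the consumer side, one possible shape is to batch messages from the topic, serialize them to CSV, and upload the result to data lake Files through its WebHDFS-style REST API. The endpoint host, instance ID, file path, batch size, and certificate file names below are all placeholders you would replace with values from your own HDLF instance; treat this as a sketch under those assumptions, not a definitive implementation.

```python
import csv
import io

def readings_to_csv(readings):
    """Serialize a list of reading dicts into a CSV string with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["temperature", "timestamp"])
    writer.writeheader()
    writer.writerows(readings)
    return buf.getvalue()

if __name__ == "__main__":
    import json
    import requests
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "temperature-data",  # illustrative topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    batch = []
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= 10:  # arbitrary batch size for this sketch
            break

    # Upload the batch to data lake Files via its WebHDFS-style REST API.
    # FILES_REST_API, INSTANCE_ID, the target path, and the certificate
    # file names are placeholders for your own HDLF configuration.
    FILES_REST_API = "<your-instance>.files.hdl.<region>.hanacloud.ondemand.com"
    INSTANCE_ID = "<your-instance-id>"
    url = (f"https://{FILES_REST_API}/webhdfs/v1/kafka/temperature.csv"
           "?op=CREATE&data=true&overwrite=true")
    response = requests.put(
        url,
        data=readings_to_csv(batch).encode("utf-8"),
        headers={
            "x-sap-filecontainer": INSTANCE_ID,
            "Content-Type": "application/octet-stream",
        },
        # Client certificate and key configured in the prerequisites.
        cert=("client.crt", "client.key"),
    )
    response.raise_for_status()
```

Batching before upload keeps the number of REST calls small; writing one file per message would work but puts needless load on the Files API.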



