Read from Kafka
SparkKafkaSource
Bases: SourceInterface
This Spark source class is used to read batch or streaming data from Kafka. Required and optional configurations can be found in the Attributes tables below. Additional optional configurations can be found here.
Example
```python
# Kafka Source for Streaming Queries
from rtdip_sdk.pipelines.sources import SparkKafkaSource
from rtdip_sdk.pipelines.utilities import SparkSessionUtility

# Not required if using Databricks
spark = SparkSessionUtility(config={}).execute()

kafka_source = SparkKafkaSource(
    spark=spark,
    options={
        "kafka.bootstrap.servers": "{HOST_1}:{PORT_1},{HOST_2}:{PORT_2}",
        "subscribe": "{TOPIC_1},{TOPIC_2}",
        "includeHeaders": "true",
    },
)

kafka_source.read_stream()
```
```python
# Kafka Source for Batch Queries
from rtdip_sdk.pipelines.sources import SparkKafkaSource
from rtdip_sdk.pipelines.utilities import SparkSessionUtility

# Not required if using Databricks
spark = SparkSessionUtility(config={}).execute()

kafka_source = SparkKafkaSource(
    spark=spark,
    options={
        "kafka.bootstrap.servers": "{HOST_1}:{PORT_1},{HOST_2}:{PORT_2}",
        "subscribe": "{TOPIC_1},{TOPIC_2}",
        "startingOffsets": "earliest",
        "endingOffsets": "latest",
    },
)

kafka_source.read_batch()
```
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `spark` | `SparkSession` | Spark Session | required |
| `options` | `dict` | A dictionary of Kafka configurations (see the Attributes tables below). For more information on configuration options see here | required |
The following attributes are the most common configurations for Kafka.
The only configuration that must be set for the Kafka source for both batch and streaming queries is listed below.
Attributes:

| Name | Type | Description |
|---|---|---|
| `kafka.bootstrap.servers` | A comma-separated list of `host:port` | The Kafka "bootstrap.servers" configuration. (Streaming and Batch) |
There are multiple ways of specifying which topics to subscribe to. You should provide only one of these attributes (a short sketch follows the table):

Attributes:

| Name | Type | Description |
|---|---|---|
| `assign` | JSON string `{"topicA":[0,1],"topicB":[2,4]}` | Specific TopicPartitions to consume. Only one of "assign", "subscribe" or "subscribePattern" options can be specified for the Kafka source. (Streaming and Batch) |
| `subscribe` | A comma-separated list of topics | The topic list to subscribe to. Only one of "assign", "subscribe" or "subscribePattern" options can be specified for the Kafka source. (Streaming and Batch) |
| `subscribePattern` | Java regex string | The pattern used to subscribe to topic(s). Only one of "assign", "subscribe" or "subscribePattern" options can be specified for the Kafka source. (Streaming and Batch) |
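As a quick illustration of the alternatives to "subscribe", the minimal sketch below uses "subscribePattern"; the pattern, host, and port are placeholders, and `spark` is created as in the examples above. "assign" would instead take a JSON mapping such as `{"topicA":[0,1]}`.

```python
from rtdip_sdk.pipelines.sources import SparkKafkaSource

# Minimal sketch: subscribe by Java regex pattern instead of an explicit topic list.
# The pattern and server below are placeholders, not values from this project.
kafka_source = SparkKafkaSource(
    spark=spark,
    options={
        "kafka.bootstrap.servers": "{HOST_1}:{PORT_1}",
        "subscribePattern": "sensor-.*",  # matches every topic starting with "sensor-"
    },
)
```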
The following configurations are optional (a usage sketch follows the table):

Attributes:

| Name | Type | Description |
|---|---|---|
| `startingTimestamp` | timestamp str | The start point of timestamp when a query is started, a string specifying a starting timestamp for all partitions in topics being subscribed. Please refer to the note on starting timestamp offset options below. (Streaming and Batch) |
| `startingOffsetsByTimestamp` | JSON str | The start point of timestamp when a query is started, a JSON string specifying a starting timestamp for each TopicPartition. Please refer to the note on starting timestamp offset options below. (Streaming and Batch) |
| `startingOffsets` | "earliest", "latest" (streaming only), or JSON string | The start point when a query is started, either "earliest" which is from the earliest offsets, "latest" which is just from the latest offsets, or a JSON string specifying a starting offset for each TopicPartition. In the JSON, -2 as an offset can be used to refer to earliest, -1 to latest. |
| `endingTimestamp` | timestamp str | The end point when a batch query is ended, a string specifying an ending timestamp for all partitions in topics being subscribed. Please refer to the note on ending timestamp offset options below. (Batch) |
| `endingOffsetsByTimestamp` | JSON str | The end point when a batch query is ended, a JSON string specifying an ending timestamp for each TopicPartition. Please refer to the note on ending timestamp offset options below. (Batch) |
| `endingOffsets` | "latest" or JSON str | The end point when a batch query is ended, either "latest" which refers to the latest offsets, or a JSON string specifying an ending offset for each TopicPartition. In the JSON, -1 as an offset can be used to refer to latest, and -2 (earliest) as an offset is not allowed. (Batch) |
| `maxOffsetsPerTrigger` | long | Rate limit on the maximum number of offsets processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume. (Streaming) |
| `minOffsetsPerTrigger` | long | Minimum number of offsets to be processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume. (Streaming) |
| `failOnDataLoss` | bool | Whether to fail the query when it's possible that data is lost (e.g., topics are deleted, or offsets are out of range). This may be a false alarm. You can disable it when it doesn't work as expected. |
| `minPartitions` | int | Desired minimum number of partitions to read from Kafka. By default, Spark has a 1-1 mapping of topicPartitions to Spark partitions consuming from Kafka. (Streaming and Batch) |
| `includeHeaders` | bool | Whether to include the Kafka headers in the row. (Streaming and Batch) |
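For example, a batch query could be bounded by timestamps rather than offsets. This is an illustrative sketch only; the epoch-millisecond values and topic/server names are placeholders, and `spark` is created as in the examples above.

```python
from rtdip_sdk.pipelines.sources import SparkKafkaSource

# Illustrative sketch: bound a batch read by record timestamps (epoch milliseconds).
# Timestamp values and topic/server names are placeholders.
kafka_source = SparkKafkaSource(
    spark=spark,
    options={
        "kafka.bootstrap.servers": "{HOST_1}:{PORT_1}",
        "subscribe": "{TOPIC_1}",
        "startingTimestamp": "1650000000000",
        "endingTimestamp": "1650086400000",
    },
)

kafka_source.read_batch()
```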
Starting Timestamp Offset Note

If Kafka doesn't return the matched offset, the behavior follows the value of the option `startingOffsetsByTimestampStrategy`. `startingTimestamp` takes precedence over `startingOffsetsByTimestamp` and `startingOffsets`. For streaming queries, this only applies when a new query is started; resuming will always pick up from where the query left off. Newly discovered partitions during a query will start at earliest.
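As a hedged sketch of that strategy option: the Spark Kafka integration documents `startingOffsetsByTimestampStrategy` with the values "error" (fail the query) and "latest" (fall back to the latest offset). The topic, partition, and timestamp below are placeholders.

```python
# Sketch: when a timestamp matches no offset in a partition, fall back to latest
# instead of failing. The values "error"/"latest" come from the Spark Kafka
# integration docs, not from this SDK; all other values are placeholders.
options = {
    "kafka.bootstrap.servers": "{HOST_1}:{PORT_1}",
    "subscribe": "{TOPIC_1}",
    "startingOffsetsByTimestamp": '{"{TOPIC_1}": {"0": 1650000000000}}',
    "startingOffsetsByTimestampStrategy": "latest",
}
```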
Ending Timestamp Offset Note

If Kafka doesn't return the matched offset, the offset will be set to latest. `endingOffsetsByTimestamp` takes precedence over `endingOffsets`.
Source code in src/sdk/python/rtdip_sdk/pipelines/sources/spark/kafka.py
system_type()
staticmethod
Attributes:

| Name | Type | Description |
|---|---|---|
| SystemType | Environment | Requires PYSPARK |
Source code in src/sdk/python/rtdip_sdk/pipelines/sources/spark/kafka.py
read_batch()
Reads batch data from Kafka.
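The returned DataFrame uses the standard Spark Kafka schema, with binary `key` and `value` columns plus metadata such as `topic` and `offset`. A common follow-up step, sketched here, is casting those columns to strings:

```python
# Sketch: cast the binary key/value columns of the Kafka batch DataFrame.
df = kafka_source.read_batch()
df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "topic", "offset").show()
```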
Source code in src/sdk/python/rtdip_sdk/pipelines/sources/spark/kafka.py
read_stream()
Reads streaming data from Kafka.
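The result is a streaming DataFrame, so it must be handed to a streaming sink to run. A minimal sketch using Spark's console sink; the checkpoint path is a placeholder:

```python
# Sketch: consume the streaming DataFrame with a console sink.
df = kafka_source.read_stream()
(
    df.selectExpr("CAST(value AS STRING)")
    .writeStream.format("console")
    .option("checkpointLocation", "/tmp/checkpoints/kafka-example")  # placeholder path
    .start()
)
```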
Source code in src/sdk/python/rtdip_sdk/pipelines/sources/spark/kafka.py