This is a Splunk Modular Input Add-On for indexing messages from a Kafka broker, or a cluster of brokers managed by Zookeeper. Kafka version 0.8.1.1 is used for the consumer and for testing this Modular Input.
https://kafka.apache.org/
Browse to Settings -> Data Inputs -> Kafka Messaging to add a new Input stanza via the UI. Alternatively, you can declare your inputs manually in inputs.conf, following the specification in README/inputs.conf.spec. The inputs.conf file should be placed in a local directory under an App or User context.

You require an activation key to use this App. Visit http://www.baboonbones.com/#activation to obtain a non-expiring key.
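As a sketch of the manual inputs.conf option mentioned above, a stanza follows the usual Splunk modular input layout shown below. The parameter names used here are assumptions made for the sake of the example; the authoritative list of settings is in README/inputs.conf.spec.

    # Hypothetical stanza for illustration only -- consult README/inputs.conf.spec
    # for the actual parameter names supported by this Modular Input.
    [kafka://my_kafka_input]
    topic_name = firehose
    zookeeper_connect_host = localhost
    zookeeper_connect_port = 2181
    group_id = splunk_kafka_consumer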
Any log entries/errors will be written to $SPLUNK_HOME/var/log/splunk/splunkd.log.
These are also searchable in Splunk: index=_internal error kafka.py
The default maximum heap size is 64MB. If you require a larger heap, you can alter this in $SPLUNK_HOME/etc/apps/kafkata/bin/kafka.py on line 95.
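For example, if kafka.py launches the JVM with a maximum heap flag such as -Xmx64m (an assumption about how the heap is set; check the file itself), raising the limit to 512MB would mean changing it to -Xmx512m.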
You can declare custom JVM System Properties when setting up new input stanzas. Note: these JVM System Properties will apply to the entire JVM context and to all stanzas you have set up.
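System properties take the standard JVM -Dkey=value form, for example -Dhttp.proxyHost=myproxy.example.com -Dhttp.proxyPort=8080 (the property names here are purely illustrative).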
The way in which the Modular Input processes the received Kafka messages is entirely pluggable, so you can supply your own custom implementation should you wish.
To do this, code an implementation of the com.splunk.modinput.kafka.AbstractMessageHandler class and package it as a jar.
Ensure that the necessary jars are in the $SPLUNK_HOME/etc/apps/kafkata/bin/lib directory.
If you don't need a custom handler, the default handler com.splunk.modinput.kafka.DefaultMessageHandler will be used.
This handler simply tries to convert the received byte array into a textual string for indexing in Splunk.
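The core job of any handler, default or custom, is to turn the raw Kafka payload (a byte array) into text for Splunk to index. The self-contained Java sketch below illustrates that kind of logic only; it deliberately does not extend the add-on's AbstractMessageHandler class, whose actual abstract method signatures should be taken from the jars shipped in bin/lib. The asKeyValue transformation is a purely hypothetical example of what a custom handler might choose to do.

    import java.nio.charset.StandardCharsets;

    // Illustrative sketch only: it shows the kind of byte[] -> String conversion
    // a message handler performs before the text is handed to Splunk for indexing.
    // It does NOT extend the add-on's real AbstractMessageHandler class; consult
    // com.splunk.modinput.kafka.AbstractMessageHandler in the shipped jars for the
    // actual methods a custom handler must implement.
    public class MessageHandlerSketch {

        // Roughly what the default handler does: treat the payload as UTF-8 text.
        static String asPlainText(byte[] payload) {
            return new String(payload, StandardCharsets.UTF_8);
        }

        // A hypothetical custom transformation: wrap the payload in a key="value"
        // pair so it is friendlier to Splunk search-time field extraction.
        static String asKeyValue(byte[] payload) {
            return "kafka_message=\"" + asPlainText(payload).replace("\"", "\\\"") + "\"";
        }

        public static void main(String[] args) {
            byte[] payload = "hello from kafka".getBytes(StandardCharsets.UTF_8);
            System.out.println(asPlainText(payload)); // hello from kafka
            System.out.println(asKeyValue(payload));  // kafka_message="hello from kafka"
        }
    }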
This project was initiated by Damien Dallimore, damien@baboonbones.com