When I was an employee at Splunk, I was fortunate enough to be given the opportunity to work on some innovative military projects.
One such project was the Smart Soldier App, with contributions from Justin Boucher (a US Army Veteran) and Ramik Chopra.
This App is essentially a data correlation narrative that aggregates data from multiple sources to enable more effective, targeted, and expedited triage for soldiers in the field, and to give commanders real-time insight into the condition and location of soldiers and resources in the battle theatre:
- Soldier and medic geolocation data and details
- Soldier real-time health and condition metrics
- Soldier historical medical/patient information
- Soldier "smart suit" condition
- Geomapping-based visualizations to determine the nearest medic and best treatment facility for an injured soldier
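To make the nearest-medic idea concrete, here is a minimal Python sketch of the underlying calculation: great-circle distance over geolocation events. The record layout and field names are invented for illustration; the App itself does this with Splunk searches and geomapping visualizations rather than standalone code.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_medic(soldier, medics):
    """Return the medic record closest to the injured soldier."""
    return min(
        medics,
        key=lambda m: haversine_km(soldier["lat"], soldier["lon"], m["lat"], m["lon"]),
    )

# Hypothetical geolocation events, shaped like they might be after indexing
soldier = {"id": "s-42", "lat": 34.05, "lon": -118.25}
medics = [
    {"id": "m-1", "lat": 34.10, "lon": -118.30},
    {"id": "m-2", "lat": 34.06, "lon": -118.24},
]
print(nearest_medic(soldier, medics)["id"])  # m-2 is the closer medic
```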
But why restrict this to soldiers on the battlefield? You could take this use case as inspiration and apply it to any kind of first responder scenario. The possibilities in your data are limitless, so do reach out to us if you have any ideas you want to collaborate on.
You can find out more about the Smart Soldier App in the links below:
In a world of text log files and JSON over HTTP, do not forget about the absolute treasure trove of incredibly valuable binary data that can be captured and preprocessed.
To name just a few examples:
- proprietary industry protocols, such as MATIP in aviation or ISO8583 in payments processing
- media files: images, audio, and video
- binary application dumps
A few years ago I wrote a really cool free and open source Splunk App called Protocol Data Inputs (PDI), in use by many customers today, that allows you to capture any data (binary or text) and then preprocess it for textual indexing in Splunk.
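To illustrate what "preprocessing binary data for textual indexing" can mean, here is a minimal Python sketch. The record layout is entirely invented, and PDI's real preprocessing handlers are not this code; the point is simply that a fixed binary layout can be unpacked and re-emitted as searchable text.

```python
import json
import struct

def preprocess_binary(record: bytes) -> str:
    """Decode a hypothetical fixed-layout binary record into indexable text.

    Assumed layout (invented for this example): 4-byte big-endian device id,
    4-byte big-endian epoch timestamp, 8-byte big-endian float reading.
    """
    device_id, timestamp, reading = struct.unpack(">IId", record)
    # Emit JSON so the event is trivially searchable once indexed
    return json.dumps({"device_id": device_id, "ts": timestamp, "reading": reading})

# Build a sample binary record and run it through the preprocessor
raw = struct.pack(">IId", 7, 1700000000, 21.5)
print(preprocess_binary(raw))  # {"device_id": 7, "ts": 1700000000, "reading": 21.5}
```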
Check out some of the content in these blogs and presentations; I hope they give you some inspiration to start looking around for binary data of your own to unleash!
What do you think the future experience of interacting with your data is going to be?
Is it going to be logging in by way of a user interface and then using your mouse/keyboard/gestures to view and interact with something on a display panel, or is it going to be more like simply talking with another person?
Introducing the “Talk to Splunk with Amazon Alexa” App
This is a Splunk App that enables your Splunk instance to interface with Amazon Alexa by way of a custom Alexa skill, providing a completely voice-based natural language interface for Splunk.
You can then use an Alexa device, such as Amazon's Echo, Tap, or Dot, or another third-party hardware device, to tell or ask Splunk anything you want.
- Get answers to questions based on Splunk searches
- Ask for information, such as search command descriptions
- Return static responses and audio file snippets
- Developer extension hooks to plug in ANY custom voice-driven requests and actions you want
The App also allows you to train your Splunk instance on the conversational vocabulary for your specific use case.
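The core pattern of a voice interface over searches can be sketched very simply: map a recognized spoken intent to the Splunk search that answers it. The intent names and search strings below are invented for illustration, and a real skill would also dispatch the search via Splunk's REST API rather than just return it.

```python
# Hypothetical intent-to-search mapping; these names are examples, not the
# App's actual configuration.
INTENT_SEARCHES = {
    "ErrorCountIntent": "search index={index} sourcetype=syslog error | stats count",
    "TopHostsIntent": "search index={index} | top limit=5 host",
}

def handle_intent(intent_name: str, index: str = "main") -> str:
    """Resolve a recognized voice intent to the search Splunk should run."""
    template = INTENT_SEARCHES.get(intent_name)
    if template is None:
        # Fallback spoken response for vocabulary the skill was not trained on
        return "Sorry, I don't know how to answer that yet."
    return template.format(index=index)

print(handle_intent("ErrorCountIntent"))
```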
The ultimate vision I foresee is a future where you can do away with your keyboard, mouse, monitor, and login prompt entirely.
Even right now there are use cases where having to look at a monitor or operate an input device is simply counterproductive, infeasible, or unsafe. Industrial operating environments immediately come to mind.
You should be able to be transparently and dynamically authenticated based on your voice signature and then simply converse with your data as you would with another person, asking questions or requesting that some action be performed.
This app is a step in the direction of this vision.