I recently had the great pleasure of talking with my good mate Justin Vaughn-Brown over at Sendachi (edit: previously Contino, now Contino again), a smart bunch of people focused on DevOps, headed up by the excellent Benjamin Wooten, who founded Contino UK. Justin and I had a great chat about Splunk, and the role of machine data in DevOps:
Due to the typical nature of applications and systems used in a DevOps approach – often automated, abstracted, multi-device/multi-channel, containerized and highly distributed through cloud and other service providers – the only constant for measuring velocity, quality, and impact of applications is the data generated by the application components themselves, and the systems that support them.
So increasingly DevOps professionals are building logging into applications, running traces and commands remotely, capturing web server and other logs, polling APIs and service providers, intercepting wire data, and more, before ingesting the output into machine data systems like Splunk that can analyze and correlate this diverse machine data to make it meaningful.
Check out the whole discussion on Sendachi’s blog, at DevOps & the value of Data: Splunk’s Andi Mann interviewed – Sendachi.