Splunk Interview Questions & Answers

Criteria        | Splunk                                             | Spark
Deployment area | Collecting large amounts of machine-generated data | Iterative applications & in-memory processing
Nature of tool  | Proprietary                                        | Open source
Working mode    | Streaming mode                                     | Both streaming and batch mode

Splunk is like Google for your machine data. It is a software engine that can be used for searching, visualizing, monitoring, reporting, etc. on your enterprise data. Splunk takes valuable machine data and turns it into powerful operational intelligence by providing real-time insight into your data through charts, alerts, reports, etc.

Below are common port numbers used by Splunk; however, you can change them if required.

Service | Port number used
Splunk Web port: 8000
Splunk management port: 8089
Splunk indexing port: 9997
Splunk index replication port: 8080
Splunk network port: 514 (used to get data in from a network port, i.e. UDP data)
KV store: 8191

Below are the components of Splunk:

1) Search head – provides the GUI for searching

2) Indexer – indexes machine data

3) Forwarder – forwards logs to the indexer

4) Deployment server – manages Splunk components in a distributed environment

The indexer is the Splunk Enterprise component that creates and manages indexes. The primary functions of an indexer are:

  • Indexing incoming data.
  • Searching the indexed data.

There are two types of Splunk forwarders, as below:

a) Universal forwarder (UF) – a lightweight Splunk agent installed on a non-Splunk system to gather data locally; it cannot parse or index data.

b) Heavy forwarder (HF) – a full instance of Splunk with advanced functionality. It generally works as a remote collector, intermediate forwarder, and possible data filter. Because heavy forwarders parse data, they are not recommended for production systems.

Below are the Splunk license types:

  • Enterprise license
  • Free license
  • Forwarder license
  • Beta license
  • Licenses for search heads (for distributed search)
  • Licenses for cluster members (for index replication)

A Splunk app is a container/directory of configurations, searches, dashboards, etc. in Splunk.

Splunk Free lacks these features:

  • authentication and scheduled searches/alerting
  • distributed search
  • forwarding over TCP/HTTP (to non-Splunk systems)
  • deployment management

The license slave will start a 24-hour timer, after which search will be blocked on the license slave (though indexing continues). Users will not be able to search data on that slave until it can reach the license master again.

The summary index is the default summary index (the index that Splunk Enterprise uses if you do not indicate another one).

If you plan to run a variety of summary index reports, you may need to create additional summary indexes.

Splunk DB Connect is a generic SQL database plugin for Splunk that allows you to easily integrate database information with Splunk queries and reports.

There are multiple ways we can extract an IP address from logs. Below are a few examples.

Regular expressions for extracting an IP address:

rex field=_raw  "(?<ip_address>\d+\.\d+\.\d+\.\d+)"


rex field=_raw  "(?<ip_address>([0-9]{1,3}[\.]){3}[0-9]{1,3})"
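The same two patterns can be tried outside Splunk. The sketch below applies them in plain Python; note that Python spells the named group `(?P<name>...)` while SPL's rex uses `(?<name>...)`, and the sample log line is made up for illustration:

```python
import re

# The two extraction patterns from the rex examples above,
# rewritten with Python's named-group syntax.
SIMPLE = re.compile(r"(?P<ip_address>\d+\.\d+\.\d+\.\d+)")
BOUNDED = re.compile(r"(?P<ip_address>(?:[0-9]{1,3}\.){3}[0-9]{1,3})")

def extract_ip(raw_event, pattern=BOUNDED):
    """Return the first IP-like substring in a raw event, or None."""
    m = pattern.search(raw_event)
    return m.group("ip_address") if m else None

log = '10.1.2.3 - - [01/Jan/2024] "GET / HTTP/1.1" 200'
print(extract_ip(log))           # -> 10.1.2.3
print(extract_ip("no ip here"))  # -> None
```

Both patterns match anything shaped like four dot-separated number groups; neither validates that each octet is 0–255.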

The transaction command is most useful in two specific cases:

  • When a unique id (from one or more fields) alone is not sufficient to discriminate between two transactions. This is the case when the identifier is reused, for example web sessions identified by cookie/client IP. Here, time spans or pauses are also used to segment the data into transactions. In other cases where an identifier is reused, say in DHCP logs, a particular message may identify the beginning or end of a transaction.
  • When it is desirable to see the raw text of the events combined, rather than an analysis of the constituent fields of the events.

In other cases, it is usually better to use stats, as its performance is higher, especially in a distributed search environment; often there is a unique id, and stats can be used.
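The first case above can be sketched in plain Python. This is a conceptual model of what transaction's maxpause option does, not Splunk's implementation; the events and field values are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical events: (timestamp, client_ip). The same IP appears in
# two separate sessions, so the IP alone cannot split them -- a time
# gap (maxpause) is needed, which is what transaction adds over stats.
events = [
    (datetime(2024, 1, 1, 10, 0), "10.0.0.1"),
    (datetime(2024, 1, 1, 10, 1), "10.0.0.1"),
    (datetime(2024, 1, 1, 12, 0), "10.0.0.1"),  # same IP, new session
]

def group_transactions(events, maxpause=timedelta(minutes=30)):
    """Group time-sorted events into transactions per id, starting a
    new transaction whenever the id changes or the gap between
    consecutive events exceeds maxpause."""
    transactions, current = [], []
    for ts, key in sorted(events):
        if current and (key != current[-1][1] or ts - current[-1][0] > maxpause):
            transactions.append(current)
            current = []
        current.append((ts, key))
    if current:
        transactions.append(current)
    return transactions

print(len(group_transactions(events)))  # -> 2
```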

The answer to this question can be very wide, but basically the interviewer would be looking for the following keywords:

  • Check splunkd.log for any errors
  • Check for server performance issues, i.e. CPU/memory usage, disk I/O, etc.
  • Install the SOS (Splunk on Splunk) app and check for warnings and errors in its dashboards
  • Check the number of saved searches currently running and their system resource consumption
  • Install Firebug, a Firefox extension. After it is installed and enabled, log in to Splunk (using Firefox), open Firebug's panels, and switch to the 'Net' panel (you will have to enable it). The Net panel will show you the HTTP requests and responses along with the time spent in each. This will quickly show you which requests are hanging Splunk for a few seconds and which are blameless.

Splunk places indexed data in directories called "buckets". A bucket is physically a directory containing events from a certain period.

A bucket moves through several stages as it ages:

  • Hot – Contains newly indexed data. Open for writing. There are one or more hot buckets per index.
  • Warm – Data rolled from hot. There are many warm buckets.
  • Cold – Data rolled from warm. There are many cold buckets.
  • Frozen – Data rolled from cold. The indexer deletes frozen data by default, but you can also archive it. Archived data can later be thawed. (Data in frozen buckets is not searchable.)

By default, your buckets are located in $SPLUNK_HOME/var/lib/splunk/defaultdb/db. You should see the hot-db there, along with any warm buckets you have. By default, Splunk sets the maximum bucket size to 10 GB on 64-bit systems and 750 MB on 32-bit systems.


The stats command generates summary statistics of all existing fields in your search results and saves them as values in new fields.

eventstats is similar to the stats command, except that the aggregation results are added inline to each event, and only if the aggregation is pertinent to that event. eventstats computes the requested statistics like stats does, but aggregates them back onto the original raw events.
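The difference can be sketched in plain Python (a conceptual model with hypothetical events, not Splunk code): stats collapses events into one summary row per group, while eventstats computes the same aggregate and attaches it to every original event.

```python
# Hypothetical events with a grouping field and a numeric field.
events = [
    {"host": "web1", "bytes": 100},
    {"host": "web1", "bytes": 300},
    {"host": "db1",  "bytes": 50},
]

def stats_avg(events, by, field):
    """Like `stats avg(field) by group`: one summary row per group,
    original events discarded."""
    groups = {}
    for e in events:
        groups.setdefault(e[by], []).append(e[field])
    return {k: sum(v) / len(v) for k, v in groups.items()}

def eventstats_avg(events, by, field):
    """Like `eventstats avg(field) by group`: same aggregate, but
    added inline to each original event."""
    avgs = stats_avg(events, by, field)
    return [dict(e, avg=avgs[e[by]]) for e in events]

print(stats_avg(events, "host", "bytes"))        # -> {'web1': 200.0, 'db1': 50.0}
print(len(eventstats_avg(events, "host", "bytes")))  # -> 3 (events kept)
```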

Logstash, Loggly, LogLogic, Sumo Logic, etc.

How much data you can index per calendar day.

Midnight to midnight on the clock of the license master.

They are included with Splunk; there is no need to purchase them separately.

$SPLUNK_HOME/bin/splunk enable boot-start

$SPLUNK_HOME/bin/splunk disable boot-start

Sourcetype is Splunk's way of identifying the format of the data.

To reset your password, log in to the server on which Splunk is installed, rename the passwd file at the below location, and then restart Splunk. After the restart you can log in using the default username: admin, password: changeme.


Set the value OFFENSIVE=Less in splunk-launch.conf.


Delete the following file on the Splunk server:


splunk btool is a command-line tool that helps us troubleshoot configuration file issues, or just see what values are being used by your Splunk Enterprise installation in the existing environment.

Basically, both contain preconfigured configurations, reports, etc., but a Splunk add-on does not have a visual app, while a Splunk app does have a preconfigured visual app.

File precedence is as follows :

System local directory — highest priority

App local directories

App default directories

System default directory — lowest priority

It is a directory or index at the default location /opt/splunk/var/lib/splunk. It contains seek pointers and CRCs for the files you are indexing, so splunkd can tell whether it has read them already. We can access it through the GUI by searching for "index=_thefishbucket".

This can be done by defining a regex to match the necessary event(s) and sending everything else to nullQueue. Here is a basic example that will drop everything except events that contain the string "login". In props.conf:



# Transforms must be applied in this order
# to make sure events are dropped on the
# floor prior to making their way to the
# index processor
TRANSFORMS-set = setnull,setparsing



In transforms.conf


[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = login
DEST_KEY = queue
FORMAT = indexQueue
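A minimal Python sketch of the routing behavior these two transforms produce (a conceptual model, not Splunk's actual pipeline code): the transforms listed in TRANSFORMS-set run in order, and the last matching transform's DEST_KEY write determines the event's destination queue.

```python
import re

# (name, regex, destination queue) in the order they are applied.
transforms = [
    ("setnull",    re.compile(r"."),     "nullQueue"),   # matches everything
    ("setparsing", re.compile(r"login"), "indexQueue"),  # keeps login events
]

def route(event):
    """Return the final queue for an event after all transforms run."""
    queue = "indexQueue"  # default destination
    for _name, regex, dest in transforms:
        if regex.search(event):
            queue = dest  # a later matching transform overrides an earlier one
    return queue

print(route("failed login for admin"))  # -> indexQueue
print(route("heartbeat ok"))            # -> nullQueue
```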


By watching data from Splunk's metrics log in real time:

index="_internal" source="*metrics.log" group="per_sourcetype_thruput" series="<your_sourcetype_here>" | eval MB=kb/1024 | chart sum(MB)

or, to watch everything happening, split by sourcetype:

index="_internal" source="*metrics.log" group="per_sourcetype_thruput" | eval MB=kb/1024 | chart sum(MB) avg(eps) over series

And if you're having trouble with a data input and you want a way to troubleshoot it, particularly if your whitelist/blacklist rules aren't working the way you expect, go to this URL:



These are described in more detail on Splunk community.

To do this in Splunk Enterprise 6.0, use ui-prefs.conf. If you set the value in $SPLUNK_HOME/etc/system/local, all your users should see it as the default setting. For example, if your $SPLUNK_HOME/etc/system/local/ui-prefs.conf file includes:

[search]
dispatch.earliest_time = @d
dispatch.latest_time = now

then the default time range that all users will see in the search app will be "today".

The configuration file reference for ui-prefs.conf is here: http://docs.splunk.com/Documentation/Splunk/latest/Admin/Ui-prefsconf

$SPLUNK_HOME/var/run/splunk/dispatch contains a directory for each search that is running or has completed. For example, a directory named 1434308943.358 will contain a CSV file of its search results, a search.log with details about the search execution, and other artifacts. Using the defaults (which you can override in limits.conf), these directories will be deleted 10 minutes after the search completes – unless the user saves the search results, in which case the results will be deleted after 7 days.

Both are features provided by Splunk for high availability of the search head tier in case any one search head goes down. Search head clustering is the newer feature, and search head pooling will be removed in upcoming versions. A search head cluster is managed by a captain, and the captain controls its members. Search head clustering is more reliable and efficient than search head pooling.

Below are the steps to add folder access logs to Splunk:

  1. Enable Object Access Audit through group policy on the Windows machine on which the folder is located
  2. Enable auditing on the specific folder for which you want to monitor logs
  3. Install the Splunk universal forwarder on the Windows machine
  4. Configure the universal forwarder to send security logs to the Splunk indexer

A license violation warning means Splunk has indexed more data than our purchased license quota allows. We have to identify which index/sourcetype has recently received more data than its usual daily volume. On the Splunk license master, we can check the pool-wise available quota and identify the pool for which the violation is occurring. Once we know the pool that is receiving more data, we have to identify the top sourcetype receiving more data than usual. Once the sourcetype is identified, we have to find the source machine that is sending the huge number of logs, determine the root cause, and troubleshoot accordingly.

The MapReduce algorithm is the secret behind Splunk's fast data-searching speed. It is an algorithm typically used for batch-based large-scale parallelization. It is inspired by functional programming's map() and reduce() functions.
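The pattern behind the name can be shown with Python's own map() and reduce() (a toy illustration with made-up events, not Splunk internals): the map step transforms each record independently, which is what makes it parallelizable, and the reduce step combines the mapped results into one answer.

```python
from functools import reduce

# Hypothetical raw events to count errors in.
raw_events = ["ERROR disk full", "INFO started", "ERROR timeout"]

# Map: score each event independently (trivially parallelizable).
mapped = map(lambda e: 1 if e.startswith("ERROR") else 0, raw_events)

# Reduce: combine the per-event results into a single aggregate.
error_count = reduce(lambda a, b: a + b, mapped, 0)

print(error_count)  # -> 2
```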

At the indexer, Splunk keeps track of indexed events in a directory called the fishbucket (default location /opt/splunk/var/lib/splunk). It contains seek pointers and CRCs for the files you are indexing, so splunkd can tell whether it has read them already.

Splunk SDKs are designed to allow you to develop applications from the ground up; they do not require Splunk Web or any components from the Splunk App Framework. They are licensed to you separately from the Splunk software and do not alter the Splunk software. The Splunk App Framework resides within Splunk's web server and permits you to customize the Splunk Web UI that comes with the product and develop Splunk apps using the Splunk web server. It is an important part of the features and functionality of the Splunk software, and it does not license users to modify anything in the Splunk software.

The inputlookup command returns the whole lookup table as search results.

For example,

…| inputlookup lookuptablename

returns a search result for every row in the lookup table, which has two fields:

  • host
  • machine_type

The outputlookup command outputs the current search results to a lookup table on disk.

For example,

…| outputlookup lookup.csv

saves all the results into lookup.csv.