Detection: Detect Spike in S3 Bucket deletion

EXPERIMENTAL DETECTION

This detection status is set to experimental. The Splunk Threat Research team has not yet fully tested, simulated, or built comprehensive datasets for this detection. As such, this analytic is not officially supported. If you have any questions or concerns, please reach out to us at research@splunk.com.

Description

The following analytic identifies a spike in API activity related to the deletion of S3 buckets in your AWS environment. It leverages AWS CloudTrail logs to detect anomalies by comparing current deletion activity against a historical baseline. This activity is significant as unusual spikes in S3 bucket deletions could indicate malicious actions such as data exfiltration or unauthorized data destruction. If confirmed malicious, this could lead to significant data loss, disruption of services, and potential exposure of sensitive information. Immediate investigation is required to determine the legitimacy of the activity.

`cloudtrail` eventName=DeleteBucket [search `cloudtrail` eventName=DeleteBucket
| spath output=arn path=userIdentity.arn
| stats count as apiCalls by arn
| inputlookup s3_deletion_baseline append=t
| fields - latestCount
| stats values(*) as * by arn
| rename apiCalls as latestCount
| eval newAvgApiCalls=avgApiCalls + (latestCount-avgApiCalls)/720
| eval newStdevApiCalls=sqrt(((pow(stdevApiCalls, 2)*719 + (latestCount-newAvgApiCalls)*(latestCount-avgApiCalls))/720))
| eval avgApiCalls=coalesce(newAvgApiCalls, avgApiCalls), stdevApiCalls=coalesce(newStdevApiCalls, stdevApiCalls), numDataPoints=if(isnull(latestCount), numDataPoints, numDataPoints+1)
| table arn, latestCount, numDataPoints, avgApiCalls, stdevApiCalls
| outputlookup s3_deletion_baseline
| eval dataPointThreshold = 15, deviationThreshold = 3
| eval isSpike=if((latestCount > avgApiCalls+deviationThreshold*stdevApiCalls) AND numDataPoints > dataPointThreshold, 1, 0)
| where isSpike=1
| rename arn as userIdentity.arn
| table userIdentity.arn]
| spath output=user userIdentity.arn
| spath output=bucketName path=requestParameters.bucketName
| stats values(bucketName) as bucketName, count as numberOfApiCalls, dc(eventName) as uniqueApisCalled by user
| `detect_spike_in_s3_bucket_deletion_filter`
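To unpack the baseline math in the subsearch: the two eval statements maintain a running mean and standard deviation of DeleteBucket counts per ARN over a fixed window of 720 samples (with the default hourly schedule shown below, roughly 30 days). Writing x for the latest count:

newAvgApiCalls   = avgApiCalls + (x - avgApiCalls) / 720
newStdevApiCalls = sqrt((719 * stdevApiCalls^2 + (x - newAvgApiCalls) * (x - avgApiCalls)) / 720)

This is a fixed-window variant of Welford's online update: because the divisor is pinned at 720 rather than the true sample count, older observations decay gradually instead of being tracked exactly. An ARN is flagged once it has more than 15 data points and its latest count exceeds avgApiCalls + 3 * stdevApiCalls.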

Data Source

Name: AWS CloudTrail
Platform: AWS
Sourcetype: aws:cloudtrail
Source: aws_cloudtrail
Supported App: N/A

Macros Used

cloudtrail: sourcetype=aws:cloudtrail
detect_spike_in_s3_bucket_deletion_filter: search *

detect_spike_in_s3_bucket_deletion_filter is an empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL.
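For example, to suppress results from a known automation principal, you could override the filter macro in a local macros.conf; the ARN below is a hypothetical placeholder:

[detect_spike_in_s3_bucket_deletion_filter]
definition = search user!="arn:aws:iam::111111111111:user/terraform-ci"

Because the macro is applied after the final stats command, the fields available for filtering are user, bucketName, numberOfApiCalls, and uniqueApisCalled.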

Annotations

MITRE ATT&CK
ID: T1530
Technique: Data from Cloud Storage
Tactic: Collection

Kill Chain Phases: Exploitation
NIST: DE.AE
CIS: CIS 13
Threat Actors: Fox Kitten, Scattered Spider

Default Configuration

This detection is configured by default in Splunk Enterprise Security to run with the following settings:

Disabled: true
Cron Schedule: 0 * * * *
Earliest Time: -70m@m
Latest Time: -10m@m
Schedule Window: auto
Creates Risk Event: True

This configuration applies to all detections of type anomaly. These detections will use Risk Based Alerting.
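For reference, these settings map onto a savedsearches.conf stanza along the following lines; the stanza name is illustrative, and Enterprise Security manages the actual configuration:

[ESCU - Detect Spike in S3 Bucket deletion - Rule]
disabled = 1
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -70m@m
dispatch.latest_time = -10m@m
schedule_window = auto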

Implementation

You must install the AWS App for Splunk (version 5.1.0 or later) and the Splunk Add-on for AWS (version 4.4.0 or later), then configure your AWS CloudTrail inputs. You can modify dataPointThreshold and deviationThreshold to better fit your environment. The dataPointThreshold variable is the minimum number of data points required before the baseline is considered statistically significant enough to evaluate a spike. The deviationThreshold variable is the number of standard deviations above the mean that the latest count must be to be considered a spike. This search works best when you run the "Baseline of S3 Bucket deletion activity by ARN" support search once to create a baseline of previously seen S3 bucket-deletion activity.
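For instance, to require a longer baseline and a larger deviation before alerting, you could change the threshold line inside the subsearch; the values here are illustrative, not recommendations:

| eval dataPointThreshold = 30, deviationThreshold = 4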

Known False Positives

The false positive rate will vary with the values of dataPointThreshold and deviationThreshold. Adjust them to suit your environment.

Associated Analytic Story

Risk Based Analytics (RBA)

Risk Message: tbd
Risk Score: 25
Impact: 50
Confidence: 50

The Risk Score is calculated by the following formula: Risk Score = (Impact * Confidence) / 100. Initial Confidence and Impact are set by the analytic author.
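With the defaults above, that works out to Risk Score = (50 * 50) / 100 = 25.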

Detection Testing

Test Type     Status           Dataset  Source  Sourcetype
Validation    Not Applicable   N/A      N/A     N/A
Unit          ❌ Failing       N/A      N/A     N/A
Integration   ❌ Failing       N/A      N/A     N/A

Replay any dataset to Splunk Enterprise by using our replay.py tool or the UI. Alternatively, you can replay a dataset into a Splunk Attack Range.


Source: GitHub | Version: 2