Cyber threats are a growing concern for organizations worldwide. These threats take many forms, including malware, phishing, and denial-of-service (DoS) attacks, compromising sensitive information and disrupting operations. The increasing sophistication and frequency of these attacks make it imperative for organizations to adopt advanced security measures. Traditional threat detection methods often fall short because they cannot adapt to new and evolving threats. This is where deep learning models come into play.
Deep learning models can analyze vast amounts of data and identify patterns that may not be immediately obvious to human analysts. By leveraging these models, organizations can proactively detect and mitigate cyber threats, safeguarding their sensitive information and ensuring operational continuity.
As a cybersecurity analyst, your job is to identify and mitigate these threats. In this project, you will design and implement a deep learning model to detect cyber threats. The BETH dataset simulates real-world logs, providing a rich source of information for training and testing your model. The data has already undergone preprocessing, and it includes a target label, sus_label, indicating whether an event is malicious (1) or benign (0).
By successfully developing this model, you will contribute to enhancing cybersecurity measures and protecting organizations from potentially devastating cyber attacks.
The Data
Column | Description
---|---
processId | Unique identifier for the process that generated the event (int64)
threadId | ID of the thread spawning the log (int64)
parentProcessId | ID of the parent process that spawned this log (int64)
userId | ID of the user spawning the log (int64)
mountNamespace | Mounting restrictions the process log works within (int64)
argsNum | Number of arguments passed to the event (int64)
returnValue | Value returned from the event log (usually 0) (int64)
sus_label | Binary label: 1 if the event is suspicious, 0 if benign (int64)
More information on the dataset: the BETH dataset.
# Make sure to run this cell to use torchmetrics. If you cannot install torchmetrics with pip, you can use sklearn instead.
!pip install torchmetrics
# Import required libraries
import pandas as pd
from sklearn.preprocessing import StandardScaler
import torch
import torch.nn as nn
import torch.nn.functional as functional
from torch.utils.data import DataLoader, TensorDataset
import torch.optim as optim
from torchmetrics import Accuracy
# from sklearn.metrics import accuracy_score # uncomment to use sklearn
# Load preprocessed data
train_df = pd.read_csv('labelled_train.csv')
test_df = pd.read_csv('labelled_test.csv')
val_df = pd.read_csv('labelled_validation.csv')
# View the first 5 rows of the training set
train_df.head()
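Summary statistics make the scale differences explicit (an optional quick look; any equivalent inspection works):
# Compare per-column means and standard deviations
train_df.describe().loc[['mean', 'std']]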
The features have very different scales, so standardization is needed:
# Split each set into features (all columns except the first and the label) and labels
train_features = train_df[train_df.columns[1:-1]]
train_labels = train_df[train_df.columns[-1]].to_numpy()
test_features = test_df[test_df.columns[1:-1]]
test_labels = test_df[test_df.columns[-1]].to_numpy()
val_features = val_df[val_df.columns[1:-1]]
val_labels = val_df[val_df.columns[-1]].to_numpy()
# Fit the scaler on the training set only; apply the same transform to test and validation
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
test_features = scaler.transform(test_features)
val_features = scaler.transform(val_features)
val_features[:5]
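After scaling, each training-set column should have mean ≈ 0 and standard deviation ≈ 1 (a quick optional check; test and validation will only be approximately standardized, since the scaler was fit on the training set):
# Confirm the training features are standardized per column
print(train_features.mean(axis=0).round(3))
print(train_features.std(axis=0).round(3))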
Convert the data to tensors and build DataLoaders. Given the small size of the dataset, the loaders won't actually be needed later: the whole training set can be passed through the model as a single batch.
# Transform features and labels to torch tensors
train_features_tensor = torch.tensor(train_features, dtype=torch.float32)
train_labels_tensor = torch.tensor(train_labels, dtype=torch.float32).view(-1, 1)
test_features_tensor = torch.tensor(test_features, dtype=torch.float32)
test_labels_tensor = torch.tensor(test_labels, dtype=torch.float32).view(-1, 1)
val_features_tensor = torch.tensor(val_features, dtype=torch.float32)
val_labels_tensor = torch.tensor(val_labels, dtype=torch.float32).view(-1, 1)
# Wrap the tensors in TensorDatasets and DataLoaders (kept for reference;
# unused later because of the simplicity of the data)
dataset_train = TensorDataset(train_features_tensor, train_labels_tensor)
dataset_test = TensorDataset(test_features_tensor, test_labels_tensor)
dataset_val = TensorDataset(val_features_tensor, val_labels_tensor)

train_dataloader = DataLoader(dataset_train, shuffle=True, batch_size=1)
test_dataloader = DataLoader(dataset_test, shuffle=True, batch_size=1)
val_dataloader = DataLoader(dataset_val, shuffle=True, batch_size=1)
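A quick sanity check on the loaders (a sketch, assuming the cells above have run): pull one batch and confirm the shapes.
# Inspect a single batch from the training DataLoader
features_batch, labels_batch = next(iter(train_dataloader))
print(features_batch.shape, labels_batch.shape)  # torch.Size([1, num_features]), torch.Size([1, 1])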
Build a simple model. ReLU/ELU activation functions between the layers were tried but only slowed and destabilized training, so they were removed. Note that without nonlinearities the three stacked linear layers are mathematically equivalent to a single linear layer followed by a sigmoid; the check after the model definition illustrates this.
# Build the model: three stacked linear layers and a sigmoid output
num_features = train_features_tensor.shape[1]
model = nn.Sequential(
    nn.Linear(num_features, 256),
    nn.Linear(256, 128),
    nn.Linear(128, 1),
    nn.Sigmoid()
)
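Since there are no activations, we can verify numerically that the stack collapses to a single affine map (a sanity-check sketch, not part of the required solution):
# Compose the three weight matrices and compare with the stacked pre-sigmoid output
with torch.no_grad():
    W1, b1 = model[0].weight, model[0].bias
    W2, b2 = model[1].weight, model[1].bias
    W3, b3 = model[2].weight, model[2].bias
    W = W3 @ W2 @ W1              # combined weight, shape (1, num_features)
    b = W3 @ (W2 @ b1 + b2) + b3  # combined bias, shape (1,)
    x = train_features_tensor[:4]
    stacked = model[2](model[1](model[0](x)))  # output before the sigmoid
    collapsed = x @ W.T + b
    print(torch.allclose(stacked, collapsed, atol=1e-4))  # True up to float error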
On to the training loop: use binary cross-entropy as the loss function and Adam as the optimizer with a learning rate of 0.001.
# Train the model
# Define the loss function
criterion = nn.BCELoss()
# Define the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
num_epochs = 20

# Loop over the epochs, passing the whole training set as a single batch
for epoch in range(num_epochs):
    optimizer.zero_grad()                               # reset gradients
    prediction = model(train_features_tensor)           # forward pass
    loss = criterion(prediction, train_labels_tensor)   # compute loss
    loss.backward()                                     # compute gradients
    optimizer.step()                                    # update parameters
    print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}")

# Equivalent mini-batch version using the DataLoader (slower here, kept for reference):
# for epoch in range(num_epochs):
#     running_loss = 0.0
#     for feature, target in train_dataloader:
#         optimizer.zero_grad()
#         prediction = model(feature)
#         loss = criterion(prediction, target)
#         loss.backward()
#         optimizer.step()
#         running_loss += loss.item()
#     print(f"Epoch {epoch+1}, Loss: {running_loss / len(train_dataloader):.4f}")
Evaluate the performance on the test and validation sets:
# Estimate accuracy on the test and validation sets
accu = Accuracy(task='binary', threshold=0.5)

with torch.no_grad():
    test_preds = model(test_features_tensor)
    val_preds = model(val_features_tensor)

test_accuracy = accu(test_preds, test_labels_tensor).item()
print("Test accuracy:", test_accuracy)

val_accuracy = accu(val_preds, val_labels_tensor).item()
print("Validation accuracy:", val_accuracy)