Sowing Success: How Machine Learning Helps Farmers Select the Best Crops

Measuring essential soil metrics such as nitrogen, phosphorus, and potassium levels and pH value is an important part of assessing soil condition. However, doing so can be expensive and time-consuming, which can force farmers to prioritize which metrics to measure based on their budget constraints.

Farmers have various options when it comes to deciding which crop to plant each season. Their primary objective is to maximize the yield of their crops, taking into account different factors. One crucial factor that affects crop growth is the condition of the soil in the field, which can be assessed by measuring basic elements such as nitrogen and potassium levels. Each crop has an ideal soil condition that ensures optimal growth and maximum yield.

A farmer reached out to you, a machine learning expert, for assistance in selecting the best crop for their field. They've provided you with a dataset called soil_measures.csv, which contains:

  • "N": Nitrogen content ratio in the soil
  • "P": Phosphorous content ratio in the soil
  • "K": Potassium content ratio in the soil
  • "pH" value of the soil
  • "crop": categorical values that contain various crops (target variable).

Each row in this dataset represents various measures of the soil in a particular field. Based on these measurements, the crop specified in the "crop" column is the optimal choice for that field.

In this project, you will build multi-class classification models to predict the type of "crop" and identify the single most important feature for predictive performance.

# All required libraries are imported here for you.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics
import seaborn as sns
from sklearn.metrics import f1_score

# Load the dataset
crops = pd.read_csv("soil_measures.csv")

crops.head()
# Check for missing values
crops.isnull().sum()
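
# Hedged sketch: no missing values are expected in this dataset, but if any
# had appeared, one simple option (assuming dropped rows are acceptable)
# would be to remove them before modelling.
if crops.isnull().sum().sum() > 0:
    crops = crops.dropna()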
# Check for crop types
crops['crop'].unique()
# Check whether the classes are balanced (to decide whether to stratify the split)
print(crops["crop"].value_counts())
# Split the data, where x holds the feature variables and y is the target variable "crop"
x_train, x_test, y_train, y_test = train_test_split(crops[["N", "P", "K", "ph"]], crops["crop"], test_size=0.2, random_state=42)
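
# Hedged alternative sketch: if the value_counts above show roughly balanced
# classes, a plain random split is fine; for imbalanced classes, stratifying
# on the target preserves class proportions in both splits. The xs_/ys_
# variables below are hypothetical and unused by the rest of the analysis,
# so this sketch does not affect the results.
xs_train, xs_test, ys_train, ys_test = train_test_split(
    crops[["N", "P", "K", "ph"]], crops["crop"],
    test_size=0.2, random_state=42, stratify=crops["crop"]
)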

# Calculate the correlation matrix
crops_corr = crops[["N", "P", "K", "ph"]].corr()

# Create a heatmap using seaborn
sns.heatmap(crops_corr, annot=True)
plt.show()
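
# Hedged sketch: the same information as the heatmap, read programmatically.
# The 0.5 threshold is an illustrative assumption, not part of the brief.
features = ["N", "P", "K", "ph"]
for i, a in enumerate(features):
    for b in features[i + 1:]:
        r = crops_corr.loc[a, b]
        if abs(r) > 0.5:
            print(f"Strongly correlated pair: {a} and {b} (r = {r:.2f})")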
# Train a multinomial logistic regression on each feature in isolation to find
# the single most important feature for predictive performance; the feature
# with the highest weighted F1-score is the strongest predictor on its own.

# Store results in a dictionary
feature_scores = {}

for feature in ["N", "P", "K", "ph"]:
    log_reg = LogisticRegression(max_iter=2000, multi_class="multinomial")
    log_reg.fit(x_train[[feature]], y_train)
    y_pred = log_reg.predict(x_test[[feature]])
    f1 = f1_score(y_test, y_pred, average="weighted")
    feature_scores[feature] = f1
    print(f"F1-score for {feature}: {f1}")

# Get best feature and score
best_feature = max(feature_scores, key=feature_scores.get)
best_score = feature_scores[best_feature]

best_predictive_feature = {best_feature: best_score}
print("Best predictive feature:", best_predictive_feature)
# Based on these results, we drop "P": it has a comparatively low F1-score on
# its own and is strongly correlated with "K" in the heatmap above.
# Select the final features for the model
final_features = ["N", "K", "ph"]

# Split the data with the final features
X_train, X_test, y_train, y_test = train_test_split(
    crops[final_features],
    crops["crop"],
    test_size=0.2,
    random_state=42
)

# Train a new model on the final features and evaluate performance
log_reg = LogisticRegression(
    max_iter=2000,
    multi_class="multinomial"
)
log_reg.fit(X_train, y_train)

# Predict & evaluate
y_pred = log_reg.predict(X_test)
model_performance = f1_score(y_test, y_pred, average="weighted")
print("Model F1-score:", model_performance)
from sklearn.metrics import classification_report

# Optional: detailed per-crop precision, recall, and F1 breakdown
print("\nClassification Report:\n", classification_report(y_test, y_pred))