Sowing Success: How Machine Learning Helps Farmers Select the Best Crops
Measuring essential soil metrics such as nitrogen, phosphorus, and potassium levels and pH value is an important aspect of assessing soil condition. However, it can be an expensive and time-consuming process, which can lead farmers to prioritize which metrics to measure based on their budget constraints.
Farmers have various options when it comes to deciding which crop to plant each season. Their primary objective is to maximize the yield of their crops, taking into account different factors. One crucial factor that affects crop growth is the condition of the soil in the field, which can be assessed by measuring basic elements such as nitrogen and potassium levels. Each crop has an ideal soil condition that ensures optimal growth and maximum yield.
A farmer reached out to you, a machine learning expert, for assistance in selecting the best crop for their field. They've provided you with a dataset called soil_measures.csv, which contains:
- "N": Nitrogen content ratio in the soil
- "P": Phosphorus content ratio in the soil
- "K": Potassium content ratio in the soil
- "ph": pH value of the soil
- "crop": Categorical values containing various crops (the target variable)
Each row in this dataset represents the soil measurements for a particular field. Based on these measurements, the crop specified in the "crop" column is the optimal choice for that field.
In this project, you will build multi-class classification models to predict the type of "crop" and identify the single most important feature for predictive performance.
# All required libraries are imported here for you.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn import metrics
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.pipeline import Pipeline
# Load the dataset
crops = pd.read_csv("soil_measures.csv")
# Doing a quick scan of our data
crops.head()
# Checking nulls
crops.isna().sum()
# Checking the dataset dimensions
crops.shape
features = ["N", "P"]
for feature in features:
plt.figure(figsize=(10, 6))
sns.boxplot(x="crop", y=feature, data=crops)
plt.title(f'Comparison of {feature} with crop')
plt.xticks(rotation=90)
plt.show()
features = ["K", "ph"]
for feature in features:
plt.figure(figsize=(10, 6))
sns.boxplot(x="crop", y=feature, data=crops)
plt.title(f'Comparison of {feature} with crop')
plt.xticks(rotation=90)
plt.show()
Model Preprocessing
# The target column is categorical, so it must be encoded as integers.
df = crops.copy()
encoder = LabelEncoder()
df['crop'] = encoder.fit_transform(df['crop'])
df['crop'].unique()
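To check which integer the encoder assigned to each crop, you can inspect the fitted encoder's classes_ attribute; a quick sketch using the encoder defined above:
# Map each original crop label to its encoded integer
label_mapping = dict(zip(encoder.classes_, encoder.transform(encoder.classes_)))
print(label_mapping)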
# Checking for class imbalance
crops['crop'].value_counts()
# Splitting the data into features and target
X = df.drop('crop', axis=1)
y = df['crop']
# Stratified train/test split, then carve a small holdout set off the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y, test_size=0.20)
X_test, X_con, y_test, y_con = train_test_split(X_test, y_test, random_state=42, stratify=y_test, test_size=0.05)
# Define a stratified k-fold splitter so each fold preserves the class proportions
kf = StratifiedKFold(n_splits=6, shuffle=True, random_state=42)
# Define the scoring method
scorer = metrics.make_scorer(metrics.f1_score, average='weighted')
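Weighted F1 averages the per-class F1 scores, weighting each class by its support (number of true instances). A tiny illustrative example with made-up labels:
# Per-class F1: class 0 -> 0.8, class 1 -> 0.8, class 2 -> 1.0
# Weighted by supports 3, 2, 1: (3*0.8 + 2*0.8 + 1*1.0) / 6 ≈ 0.83
y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 2]
print(metrics.f1_score(y_true, y_pred, average='weighted'))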
# Create a pipeline that scales the features before fitting the model
steps = [
    ('scaler', StandardScaler()),
    ('model', LogisticRegression(multi_class='multinomial'))
]
pipeline = Pipeline(steps)
# Perform cross-validation on the training set and report the mean score
cross_val_scores = cross_val_score(pipeline, X_train, y_train, cv=kf, scoring=scorer)
print(f"Mean cross-validated weighted F1: {cross_val_scores.mean():.3f}")
# Fit the pipeline to the training data
pipeline.fit(X_train, y_train)
# Get the coefficients (one row per class, one column per feature)
feature_scores = pipeline.named_steps['model'].coef_
# Use absolute coefficient values as importance scores
importance_df = pd.DataFrame(
    abs(feature_scores),
    columns=X_train.columns
)
# Aggregate across classes (Logistic Regression is multi-class, so we take the mean across rows)
mean_importance = importance_df.mean(axis=0) # Mean importance per feature
# Identify the best predictive feature
best_feature_name = mean_importance.idxmax() # Feature with highest importance
best_feature_score = mean_importance.max() # Its corresponding score
# Store in the required dictionary format
best_predictive_feature = {best_feature_name: best_feature_score}
print(best_predictive_feature)
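Coefficient magnitudes are one lens on importance. As a cross-check, you could also fit one model per feature and compare test-set F1 scores directly; a sketch reusing the splits and imports defined above:
# Fit a single-feature model for each soil metric and score it on the test set
single_feature_scores = {}
for feature in X_train.columns:
    feature_pipeline = Pipeline([
        ('scaler', StandardScaler()),
        ('model', LogisticRegression(multi_class='multinomial'))
    ])
    feature_pipeline.fit(X_train[[feature]], y_train)
    y_pred = feature_pipeline.predict(X_test[[feature]])
    single_feature_scores[feature] = metrics.f1_score(y_test, y_pred, average='weighted')
print(single_feature_scores)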
def predict(x, pipeline=pipeline, encoder=encoder):
    """
    Predict the original labels for the given input data using a pre-fitted pipeline and label encoder.

    Parameters:
        x (DataFrame): The input data for which predictions are to be made.
        pipeline (Pipeline): The pre-fitted scikit-learn pipeline used for making predictions.
        encoder (LabelEncoder): The label encoder used to inverse transform the encoded labels.

    Returns:
        tuple: A tuple containing two arrays:
            - The first array contains the original labels corresponding to the input data.
            - The second array contains the encoded labels predicted by the pipeline.
    """
    # Predict encoded labels with the fitted pipeline
    y_pred_encoded = pipeline.predict(x)
    # Inverse transform the predicted labels back to the original crop names
    y_pred_original = encoder.inverse_transform(y_pred_encoded)
    return y_pred_original, y_pred_encoded
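A quick usage sketch, applying the helper to the held-out test set from the splits above:
# Predict on the test set and score the encoded predictions
pred_labels, pred_encoded = predict(X_test)
print(pred_labels[:5])
print(metrics.f1_score(y_test, pred_encoded, average='weighted'))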