
Sowing Success: How Machine Learning Helps Farmers Select the Best Crops

Measuring essential soil metrics such as nitrogen, phosphorus, and potassium levels, and pH value is an important aspect of assessing soil condition. However, it can be an expensive and time-consuming process, so farmers often have to prioritize which metrics to measure based on their budget.

Farmers have various options when it comes to deciding which crop to plant each season. Their primary objective is to maximize the yield of their crops, taking into account different factors. One crucial factor that affects crop growth is the condition of the soil in the field, which can be assessed by measuring basic elements such as nitrogen and potassium levels. Each crop has an ideal soil condition that ensures optimal growth and maximum yield.

A farmer has reached out to you, as a machine learning expert, for assistance in selecting the best crop for their field. They've provided you with a dataset called soil_measures.csv, which contains:

  • "N": Nitrogen content ratio in the soil
  • "P": Phosphorous content ratio in the soil
  • "K": Potassium content ratio in the soil
  • "pH" value of the soil
  • "crop": categorical values that contain various crops (target variable).

Each row in this dataset represents various measures of the soil in a particular field. Based on these measurements, the crop specified in the "crop" column is the optimal choice for that field.
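
Before modeling, it is worth running a quick sanity check of the data. The snippet below is a minimal, optional sketch; it only assumes that soil_measures.csv is in the working directory and that pandas is installed (the crops_check name is illustrative):

# Optional: quick sanity check of the dataset
import pandas as pd

crops_check = pd.read_csv("soil_measures.csv")

print(crops_check.shape)             # number of rows and columns
print(crops_check.isna().sum())      # missing values per column
print(crops_check["crop"].unique())  # the candidate crops (target classes)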

In this project, you will build multi-class classification models to predict the type of "crop" and identify the single most important feature for predictive performance.

# All required libraries are imported here for you.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn import metrics

# Load the dataset
crops = pd.read_csv("soil_measures.csv")

# Preview the first rows of the dataset
crops.head()

# Features and target variable
features = ['N', 'P', 'K', 'ph']  # Actual column names
target = 'crop'

# Initialize dictionary to store scores
scores = {}

# Loop through each feature and evaluate its performance
for feature in features:
    X = crops[[feature]]
    y = crops[target]
    
    # Split the data into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
    
    # Initialize and fit the model (max_iter raised to avoid convergence warnings)
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    
    # Predict and evaluate the model
    y_pred = model.predict(X_test)
    score = metrics.accuracy_score(y_test, y_pred)
    
    # Store the score in the dictionary
    scores[feature] = score

# Identify the best single predictive feature and its score
best_feature = max(scores, key=scores.get)
best_score = scores[best_feature]

# Store the result in the required {feature: score} format
best_predictive_feature = {best_feature: best_score}

best_predictive_feature
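
Since these single-feature scores come from a single train/test split, they can shift with the random seed. As an optional robustness check (not part of the original task), the same comparison can be repeated with cross-validation; a minimal sketch reusing the features and target defined above:

# Optional: cross-validated accuracy per single feature (5-fold CV instead of one split)
from sklearn.model_selection import cross_val_score

cv_scores = {}
for feature in features:
    cv_scores[feature] = cross_val_score(
        LogisticRegression(max_iter=1000), crops[[feature]], crops[target],
        cv=5, scoring="accuracy"
    ).mean()

print(cv_scores)
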
# Although potassium (K) is the most predictive single feature, its accuracy is low (28.03%).
# Next, other features are combined to try to improve the model's performance.

# Using all features together
X_all = crops[features]  # All features: N, P, K, ph
y = crops[target]

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_all, y, test_size=0.3, random_state=42)

# Initialize and fit the model
model_all = LogisticRegression(max_iter=1000)
model_all.fit(X_train, y_train)

# Predict and evaluate
y_pred = model_all.predict(X_test)
all_features_score = metrics.accuracy_score(y_test, y_pred)

print(f"Accuracy using only K: {scores['K']:.4f}")
print(f"Accuracy using all features: {all_features_score:.4f}")
print(f"Improvement: {(all_features_score - scores['K']) * 100:.2f} percentage points")

# Optional: inspect the coefficients to gauge each feature's relative importance
feature_importance = pd.DataFrame({
    'Feature': features,
    'Importance': abs(model_all.coef_).sum(axis=0)  # sum of absolute per-class coefficients
})
feature_importance = feature_importance.sort_values('Importance', ascending=False)
print("\nRelative feature importance:")
print(feature_importance)
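
Note that raw logistic-regression coefficients depend on each feature's scale, so the ranking above should be read with caution. A minimal sketch of a fairer comparison, standardizing the features before fitting (this pipeline is illustrative and not part of the original solution):

# Optional: compare coefficient magnitudes on standardized features
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import numpy as np

scaled_model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scaled_model.fit(X_train, y_train)

# Sum of absolute per-class coefficients, now on comparable scales
coef = scaled_model.named_steps["logisticregression"].coef_
scaled_importance = pd.Series(np.abs(coef).sum(axis=0), index=features)
print(scaled_importance.sort_values(ascending=False))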

# Confusion matrix to evaluate per-class performance
print("\nConfusion matrix:")
conf_matrix = metrics.confusion_matrix(y_test, y_pred)
print(conf_matrix)

# Detailed classification report
print("\nClassification report:")
class_report = metrics.classification_report(y_test, y_pred)
print(class_report)
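
With this many crop classes, the printed confusion matrix is easier to interpret as a labelled plot. A small optional sketch, assuming matplotlib is installed:

# Optional: plot the confusion matrix with class labels
import matplotlib.pyplot as plt

disp = metrics.ConfusionMatrixDisplay(
    confusion_matrix=conf_matrix, display_labels=model_all.classes_
)
disp.plot(xticks_rotation="vertical")
plt.tight_layout()
plt.show()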