Project: Predictive Modeling for Agriculture
  Sowing Success: How Machine Learning Helps Farmers Select the Best Crops

    Measuring essential soil metrics such as nitrogen, phosphorus, and potassium levels and pH value is an important part of assessing soil condition. However, doing so can be expensive and time-consuming, which can force farmers to prioritize which metrics to measure based on their budget constraints.

    Farmers have various options when it comes to deciding which crop to plant each season. Their primary objective is to maximize the yield of their crops, taking into account different factors. One crucial factor that affects crop growth is the condition of the soil in the field, which can be assessed by measuring basic elements such as nitrogen and potassium levels. Each crop has an ideal soil condition that ensures optimal growth and maximum yield.

    A farmer has reached out to you, a machine learning expert, for assistance in selecting the best crop for their field. They've provided you with a dataset called soil_measures.csv, which contains:

    • "N": Nitrogen content ratio in the soil
    • "P": Phosphorus content ratio in the soil
    • "K": Potassium content ratio in the soil
    • "ph": pH value of the soil
    • "crop": categorical values naming various crops (target variable)

    Each row in this dataset represents various measures of the soil in a particular field. Based on these measurements, the crop specified in the "crop" column is the optimal choice for that field.

    In this project, you will apply machine learning to build a multi-class classification model that predicts the type of "crop", while using techniques to avoid multicollinearity, which occurs when two or more features are highly correlated.

    # All required libraries are imported here for you.
    import matplotlib.pyplot as plt
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    import seaborn as sns
    from sklearn.metrics import f1_score
    # Load the dataset
    crops = pd.read_csv("soil_measures.csv")

    # Exploratory checks: column types, target classes, sample rows, and summary stats
    print(crops.dtypes)
    print(crops["crop"].unique())
    print(crops.head())
    print(crops.describe())
    crops.info()  # info() prints its report directly, so no print() is needed
    # Split the data into features (X) and target (y)
    X = crops.drop("crop", axis=1)
    y = crops["crop"]
    
    # Split into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    # Fit a single-feature model per soil metric to gauge its individual predictive power
    for feature in ["N", "P", "K", "ph"]:
        logistic_reg = LogisticRegression(max_iter=2000, multi_class="multinomial")
        logistic_reg.fit(X_train[[feature]], y_train)
        y_pred = logistic_reg.predict(X_test[[feature]])
        feature_performance = f1_score(y_test, y_pred, average="weighted")
        print(f"F1-score for {feature}: {feature_performance}")

    The F1 score is a metric used to assess the quality of a classification model. It is the harmonic mean of precision and recall, and ranges from 0 to 1, with 1 representing a model that perfectly classifies each observation into the correct class and 0 representing a model that is unable to classify any observation into the correct class.

    Precision is the proportion of positive predictions that are actually correct, and recall is the proportion of actual positives that the model correctly identifies.
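
    As a sanity check, you can relate the weighted F1 score above to precision and recall directly. The snippet below is a minimal sketch that assumes y_test and y_pred from the last loop iteration are still in scope; precision_score and recall_score come from sklearn.metrics.

    from sklearn.metrics import precision_score, recall_score

    # Weighted precision and recall across all crop classes
    precision = precision_score(y_test, y_pred, average="weighted", zero_division=0)
    recall = recall_score(y_test, y_pred, average="weighted", zero_division=0)

    # F1 is the harmonic mean of precision and recall. Note that with
    # weighted averaging this only approximates f1_score's output, since
    # the weighted mean of per-class F1 scores is not exactly the harmonic
    # mean of the weighted precision and recall.
    f1_harmonic = 2 * precision * recall / (precision + recall)
    print(f"precision={precision:.3f}, recall={recall:.3f}, harmonic mean={f1_harmonic:.3f}")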

    # Calculate the correlation matrix over the numeric soil features only
    # (the categorical "crop" column must be excluded)
    corr_matrix = crops.drop("crop", axis=1).corr()

    # Visualize the correlation heatmap
    sns.heatmap(corr_matrix, annot=True)
    plt.show()
    print(corr_matrix)
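
    To turn the heatmap into a concrete decision, it helps to list the strongly correlated feature pairs directly. This is a minimal sketch that assumes corr_matrix from the cell above; the 0.5 cutoff is an illustrative choice, not a rule.

    # Report feature pairs whose absolute correlation exceeds a chosen threshold
    threshold = 0.5  # illustrative cutoff
    features = corr_matrix.columns
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            r = corr_matrix.iloc[i, j]
            if abs(r) > threshold:
                print(f"{features[i]} vs {features[j]}: r = {r:.3f}")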

    Analysis:

    Retain:

    • K: It has the highest F1 score (0.200787) and relatively low correlation with other features. It's likely an informative predictor.

    Drop:

    • P: It has a high correlation with K (0.736232) and a lower F1 score than K. It might not add significant information beyond what K already provides.

    Consider:

    • N: It has a moderate F1 score (0.105079) and low correlation with other features, so it could contribute to the model, although its F1 score is not as high as K's.
    • ph: It has the lowest F1 score (0.045327) and low correlation with other features. It might not be a strong predictor on its own.

    Recommendation:

    • Start with K as a definitive feature.
    • Experiment with including N: Build models with K alone and with K + N to see if N improves performance (see the sketch after this list).
    • Consider dropping ph: If it consistently doesn't improve performance, it might not be worth including.
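
    The comparison below is a minimal sketch of that experiment, assuming X, y, and the imports from the earlier cells; the candidate subsets are illustrative, not exhaustive.

    # Compare candidate feature subsets suggested by the analysis
    candidate_subsets = [["K"], ["K", "N"], ["K", "N", "ph"]]
    for subset in candidate_subsets:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[subset], y, test_size=0.2, random_state=42
        )
        model = LogisticRegression(max_iter=2000, multi_class="multinomial")
        model.fit(X_tr, y_tr)
        score = f1_score(y_te, model.predict(X_te), average="weighted")
        print(f"{subset}: weighted F1 = {score:.4f}")
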
    # Select features based on the F1 scores and correlation analysis:
    # drop "P", which is highly correlated with "K"
    final_features = ["N", "ph", "K"]
    
    # Re-split the data using selected features
    X_train, X_test, y_train, y_test = train_test_split(X[final_features], y, test_size=0.2, random_state=42)
    
    # Train the final model
    log_reg = LogisticRegression(max_iter=2000, multi_class="multinomial")
    log_reg.fit(X_train, y_train)
    
    # Evaluate the final model
    y_pred = log_reg.predict(X_test)
    model_performance = f1_score(y_test, y_pred, average="weighted")
    print(f"Final model F1-score: {model_performance}")