Project: Predictive Modeling for Agriculture

Sowing Success: How Machine Learning Helps Farmers Select the Best Crops

    Measuring essential soil metrics such as nitrogen, phosphorus, and potassium levels, as well as pH, is an important part of assessing soil condition. However, it can be an expensive and time-consuming process, so farmers often have to prioritize which metrics to measure based on their budget constraints.

    Farmers have various options when it comes to deciding which crop to plant each season. Their primary objective is to maximize the yield of their crops, taking into account different factors. One crucial factor that affects crop growth is the condition of the soil in the field, which can be assessed by measuring basic elements such as nitrogen and potassium levels. Each crop has an ideal soil condition that ensures optimal growth and maximum yield.

    We need to select the best crop for a given field using a dataset called soil_measures.csv, which contains:

    • "N": Nitrogen content ratio in the soil
    • "P": Phosphorous content ratio in the soil
    • "K": Potassium content ratio in the soil
    • "pH" value of the soil
    • "crop": categorical values that contain various crops (target variable).

    Each row in this dataset represents various measures of the soil in a particular field. Based on these measurements, the crop specified in the "crop" column is the optimal choice for that field.

    We will build multi-class classification models to predict the type of "crop" and identify the single most important feature for predictive performance.

    Importing libraries and previewing dataset

    # Import the required libraries.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn import metrics
    
    # Load the dataset
    crops = pd.read_csv("soil_measures.csv")
    # Previewing dataset
    crops.sample(10)
    # Previewing data types
    crops.info()
    # Assessing the target variable
    crops.crop.value_counts()

    Preprocessing data

    Preparing the target variable and defining a scaler

    In this cell, we prepare the dataset for machine learning. The target variable 'crop' is separated from the features, the crop labels are mapped to numeric codes, and a custom function 'standardscaler' is defined for standardizing the numerical features.

    # First, we separate the target variable 'crop' from the features.
    X = crops.drop('crop', axis=1)
    y = crops.crop
    
    # Extract the unique crop names from the 'crop' column
    crop_list = crops.crop.unique()
    
    # Display the unique crop list
    crop_list
    
    # Build a dictionary mapping each crop name to a numeric index
    crop_dict = {}
    for index, crop in enumerate(crop_list):
        crop_dict[crop] = index
    
    # Display the crop-to-index mapping
    crop_dict
    
    # Replace the crop labels in the target variable 'y' with their numeric codes.
    # Assigning the result back avoids modifying a slice of the original DataFrame in place.
    y = y.replace(crop_dict)
    
    y
    
    # We define a function 'standardscaler' for standardizing a given series.
    def standardscaler(series):
        # Calculate the mean and (population) standard deviation of the series.
        mean = series.mean()
        sd = series.std(ddof=0)
        
        # Standardize the series using the mean and standard deviation.
        z = (series - mean) / sd
        
        # Return the standardized series.
        return z
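
    A quick sanity check of the helper (a hypothetical example, not part of the original analysis): a standardized series should have a mean of roughly 0 and a population standard deviation of roughly 1.

    # Hypothetical check of 'standardscaler' on a small example series.
    sample = pd.Series([10.0, 20.0, 30.0, 40.0])
    scaled = standardscaler(sample)
    print(round(scaled.mean(), 10))       # expected: 0.0
    print(round(scaled.std(ddof=0), 10))  # expected: 1.0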
    

    Feature standardization

    The features in the DataFrame X are standardized using the custom 'standardscaler' function to ensure that all features are on a similar scale.

    # Standardize Each Feature
    for col in X.columns:
        X[col] = standardscaler(X[col])
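
    Modeling sketch: identifying the most predictive feature

    With the features standardized, the modeling step described at the start of the report can follow. The sketch below is one possible approach, stated as an assumption rather than the report's definitive method: train a multi-class LogisticRegression on each feature in isolation, score it on a held-out test set with a weighted F1 score, and treat the feature whose single-feature model scores best as the most important one. It reuses the libraries imported above and the standardized X and numeric y from the preceding cells.

    # Sketch (assumptions: 80/20 train/test split, weighted F1 as the comparison metric).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    
    # Train one logistic regression per feature and record its test-set F1 score.
    feature_scores = {}
    for feature in X.columns:
        model = LogisticRegression(max_iter=2000)
        model.fit(X_train[[feature]], y_train)
        y_pred = model.predict(X_test[[feature]])
        feature_scores[feature] = metrics.f1_score(y_test, y_pred, average="weighted")
    
    # The feature whose single-feature model performs best.
    best_feature = max(feature_scores, key=feature_scores.get)
    feature_scores, best_feature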