You're working as a sports journalist at a major online sports media company, specializing in soccer analysis and reporting. You've been watching both men's and women's international soccer matches for a number of years, and your gut instinct tells you that more goals are scored in women's international football matches than men's. This would make an interesting investigative article that your subscribers are bound to love, but you'll need to perform a valid statistical hypothesis test to be sure!
While scoping this project, you acknowledge that the sport has changed a lot over the years, and that performance likely varies by tournament, so you decide to limit the data used in the analysis to only official FIFA World Cup matches (excluding qualifiers) played since 2002-01-01.
You create two datasets containing the results of every official men's and women's international football match since the 19th century, which you scraped from a reliable online source. This data is stored in two CSV files: women_results.csv and men_results.csv.
The question you are trying to answer is:
Are more goals scored in women's international soccer matches than men's?
You assume a 10% significance level, and use the following null and alternative hypotheses:
H0: The mean number of goals scored in women's international soccer matches is the same as men's.
HA: The mean number of goals scored in women's international soccer matches is greater than men's.
# Imports
import pandas as pd
import matplotlib.pyplot as plt
import pingouin
from scipy.stats import mannwhitneyu
# Load men's and women's datasets
men = pd.read_csv("men_results.csv")
women = pd.read_csv("women_results.csv")
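# Optional quick look at the raw data (the columns used below are assumed to
# be date, home_score, away_score, and tournament)
print(men.head())
print(women.head())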
# Filter the data for the time range and tournament
men["date"] = pd.to_datetime(men["date"])
men_subset = men[(men["date"] >= "2002-01-01") & (men["tournament"] == "FIFA World Cup")].copy()
women["date"] = pd.to_datetime(women["date"])
women_subset = women[(women["date"] >= "2002-01-01") & (women["tournament"] == "FIFA World Cup")].copy()
# Create group and goals_scored columns
men_subset["group"] = "men"
women_subset["group"] = "women"
men_subset["goals_scored"] = men_subset["home_score"] + men_subset["away_score"]
women_subset["goals_scored"] = women_subset["home_score"] + women_subset["away_score"]
# Determine normality using histograms
men_subset["goals_scored"].hist()
plt.show()
plt.clf()
women_subset["goals_scored"].hist()
plt.show()
plt.clf()
# Goals scored is not normally distributed, so use the Wilcoxon-Mann-Whitney test of two groups
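# Optional: back up the visual check with a formal normality test; this is a
# sketch beyond the original analysis, using pingouin's Shapiro-Wilk wrapper
print(pingouin.normality(men_subset["goals_scored"]))
print(pingouin.normality(women_subset["goals_scored"]))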
# Combine the women's and men's subsets
both = pd.concat([women_subset, men_subset], axis=0, ignore_index=True)
# Transform the data into wide format for the pingouin Wilcoxon-Mann-Whitney (Mann-Whitney U) test
both_subset = both[["goals_scored", "group"]]
both_subset_wide = both_subset.pivot(columns="group", values="goals_scored")
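# Note: each row of the wide frame has a value in one column and NaN in the
# other; pingouin.mwu is expected to drop those NaNs before computing the test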
# Perform right-tailed Wilcoxon-Mann-Whitney test with pingouin
results_pg = pingouin.mwu(x=both_subset_wide["women"],
                          y=both_subset_wide["men"],
                          alternative="greater")
# Alternative: perform the same right-tailed Wilcoxon-Mann-Whitney test with SciPy
results_scipy = mannwhitneyu(x=women_subset["goals_scored"],
                             y=men_subset["goals_scored"],
                             alternative="greater")
# Extract p-value as a float
p_val = results_pg["p-val"].values[0]
# Determine the hypothesis test result using the 10% significance level
if p_val <= 0.1:
    result = "reject"
else:
    result = "fail to reject"
result_dict = {"p_val": p_val, "result": result}

# Imports
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import OneHotEncoder
# Load the training data
train_df = pd.read_csv("house_sales.csv")
# Convert 'sale_date' to datetime and extract features
train_df['sale_date'] = pd.to_datetime(train_df['sale_date'])
train_df['sale_year'] = train_df['sale_date'].dt.year
train_df['sale_month'] = train_df['sale_date'].dt.month
train_df['sale_day'] = train_df['sale_date'].dt.day
# Drop the original 'sale_date' column
train_df = train_df.drop(['sale_date', 'house_id'], axis=1)
# Separate the features and target variable
X = train_df.drop(['sale_price'], axis=1)
y = train_df['sale_price']
# Encode categorical variables using one-hot encoding
categorical_columns = ['city', 'house_type']
encoder = OneHotEncoder(sparse_output=False, drop='first')
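# drop='first' removes one dummy per category, avoiding perfect
# multicollinearity between the dummies and the model's intercept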
X_encoded = encoder.fit_transform(X[categorical_columns])
X_encoded_df = pd.DataFrame(X_encoded, columns=encoder.get_feature_names_out(categorical_columns))
# Combine encoded categorical variables with numerical variables
X = X.drop(categorical_columns, axis=1).reset_index(drop=True)
X = pd.concat([X, X_encoded_df], axis=1)
# Create and fit the linear regression model
model = LinearRegression()
model.fit(X, y)
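# Optional sketch (not in the original script): estimate generalization error
# with 5-fold cross-validation before touching the validation set, using only
# the X and y built above
from sklearn.model_selection import cross_val_score
cv_rmse = -cross_val_score(LinearRegression(), X, y,
                           scoring="neg_root_mean_squared_error", cv=5)
print(f"CV RMSE: {cv_rmse.mean():.2f} (+/- {cv_rmse.std():.2f})")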
# Load the validation data
val_df = pd.read_csv('validation.csv')
# Convert 'sale_date' to datetime and extract features for validation data
val_df['sale_date'] = pd.to_datetime(val_df['sale_date'])
val_df['sale_year'] = val_df['sale_date'].dt.year
val_df['sale_month'] = val_df['sale_date'].dt.month
val_df['sale_day'] = val_df['sale_date'].dt.day
# Drop the original 'sale_date' column
val_X = val_df.drop(['sale_date', 'house_id'], axis=1)
# Encode the validation data
val_X_encoded = encoder.transform(val_X[categorical_columns])
val_X_encoded_df = pd.DataFrame(val_X_encoded, columns=encoder.get_feature_names_out(categorical_columns))
val_X = val_X.drop(categorical_columns, axis=1).reset_index(drop=True)
val_X = pd.concat([val_X, val_X_encoded_df], axis=1)
# Ensure the columns in val_X match the training features
val_X = val_X.reindex(columns=X.columns, fill_value=0)
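# Caveat (an assumption about the data, not part of the original script):
# OneHotEncoder defaults to handle_unknown='error', so encoder.transform above
# raises if the validation set contains a city or house_type unseen during
# fit; fitting with OneHotEncoder(sparse_output=False, drop='first',
# handle_unknown='ignore') would encode unseen categories as all zeros instead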
# Predict the sale prices for the validation data
predicted_prices = model.predict(val_X)
# Calculate the RMSE
if 'sale_price' in val_df.columns:
    # Take the square root of the MSE to get RMSE (portable across sklearn versions)
    rmse = mean_squared_error(val_df['sale_price'], predicted_prices) ** 0.5
    print(f'RMSE: {rmse:.2f}')
else:
    print("Validation data does not contain 'sale_price' for RMSE calculation.")
# Create a dataframe with the predicted prices and house IDs
base_result = pd.DataFrame({'house_id': val_df['house_id'], 'price': predicted_prices})
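# Optional (not in the original script): persist the predictions; the output
# filename is illustrative
base_result.to_csv("base_result.csv", index=False)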
# Return the result
base_result