Competition - Dance Party Songs
  • Which songs are most suitable for a dancing party?

    📖 Background

    It's that vibrant time of year again: summer has arrived (for those of us in the Northern Hemisphere, at least)! There's an energy in the air that inspires us to get up and move. In step with this exuberance, your company has decided to host a dance party to celebrate, and you, with your unique blend of creativity and analytical expertise, have been entrusted with curating a dance-themed playlist that will set the perfect mood for this electrifying night. The question then arises: how can you identify the songs that will make the attendees dance their hearts out? This is where your coding skills come into play.

    💾 The Data

    You have assembled information on more than 125 genres of Spotify music tracks in a file called spotify.csv, with each genre containing approximately 1000 tracks. Tracks from all time periods are included, with no date restrictions, though data collection concluded in October 2022. Each row represents a track along with its associated audio features.

    Column descriptions:

    track_id: The Spotify ID of the track.
    artists: Names of the artists who performed the track, separated by a ";" if there is more than one.
    album_name: The name of the album that includes the track.
    track_name: The name of the track.
    popularity: A numerical value from 0 to 100, with 100 being the highest popularity. It is calculated from the number of times the track has been played recently, with more recent plays contributing more to the score. Duplicate tracks are scored independently.
    duration_ms: The length of the track, in milliseconds.
    explicit: Indicates whether the track contains explicit lyrics; true means it does, false means it does not or it is unknown.
    danceability: A score between 0.0 and 1.0 representing the track's suitability for dancing, computed algorithmically from factors such as tempo, rhythm stability, beat strength, and regularity.
    energy: A score between 0.0 and 1.0 indicating the track's intensity and activity level. Energetic tracks tend to be fast, loud, and noisy.
    key: The key the track is in. Integers map to pitches using standard pitch-class notation, e.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
    loudness: The overall loudness, measured in decibels (dB).
    mode: The modality of the track: 1 for major, 0 for minor.
    speechiness: Measures the amount of spoken words in a track. A value close to 1.0 denotes speech-based content, values from 0.33 to 0.66 indicate a mix of speech and music (such as rap), and values below 0.33 usually indicate music and other non-speech tracks.
    acousticness: A confidence measure from 0.0 to 1.0, with 1.0 representing the highest confidence that the track is acoustic.
    instrumentalness: An estimate of the likelihood that a track is instrumental. Non-lyrical sounds such as "ooh" and "aah" are treated as instrumental, whereas rap or spoken-word tracks are classified as "vocal". A value closer to 1.0 indicates a higher probability that the track lacks vocal content.
    liveness: A measure of the probability that the track was performed live. Scores above 0.8 indicate a high likelihood of a live recording.
    valence: A score from 0.0 to 1.0 representing the track's positiveness. High scores suggest a more positive or happier track.
    tempo: The track's estimated tempo, in beats per minute (BPM).
    time_signature: An estimate of the track's time signature (meter), a notational convention specifying how many beats are in each bar (or measure). Values range from 3 to 7, indicating time signatures from 3/4 to 7/4.
    track_genre: The genre of the track.

    Source (data has been modified)
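The key and mode columns above are numeric; as a minimal illustration of the pitch-class mapping (the column names match the table above, but the sample values below are made up, not taken from spotify.csv), they can be translated into readable labels:

```python
import pandas as pd

# Pitch-class notation: 0 = C, 1 = C#/Db, ..., 11 = B; -1 means no key detected
PITCH_CLASSES = ['C', 'C#/Db', 'D', 'D#/Eb', 'E', 'F',
                 'F#/Gb', 'G', 'G#/Ab', 'A', 'A#/Bb', 'B']

def describe_key(key: int, mode: int) -> str:
    """Translate the numeric key/mode pair into a readable label."""
    if key == -1:
        return 'unknown'
    return f"{PITCH_CLASSES[key]} {'major' if mode == 1 else 'minor'}"

# Tiny synthetic example (hypothetical values)
df = pd.DataFrame({'key': [0, 2, -1], 'mode': [1, 0, 1]})
df['key_name'] = [describe_key(k, m) for k, m in zip(df['key'], df['mode'])]
print(df['key_name'].tolist())  # ['C major', 'D minor', 'unknown']
```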

    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt
    
    from sklearn.pipeline import Pipeline
    from sklearn.compose import ColumnTransformer
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler, OneHotEncoder
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    spotify = pd.read_csv('data/spotify.csv')
    spotify.head()
    # Checking for missing values per column
    spotify.isnull().sum()
    # Inspect the rows that contain any missing value
    rows_with_missing_values = spotify[spotify.isnull().any(axis=1)]
    print(rows_with_missing_values)
    # Drop rows with missing values, then confirm none remain
    spotify = spotify.dropna()
    spotify.isnull().sum().sum()
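A quick illustration, on a tiny synthetic frame (not spotify.csv), of what the checks above report before and after dropna:

```python
import numpy as np
import pandas as pd

# Two of the six cells are missing (hypothetical sample data)
df = pd.DataFrame({'track_name': ['A', None, 'C'],
                   'danceability': [0.9, 0.5, np.nan]})

print(df.isnull().sum().sum())       # 2 missing cells in total
cleaned = df.dropna()                # drops every row with at least one missing value
print(len(cleaned))                  # 1 complete row survives
print(cleaned.isnull().sum().sum())  # 0
```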
    # Checking unique values
    spotify.nunique()
    # Checking summary statistics
    spotify.describe()
    # Checking data types
    spotify.info()
    # Plot song durations
    plt.figure(figsize=(10, 6))
    plt.hist(spotify['duration_ms'] / 60000, bins=30, color='skyblue', edgecolor='black')
    plt.title('Distribution of Song Durations')
    plt.xlabel('Duration (minutes)')
    plt.ylabel('Frequency')
    plt.grid(True)
    plt.show()
    # Drop songs longer than 5 minutes
    spotify = spotify[spotify['duration_ms'] <= 300000]  # 300000 ms = 5 minutes
    
    # Drop songs shorter than 2 minutes
    spotify = spotify[spotify['duration_ms'] >= 120000]  # 120000 ms = 2 minutes
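As a quick sanity check of the thresholds above (using made-up durations, not the real data), pandas' between keeps exactly the 2-to-5-minute range:

```python
import pandas as pd

# Hypothetical durations in ms: 1.5, 2.5, ~4.2, and ~6.7 minutes
df = pd.DataFrame({'duration_ms': [90_000, 150_000, 250_000, 400_000]})

# Same bounds as the filters above: 120000 ms = 2 min, 300000 ms = 5 min
kept = df[df['duration_ms'].between(120_000, 300_000)]
print(kept['duration_ms'].tolist())  # [150000, 250000]
```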
    # Create a box plot for song durations
    plt.figure(figsize=(8, 6))
    plt.boxplot(spotify['duration_ms'] / 60000, vert=False)
    plt.title('Box Plot of Song Durations')
    plt.xlabel('Duration (minutes)')
    plt.ylabel('Song')
    plt.show()
    # Descriptive statistics
    audio_features = spotify.drop(['track_id', 'artists', 'album_name', 'track_name', 'explicit', 'track_genre'], axis=1)
    statistics = audio_features.describe()
    print(statistics)
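The analysis so far stops at descriptive statistics. One simple way to move toward the competition question is to rank tracks by a combined score over the dance-relevant features; the equal weights and sample tracks below are assumptions for illustration, not part of the dataset:

```python
import pandas as pd

# Synthetic tracks (hypothetical names and scores, not from spotify.csv)
tracks = pd.DataFrame({
    'track_name': ['A', 'B', 'C'],
    'danceability': [0.9, 0.5, 0.8],
    'energy': [0.8, 0.4, 0.9],
    'valence': [0.7, 0.3, 0.6],
})

# Equal-weight "party score": the mean of the three dance-relevant features
tracks['party_score'] = tracks[['danceability', 'energy', 'valence']].mean(axis=1)

# Highest-scoring tracks first: candidates for the playlist
playlist = tracks.sort_values('party_score', ascending=False)
print(playlist['track_name'].tolist())  # ['A', 'C', 'B']
```

With the real data, the same scoring could be applied to the cleaned spotify frame, or replaced by one of the imported regressors to predict danceability from the other features.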