Analyzing River Thames Water Levels
Time series data is everywhere, from watching your stock portfolio to monitoring climate change, and even live-tracking as local cases of a virus become a global pandemic. In this project, you’ll work with a time series that tracks the tide levels of the Thames River. You’ll first load the data and inspect it visually, then perform calculations on the dataset to generate some summary statistics. You’ll end by reducing the time series to its component attributes and analyzing them.
The original dataset is available from the British Oceanographic Data Centre.
Here's a map of the locations of the tidal meters along the River Thames in London.
The provided datasets are in the data folder in this workspace. For this project, you will work with one of these files, 10-11_London_Bridge.txt, which contains comma separated values for water levels in the Thames River at the London Bridge. After you've finished the project, you can use your same code to analyze data from the other files (at other spots in the UK where tidal data is collected) if you'd like.
The TXT file contains data for three variables, described in the table below.
| Variable Name | Description | Format |
|---|---|---|
| Date and time | Date and time of measurement, in GMT. Note the tide gauge is accurate to one minute. | dd/mm/yyyy hh:mm:ss |
| Water level | High or low water level measured by tide meter. Tide gauges are accurate to 1 centimetre. | metres (Admiralty Chart Datum (CD), Ordnance Datum Newlyn (ODN), or Trinity High Water (THW)) |
| Flag | High water flag = 1, low water flag = 0 | Categorical (0 or 1) |
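Since the timestamps are given as dd/mm/yyyy, it's worth checking how pandas parses them before loading the file; a quick sanity check on a toy timestamp (made up for illustration, not taken from the file):

```python
import pandas as pd

# dd/mm/yyyy means the day comes first; without dayfirst=True,
# pandas would read 03/04/1911 as March 4th instead of April 3rd.
ts = pd.to_datetime('03/04/1911 14:12:00', dayfirst=True)
print(ts)  # 1911-04-03 14:12:00
```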
# We've imported your first Python package for you, along with a function you will need called IQR
import pandas as pd

def IQR(column):
    """Calculate the interquartile range (IQR) of a DataFrame column using the quantile method."""
    q25, q75 = column.quantile([0.25, 0.75])
    return q75 - q25
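A quick sanity check of the helper on a toy Series (made-up numbers, not tide data):

```python
import pandas as pd

# For the values 1..9, the quartiles under pandas' default linear
# interpolation are 3 and 7, so the IQR should come out to 4.0.
toy = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9])
q25, q75 = toy.quantile([0.25, 0.75])
print(q75 - q25)  # 4.0
```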
# Begin coding here ...

# load and filter data
# dayfirst=True because the timestamps are dd/mm/yyyy
thames = pd.read_csv('data/10-11_London_Bridge.txt', parse_dates=['Date and time'], dayfirst=True)
# rename cols
thames.columns = ['date', 'water_level', 'flag', 'hw_or_lw']
# drop col
thames = thames.drop('hw_or_lw', axis=1)
# preview data
thames.head()
# prepare data for analysis
# convert data types
thames['date'] = pd.to_datetime(thames['date'])
thames['water_level'] = thames['water_level'].astype('float')
# add month and year columns
thames['month'] = thames['date'].dt.month
thames['year'] = thames['date'].dt.year
thames.head()

# create high water and low water datasets
high_water = thames[thames['flag'] == 1]
low_water = thames[thames['flag'] == 0]

# finding summary statistics for high- and low-tide data
# high tide
high_tide_stats = high_water['water_level'].agg(['mean', 'median', IQR])
print(high_tide_stats)
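`.agg` accepts a mix of built-in statistic names and user-defined functions, which is how `IQR` slots into the list above; a toy illustration with a hypothetical `spread` aggregator (not part of the project code):

```python
import pandas as pd

def spread(col):
    # hypothetical custom aggregator, passed to .agg just like IQR
    return col.max() - col.min()

s = pd.Series([1.0, 3.0, 7.0])
stats = s.agg(['mean', 'median', spread])
print(stats)  # rows labelled mean, median, spread
```

Each function object is labelled in the result by its `__name__`, so the output row reads `spread` just as the project's result reads `IQR`.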
# low tide
low_tide_stats = low_water['water_level'].agg(['mean', 'median', IQR])
print(low_tide_stats)

# determining percentages of high- and low-tide level days each year
# number high water days / year
all_high_days = high_water.groupby('year')['water_level'].count()
very_high_days = high_water[high_water['water_level'] > high_water['water_level'].quantile(0.9)].groupby('year')['water_level'].count()
very_high_ratio = (very_high_days/all_high_days).reset_index()
# number low water days / year
all_low_days = low_water.groupby('year')['water_level'].count()
very_low_days = low_water[low_water['water_level'] < low_water['water_level'].quantile(0.1)].groupby('year')['water_level'].count()
very_low_ratio = (very_low_days/all_low_days).reset_index()

# results dictionary
solution = {'high_statistics': high_tide_stats,
'low_statistics': low_tide_stats,
'very_high_ratio': very_high_ratio,
'very_low_ratio': very_low_ratio}
print(solution)
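The introduction mentions reducing the time series to its component attributes; a minimal, pandas-only sketch of that idea on a synthetic tide-like series (synthetic because the data file may not be available here), splitting it into a rolling-mean trend and a residual:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a water-level series: a slow drift plus a
# roughly monthly oscillation (illustrative, not real Thames data).
idx = pd.date_range('1911-01-01', periods=365, freq='D')
level = pd.Series(
    2.0 + 0.001 * np.arange(365) + 0.5 * np.sin(np.arange(365) * 2 * np.pi / 29.5),
    index=idx,
)

# A centred 30-day rolling mean averages out the ~monthly oscillation,
# leaving the slow trend; subtracting it isolates the oscillation.
trend = level.rolling(window=30, center=True).mean()
residual = level - trend
print(residual.dropna().abs().max())
```

Libraries such as statsmodels offer fuller decompositions (trend, seasonal, residual), but the rolling-mean split above captures the basic idea with pandas alone.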