
In the quest for efficiency and effectiveness in urban transportation, finding the optimal routes to take passengers from their initial locations to their desired destinations is paramount. This challenge is not just about reducing travel time; it's about enhancing the overall experience for both drivers and passengers, ensuring safety, and minimizing environmental impact.

You have been asked to revolutionize the way taxis navigate the urban landscape, ensuring passengers reach their destinations swiftly, safely, and satisfactorily. As an initial step, your goal is to build a reinforcement learning agent that solves this problem within a simulated environment.

The Taxi-v3 environment

The Taxi-v3 environment is a strategic simulation, offering a grid-based arena where a taxi navigates to address daily challenges akin to those faced by a taxi driver. This environment is defined by a 5x5 grid where the taxi's mission involves picking up a passenger from one of four specific locations (marked as Red, Green, Yellow, and Blue) and dropping them off at another designated spot. The goal is to accomplish this with minimal time on the road to maximize rewards, emphasizing the need for route optimization and efficient decision-making for passenger pickup and dropoff.

Key Components:

  • Action Space: Comprises six actions where 0 moves the taxi south, 1 north, 2 east, 3 west, 4 picks up a passenger, and 5 drops off a passenger.
  • Observation Space: Comprises 500 discrete states, accounting for 25 taxi positions, 5 potential passenger locations, and 4 destinations (see the decoding sketch after this list).
  • Rewards System: Includes a penalty of -1 for each step taken without other rewards, +20 for successful passenger delivery, and -10 for illegal pickup or dropoff actions. Actions resulting in no operation, like hitting a wall, also incur a time step penalty.
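
To make the 500-state observation space concrete, the short sketch below decodes a state index into its four components. It is a minimal illustration assuming the underlying TaxiEnv exposes the decode helper from Gymnasium's toy-text implementation; demo_env is a throwaway name used only here.

# Decode a Taxi-v3 state index into (taxi_row, taxi_col, passenger_location, destination)
import gymnasium as gym

demo_env = gym.make("Taxi-v3")
demo_state, _ = demo_env.reset(seed=0)
taxi_row, taxi_col, passenger_loc, destination = demo_env.unwrapped.decode(demo_state)
print(demo_state, "->", taxi_row, taxi_col, passenger_loc, destination)
demo_env.close()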

# Re-run this cell to install and import the necessary libraries and load the required variables
!pip install gymnasium[toy_text] imageio
import numpy as np
import gymnasium as gym
import imageio
from IPython.display import Image
from gymnasium.utils import seeding

# Initialize the Taxi-v3 environment
env = gym.make("Taxi-v3", render_mode='rgb_array')

# Seed the environment for reproducibility
env.np_random, _ = seeding.np_random(42)
env.action_space.seed(42)
np.random.seed(42)

# Maximum number of actions per training episode
max_actions = 100 
# Start coding here
# Feel free to add as many cells as you want
# Create and visualize the environment
import matplotlib.pyplot as plt

env = gym.make("Taxi-v3", render_mode="rgb_array")
state, info = env.reset(seed=42)
print(state)

# Visualize the initial state
state_image = env.render()
plt.imshow(state_image)
plt.show()
# Inspect the Gymnasium state and action spaces
num_actions = env.action_space.n
num_states = env.observation_space.n
print(f"{num_states} states, {num_actions} actions")
# Initialize the Q-table with zeros: one row per state, one column per action
Q = np.zeros((num_states, num_actions))

# Training hyperparameters
num_episodes = 2000
max_actions = 100
alpha = 0.1   # learning rate (chosen value; adjust as needed)
gamma = 0.9   # discount factor (chosen value; adjust as needed)

# Q-learning update: Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * (reward + gamma * max_a' Q(s', a'))
def update_q_table(state, action, reward, new_state):
    n_max = np.max(Q[new_state])
    Q[state, action] = (1 - alpha) * Q[state, action] + alpha * (reward + gamma * n_max)
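
As an optional sanity check of the update rule (using the alpha and gamma values above and an arbitrary transition that is reset afterwards), a single update on the zero-initialized table should move the entry toward reward + gamma * max(Q[new_state]):

# Optional check on an arbitrary transition (state 0, action 1, reward -1, new_state 42)
# With Q all zeros and alpha = 0.1, the updated entry should equal -0.1
update_q_table(0, 1, -1, 42)
print(Q[0, 1])   # expected: -0.1
Q[0, 1] = 0.0    # reset the entry so training starts from a clean table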
# Implementation of Q-learning with epsilon-greedy action selection

# Exploration rate: probability of taking a random action (adjust as needed)
epsilon = 0.1

episode_returns = []
for episode in range(num_episodes):
    state, info = env.reset()
    done = False
    episode_reward = 0
    
    for action_index in range(max_actions):
        if np.random.rand() < epsilon:
            action = env.action_space.sample()  # Explore
        else:
            action = np.argmax(Q[state, :])  # Exploit
        
        # Take the action and observe the outcome
        new_state, reward, terminated, truncated, _ = env.step(action)

        # Update the Q-table from this transition
        update_q_table(state, action, reward, new_state)

        episode_reward += reward
        state = new_state

        # Stop the episode once the passenger is delivered (or the episode is cut off)
        if terminated or truncated:
            break
    
    episode_returns.append(episode_reward)

print(f"Episodes trained: {len(episode_returns)}")
print(f"Return of the final training episode: {episode_reward}")

q_table = Q
print(q_table)
print(f"Last state visited: {new_state}")
plt.plot(episode_returns)
plt.xlabel("Episode")
plt.ylabel("Total Reward")
plt.show()
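
The raw per-episode return curve is usually noisy, so a simple moving average (the window size of 50 below is an arbitrary choice) can make the learning trend easier to read.

# Smooth the return curve with a moving average to highlight the trend
window = 50
smoothed = np.convolve(episode_returns, np.ones(window) / window, mode="valid")
plt.plot(smoothed)
plt.xlabel("Episode")
plt.ylabel(f"Mean return over {window} episodes")
plt.show()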
# Render the environment's current (post-training) state
state_image1 = env.render()
plt.imshow(state_image1)
plt.show()

# Extract the greedy policy: the highest-valued action for each state
policy = {state: np.argmax(Q[state]) for state in range(num_states)}
print(policy)
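
Finally, the imageio and IPython.display imports from the setup cell can be used to watch the learned greedy policy in action. This is a sketch under the assumption that following the policy from a fresh reset ends the episode within max_actions steps; the file name taxi_run.gif is arbitrary.

# Roll out one episode with the greedy policy and save the rendered frames as a GIF
frames = []
state, info = env.reset(seed=42)
frames.append(env.render())

for _ in range(max_actions):
    action = policy[state]
    state, reward, terminated, truncated, _ = env.step(action)
    frames.append(env.render())
    if terminated or truncated:
        break

imageio.mimsave("taxi_run.gif", frames)
Image(filename="taxi_run.gif")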