Prompt Engineering for Coding Tasks


0. Setup

Do you need help getting your API key?

Then check out A Step-by-Step Guide To Getting Your OpenAI API Key.

import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

def chatgpt_call(prompt, model="gpt-3.5-turbo"):
    # Send a single-turn prompt and return the first completion's text
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

prompt = """
Can you provide me a Python code snippet to generate a dispersion chart
given two vectors?
"""

response = chatgpt_call(prompt)
print(response)

1. Provide the backbone of the code with code comments

Exploit the weak (and noisy) pattern linking natural and programming language in code, e.g. code comments.

prompt = """
Can you provide me a Python code snippet to generate a dispersion chart
given two vectors?

Please use the following structure:

```python

```
"""
response = chatgpt_call(prompt)
print(response)
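For instance, the structure block can be filled with comment-only scaffolding for the model to complete. The backbone below is a hypothetical example (the author leaves the block empty); it is built as a string so the nested code fence stays readable:

```python
# Hypothetical comment-only backbone (an assumption: the original leaves the
# structure block empty for the reader to fill in).
structure = "\n".join([
    "```python",
    "# 1. Import the plotting library",
    "# 2. Define a function that takes two vectors x and y",
    "# 3. Create a scatter (dispersion) chart of x vs. y",
    "# 4. Label the axes and display the figure",
    "```",
])

prompt = (
    "Can you provide me a Python code snippet to generate a dispersion chart "
    "given two vectors?\n\n"
    "Please use the following structure:\n\n" + structure
)
print(prompt)
```

The comments act as the "weak pattern" of natural language inside code that the model can latch onto.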

2. Ask for Auxiliary Tasks

Asking for auxiliary tasks, such as explaining or testing the generated code, improves the model's performance.
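As a sketch, one way to do this is to bundle auxiliary requests (explanation, tests, a summary) with the main coding task; the exact wording below is an assumption, not a quoted prompt:

```python
# Hypothetical auxiliary-task prompt: the model must produce the code
# plus related artifacts (explanation, tests) alongside it.
prompt = """
Write a Python function that checks whether a string is a palindrome.

In addition:
1. State the time complexity of your solution.
2. Provide three unit tests covering edge cases.
3. Summarize the function's behavior in one sentence.
"""
# response = chatgpt_call(prompt)  # helper from the Setup section
print(prompt)
```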

3. Compute Perplexity

Better prompt understanding (with lower prompt perplexity as a proxy) leads to more functionally accurate programs.

import numpy as np

prompts = [
    "Replace with prompt 1",
    "Replace with prompt 2",
]

def chatgpt_call_logprobs(prompt, model="gpt-3.5-turbo"):
    responses = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        logprobs=True  # request the log probability of each output token
    )
    return responses

for prompt in prompts:
    responses = chatgpt_call_logprobs(prompt)

    logprobs = [token.logprob for token in responses.choices[0].logprobs.content]
    response_text = responses.choices[0].message.content
    response_text_tokens = [token.token for token in responses.choices[0].logprobs.content]
    max_starter_length = max(len(s) for s in ["Prompt:", "Response:", "Tokens:", "Logprobs:", "Perplexity:"])
    max_token_length = max(len(s) for s in response_text_tokens)

    formatted_response_tokens = [s.rjust(max_token_length) for s in response_text_tokens]
    formatted_lps = [f"{lp:.2f}".rjust(max_token_length) for lp in logprobs]

    perplexity_score = np.exp(-np.mean(logprobs))
    print("Prompt:".ljust(max_starter_length), prompt)
    print("Response:".ljust(max_starter_length), response_text, "\n")
    print("Tokens:".ljust(max_starter_length), " ".join(formatted_response_tokens))
    print("Logprobs:".ljust(max_starter_length), " ".join(formatted_lps))
    print("Perplexity:".ljust(max_starter_length), perplexity_score, "\n")
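The score printed above follows the standard definition, perplexity = exp(-mean(logprobs)). A toy illustration with made-up log probabilities:

```python
import math

# Made-up token log probabilities, for illustration only.
logprobs = [-0.1, -0.5, -0.2]

# Perplexity: exponential of the negative mean log probability.
perplexity = math.exp(-sum(logprobs) / len(logprobs))
print(round(perplexity, 4))  # lower is better: the model is less "surprised"
```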

4. Chain-of-Thought

  • Provide a chain of relevant reasonings to follow for reaching the answer.
  • Computing intermediate steps implies spending more computational effort.
  • Sometimes you don’t even need to define the intermediate steps: “Let’s think step by step”.

  • For code generation, Chain-of-Thought works better when including sequence, branch, and loop structures present in code.

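The zero-shot variant mentioned above can be as simple as appending the trigger phrase to the task (a sketch reusing the `chatgpt_call` helper from the Setup section):

```python
# Zero-shot chain-of-thought: append the trigger phrase to the task.
task = "Write a function to find sequences of lowercase letters joined with an underscore."
prompt = task + "\n\nLet's think step by step."
# response = chatgpt_call(prompt)  # helper from the Setup section
print(prompt)
```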

prompt = """
Write a function to find sequences of lowercase letters joined with an underscore.
"""
response = chatgpt_call(prompt)
print(response)
prompt = """
Write a function to find sequences of lowercase letters joined with an underscore.

```
def text_lowercase_underscore(text): 
    Input:
    Output:
    1: 
    2: 
    3: 
    4: 
    5: 
```
"""