Building a simple AI email assistant with ChatGPT API - Tutorial

A practical, step‑by‑step tutorial showing how to build an AI‑powered email assistant using the ChatGPT API. Learn how to summarize long emails, extract action items, and generate professional replies with Python code examples, prompt design strategies, and a complete walkthrough using a realistic business scenario.


You want something concrete, not hand‑wavy—so let’s build a small but real tool: an AI assistant that (1) summarizes long emails and (2) drafts suggested replies, using the ChatGPT API and Python.

 

We’ll walk through it end‑to‑end with a fictitious example, code, and the reasoning behind each step.

 

1. The tool we’ll use and what we’re building

 

Tool: the ChatGPT API, accessed through OpenAI’s official Python SDK. The underlying API is plain HTTPS, so the same approach works from any language; Python just keeps this walkthrough short and beginner‑friendly.

 

Use case:
You’re “Alex,” a project manager drowning in long client emails. You want a script that:

  • Summarizes an email in a few bullet points
  • Extracts key action items
  • Drafts a polite, professional reply in your tone

You’ll run it from the command line, paste in an email, and get structured output.

 

2. Prerequisites and setup

 

2.1. What you need

  • Python installed (3.10 or newer, since the type hints below use the str | None syntax)
  • An OpenAI account with API access and billing enabled
  • A terminal and a text editor (VS Code, etc.)

 

2.2. Get your API key and store it safely

  1. Create/Open your OpenAI account and enable billing.
  2. Generate an API key from the API keys section in the dashboard.
  3. Store it in an environment variable—never hard‑code it in your script.

 

Example using a .env file (for local dev only):

Bash:

# .env
OPENAI_API_KEY=sk-your-real-key-here

 

Then load it in Python with python-dotenv or via your shell environment.

 

3. Project skeleton and installation

 

Create a folder, e.g.:

Bash:

email-assistant/
├─ main.py
└─ .env

 

Install dependencies:

Bash:

pip install openai python-dotenv

The openai package is the official Python SDK for the ChatGPT API, and python-dotenv simply loads the variables defined in your .env file into the process environment.
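
If you want to sanity‑check your setup before writing any assistant logic, a throwaway script like the one below should print a short confirmation. This is a minimal sketch: the file name smoke_test.py, the model choice, and the test prompt are just placeholders.

Python:

# smoke_test.py: optional one-off check that the key and SDK install work
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # pull OPENAI_API_KEY from .env into the environment
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Reply with the single word: ready"}],
)
print(response.choices[0].message.content)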

 

4. Core logic: talking to the ChatGPT API

 

We’ll build three layers:

  1. Low-level client – handles authentication and API calls
  2. Prompting logic – how we ask the model to behave
  3. User interaction – reading an email and printing results

 

4.1. Basic client setup

Python:

# main.py

import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # load variables from .env into the environment

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

 

This pattern—loading the key from an environment variable and passing it to the client—is standard practice to avoid exposing secrets in code.
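
If you want an explicit, descriptive error when the key is missing, rather than relying on whatever the SDK raises, you can check for it at startup. A minimal sketch of that variant (the error message is just a suggestion):

Python:

# Optional: fail fast at startup if the key is missing.
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set. Add it to .env or your shell environment.")

client = OpenAI(api_key=api_key)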

 

5. Designing the AI behaviour with prompts

 

We’ll send a system message to define the assistant’s role and a user message containing the email and instructions.

 

5.1. Helper: summarize and extract actions

Python:

def analyze_email(email_text: str) -> dict:
    """
    Analyzes the email and returns the model's output under a 'raw_output' key.
    """
    prompt = f"""
You are an assistant for a busy project manager.
Given the email below, do the following:
1. Provide a concise summary in 3–5 bullet points.
2. List clear action items with who is responsible and any deadlines.

Email:
\"\"\"{email_text}\"\"\"
"""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a precise, structured email analysis assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.4,
    )
    # In the current SDK, the message content is an attribute, not a dict key.
    content = response.choices[0].message.content
    return {"raw_output": content}

 

Why this structure works:

  • System message sets the persona and style (precise, structured).
  • User message includes explicit numbered tasks, which tends to produce more predictable, structured output.
  • Low temperature (0.4) nudges the model toward consistency over creativity.

We’re returning raw_output for simplicity; you could later parse it into separate fields.
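
If you do want separate fields, one low‑tech option is to split the raw text on the headings the prompt asks for. The helper below is a rough heuristic sketch: split_analysis is a name invented here, and it assumes the model labels the second part with a heading containing the word "action".

Python:

def split_analysis(raw_output: str) -> dict:
    """Rough heuristic: split the model's text into summary and action-item lines."""
    summary_lines, action_lines = [], []
    in_actions = False
    for line in raw_output.splitlines():
        # Switch buckets once we reach a heading that mentions actions.
        if "action" in line.lower() and line.strip().endswith(":"):
            in_actions = True
            continue
        (action_lines if in_actions else summary_lines).append(line)
    return {
        "summary": "\n".join(summary_lines).strip(),
        "actions": "\n".join(action_lines).strip(),
    }

For more reliable structure, you could instead instruct the model to respond in JSON and parse the result with json.loads, at the cost of a slightly longer prompt.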

 

6. Drafting a reply in your tone

 

Now we’ll ask the model to write a reply as “Alex,” the project manager.

Python:

def draft_reply(email_text: str, summary: str | None = None) -> str:
    """
    Drafts a professional reply to the given email.
    Optionally uses a precomputed summary for context.
    """
    context_block = f"\nHere is a summary you can rely on:\n{summary}\n" if summary else ""
    prompt = f"""
You are Alex, a professional but friendly project manager.
Write a reply to the email below.

Goals:
- Be clear, concise, and polite.
- Acknowledge the sender's main points.
- Confirm any decisions or next steps.
- Ask clarifying questions only if necessary.

Email:
\"\"\"{email_text}\"\"\"
{context_block}
"""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write concise, professional business emails."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.6,
    )
    # As above, message content is accessed as an attribute.
    return response.choices[0].message.content

 

Logic choices:

  • Persona: “Alex, professional but friendly” gives the model a tone anchor.
  • Goals list: acts like a checklist the model tends to follow.
  • Optional summary: reduces the chance the model misses key points in very long emails.

 

7. Wiring it together: a simple CLI flow

 

Now we’ll add a main() that:

  1. Reads a multi‑line email from stdin
  2. Calls analyze_email
  3. Calls draft_reply
  4. Prints everything nicely

 

Python:

def main():
    print("Paste the email content below. End with an empty line:")
    lines = []
    while True:
        try:
            line = input()
        except EOFError:  # also stop cleanly if the email is piped in
            break
        if line.strip() == "":
            break
        lines.append(line)
    email_text = "\n".join(lines)

    print("\n--- Analyzing email ---\n")
    analysis = analyze_email(email_text)
    analysis_text = analysis["raw_output"]
    print(analysis_text)

    print("\n--- Drafting reply ---\n")
    reply = draft_reply(email_text, summary=analysis_text)
    print(reply)


if __name__ == "__main__":
    main()

 

This is intentionally minimal—no GUI, no database—so you can focus on the AI logic and prompts.
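
If pasting gets tedious during testing, a small variation reads the email from a file path passed on the command line and keeps pasting as the fallback. This is a sketch under the assumption of a plain‑text file; read_email is a helper name invented here, and main() would call it instead of running the input loop inline.

Python:

import sys

def read_email() -> str:
    """Read the email from a file given as the first CLI argument, or fall back to pasting."""
    if len(sys.argv) > 1:
        with open(sys.argv[1], encoding="utf-8") as f:
            return f.read()
    print("Paste the email content below. End with an empty line:")
    lines = []
    while True:
        try:
            line = input()
        except EOFError:
            break
        if line.strip() == "":
            break
        lines.append(line)
    return "\n".join(lines)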

 

8. Fictitious example: Alex and the demanding client

 

Let’s walk through a specific scenario.

 

8.1. The incoming email

Imagine Alex pastes this into the script:

 

Hi Alex,

Following up on the Q2 dashboard project. We still don’t have final confirmation on the data sources from your analytics team, and our launch date of May 15 is at risk.

We need:

  • A confirmed list of data sources by Friday
  • Clarification on whether marketing attribution data will be included
  • An updated timeline if May 15 is no longer realistic

Please let us know today how you plan to proceed so we can update our leadership.

Best,
Priya

 

8.2. What the analysis step might produce

A typical analyze_email output could look like:

  • Summary (3–5 bullets):
    • Client is concerned about delays in the Q2 dashboard project.
    • They lack confirmation on data sources from the analytics team.
    • The May 15 launch date may be at risk.
    • They need clarity on marketing attribution data and an updated timeline.
    • They require a same‑day response to inform their leadership.
  • Action items:
    • Alex to confirm the list of data sources by Friday.
    • Alex to confirm whether marketing attribution data will be included.
    • Alex to provide an updated timeline if May 15 is not feasible.
    • Alex to respond today with a plan so Priya can update leadership.

(Your actual output will vary, but this is the shape you’re aiming for.)

 

8.3. What the drafted reply might look like

The draft_reply function might generate something like:

 

Hi Priya,

Thanks for the detailed note and for flagging the timing concerns.

I’ve synced with our analytics team this morning. We’ll send you a confirmed list of data sources by Friday, and we are planning to include marketing attribution data as part of the initial release. If anything changes on that front, I’ll let you know immediately.

Based on current progress, we still believe May 15 is achievable. That said, I’ll review the latest dependencies today and share an updated timeline by tomorrow so you have a clear view before your leadership update.

I appreciate your patience and partnership on this.

Best,
Alex

You can tweak the tone by adjusting the system message and goals in the prompt.

 

9. Where to go next

 

Once this basic flow works, you can extend it:

  • Add a GUI or web front‑end (Flask, FastAPI, or a simple web form)
  • Log all analyses and replies for later review
  • Add “tone presets” (formal, casual, empathetic) via a command‑line flag (see the sketch after this list)
  • Batch process emails from a mailbox or CSV export
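
For example, the tone‑preset idea might look roughly like the sketch below. TONE_PRESETS and the --tone flag are invented here, and it assumes you extend draft_reply with an extra system_message parameter (the current version hard‑codes its system message).

Python:

import argparse

# Hypothetical presets mapping a tone name to a system message.
TONE_PRESETS = {
    "formal": "You write formal, precise business emails.",
    "casual": "You write relaxed, friendly emails that stay professional.",
    "empathetic": "You write warm, empathetic emails that acknowledge the sender's concerns.",
}

parser = argparse.ArgumentParser(description="AI email assistant")
parser.add_argument("--tone", choices=list(TONE_PRESETS), default="formal")
args = parser.parse_args()

# e.g. reply = draft_reply(email_text, summary=analysis_text,
#                          system_message=TONE_PRESETS[args.tone])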

The same pattern—clear system role, explicit instructions, and structured prompts—is used in many real‑world ChatGPT API applications, from summarizers to workflow assistants and smart bots.

 

Share in the comments if you have a preferred stack (e.g., Node instead of Python, or integrating with Outlook/Gmail), and we can evolve this into a more production‑ready flow tailored to your setup.

 

Written/published by AI Quantum Intelligence.