A compact LangGraph example (TypeScript and Python) that binds a ChatOpenAI model to Product Hunt-inspired tools, loops until tool calls are complete, and streams every graph state. Both servers expose /kickoff as NDJSON with tool call events.

What You’ll Build

  • A LangGraph with assistant + tool executor nodes.
  • Two tools: list_top_posts (sorted by votes) and search_launches (keyword/topic search).
  • Streaming runs keyed by thread_id, kept in sync via MemorySaver.
  • A template you can expose over HTTP/SSE to connect with CometChat’s AI Agents.

Prerequisites

  • TypeScript: Node.js 18+ (Node 20 recommended); OPENAI_API_KEY in .env (optional PRODUCT_OPENAI_MODEL, default gpt-4o-mini).
  • Python: Python 3.10+; OPENAI_API_KEY in .env (optional MODEL, default gpt-4o-mini).
  • CometChat app + AI Agent entry.


How it works

  • Tools: list_top_posts and search_launches live in src/graph.ts, backed by helpers in src/data/search.ts. Both return markdown bullets so the assistant can cite results directly.
  • Graph: StateGraph(MessagesAnnotation) alternates between the assistant node and the tools node based on shouldCallTools; tool outputs are fed back as ToolMessage objects.
  • State: MemorySaver checkpoints per configurable.thread_id let you run multi-turn conversations on the same graph instance.
  • Streaming: app.stream emits state snapshots (streamMode: "values"). The console demo prints each message as the model calls tools and drafts the response.
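
The assistant/tools loop hinges on the routing check. The sketch below shows the shape of that decision in plain TypeScript; the type names and exact message shape are illustrative, and the repo's shouldCallTools in src/graph.ts may differ in detail:

```typescript
// Illustrative message shapes (the real graph uses LangChain message classes).
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type AssistantMessage = { role: "assistant"; content: string; tool_calls?: ToolCall[] };

// After each assistant turn, route to the tools node if the last message
// requested any tool calls; otherwise end the run.
function shouldCallTools(state: { messages: AssistantMessage[] }): "tools" | "__end__" {
  const last = state.messages[state.messages.length - 1];
  return last?.tool_calls && last.tool_calls.length > 0 ? "tools" : "__end__";
}
```

Because the tools node always loops back to the assistant, the graph keeps executing tools until the model produces a reply with no tool_calls.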

Setup (TypeScript)

1. Install: cd typescript/langgraph-product-hunt-agent && npm install
2. Env: Copy ../.env.example to .env; set OPENAI_API_KEY (optional PRODUCT_OPENAI_MODEL).
3. Run demo: npm run demo — “Top Product Hunt style launches right now?”
4. Run server: npm run server exposes POST /kickoff on http://localhost:3000.

Setup (Python)

1. Install: cd python && python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
2. Env: Create .env with OPENAI_API_KEY (optional MODEL).
3. Run server: python -m langgraph_product_hunt_agent.server exposes POST /kickoff on http://localhost:8000.

Project structure


Step 1 - Understand the agent tools

buildProductHuntGraph binds both tools to ChatOpenAI (temperature 0 by default). The graph checks each assistant reply for tool_calls, routes to the tools node to execute them, then loops back until there are none left.
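
The tools themselves just return markdown the model can quote. Here is a self-contained sketch of the list_top_posts logic: the sample data, field names, and function name are hypothetical stand-ins for the helpers in src/data/search.ts:

```typescript
// Hypothetical launch records; the repo's real data lives in src/data/.
type Launch = { name: string; tagline: string; votes: number };

const LAUNCHES: Launch[] = [
  { name: "PixelKit", tagline: "Design tokens for everyone", votes: 812 },
  { name: "AgentFlow", tagline: "Visual LLM pipelines", votes: 1204 },
  { name: "NoteWise", tagline: "AI meeting notes", votes: 530 },
];

// Sort by votes (descending) and render markdown bullets the assistant can cite.
function listTopPosts(limit = 5): string {
  return [...LAUNCHES]
    .sort((a, b) => b.votes - a.votes)
    .slice(0, limit)
    .map((l) => `- **${l.name}** (${l.votes} votes): ${l.tagline}`)
    .join("\n");
}
```

Returning pre-formatted bullets (rather than raw JSON) keeps the assistant's citations consistent, since it can paste tool output into its reply nearly verbatim.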

Streaming API (HTTP)

Event order (TypeScript and Python servers): text_start → text_delta chunks → tool_call_start → tool_call_args → tool_call_end → tool_result → text_end → done (error on failure). Each event includes message_id; echo thread_id/run_id from the client if you want threading. Example requests:
# TypeScript (localhost:3000/kickoff)
curl -N http://localhost:3000/kickoff \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"What are the top Product Hunt style launches right now?"}]}'

# Python (localhost:8000/kickoff)
curl -N http://localhost:8000/kickoff \
  -H "Content-Type: application/json" \
  -d '{"thread_id":"t1","run_id":"r1","messages":[{"role":"user","content":"What are the top Product Hunt style launches right now?"}]}'
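
Since the response is NDJSON, a client just splits the byte stream on newlines and JSON-parses each complete line. A minimal sketch of that chunk handling (the event field names beyond type and message_id are assumptions about the payload):

```typescript
// Shape of one streamed event; extra fields vary by event type.
type StreamEvent = { type: string; message_id?: string; [key: string]: unknown };

// Split a buffered chunk of the response body into parsed events, returning
// any trailing partial line so the caller can prepend it to the next chunk.
function parseNdjsonChunk(buffer: string): { events: StreamEvent[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // last element is a partial (or empty) line
  const events = lines
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as StreamEvent);
  return { events, rest };
}
```

In a real client you would call this inside a fetch ReadableStream loop, carrying rest between reads and dispatching on event.type (text_delta, tool_result, done, …).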