This LangGraph sample wires a ChatOpenAI model to a search_docs tool, streams intermediate graph state, and shows how to reuse the same thread across turns. Both TypeScript and Python servers expose a /kickoff NDJSON stream that CometChat can consume.

What You’ll Build

  • A LangGraph state machine with an assistant node and a tool node.
  • An in-memory knowledge base plus a search_docs tool that returns cited bullets.
  • Streaming runs that keep history via the built-in MemorySaver checkpointer.
  • A starting point you can wrap in HTTP + SSE for CometChat’s Bring Your Own Agent flow.

Prerequisites

  • TypeScript: Node.js 18+ (Node 20 recommended); OPENAI_API_KEY in .env (optional KNOWLEDGE_OPENAI_MODEL, default gpt-4o-mini).
  • Python: Python 3.10+; OPENAI_API_KEY in .env (optional MODEL, default gpt-4o-mini).
  • CometChat app + AI Agent entry.


How it works

  • Graph: StateGraph(MessagesAnnotation) adds an assistant node (ChatOpenAI bound to tools) and a tools node (executes tool calls), with conditional edges that loop until no more tool calls are requested.
  • Tooling: search_docs (in src/graph.ts) calls searchDocs over a small mock corpus (src/data/corpus.ts) and formats matches as markdown bullets for citations.
  • State: MemorySaver checkpoints are keyed by configurable.thread_id, so multiple turns share context.
  • Streaming: app.stream(..., { streamMode: "values" }) yields incremental states; each message printed in src/index.ts shows the graph progressing through tool calls and replies. A minimal consumption sketch follows this list.
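
A minimal sketch of that streaming loop (assuming buildKnowledgeGraph in src/graph.ts returns the compiled app; the question and thread_id values here are arbitrary):

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { buildKnowledgeGraph } from "./graph"; // assumed to return the compiled graph

const app = buildKnowledgeGraph();

// streamMode: "values" yields the full state after each step, so every
// iteration exposes the newest message (assistant text, tool call, or tool result).
const stream = await app.stream(
  { messages: [new HumanMessage("How do I stream intermediate results?")] },
  { configurable: { thread_id: "demo-thread" }, streamMode: "values" }
);

for await (const state of stream) {
  const last = state.messages[state.messages.length - 1];
  console.log(`${last._getType()}: ${JSON.stringify(last.content)}`);
}
```

Calling app.stream again with the same thread_id resumes the history MemorySaver checkpointed, which is how multiple turns share context.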

Setup (TypeScript)

1. Install: cd typescript/langgraph-knowledge-agent && npm install
2. Env: copy ../.env.example to .env; set OPENAI_API_KEY (optional KNOWLEDGE_OPENAI_MODEL).
3. Run demo: npm run demo asks "How do I stream intermediate results?" and streams tool calls + replies to stdout.
4. Run server: npm run server exposes POST /kickoff on http://localhost:3000.

Setup (Python)

1. Install: cd python && python -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
2. Env: create .env with OPENAI_API_KEY (optional MODEL).
3. Run server: python -m langgraph_knowledge_agent.server exposes POST /kickoff on http://localhost:8000.

Project structure


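An indicative layout, reconstructed from the paths mentioned in this guide (not exhaustive):

```
typescript/langgraph-knowledge-agent/
  src/
    graph.ts          # buildKnowledgeGraph + search_docs tool
    index.ts          # demo: streams graph state to stdout
    data/corpus.ts    # mock knowledge base
python/
  langgraph_knowledge_agent/
    server.py         # POST /kickoff NDJSON server
```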
Step 1 - Inspect the LangGraph

buildKnowledgeGraph (in src/graph.ts) binds search_docs to ChatOpenAI, routes every assistant reply through shouldCallTools, and executes tool calls via runTools. The model runs with temperature 0 and defaults to gpt-4o-mini unless you override KNOWLEDGE_OPENAI_MODEL.
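
A minimal sketch of that wiring (the tool body is stubbed, and the prebuilt ToolNode stands in for the repo's runTools helper; the real search_docs formats corpus matches as cited bullets):

```typescript
import { StateGraph, MessagesAnnotation, MemorySaver, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { AIMessage } from "@langchain/core/messages";
import { z } from "zod";

// Stub for the repo's search_docs tool; the real one searches the mock corpus.
const searchDocs = tool(
  async ({ query }) => `- No results for "${query}" (stub)`,
  {
    name: "search_docs",
    description: "Search the knowledge base and return cited bullets.",
    schema: z.object({ query: z.string() }),
  }
);

const model = new ChatOpenAI({
  model: process.env.KNOWLEDGE_OPENAI_MODEL ?? "gpt-4o-mini",
  temperature: 0,
}).bindTools([searchDocs]);

// Assistant node: one model call, appended to message history.
const assistant = async (state: typeof MessagesAnnotation.State) => ({
  messages: [await model.invoke(state.messages)],
});

// Route back to tools while the last reply still requests tool calls.
const shouldCallTools = (state: typeof MessagesAnnotation.State) => {
  const last = state.messages[state.messages.length - 1] as AIMessage;
  return last.tool_calls?.length ? "tools" : END;
};

export const buildKnowledgeGraph = () =>
  new StateGraph(MessagesAnnotation)
    .addNode("assistant", assistant)
    .addNode("tools", new ToolNode([searchDocs]))
    .addEdge(START, "assistant")
    .addConditionalEdges("assistant", shouldCallTools, ["tools", END])
    .addEdge("tools", "assistant")
    .compile({ checkpointer: new MemorySaver() });
```

The conditional edge is what produces the assistant → tools → assistant loop until the model stops requesting tools.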

Streaming API (HTTP)

Event order (both TypeScript and Python servers): text_start → text_delta chunks → tool_call_start → tool_call_args → tool_call_end → tool_result → text_end → done (error on failure). Each event includes message_id; echo thread_id/run_id from the client if you want threading. Example requests:
# TypeScript (localhost:3000/kickoff)
curl -N http://localhost:3000/kickoff \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"How do I stream intermediate results from LangGraph?"}]}'

# Python (localhost:8000/kickoff)
curl -N http://localhost:8000/kickoff \
  -H "Content-Type: application/json" \
  -d '{"thread_id":"t1","run_id":"r1","messages":[{"role":"user","content":"How do I stream intermediate results from LangGraph?"}]}'
messages must be non-empty; invalid payloads return 400.
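
A minimal NDJSON consumer for this stream (a sketch for Node 18+; the payload fields beyond the event type are assumptions, so adjust them to what your server actually emits):

```typescript
// POST to /kickoff and read the NDJSON response line by line.
const res = await fetch("http://localhost:3000/kickoff", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    thread_id: "t1",
    messages: [{ role: "user", content: "How do I stream intermediate results?" }],
  }),
});

const decoder = new TextDecoder();
let buffer = "";
for await (const chunk of res.body!) {
  buffer += decoder.decode(chunk as Uint8Array, { stream: true });
  let newline: number;
  while ((newline = buffer.indexOf("\n")) >= 0) {
    const line = buffer.slice(0, newline).trim();
    buffer = buffer.slice(newline + 1);
    if (!line) continue;
    const event = JSON.parse(line);
    if (event.type === "text_delta") process.stdout.write(event.text ?? ""); // field name assumed
    if (event.type === "done" || event.type === "error") console.log(`\n[${event.type}]`);
  }
}
```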

Adapt for CometChat

  • Point CometChat BYO Agent at your public /kickoff endpoint (TypeScript or Python).
  • Parse the NDJSON events; render text_delta streaming, show tool_call_* steps if desired, and stop on text_end/done.
  • Keep OPENAI_API_KEY (and any model overrides) server-side; add auth headers on the route if needed.
  • Swap the mock search_docs implementation with your own retrieval layer while keeping the same tool signature, as sketched below.
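
For that last step, a sketch of a drop-in replacement (myRetriever is a hypothetical retrieval client; keep the tool's name, description, and schema so the rest of the graph is untouched):

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Hypothetical retrieval client; replace with your vector store or search API.
declare function myRetriever(
  query: string
): Promise<{ title: string; url: string; snippet: string }[]>;

export const searchDocs = tool(
  async ({ query }) => {
    const hits = await myRetriever(query);
    if (hits.length === 0) return "No matching documents found.";
    // Same output contract as the mock: markdown bullets the model can cite.
    return hits.map((h) => `- ${h.title}: ${h.snippet} (${h.url})`).join("\n");
  },
  {
    name: "search_docs", // keep the name the graph already binds
    description: "Search the knowledge base and return cited bullets.",
    schema: z.object({ query: z.string() }),
  }
);
```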