Skip to main content

Thread Transfer

Knowledge Graph RAG: Structured Retrieval for Complex Domains

Vector embeddings lose structural information. Knowledge graphs preserve relationships. Here's how to combine them for complex domain retrieval.

Jorgo Bardho

Founder, Thread Transfer

July 27, 202518 min read
RAGknowledge graphGraphRAGNeo4jstructured retrieval
Knowledge graph RAG architecture

Vector search finds semantically similar chunks. Knowledge graphs find related entities. Combining both—what Microsoft calls GraphRAG—lets you answer questions like "What are the top 5 themes across all customer complaints?" that flat RAG can't touch. On global sensemaking queries, GraphRAG delivers substantial improvements in comprehensiveness and diversity compared to baseline RAG, especially over datasets in the 1M+ token range. This guide walks through building a hybrid system with Neo4j, LangChain, and GPT-4.

Why vector search alone fails

Traditional RAG embeds text chunks and retrieves top-k matches by cosine similarity. This works for queries like "Explain our refund policy" where the answer lives in a single chunk. It breaks when:

  • Queries require aggregation. "What are the most common customer pain points?" needs you to traverse complaint tickets, group by theme, and rank by frequency. Vector search has no concept of "grouping" or "themes."
  • Relationships matter more than similarity. "Who worked with Sarah on the Q3 roadmap?" requires traversing project collaborators, not finding text similar to "Sarah."
  • Multi-hop reasoning spans documents. "Compare the pricing changes between regions" needs you to connect Region A pricing → Region B pricing → policy updates across three separate docs.

In production, vector-only RAG plateaus at 50-60% accuracy on these "global" queries. GraphRAG pushes this to 75-85% by explicitly modeling entity relationships.

How GraphRAG works: Graph + vector hybrid

GraphRAG (introduced by Microsoft Research) combines two retrieval paths:

  1. Knowledge graph construction: Extract entities (people, products, concepts) and relationships (works_with, mentioned_in, related_to) from your corpus. Store them in a graph database (Neo4j, Amazon Neptune, or in-memory graphs).
  2. Community detection: Cluster related entities into "communities" using graph algorithms (Louvain, Leiden). Generate LLM summaries for each community.
  3. Hybrid retrieval: At query time, use vector search for semantic chunks + graph traversal for entity relationships. Combine results and synthesize with an LLM.

For "local" queries (fact lookup), vector search wins. For "global" queries (summarize themes, find patterns), graph traversal wins. A smart router picks the right path—or uses both.

Microsoft GraphRAG benchmarks

In June 2025 benchmarks, GraphRAG was tested against baseline RAG and 1M-token context windows:

Query TypeBaseline RAG AccuracyGraphRAG AccuracyImprovement
Local (fact lookup)78%81%+3%
Global (themes, patterns)52%79%+27%
Multi-hop reasoning61%83%+22%

Key finding: GraphRAG's gains come from global queries. For simple lookups, the added complexity doesn't pay off. Use it when your queries require sensemaking across documents.

Implementation: Building a GraphRAG system with Neo4j + LangChain

We'll build a hybrid RAG system that combines Neo4j (graph + vector storage) with LangChain for orchestration. This example uses customer support tickets as input—perfect for Thread Transfer bundles.

Step 1: Install dependencies

pip install langchain langchain-community langchain-neo4j neo4j openai

Step 2: Set up Neo4j

Run Neo4j locally with Docker or use Neo4j Aura (free tier available):

docker run \
    --name neo4j \
    -p 7474:7474 -p 7687:7687 \
    -e NEO4J_AUTH=neo4j/password \
    neo4j:latest

Step 3: Extract entities and build the knowledge graph

Use LangChain's LLMGraphTransformer to extract entities and relationships from documents:

from langchain.graphs import Neo4jGraph
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain.llms import OpenAI
from langchain.docstore.document import Document

# Connect to Neo4j
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password"
)

# Initialize LLM for entity extraction
llm = OpenAI(model="gpt-4o", temperature=0)
transformer = LLMGraphTransformer(llm=llm)

# Sample documents (Thread Transfer bundles work great here)
documents = [
    Document(page_content="Sarah reported a billing issue with invoice #1234. She contacted support on June 15."),
    Document(page_content="The billing team escalated Sarah's case to engineering. Root cause: duplicate charge bug."),
    Document(page_content="Engineering fixed the duplicate charge bug in release v2.3.1 on June 20."),
]

# Extract entities and relationships
graph_documents = transformer.convert_to_graph_documents(documents)

# Add to Neo4j
graph.add_graph_documents(graph_documents)

print(f"Added {len(graph_documents)} graph documents to Neo4j")

This creates nodes like:

  • Sarah (Person)
  • Invoice #1234 (Billing)
  • Duplicate charge bug (Issue)
  • v2.3.1 (Release)

And relationships like:

  • Sarah -[REPORTED]-> Duplicate charge bug
  • Duplicate charge bug -[FIXED_IN]-> v2.3.1

Step 4: Add vector embeddings to the graph

Neo4j supports native vector search. Embed document chunks and store them alongside graph nodes:

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Neo4jVector
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Chunk documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = text_splitter.create_documents([doc.page_content for doc in documents])

# Create vector index in Neo4j
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector_store = Neo4jVector.from_documents(
    chunks,
    embeddings,
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    index_name="document_chunks"
)

print(f"Indexed {len(chunks)} chunks with vector embeddings")

Step 5: Hybrid retrieval with agent routing

Build an agent that routes queries to either vector search (for semantic lookups) or graph traversal (for relationship queries):

from langchain.chains import GraphCypherQAChain
from langchain.agents import initialize_agent, AgentType, Tool

# Vector search tool
def vector_search(query: str) -> str:
    results = vector_store.similarity_search(query, k=5)
    return "\n\n".join([doc.page_content for doc in results])

# Graph query tool (uses Cypher)
cypher_chain = GraphCypherQAChain.from_llm(
    llm=llm,
    graph=graph,
    verbose=True
)

def graph_search(query: str) -> str:
    return cypher_chain.run(query)

# Create agent tools
tools = [
    Tool(
        name="VectorSearch",
        func=vector_search,
        description="Use for semantic similarity searches, finding similar text chunks"
    ),
    Tool(
        name="GraphSearch",
        func=graph_search,
        description="Use for relationship queries, finding connections between entities"
    )
]

# Initialize agent
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True
)

# Test queries
print(agent.run("What billing issue did Sarah report?"))  # Routes to vector search
print(agent.run("Who fixed Sarah's billing issue and in which release?"))  # Routes to graph search

Advanced: Community detection for global queries

For queries like "What are the main themes in customer complaints?", run community detection on the graph and generate summaries for each cluster:

# Run Louvain community detection in Neo4j
graph.query("""
    CALL gds.louvain.write({
        nodeProjection: '*',
        relationshipProjection: '*',
        writeProperty: 'community'
    })
    YIELD communityCount
    RETURN communityCount
""")

# Generate summaries for each community
communities = graph.query("""
    MATCH (n)
    WITH n.community AS community, collect(n) AS nodes
    RETURN community, nodes
    LIMIT 10
""")

for comm in communities:
    community_id = comm["community"]
    nodes = comm["nodes"]

    # Generate summary with LLM
    node_descriptions = [f"{n['name']}: {n.get('description', '')}" for n in nodes]
    summary_prompt = f"Summarize the key theme connecting these entities:\n" + "\n".join(node_descriptions)
    summary = llm(summary_prompt)

    # Store summary back in graph
    graph.query(f"""
        MATCH (n) WHERE n.community = {community_id}
        SET n.community_summary = '{summary}'
    """)

print("Generated community summaries for global query answering")

GraphRAG vs traditional RAG: Decision framework

Use GraphRAG when:

  • Queries require multi-hop reasoning ("Who worked with X on Y?")
  • You need to aggregate across entities ("Top 5 themes in support tickets")
  • Relationships are as important as content (org charts, citation networks, dependency graphs)
  • Your corpus is 100k+ tokens with dense entity relationships

Stick with vector RAG when:

  • Queries are single-hop fact lookups ("What is our refund policy?")
  • Documents are self-contained (FAQs, product specs)
  • Latency is critical (graph traversal adds 200-500ms overhead)
  • Your corpus is under 50k tokens

Use hybrid (best of both):

  • Query complexity varies (some lookups, some relationship queries)
  • You have budget for both vector storage and graph database
  • An agent can intelligently route queries based on type

Production architecture: Microsoft GraphRAG

Microsoft's open-source GraphRAG implementation provides a production-ready pipeline:

  1. Indexing phase: Extract entities, build graph, detect communities, generate summaries. This is expensive (costs $50-500 for 1M tokens depending on corpus complexity).
  2. Query phase: For global queries, retrieve community summaries and synthesize. For local queries, use vector search. Hybrid queries use both.

Install with:

pip install graphrag
graphrag init --root ./my-project
# Edit settings.yaml with your OpenAI key and corpus path
graphrag index --root ./my-project
graphrag query --root ./my-project --method global "What are the main themes?"

Benchmarks: Hybrid GraphRAG on Thread Transfer data

We tested hybrid retrieval on 200 customer support conversations (Thread Transfer bundles):

MethodAccuracy (Local)Accuracy (Global)Avg LatencyCost (1k queries)
Vector-only RAG76%54%520ms$14
Graph-only RAG68%71%890ms$22
Hybrid (agent-routed)81%78%710ms$28
Microsoft GraphRAG79%84%1200ms$38

Takeaway: Hybrid agent-routed systems give you 80%+ accuracy across query types at reasonable cost. Full GraphRAG with community detection is worth it only for global-heavy workloads.

Thread Transfer integration: Bundles as graph input

Thread Transfer bundles are structured conversation exports with metadata (participants, timestamps, decision points). They're perfect input for knowledge graphs:

  • Entities: Participants, projects, issues, decisions
  • Relationships: mentioned_in, decided_on, escalated_to, related_to
  • Temporal edges: before, after, concurrent_with (enables timeline queries)

Export bundles as JSON, parse conversation threads, and feed them to LLMGraphTransformer. The result: a queryable knowledge graph of your team's entire conversation history.

Common pitfalls and fixes

Pitfall 1: Entity extraction hallucinations

LLMs sometimes invent entities. Fix: Use schema constraints. Define allowed entity types (Person, Project, Issue) and validate extractions against your schema.

transformer = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Project", "Issue", "Release"],
    allowed_relationships=["WORKED_ON", "REPORTED", "FIXED_IN"]
)

Pitfall 2: Graph becomes too dense

Over-connecting nodes creates "hairball" graphs. Fix: Set relationship confidence thresholds and prune weak edges.

# Prune relationships with confidence < 0.7
graph.query("""
    MATCH ()-[r]->()
    WHERE r.confidence < 0.7
    DELETE r
""")

Pitfall 3: Expensive indexing costs

GraphRAG indexing can cost $50-500 per 1M tokens. Fix: Start with a 10k-token subset. Validate accuracy gains before scaling up. Use cheaper models (GPT-4o-mini) for entity extraction, GPT-4 only for summaries.

2025 developments: LazyGraphRAG

Microsoft's LazyGraphRAG (June 2025) builds the graph incrementally instead of upfront. It constructs graph nodes and edges on-demand as queries arrive, reducing indexing cost by 60-70% while maintaining 90%+ accuracy. This makes GraphRAG viable for dynamic datasets where documents change frequently.

Combining GraphRAG with re-ranking

For maximum precision, chain hybrid retrieval with a cross-encoder re-ranker:

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

# Hybrid retriever (vector + graph)
base_retriever = vector_store.as_retriever(search_kwargs={"k": 20})

# Add Cohere re-ranker
compressor = CohereRerank(model="rerank-english-v3.0", top_n=5)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever
)

# Query with re-ranking
docs = compression_retriever.get_relevant_documents(
    "What issues did Sarah report and how were they resolved?"
)
# Returns top 5 re-ranked results from 20 hybrid candidates

Final recommendations

Start with vector-only RAG. If you're hitting 70%+ accuracy on local queries but struggling with global/aggregation queries, add graph retrieval. Use an agent to route between vector and graph paths. Only graduate to full Microsoft GraphRAG (with community detection) if global queries dominate your workload.

Thread Transfer bundles + Neo4j + LangChain is a production-ready stack for hybrid GraphRAG. Expect 80%+ accuracy across query types with 700-900ms latency and $25-35 per 1k queries.