
How an AI infra startup used Linkeddit MCP to find 40 production-grade design partners without a single SDR

Vectorwave
Seed-stage AI infra startup

  • Linkeddit MCP via Claude Desktop + Cursor
  • Engineer-to-engineer reply automation
  • get_user_comments for workload qualification
  • 40 design partner signups
  • 9 days avg. lead → first call
  • $0 paid ad spend
  • 87% qualification-to-call rate

Vectorwave builds vector-database infrastructure for production LLM workloads. The founders are engineers; they wanted a go-to-market motion that required zero 'marketing voice.' Linkeddit MCP inside Cursor gave them exactly that — a keyboard shortcut away from qualifying a Reddit lead with real engineering depth.

Background

The problem with every vector-DB startup pitch is the same: most replies are from hobbyists, not teams with production load. Vectorwave needed design partners with >50M embeddings and real latency constraints. Those engineers are on Reddit but don't answer cold email. The bet was that a genuine engineering reply on a technical thread would convert better than any outbound motion.

The problem

LinkedIn sequences had produced 4 calls in 6 weeks. Every call was with someone who'd read a blog post, not someone who had a vector DB problem. The team was burning runway on meetings that went nowhere. They needed qualification to happen before the call — ideally visible from the lead's own Reddit comment history.

Pipeline configuration

Vectorwave runs 2 Linkeddit pipelines. Each one is scoped to a narrow set of subreddits and keyword patterns so the lead queue never turns into noise.

Vector DB pain at scale

Subreddits
r/MachineLearning, r/LocalLLaMA, r/LangChain
Refresh cadence
Refreshed every 45 minutes
Keywords
"vector DB at scale", "pinecone too expensive", "weaviate latency", "embedding pipeline falling over", "chromadb production", "self-host vector database"
Filters
  • Contactability score ≥ 70
  • OP must have ≥ 3 comments in ML-related subreddits in last 6 months
  • Thread must contain a specific scale indicator (rows, QPS, latency numbers)

RAG / agent infrastructure pain

Subreddits
r/AI_Agents, r/LangChain, r/dataengineering
Refresh cadence
Refreshed every 2 hours
Keywords
"RAG not scaling", "reranker latency", "hybrid search production", "agent eval framework"
Filters
  • Contactability score ≥ 65
  • OP account karma ≥ 500
  • Thread score ≥ 10
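The case study doesn't show Linkeddit's actual filter implementation, but the 'specific scale indicator' filter from the first pipeline is concrete enough to sketch. Here is a minimal, assumed version in Python: keep a thread only if it mentions real row counts, QPS, or latency numbers (the regex patterns and function name are illustrative, not Linkeddit's code):

```python
import re

# Hypothetical patterns for "specific scale indicator (rows, QPS, latency
# numbers)" — assumptions for illustration, not Linkeddit's actual filter.
SCALE_PATTERNS = [
    r"\b\d+(?:\.\d+)?\s*[MBk]\+?\s*(?:embeddings|vectors|rows|docs)\b",  # e.g. "300M embeddings"
    r"\b\d+(?:\.\d+)?k?\s*QPS\b",                                        # e.g. "1200 QPS"
    r"\bP9[59]\s*(?:is|of|at)?\s*\d+(?:\.\d+)?\s*m?s\b",                 # e.g. "P99 is 2.1s"
]

def has_scale_indicator(text: str) -> bool:
    """True if the thread text contains a concrete scale or latency number."""
    return any(re.search(p, text, re.IGNORECASE) for p in SCALE_PATTERNS)
```

A gate like this is what keeps the lead queue from filling with hobbyist threads that mention the right keywords but no production numbers.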

AI Content Writer workflow

  1. Content Writer is tuned to 'senior engineer reviewing a PR' — technical, specific, willing to admit tradeoffs.
  2. Every draft includes: (a) a concrete architectural observation about the OP's described setup, (b) a numerical benchmark (even a ballpark), (c) an offer to jump on a call only if the OP explicitly asks.
  3. Drafts that mention Vectorwave in the first paragraph are auto-rejected.
  4. All drafts are reviewed by Daniel or the other co-founder before posting — no AI auto-post, ever.
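Two of the rules above (the numerical-benchmark requirement and the first-paragraph auto-reject) are mechanical enough to express as code. A minimal sketch of that review gate, assuming plain-text drafts with blank-line paragraph breaks (the function and its logic are illustrative, not the actual Content Writer implementation):

```python
def check_draft(draft: str, brand: str = "Vectorwave") -> list[str]:
    """Return the list of reasons a draft should be auto-rejected (empty = pass).

    Assumed logic, not Linkeddit's actual implementation.
    """
    problems = []
    paragraphs = [p for p in draft.split("\n\n") if p.strip()]
    # Rule 3: any brand mention in the first paragraph is an auto-reject.
    if paragraphs and brand.lower() in paragraphs[0].lower():
        problems.append("brand mentioned in first paragraph")
    # Rule 2(b): every draft must contain at least one number —
    # a benchmark, even a ballpark.
    if not any(ch.isdigit() for ch in draft):
        problems.append("no numerical benchmark")
    return problems
```

Even with automated checks like these, the human review step in rule 4 stays in place: the gate only filters drafts, it never posts them.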

Linkeddit MCP + AI integration

Vectorwave's team lives in Cursor and Claude Desktop. Linkeddit MCP is wired into both, so qualifying a lead and drafting a reply happens without leaving the IDE.

Linkeddit MCP tools used
  • search_leads — surfaces scored leads
  • get_user_comments — pulls the OP's last 50 comments to assess real production context
  • get_user_posts — checks if they've posted benchmarks or architecture writeups elsewhere
  • fetch_post_comments — reads existing replies to avoid repetition
External MCPs connected
  • Linear MCP — creates a 'Design Partner Prospect' issue with all qualification context attached
  • Calendly MCP — generates a single-use invite link when the OP asks for a call
  • Notion MCP — appends the thread URL + qualification notes to the partners database
Example Claude prompt
For lead id <id>, use get_user_comments to pull the last 50 comments. Tell me: (1) are they working on production workloads or a side project? (2) what stack are they using? (3) what's their likely embedding volume? (4) is there any signal they're at a company with budget? Then draft a reply that addresses their specific comment's architectural flaw.
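The first question in that prompt — production workload or side project? — can be approximated with simple heuristics over the comment bodies that get_user_comments returns. A hedged sketch (the signal words and the `{"body": ...}` comment shape are assumptions, not the Linkeddit MCP response format, and the real qualification is done by Claude, not a word count):

```python
# Illustrative production-vs-hobbyist signals; not an exhaustive list.
PRODUCTION_SIGNALS = ("prod", "p99", "qps", "on-call", "sla", "incident")
HOBBY_SIGNALS = ("side project", "hobby", "weekend", "just for fun")

def classify_workload(comments: list[dict]) -> str:
    """Label a commenter as 'production', 'hobbyist', or 'unknown'
    based on signal words across their recent comment bodies."""
    text = " ".join(c.get("body", "") for c in comments).lower()
    prod = sum(text.count(s) for s in PRODUCTION_SIGNALS)
    hobby = sum(text.count(s) for s in HOBBY_SIGNALS)
    if prod > hobby:
        return "production"
    if hobby > prod:
        return "hobbyist"
    return "unknown"
```

The point of asking an LLM instead of running a heuristic like this is the nuance: 'we shard by tenant' is a production signal no keyword list catches.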

Want to run this workflow yourself? Set up the Linkeddit MCP server or connect via the Claude connector.

Daily rhythm

  • Morning — Daniel runs a single Claude prompt: 'Any new MachineLearning leads overnight with score ≥ 80?'
  • Mid-morning — Qualifies 2–4 leads, drafts replies, posts from personal account.
  • Afternoon — If a reply gets a response, Claude drafts the follow-up DM with Vectorwave benchmark data inline.
  • End of week — Linear export: which prospects moved to 'scoping,' which went cold, which converted.

Thread breakdown

A senior engineer in r/MachineLearning posted: 'Running 300M embeddings on Weaviate, P99 is 2.1s and my VPs are going to kill me.' The pipeline surfaced it 8 minutes after posting. Daniel's reply included a specific shard-rebalancing suggestion, admitted Weaviate's design tradeoff, and mentioned Vectorwave only as 'we've benchmarked this specific case — happy to share the numbers if useful.' The OP DMed within 2 hours, became a design partner 11 days later, and is now a paying production customer.

Subreddits monitored

r/MachineLearning, r/LocalLLaMA, r/LangChain, r/AI_Agents, r/dataengineering

Results

  • 40 design partners in 3 months, all with real production workloads verified via comment history.
  • Qualification-to-first-call rate: 87% (vs 23% on outbound).
  • Average lead-to-call time: 9 days, down from 31 on cold email.
  • Zero paid acquisition — total GTM spend for the quarter was the founders' time and Linkeddit subscription.

Lessons

  1. get_user_comments is the most valuable MCP call in the stack. It's the difference between replying to a real engineer and a hobbyist.
  2. Never auto-post. Engineering Reddit can smell AI replies in one sentence.
  3. Let the OP ask for the call. The 87% qualification rate exists because the prospect self-qualifies by asking.

I hate sales. What I love is debugging someone's architecture in Cursor. Linkeddit MCP basically let me turn sales into debugging. Claude pulls the thread, get_user_comments qualifies them, I draft a reply, and if they bite, Linear creates the partner record. It's the first GTM motion I haven't resented.

Daniel Osei, Co-founder & CTO, Vectorwave

Run the AI-infra founder playbook

Same stack, same playbook. Free to start, under 5 minutes to set up.
