I Ran a Manual Sequence Autopsy on 800 Contacts and Found Step 2 Drove 64% of All Positive Replies — I Spent 4 Hours in Google Sheets

#youtube #marketing #automation #socialmedia
Vhub Systems

Six weeks. 800 contacts. 14 positive replies. 1.75% positive reply rate — which, honestly, felt fine. Sequences run, numbers come back, you move on.


Then the SDR manager exported the CSV.

Two hours of pivot tables later, the real picture: Step 2 drove 9 of the 14 positive replies. Sixty-four percent of all positive replies, buried in the middle of a six-step sequence. Steps 4, 5, and 6 — 300 combined sends — generated exactly zero positive replies. They were adding deliverability risk, burning unsubscribe budget, and damaging domain health. For zero return.

The sequence got cut to three steps. Step 2 became the lead value message. The next run hit 3.1% positive reply rate.

But here's the part that matters: that analysis took four hours. It happened once. The SDR manager has 11 other active sequences running right now, and none of them have ever been analyzed at step level. Every design decision for those sequences is based on gut feel.

That is the actual problem.


The Sequence Analytics Black Box: You Know Your Overall Reply Rate, But Not Which Step Is Driving It

Every sequencing tool — Apollo, Lemlist, Outreach, Salesloft — gives you the same basic dashboard: total sends, total opens, total replies, meetings booked. Sequence-level totals. The aggregate.

What they don't give you automatically: which step is generating the replies.

Three structural failures compound this:

Aggregate-only reporting. Apollo's standard dashboard shows sequence totals. Getting per-step metrics requires exporting the full activity log CSV and building pivot tables manually in Google Sheets. For one sequence, that's an hour. For twelve sequences running simultaneously, it's a recurring six-hour job that doesn't get done.

No cross-sequence aggregation. If you want to know which step position — first touch, second follow-up, breakup email — performs best across all your sequences, there is no native report for this. Each sequence is an island. The insight that "Step 2 consistently outperforms Step 1 across all our sequences" requires analyzing all sequences simultaneously, which the tool simply doesn't do.

No automated alerting. A sequence developing a deliverability problem at Step 1 — open rate dropping from 48% to 22% — generates no alert. You discover it when you manually check the dashboard, which happens at most monthly. By then, the sequence has burned 500 additional contacts on a failing sending domain.

The step-level data exists in your sequencing tool's API. This is a data routing problem, not a data collection problem.


What Step-Level Analytics Actually Reveals

The gap between aggregate metrics and step-level data isn't cosmetic. It changes what you optimize.

The dead weight sequence. A 6-step sequence shows 1.75% positive reply rate overall. Step-level breakdown:

| Step | Sends | Open Rate | Positive Reply Rate |
|------|-------|-----------|---------------------|
| Step 1 — Cold intro | 800 | 44% | 0.6% |
| Step 2 — Value follow-up | 680 | 38% | 1.2% |
| Step 3 — Case study | 540 | 32% | 0.2% |
| Step 4 — New angle | 380 | 28% | 0.0% |
| Step 5 — Objection handler | 210 | 19% | 0.0% |
| Step 6 — Breakup | 150 | 15% | 0.0% |

Steps 4, 5, and 6 combined: 740 sends. Zero positive replies. Twenty-seven percent of total send volume (740 of 2,760 sends) contributing nothing to conversion. Cut to three steps and reallocate that volume to higher-performing sequences.

The messaging gap signal. A sequence has 51% open rate at Step 1 but 0.4% positive reply rate. Open-to-reply conversion: 0.8% against an industry average of 4–6%. The subject line is working. The body is not converting. The fix is the body copy, not the subject line — but without step-level analytics, teams test different subject lines for months, optimizing the wrong variable.

The breakup email anomaly. Step 6 has a 4.2% positive reply rate — higher than Steps 2–5 combined. This is a real pattern in B2B outbound: the "I won't bother you again" framing triggers replies from contacts who were interested but hadn't responded. Actionable: promote the breakup email to Step 3. Capture that conversion earlier and cut sequence length by three steps.

None of these insights are visible in the aggregate dashboard.


Why Apollo, Outreach, and Salesloft Analytics All Fail the Same Way

"I manage 6 SDRs each running 3–4 sequences in Apollo. Our overall positive reply rate is 2.1%. I have no idea which step in which sequence is driving that number. I did a manual analysis last quarter and it took me 4 hours in Google Sheets. Step 2 of our 'mid-market CTO' sequence drove 71% of all positive replies — but I only found that out once. I need this running automatically every week. Is there an n8n workflow that queries Apollo's API and calculates per-step reply rates across all active sequences?"

Every tool at this stack level fails the same way:

Apollo ($49–$99/user/month): Per-step breakdown requires manual CSV export. For 10+ active sequences: 2–6 hours per analysis run. Apollo's API does expose per-step cadence data via /v1/emailer_campaigns and /v1/emailer_steps — but nothing reads it automatically and routes it to a weekly Slack digest.

Outreach / Salesloft ($100–$175/user/month): More sophisticated reporting, including step analytics in the UI — but step-level data is 3–4 clicks deep and isn't pushed to you. Cross-sequence aggregation isn't a native feature. And the price point excludes the target buyer: $1M–$5M ARR teams predominantly running Apollo or Lemlist.

Manual spreadsheet analysis (free, 2–6 hours/run): The universal fallback. Export CSV → import to Google Sheets → pivot table by sequence + step → compute metrics → share report. Problems: happens monthly at best; the SDR manager is doing data assembly instead of coaching SDRs; when the SDR manager is out, the analysis simply doesn't happen.

"We run 8 outbound sequences in Lemlist. We have no visibility into which step generates replies. Our gut says Step 1 and Step 2 matter most, but I've seen breakup emails at Step 6 generate more replies than the first touchpoint in some sequences. We're making sequence design decisions based on intuition. I need a dashboard or a weekly Slack alert that shows me: for each sequence, here's the reply rate at each step, here's which step has the highest positive reply rate, here's which step has an open-to-reply conversion gap (high open, low reply = messaging problem). Does this exist?"

It does now.


The Architecture: Apollo API + n8n + Google Sheets + Slack Weekly Digest

A six-component workflow that runs automatically and delivers action items every Monday morning.

Component 1 — Daily sequence data pull (7am). n8n scheduled trigger → Apollo API: GET /v1/emailer_campaigns (all active sequences) → for each sequence: GET /v1/emailer_steps (per-step data: sends, opens, replies, positive_replies, meetings_booked). Raw step data stored in Google Sheets: sequence_id, step_number, step_type, sends, opens, replies, positive_replies, date. Lemlist and Salesloft variant branches included.
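The pull itself is just an HTTP node; the only real logic is flattening each sequence's per-step payload into the sheet's row schema. A minimal sketch, assuming a `steps` array with the field names used above (the article's schema, not necessarily Apollo's exact response field names):

```javascript
// Flatten one sequence's per-step data into Google Sheets rows.
// Field names (type, sends, opens, replies, positive_replies) follow
// the schema described in the article; the real API payload may differ.
function toSheetRows(sequenceId, steps, date) {
  return steps.map((step, i) => ({
    sequence_id: sequenceId,
    step_number: i + 1,          // step position within the sequence
    step_type: step.type,
    sends: step.sends,
    opens: step.opens,
    replies: step.replies,
    positive_replies: step.positive_replies,
    date: date,
  }));
}
```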

Component 2 — Metric computation (n8n Code node). For each step: open_rate = opens/sends, reply_rate = replies/sends, positive_reply_rate = positive_replies/sends, open_to_reply_conversion = replies/opens. Four anomaly flags assigned automatically:

  • deliverability_flag: open_rate < 20%
  • messaging_gap_flag: open_rate > 40% AND positive_reply_rate < 0.5%
  • best_step_flag: positive_reply_rate ≥ 3× sequence average
  • dead_step_flag: sends > 50 AND positive_reply_rate = 0
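The whole computation fits in one small function. A sketch of the Code node logic with the default thresholds above (input field names assumed to match the sheet schema; a zero-send guard keeps a just-launched step from tripping the deliverability flag):

```javascript
// Per-step metrics plus the four anomaly flags described above.
function computeStepMetrics(step, sequenceAvgPositiveRate) {
  const open_rate = step.sends > 0 ? step.opens / step.sends : 0;
  const reply_rate = step.sends > 0 ? step.replies / step.sends : 0;
  const positive_reply_rate =
    step.sends > 0 ? step.positive_replies / step.sends : 0;
  const open_to_reply_conversion =
    step.opens > 0 ? step.replies / step.opens : 0;
  return {
    open_rate,
    reply_rate,
    positive_reply_rate,
    open_to_reply_conversion,
    // Don't flag steps that haven't sent anything yet.
    deliverability_flag: step.sends > 0 && open_rate < 0.20,
    messaging_gap_flag: open_rate > 0.40 && positive_reply_rate < 0.005,
    best_step_flag:
      sequenceAvgPositiveRate > 0 &&
      positive_reply_rate >= 3 * sequenceAvgPositiveRate,
    dead_step_flag: step.sends > 50 && positive_reply_rate === 0,
  };
}
```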

Component 3 — Cross-sequence aggregation. Compute average positive_reply_rate by step position (Step 1, 2, 3, etc.) across all active sequences. Identify which step position performs best in aggregate. Included in weekly digest.
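One way to sketch the aggregation, pooling sends and positive replies by step position and computing a send-weighted rate (row field names assumed from the sheet schema):

```javascript
// Average positive reply rate by step position, weighted by send volume.
function avgPositiveRateByPosition(rows) {
  const totals = {};
  for (const row of rows) {
    const t = (totals[row.step_number] ??= { sends: 0, positives: 0 });
    t.sends += row.sends;
    t.positives += row.positive_replies;
  }
  const result = {};
  for (const [position, t] of Object.entries(totals)) {
    result[position] = t.sends > 0 ? t.positives / t.sends : 0;
  }
  return result;
}
```

Weighting by sends means high-volume sequences dominate the average; if you want each sequence to count equally, take an unweighted mean of per-sequence rates instead.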

Component 4 — Apify contact enrichment (optional, for best-step responders). When best_step_flag fires on a step, enrich the contacts who replied using apify/linkedin-scraper — pull job title, seniority, company size. Identify the ICP profile of contacts who responded to the best-performing step. Insight example: "64% of positive replies from best-performing Step 2 came from VP-level contacts at 50–200 employee companies." Append ICP signal to the Google Sheets performance log for future targeting.

Component 5 — Real-time deliverability alert. If any step triggers deliverability_flag = true AND step sends increased by >50 since last check → immediate Slack alert: "🚨 Deliverability Alert — [Sequence Name] Step [N]: open rate dropped to [X%] — check sending domain health."
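The alert gate is two conditions and a message template. A sketch, with the input objects assumed (the message string mirrors the template above):

```javascript
// Returns the Slack alert text, or null if the alert should not fire.
function deliverabilityAlert(seqName, stepNumber, current, previous) {
  // Only alert when the flag is set AND volume grew >50 since last check,
  // so a stale, paused sequence doesn't re-alert every day.
  const volumeGrew = current.sends - previous.sends > 50;
  if (!(current.deliverability_flag && volumeGrew)) return null;
  const pct = Math.round(current.open_rate * 100);
  return `🚨 Deliverability Alert — ${seqName} Step ${stepNumber}: ` +
         `open rate dropped to ${pct}% — check sending domain health.`;
}
```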

Component 6 — Weekly Slack digest (Monday 8am). Per-step performance table for each active sequence with anomaly flag indicators. Summary: best step position across all sequences, top 3 sequences by positive reply rate, worst 3 by dead_step_count, recommended action items.
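Rendering the digest is plain string assembly. A sketch of one ranked line per sequence, roughly matching the example report later in the article (field names are assumptions):

```javascript
// One digest line: rank, sequence name/id, then per-step rates with
// ⭐ (best step) and 💀 (dead step) badges.
function digestLine(rank, seq) {
  const steps = seq.steps
    .map((s) => {
      const rate = (s.positive_reply_rate * 100).toFixed(1) + '%';
      const badge = s.best_step_flag ? ' ⭐' : s.dead_step_flag ? ' 💀' : '';
      return `Step ${s.step_number}${badge} ${rate}`;
    })
    .join(' | ');
  return `  ${rank}. ${seq.name} (Seq ${seq.id}): ${steps}`;
}
```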

Running cost: ~$2–$5/month (Apollo API included in existing plan; Apify enrichment optional, ~$2–$4/month for flagged contacts only; n8n self-hosted or cloud).


Step-by-Step Setup: Under 3 Hours

Step 1 — Connect your sequencing tool API (20 min). Apollo: generate API key in Settings → Integrations → API Keys. Test endpoint: GET https://api.apollo.io/v1/emailer_campaigns?api_key=YOUR_KEY. Should return all active sequences. Lemlist: API key from Settings → Team → API. Salesloft: OAuth2 client credentials.

Step 2 — Configure the step data pull (30 min). n8n HTTP Request node: set endpoint, authentication, pagination. Apollo returns 25 sequences per page — enable pagination to pull all sequences. Lemlist variant and Salesloft variant included as alternate branches in the workflow JSON.

Step 3 — Set anomaly flag thresholds (10 min). Open the n8n Code node. Edit ANOMALY_CONFIG: defaults are deliverability_flag at 20% open rate, messaging_gap_flag at 40% open / 0.5% positive reply, dead_step threshold at 50 sends. Adjust for your volume and industry.
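For orientation, a hypothetical shape of that config object with the defaults listed above; the shipped Code node may name the keys differently:

```javascript
// Hypothetical ANOMALY_CONFIG layout (key names are illustrative).
const ANOMALY_CONFIG = {
  deliverability_open_rate: 0.20,     // flag steps opening below 20%
  messaging_gap_open_rate: 0.40,      // high opens...
  messaging_gap_positive_rate: 0.005, // ...but under 0.5% positive replies
  best_step_multiplier: 3,            // ≥ 3× the sequence average
  dead_step_min_sends: 50,            // ignore steps with too little volume
};
```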

Step 4 — Set up Google Sheets performance log (30 min). Import the included template: per-step tracking columns, anomaly flag columns, 90-day trend chart, sequence comparison table. Connect Google Sheets OAuth in n8n. Configure upsert logic: update existing rows by sequence_id + step_number + date.
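The upsert keying can be sketched as: one row per sequence_id + step_number + date, updated in place when the workflow re-runs on the same day (row field names follow the sheet schema):

```javascript
// Update an existing row matching the composite key, or append a new one.
function upsertRow(rows, newRow) {
  const keyOf = (r) => `${r.sequence_id}|${r.step_number}|${r.date}`;
  const i = rows.findIndex((r) => keyOf(r) === keyOf(newRow));
  if (i >= 0) rows[i] = newRow;
  else rows.push(newRow);
  return rows;
}
```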

Step 5 — Configure Slack alerts (20 min). Create Slack channel #sequence-analytics. Add Slack bot via n8n Slack integration. Real-time deliverability alert: triggers on deliverability_flag = true. Weekly digest: Monday 8am schedule with formatted table.

Step 6 — Optional Apify enrichment setup (30 min). Create Apify account. Import apify/linkedin-scraper. In n8n: add filter — only run enrichment when best_step_flag = true AND positive_replies > 5. Map LinkedIn data back to Google Sheets ICP column.

Step 7 — Test with historical data (30 min). Run workflow manually against last 7 days. Verify per-step metric calculations, check anomaly flags against what you already know about your sequences, preview the generated Slack digest.

Total setup time: ~3 hours.


What Your Monday Morning Looks Like After the Workflow Is Live

7:04am: Per-step metric computation completes. Anomaly flags assigned across 12 active sequences, 68 total steps. Three steps flagged deliverability_flag. Four steps flagged dead_step_flag. One step flagged best_step_flag — Step 2 of the "Mid-Market CTO" sequence: 4.1% positive reply rate against 1.4% sequence average.

7:06am: Slack alert fires for Sequence 7 Step 1 (open rate = 17%): "🚨 Deliverability Alert — Sequence 7 'Enterprise DevOps' Step 1: open rate dropped to 17% — check sending domain health."

8:00am Monday: Weekly digest in #sequence-analytics:

```
📊 Sequence Performance Report — Week of 2026-03-30

BEST STEP POSITION (cross-sequence): Step 2 — avg 2.8% positive reply rate
TOP SEQUENCES:
  1. Mid-Market CTO (Seq 3): Step 2 ⭐ 4.1% | Step 4 💀 0.0% (cut Step 4)
  2. SMB Head of Ops (Seq 1): Step 1 2.3% | Step 2 2.1% | Step 3 0.3%
  3. Enterprise DevOps (Seq 7): ⚠️ DELIVERABILITY FLAG — Step 1 open rate 17%

ACTION ITEMS:
  - Cut Steps 4, 5, 6 from Seq 3 (zero positive replies from 290 combined sends)
  - Rewrite Step 3 of Seq 6 (messaging gap: 47% open → 0.3% positive reply)
  - Check sending domain for Seq 7 Step 1 (deliverability flag)
  - Seq 3 Step 2 best-step ICP: VP-level, 50–200 employees → target more of this profile
```

The SDR manager reviews in 10 minutes. Three clear action items. No CSV export. No pivot table. No four-hour Saturday analysis.


The $29 Workflow: What's Included

"Our CRO asked me to audit our outbound sequence performance last week. We have 14 active sequences. It took me and an analyst 6 hours to pull all the CSVs, clean the data, build the per-step pivot tables, and produce a report. The CRO looked at it for 10 minutes and said 'can we get this automatically every week?' I said 'yes in theory but right now it's a 6-hour manual process.' There has to be a better way. We're on Salesloft — their API has per-step cadence data. I just need someone to plumb it into a Slack report."

The workflow package includes:

  • n8n workflow JSON (import-ready): Daily Apollo API pull → per-step metric computation → anomaly flag assignment → cross-sequence aggregation → Apify contact enrichment → real-time Slack deliverability alert → weekly Monday Slack digest
  • Apollo API variant: Full endpoint reference, pagination handling for 20+ sequences, API key setup guide
  • Lemlist API variant: Alternate n8n branch for Lemlist-based teams (REST API + webhook)
  • Salesloft API variant: OAuth2 configuration guide, Cadence API step data endpoints
  • Google Sheets performance template: Pre-built per-step tracking sheet with 90-day trend charts, sequence comparison table, anomaly flag columns, automated best-step highlight
  • Anomaly flag configuration guide: Threshold adjustment guidelines for enterprise vs. SMB outbound benchmarks
  • Apify apify/linkedin-scraper setup: Contact enrichment configuration, ICP signal extraction, Google Sheets field mapping
  • Slack digest template: Weekly sequence ranking table, anomaly flag legend, action item format
  • Sequence optimization playbook: Decision tree for each flag type — cut vs. rewrite vs. test vs. replicate, dead step identification criteria, deliverability triage process, best-step replication framework

Stop making sequence design decisions based on aggregate metrics that hide which steps actually work.

Get the B2B Outbound Sequence Step Analytics Workflow — $29 → [GUMROAD_URL]


Bundle: B2B Outbound Intelligence Pack — $39

Pair this workflow with the Signal-Based Follow-Up Timing workflow: know which steps perform and make sure your follow-ups fire at the right behavioral moment — not on a fixed Day 3/5/7 schedule. Complete outbound optimization stack.

[Get the Outbound Intelligence Pack — $39 →] [GUMROAD_URL]


Article 70 | Pain #251 — B2B Outbound Sequence Step Analytics | Domain: B2B Sales Ops | Severity: 7.5/10 | 2026-04-01