Detect key price levels automatically using local extrema and kernel density estimation in Python.
Support and resistance levels are among the most widely referenced concepts in technical analysis, yet most traders identify them by eye — a process that is subjective, slow, and impossible to scale. For algorithmic traders, the ability to detect these levels programmatically unlocks systematic entry and exit logic, automated alerts, and reproducible backtests. This article treats support and resistance not as a visual art but as a statistical problem with a concrete computational solution.
We will implement two complementary detection methods in Python: local extrema detection using scipy.signal to identify swing highs and lows in OHLC price data, and kernel density estimation (KDE) to find price clusters where the market has historically spent the most time. Together, these techniques produce a ranked list of significant price levels that can feed directly into a trading strategy or risk management system.
Most algo trading content gives you theory.
This gives you the code. 3 Python strategies. Fully backtested. Colab notebook included.
Plus a free ebook with 5 more strategies the moment you subscribe. 5,000 quant traders already run these:
Subscribe | AlgoEdge Insights
This article covers: what support and resistance mean in measurable terms, detecting swing highs and lows with scipy.signal, finding high-density price zones with kernel density estimation, ranking and filtering the resulting levels, and the practical applications and limitations of the approach.
A support level is a price zone where buying pressure has historically been strong enough to halt a decline. A resistance level is where selling pressure has historically capped a rally. Most explanations stop there — at the narrative. But for code to work, we need a sharper definition: these are price regions where the market has repeatedly reversed or repeatedly stalled.
This gives us two measurable signals. First, local extrema in a price series. A swing low is a local minimum: a bar whose low sits beneath the lows of the bars surrounding it. A swing high is the mirror image, a bar whose high tops the highs around it. These turning points mark where the market literally reversed, making them natural candidates for support and resistance. The more times price reverses near the same level, the stronger that level is considered to be.
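To make the definitions concrete, here is a minimal, self-contained sketch on a hypothetical price array. It uses the strict comparators np.greater and np.less for clarity; the full script below uses their >=/<= variants so that flat tops and bottoms also qualify.

```python
import numpy as np
from scipy.signal import argrelextrema

# Hypothetical toy series: a rally, a dip, and a second rally.
prices = np.array([10.0, 11, 12, 11, 10, 9, 10, 11, 13, 12, 11, 12, 14])

# A swing high is a bar strictly higher than every bar within
# `order` positions on each side; a swing low is the mirror image.
high_idx = argrelextrema(prices, np.greater, order=2)[0]
low_idx = argrelextrema(prices, np.less, order=2)[0]

print("swing highs at indices", high_idx, "->", prices[high_idx])
print("swing lows  at indices", low_idx, "->", prices[low_idx])
```

Indices 2 and 8 come back as swing highs and 5 and 10 as swing lows, matching what the eye picks out; the final bar is not counted, because a strict comparison can never confirm an extremum at the edge of the series.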
Second, price density. If you plot a histogram of all closing prices over the past year, some price bins will contain far more observations than others. These high-density zones are where the market has spent the most time — consolidation zones, ranges, or areas of heavy accumulation and distribution. Kernel density estimation gives us a smooth, continuous version of this histogram, letting us find peaks in the price distribution algorithmically.
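A small synthetic sketch shows the idea: if hypothetical closes cluster mostly near 100 with a shorter excursion near 110, the KDE's tallest peak lands in the heavier cluster.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical closes: a long consolidation near 100, a shorter visit near 110.
closes = np.concatenate([
    rng.normal(100, 1.0, 400),
    rng.normal(110, 1.0, 100),
])

# Fit a smooth density over price and locate its highest point.
kde = gaussian_kde(closes)
grid = np.linspace(90, 120, 600)
density = kde(grid)
peak_price = grid[np.argmax(density)]
print(f"highest-density price: {peak_price:.1f}")  # lands near 100
```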
The insight is that these two methods are complementary. Extrema detection is sensitive to reversals — it finds exact turning points. KDE is sensitive to time — it finds zones of congestion. A price level that appears in both analyses is far more significant than one detected by only a single method. By combining them, we get a confidence-weighted list of levels rather than a flat catalog.
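As an illustrative sketch, with hypothetical level lists and tolerance, the cross-check can be as simple as:

```python
import numpy as np

# Hypothetical outputs of the two detectors (prices in dollars).
extrema_levels = np.array([412.0, 438.5, 455.0, 471.2])
kde_levels = np.array([411.5, 455.3, 480.0])

TOLERANCE = 0.005  # levels within 0.5% of each other count as the same zone

# A level is "confirmed" when both methods agree within tolerance.
confirmed = [
    level for level in extrema_levels
    if any(abs(level - k) / level < TOLERANCE for k in kde_levels)
]
print("confirmed levels:", confirmed)  # 412.0 and 455.0 survive
```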
The implementation relies on yfinance for data, scipy for signal processing and KDE, and matplotlib for visualization. The key parameters control how sensitive each detection method is.
WINDOW: The lookback window for local extrema detection. Larger values find only major swing points; smaller values are noisier.
KDE_BANDWIDTH: Controls the smoothing of the kernel density estimate. A tighter bandwidth finds more granular clusters; a wider one merges nearby zones.
TOP_N_LEVELS: How many ranked levels to surface from the KDE peaks.
MIN_LEVEL_SEPARATION: Minimum price distance between two distinct levels, expressed as a fraction of current price, to prevent clustering artifacts.
import yfinance as yf
import numpy as np
import pandas as pd
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.lines import Line2D
# --- Parameters ---
TICKER = "SPY"
PERIOD = "1y"
INTERVAL = "1d"
WINDOW = 10 # bars on each side for local extrema detection
KDE_BANDWIDTH = 0.015 # bandwidth for kernel density estimation
TOP_N_LEVELS = 6 # number of KDE-based levels to identify
MIN_LEVEL_SEPARATION = 0.005 # 0.5% minimum separation between distinct levels
We download OHLC data and apply argrelextrema from scipy.signal to identify swing highs (local maxima in the high series) and swing lows (local minima in the low series). Using the high and low columns — rather than close — ensures we capture the full price range reached at each swing point.
# --- Download Data ---
df = yf.download(TICKER, period=PERIOD, interval=INTERVAL, auto_adjust=True)
df.dropna(inplace=True)
# Flatten the MultiIndex columns that newer yfinance versions return
df.columns = [c[0] if isinstance(c, tuple) else c for c in df.columns]
# --- Local Extrema Detection ---
highs = df["High"].values
lows = df["Low"].values
closes = df["Close"].values
# Indices of swing highs and lows
high_idx = argrelextrema(highs, np.greater_equal, order=WINDOW)[0]
low_idx = argrelextrema(lows, np.less_equal, order=WINDOW)[0]
swing_highs = highs[high_idx]
swing_lows = lows[low_idx]
# Combine all extrema into a single array for KDE
all_extrema = np.concatenate([swing_highs, swing_lows])
print(f"Swing highs detected: {len(swing_highs)}")
print(f"Swing lows detected: {len(swing_lows)}")
print(f"Total extrema: {len(all_extrema)}")
We fit a Gaussian KDE to the combined array of all swing highs and lows. The KDE produces a smooth probability density over the price range. We evaluate this density on a fine price grid, then extract local peaks — these are the price zones where the most reversals have clustered. A minimum separation filter prevents the algorithm from surfacing two nearly identical levels that represent the same zone.
# --- Kernel Density Estimation ---
# Evaluate the density on a fine grid spanning all detected extrema
# (swing highs can sit above the highest close, so span the extrema, not closes)
price_grid = np.linspace(all_extrema.min() * 0.98, all_extrema.max() * 1.02, 2000)
kde = gaussian_kde(all_extrema, bw_method=KDE_BANDWIDTH)
density = kde(price_grid)
# --- Extract KDE Peaks (Candidate Levels) ---
peak_idx = argrelextrema(density, np.greater, order=20)[0]
peak_prices = price_grid[peak_idx]
peak_density = density[peak_idx]
# Sort by density (strongest levels first)
sorted_order = np.argsort(peak_density)[::-1]
peak_prices = peak_prices[sorted_order]
peak_density = peak_density[sorted_order]
# Apply minimum separation filter
current_price = closes[-1]
selected_levels = []
for price in peak_prices:
    too_close = any(
        abs(price - s) / current_price < MIN_LEVEL_SEPARATION
        for s in selected_levels
    )
    if not too_close:
        selected_levels.append(price)
    if len(selected_levels) >= TOP_N_LEVELS:
        break
# Tag each level as support or resistance relative to current price
support_levels = sorted([l for l in selected_levels if l < current_price])
resistance_levels = sorted([l for l in selected_levels if l >= current_price])
print(f"\nCurrent Price: ${current_price:.2f}")
print(f"Support Levels: {[f'${l:.2f}' for l in support_levels]}")
print(f"Resistance Levels: {[f'${l:.2f}' for l in resistance_levels]}")
The chart overlays the closing price series with detected swing highs and lows (scatter points) and the final ranked support and resistance levels (horizontal lines). The KDE density curve is plotted in a lower panel to show the price distribution that drives level selection.
# --- Visualization ---
plt.style.use("dark_background")
fig, (ax1, ax2) = plt.subplots(
2, 1, figsize=(14, 9), gridspec_kw={"height_ratios": [3, 1]}, sharex=False
)
# --- Price Panel ---
ax1.plot(df.index, closes, color="#aaaaaa", linewidth=1.0, label="Close")
ax1.scatter(
df.index[high_idx], swing_highs,
color="#ff4c4c", s=30, zorder=5, label="Swing High"
)
ax1.scatter(
df.index[low_idx], swing_lows,
color="#4caf50", s=30, zorder=5, label="Swing Low"
)
for level in support_levels:
    ax1.axhline(level, color="#4caf50", linewidth=1.2, linestyle="--", alpha=0.85)
    ax1.text(df.index[-1], level, f" S ${level:.2f}", color="#4caf50",
             fontsize=8, va="center")
for level in resistance_levels:
    ax1.axhline(level, color="#ff4c4c", linewidth=1.2, linestyle="--", alpha=0.85)
    ax1.text(df.index[-1], level, f" R ${level:.2f}", color="#ff4c4c",
             fontsize=8, va="center")
ax1.axhline(current_price, color="white", linewidth=0.8, linestyle=":", alpha=0.5)
ax1.set_title(f"{TICKER} — Algorithmic Support & Resistance (KDE + Local Extrema)",
fontsize=13, pad=10)
ax1.set_ylabel("Price ($)")
legend_elements = [
Line2D([0], [0], color="#aaaaaa", label="Close"),
Line2D([0], [0], marker="o", color="w", markerfacecolor="#ff4c4c",
markersize=6, label="Swing High", linestyle="None"),
Line2D([0], [0], marker="o", color="w", markerfacecolor="#4caf50",
markersize=6, label="Swing Low", linestyle="None"),
mpatches.Patch(color="#ff4c4c", label="Resistance"),
mpatches.Patch(color="#4caf50", label="Support"),
]
ax1.legend(handles=legend_elements, loc="upper left", fontsize=8)
ax1.grid(alpha=0.15)
# --- KDE Panel ---
ax2.fill_between(price_grid, density, color="#5c85d6", alpha=0.4)
ax2.plot(price_grid, density, color="#5c85d6", linewidth=1.2)
for level in support_levels:
    ax2.axvline(level, color="#4caf50", linewidth=1.0, linestyle="--", alpha=0.8)
for level in resistance_levels:
    ax2.axvline(level, color="#ff4c4c", linewidth=1.0, linestyle="--", alpha=0.8)
ax2.axvline(current_price, color="white", linewidth=0.8, linestyle=":", alpha=0.5)
ax2.set_xlabel("Price ($)")
ax2.set_ylabel("Density")
ax2.set_title("Price Density (KDE of Swing Extrema)", fontsize=10)
ax2.grid(alpha=0.15)
plt.tight_layout()
plt.savefig("support_resistance.png", dpi=150, bbox_inches="tight")
plt.show()
Figure 1. Top panel shows the SPY closing price with swing highs (red) and lows (green) annotated, along with the six strongest algorithmic support and resistance levels. The lower KDE panel reveals the underlying price distribution — peaks in this density curve directly correspond to the horizontal levels drawn above.
Running this on one year of SPY daily data typically surfaces four to six meaningful price levels that align well with visually obvious zones — prior highs, post-correction consolidation ranges, and major breakout points. The KDE approach is particularly effective at surfacing zones rather than single prices. When multiple swing reversals cluster near the same level, the density peak becomes tall and narrow, indicating a high-conviction level. Broad, flat peaks indicate less defined zones where price has drifted rather than reversed sharply.
The minimum separation filter (MIN_LEVEL_SEPARATION = 0.005) prevents the algorithm from reporting five levels within a one-dollar band when a single zone is responsible. Tuning this parameter is important: too tight and you get redundant levels, too wide and you miss genuinely distinct nearby levels in a range-bound market.
In practice, the strongest detected support and resistance levels tend to correspond to periods where price consolidated for multiple sessions before breaking out or breaking down — exactly the zones where institutional order flow tends to accumulate. The WINDOW = 10 parameter is conservative by design, filtering out noise and ensuring only the most significant turns are counted in the density estimate.
Automated entry and exit logic. Detected support levels can serve as dynamic stop-loss anchors or limit order targets in a mean-reversion strategy. Resistance levels can trigger short entries or profit-taking rules without manual chart review.
Risk management overlays. Position sizing systems can widen stops when the current price is far from any detected level and tighten them as price approaches a high-density zone, reflecting the elevated probability of a reaction.
Breakout confirmation filters. A breakout through a strong resistance level (confirmed by a close above it) carries more statistical weight when that level was identified algorithmically from multiple swing-high clusters, rather than drawn by eye on the day.
Multi-ticker screening. Because the entire pipeline is parameterized and runnable on any ticker, it can be wrapped in a loop over a watchlist to generate a daily snapshot of key levels across dozens of instruments — a task that is impossible to perform manually at scale.
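One way to sketch such a screen, under the assumption that the pipeline is refactored into a single reusable function, is shown below. Synthetic prices stand in for per-ticker downloads so the example runs offline; the real loop over yf.download is indicated in comments.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde

def detect_levels(highs, lows, window=10, bandwidth=0.015, top_n=6, min_sep=0.005):
    """Condensed version of the article's pipeline; returns up to top_n
    level prices, strongest (highest-density) first."""
    high_idx = argrelextrema(highs, np.greater_equal, order=window)[0]
    low_idx = argrelextrema(lows, np.less_equal, order=window)[0]
    extrema = np.concatenate([highs[high_idx], lows[low_idx]])
    if len(extrema) < 2:
        return []
    kde = gaussian_kde(extrema, bw_method=bandwidth)
    grid = np.linspace(extrema.min() * 0.98, extrema.max() * 1.02, 2000)
    density = kde(grid)
    peaks = argrelextrema(density, np.greater, order=20)[0]
    ranked = grid[peaks][np.argsort(density[peaks])[::-1]]
    ref = highs[-1]  # separation measured relative to the latest price
    levels = []
    for p in ranked:
        if all(abs(p - s) / ref >= min_sep for s in levels):
            levels.append(float(p))
        if len(levels) >= top_n:
            break
    return levels

# In production the loop would feed yf.download output per ticker:
#   for ticker in ["SPY", "QQQ", "IWM"]:
#       df = yf.download(ticker, period="1y", interval="1d", auto_adjust=True)
#       print(ticker, detect_levels(df["High"].values, df["Low"].values))
# Here synthetic oscillating prices keep the sketch self-contained.
rng = np.random.default_rng(1)
t = np.arange(300)
close = 100 + 10 * np.sin(t / 25) + rng.normal(0, 0.5, 300)
levels = detect_levels(close + 0.5, close - 0.5, window=10)
print("detected levels:", [f"{l:.1f}" for l in levels])
```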
Lookback dependency. The detected levels are entirely a function of the chosen time period. A one-year lookback will surface different levels than a three-month lookback. There is no single correct answer — the appropriate period depends on the trading horizon, and practitioners should test multiple windows.
KDE bandwidth sensitivity. The KDE_BANDWIDTH parameter has an outsized effect on the number and location of detected levels. Very tight bandwidths fragment genuine zones into multiple micro-levels; very wide bandwidths merge distinct levels into one. There is no automatic optimal value — it requires calibration to the asset's historical volatility.
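The sensitivity is easy to demonstrate on a hypothetical handful of extrema: the same data yields many micro-peaks at a tight bandwidth and far fewer once the bandwidth is widened.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.stats import gaussian_kde

# Hypothetical extrema: two tight clusters plus one lone swing point.
extrema = np.array([100.0, 100.4, 100.8, 105.0, 105.3, 110.0])
grid = np.linspace(98, 112, 1400)

def count_peaks(bw):
    """Number of local maxima in the KDE at bandwidth factor `bw`."""
    density = gaussian_kde(extrema, bw_method=bw)(grid)
    return len(argrelextrema(density, np.greater, order=10)[0])

print("peaks at tight bandwidth (0.02):", count_peaks(0.02))
print("peaks at wide bandwidth  (0.5): ", count_peaks(0.5))
```

At the tight setting every extremum becomes its own peak (six in total here); at the wide setting nearby extrema merge into shared zones.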
Static levels in dynamic markets. Support and resistance levels are not permanent. A level that held for six months can be invalidated in a single session on high volume. The algorithm has no concept of recency weighting unless explicitly implemented (e.g., giving higher weight to extrema in the last 60 days).
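One way to add such weighting, assuming SciPy 1.2 or later where gaussian_kde accepts a weights argument, is to decay each extremum's weight by its age; the prices, bar indices, and half-life below are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical swing extrema and the bar index at which each formed:
# an old cluster near 100 and a recent cluster near 120.
extrema_prices = np.array([100.0, 101.0, 100.5, 120.0, 119.5])
extrema_bars = np.array([5, 10, 15, 240, 245])
n_bars = 250
HALF_LIFE = 60  # an extremum's weight halves every 60 bars of age

age = n_bars - extrema_bars
weights = 0.5 ** (age / HALF_LIFE)

weighted = gaussian_kde(extrema_prices, bw_method=0.05, weights=weights)
unweighted = gaussian_kde(extrema_prices, bw_method=0.05)

grid = np.linspace(95, 125, 600)
w_peak = grid[np.argmax(weighted(grid))]
u_peak = grid[np.argmax(unweighted(grid))]
print(f"weighted peak: {w_peak:.1f}, unweighted peak: {u_peak:.1f}")
```

With the decay applied, the density peak migrates from the old cluster near 100 to the recent cluster near 120, which is exactly the recency behavior the plain estimator lacks.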
Works best on liquid, range-respecting assets. On highly trending assets or illiquid small-caps with gappy price action, swing extrema are sparse and the KDE surface becomes unreliable. Always inspect the output visually before deploying levels into a live strategy.
Look-ahead bias in backtesting. If this detection is used as a feature in a backtest, care must be taken to use only levels detectable with data available at the time of each bar, not the full sample. Future-leaking support and resistance levels will produce misleadingly optimistic backtest results.
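A point-in-time sketch of the idea: a swing low that requires WINDOW bars on each side only becomes knowable WINDOW bars after it forms, and a backtest must honor that lag. The helper below is illustrative, not part of the article's pipeline.

```python
import numpy as np

def point_in_time_swing_lows(lows, window):
    """For each bar t, return the swing lows knowable using data up to t only.
    A bar can be confirmed as a swing low no earlier than `window` bars
    after it forms, because its right-hand neighbors must exist first."""
    confirmed = []
    known = {}
    for t in range(len(lows)):
        candidate = t - window  # newest bar whose status is decided at t
        if candidate >= window:
            left = lows[candidate - window:candidate]
            right = lows[candidate + 1:candidate + window + 1]
            if lows[candidate] <= left.min() and lows[candidate] <= right.min():
                confirmed.append(float(lows[candidate]))
        known[t] = list(confirmed)
    return known

lows = np.array([5, 4, 3, 2, 1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
known = point_in_time_swing_lows(lows, window=3)
# The swing low at bar 4 (price 1) is only knowable from bar 7 onward.
print("known at bar 6:", known[6], "| known at bar 7:", known[7])
```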
Programmatic detection of support and resistance transforms a subjective visual task into a reproducible, scalable analysis. By combining local extrema detection with kernel density estimation, we surface price levels that reflect both the location of historical reversals and the frequency with which price has revisited those zones — two distinct and complementary signals. The resulting ranked level list is objective, parameterizable, and ready to plug into broader strategy logic.
The natural next step is to add a recency weighting scheme — exponentially downweighting older swing points so that recent price action has more influence on the density estimate. Another productive extension is to compute these levels on multiple timeframes simultaneously (daily, weekly) and flag only the levels that appear on two or more timeframes, which historically correlates with stronger reactions.
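The multi-timeframe extension needs a weekly series, which pandas can derive from the daily frame via resample; the frame below is synthetic, but in practice you would resample the yfinance df directly and rerun the same detection with a smaller WINDOW.

```python
import numpy as np
import pandas as pd

# Synthetic daily OHLC frame standing in for the downloaded data.
idx = pd.date_range("2024-01-01", periods=60, freq="B")
rng = np.random.default_rng(2)
close = 100 + np.cumsum(rng.normal(0, 1, 60))
daily = pd.DataFrame({"Open": close, "High": close + 1,
                      "Low": close - 1, "Close": close}, index=idx)

# Aggregate each week into a single OHLC bar (weeks ending Friday).
weekly = daily.resample("W-FRI").agg(
    {"Open": "first", "High": "max", "Low": "min", "Close": "last"}
).dropna()
print(weekly.head())
```

Levels detected on both the daily and weekly series, matched with the same percentage tolerance used for MIN_LEVEL_SEPARATION, are the candidates worth flagging.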
If you found this useful, the same systematic approach extends naturally to volume profile analysis, order flow imbalance detection, and dynamic stop placement — all of which we cover in the AlgoEdge Insights newsletter. Follow along for more Python-powered quant research delivered in the same format.