
Hey folks! In Part 1, I laid the foundation of our MCP (Model Context Protocol) server for stock analysis. I created a basic setup to fetch historical data and calculate essential technical indicators. It was like building the engine of a car – functional, but not yet ready for the Daytona 500.
Today, I will soup up that engine by adding the promised features: relative strength calculations, volume profile analysis, pattern recognition, and risk management tools. Think of it as transforming our basic sedan into a high-performance trading machine. Let’s dive in!
Reviewing Our MCP Architecture
Before we add new features, let’s remind ourselves of our project structure:
mcp-trader/
├── pyproject.toml
├── README.md
├── .env
└── src/
└── mcp-trader/
├── __init__.py
├── server.py # Our core MCP server
├── indicators.py # Technical analysis functions
└── data.py # Data fetching layer
I’ll expand the indicators.py file with new analysis tools and update our server.py to expose these capabilities to Claude or any other MCP-compatible AI. These days you can even rig this server to fetch stock recommendations without leaving your IDE, if you like. We live in interesting times, friends. If you haven’t already set up the project from Part 1, make sure to check it out first.
Feature 1: Relative Strength Calculations
Any experienced trader knows that it’s not just how a stock performs in isolation, but how it performs compared to the broader market or its sector that matters. That’s what relative strength is all about – measuring outperformance or underperformance.
Let’s add this crucial metric to our indicators.py file:
# Make sure these imports are at the top of indicators.py (Part 1 likely already
# brings in pandas; the typing names are new here):
from typing import Any, Dict, List

import pandas as pd


class RelativeStrength:
    """Tools for calculating relative strength metrics."""
@staticmethod
async def calculate_rs(
market_data,
symbol: str,
benchmark: str = "SPY",
lookback_periods: List[int] = [21, 63, 126, 252],
) -> Dict[str, float]:
"""
Calculate relative strength compared to a benchmark across multiple timeframes.
Args:
market_data: Our market data fetcher instance
symbol (str): The stock symbol to analyze
benchmark (str): The benchmark symbol (default: SPY for S&P 500 ETF)
lookback_periods (List[int]): Periods in trading days to calculate RS (default: [21, 63, 126, 252])
Returns:
Dict[str, float]: Relative strength scores for each timeframe
"""
try:
# Get data for both the stock and benchmark
stock_df = await market_data.get_historical_data(
symbol, max(lookback_periods) + 10
)
benchmark_df = await market_data.get_historical_data(
benchmark, max(lookback_periods) + 10
)
# Calculate returns for different periods
rs_scores = {}
for period in lookback_periods:
# Check if we have enough data for this period
if len(stock_df) <= period or len(benchmark_df) <= period:
# Skip this period if we don't have enough data
continue
# Calculate the percent change for both
stock_return = (
stock_df["close"].iloc[-1] / stock_df["close"].iloc[-period] - 1
) * 100
benchmark_return = (
benchmark_df["close"].iloc[-1] / benchmark_df["close"].iloc[-period]
- 1
) * 100
# Calculate relative strength (stock return minus benchmark return)
relative_performance = stock_return - benchmark_return
# Convert to a 1-100 score (this is simplified; in practice you might use a more
# sophisticated distribution model based on historical data)
rs_score = min(max(50 + relative_performance, 1), 99)
rs_scores[f"RS_{period}d"] = round(rs_score, 2)
rs_scores[f"Return_{period}d"] = round(stock_return, 2)
rs_scores[f"Benchmark_{period}d"] = round(benchmark_return, 2)
rs_scores[f"Excess_{period}d"] = round(relative_performance, 2)
return rs_scores
except Exception as e:
raise Exception(f"Error calculating relative strength: {str(e)}")
This method calculates the stock’s performance versus a benchmark (typically the S&P 500) over several lookback periods. The result is a standardized score that helps identify market leaders (scores above 80) and laggards (scores below 20).
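If you want to sanity-check these numbers outside the MCP loop, a tiny script like the one below does the job. Treat it as a sketch: it assumes your package is importable as mcp_trader (adjust to whatever name your pyproject.toml defines) and that the MarketData fetcher from Part 1 has working API credentials.

import asyncio

from mcp_trader.data import MarketData
from mcp_trader.indicators import RelativeStrength


async def main():
    market_data = MarketData()

    # Compare NVDA against SPY across the default 21/63/126/252-day windows
    scores = await RelativeStrength.calculate_rs(market_data, "NVDA", benchmark="SPY")
    for key, value in scores.items():
        print(f"{key}: {value}")


asyncio.run(main())

A reading like RS_63d: 72.5 would mean the stock beat SPY by roughly 22 percentage points over the last quarter, given the simplified "50 plus excess return" scoring above.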
Feature 2: Volume Profile Analysis
Volume profile analysis helps us understand where most trading activity occurs. It can identify key support and resistance levels based on trading volume at specific price points – areas where traders have shown significant interest.
Let’s add this to our indicators.py file:
class VolumeProfile:
"""Tools for analyzing volume distribution by price."""
@staticmethod
def analyze_volume_profile(df: pd.DataFrame, num_bins: int = 10) -> Dict[str, Any]:
"""
Create a volume profile analysis by price level.
Args:
df (pd.DataFrame): Historical price and volume data
num_bins (int): Number of price bins to create (default: 10)
Returns:
Dict[str, Any]: Volume profile analysis
"""
try:
if len(df) < 20:
raise ValueError("Not enough data for volume profile analysis")
# Find the price range for the period
price_min = df["low"].min()
price_max = df["high"].max()
# Create price bins
bin_width = (price_max - price_min) / num_bins
# Initialize the profile
profile = {
"price_min": price_min,
"price_max": price_max,
"bin_width": bin_width,
"bins": [],
}
# Calculate volume by price bin
for i in range(num_bins):
bin_low = price_min + i * bin_width
bin_high = bin_low + bin_width
bin_mid = (bin_low + bin_high) / 2
# Filter data in this price range
mask = (df["low"] <= bin_high) & (df["high"] >= bin_low)
volume_in_bin = df.loc[mask, "volume"].sum()
# Calculate percentage of total volume
volume_percent = (
(volume_in_bin / df["volume"].sum()) * 100
if df["volume"].sum() > 0
else 0
)
profile["bins"].append(
{
"price_low": round(bin_low, 2),
"price_high": round(bin_high, 2),
"price_mid": round(bin_mid, 2),
"volume": int(volume_in_bin),
"volume_percent": round(volume_percent, 2),
}
)
# Find the Point of Control (POC) - the price level with the highest volume
poc_bin = max(profile["bins"], key=lambda x: x["volume"])
profile["point_of_control"] = round(poc_bin["price_mid"], 2)
# Find Value Area (70% of volume)
sorted_bins = sorted(
profile["bins"], key=lambda x: x["volume"], reverse=True
)
cumulative_volume = 0
value_area_bins = []
for bin_data in sorted_bins:
value_area_bins.append(bin_data)
cumulative_volume += bin_data["volume_percent"]
if cumulative_volume >= 70:
break
if value_area_bins:
profile["value_area_low"] = round(
min([b["price_low"] for b in value_area_bins]), 2
)
profile["value_area_high"] = round(
max([b["price_high"] for b in value_area_bins]), 2
)
return profile
except Exception as e:
raise Exception(f"Error analyzing volume profile: {str(e)}")
The volume profile analysis creates a histogram of volume distribution by price, identifying the Point of Control (price with the most volume) and the Value Area (range containing 70% of the volume). These are critical reference points for traders to understand where the market finds equilibrium.
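Because analyze_volume_profile is synchronous and only needs a DataFrame, it’s easy to exercise on its own. A minimal sketch, again assuming the mcp_trader import path from Part 1:

import asyncio

from mcp_trader.data import MarketData
from mcp_trader.indicators import VolumeProfile


async def main():
    market_data = MarketData()
    df = await market_data.get_historical_data("NVDA", 70)

    # Build the profile over the last 60 sessions using 10 price bins
    profile = VolumeProfile.analyze_volume_profile(df.tail(60), num_bins=10)
    print(f"POC: ${profile['point_of_control']}")
    print(f"Value Area: ${profile['value_area_low']} - ${profile['value_area_high']}")


asyncio.run(main())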
Feature 3: Pattern Recognition
Pattern recognition is a bit more complex. We’ll implement a simple version that detects a few common setups: double bottoms, double tops, and potential breakouts of recent support and resistance levels.
class PatternRecognition:
"""Tools for detecting common chart patterns."""
@staticmethod
def detect_patterns(df: pd.DataFrame) -> Dict[str, Any]:
"""
Detect common chart patterns in price data.
Args:
df (pd.DataFrame): Historical price data
Returns:
Dict[str, Any]: Detected patterns and their properties
"""
try:
if len(df) < 60: # Need enough data for pattern detection
return {
"patterns": [],
"message": "Not enough data for pattern detection",
}
patterns = []
# We'll use a window of the most recent data for our analysis
recent_df = df.tail(60).copy()
# Find local minima and maxima
recent_df["is_min"] = (
recent_df["low"].rolling(window=5, center=True).min()
== recent_df["low"]
)
recent_df["is_max"] = (
recent_df["high"].rolling(window=5, center=True).max()
== recent_df["high"]
)
# Get the indices, prices, and dates of local minima and maxima
minima = recent_df[recent_df["is_min"]].copy()
maxima = recent_df[recent_df["is_max"]].copy()
# Double Bottom Detection
if len(minima) >= 2:
for i in range(len(minima) - 1):
for j in range(i + 1, len(minima)):
price1 = minima.iloc[i]["low"]
price2 = minima.iloc[j]["low"]
date1 = minima.iloc[i].name
date2 = minima.iloc[j].name
# Check if the two bottoms are at similar price levels (within 3%)
if abs(price1 - price2) / price1 < 0.03:
# Check if they're at least 10 days apart
days_apart = (date2 - date1).days
if days_apart >= 10 and days_apart <= 60:
# Check if there's a peak in between that's at least 5% higher
mask = (recent_df.index > date1) & (
recent_df.index < date2
)
if mask.any():
max_between = recent_df.loc[mask, "high"].max()
if max_between > price1 * 1.05:
patterns.append(
{
"type": "Double Bottom",
"start_date": date1.strftime(
"%Y-%m-%d"
),
"end_date": date2.strftime("%Y-%m-%d"),
"price_level": round(
(price1 + price2) / 2, 2
),
"confidence": "Medium",
}
)
# Double Top Detection (similar logic, but for maxima)
if len(maxima) >= 2:
for i in range(len(maxima) - 1):
for j in range(i + 1, len(maxima)):
price1 = maxima.iloc[i]["high"]
price2 = maxima.iloc[j]["high"]
date1 = maxima.iloc[i].name
date2 = maxima.iloc[j].name
if abs(price1 - price2) / price1 < 0.03:
days_apart = (date2 - date1).days
if days_apart >= 10 and days_apart <= 60:
mask = (recent_df.index > date1) & (
recent_df.index < date2
)
if mask.any():
min_between = recent_df.loc[mask, "low"].min()
if min_between < price1 * 0.95:
patterns.append(
{
"type": "Double Top",
"start_date": date1.strftime(
"%Y-%m-%d"
),
"end_date": date2.strftime("%Y-%m-%d"),
"price_level": round(
(price1 + price2) / 2, 2
),
"confidence": "Medium",
}
)
# Check for potential breakouts
close = df["close"].iloc[-1]
recent_high = df["high"].iloc[-20:].max()
recent_low = df["low"].iloc[-20:].min()
# Resistance breakout
if close > recent_high * 0.99 and close < recent_high * 1.02:
patterns.append(
{
"type": "Resistance Breakout",
"price_level": round(recent_high, 2),
"confidence": "Medium",
}
)
# Support breakout (breakdown)
if close < recent_low * 1.01 and close > recent_low * 0.98:
patterns.append(
{
"type": "Support Breakdown",
"price_level": round(recent_low, 2),
"confidence": "Medium",
}
)
return {"patterns": patterns}
except Exception as e:
raise Exception(f"Error detecting patterns: {str(e)}")
This pattern recognition algorithm is quite simplified but demonstrates the concept. In a production environment, you would want a more sophisticated approach, possibly even a machine learning model.
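Before wiring the detector into the server, it’s worth eyeballing its raw output on a symbol or two. Another quick sketch under the same import-path assumption, pulling roughly 90 days of history to match what the server tool will request later:

import asyncio

from mcp_trader.data import MarketData
from mcp_trader.indicators import PatternRecognition


async def main():
    market_data = MarketData()
    df = await market_data.get_historical_data("NVDA", 90)

    results = PatternRecognition.detect_patterns(df)
    if not results["patterns"]:
        print("No patterns detected")
    for pattern in results["patterns"]:
        # e.g. {'type': 'Double Bottom', 'start_date': ..., 'price_level': ..., 'confidence': 'Medium'}
        print(pattern)


asyncio.run(main())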
Feature 4: Risk Analysis
Risk analysis helps traders determine appropriate position sizes and stop levels based on their risk tolerance. Let’s implement it:
class RiskAnalysis:
"""Tools for risk management and position sizing."""
@staticmethod
def calculate_position_size(
price: float,
stop_price: float,
risk_amount: float,
account_size: float,
max_risk_percent: float = 2.0,
) -> Dict[str, Any]:
"""
Calculate appropriate position size based on risk parameters.
Args:
price (float): Current stock price
stop_price (float): Stop loss price
risk_amount (float): Amount willing to risk in dollars
account_size (float): Total trading account size
max_risk_percent (float): Maximum percentage of account to risk
Returns:
Dict[str, Any]: Position sizing recommendations
"""
try:
# Validate inputs
if price <= 0 or account_size <= 0:
raise ValueError("Price and account size must be positive")
if price <= stop_price and stop_price != 0:
raise ValueError(
"For long positions, stop price must be below entry price"
)
# Calculate risk per share
risk_per_share = abs(price - stop_price)
if risk_per_share == 0:
raise ValueError(
"Risk per share cannot be zero. Entry and stop prices must differ."
)
# Calculate position size based on dollar risk
shares_based_on_risk = int(risk_amount / risk_per_share)
# Calculate maximum position size based on account risk percentage
max_risk_dollars = account_size * (max_risk_percent / 100)
max_shares = int(max_risk_dollars / risk_per_share)
# Take the smaller of the two
recommended_shares = min(shares_based_on_risk, max_shares)
actual_dollar_risk = recommended_shares * risk_per_share
# Calculate position cost
position_cost = recommended_shares * price
# Calculate R-Multiples (potential reward to risk ratios)
r1_target = price + risk_per_share
r2_target = price + 2 * risk_per_share
r3_target = price + 3 * risk_per_share
return {
"recommended_shares": recommended_shares,
"dollar_risk": round(actual_dollar_risk, 2),
"risk_per_share": round(risk_per_share, 2),
"position_cost": round(position_cost, 2),
"account_percent_risked": round(
(actual_dollar_risk / account_size) * 100, 2
),
"r_multiples": {
"r1": round(r1_target, 2),
"r2": round(r2_target, 2),
"r3": round(r3_target, 2),
},
}
except Exception as e:
raise Exception(f"Error calculating position size: {str(e)}")
@staticmethod
def suggest_stop_levels(df: pd.DataFrame) -> Dict[str, float]:
"""
Suggest appropriate stop-loss levels based on technical indicators.
Args:
df (pd.DataFrame): Historical price data with technical indicators
Returns:
Dict[str, float]: Suggested stop levels
"""
try:
if len(df) < 20:
raise ValueError("Not enough data for stop level analysis")
latest = df.iloc[-1]
close = latest["close"]
            # Calculate ATR-based stops: use the ATR column if Part 1's
            # add_core_indicators has populated it, otherwise fall back to the
            # average 20-day high-low range as a rough proxy
            if "atr" in latest and not pd.isna(latest["atr"]):
                atr = latest["atr"]
            else:
                atr = (df["high"].iloc[-20:] - df["low"].iloc[-20:]).mean()
# Different stop strategies
stops = {
"atr_1x": round(close - 1 * atr, 2),
"atr_2x": round(close - 2 * atr, 2),
"atr_3x": round(close - 3 * atr, 2),
"percent_2": round(close * 0.98, 2),
"percent_5": round(close * 0.95, 2),
"percent_8": round(close * 0.92, 2),
}
# Add SMA-based stops if available
for sma in ["sma_20", "sma_50", "sma_200"]:
if sma in latest and not pd.isna(latest[sma]):
stops[sma] = round(latest[sma], 2)
# Check for recent swing low
recent_lows = df["low"].iloc[-20:].sort_values()
if not recent_lows.empty:
stops["recent_swing"] = round(recent_lows.iloc[0], 2)
return stops
except Exception as e:
raise Exception(f"Error suggesting stop levels: {str(e)}")
This risk analysis class provides two key functions (a quick usage sketch follows the list):
- calculate_position_size: determines how many shares to buy based on your risk parameters
- suggest_stop_levels: provides multiple stop-loss options based on ATR, percentage moves, and key support levels
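Here’s a minimal sketch of the position-sizing half. It needs no market data at all, so the numbers are purely illustrative; only the mcp_trader import path is an assumption.

from mcp_trader.indicators import RiskAnalysis

# Hypothetical trade: $50,000 account, entry at $100, stop at $95,
# willing to risk $500 on the idea, capped at 2% of the account
plan = RiskAnalysis.calculate_position_size(
    price=100.0,
    stop_price=95.0,
    risk_amount=500.0,
    account_size=50_000.0,
    max_risk_percent=2.0,
)

print(plan["recommended_shares"])  # 100 shares ($500 risk / $5 risk per share)
print(plan["position_cost"])       # 10000.0
print(plan["r_multiples"])         # {'r1': 105.0, 'r2': 110.0, 'r3': 115.0}

The 2% account cap would have allowed up to 200 shares here, so the $500 dollar-risk budget is the binding constraint.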
Updating Our MCP Server
Now, let’s update our server.py file to incorporate these new features:
import mcp.types as types
import mcp.server.stdio
import asyncio
from mcp.server.models import InitializationOptions
from mcp.server import NotificationOptions, Server
# Add our new imports
from .data import MarketData
from .indicators import (
TechnicalAnalysis,
RelativeStrength,
VolumeProfile,
PatternRecognition,
RiskAnalysis,
)
# Initialize our service instances
market_data = MarketData()
tech_analysis = TechnicalAnalysis()
rs_analysis = RelativeStrength()
volume_analysis = VolumeProfile()
pattern_recognition = PatternRecognition()
risk_analysis = RiskAnalysis()
# Keep the server initialization
server = Server("mcp-trader")
@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
"""List our stock analysis tools."""
return [
types.Tool(
name="analyze-stock",
description="Analyze a stock's technical setup",
inputSchema={
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "Stock symbol (e.g., NVDA)",
}
},
"required": ["symbol"],
},
),
types.Tool(
name="relative-strength",
description="Calculate a stock's relative strength compared to benchmark",
inputSchema={
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "Stock symbol to analyze",
},
"benchmark": {
"type": "string",
"description": "Benchmark symbol (default: SPY)",
"default": "SPY",
},
},
"required": ["symbol"],
},
),
types.Tool(
name="volume-profile",
description="Analyze volume distribution by price",
inputSchema={
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "Stock symbol to analyze",
},
"lookback_days": {
"type": "integer",
"description": "Number of days to analyze",
"default": 60,
},
},
"required": ["symbol"],
},
),
types.Tool(
name="detect-patterns",
description="Detect chart patterns in price data",
inputSchema={
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "Stock symbol to analyze",
}
},
"required": ["symbol"],
},
),
types.Tool(
name="position-size",
description="Calculate optimal position size based on risk parameters",
inputSchema={
"type": "object",
"properties": {
"symbol": {"type": "string", "description": "Stock symbol"},
"price": {
"type": "number",
"description": "Entry price (0 for current price)",
},
"stop_price": {"type": "number", "description": "Stop loss price"},
"risk_amount": {
"type": "number",
"description": "Dollar amount to risk",
},
"account_size": {
"type": "number",
"description": "Total account size in dollars",
},
},
"required": ["symbol", "stop_price", "risk_amount", "account_size"],
},
),
types.Tool(
name="suggest-stops",
description="Suggest stop loss levels based on technical analysis",
inputSchema={
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "Stock symbol to analyze",
}
},
"required": ["symbol"],
},
),
]
@server.call_tool()
async def handle_call_tool(
name: str, arguments: dict | None
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
"""Handle tool execution requests."""
if not arguments:
raise ValueError("Missing arguments")
try:
# Original analyze-stock tool
if name == "analyze-stock":
symbol = arguments.get("symbol")
if not symbol:
raise ValueError("Missing symbol")
# Fetch data
df = await market_data.get_historical_data(symbol)
# Add indicators
df = tech_analysis.add_core_indicators(df)
# Get trend status
trend = tech_analysis.check_trend_status(df)
analysis = f"""
Technical Analysis for {symbol}:
Trend Analysis:
- Above 20 SMA: {"✅ " if trend["above_20sma"] else "❌ "}
- Above 50 SMA: {"✅ " if trend["above_50sma"] else "❌ "}
- Above 200 SMA: {"✅ " if trend["above_200sma"] else "❌ "}
- 20/50 SMA Bullish Cross: {"✅ " if trend["20_50_bullish"] else "❌ "}
- 50/200 SMA Bullish Cross: {"✅ " if trend["50_200_bullish"] else "❌ "}
Momentum:
- RSI (14): {trend["rsi"]:.2f}
- MACD Bullish: {"✅ " if trend["macd_bullish"] else "❌ "}
Latest Price: ${df["close"].iloc[-1]:.2f}
Average True Range (14): {df["atr"].iloc[-1]:.2f}
Average Daily Range Percentage: {df["adrp"].iloc[-1]:.2f}%
Average Volume (20D): {int(df["avg_20d_vol"].iloc[-1])}
"""
return [types.TextContent(type="text", text=analysis)]
# Relative Strength Analysis
elif name == "relative-strength":
symbol = arguments.get("symbol")
benchmark = arguments.get("benchmark", "SPY")
if not symbol:
raise ValueError("Missing symbol")
# Calculate relative strength
rs_results = await rs_analysis.calculate_rs(market_data, symbol, benchmark)
# Format the results
rs_text = f"""
Relative Strength Analysis for {symbol} vs {benchmark}:
"""
# Check if we have any results
if not rs_results:
rs_text += "Insufficient historical data to calculate relative strength metrics."
return [types.TextContent(type="text", text=rs_text)]
for period, score in rs_results.items():
if period.startswith("RS_"):
days = period.split("_")[1]
rs_text += f"- {days} Relative Strength: {score}"
# Add classification
if score >= 80:
rs_text += " (Strong Outperformance) ⭐⭐⭐"
elif score >= 65:
rs_text += " (Moderate Outperformance) ⭐⭐"
elif score >= 50:
rs_text += " (Slight Outperformance) ⭐"
elif score >= 35:
rs_text += " (Slight Underperformance) ⚠️"
elif score >= 20:
rs_text += " (Moderate Underperformance) ⚠️⚠️"
else:
rs_text += " (Strong Underperformance) ⚠️⚠️⚠️"
rs_text += "\n"
rs_text += "\nPerformance Details:\n"
for period in ["21d", "63d", "126d", "252d"]:
# Check if we have data for this period
if (
f"Return_{period}" not in rs_results
or f"Benchmark_{period}" not in rs_results
or f"Excess_{period}" not in rs_results
):
continue
stock_return = rs_results.get(f"Return_{period}")
benchmark_return = rs_results.get(f"Benchmark_{period}")
excess = rs_results.get(f"Excess_{period}")
if (
stock_return is not None
and benchmark_return is not None
and excess is not None
):
rs_text += f"- {period}: {symbol} {stock_return:+.2f}% vs {benchmark} {benchmark_return:+.2f}% = {excess:+.2f}%\n"
            # If no performance details were added, the header is still the last line
            if rs_text.endswith("Performance Details:\n"):
rs_text += "No performance details available due to insufficient historical data.\n"
return [types.TextContent(type="text", text=rs_text)]
# Volume Profile Analysis
elif name == "volume-profile":
symbol = arguments.get("symbol")
lookback_days = arguments.get("lookback_days", 60)
if not symbol:
raise ValueError("Missing symbol")
# Get historical data
df = await market_data.get_historical_data(symbol, lookback_days + 10)
# Analyze volume profile
profile = volume_analysis.analyze_volume_profile(df.tail(lookback_days))
# Format the results
profile_text = f"""
Volume Profile Analysis for {symbol} (last {lookback_days} days):
Point of Control (POC): ${profile["point_of_control"]} (Price level with highest volume)
Value Area: ${profile["value_area_low"]} - ${profile["value_area_high"]} (70% of volume)
Volume by Price Level (High to Low):
"""
# Sort bins by volume and format
sorted_bins = sorted(
profile["bins"], key=lambda x: x["volume"], reverse=True
)
for i, bin_data in enumerate(sorted_bins[:5]): # Show top 5 volume levels
profile_text += f"{i + 1}. ${bin_data['price_low']} - ${bin_data['price_high']}: {bin_data['volume_percent']:.1f}% of volume\n"
return [types.TextContent(type="text", text=profile_text)]
# Pattern Recognition
elif name == "detect-patterns":
symbol = arguments.get("symbol")
if not symbol:
raise ValueError("Missing symbol")
# Get historical data
df = await market_data.get_historical_data(symbol, lookback_days=90)
# Detect patterns
pattern_results = pattern_recognition.detect_patterns(df)
# Format the results
if not pattern_results["patterns"]:
pattern_text = f"No significant chart patterns detected for {symbol} in the recent data."
else:
pattern_text = f"Chart Patterns Detected for {symbol}:\n\n"
for pattern in pattern_results["patterns"]:
pattern_text += f"- {pattern['type']}"
if "start_date" in pattern and "end_date" in pattern:
pattern_text += (
f" ({pattern['start_date']} to {pattern['end_date']})"
)
pattern_text += f": Price level ${pattern['price_level']}"
if "confidence" in pattern:
pattern_text += f" (Confidence: {pattern['confidence']})"
pattern_text += "\n"
pattern_text += "\nNote: Pattern recognition is not 100% reliable and should be confirmed with other forms of analysis."
return [types.TextContent(type="text", text=pattern_text)]
# Position Sizing
elif name == "position-size":
symbol = arguments.get("symbol")
price = arguments.get("price", 0)
stop_price = arguments.get("stop_price")
risk_amount = arguments.get("risk_amount")
account_size = arguments.get("account_size")
if not all([symbol, stop_price, risk_amount, account_size]):
raise ValueError("Missing required parameters")
# If price is 0, get the current price
if price == 0:
df = await market_data.get_historical_data(symbol, lookback_days=5)
price = df["close"].iloc[-1]
# Calculate position size
position_results = risk_analysis.calculate_position_size(
price=price,
stop_price=stop_price,
risk_amount=risk_amount,
account_size=account_size,
)
# Format the results
position_text = f"""
Position Sizing for {symbol} at ${price:.2f}:
📊 Recommended Position:
- {position_results["recommended_shares"]} shares (${position_results["position_cost"]:.2f})
- Risk: ${position_results["dollar_risk"]:.2f} ({position_results["account_percent_risked"]:.2f}% of account)
- Risk per share: ${position_results["risk_per_share"]:.2f}
🎯 Potential Targets (R-Multiples):
- R1 (1:1): ${position_results["r_multiples"]["r1"]:.2f}
- R2 (2:1): ${position_results["r_multiples"]["r2"]:.2f}
- R3 (3:1): ${position_results["r_multiples"]["r3"]:.2f}
Remember: "Good trades don't just happen, they're the result of careful planning!"
"""
return [types.TextContent(type="text", text=position_text)]
# Suggest Stop Levels
elif name == "suggest-stops":
symbol = arguments.get("symbol")
if not symbol:
raise ValueError("Missing symbol")
# Get historical data
df = await market_data.get_historical_data(symbol, lookback_days=60)
# Add indicators
df = tech_analysis.add_core_indicators(df)
# Get stop suggestions
stops = risk_analysis.suggest_stop_levels(df)
latest_close = df["close"].iloc[-1]
# Format the results
stops_text = f"""
Suggested Stop Levels for {symbol} (Current Price: ${latest_close:.2f}):
ATR-Based Stops:
- Conservative (1x ATR): ${stops["atr_1x"]:.2f} ({((latest_close - stops["atr_1x"]) / latest_close * 100):.2f}% from current price)
- Moderate (2x ATR): ${stops["atr_2x"]:.2f} ({((latest_close - stops["atr_2x"]) / latest_close * 100):.2f}% from current price)
- Aggressive (3x ATR): ${stops["atr_3x"]:.2f} ({((latest_close - stops["atr_3x"]) / latest_close * 100):.2f}% from current price)
Percentage-Based Stops:
- Tight (2%): ${stops["percent_2"]:.2f}
- Medium (5%): ${stops["percent_5"]:.2f}
- Wide (8%): ${stops["percent_8"]:.2f}
Technical Levels:
"""
if "sma_20" in stops:
stops_text += f"- 20-day SMA: ${stops['sma_20']:.2f} ({((latest_close - stops['sma_20']) / latest_close * 100):.2f}% from current price)\n"
if "sma_50" in stops:
stops_text += f"- 50-day SMA: ${stops['sma_50']:.2f} ({((latest_close - stops['sma_50']) / latest_close * 100):.2f}% from current price)\n"
if "sma_200" in stops:
stops_text += f"- 200-day SMA: ${stops['sma_200']:.2f} ({((latest_close - stops['sma_200']) / latest_close * 100):.2f}% from current price)\n"
if "recent_swing" in stops:
stops_text += f"- Recent Swing Low: ${stops['recent_swing']:.2f} ({((latest_close - stops['recent_swing']) / latest_close * 100):.2f}% from current price)\n"
return [types.TextContent(type="text", text=stops_text)]
else:
raise ValueError(f"Unknown tool: {name}")
except Exception as e:
return [
types.TextContent(
type="text", text=f"\n<observation>\nError: {str(e)}\n</observation>\n"
)
]
# Keep the main function as is
async def main():
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="mcp-trader",
server_version="0.1.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
if __name__ == "__main__":
# Run as MCP server
asyncio.run(main())
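Before handing the server to Claude, you can smoke-test the new tools by calling the handler directly from a scratch script. This is a sketch with two assumptions: the package is importable as mcp_trader, and the @server.call_tool() decorator returns the underlying function unchanged (if it doesn't in your SDK version, lift the dispatch logic into a plain helper and call that instead).

import asyncio

from mcp_trader.server import handle_call_tool


async def main():
    # Exercise two of the new tools directly, bypassing the MCP transport
    for name, args in [
        ("relative-strength", {"symbol": "NVDA", "benchmark": "SPY"}),
        ("suggest-stops", {"symbol": "NVDA"}),
    ]:
        result = await handle_call_tool(name, args)
        print(result[0].text)


asyncio.run(main())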
Testing It Out with Claude
With our MCP server updated, we can now test these new features with Claude Desktop. Here’s an example of a conversation you might have:

What’s Next?
We’ve turned our basic MCP server into a multi-tool Swiss Army knife for stock analysis.
In future articles, I’ll show you how to:
- Add chart image generation capabilities
- Incorporate fundamental data analysis
- Build customized scanning tools
- Develop backtesting capabilities
If you’re interested in a complete AI trading solution, consider exploring Capital Companion, my AI-powered trading assistant with technical analysis, intelligent risk management, and portfolio optimization features.
The complete code for this tutorial is available on my GitHub. Remember that technical analysis tools are just one component of a comprehensive trading strategy. Happy symbol swapping!