Spaces: Running on Zero
Commit · 18b9531
1 Parent(s): 9b88b42

feat: complete MVP integration with Gradio UI, database schema, and tests
- Integrate Gradio UI with 3-phase workflow pipeline
- Implement portfolio input parser supporting multiple formats
- Add comprehensive results formatter with markdown output
- Create Supabase PostgreSQL schema with RLS policies
- Implement database persistence (save_analysis, get_analysis_history)
- Build integration test suite for MCP pipeline
- Fix Pydantic AI agent initialization (output_type parameter)
- Add missing config fields (anthropic_model, alpaca keys)
- Update README with setup instructions and architecture docs
Status: MVP ~90% complete, ready for deployment
- README.md +270 -5
- app.py +226 -15
- backend/agents/base_agent.py +6 -6
- backend/agents/portfolio_analyst.py +2 -2
- backend/config.py +13 -0
- backend/database.py +61 -9
- backend/mcp_servers/fmp_mcp.py +1 -1
- backend/mcp_servers/fred_mcp.py +1 -1
- database/schema.sql +152 -0
- database/setup_database.py +65 -0
- tests/__init__.py +1 -0
- tests/conftest.py +33 -0
- tests/test_integration.py +289 -0
README.md
CHANGED
@@ -1,14 +1,279 @@
 ---
 title: Portfolio Intelligence Platform
-emoji:
-colorFrom:
-colorTo:
+emoji: 📊
+colorFrom: blue
+colorTo: purple
 sdk: gradio
 sdk_version: 5.49.1
 app_file: app.py
 pinned: false
 license: mit
-short_description:
+short_description: AI portfolio analysis with multi-agent MCP
 ---
 
-
+# 📊 Portfolio Intelligence Platform
+
+AI-powered portfolio analysis using **multi-agent MCP orchestration** with quantitative models and LLM synthesis.
+
+## 🎯 What It Does
+
+The Portfolio Intelligence Platform analyses your investment portfolio using:
+- **6 MCP Servers**: Yahoo Finance, Financial Modeling Prep, Trading-MCP, FRED, Portfolio Optimizer, Risk Analyzer
+- **3-Phase Architecture**: Data Collection → Computation → AI Synthesis
+- **Quantitative Models**: HRP, Black-Litterman, Mean-Variance optimisation
+- **Risk Analysis**: VaR, CVaR, Monte Carlo simulation, Sharpe ratio
+- **AI Insights**: Claude Sonnet 4.5 with Pydantic AI for structured analysis
+
+## 🚀 Quick Start
+
+### Prerequisites
+
+- Python 3.12+
+- uv package manager (install with `pip install uv`)
+- Anthropic API key (for Claude)
+- Optional: Supabase account for persistence
+- Optional: FMP and FRED API keys for enhanced data
+
+### Installation
+
+1. **Clone the repository**:
+```bash
+git clone <your-repo-url>
+cd Portfolio-Intelligence-Platform
+```
+
+2. **Install dependencies using uv**:
+```bash
+uv sync
+```
+
+3. **Set up environment variables**:
+Copy `.env` file and fill in your API keys:
+```bash
+# Required
+ANTHROPIC_API_KEY=your_anthropic_api_key_here
+
+# Optional (for enhanced features)
+SUPABASE_URL=your_supabase_url_here
+SUPABASE_KEY=your_supabase_anon_key_here
+FMP_API_KEY=your_financial_modeling_prep_key_here
+FRED_API_KEY=your_fred_api_key_here
+```
+
+4. **Set up database (optional)**:
+If using Supabase for persistence:
+```bash
+uv run python database/setup_database.py
+```
+This will output SQL that you need to run in your Supabase SQL Editor.
+
+5. **Run the application**:
+```bash
+uv run python app.py
+```
+
+Visit http://localhost:7860 to use the application.
+
+## 📖 Usage
+
+### Basic Portfolio Analysis
+
+1. Enter your portfolio holdings in the format:
+```
+AAPL 50
+TSLA 25 shares
+NVDA $5000
+BTC 0.5
+```
+
+2. Click "Analyse Portfolio"
+
+3. Get comprehensive analysis including:
+- Portfolio health score
+- Risk metrics (VaR, CVaR, Sharpe ratio)
+- Optimised allocations (HRP, Black-Litterman, Mean-Variance)
+- AI-generated insights and recommendations
+- MCP server transparency
+
+### Example Portfolios
+
+Try these pre-loaded examples:
+- **Tech Growth**: `AAPL 50\nTSLA 25 shares\nNVDA $5000`
+- **Conservative**: `VOO 100 shares\nVTI 75 shares\nSCHD 50 shares`
+- **Balanced**: `VTI $25000\nVXUS $15000\nBND $15000\nGLD $5000`
+
+## 🏗️ Architecture
+
+### Three-Phase Workflow
+
+```
+Phase 1: Data Layer (4 MCPs)
+├─ Yahoo Finance: Real-time quotes, historical prices
+├─ FMP: Company fundamentals, financial statements
+├─ Trading-MCP: Technical indicators (RSI, MACD)
+└─ FRED: Macroeconomic data
+
+Phase 2: Computation Layer (2 MCPs)
+├─ Portfolio Optimizer: HRP, Black-Litterman, Mean-Variance
+└─ Risk Analyzer: VaR, CVaR, Monte Carlo, risk metrics
+
+Phase 3: LLM Synthesis
+└─ Claude Sonnet 4.5: AI insights and recommendations
+```
+
+### Technology Stack
+
+- **Frontend**: Gradio 5.49.1
+- **AI Framework**: Pydantic AI 1.18.0 (with native prompt caching)
+- **Orchestration**: LangGraph
+- **MCP Protocol**: FastMCP 2.13.0
+- **Quantitative Finance**: PyPortfolioOpt, riskfolio-lib, arch
+- **Database**: Supabase PostgreSQL (optional)
+
+## 🔧 Development
+
+### Project Structure
+
+```
+Portfolio-Intelligence-Platform/
+├── app.py                          # Gradio interface
+├── backend/
+│   ├── config.py                   # Configuration management
+│   ├── database.py                 # Supabase client
+│   ├── mcp_router.py               # MCP orchestration router
+│   ├── agents/
+│   │   ├── base_agent.py           # Pydantic AI base agent
+│   │   ├── portfolio_analyst.py    # Portfolio analysis agent
+│   │   └── workflow.py             # LangGraph 3-phase workflow
+│   ├── models/
+│   │   ├── portfolio.py            # Portfolio data models
+│   │   └── agent_state.py          # LangGraph state models
+│   └── mcp_servers/
+│       ├── yahoo_finance_mcp.py
+│       ├── fmp_mcp.py
+│       ├── trading_mcp.py
+│       ├── fred_mcp.py
+│       ├── portfolio_optimizer_mcp.py
+│       └── risk_analyzer_mcp.py
+├── database/
+│   ├── schema.sql                  # Database schema
+│   └── setup_database.py           # Setup script
+└── pyproject.toml                  # uv dependencies
+```
+
+### Running Tests
+
+```bash
+uv run pytest tests/
+```
+
+### Code Quality
+
+```bash
+# Format code
+uv run black backend/ app.py
+
+# Type checking
+uv run mypy backend/ app.py
+
+# Linting
+uv run ruff check backend/ app.py
+```
+
+## 🔑 API Keys
+
+### Required
+- **Anthropic API**: Get from https://console.anthropic.com/
+
+### Optional (for enhanced features)
+- **Supabase**: Get from https://supabase.com/ (free tier available)
+- **Financial Modeling Prep**: Get from https://financialmodelingprep.com/ (250 requests/day free)
+- **FRED**: Get from https://fred.stlouisfed.org/docs/api/api_key.html (free)
+
+### Cost Estimates
+
+With native prompt caching:
+- ~£0.01-0.02 per portfolio analysis
+- ~£1-2/month for 100 analyses
+- Infrastructure: £0/month (using free tiers)
+
+## 📊 Features
+
+### Portfolio Optimization
+
+- **Hierarchical Risk Parity (HRP)**: Diversification-focused allocation
+- **Black-Litterman**: Bayesian approach with market equilibrium
+- **Mean-Variance**: Classic Markowitz efficient frontier
+
+### Risk Analysis
+
+- **VaR/CVaR**: Value at Risk and Conditional VaR (95%, 99% confidence)
+- **Monte Carlo Simulation**: 10,000 simulations for tail risk
+- **Sharpe/Sortino Ratios**: Risk-adjusted return metrics
+- **Maximum Drawdown**: Historical worst-case scenarios
+
+### AI Insights
+
+- Portfolio health scoring (0-10)
+- Personalized recommendations
+- Risk tolerance assessment
+- Transparent reasoning with MCP server traceability
+
+## 🚢 Deployment
+
+### HuggingFace Spaces (Recommended)
+
+This application is designed for one-click deployment to HuggingFace Spaces:
+
+1. Push to your HuggingFace Space repository
+2. Add secrets in Space settings (ANTHROPIC_API_KEY, etc.)
+3. Application auto-deploys
+
+### Local Production
+
+```bash
+# Using uv
+uv run python app.py
+
+# Or with Gunicorn (for production)
+uv run gunicorn app:app --workers 4 --bind 0.0.0.0:7860
+```
+
+## 📝 License
+
+MIT License - see LICENSE file for details.
+
+## 🙏 Acknowledgements
+
+- Built for MCP 1st Birthday Hackathon
+- Uses Anthropic's Claude Sonnet 4.5
+- Quantitative models from PyPortfolioOpt and riskfolio-lib
+- MCP protocol by Anthropic
+
+## 🐛 Troubleshooting
+
+### Common Issues
+
+**"Analysis failed" error**:
+- Check that ANTHROPIC_API_KEY is set correctly in .env
+- Verify API key has sufficient credits
+
+**No market data returned**:
+- Yahoo Finance may be rate-limited (yfinance is free but unofficial)
+- Consider getting FMP API key for more reliable data
+
+**Database errors**:
+- Verify SUPABASE_URL and SUPABASE_KEY are correct
+- Run database/setup_database.py to create tables
+- Application works without database (analysis won't be saved)
+
+## 📧 Support
+
+For issues or questions:
+1. Check the documentation in `finance-agentic-ai/` directory
+2. Review existing GitHub issues
+3. Create a new issue with detailed description
+
+---
+
+Built with ❤️ using Pydantic AI, LangGraph, and FastMCP
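The holdings formats the README's Usage section describes (`AAPL 50`, `TSLA 25 shares`, `NVDA $5000`, `BTC 0.5`) are parsed by the regex this commit adds to `app.py`. A standalone sketch of that parser (the function name here is illustrative; the pattern is copied from the diff):

```python
import re
from typing import Any

def parse_holdings(text: str) -> list[dict[str, Any]]:
    """Parse lines like 'AAPL 50', 'TSLA 25 shares', 'NVDA $5000', 'BTC 0.5'."""
    holdings = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        # TICKER QUANTITY [shares] or TICKER $AMOUNT, same regex as app.py
        m = re.match(r'([A-Za-z]+)\s+(\$)?([0-9.]+)\s*(shares)?', line, re.IGNORECASE)
        if m:
            is_dollar = m.group(2) == '$'
            amount = float(m.group(3))
            holdings.append({
                'ticker': m.group(1).upper(),
                'quantity': 0 if is_dollar else amount,
                'dollar_amount': amount if is_dollar else 0,
            })
    return holdings

print(parse_holdings("AAPL 50\nNVDA $5000\nBTC 0.5"))
```

Note that a `$` prefix routes the number into `dollar_amount` rather than `quantity`, which is why the two example formats can be mixed freely in one input box.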
app.py
CHANGED
@@ -4,24 +4,229 @@ Main application file for the hackathon demo.
 """
 
 import gradio as gr
-
-import …
+import asyncio
+import re
+import logging
+from typing import List, Dict, Any
+from datetime import datetime
 from dotenv import load_dotenv
 
+from backend.mcp_router import mcp_router
+from backend.agents.workflow import PortfolioAnalysisWorkflow
+from backend.models.agent_state import AgentState
+
+# Load environment variables
 load_dotenv()
 
+# Configure logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+# Initialize workflow
+workflow = PortfolioAnalysisWorkflow(mcp_router)
+
+
+def parse_portfolio_input(portfolio_text: str) -> List[Dict[str, Any]]:
+    """Parse portfolio input text into structured holdings.
+
+    Supports formats:
+    - AAPL 50 (50 shares)
+    - TSLA 25 shares
+    - NVDA $5000 (dollar amount)
+    - BTC 0.5 (fractional shares/crypto)
+
+    Args:
+        portfolio_text: Raw text input from user
+
+    Returns:
+        List of holding dictionaries
+    """
+    holdings = []
+    lines = portfolio_text.strip().split('\n')
+
+    for line in lines:
+        line = line.strip()
+        if not line:
+            continue
+
+        # Match patterns: TICKER QUANTITY [shares] or TICKER $AMOUNT
+        match = re.match(r'([A-Za-z]+)\s+(\$)?([0-9.]+)\s*(shares)?', line, re.IGNORECASE)
+        if match:
+            ticker = match.group(1).upper()
+            is_dollar = match.group(2) == '$'
+            amount = float(match.group(3))
+
+            holdings.append({
+                'ticker': ticker,
+                'quantity': amount if not is_dollar else 0,
+                'dollar_amount': amount if is_dollar else 0,
+                'cost_basis': 0  # Will be calculated from current price
+            })
+        else:
+            logger.warning(f"Could not parse line: {line}")
+
+    return holdings
+
+
+async def run_analysis(portfolio_text: str) -> str:
+    """Run portfolio analysis workflow.
+
+    Args:
+        portfolio_text: Raw portfolio input
+
+    Returns:
+        Formatted analysis results
+    """
+    try:
+        # Parse input
+        holdings = parse_portfolio_input(portfolio_text)
+
+        if not holdings:
+            return "❌ **Error**: Could not parse any holdings. Please use format like:\n```\nAAPL 50\nTSLA 25 shares\nNVDA $5000\n```"
+
+        logger.info(f"Parsed {len(holdings)} holdings")
+
+        # Create initial state
+        initial_state: AgentState = {
+            'portfolio_id': f"demo_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
+            'user_query': 'Analyse my portfolio',
+            'risk_tolerance': 'moderate',
+            'holdings': holdings,
+            'historical_prices': {},
+            'fundamentals': {},
+            'economic_data': {},
+            'realtime_data': {},
+            'technical_indicators': {},
+            'optimisation_results': {},
+            'risk_analysis': {},
+            'ai_synthesis': '',
+            'recommendations': [],
+            'reasoning_steps': [],
+            'current_step': 'starting',
+            'errors': [],
+            'mcp_calls': []
+        }
+
+        # Run workflow
+        logger.info("Starting workflow execution...")
+        final_state = await workflow.run(initial_state)
 
-
-…
+        # Format results
+        return format_analysis_results(final_state, holdings)
+
+    except Exception as e:
+        logger.error(f"Analysis error: {e}", exc_info=True)
+        return f"❌ **Error during analysis**: {str(e)}\n\nPlease check your API keys in .env file and try again."
+
+
+def format_analysis_results(state: AgentState, holdings: List[Dict]) -> str:
+    """Format workflow results for Gradio display.
 
     Args:
-        …
+        state: Final workflow state
+        holdings: Original holdings list
 
     Returns:
-        …
+        Formatted markdown string
     """
-    …
-    …
+    output = []
+
+    # Header
+    output.append("# 📊 Portfolio Analysis Results\n")
+
+    # Portfolio Summary
+    output.append("## 📈 Portfolio Summary\n")
+    tickers = [h['ticker'] for h in holdings]
+    output.append(f"**Holdings**: {', '.join(tickers)}\n")
+    output.append(f"**Number of positions**: {len(holdings)}\n")
+
+    # Check for errors
+    if state.get('errors'):
+        output.append("\n## ⚠️ Warnings\n")
+        for error in state['errors']:
+            output.append(f"- {error}\n")
+
+    # AI Synthesis
+    if state.get('ai_synthesis'):
+        output.append("\n## 🤖 AI Analysis\n")
+        output.append(state['ai_synthesis'])
+        output.append("\n")
+
+    # Recommendations
+    if state.get('recommendations'):
+        output.append("\n## 💡 Recommendations\n")
+        for i, rec in enumerate(state['recommendations'], 1):
+            output.append(f"{i}. {rec}\n")
+
+    # Risk Analysis
+    if state.get('risk_analysis'):
+        output.append("\n## ⚠️ Risk Metrics\n")
+        risk = state['risk_analysis']
+        if isinstance(risk, dict):
+            for key, value in risk.items():
+                if isinstance(value, (int, float)):
+                    output.append(f"- **{key}**: {value:.2f}\n")
+                else:
+                    output.append(f"- **{key}**: {value}\n")
+
+    # Optimization Results
+    if state.get('optimisation_results'):
+        output.append("\n## 🎯 Portfolio Optimisation\n")
+        opt = state['optimisation_results']
+        if isinstance(opt, dict):
+            for method, result in opt.items():
+                output.append(f"\n### {method}\n")
+                if isinstance(result, dict) and 'weights' in result:
+                    output.append("**Suggested Allocation**:\n")
+                    for ticker, weight in result['weights'].items():
+                        output.append(f"- {ticker}: {weight*100:.1f}%\n")
+
+    # MCP Calls (for transparency)
+    if state.get('mcp_calls'):
+        output.append(f"\n## 🔧 MCP Servers Called\n")
+        mcp_servers = set()
+        for call in state['mcp_calls']:
+            if isinstance(call, dict) and 'mcp_server' in call:
+                mcp_servers.add(call['mcp_server'])
+        output.append(f"Used {len(mcp_servers)} MCP servers: {', '.join(sorted(mcp_servers))}\n")
+
+    # Execution time
+    output.append(f"\n---\n*Analysis completed at {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}*")
+
+    return '\n'.join(output)
+
+
+def analyse_portfolio_sync(portfolio_input: str, progress=gr.Progress()) -> str:
+    """Synchronous wrapper for async analysis (required by Gradio).
+
+    Args:
+        portfolio_input: Portfolio holdings as text
+        progress: Gradio progress tracker
+
+    Returns:
+        Analysis results
+    """
+    if not portfolio_input.strip():
+        return "Please enter your portfolio holdings to get started."
+
+    # Show progress
+    progress(0.1, desc="Parsing portfolio...")
+
+    # Run async function in event loop
+    try:
+        loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(loop)
+
+        progress(0.2, desc="Starting analysis workflow...")
+        result = loop.run_until_complete(run_analysis(portfolio_input))
+
+        progress(1.0, desc="Complete!")
+        loop.close()
+
+        return result
+    except Exception as e:
+        logger.error(f"Error in sync wrapper: {e}", exc_info=True)
+        return f"❌ **Error**: {str(e)}"
 
 
 def create_interface() -> gr.Blocks:
@@ -36,7 +241,7 @@ def create_interface() -> gr.Blocks:
         fill_height=True
     ) as demo:
         gr.Markdown("# 📊 Portfolio Intelligence Platform")
-        gr.Markdown("AI-powered portfolio analysis using MCP orchestration")
+        gr.Markdown("AI-powered portfolio analysis using multi-agent MCP orchestration")
 
         with gr.Row():
             with gr.Column(scale=1):
@@ -46,15 +251,15 @@ def create_interface() -> gr.Blocks:
                     label="Portfolio Holdings",
                     placeholder="AAPL 50\nTSLA 25 shares\nNVDA $5000\nBTC 0.5",
                     lines=10,
-                    info="Enter one holding per line"
+                    info="Enter one holding per line (TICKER QUANTITY or TICKER $AMOUNT)"
                 )
 
-                analyse_btn = gr.Button("Analyse Portfolio", variant="primary")
+                analyse_btn = gr.Button("🚀 Analyse Portfolio", variant="primary", size="lg")
 
                 # Example portfolios
                 gr.Examples(
                     examples=[
-                        ["AAPL 50\nTSLA 25 shares\nNVDA $5000…
+                        ["AAPL 50\nTSLA 25 shares\nNVDA $5000"],
                         ["VOO 100 shares\nVTI 75 shares\nSCHD 50 shares"],
                         ["VTI $25000\nVXUS $15000\nBND $15000\nGLD $5000"],
                     ],
@@ -66,15 +271,21 @@ def create_interface() -> gr.Blocks:
                 gr.Markdown("## Analysis Results")
 
                 output = gr.Markdown(
-                    value="Enter your portfolio and click…
+                    value="👈 Enter your portfolio and click **Analyse Portfolio** to get started.\n\n"
+                          "The system will:\n"
+                          "1. **Fetch market data** (prices, fundamentals, economic indicators)\n"
+                          "2. **Run optimisations** (HRP, Black-Litterman, Mean-Variance)\n"
+                          "3. **Analyse risk** (VaR, CVaR, Sharpe ratio)\n"
+                          "4. **Generate AI insights** with actionable recommendations",
                     label="AI Insights"
                 )
 
         # Event handlers
        analyse_btn.click(
-            fn=…
+            fn=analyse_portfolio_sync,
             inputs=portfolio_input,
-            outputs=output
+            outputs=output,
+            show_progress=True
         )
 
     return demo
backend/agents/base_agent.py
CHANGED
@@ -21,7 +21,7 @@ class BasePortfolioAgent(Generic[T]):
 
     def __init__(
         self,
-        …
+        output_type: type[T],
         system_prompt: str,
         agent_name: str,
         model: str | None = None,
@@ -29,20 +29,20 @@ class BasePortfolioAgent(Generic[T]):
         """Initialize the base agent.
 
         Args:
-            …
+            output_type: Pydantic model class for structured output
             system_prompt: System prompt for the agent
             agent_name: Name of the agent for logging
-            model: LLM model to use (defaults to settings.…
+            model: LLM model to use (defaults to settings.anthropic_model)
         """
         self.agent_name = agent_name
-        self.model = model or settings.…
-        self.…
+        self.model = f"anthropic:{model or settings.anthropic_model}"
+        self.output_type = output_type
         self.system_prompt = system_prompt
 
         # Initialize Pydantic AI agent with native prompt caching
         self.agent = Agent(
             self.model,
-            …
+            output_type=output_type,
             system_prompt=system_prompt,
         )
 
backend/agents/portfolio_analyst.py
CHANGED
@@ -59,7 +59,7 @@ class PortfolioAnalystAgent(BasePortfolioAgent[PortfolioAnalysisOutput]):
 
     def __init__(self):
         super().__init__(
-            …
+            output_type=PortfolioAnalysisOutput,
             system_prompt=SYSTEM_PROMPT,
             agent_name="PortfolioAnalyst",
         )
@@ -133,7 +133,7 @@ class QuickInsightAgent(BasePortfolioAgent[BaseModel]):
             confidence: float = Field(..., ge=0.0, le=1.0)
 
         super().__init__(
-            …
+            output_type=QuickInsight,
             system_prompt="You are a helpful financial assistant. Provide concise, accurate answers to investment questions.",
             agent_name="QuickInsight",
         )
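The fix above threads the structured-output model through the generic base class: `BasePortfolioAgent[T]` stores an `output_type: type[T]` and forwards it to the underlying agent. The typing pattern can be sketched with the standard library alone (all class names below are illustrative stand-ins, not the Pydantic AI API):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class StubResult(Generic[T]):
    output: T

class BaseAgent(Generic[T]):
    def __init__(self, output_type: type[T], agent_name: str) -> None:
        # Mirrors the commit: store the output model for later validation
        self.output_type = output_type
        self.agent_name = agent_name

    def run(self, value: T) -> StubResult[T]:
        # The real agent would call the LLM and validate against output_type;
        # here we just check the instance and wrap it
        assert isinstance(value, self.output_type)
        return StubResult(output=value)

class Insight:
    def __init__(self, answer: str) -> None:
        self.answer = answer

agent = BaseAgent(output_type=Insight, agent_name="QuickInsight")
result = agent.run(Insight("diversify"))
print(result.output.answer)
```

Because subclasses parameterise the generic (`BaseAgent[Insight]` here, `BasePortfolioAgent[PortfolioAnalysisOutput]` in the commit), type checkers can see the concrete result type at every call site.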
backend/config.py
CHANGED
@@ -14,6 +14,7 @@ class Settings(BaseSettings):
 
     Attributes:
         anthropic_api_key: Anthropic API key for Claude
+        anthropic_model: Anthropic model ID (default: claude-sonnet-4-5-20250929)
         supabase_url: Supabase project URL
         supabase_key: Supabase anon key
         fmp_api_key: Financial Modeling Prep API key
@@ -27,6 +28,10 @@
         default="",
         validation_alias="ANTHROPIC_API_KEY"
     )
+    anthropic_model: str = Field(
+        default="claude-sonnet-4-5-20250929",
+        validation_alias="ANTHROPIC_MODEL"
+    )
 
     # Database Configuration
     supabase_url: Optional[str] = Field(
@@ -47,6 +52,14 @@
         default=None,
         validation_alias="FRED_API_KEY"
     )
+    alpaca_api_key: Optional[str] = Field(
+        default=None,
+        validation_alias="ALPACA_API_KEY"
+    )
+    alpaca_secret_key: Optional[str] = Field(
+        default=None,
+        validation_alias="ALPACA_SECRET_KEY"
+    )
 
     # Application Settings
     environment: str = Field(
backend/database.py
CHANGED

@@ -3,7 +3,7 @@
 Handles Supabase PostgreSQL connections and operations.
 """

-from typing import Optional
+from typing import Optional, Dict, Any, List
 from supabase import create_client, Client
 from backend.config import settings
 import logging

@@ -43,29 +43,81 @@ class Database:
         """
         return self.client is not None

-    async def save_analysis(
+    async def save_analysis(
+        self,
+        portfolio_id: str,
+        analysis_results: Dict[str, Any]
+    ) -> bool:
+        """Save portfolio analysis results to database.

         Args:
-            analysis_result: Analysis results from AI agents
+            portfolio_id: Portfolio ID
+            analysis_results: Complete analysis results from workflow

         Returns:
             True if saved successfully, False otherwise
         """
         if not self.is_connected():
-            logger.warning("Database not connected
+            logger.warning("Database not connected, skipping save")
             return False

         try:
+            # Extract data from analysis results
+            data = {
+                'portfolio_id': portfolio_id,
+                'holdings_snapshot': analysis_results.get('holdings', []),
+                'market_data': analysis_results.get('market_data', {}),
+                'risk_metrics': analysis_results.get('risk_analysis', {}),
+                'optimisation_results': analysis_results.get('optimisation_results', {}),
+                'ai_synthesis': analysis_results.get('ai_synthesis', ''),
+                'recommendations': analysis_results.get('recommendations', []),
+                'reasoning_steps': analysis_results.get('reasoning_steps', []),
+                'mcp_calls': analysis_results.get('mcp_calls', []),
+                'execution_time_ms': analysis_results.get('execution_time_ms'),
+                'model_version': analysis_results.get('model_version', 'claude-sonnet-4-5'),
+            }
+
+            # Insert into portfolio_analyses table
+            result = self.client.table('portfolio_analyses').insert(data).execute()
+
+            logger.info(f"Saved analysis for portfolio {portfolio_id}")
             return True
+
         except Exception as e:
             logger.error(f"Failed to save analysis: {e}")
             return False

+    async def get_analysis_history(
+        self,
+        portfolio_id: str,
+        limit: int = 10
+    ) -> List[Dict[str, Any]]:
+        """Get analysis history for a portfolio.
+
+        Args:
+            portfolio_id: Portfolio ID
+            limit: Maximum number of analyses to return
+
+        Returns:
+            List of analysis results
+        """
+        if not self.is_connected():
+            return []
+
+        try:
+            result = self.client.table('portfolio_analyses') \
+                .select('*') \
+                .eq('portfolio_id', portfolio_id) \
+                .order('analysis_date', desc=True) \
+                .limit(limit) \
+                .execute()
+
+            return result.data if result.data else []
+
+        except Exception as e:
+            logger.error(f"Failed to get analysis history: {e}")
+            return []
+

 # Global database instance
 db = Database()
backend/mcp_servers/fmp_mcp.py
CHANGED

@@ -24,7 +24,7 @@ mcp = FastMCP("financial-modeling-prep")

 # API Configuration
 BASE_URL = "https://financialmodelingprep.com/api/v3"
-API_KEY = settings.
+API_KEY = settings.fmp_api_key


 class CompanyProfileRequest(BaseModel):
backend/mcp_servers/fred_mcp.py
CHANGED

@@ -22,7 +22,7 @@ mcp = FastMCP("fred")

 # API Configuration
 BASE_URL = "https://api.stlouisfed.org/fred"
-API_KEY = settings.
+API_KEY = settings.fred_api_key


 class SeriesRequest(BaseModel):
database/schema.sql
ADDED

@@ -0,0 +1,152 @@
-- Portfolio Intelligence Platform Database Schema
-- For Supabase PostgreSQL

-- Enable UUID extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Users table
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    email VARCHAR(255) UNIQUE NOT NULL,
    username VARCHAR(100) UNIQUE NOT NULL,
    is_active BOOLEAN DEFAULT true,
    is_demo BOOLEAN DEFAULT false,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_username ON users(username);

-- Portfolios table
CREATE TABLE IF NOT EXISTS portfolios (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    name VARCHAR(200) NOT NULL,
    description TEXT,
    risk_tolerance VARCHAR(20) DEFAULT 'moderate',
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX idx_portfolios_user_id ON portfolios(user_id);
CREATE INDEX idx_portfolios_created_at ON portfolios(created_at DESC);

-- Portfolio holdings table
CREATE TABLE IF NOT EXISTS portfolio_holdings (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    portfolio_id UUID NOT NULL REFERENCES portfolios(id) ON DELETE CASCADE,
    ticker VARCHAR(20) NOT NULL,
    quantity NUMERIC(20, 8) NOT NULL,
    cost_basis NUMERIC(20, 2),
    asset_type VARCHAR(20) DEFAULT 'stock',
    added_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX idx_holdings_portfolio_id ON portfolio_holdings(portfolio_id);
CREATE INDEX idx_holdings_ticker ON portfolio_holdings(ticker);

-- Portfolio analyses table
CREATE TABLE IF NOT EXISTS portfolio_analyses (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    portfolio_id UUID NOT NULL REFERENCES portfolios(id) ON DELETE CASCADE,
    analysis_date TIMESTAMP WITH TIME ZONE DEFAULT NOW(),

    -- Analysis results (JSONB for flexibility)
    holdings_snapshot JSONB NOT NULL,
    market_data JSONB,
    risk_metrics JSONB,
    optimisation_results JSONB,
    ai_synthesis TEXT,
    recommendations JSONB,

    -- Metadata
    reasoning_steps JSONB,
    mcp_calls JSONB,
    execution_time_ms INTEGER,
    model_version VARCHAR(50),

    -- Computed fields
    total_value NUMERIC(20, 2),
    health_score INTEGER CHECK (health_score >= 0 AND health_score <= 10),
    risk_level VARCHAR(20),

    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX idx_analyses_portfolio_id ON portfolio_analyses(portfolio_id);
CREATE INDEX idx_analyses_date ON portfolio_analyses(analysis_date DESC);
CREATE INDEX idx_analyses_health_score ON portfolio_analyses(health_score);

-- Create demo user for testing
INSERT INTO users (id, email, username, is_demo)
VALUES (
    '00000000-0000-0000-0000-000000000001'::UUID,
    '[email protected]',
    'demo-user',
    true
) ON CONFLICT (email) DO NOTHING;

-- Function to update updated_at timestamp
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = NOW();
    RETURN NEW;
END;
$$ language 'plpgsql';

-- Triggers for updated_at
CREATE TRIGGER update_users_updated_at BEFORE UPDATE ON users
    FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();

CREATE TRIGGER update_portfolios_updated_at BEFORE UPDATE ON portfolios
    FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();

-- Grant permissions (Supabase uses RLS - Row Level Security)
-- These are basic permissions; adjust based on your auth setup
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE portfolios ENABLE ROW LEVEL SECURITY;
ALTER TABLE portfolio_holdings ENABLE ROW LEVEL SECURITY;
ALTER TABLE portfolio_analyses ENABLE ROW LEVEL SECURITY;

-- Basic RLS policies (adjust for production)
-- Allow users to read their own data
CREATE POLICY users_read_own ON users
    FOR SELECT
    USING (auth.uid()::UUID = id OR is_demo = true);

CREATE POLICY portfolios_read_own ON portfolios
    FOR SELECT
    USING (auth.uid()::UUID = user_id OR user_id = '00000000-0000-0000-0000-000000000001'::UUID);

CREATE POLICY portfolios_insert_own ON portfolios
    FOR INSERT
    WITH CHECK (auth.uid()::UUID = user_id OR user_id = '00000000-0000-0000-0000-000000000001'::UUID);

CREATE POLICY portfolios_update_own ON portfolios
    FOR UPDATE
    USING (auth.uid()::UUID = user_id OR user_id = '00000000-0000-0000-0000-000000000001'::UUID);

CREATE POLICY portfolios_delete_own ON portfolios
    FOR DELETE
    USING (auth.uid()::UUID = user_id OR user_id = '00000000-0000-0000-0000-000000000001'::UUID);

-- Similar policies for holdings and analyses
CREATE POLICY holdings_access_own ON portfolio_holdings
    FOR ALL
    USING (
        portfolio_id IN (
            SELECT id FROM portfolios
            WHERE user_id = auth.uid()::UUID OR user_id = '00000000-0000-0000-0000-000000000001'::UUID
        )
    );

CREATE POLICY analyses_access_own ON portfolio_analyses
    FOR ALL
    USING (
        portfolio_id IN (
            SELECT id FROM portfolios
            WHERE user_id = auth.uid()::UUID OR user_id = '00000000-0000-0000-0000-000000000001'::UUID
        )
    );
database/setup_database.py
ADDED

@@ -0,0 +1,65 @@
"""Database setup script for Supabase.

Run this script to set up the database schema in your Supabase instance.
"""

import os
import sys
from pathlib import Path
from supabase import create_client
from dotenv import load_dotenv

# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))

load_dotenv()


def setup_database():
    """Set up database schema in Supabase."""
    supabase_url = os.getenv("SUPABASE_URL")
    supabase_key = os.getenv("SUPABASE_KEY")

    if not supabase_url or not supabase_key:
        print("❌ Error: SUPABASE_URL and SUPABASE_KEY must be set in .env file")
        return False

    try:
        print("📊 Connecting to Supabase...")
        client = create_client(supabase_url, supabase_key)

        # Read schema file
        schema_path = Path(__file__).parent / "schema.sql"
        with open(schema_path, 'r') as f:
            schema_sql = f.read()

        print("🔧 Executing schema SQL...")
        # Note: Supabase Python client doesn't support raw SQL execution
        # You need to execute this in the Supabase SQL Editor instead
        print("\n" + "="*80)
        print("⚠️  IMPORTANT: Copy the SQL below and execute it in your Supabase SQL Editor")
        print("="*80 + "\n")
        print(schema_sql)
        print("\n" + "="*80)
        print("📝 Instructions:")
        print("1. Go to your Supabase dashboard")
        print("2. Navigate to SQL Editor")
        print("3. Create a new query")
        print("4. Copy and paste the SQL above")
        print("5. Click 'Run' to execute")
        print("="*80 + "\n")

        # Verify connection
        result = client.table('users').select("count", count='exact').execute()
        print(f"✅ Database connection verified!")

        return True

    except Exception as e:
        print(f"❌ Error setting up database: {e}")
        return False


if __name__ == "__main__":
    success = setup_database()
    sys.exit(0 if success else 1)
tests/__init__.py
ADDED

@@ -0,0 +1 @@
"""Tests package for Portfolio Intelligence Platform."""
tests/conftest.py
ADDED

@@ -0,0 +1,33 @@
"""Pytest configuration and fixtures."""

import pytest
import sys
from pathlib import Path

# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent))


@pytest.fixture(scope="session")
def event_loop():
    """Create an instance of the default event loop for the test session."""
    import asyncio
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()


@pytest.fixture
def sample_portfolio_holdings():
    """Sample portfolio holdings for testing."""
    return [
        {'ticker': 'AAPL', 'quantity': 50, 'dollar_amount': 0, 'cost_basis': 150.0},
        {'ticker': 'GOOGL', 'quantity': 30, 'dollar_amount': 0, 'cost_basis': 2800.0},
        {'ticker': 'MSFT', 'quantity': 40, 'dollar_amount': 0, 'cost_basis': 350.0}
    ]


@pytest.fixture
def sample_portfolio_text():
    """Sample portfolio input text for testing."""
    return "AAPL 50\nGOOGL 30 shares\nMSFT 40"
tests/test_integration.py
ADDED

@@ -0,0 +1,289 @@
"""Integration tests for Portfolio Intelligence Platform.

Tests the complete MCP pipeline from input to output.
"""

import pytest
import asyncio
from decimal import Decimal
from typing import Dict, Any

from backend.mcp_router import router as mcp_router
from backend.agents.workflow import PortfolioAnalysisWorkflow
from backend.models.agent_state import AgentState
from app import parse_portfolio_input


class TestPortfolioInputParser:
    """Test portfolio input parsing."""

    def test_parse_basic_shares(self):
        """Test parsing basic share format."""
        input_text = "AAPL 50"
        holdings = parse_portfolio_input(input_text)

        assert len(holdings) == 1
        assert holdings[0]['ticker'] == 'AAPL'
        assert holdings[0]['quantity'] == 50.0
        assert holdings[0]['dollar_amount'] == 0

    def test_parse_with_shares_keyword(self):
        """Test parsing with 'shares' keyword."""
        input_text = "TSLA 25 shares"
        holdings = parse_portfolio_input(input_text)

        assert len(holdings) == 1
        assert holdings[0]['ticker'] == 'TSLA'
        assert holdings[0]['quantity'] == 25.0

    def test_parse_dollar_amount(self):
        """Test parsing dollar amount format."""
        input_text = "NVDA $5000"
        holdings = parse_portfolio_input(input_text)

        assert len(holdings) == 1
        assert holdings[0]['ticker'] == 'NVDA'
        assert holdings[0]['dollar_amount'] == 5000.0
        assert holdings[0]['quantity'] == 0

    def test_parse_multiple_holdings(self):
        """Test parsing multiple holdings."""
        input_text = "AAPL 50\nTSLA 25 shares\nNVDA $5000"
        holdings = parse_portfolio_input(input_text)

        assert len(holdings) == 3
        assert holdings[0]['ticker'] == 'AAPL'
        assert holdings[1]['ticker'] == 'TSLA'
        assert holdings[2]['ticker'] == 'NVDA'

    def test_parse_fractional_shares(self):
        """Test parsing fractional shares."""
        input_text = "BTC 0.5"
        holdings = parse_portfolio_input(input_text)

        assert len(holdings) == 1
        assert holdings[0]['ticker'] == 'BTC'
        assert holdings[0]['quantity'] == 0.5

    def test_parse_empty_lines(self):
        """Test parsing with empty lines."""
        input_text = "AAPL 50\n\nTSLA 25\n\n"
        holdings = parse_portfolio_input(input_text)

        assert len(holdings) == 2

    def test_parse_lowercase_tickers(self):
        """Test that tickers are converted to uppercase."""
        input_text = "aapl 50"
        holdings = parse_portfolio_input(input_text)

        assert holdings[0]['ticker'] == 'AAPL'


class TestMCPServers:
    """Test individual MCP servers."""

    @pytest.mark.asyncio
    async def test_yahoo_finance_quote(self):
        """Test Yahoo Finance quote retrieval."""
        result = await mcp_router.call_yahoo_finance_mcp(
            action='get_quote',
            params={'tickers': ['AAPL']}
        )

        assert 'result' in result
        assert len(result['result']) > 0
        assert result['result'][0]['ticker'] == 'AAPL'

    @pytest.mark.asyncio
    async def test_yahoo_finance_historical(self):
        """Test Yahoo Finance historical data."""
        result = await mcp_router.call_yahoo_finance_mcp(
            action='get_historical_data',
            params={
                'ticker': 'AAPL',
                'period': '1mo',
                'interval': '1d'
            }
        )

        assert 'result' in result
        assert 'dates' in result['result']
        assert 'close_prices' in result['result']
        assert len(result['result']['dates']) > 0

    @pytest.mark.asyncio
    async def test_portfolio_optimizer_hrp(self):
        """Test HRP portfolio optimization."""
        # First get historical data
        historical_result = await mcp_router.call_yahoo_finance_mcp(
            action='get_historical_data',
            params={
                'ticker': 'AAPL',
                'period': '1y',
                'interval': '1d'
            }
        )

        market_data = [
            {
                'ticker': 'AAPL',
                'dates': historical_result['result']['dates'],
                'close_prices': historical_result['result']['close_prices']
            }
        ]

        result = await mcp_router.call_portfolio_optimizer_mcp(
            action='optimize_hrp',
            params={
                'market_data': market_data,
                'symbols': ['AAPL']
            }
        )

        assert 'result' in result
        assert 'weights' in result['result']
        assert 'AAPL' in result['result']['weights']

    @pytest.mark.asyncio
    async def test_risk_analyzer(self):
        """Test risk analysis."""
        # Get historical data first
        historical_result = await mcp_router.call_yahoo_finance_mcp(
            action='get_historical_data',
            params={
                'ticker': 'AAPL',
                'period': '1y',
                'interval': '1d'
            }
        )

        market_data = [
            {
                'ticker': 'AAPL',
                'dates': historical_result['result']['dates'],
                'close_prices': historical_result['result']['close_prices']
            }
        ]

        result = await mcp_router.call_risk_analyzer_mcp(
            action='analyze_risk',
            params={
                'market_data': market_data,
                'weights': {'AAPL': 1.0},
                'portfolio_value': 10000,
                'confidence_level': 0.95,
                'method': 'historical'
            }
        )

        assert 'result' in result
        assert 'var_95' in result['result']
        assert 'cvar_95' in result['result']


class TestWorkflow:
    """Test the complete workflow."""

    @pytest.mark.asyncio
    async def test_workflow_execution(self):
        """Test complete workflow execution."""
        # Create initial state
        initial_state: AgentState = {
            'portfolio_id': 'test_portfolio_001',
            'user_query': 'Analyse my portfolio',
            'risk_tolerance': 'moderate',
            'holdings': [
                {'ticker': 'AAPL', 'quantity': 50, 'dollar_amount': 0, 'cost_basis': 0}
            ],
            'historical_prices': {},
            'fundamentals': {},
            'economic_data': {},
            'realtime_data': {},
            'technical_indicators': {},
            'optimisation_results': {},
            'risk_analysis': {},
            'ai_synthesis': '',
            'recommendations': [],
            'reasoning_steps': [],
            'current_step': 'starting',
            'errors': [],
            'mcp_calls': []
        }

        # Initialize workflow
        workflow = PortfolioAnalysisWorkflow(mcp_router)

        # Run workflow
        final_state = await workflow.run(initial_state)

        # Verify results
        assert final_state['current_step'] == 'complete'
        assert len(final_state['mcp_calls']) > 0
        assert final_state['ai_synthesis'] != ''
        assert len(final_state['recommendations']) > 0

    @pytest.mark.asyncio
    async def test_workflow_with_multiple_holdings(self):
        """Test workflow with multiple holdings."""
        initial_state: AgentState = {
            'portfolio_id': 'test_portfolio_002',
            'user_query': 'Analyse my diversified portfolio',
            'risk_tolerance': 'moderate',
            'holdings': [
                {'ticker': 'AAPL', 'quantity': 50, 'dollar_amount': 0, 'cost_basis': 0},
                {'ticker': 'GOOGL', 'quantity': 30, 'dollar_amount': 0, 'cost_basis': 0},
                {'ticker': 'MSFT', 'quantity': 40, 'dollar_amount': 0, 'cost_basis': 0}
            ],
            'historical_prices': {},
            'fundamentals': {},
            'economic_data': {},
            'realtime_data': {},
            'technical_indicators': {},
            'optimisation_results': {},
            'risk_analysis': {},
            'ai_synthesis': '',
            'recommendations': [],
            'reasoning_steps': [],
            'current_step': 'starting',
            'errors': [],
            'mcp_calls': []
        }

        workflow = PortfolioAnalysisWorkflow(mcp_router)
        final_state = await workflow.run(initial_state)

        # Verify all phases completed
        assert 'historical_prices' in final_state
        assert 'optimisation_results' in final_state
        assert 'risk_analysis' in final_state
        assert final_state['ai_synthesis'] != ''


@pytest.mark.skip(reason="Requires valid API keys")
class TestWithRealAPIs:
    """Tests that require real API keys."""

    @pytest.mark.asyncio
    async def test_fmp_company_profile(self):
        """Test FMP company profile retrieval."""
        result = await mcp_router.call_fmp_mcp(
            action='get_company_profile',
            params={'ticker': 'AAPL'}
        )

        assert 'result' in result

    @pytest.mark.asyncio
    async def test_fred_economic_data(self):
        """Test FRED economic data retrieval."""
        result = await mcp_router.call_fred_mcp(
            action='get_economic_series',
            params={'series_id': 'GDP'}
        )

        assert 'result' in result


if __name__ == "__main__":
    pytest.main([__file__, "-v", "-s"])
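The parser tests above pin down the expected contract for `parse_portfolio_input`: one holding per line, an optional `shares` keyword, `$` amounts, fractional quantities, blank lines skipped, and tickers uppercased. A minimal implementation satisfying that contract might look like the following (a sketch for illustration, not the actual `app.py` code):

```python
import re


def parse_portfolio_input(text):
    """Parse lines like 'AAPL 50', 'TSLA 25 shares', or 'NVDA $5000'."""
    holdings = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        m = re.match(r'^(\w+)\s+(\$?)([\d.]+)(?:\s+shares)?$', line, re.IGNORECASE)
        if not m:
            continue  # silently skip unparseable lines
        ticker, dollar, amount = m.group(1).upper(), m.group(2), float(m.group(3))
        holdings.append({
            'ticker': ticker,
            'quantity': 0 if dollar else amount,       # share count, 0 for $ entries
            'dollar_amount': amount if dollar else 0,  # $ value, 0 for share entries
            'cost_basis': 0,
        })
    return holdings


print(parse_portfolio_input("AAPL 50\n\ntsla 25 shares\nNVDA $5000\nBTC 0.5"))
```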