💰 Investing · 🧠 ATou Learning

Using AI to Predict Markets: Can a 59% Win Rate Make Money Consistently?

The author claims a 59%-79% win rate on prediction markets using an LSTM + MC Dropout model, but the core claims suffer from hard flaws: mismatched data sources, ignored liquidity, and self-contradictory win-rate figures. On the whole it is a well-packaged marketing piece, not rigorous research.

2026-03-12 · Original article link

Key Points

  • Severe mismatch between data sources and targets: the article claims 25 years of training data, yet Polymarket only launched in 2020. Using 2000-2008 S&P 500 data to predict 2024 political elections is textbook invalid generalization; what the model learns may be little more than market-regime features that transfer poorly.
  • Win-rate figures drift and the metrics don't line up: the headline says 79%, Phase 8 says 59%, and Phase 9 says "consistently above 59%, sometimes up to 78%". Such casual jumps suggest cherry-picking the best-performing segments rather than a long, stable backtest.
  • Trading friction and liquidity traps are ignored: the author claims to "open 100+ trades at once", but prediction markets have very shallow depth, and buying at that scale would cause heavy slippage, instantly eating the thin profit a 59% win rate provides. Fees, slippage, and settlement mechanics are never discussed.
  • MC Dropout is over-packaged: narrating the prediction distribution produced by random unit dropout as "a consensus of 50 independent analysts" inflates its statistical meaning. It can serve as an uncertainty heuristic, but it does not automatically imply well-calibrated probabilities or a robust model.
  • Technical-analysis indicators fail at event prediction: the model leans on traditional indicators like RSI and MACD, but prediction-market contracts (e.g. "Will Trump win the election?") are event-driven, not trend-driven. Candlestick indicators cannot capture breaking news or policy shifts; this is a hammer in search of a nail.

Relevance to Us

  • What it means for ATou: the article demonstrates a "bet only on high consensus" decision structure: estimate uncertainty via repeated sampling, then filter with a confidence threshold. This framework transfers to AI agent design: instead of auto-executing every task, judge uncertainty first, automating at high confidence and handing off to humans at low confidence. A next step could be adding a similar "consensus sampling" mechanism to the agent's self-verification layer.
  • What it means for growth teams: "3 confident trades a week beats 20 uncertain ones" maps directly onto paid acquisition and content growth: don't push more creatives onto more channels every day; identify high-win-rate audiences, creatives, and timing, then concentrate your bets. Growth efforts often die of low-quality expansion; the core lesson here is that signal filtering beats blanket coverage.
  • What it means for decision management: "don't trade when the 50 analysts are split" transfers well to organizational decisions: when data, user feedback, and sales feedback conflict, the best move is often not to push ahead but to hold off. Many bad decisions fail not because the direction was wrong, but because resources were committed too early under high uncertainty.
  • What it means for time-series systems: the article's strict time-based data split (train on 2000-2020, test on 2021-2025) is a basic error-prevention framework. Any growth attribution, churn prediction, or sales forecasting work should follow the same principle: split by time rather than at random, and validate performance on "future" data instead of fitting history.
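
The time-based split principle in the last bullet can be sketched in a few lines of Python; the column name, cutoff date, and toy data here are illustrative placeholders, not the article's code:

```python
import pandas as pd

def time_split(df: pd.DataFrame, date_col: str, cutoff: str):
    """Split strictly by time: everything before `cutoff` trains,
    everything on/after it tests. No shuffling, so the test set is
    genuinely 'future' data the model has never seen."""
    df = df.sort_values(date_col)
    train = df[df[date_col] < cutoff]
    test = df[df[date_col] >= cutoff]
    return train, test

df = pd.DataFrame({
    "date": pd.date_range("2019-01-01", periods=10, freq="YS"),
    "y": range(10),
})
train, test = time_split(df, "date", "2021-01-01")
# train covers 2019-2020; test covers 2021 onward
```

The same two-line filter replaces `train_test_split(..., shuffle=True)` in any forecasting pipeline.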

Discussion Prompts

  • If a model shows 59% accuracy on historical data but you cannot verify whether it still has positive expectancy after real-world slippage, fees, and liquidity costs, what is that model actually worth?
  • "Act only at high confidence" is sound in theory, but how do you tell whether the confidence measure itself is overfit? Does the standard deviation from MC Dropout genuinely reflect model uncertainty, or is it just an artifact of parameter noise?
  • Prediction markets are fundamentally event-driven, yet the model uses technical-analysis indicators. Does this logical mismatch mean the model is really learning market-sentiment fluctuations rather than event probabilities, and will it fail as soon as sentiment decouples from outcomes?

If you train an AI agent on 25 years of market data, can you really make a steady $20k a month with it?

Spoiler: by the end you'll understand why 10x returns on prediction markets are possible if you catch the right moment.

Before we start, bookmark this article and drop a follow. Posting daily alpha on Polymarket and other platforms.

The human brain can track at most 19 things at once. Prediction markets move on hundreds of variables all at the same time: news, volume, momentum, whale money, sentiment.

Analyzing that by hand is impossible. But code can.

This bot opens 100+ trades at once, already knowing the scenario behind each one before entry. Even at just a 64% win rate, that's already profit.

Phase 1 - Time Horizons

Prediction markets come in many time horizons: 1 day, 7 days, 30 days.

The model can predict across horizons, but this article focuses on 1 day: will this prediction-market contract resolve YES tomorrow?

Start with the simplest case, a single day. If you can consistently call the direction one day ahead, that is already a usable tool for daily trading.

Phase 2 - What the Model Predicts (Binary Classification)

Target1 = 1 -> tomorrow higher -> BUY

Target1 = 0 -> tomorrow lower -> DON'T BUY

Simple: the model doesn't try to predict the exact price. It answers one question only: up or down?

Same logic as a Polymarket contract: you're betting on an event outcome, not an exact number.

Phase 3 - The 38 Indicators That Lead to the Result

# First cutoff keeps 80% of the full date range for training + validation;
# the second holds out the last 20% of that range for validation
train_val_cutoff = global_min_date + (global_max_date - global_min_date) * 0.8
train_cutoff     = global_min_date + (train_val_cutoff - global_min_date) * (1 - 0.2)

Training (2000–2020): BTC, S&P500, ETH, major prediction markets
Testing (2021–2025): same markets, but future the model never saw

The model doesn't just look at price: it analyzes 38 different signals across 30 prediction-market contracts over the past 60 days.

callbacks = [
    # stop once validation loss stops improving; halve the learning rate on plateaus
    EarlyStopping(monitor="val_loss", min_delta=1e-7, patience=15),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, min_lr=1e-7, patience=10),
]
history = model.fit(train_gen, validation_data=val_gen, epochs=50, callbacks=callbacks)

Imagine manually analyzing 38 indicators across 30 contracts every single day.
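
The "38 signals over the past 60 days" framing implies sliding windows over a feature matrix. A minimal sketch of that windowing, where the array layout and helper name are my assumptions rather than the article's code:

```python
import numpy as np

def make_windows(features: np.ndarray, labels: np.ndarray, window: int = 60):
    """Turn a (n_days, n_features) matrix into overlapping windows:
    X[i] holds days i..i+window-1, y[i] is the label for the day after."""
    X, y = [], []
    for i in range(len(features) - window):
        X.append(features[i:i + window])
        y.append(labels[i + window])
    return np.array(X), np.array(y)

feats = np.random.rand(100, 38)        # 100 days x 38 indicators
labs = np.random.randint(0, 2, 100)    # binary up/down labels
X, y = make_windows(feats, labs)
# X.shape == (40, 60, 38); y.shape == (40,)
```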

Phase 4 - How We Split the Data by Time

Date        Close   Prob_Up_1  Prob_Std_1  Signal_1
2020-06-17  95.74   0.9995     0.0008      BUY
2020-06-23  97.31   5.45e-05   0.0         HOLD
2020-06-26  96.13   0.9995     0.0008      BUY

# y_pred_mean holds the MC-averaged probabilities; h is the horizon and
# i its column index in the prediction array
result[f"Signal_{h}"] = [
    "BUY" if p > 0.7 else "HOLD"
    for p in y_pred_mean[:, i]
]

The key point: the model learns on past data and is tested on future data. No data leakage.

This is the most common mistake people make when building these systems: testing on the data they trained on. We don't do that. The model never saw 2021-2025 during training, which makes it a clean, honest test.
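
A cheap way to enforce the "no leakage" claim is to assert the boundary explicitly. This guard is a suggestion of mine, not part of the article's codebase:

```python
import pandas as pd

def assert_no_time_leakage(train: pd.DataFrame, test: pd.DataFrame,
                           date_col: str = "date"):
    """Fail loudly if any test row predates the end of the training data,
    or if the two splits share any dates."""
    assert train[date_col].max() < test[date_col].min(), "test overlaps training period"
    assert not set(train[date_col]) & set(test[date_col]), "shared dates between splits"

train = pd.DataFrame({"date": pd.to_datetime(["2019-05-01", "2020-12-31"])})
test = pd.DataFrame({"date": pd.to_datetime(["2021-01-01", "2025-06-30"])})
assert_no_time_leakage(train, test)  # passes: strictly chronological split
```

Run it right after every split so a refactor can never silently reintroduce leakage.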

Phase 5 - Neural Network Architecture

  • Conv1D - finds local patterns in the time series

  • LSTM - remembers long-term dependencies

  • MCDropout - measures prediction uncertainty

  • Sigmoid - outputs a number between 0 and 1 (a probability)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import BatchNormalization, Conv1D, Dense, LSTM

# MCDropout is the custom Dropout subclass defined in Phase 9
model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu",
           input_shape=(WINDOW_SIZE + 1, n_features)),
    BatchNormalization(),
    MCDropout(0.2),
    LSTM(64, return_sequences=False),
    MCDropout(0.2),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

The model outputs a single number. Say 0.85: that means the model is 85% confident the price goes up tomorrow. Whether to trust it is up to you.

Phase 6 - Monte Carlo Dropout (the Key Part)

Instead of a single prediction, we run the model 50 times with dropout left active, so each pass drops a different random set of units. That gives a mean (confidence) and a std (uncertainty). If the model can't agree with itself, we don't buy.

Think of it as asking 50 analysts at once. If all 50 say BUY, we enter; if half say BUY and half say HOLD, we skip.

> We only trade when there is consensus
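
The consensus rule can be written down directly from the sampled probabilities. The 70% mean threshold comes from Phase 7 of the article; the std cutoff of 0.1 is an illustrative assumption:

```python
import numpy as np

def consensus_signal(samples: np.ndarray,
                     mean_thresh: float = 0.7, std_thresh: float = 0.1) -> str:
    """samples: probabilities from repeated stochastic forward passes.
    BUY only when the samples agree (low std) AND lean strongly up (high mean)."""
    mean, std = samples.mean(), samples.std()
    return "BUY" if (mean > mean_thresh and std < std_thresh) else "HOLD"

agreed = np.full(50, 0.9)                            # all 50 "analysts" say up
split = np.r_[np.full(25, 0.9), np.full(25, 0.1)]    # opinions split down the middle
print(consensus_signal(agreed))  # BUY
print(consensus_signal(split))   # HOLD (mean 0.5, std 0.4)
```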

Phase 7 - BUY/HOLD Signals

The confidence threshold is 70%. If the model says BUY with less than 70% confidence, we ignore the signal and keep looking.

We don't trade every market every day. We wait for strong signals only. Three confident trades a week beat twenty uncertain ones.

Phase 8 - Training

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    # Label: 1 if tomorrow's close is higher than today's, else 0
    df["Target1"] = (df["close"].shift(-1) > df["close"]).astype(int)
    return df

~59% accuracy sounds modest, but on prediction markets even a 59% win rate can deliver consistent profit.

MA10, MA20, MA30          — moving averages
RSI                       — overbought/oversold
MACD, MACD_Signal         — trend
BollingerUpper/Lower      — volatility bands
Volatility_10/20/30       — price fluctuation intensity
OBV                       — buyer/seller pressure
sentiment, num_articles   — news flow
insider_shares/amount     — insider activity
momentum_5d / momentum_20d

Casinos make money on just a 2-3% edge. We have 59%. With 100+ trades a month, the math works in our favor.
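
Whether 59% is an edge at all depends on the entry price and frictions, which the article never states. A hedged expected-value sketch for a binary contract, with every price and cost below assumed purely for illustration:

```python
def expected_value(win_rate: float, entry_price: float,
                   fee: float = 0.0, slippage: float = 0.0) -> float:
    """EV per contract that pays $1 on a win and $0 on a loss:
    expected payout minus the all-in cost of entry."""
    return win_rate * 1.0 - (entry_price + fee + slippage)

# Winning 59% of contracts bought at $0.55 leaves roughly a 4-cent edge...
ev_gross = expected_value(0.59, 0.55)
# ...which 2 cents of slippage plus a 2-cent fee erases entirely
ev_net = expected_value(0.59, 0.55, fee=0.02, slippage=0.02)
```

The point of the sketch: win rate alone says nothing; the same 59% is profitable at one entry price and a guaranteed loss at another.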

Phase 9 - Final Results

The strategy is simple: every day, buy the top 3 positions with the highest probability of going up.

import numpy as np
from tensorflow.keras.layers import Dropout

class MCDropout(Dropout):
    # Keep dropout active at inference time so each forward pass is stochastic
    def call(self, inputs, training=None):
        return super().call(inputs, training=True)

def mc_dropout_predict(model, X, n_samples=50):
    preds = np.array([model(X, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0), preds
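
A usage sketch of the same sampling idea with a stand-in stochastic "model" (a noisy lambda rather than the Keras network above, so the `training=True` flag and `.numpy()` call are dropped; the helper name `mc_sample` is mine):

```python
import numpy as np

def mc_sample(model, X, n_samples=50):
    # Same idea as mc_dropout_predict: repeat stochastic forward passes,
    # then summarize them with a mean (confidence) and std (uncertainty).
    preds = np.array([model(X) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(0)
# Mock model: a "true" up-probability of 0.8 plus dropout-like noise per pass
mock = lambda X: np.clip(0.8 + rng.normal(0.0, 0.05, size=len(X)), 0.0, 1.0)

mean, std = mc_sample(mock, np.zeros(4))
# mean hovers near 0.8; std near 0.05 reflects the injected noise
```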

The grey lines are 10 people opening random prediction-market positions every day. The green line is our bot. The difference is obvious!

The results consistently beat random strategies, with a win rate steadily above 59%, sometimes reaching 78%.

Every signal, every indicator, every pattern across 30 markets, processed in seconds. Not by you, but by a model running 24/7.

We took 25 years of market data, compressed it into 38 indicators, ran an LSTM network 50 times per prediction, and the result consistently beats random strategies.


Full code is on GitHub. Anyone can try running it on an old laptop and improve their odds of trading successfully.

You build your own life, so choose the right path. If this helped you, don't forget to follow.

And I'm already working on an improved version of this bot.

Link: http://x.com/i/article/2031637168611905536


📋 Discussion Archive

Discussion in progress…