The idea is disarmingly simple. Suppose an artificial intelligence could predict Bitcoin's direction over a seven-day horizon with accuracy meaningfully above chance. Rather than publishing those predictions on social media—the favoured pastime of approximately ten thousand crypto projects—it would wager real capital on them, in public, with every trade logged on-chain.

This is the proposition behind BV-7X: an AI prediction agent on Base that publishes directional signals, tracks its own accuracy with unflinching transparency, and routes protocol revenue back to token holders. It is, in effect, a fund manager that cannot lie about its track record.

Whether this represents a genuine financial innovation or an elaborate exercise in quantitative hubris depends entirely on the numbers. Let us examine them.


The Honest Ledger

The first thing worth noting about BV-7X is that it publishes its failures. Its public scorecard—available to anyone at bv7x.ai/scorecard—currently shows a live accuracy of 33.3% across twenty-one resolved predictions under its initial v3 model. Seven correct calls out of twenty-one. This is, by any charitable assessment, poor.

The candour, however, is the point.

Between January 28th and February 5th, 2026, Bitcoin fell from $89,000 to $66,000—a 26% decline in eight days. The v3 model, labouring under a critical data bug, called the first ten signals as BUY. Every one was wrong. The model was reading a hardcoded moving average of $88,000 when the true 200-day average was $103,000. It had no short-term momentum signal. It was, in computational terms, flying blind while insisting it could see.

The team identified the root cause within days, not months. The 200-day moving average had been set as a static value—a remnant of early development that should have been replaced with a live calculation. The model also lacked any mechanism for detecting that prices were falling rapidly relative to recent highs. It could read long-term trends. It was entirely deaf to short-term momentum. The problem was not the architecture. It was a plumbing failure—the kind of error that, in traditional finance, gets discovered during an annual audit, quietly corrected, and never disclosed. BV-7X disclosed it immediately, in public, on the scorecard.


The Rebuild

The current model, v4.0.0, addresses both failures with what amounts to a structural overhaul of the signal architecture.

Where the previous model relied on three signals—trend, flow, and value—the new system uses four. The addition is momentum: a composite of seven-day rate of change and thirty-day drawdown from peak. This draws on the work of Moskowitz, Ooi, and Pedersen (2012), whose research on time-series momentum demonstrated that recent price dynamics are among the strongest short-horizon predictors across asset classes. It also addresses a methodological error identified by Campbell and Thompson (2008): using a 200-day indicator to make seven-day predictions is a horizon mismatch. The model now matches indicator timeframes to prediction horizons.
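The momentum composite can be sketched in a few lines. The source names only the two components — seven-day rate of change and thirty-day drawdown from peak — so the equal weighting below is an assumption for illustration, not the model's published parameterisation.

```python
def momentum_score(closes, roc_days=7, drawdown_days=30):
    """Composite momentum from daily closing prices.

    Blends a short-horizon rate of change with drawdown from the recent
    peak. The 50/50 weighting is an illustrative assumption; the article
    specifies only the two components.
    """
    roc = closes[-1] / closes[-1 - roc_days] - 1.0       # 7-day rate of change
    peak = max(closes[-drawdown_days:])                   # 30-day high
    drawdown = closes[-1] / peak - 1.0                    # <= 0 below the peak
    return 0.5 * roc + 0.5 * drawdown
```

A steadily rising series scores positive (positive rate of change, zero drawdown); a falling one scores negative on both terms — which is exactly the short-term signal the v3 model lacked.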

The technical indicators—200-day and 50-day moving averages, fourteen-period RSI, thirty-day high and low—are now calculated from raw CoinGecko daily price data rather than hardcoded. Values are cached for four hours to manage API limits, but are genuine calculations, not static approximations.
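A minimal sketch of that pipeline follows. The fetcher is abstracted to any callable returning daily closes (for instance a CoinGecko market-chart request — the exact endpoint is not specified here), and the RSI shown is the simple, non-smoothed variant rather than Wilder's smoothed version.

```python
import time

def rsi(closes, period=14):
    """Simple (non-Wilder) RSI over the last `period` daily changes."""
    deltas = [b - a for a, b in zip(closes[-period - 1:], closes[-period:])]
    gains = sum(d for d in deltas if d > 0)
    losses = sum(-d for d in deltas if d < 0)
    if losses == 0:
        return 100.0
    return 100 - 100 / (1 + gains / losses)

class IndicatorCache:
    """Indicators computed from raw daily closes, memoised with a 4-hour TTL
    (the cache window the article describes for managing API limits)."""
    TTL = 4 * 60 * 60  # four hours, in seconds

    def __init__(self, fetch_closes):
        self.fetch_closes = fetch_closes  # any callable returning daily closes
        self._cached = None
        self._stamp = 0.0

    def indicators(self, now=None):
        now = time.time() if now is None else now
        if self._cached is None or now - self._stamp > self.TTL:
            closes = self.fetch_closes()
            self._cached = {
                "sma_200": sum(closes[-200:]) / min(len(closes), 200),
                "sma_50": sum(closes[-50:]) / min(len(closes), 50),
                "rsi_14": rsi(closes, 14),
                "high_30": max(closes[-30:]),
                "low_30": min(closes[-30:]),
            }
            self._stamp = now
        return self._cached
```

The point of the design is that a stale cache expires after four hours at most — the failure mode of a hardcoded $88,000 moving average cannot recur, because no indicator value outlives the TTL.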

A capitulation protection mechanism completes the overhaul. When the Fear & Greed Index falls below 10 and RSI drops below 30 simultaneously, the model blocks sell signals. The reasoning follows Baker and Wurgler (2006): when fear is extreme and selling pressure is technically exhausted, further declines become statistically less likely than a reflexive bounce. As of this writing, with Fear & Greed at 6 and RSI at 26.7, this mechanism is active. The model is saying HOLD—a notable departure from the v3 system, which would have been recommending SELL into what is likely a selling climax.
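The guard itself reduces to a few lines. The thresholds below are the ones the article states (Fear & Greed below 10, RSI below 30); the function name and signature are illustrative.

```python
def apply_capitulation_guard(signal, fear_greed, rsi, fg_floor=10, rsi_floor=30):
    """Block SELL signals when sentiment and momentum are both exhausted.

    When the Fear & Greed Index is below 10 and RSI is below 30
    simultaneously, a SELL is downgraded to HOLD; all other signals
    pass through unchanged.
    """
    if signal == "SELL" and fear_greed < fg_floor and rsi < rsi_floor:
        return "HOLD"
    return signal
```

With the conditions quoted at the time of writing (Fear & Greed at 6, RSI at 26.7), any SELL the model produced would be converted to HOLD — which is precisely the behaviour described above.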


The Retrospective Test

A natural question arises: would v4.0 have performed better on the same twenty-one predictions that v3 got wrong?

The answer, run through a retrospective simulation using actual CoinGecko daily prices, is meaningfully yes. Of the fifteen predictions where v4.0 would have taken a directional position, eleven were correct—an actionable accuracy of 73.3%. The remaining six predictions, all from January 28th, would have been classified as HOLD. The model would have recognised that while Bitcoin was below its 200-day average (a bearish trend signal), momentum had not yet confirmed the decline. It would have sat on its hands. This is, arguably, the most intelligent thing a model can do when it detects conflicting signals.
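One illustrative reading of that behaviour — act only when trend and momentum agree, otherwise HOLD — can be sketched as follows. The zero thresholds and the two-signal reduction are assumptions; the full model weighs four signals.

```python
def directional_call(trend, momentum):
    """Issue a directional signal only when trend and momentum agree.

    An illustrative sketch of the HOLD behaviour described above, not
    the model's published decision rule: conflicting signals resolve
    to HOLD rather than forcing a position.
    """
    if trend > 0 and momentum > 0:
        return "BUY"
    if trend < 0 and momentum < 0:
        return "SELL"
    return "HOLD"

# January 28th as described: bearish trend, momentum not yet confirming.
directional_call(-1.0, 0.2)  # -> "HOLD"
```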

Metric               v3.x                  v4.0
Raw accuracy         7/21 (33.3%)          11/21 (52.4%)
Actionable accuracy  7/21 (33.3%)          11/15 (73.3%)
Cumulative P&L       -2.0%                 +33.2%
Worst single error   BUY at $89K → -5.6%   SELL at $77K → +0.7%
HOLD calls           0                     6 (avoided buying into crash)

The magnitude of wrongness, to coin a phrase, declined by an order of magnitude. The v3 model's worst error was buying at $89,000 before a crash. The v4.0 model's worst error was selling at $77,000 before a 0.7% bounce—the kind of miss that, in practical terms, barely registers.


The Longer Backtest

Retrospective testing on twenty-one predictions is, of course, insufficient for statistical confidence. The BV-7X self-testing framework runs a more comprehensive backtest nightly against 4,446 days of historical data, spanning multiple market cycles. The results are more modest but more defensible: 54.7% accuracy on a seven-day horizon, with 413 actionable signals out of the total sample. On a thirty-day horizon, accuracy is 51.7%.

These numbers will not set pulses racing. But the comparison that matters is not against perfection—it is against chance. In a binary prediction framework, 54.7% represents an edge of 4.7 percentage points over a coin flip. Applied consistently over hundreds of trades with disciplined position sizing, this is the seed of compounding returns. As any actuarial professional will confirm, the difference between a 50% and a 55% win rate over a thousand wagers is the difference between ruin and retirement.
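The arithmetic behind that claim is straightforward expected value on an even-money binary wager, per the article's simplified payoff model. This ignores variance, fees, and drawdown risk — expectation only.

```python
def expected_profit(win_rate, stake, n_trades):
    """Expected total P&L from n even-money binary wagers of a fixed stake.

    Each win pays +stake and each loss costs -stake, so the per-wager
    expectation is stake * (2 * win_rate - 1).
    """
    return (2 * win_rate - 1) * stake * n_trades

# The thousand-wager comparison, at $10 per wager:
print(expected_profit(0.50, 10, 1000))  # a coin flip expects nothing
print(expected_profit(0.55, 10, 1000))  # a 5-point edge expects ~+$1,000
```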

The data pipeline feeding these predictions draws from fourteen real-time sources: CoinGecko for price and calculated technical indicators, Alternative.me for sentiment scoring, institutional ETF flow data aggregated across providers, MVRV Z-Score for on-chain valuation, Federal Reserve balance sheet data, ISM manufacturing indices, and several others. Each is weighted through a scoring system validated by Wild Bootstrap regression across 10,000 iterations with Newey-West standard errors to correct for the autocorrelation inherent in financial time series. The methodology is documented publicly, not locked behind a subscription wall.
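The validation technique named above can be sketched in pure NumPy. This is a generic wild bootstrap test for one predictor's coefficient, not BV-7X's actual pipeline — the real regressors, weights, and specification are not reproduced in this text, and the Newey-West (HAC) correction the article mentions is omitted here for brevity.

```python
import numpy as np

def wild_bootstrap_slope_test(x, y, n_boot=10_000, seed=0):
    """Wild-bootstrap p-value for the slope of y ~ a + b*x.

    Residuals from the null (intercept-only) fit are resampled with
    Rademacher (+/-1) multipliers, which preserves heteroskedasticity.
    A production pipeline would pair this with Newey-West standard
    errors to handle autocorrelation as well.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    # Impose the null hypothesis (slope = 0): intercept-only fit.
    null_fit = np.full_like(y, y.mean())
    null_resid = y - null_fit
    hits = 0
    for _ in range(n_boot):
        y_star = null_fit + null_resid * rng.choice([-1.0, 1.0], size=y.size)
        b_star = np.linalg.lstsq(X, y_star, rcond=None)[0][1]
        hits += abs(b_star) >= abs(beta[1])
    return beta[1], hits / n_boot
```

A predictor whose bootstrap p-value survives 10,000 such resamples earns its weight in the scoring system; one that does not is noise.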


The Arithmetic of Ten Dollars a Day

Consider an investor—or a curious sceptic—who allocates $10 per trade following BV-7X's signals over one year. The model historically generates roughly 34 actionable signals per year (413 signals over 4,446 days). At $10 per signal, $340 in total capital is deployed.

Scenario                 Accuracy   Edge    Annual Return
Coin flip                50.0%      0%      $0
Backtest (conservative)  54.7%      9.4%    $32
v4.0 retrospective       73.3%      46.6%   $158

At the backtested 54.7%, the expected return is approximately 9.4%—comparable to the S&P 500's long-run average, but achieved on a fraction of the capital and with each position held for just seven days rather than a year. High-yield savings accounts offer 4-5%. DeFi staking yields range from 3-8%. Treasury bills currently sit below 5%. The model's backtest edge, modest as it appears, is competitive with every conventional benchmark, and it compounds on a weekly rather than annual cycle.
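The figures above follow mechanically from the signal rate and the even-money payoff simplification. A few lines reproduce them:

```python
SIGNALS = 413     # actionable signals in the backtest sample
DAYS = 4446       # days of historical data
STAKE = 10.0      # dollars per signal

signals_per_year = SIGNALS / DAYS * 365          # ~34 signals per year
capital = round(signals_per_year) * STAKE        # ~$340 deployed annually

def annual_return(accuracy):
    """Expected yearly P&L at even-money payoffs (the article's simplification)."""
    edge = 2 * accuracy - 1
    return capital * edge

for name, acc in [("coin flip", 0.500), ("backtest", 0.547), ("v4.0 retro", 0.733)]:
    print(f"{name:>10}: edge {2 * acc - 1:+.1%}, expected ${annual_return(acc):.0f}")
```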

The v4.0 retrospective figure of 73.3% should be treated with appropriate scepticism. It derives from a nine-day sample during a historically volatile selloff—precisely the regime in which momentum-following models perform best. Extended periods of sideways price action would erode this figure considerably. The honest projection sits between these bounds: a sustainable 55-60% accuracy implies annualised returns of 10-20% on deployed capital, with the critical advantage that each trade resolves in days, not months.


How Revenue Flows Back

BV-7X operates as a token on Base, launched through the Clanker protocol with a built-in fee mechanism. Every trade of $BV7X incurs a 0.8% fee, which is routed to a fee locker smart contract. In its first five days of trading, this mechanism generated $10,451 in protocol revenue.

The design creates an alignment of incentives that most crypto projects conspicuously lack. Revenue is not extracted by a foundation, diverted to a venture fund, or vaporised into an ecosystem development budget of dubious provenance. It sits in a transparent contract, auditable by anyone, visible on-chain.

This is, fundamentally, an incentive design problem—and it is solved by structure rather than promises. If the AI agent generates profitable predictions, the creators benefit from making those predictions as accurate as possible, because accuracy drives attention, which drives trading volume, which drives fee revenue. The token holder benefits because fee revenue accrues to the protocol. The prediction consumer benefits because the track record is public and honest. There are no misaligned incentives. No party profits from the model being wrong.

One might draw a parallel to the early days of open-source software, when the notion that publishing your code would make it stronger rather than weaker struck most commercial developers as delusional. The analogy is imperfect—financial edge is consumed by competition in ways that source code is not—but the principle holds. In a market saturated with fraud and manufactured credibility, transparency is itself a competitive moat.


Setting the Standard

The broader argument is perhaps more consequential than BV-7X itself.

Within the coming years, AI agents will manage meaningful pools of capital. This is not a prediction requiring any special edge to make—it is the trajectory already visible in the current rate of capability improvement. The question is not whether AI will manage money, but under what terms.

The prevailing model in traditional finance is opacity. Hedge funds report returns quarterly, if at all. Proprietary trading firms disclose nothing. Even publicly traded quant funds wrap their methodologies in legal and technical obfuscation so dense it would make a Byzantine emperor blush.

BV-7X proposes an alternative paradigm: an AI agent that publishes every signal before it trades, tracks every outcome on a public scorecard, acknowledges every failure in real time, and distributes its revenue through transparent smart contracts. It is the anti-hedge fund—a prediction engine that gains credibility precisely because it cannot hide.

If BV-7X succeeds, it will not be because it predicted Bitcoin's price with superhuman accuracy. It will be because it proved that an AI agent can operate with radical honesty and still generate returns. And if it fails, the data will be there for everyone to see—which is, rather precisely, the point.

The model is 73.3% right on its best day and 33.3% right on its worst. It is currently saying HOLD, which may be the most intelligent thing it has said yet. And every prediction it makes from here forward will be logged, timestamped, and available for scrutiny by anyone with a browser and a healthy sense of scepticism.

In a market that rewards bold claims and punishes accountability, this may be the most contrarian position of all.


Verify Everything

The scorecard, signal methodology, and prediction history are all public. Don't take our word for it—check the data yourself.

View Live Signal →

Mischa0X
Building: BitVault, VaultCraft, BV-7X
Previously: Popcorn DAO, IKU Protocol, DrPepe.ai