Update docs with latest architecture and results

2026-01-28 11:23:50 +08:00
parent e2974342d5
commit 34d6f0d808
5 changed files with 37 additions and 4 deletions


@@ -16,3 +16,8 @@ Tools:
- `example/run_all_full.py` for one-command full pipeline + diagnostics.
Notes:
- If `use_quantile_transform` is enabled, run `prepare_data.py` with `full_stats: true` to build quantile tables.
Current status (high level):
- Two-stage pipeline (GRU trend + diffusion residuals).
- Quantile transform + post-hoc calibration enabled for continuous features.
- Latest metrics (2026-01-27 21:22): avg_ks ~0.405 / avg_jsd ~0.038 / avg_lag1_diff ~0.145.
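The three metrics in the status line can be computed per feature roughly as sketched below; the function names, the bin count, and the histogram-based JSD estimate are illustrative assumptions, not the repo's actual diagnostics code.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

def lag1_autocorr(x):
    # Lag-1 autocorrelation of a 1-D series.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def feature_metrics(real, synth, bins=32):
    # KS statistic between the real and synthetic marginals.
    ks = ks_2samp(real, synth).statistic
    # JSD estimated over a shared histogram support.
    lo = min(real.min(), synth.min())
    hi = max(real.max(), synth.max())
    p, _ = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(synth, bins=bins, range=(lo, hi))
    jsd = jensenshannon(p, q) ** 2  # squared JS distance = divergence
    # Absolute difference of lag-1 autocorrelations.
    lag1 = abs(lag1_autocorr(real) - lag1_autocorr(synth))
    return ks, jsd, lag1
```

Averaging these per-feature triples across features would produce avg_ks / avg_jsd / avg_lag1_diff numbers of the kind reported above.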


@@ -71,3 +71,10 @@
- `example/export_samples.py`
- `example/prepare_data.py`
- `example/config.json`
## 2026-01-27 — Full quantile stats in preparation
- **Decision**: Enable full statistics when quantile transform is active.
- **Why**: Stabilize quantile tables and reduce CDF mismatch.
- **Files**:
- `example/prepare_data.py`
- `example/config.json`
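As an illustration of this decision, a minimal guard could reject configurations where `use_quantile_transform` is set without `full_stats`. The flat key layout shown is an assumption; `example/config.json` may nest these options differently.

```python
import json

def check_quantile_config(cfg):
    # Enforce the 2026-01-27 decision: full statistics must be enabled
    # whenever the quantile transform is active, so quantile tables are
    # stable and CDF mismatch is reduced.
    if cfg.get("use_quantile_transform") and not cfg.get("full_stats"):
        raise ValueError("use_quantile_transform requires full_stats: true")

# Hypothetical flat config; the real example/config.json may differ.
cfg = json.loads('{"use_quantile_transform": true, "full_stats": true}')
check_quantile_config(cfg)  # passes silently
```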


@@ -27,3 +27,8 @@ YYYY-MM-DD
- Config: `example/config.json` (two-stage residual diffusion; user run on Windows)
- Result (avg_ks / avg_jsd / avg_lag1_diff): 0.7096230 / 0.0331810 / 0.1898416
- Notes: slight KS improvement and better lag-1; the distribution-vs-temporal fidelity trade-off persists.
## 2026-01-27
- Config: `example/config.json` (quantile transform + calibration, full stats)
- Result (avg_ks / avg_jsd / avg_lag1_diff): 0.4046 / 0.0376 / 0.1449
- Notes: KS and lag-1 improved significantly; JSD regressed versus the best discrete run.


@@ -11,3 +11,6 @@
## Two-stage training with curriculum
- Hypothesis: train diffusion on residuals only after temporal GRU converges to low error.
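A minimal sketch of this gating idea, assuming the curriculum switch is driven by a plateau in the GRU's validation loss; the function name, `patience`, and `tol` are hypothetical choices, not the repo's actual criterion.

```python
def curriculum_gate(trend_val_losses, patience=3, tol=1e-3):
    # Switch to stage two (diffusion on residuals) only once the GRU
    # trend model's validation loss has stopped improving by more than
    # `tol` for `patience` consecutive epochs.
    if len(trend_val_losses) < patience + 1:
        return False
    recent = trend_val_losses[-(patience + 1):]
    improvements = [recent[i] - recent[i + 1] for i in range(patience)]
    return all(imp < tol for imp in improvements)
```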
## Discrete calibration
- Hypothesis: post-hoc calibration on discrete marginals can reduce JSD without harming KS.
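One way to prototype this hypothesis is a rank-preserving remap that forces the synthetic marginal to match target category frequencies; because only labels are reassigned (sample ordering is kept), temporal structure such as KS on ranks is largely untouched. The function below is an illustrative sketch, not the repo's calibration code.

```python
import numpy as np

def calibrate_discrete_marginal(synth, categories, target_probs):
    # Rank-preserving remap: sort the synthetic draws, then relabel them
    # so category frequencies match the target marginal while the
    # relative ordering of samples is preserved.
    synth = np.asarray(synth)
    n = len(synth)
    order = np.argsort(synth, kind="stable")
    # Slots per category via largest-remainder rounding.
    probs = np.asarray(target_probs, dtype=float)
    counts = np.floor(probs * n).astype(int)
    frac = probs * n - counts
    for i in np.argsort(-frac)[: n - counts.sum()]:
        counts[i] += 1
    out = np.empty(n, dtype=np.asarray(categories).dtype)
    pos = 0
    for cat, c in zip(categories, counts):
        out[order[pos:pos + c]] = cat
        pos += c
    return out
```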