Hybrid Diffusion for ICS Traffic (HAI 21.03) — Project Report
1. Project Goal
Build a hybrid diffusion-based generator for industrial control system (ICS) traffic features, targeting mixed continuous + discrete feature sequences. The output is feature-level sequences, not raw packets. The generator should preserve:
- Distributional fidelity (continuous value ranges and discrete frequencies)
- Temporal consistency (time correlation and sequence structure)
- Protocol/field consistency (for discrete fields)
This project is aligned with the STOUTER idea of structure-aware diffusion for spatiotemporal data, but applied to ICS feature sequences rather than cellular traffic.
2. Data and Scope
Dataset used in the current implementation: HAI 21.03 (CSV feature traces)
Data location (default in config):
dataset/hai/hai-21.03/train*.csv.gz
Feature split (fixed schema):
- Defined in `example/feature_split.json` (see the loading sketch below)
- Continuous features: sensor/process values
- Discrete features: binary/low-cardinality status/flag fields
- The `time` column is excluded from modeling
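A minimal loading sketch, assuming `feature_split.json` maps `"continuous"` and `"discrete"` to column-name lists and that the time column is literally named `time` (both assumptions; the filename below is one illustrative match of the `train*.csv.gz` glob):

```python
import json

import pandas as pd

# Assumed schema of feature_split.json: {"continuous": [...], "discrete": [...]}.
with open("example/feature_split.json") as f:
    split = json.load(f)

# One illustrative file matching dataset/hai/hai-21.03/train*.csv.gz.
df = pd.read_csv("dataset/hai/hai-21.03/train1.csv.gz")
df = df.drop(columns=["time"])  # the time column is excluded from modeling

cont = df[split["continuous"]].astype("float32")  # sensor/process values
disc = df[split["discrete"]].astype("int64")      # status/flag fields
```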
3. End-to-End Pipeline
One command pipeline:
python example/run_all.py --device cuda
Pipeline stages:
1. Prepare data (`example/prepare_data.py`): statistics and vocabularies
2. Train the model (`example/train.py`)
3. Generate samples (`example/export_samples.py`)
4. Evaluate (`example/evaluate_generated.py`)
5. Summarize metrics (`example/summary_metrics.py`)
4. Technical Architecture
4.1 Hybrid Diffusion Model (Core)
Defined in example/hybrid_diffusion.py.
Key components:
- Continuous branch: Gaussian diffusion (DDPM style)
- Discrete branch: Mask diffusion for categorical tokens
- Shared backbone: GRU + residual MLP + LayerNorm
- Embedding inputs:
  - continuous projection
  - discrete embeddings per column
  - time embedding (sinusoidal)
  - positional embedding (sequence index)
  - optional condition embedding (`file_id`)
Outputs:
- Continuous head: predicts the configured target (`eps`, `x0`, or `v`)
- Discrete heads: predict logits for each discrete column
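To make the shape of the model concrete, here is a hypothetical minimal sketch of the shared backbone; class and argument names are illustrative and do not mirror the actual code in `example/hybrid_diffusion.py`. The sinusoidal time embedding is assumed to be computed elsewhere and passed in as `t_emb`, and the optional `file_id` condition embedding is omitted:

```python
import torch
import torch.nn as nn

class HybridBackboneSketch(nn.Module):
    """Illustrative backbone: embeddings -> GRU -> residual MLP -> heads."""

    def __init__(self, n_cont, vocab_sizes, d=128, max_len=4096):
        super().__init__()
        self.cont_proj = nn.Linear(n_cont, d)              # continuous projection
        self.disc_emb = nn.ModuleList(nn.Embedding(v, d) for v in vocab_sizes)
        self.pos_emb = nn.Embedding(max_len, d)            # positional embedding
        self.gru = nn.GRU(d, d, batch_first=True)          # shared temporal backbone
        self.res_mlp = nn.Sequential(
            nn.LayerNorm(d), nn.Linear(d, d), nn.GELU(), nn.Linear(d, d)
        )
        self.cont_head = nn.Linear(d, n_cont)              # predicts eps / x0 / v
        self.disc_heads = nn.ModuleList(nn.Linear(d, v) for v in vocab_sizes)

    def forward(self, x_cont, x_disc, t_emb):
        # x_cont: (B, L, n_cont); x_disc: (B, L, n_disc) token ids; t_emb: (B, d)
        B, L, _ = x_cont.shape
        h = self.cont_proj(x_cont)
        for i, emb in enumerate(self.disc_emb):
            h = h + emb(x_disc[..., i])
        pos = self.pos_emb(torch.arange(L, device=h.device))
        h = h + pos + t_emb[:, None, :]
        h, _ = self.gru(h)
        h = h + self.res_mlp(h)                            # residual MLP + LayerNorm
        return self.cont_head(h), [head(h) for head in self.disc_heads]
```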
4.2 Feature Graph Mixer (Structure Prior)
Implemented in example/hybrid_diffusion.py as FeatureGraphMixer.
Purpose: inject a learnable feature-dependency prior without dataset-specific hardcoding.
Mechanism:
- Learns a dense feature relation matrix `A`
- Applies `x + x @ A`
- Symmetric stabilizing constraint: `(A + A^T) / 2`
- Strength controlled by scale and dropout
Config:
"model_use_feature_graph": true,
"feature_graph_scale": 0.1,
"feature_graph_dropout": 0.0
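A minimal sketch of the mechanism above, assuming the real `FeatureGraphMixer` follows the same three ingredients (dense matrix, symmetrization, scaled residual mixing); initialization details are guesses:

```python
import torch
import torch.nn as nn

class FeatureGraphMixerSketch(nn.Module):
    """Learnable feature-dependency prior: x + scale * x @ sym(A)."""

    def __init__(self, n_features, scale=0.1, dropout=0.0):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(n_features, n_features))  # dense relation matrix
        self.scale = scale                                          # feature_graph_scale
        self.drop = nn.Dropout(dropout)                             # feature_graph_dropout

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        A_sym = 0.5 * (self.A + self.A.T)   # symmetric stabilizing constraint
        return x + self.scale * self.drop(x @ A_sym)
```

With zero initialization the mixer starts as an identity map, so the prior only acts once useful relations are learned.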
4.3 Two-Stage Temporal Backbone
Stage-1 uses a GRU temporal generator to model the sequence trend in normalized space. Stage-2 diffusion then models the residual (x − trend). This decouples temporal consistency from distribution alignment.
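A self-contained sketch of the decomposition with dummy data; names are illustrative, and the project's stage-1 generator is likely conditioned and trained differently:

```python
import torch
import torch.nn as nn

class TrendGRUSketch(nn.Module):
    """Stage 1: GRU temporal generator modeling the trend in normalized space."""

    def __init__(self, n_cont, d=64):
        super().__init__()
        self.gru = nn.GRU(n_cont, d, batch_first=True)
        self.out = nn.Linear(d, n_cont)

    def forward(self, x):
        h, _ = self.gru(x)
        return self.out(h)

x0 = torch.randn(8, 128, 10)       # (batch, seq_len, n_cont), dummy normalized data
trend = TrendGRUSketch(10)(x0)     # stage-1 trend estimate
residual = x0 - trend              # stage-2 diffusion is trained on this residual
```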
5. Diffusion Formulations
5.1 Continuous Diffusion
Forward process:
x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
Targets supported:
- eps prediction (standard DDPM)
- x0 prediction (direct reconstruction)
- v prediction (v = sqrt(a_bar)*eps − sqrt(1-a_bar)*x0)
Current config default:
"cont_target": "v"
Sampling uses the predicted target to reconstruct eps and then applies the standard DDPM reverse update, as sketched below.
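A sketch of that reconstruction plus one reverse step, using the standard DDPM identities and the `v` definition above (function names are illustrative; `a_t`, `a_bar_t`, `beta_t` are the per-step schedule tensors):

```python
import torch

def eps_from_target(pred, x_t, a_bar_t, target="v"):
    """Recover eps from the network output under each supported target."""
    if target == "eps":
        return pred
    if target == "x0":
        # from x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps
        return (x_t - a_bar_t.sqrt() * pred) / (1 - a_bar_t).sqrt()
    if target == "v":
        # from v = sqrt(a_bar)*eps - sqrt(1-a_bar)*x0
        return a_bar_t.sqrt() * pred + (1 - a_bar_t).sqrt() * x_t
    raise ValueError(f"unknown target: {target}")

def ddpm_reverse_step(x_t, eps, a_t, a_bar_t, beta_t, last_step=False):
    """Standard DDPM posterior mean, plus sigma_t = sqrt(beta_t) noise."""
    mean = (x_t - beta_t / (1 - a_bar_t).sqrt() * eps) / a_t.sqrt()
    if last_step:
        return mean
    return mean + beta_t.sqrt() * torch.randn_like(x_t)
```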
5.2 Discrete Diffusion (Mask)
Forward process: replace tokens with [MASK] using cosine schedule:
p(t) = 0.5 * (1 - cos(pi * t / T))
Optional scale: disc_mask_scale
Reverse process: cross-entropy on masked positions only.
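A minimal sketch of the forward masking; the assumption that `disc_mask_scale` simply multiplies the schedule's mask probability is mine, not confirmed from the code:

```python
import math

import torch

def mask_forward(tokens, t, T, mask_id, disc_mask_scale=1.0):
    """Replace tokens with [MASK] w.p. p(t) = scale * 0.5 * (1 - cos(pi*t/T))."""
    p = disc_mask_scale * 0.5 * (1 - math.cos(math.pi * t / T))
    masked = torch.rand(tokens.shape, device=tokens.device) < p
    x_t = torch.where(masked, torch.full_like(tokens, mask_id), tokens)
    return x_t, masked  # `masked` selects the positions used for cross-entropy
```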
6. Loss Design (Current)
Total loss:
L = λ * L_cont + (1 − λ) * L_disc + w_q * L_quantile
6.1 Continuous Loss
Depending on cont_target:
- eps target: MSE(eps_pred, eps)
- x0 target: MSE(x0_pred, x0)
- v target: MSE(v_pred, v_target)
Optional inverse-variance weighting:
cont_loss_weighting = "inv_std"
6.2 Discrete Loss
Cross-entropy on masked positions only.
6.3 Quantile Loss (Distribution Alignment)
Added to improve KS (distribution shape alignment):
- Compute quantiles on generated vs real x0
- Loss = Huber or L1 difference on quantiles
Stabilization:
quantile_loss_warmup_steps
quantile_loss_clip
quantile_loss_huber_delta
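One plausible formulation of this loss, comparing within-batch quantiles per feature; the actual loss in `example/train.py` may differ, and the warmup (which would ramp the weight from zero over `quantile_loss_warmup_steps`) is omitted:

```python
import torch
import torch.nn.functional as F

def quantile_loss(x0_pred, x0_real, qs=(0.05, 0.25, 0.5, 0.75, 0.95),
                  huber_delta=1.0, clip=None):
    """Huber difference between per-feature quantiles of predicted vs. real x0."""
    q = torch.tensor(qs, device=x0_pred.device)
    qp = torch.quantile(x0_pred.reshape(-1, x0_pred.shape[-1]), q, dim=0)  # (Q, F)
    qr = torch.quantile(x0_real.reshape(-1, x0_real.shape[-1]), q, dim=0)
    loss = F.huber_loss(qp, qr, delta=huber_delta)  # cf. quantile_loss_huber_delta
    if clip is not None:                            # cf. quantile_loss_clip
        loss = loss.clamp(max=clip)
    return loss
```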
7. Training Strategy
Defined in example/train.py.
Key techniques:
- EMA of model weights (see the sketch after this list)
- Gradient clipping
- Shuffle buffer to reduce batch bias
- Optional feature graph prior
- Quantile loss warmup for stability
- Optional stage-1 temporal GRU (trend) + residual diffusion
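As a concrete example of the first two techniques, a minimal EMA sketch (illustrative, not the project's implementation); gradient clipping is the usual `torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)` call:

```python
import copy

import torch

class EMASketch:
    """Exponential moving average of model weights."""

    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()   # frozen averaged copy
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```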
Config highlights (example/config.json):
timesteps: 600
batch_size: 128
seq_len: 128
epochs: 10
max_batches: 4000
lambda: 0.7
cont_target: "v"
quantile_loss_weight: 0.1
model_use_feature_graph: true
use_temporal_stage1: true
8. Sampling & Export
Defined in:
- example/sample.py
- example/export_samples.py
Export steps (sketched after this list):
- Reverse diffusion with conditional sampling
- Reverse normalize continuous values
- Clamp to observed min/max
- Restore discrete tokens from vocab
- Write to CSV
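A sketch of these steps with illustrative argument names (not the actual API of `example/export_samples.py`); `cont_norm` is a (rows, features) array produced by the reverse diffusion:

```python
import numpy as np
import pandas as pd

def export_csv(cont_norm, disc_tokens, mean, std, col_min, col_max,
               vocabs, cont_cols, disc_cols, out_path="generated.csv"):
    """Illustrative post-processing: de-normalize, clamp, de-tokenize, write."""
    cont = cont_norm * std + mean                 # reverse normalization
    cont = np.clip(cont, col_min, col_max)        # clamp to observed min/max
    df = pd.DataFrame(cont, columns=cont_cols)
    for j, col in enumerate(disc_cols):
        df[col] = [vocabs[col][t] for t in disc_tokens[:, j]]  # token id -> value
    df.to_csv(out_path, index=False)
```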
9. Evaluation Metrics
Implemented in example/evaluate_generated.py.
Continuous Metrics
- KS statistic (distribution similarity per feature)
- Quantile errors (q05/q25/q50/q75/q95)
- Lag‑1 correlation diff (temporal structure)
Discrete Metrics
- JSD over token frequency distribution
- Invalid token counts
Summary Metrics
Auto-logged to `example/results/metrics_history.csv` via `example/summary_metrics.py`.
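For reference, a sketch of how the core metrics could be computed with NumPy/SciPy; the exact implementation in `example/evaluate_generated.py` (e.g. vocabulary alignment, per-sequence vs. pooled statistics) may differ:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

def lag1(x):
    """Lag-1 autocorrelation of a 1-D series."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def eval_metrics(real_cont, gen_cont, real_disc, gen_disc):
    ks = [ks_2samp(real_cont[:, j], gen_cont[:, j]).statistic
          for j in range(real_cont.shape[1])]
    lag1_diff = [abs(lag1(real_cont[:, j]) - lag1(gen_cont[:, j]))
                 for j in range(real_cont.shape[1])]
    jsd = []
    for j in range(real_disc.shape[1]):
        vocab = np.union1d(real_disc[:, j], gen_disc[:, j])
        p = np.array([(real_disc[:, j] == v).mean() for v in vocab])
        q = np.array([(gen_disc[:, j] == v).mean() for v in vocab])
        jsd.append(jensenshannon(p, q) ** 2)  # squared distance = divergence
    return float(np.mean(ks)), float(np.mean(jsd)), float(np.mean(lag1_diff))
```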
10. Automation
One-click pipeline
python example/run_all.py --device cuda
Metrics logging
Each run appends:
timestamp,avg_ks,avg_jsd,avg_lag1_diff
11. Key Engineering Decisions
11.1 Mixed-Type Diffusion
Continuous and discrete features are handled by separate branches to respect their data types.
11.2 Structure Prior
A learnable feature graph is added to encode implicit dependencies.
11.3 v-prediction
Chosen to stabilize training and improve convergence of the diffusion model.
11.4 Distribution Alignment
Quantile loss is introduced to directly reduce the KS statistic.
12. Known Issues / Current Limitations
- KS remains high in many experiments, meaning the continuous distributions are still misaligned.
- Lag-1 correlation may degrade when the quantile loss is weighted too strongly.
- Loss spikes were observed when the quantile loss is unstable (mitigated with warmup + clipping + Huber).
13. Suggested Next Steps (Research Roadmap)
- SNR-weighted loss (improve stability across timesteps; see the sketch after this list)
- Two-stage training (distribution first, temporal consistency second)
- Upgrade discrete diffusion (D3PM-style transitions)
- Structured conditioning (state/phase conditioning)
- Graph-based priors (explicit feature/plant dependency graphs)
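For the first item, one widely used option is min-SNR weighting; the sketch below assumes eps-prediction (for v-prediction the common variant divides by SNR + 1 instead) and is a proposal, not something currently in the codebase:

```python
import torch

def min_snr_weight(a_bar_t, gamma=5.0):
    """Min-SNR loss weight: clip SNR(t) = a_bar/(1 - a_bar) at gamma, then
    normalize by SNR so low-noise timesteps stop dominating the loss."""
    snr = a_bar_t / (1.0 - a_bar_t)
    return torch.minimum(snr, torch.full_like(snr, gamma)) / snr
```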
14. Code Map (Key Files)
Core model
example/hybrid_diffusion.py
Training
example/train.py
Sampling & export
example/sample.py
example/export_samples.py
Pipeline
example/run_all.py
Evaluation
example/evaluate_generated.py
example/summary_metrics.py
Configs
example/config.json
15. Summary
This project implements a hybrid diffusion model for ICS traffic features, combining continuous Gaussian diffusion with discrete mask diffusion, enhanced with a learnable feature-graph prior. The system includes a full pipeline for preparation, training, sampling, exporting, and evaluation. Key research challenges remain in distribution alignment (KS) and joint optimization of distribution fidelity vs temporal consistency, motivating future improvements such as SNR-weighted loss, staged training, and stronger structural priors.