Introduction
Industrial control systems (ICS) form the backbone of modern critical infrastructure, including power grids, water treatment, manufacturing, and transportation. These systems monitor, regulate, and automate physical processes through sensors, actuators, programmable logic controllers (PLCs), and monitoring software. Unlike conventional IT systems, ICS operate in real time, closely coupled with physical processes and safety-critical constraints, and rely on heterogeneous, legacy communication protocols such as Modbus/TCP and DNP3 that were not originally designed with robust security in mind. This architectural complexity and operational criticality make ICS high-impact targets for cyber attacks, where disruptions can result in physical damage, environmental harm, and even loss of life. Recent reviews of ICS security highlight the expanding attack surface due to increased connectivity, legacy system vulnerabilities, and the inadequacy of traditional security controls in capturing the nuances of ICS networks and protocols [1, 2].
While machine learning (ML) techniques have shown promise for anomaly detection and automated cybersecurity within ICS, they rely heavily on labeled datasets that capture both benign operations and diverse attack patterns. In practice, real ICS traffic data, especially attack-triggered captures, are scarce due to confidentiality, safety, and legal restrictions, and the available public ICS datasets are few, limited in scope, or fail to reflect current threat modalities. For instance, the HAI Security Dataset provides operational telemetry and anomaly flags from a realistic control system setup for research purposes, but must be carefully preprocessed to derive protocol-relevant features for ML tasks [3]. Data scarcity directly undermines model generalization, evaluation reproducibility, and the robustness of intrusion detection research, especially when training or testing ML models on realistic ICS behavior remains confined to small or outdated collections of examples [4].
Synthetic data generation offers a practical pathway to mitigate these challenges. By programmatically generating feature-level sequences that mimic the statistical and temporal structure of real ICS telemetry, researchers can augment scarce training sets, standardize benchmarking, and preserve operational confidentiality. Relative to raw packet captures, feature-level synthesis abstracts critical protocol semantics and statistical patterns without exposing sensitive fields, making it more compatible with the safety and compliance requirements of ICS environments. Modern generative modeling, including diffusion models, has advanced significantly in producing high-fidelity synthetic data across domains. Diffusion approaches such as denoising diffusion probabilistic models learn to transform noise into coherent structured samples and have been successfully applied to tabular and time series data synthesis, with better stability and data coverage than adversarial methods [5, 6].
Despite these advances, most existing work either focuses on packet-level generation [7] or is limited to generic tabular data [5], rather than domain-specific control sequence synthesis tailored for ICS protocols, where temporal coherence, multi-channel dependencies, and discrete protocol legality are jointly required. This gap motivates our focus on protocol feature-level generation for ICS: synthesizing sequences of protocol-relevant fields conditioned on their temporal and cross-channel structure. In this work, we formulate a hybrid modeling pipeline that decouples long-horizon trends from local statistical detail while preserving the discrete semantics of protocol tokens. By combining causal Transformers with diffusion-based refiners, and enforcing deterministic validity constraints during sampling, our framework generates semantically coherent, temporally consistent, and distributionally faithful ICS feature sequences. We evaluate on features derived from the HAI Security Dataset and demonstrate that our approach produces high-quality synthetic sequences suitable for downstream augmentation, benchmarking, and integration into packet-construction workflows that respect realistic ICS constraints.
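As an illustration of what a deterministic validity constraint at sampling time can look like, the sketch below projects raw generator outputs onto admissible values: continuous readings are clipped to a physical range and raw discrete tokens are snapped to a legal set. The field name, range, and code set are assumptions for illustration, not the constraints used in this work.

```python
import numpy as np

# Hypothetical per-field constraints (illustrative only): one continuous
# sensor range and one legal token set for a discrete protocol field.
RANGES = {"pressure": (0.0, 10.0)}
LEGAL_CODES = np.array([1, 2, 3, 4, 6, 16])  # e.g. a set of valid function codes

def enforce_validity(cont, codes):
    """Deterministic projection applied after sampling: every emitted sequence
    satisfies the constraints regardless of what the generative model produced."""
    lo, hi = RANGES["pressure"]
    cont = np.clip(cont, lo, hi)
    # Snap each raw code to the nearest legal token.
    idx = np.abs(codes[:, None] - LEGAL_CODES[None, :]).argmin(axis=1)
    return cont, LEGAL_CODES[idx]

raw_cont = np.array([-1.2, 5.0, 12.7])   # out-of-range continuous samples
raw_codes = np.array([0, 5, 40])         # raw model outputs, possibly illegal
cont, codes = enforce_validity(raw_cont, raw_codes)
```

Because the projection is deterministic, it never introduces stochastic artifacts of its own; the trade-off, discussed in Related Work, is that heavy reliance on post hoc projection can distort marginals, which is why discrete channels are better handled by construction.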
References for Introduction
[1] Machine learning in industrial control system (ICS) security: current landscape, opportunities and challenges https://dl.acm.org/doi/abs/10.1007/s10844-022-00753-1
@article{10.1007/s10844-022-00753-1,
author = {Koay, Abigail M. Y. and Ko, Ryan K. L and Hettema, Hinne and Radke, Kenneth},
title = {Machine learning in industrial control system (ICS) security: current landscape, opportunities and challenges},
year = {2022},
issue_date = {Apr 2023},
publisher = {Kluwer Academic Publishers},
address = {USA},
volume = {60},
number = {2},
issn = {0925-9902},
url = {https://doi.org/10.1007/s10844-022-00753-1},
doi = {10.1007/s10844-022-00753-1},
abstract = {The advent of Industry 4.0 has led to a rapid increase in cyber attacks on industrial systems and processes, particularly on Industrial Control Systems (ICS). These systems are increasingly becoming prime targets for cyber criminals and nation-states looking to extort large ransoms or cause disruptions due to their ability to cause devastating impact whenever they cease working or malfunction. Although myriads of cyber attack detection systems have been proposed and developed, these detection systems still face many challenges that are typically not found in traditional detection systems. Motivated by the need to better understand these challenges to improve current approaches, this paper aims to (1) understand the current vulnerability landscape in ICS, (2) survey current advancements of Machine Learning (ML) based methods with respect to the usage of ML base classifiers (3) provide insights to benefits and limitations of recent advancement with respect to two performance vectors; detection accuracy and attack variety. Based on our findings, we present key open challenges which will represent exciting research opportunities for the research community.},
journal = {J. Intell. Inf. Syst.},
month = oct,
pages = {377--405},
numpages = {29},
keywords = {Operational technology, Cyber security, Dataset, Industrial control systems, Machine learning, Critical infrastructure}
}
[2] Securing Industrial Control Systems: Components, Cyber Threats, and Machine Learning-Driven Defense Strategies https://www.mdpi.com/1424-8220/23/21/8840
@ARTICLE{Nankya2023-gp,
title = "Securing industrial Control Systems: Components, cyber threats,
and machine learning-driven defense strategies",
author = "Nankya, Mary and Chataut, Robin and Akl, Robert",
abstract = "Industrial Control Systems (ICS), which include Supervisory
Control and Data Acquisition (SCADA) systems, Distributed
Control Systems (DCS), and Programmable Logic Controllers (PLC),
play a crucial role in managing and regulating industrial
processes. However, ensuring the security of these systems is of
utmost importance due to the potentially severe consequences of
cyber attacks. This article presents an overview of ICS
security, covering its components, protocols, industrial
applications, and performance aspects. It also highlights the
typical threats and vulnerabilities faced by these systems.
Moreover, the article identifies key factors that influence the
design decisions concerning control, communication, reliability,
and redundancy properties of ICS, as these are critical in
determining the security needs of the system. The article
outlines existing security countermeasures, including network
segmentation, access control, patch management, and security
monitoring. Furthermore, the article explores the integration of
machine learning techniques to enhance the cybersecurity of ICS.
Machine learning offers several advantages, such as anomaly
detection, threat intelligence analysis, and predictive
maintenance. However, combining machine learning with other
security measures is essential to establish a comprehensive
defense strategy for ICS. The article also addresses the
challenges associated with existing measures and provides
recommendations for improving ICS security. This paper becomes a
valuable reference for researchers aiming to make meaningful
contributions within the constantly evolving ICS domain by
providing an in-depth examination of the present state,
challenges, and potential future advancements.",
journal = "Sensors (Basel)",
publisher = "MDPI AG",
volume = 23,
number = 21,
pages = "8840",
month = oct,
year = 2023,
keywords = "SCADA; anomaly detection; artificial intelligence; attacks;
cyber defense; cyber threats; industrial control systems;
security; vulnerabilities",
copyright = "https://creativecommons.org/licenses/by/4.0/",
language = "en"
}
[3] HAI Security Dataset https://www.kaggle.com/datasets/icsdataset/hai-security-dataset
@misc{shin2023hai,
title={HAI Security Dataset},
url={https://www.kaggle.com/dsv/5821622},
DOI={10.34740/KAGGLE/DSV/5821622},
publisher={Kaggle},
author={Shin, Hyeok-Ki and Lee, Woomyo and Choi, Seungoh and Yun, Jeong-Han and Min, Byung Gil and Kim, HyoungChun},
year={2023}
}
[4] Intrusion Detection in Industrial Control Systems Using Transfer Learning Guided by Reinforcement Learning https://doi.org/10.3390/info16100910
@Article{info16100910,
AUTHOR = {Ali, Jokha and Ali, Saqib and Al Balushi, Taiseera and Nadir, Zia},
TITLE = {Intrusion Detection in Industrial Control Systems Using Transfer Learning Guided by Reinforcement Learning},
JOURNAL = {Information},
VOLUME = {16},
YEAR = {2025},
NUMBER = {10},
ARTICLE-NUMBER = {910},
URL = {https://www.mdpi.com/2078-2489/16/10/910},
ISSN = {2078-2489},
ABSTRACT = {Securing Industrial Control Systems (ICSs) is critical, but it is made challenging by the constant evolution of cyber threats and the scarcity of labeled attack data in these specialized environments. Standard intrusion detection systems (IDSs) often fail to adapt when transferred to new networks with limited data. To address this, this paper introduces an adaptive intrusion detection framework that combines a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model with a novel transfer learning strategy. We employ a Reinforcement Learning (RL) agent to intelligently guide the fine-tuning process, which allows the IDS to dynamically adjust its parameters such as layer freezing and learning rates in real-time based on performance feedback. We evaluated our system in a realistic data-scarce scenario using only 50 labeled training samples. Our RL-Guided model achieved a final F1-score of 0.9825, significantly outperforming a standard neural fine-tuning model (0.861) and a target baseline model (0.759). Analysis of the RL agents behavior confirmed that it learned a balanced and effective policy for adapting the model to the target domain. We conclude that the proposed RL-guided approach creates a highly accurate and adaptive IDS that overcomes the limitations of static transfer learning methods. This dynamic fine-tuning strategy is a powerful and promising direction for building resilient cybersecurity defenses for critical infrastructure.},
DOI = {10.3390/info16100910}
}
[5] TabDDPM: Modelling Tabular Data with Diffusion Models https://arxiv.org/abs/2209.15421
@InProceedings{pmlr-v202-kotelnikov23a,
title = {{T}ab{DDPM}: Modelling Tabular Data with Diffusion Models},
author = {Kotelnikov, Akim and Baranchuk, Dmitry and Rubachev, Ivan and Babenko, Artem},
booktitle = {Proceedings of the 40th International Conference on Machine Learning},
pages = {17564--17579},
year = {2023},
editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
volume = {202},
series = {Proceedings of Machine Learning Research},
month = {23--29 Jul},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v202/kotelnikov23a/kotelnikov23a.pdf},
url = {https://proceedings.mlr.press/v202/kotelnikov23a.html},
abstract = {Denoising diffusion probabilistic models are becoming the leading generative modeling paradigm for many important data modalities. Being the most prevalent in the computer vision community, diffusion models have recently gained some attention in other domains, including speech, NLP, and graph-like data. In this work, we investigate if the framework of diffusion models can be advantageous for general tabular problems, where data points are typically represented by vectors of heterogeneous features. The inherent heterogeneity of tabular data makes it quite challenging for accurate modeling since the individual features can be of a completely different nature, i.e., some of them can be continuous and some can be discrete. To address such data types, we introduce TabDDPM — a diffusion model that can be universally applied to any tabular dataset and handles any feature types. We extensively evaluate TabDDPM on a wide set of benchmarks and demonstrate its superiority over existing GAN/VAE alternatives, which is consistent with the advantage of diffusion models in other fields.}
}
[6] Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting https://arxiv.org/abs/2101.12072
@misc{rasul2021autoregressivedenoisingdiffusionmodels,
title={Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting},
author={Kashif Rasul and Calvin Seward and Ingmar Schuster and Roland Vollgraf},
year={2021},
eprint={2101.12072},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2101.12072},
}
[7] NetDiffusion: Network Data Augmentation Through Protocol-Constrained Traffic Generation. https://arxiv.org/abs/2310.08543
@misc{jiang2023netdiffusionnetworkdataaugmentation,
title={NetDiffusion: Network Data Augmentation Through Protocol-Constrained Traffic Generation},
author={Xi Jiang and Shinan Liu and Aaron Gember-Jacobson and Arjun Nitin Bhagoji and Paul Schmitt and Francesco Bronzino and Nick Feamster},
year={2023},
eprint={2310.08543},
archivePrefix={arXiv},
primaryClass={cs.NI},
url={https://arxiv.org/abs/2310.08543},
}
Related Work
Early work on generating "realistic" network data mostly operated at the packet/flow header level, either replaying traces or statistically synthesizing them from single-point observations. Swing extracts user/application/network distributions from single-point observations in a closed-loop, network-responsive manner to reproduce burstiness and correlation across multiple time scales [1]. A series of later works advanced header synthesis to learning-based generation: a WGAN-based method added explicit verification of protocol field consistency for NetFlow/IPFIX records [2], NetShare recast header modeling as flow-level time series and improved fidelity and scalability through domain encoding and parallel fine-tuning [3], and DoppelGANger preserved the long-range structure and downstream rank-ordering consistency of networked time series by decoupling attributes from sequences [4]. In industrial control system (ICS) scenarios, however, raw PCAPs are usually not shareable, and public testbeds (such as SWaT and WADI) mostly provide process/monitoring telemetry and protocol interactions for security assessment; the public datasets emphasize operational variables rather than packet-level traces [5, 6]. This makes protocol- and semantics-aware synthesis at the feature/telemetry level both more feasible and more necessary in practice: the goal is to reproduce high-level distributions and multi-scale temporal patterns under operational semantics and physical constraints, without relying on the original packets. From this perspective, the generation paradigm naturally shifts from reproducing packet syntax to modeling high-level spatio-temporal distributions and their uncertainty, which demands stable training, strong distribution fitting, and interpretable uncertainty characterization.
Diffusion models are a natural fit for this path: DDPM achieves high-quality sampling and stable optimization through an efficient ε-parameterization and a weighted variational objective [7], and the SDE perspective unifies score-based and diffusion models, providing likelihood evaluation and predictor-corrector sampling strategies based on probability-flow ODEs [8]. For time series, TimeGrad replaces a constrained output distribution with conditional denoising, capturing high-dimensional correlations at each step [9], while CSDI performs conditional diffusion explicitly and uses two-dimensional attention to exploit temporal and cross-feature dependencies simultaneously, making it well suited to conditioning and imputing missing values [10]. For more general spatio-temporal structure, DiffSTG generalizes diffusion to spatio-temporal graphs, combining TCN/GCN encoders with a denoising U-Net to improve CRPS and inference efficiency in a non-autoregressive manner [11], and PriSTI further strengthens conditional features and geographical relationships, remaining robust under high missing rates and sensor failures [12]. For long sequences in continuous domains, DiffWave shows that diffusion can match the quality of strong vocoders even under fast non-autoregressive synthesis [13], and studies of cellular traffic show that diffusion can recover spatio-temporal patterns and provide uncertainty characterization at urban scale [14]. Together these results point to one conclusion: when the research focus is on telemetry and high-level features rather than raw messages, diffusion models provide the stable, fine-grained distribution fitting and uncertainty quantification that ICS telemetry synthesis requires. At the same time, entrusting all structure to a single monolithic diffusion model is inadvisable: long-range temporal skeletons and fine-grained marginal distributions often create optimization tensions, and should be explicitly decoupled in the model.
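To make the ε-parameterization concrete, the following is a minimal NumPy sketch of the DDPM forward noising process and the simplified noise-prediction objective; the linear β schedule, horizon, and toy dimensions are illustrative, not the configuration used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule; alpha_bar_t is the cumulative signal-retention factor.
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward process: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def simple_loss(eps_pred, eps):
    """Simplified DDPM objective: MSE between the true and predicted noise."""
    return float(np.mean((eps_pred - eps) ** 2))

# Toy batch of 4-dimensional feature vectors at a mid-schedule timestep.
x0 = rng.normal(size=(8, 4))
eps = rng.normal(size=x0.shape)
xt = q_sample(x0, t=50, eps=eps)
loss_perfect = simple_loss(eps, eps)  # a perfect noise predictor gives zero loss
```

In the full model the noise predictor is a learned network conditioned on t (and, in our setting, on the trend skeleton); here it is left abstract, since the sketch only shows the objective the refiner optimizes.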
Looking further at the mechanistic complexity of ICS: its channel types are inherently mixed, containing both continuous process trajectories and discrete supervision/status variables, and the discrete channels must remain "legal" under operational constraints. The time series diffusion advances above occurred mainly in continuous spaces, but discrete diffusion has also developed systematic methods: D3PM improves sampling quality and likelihood through absorbing/masking and structured transitions in discrete state spaces [15], subsequent masked diffusion provides stable reconstruction on categorical data in a simplified form [4], multinomial diffusion defines diffusion directly on a finite vocabulary through mechanisms such as argmax flows [20], and Diffusion-LM demonstrates an effective path to controllable text generation by imposing gradient constraints in a continuous latent space [16]. From the perspective of protocols and finite-state machines, coverage-guided fuzzing underscores the criticality of sequence legality and state coverage [17–19], echoing the "legal by construction" idea in discrete diffusion: prefer absorbing/masking diffusion on discrete channels, supplemented by type-aware conditioning and sampling constraints, to avoid the semantic invalidity and marginal distortion caused by post hoc thresholding.
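As an illustration of the "legal by construction" idea, this minimal NumPy sketch runs a reverse step of an absorbing/masking discrete diffusion: every masked position is filled by sampling only from that channel's legal vocabulary, so no post hoc thresholding is needed. The per-channel vocabularies and the uniform stand-in for the learned denoiser are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

MASK = -1  # absorbing state, as in D3PM-style absorbing/masking transitions

# Hypothetical legal vocabularies for two discrete protocol channels,
# e.g. a function-code field and a binary status flag (illustrative values).
legal = {0: np.array([1, 2, 3, 4]),  # channel 0: allowed function codes
         1: np.array([0, 1])}        # channel 1: allowed status values

def reverse_unmask(x, model_probs, rng):
    """Fill every masked position with a token sampled from the model
    distribution restricted to that channel's legal vocabulary."""
    out = x.copy()
    for ch in range(x.shape[0]):
        vocab = legal[ch]
        for i in np.where(x[ch] == MASK)[0]:
            p = model_probs(ch, i)[vocab]  # scores on legal tokens only
            p = p / p.sum()                # renormalize over the legal set
            out[ch, i] = rng.choice(vocab, p=p)
    return out

# Dummy "denoiser": uniform scores over a 10-token alphabet (a stand-in for
# a learned network conditioned on context).
model_probs = lambda ch, i: np.ones(10)

x = np.full((2, 6), MASK)  # fully masked 2-channel sequence of length 6
x = reverse_unmask(x, model_probs, rng)
```

A practical sampler would unmask positions gradually over many steps with a trained conditional model; the single fully-unmasking step here only demonstrates how restricting the support enforces legality.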
From the perspective of high-level synthesis, temporal structure is equally indispensable: ICS control often involves delay effects, phased operating conditions, and cross-channel coupling, so models must characterize low-frequency, long-range dependencies while overlaying multi-modal fine-grained fluctuations on top of them. The Transformer family provides ample evidence on long-sequence time series tasks: Transformer-XL breaks the fixed-length context limitation with a reusable memory mechanism and significantly strengthens long-range dependency modeling [21]; Informer balances span and efficiency in long-sequence forecasting with ProbSparse attention and efficient decoding [22]; Autoformer robustly models long-term seasonality and trends through autocorrelation and decomposition mechanisms [23]; FEDformer further improves long-horizon forecasting via frequency-domain enhancement and decomposition [24]; and PatchTST improves the stability and generalization of long multivariate forecasting through patch-based local representations and channel-independent modeling [25]. Combined with our positioning of diffusion above, this chain of evidence suggests a natural division of labor: use an attention-based sequence model to first extract stable low-frequency trends and conditions (the long-range skeleton), then let diffusion focus on margins and details in the residual space, while discrete masking/absorbing diffusion on the supervision/mode variables guarantees vocabulary legality by construction. This design inherits the advantages of time series diffusion in distribution fitting and uncertainty characterization [9–14], while the long-range attention of the Transformer stabilizes the macroscopic temporal support, yielding an operational, integrated generation pipeline under the mixed types and multi-scale dynamics of ICS.
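The trend/residual division of labor can be illustrated with a toy decomposition, using a centred moving average as a cheap stand-in for the learned Transformer trend model (the sketch only shows the decoupling, not the actual skeleton extractor):

```python
import numpy as np

def moving_average_trend(x, k=25):
    """Low-frequency skeleton via a centred moving average; in the pipeline
    this role is played by a learned attention-based sequence model."""
    pad = k // 2
    xp = np.pad(x, (pad, pad), mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

# Toy telemetry: a slow operating-condition cycle plus fast fluctuations.
rng = np.random.default_rng(0)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 500) + 0.1 * rng.normal(size=t.size)

trend = moving_average_trend(signal)
residual = signal - trend  # what the diffusion refiner would model
```

The skeleton retains the slow component, so the refiner only has to fit the near-stationary residual distribution, which sidesteps the optimization tension between long-range structure and fine-grained marginals noted above.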
References for Related Work
[1] Realistic and responsive network traffic generation https://dl.acm.org/doi/10.1145/1159913.1159928
@inproceedings{10.1145/1159913.1159928,
author = {Vishwanath, Kashi Venkatesh and Vahdat, Amin},
title = {Realistic and responsive network traffic generation},
year = {2006},
isbn = {1595933085},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/1159913.1159928},
doi = {10.1145/1159913.1159928},
abstract = {This paper presents Swing, a closed-loop, network-responsive traffic generator that accurately captures the packet interactions of a range of applications using a simple structural model. Starting from observed traffic at a single point in the network, Swing automatically extracts distributions for user, application, and network behavior. It then generates live traffic corresponding to the underlying models in a network emulation environment running commodity network protocol stacks. We find that the generated traces are statistically similar to the original traces. Further, to the best of our knowledge, we are the first to reproduce burstiness in traffic across a range of timescales using a model applicable to a variety of network settings. An initial sensitivity analysis reveals the importance of capturing and recreating user, application, and network characteristics to accurately reproduce such burstiness. Finally, we explore Swing's ability to vary user characteristics, application properties, and wide-area network conditions to project traffic characteristics into alternate scenarios.},
booktitle = {Proceedings of the 2006 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications},
pages = {111--122},
numpages = {12},
keywords = {burstiness, energy plot, generator, internet, modeling, structural model, traffic, wavelets},
location = {Pisa, Italy},
series = {SIGCOMM '06}
}
[2] Flow-based network traffic generation using Generative Adversarial Networks https://arxiv.org/abs/1810.07795
@article{Ring_2019,
title={Flow-based network traffic generation using Generative Adversarial Networks},
volume={82},
ISSN={0167-4048},
url={http://dx.doi.org/10.1016/j.cose.2018.12.012},
DOI={10.1016/j.cose.2018.12.012},
journal={Computers & Security},
publisher={Elsevier BV},
author={Ring, Markus and Schlör, Daniel and Landes, Dieter and Hotho, Andreas},
year={2019},
month = may,
pages = {156--172}
}
[3] Practical GAN-based synthetic IP header trace generation using NetShare https://dl.acm.org/doi/abs/10.1145/3544216.3544251?download=true
@inproceedings{10.1145/3544216.3544251,
author = {Yin, Yucheng and Lin, Zinan and Jin, Minhao and Fanti, Giulia and Sekar, Vyas},
title = {Practical GAN-based synthetic IP header trace generation using NetShare},
year = {2022},
isbn = {9781450394208},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3544216.3544251},
doi = {10.1145/3544216.3544251},
abstract = {We explore the feasibility of using Generative Adversarial Networks (GANs) to automatically learn generative models to generate synthetic packet- and flow header traces for networking tasks (e.g., telemetry, anomaly detection, provisioning). We identify key fidelity, scalability, and privacy challenges and tradeoffs in existing GAN-based approaches. By synthesizing domain-specific insights with recent advances in machine learning and privacy, we identify design choices to tackle these challenges. Building on these insights, we develop an end-to-end framework, NetShare. We evaluate NetShare on six diverse packet header traces and find that: (1) across all distributional metrics and traces, it achieves 46\% more accuracy than baselines and (2) it meets users' requirements of downstream tasks in evaluating accuracy and rank ordering of candidate approaches.},
booktitle = {Proceedings of the ACM SIGCOMM 2022 Conference},
pages = {458--472},
numpages = {15},
keywords = {synthetic data generation, privacy, network packets, network flows, generative adversarial networks},
location = {Amsterdam, Netherlands},
series = {SIGCOMM '22}
}
[4] Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions https://arxiv.org/abs/1909.13403
@inproceedings{Lin_2020, series={IMC 20},
title={Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions},
url={http://dx.doi.org/10.1145/3419394.3423643},
DOI={10.1145/3419394.3423643},
booktitle={Proceedings of the ACM Internet Measurement Conference},
publisher={ACM},
author={Lin, Zinan and Jain, Alankar and Wang, Chen and Fanti, Giulia and Sekar, Vyas},
year={2020},
month = oct,
pages = {464--483},
collection={IMC 20} }
[5] SWaT: a water treatment testbed for research and training on ICS security https://ieeexplore.ieee.org/document/7469060
@INPROCEEDINGS{7469060,
author={Mathur, Aditya P. and Tippenhauer, Nils Ole},
booktitle={2016 International Workshop on Cyber-physical Systems for Smart Water Networks (CySWater)},
title={SWaT: a water treatment testbed for research and training on ICS security},
year={2016},
volume={},
number={},
pages={31-36},
keywords={Sensors;Actuators;Feeds;Process control;Chemicals;Chemical sensors;Security;Cyber Physical Systems;Industrial Control Systems;Cyber Attacks;Cyber Defense;Water Testbed},
doi={10.1109/CySWater.2016.7469060}}
[6] WADI: a water distribution testbed for research in the design of secure cyber physical systems https://www.researchgate.net/publication/315849116_WADI_a_water_distribution_testbed_for_research_in_the_design_of_secure_cyber_physical_systems
@inproceedings{10.1145/3055366.3055375,
author = {Ahmed, Chuadhry Mujeeb and Palleti, Venkata Reddy and Mathur, Aditya P.},
title = {WADI: a water distribution testbed for research in the design of secure cyber physical systems},
year = {2017},
isbn = {9781450349758},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3055366.3055375},
doi = {10.1145/3055366.3055375},
abstract = {The architecture of a water distribution testbed (WADI), and on-going research in the design of secure water distribution system is presented. WADI consists of three stages controlled by Programmable Logic Controllers (PLCs) and two stages controlled via Remote Terminal Units (RTUs). Each PLC and RTU uses sensors to estimate the system state and the actuators to effect control. WADI is currently used to (a) conduct security analysis for water distribution networks, (b) experimentally assess detection mechanisms for potential cyber and physical attacks, and (c) understand how the impact of an attack on one CPS could cascade to other connected CPSs. The cascading effects of attacks can be studied in WADI through its connection to two other testbeds, namely for water treatment and power generation and distribution.},
booktitle = {Proceedings of the 3rd International Workshop on Cyber-Physical Systems for Smart Water Networks},
pages = {25--28},
numpages = {4},
keywords = {attack detection, cyber physical systems, cyber security, industrial control systems, water distribution testbed},
location = {Pittsburgh, Pennsylvania},
series = {CySWATER '17}
}
[7] Denoising Diffusion Probabilistic Models https://arxiv.org/abs/2006.11239
@inproceedings{NEURIPS2020_4c5bcfec,
author = {Ho, Jonathan and Jain, Ajay and Abbeel, Pieter},
booktitle = {Advances in Neural Information Processing Systems},
editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
pages = {6840--6851},
publisher = {Curran Associates, Inc.},
title = {Denoising Diffusion Probabilistic Models},
url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf},
volume = {33},
year = {2020}
}
[8] Score-Based Generative Modeling through Stochastic Differential Equations https://arxiv.org/abs/2011.13456
@misc{song2021scorebasedgenerativemodelingstochastic,
title={Score-Based Generative Modeling through Stochastic Differential Equations},
author={Yang Song and Jascha Sohl-Dickstein and Diederik P. Kingma and Abhishek Kumar and Stefano Ermon and Ben Poole},
year={2021},
eprint={2011.13456},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2011.13456},
}
[9] Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting https://arxiv.org/abs/2101.12072
@misc{rasul2021autoregressivedenoisingdiffusionmodels,
title={Autoregressive Denoising Diffusion Models for Multivariate Probabilistic Time Series Forecasting},
author={Kashif Rasul and Calvin Seward and Ingmar Schuster and Roland Vollgraf},
year={2021},
eprint={2101.12072},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2101.12072},
}
[10] CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation https://arxiv.org/abs/2107.03502
@misc{tashiro2021csdiconditionalscorebaseddiffusion,
title={CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation},
author={Yusuke Tashiro and Jiaming Song and Yang Song and Stefano Ermon},
year={2021},
eprint={2107.03502},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2107.03502},
}
[11] DiffSTG: Probabilistic Spatio-Temporal Graph Forecasting with Denoising Diffusion Models https://arxiv.org/abs/2301.13629
@misc{wen2024diffstgprobabilisticspatiotemporalgraph,
title={DiffSTG: Probabilistic Spatio-Temporal Graph Forecasting with Denoising Diffusion Models},
author={Haomin Wen and Youfang Lin and Yutong Xia and Huaiyu Wan and Qingsong Wen and Roger Zimmermann and Yuxuan Liang},
year={2024},
eprint={2301.13629},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2301.13629},
}
[12] PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation https://arxiv.org/abs/2302.09746
@misc{liu2023pristiconditionaldiffusionframework,
title={PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation},
author={Mingzhe Liu and Han Huang and Hao Feng and Leilei Sun and Bowen Du and Yanjie Fu},
year={2023},
eprint={2302.09746},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2302.09746},
}
[13] DiffWave: A Versatile Diffusion Model for Audio Synthesis https://arxiv.org/abs/2009.09761
@misc{kong2021diffwaveversatilediffusionmodel,
title={DiffWave: A Versatile Diffusion Model for Audio Synthesis},
author={Zhifeng Kong and Wei Ping and Jiaji Huang and Kexin Zhao and Bryan Catanzaro},
year={2021},
eprint={2009.09761},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2009.09761},
}
[14] Spatio-Temporal Diffusion Model for Cellular Traffic Generation https://ieeexplore.ieee.org/document/11087622
@ARTICLE{11087622,
author={Liu, Xiaosi and Xu, Xiaowen and Liu, Zhidan and Li, Zhenjiang and Wu, Kaishun},
journal={IEEE Transactions on Mobile Computing},
title={Spatio-Temporal Diffusion Model for Cellular Traffic Generation},
year={2026},
volume={25},
number={1},
pages={257-271},
keywords={Base stations;Diffusion models;Data models;Uncertainty;Predictive models;Generative adversarial networks;Knowledge graphs;Mobile computing;Telecommunication traffic;Semantics;Cellular traffic;data generation;diffusion model;spatio-temporal graph},
doi={10.1109/TMC.2025.3591183}}
[15] Structured Denoising Diffusion Models in Discrete State-Spaces https://arxiv.org/abs/2107.03006
@misc{austin2023structureddenoisingdiffusionmodels,
title={Structured Denoising Diffusion Models in Discrete State-Spaces},
author={Jacob Austin and Daniel D. Johnson and Jonathan Ho and Daniel Tarlow and Rianne van den Berg},
year={2023},
eprint={2107.03006},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2107.03006},
}
[16] Diffusion-LM Improves Controllable Text Generation https://arxiv.org/abs/2205.14217
@misc{li2022diffusionlmimprovescontrollabletext,
title={Diffusion-LM Improves Controllable Text Generation},
author={Xiang Lisa Li and John Thickstun and Ishaan Gulrajani and Percy Liang and Tatsunori B. Hashimoto},
year={2022},
eprint={2205.14217},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2205.14217},
}
[17] AFLNet Five Years Later: On Coverage-Guided Protocol Fuzzing https://arxiv.org/html/2412.20324v1
@misc{meng2025aflnetyearslatercoverageguided,
title={AFLNet Five Years Later: On Coverage-Guided Protocol Fuzzing},
author={Ruijie Meng and Van-Thuan Pham and Marcel Böhme and Abhik Roychoudhury},
year={2025},
eprint={2412.20324},
archivePrefix={arXiv},
primaryClass={cs.SE},
url={https://arxiv.org/abs/2412.20324},
}
[18] Learn&Fuzz: Machine Learning for Input Fuzzing https://arxiv.org/abs/1701.07232
@misc{godefroid2017learnfuzzmachinelearninginput,
title={Learn&Fuzz: Machine Learning for Input Fuzzing},
author={Patrice Godefroid and Hila Peleg and Rishabh Singh},
year={2017},
eprint={1701.07232},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/1701.07232},
}
[19] NEUZZ: Efficient Fuzzing with Neural Program Smoothing https://arxiv.org/abs/1807.05620
@misc{she2019neuzzefficientfuzzingneural,
title={NEUZZ: Efficient Fuzzing with Neural Program Smoothing},
author={Dongdong She and Kexin Pei and Dave Epstein and Junfeng Yang and Baishakhi Ray and Suman Jana},
year={2019},
eprint={1807.05620},
archivePrefix={arXiv},
primaryClass={cs.CR},
url={https://arxiv.org/abs/1807.05620},
}
[20] Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions https://arxiv.org/abs/2102.05379
@misc{hoogeboom2021argmaxflowsmultinomialdiffusion,
title={Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions},
author={Emiel Hoogeboom and Didrik Nielsen and Priyank Jaini and Patrick Forré and Max Welling},
year={2021},
eprint={2102.05379},
archivePrefix={arXiv},
primaryClass={stat.ML},
url={https://arxiv.org/abs/2102.05379},
}
[21] Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context https://arxiv.org/abs/1901.02860
@misc{dai2019transformerxlattentivelanguagemodels,
title={Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context},
author={Zihang Dai and Zhilin Yang and Yiming Yang and Jaime Carbonell and Quoc V. Le and Ruslan Salakhutdinov},
year={2019},
eprint={1901.02860},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/1901.02860},
}
[22] Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting https://arxiv.org/abs/2012.07436
@misc{zhou2021informerefficienttransformerlong,
title={Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
author={Haoyi Zhou and Shanghang Zhang and Jieqi Peng and Shuai Zhang and Jianxin Li and Hui Xiong and Wancai Zhang},
year={2021},
eprint={2012.07436},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2012.07436},
}
[23] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting https://arxiv.org/abs/2106.13008
@misc{wu2022autoformerdecompositiontransformersautocorrelation,
title={Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting},
author={Haixu Wu and Jiehui Xu and Jianmin Wang and Mingsheng Long},
year={2022},
eprint={2106.13008},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2106.13008},
}
[24] FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting https://arxiv.org/abs/2201.12740
@misc{zhou2022fedformerfrequencyenhanceddecomposed,
title={FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting},
author={Tian Zhou and Ziqing Ma and Qingsong Wen and Xue Wang and Liang Sun and Rong Jin},
year={2022},
eprint={2201.12740},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2201.12740},
}
[25] A Note on Extremal Sombor Indices of Trees with a Given Degree Sequence https://arxiv.org/abs/2211.11920
@article{2023,
title={A Note on Extremal Sombor Indices of Trees with a Given Degree Sequence},
volume={90},
ISSN={0340-6253},
url={http://dx.doi.org/10.46793/match.90-1.197D},
DOI={10.46793/match.90-1.197d},
number={1},
journal={Match Communications in Mathematical and in Computer Chemistry},
publisher={University Library in Kragujevac},
author={Damjanović, Ivan and Milošević, Marko and Stevanović, Dragan},
year={2023},
pages={197--202} }
Methodology
Industrial control system (ICS) telemetry is intrinsically mixed-type and mechanistically heterogeneous: continuous process trajectories (e.g., sensor and actuator signals) coexist with discrete supervisory states (e.g., modes, alarms, interlocks), and the underlying generating mechanisms range from physical inertia to program-driven step logic. This heterogeneity is not cosmetic—it directly affects what “realistic” synthesis means, because a generator must jointly satisfy (i) temporal coherence, (ii) distributional fidelity, and (iii) discrete semantic validity (i.e., every discrete output must belong to its legal vocabulary by construction). These properties are emphasized broadly in operational-technology security guidance and ICS engineering practice, where state logic and physical dynamics are tightly coupled. [12]
We formalize each training instance as a fixed-length window of length $$L$$, consisting of (i) continuous channels $$X\in\mathbb{R}^{L\times d_c}$$ and (ii) discrete channels $$Y=\{{y^{(j)}_{1:L}}\}_{j=1}^{d_d}$$, where each discrete variable $$y^{(j)}_t\in\mathcal{V}_j$$ belongs to a finite vocabulary $$\mathcal{V}_j$$. Our objective is to learn a generator that produces synthetic $$(\hat{X},\hat{Y})$$ that are simultaneously coherent and distributionally faithful, while also ensuring $$\hat{y}^{(j)}_t\in\mathcal{V}_j$$ for all $$j, t$$ by construction (rather than via post-hoc rounding or thresholding).
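As a concrete illustration, the windowing and per-variable legality constraint can be sketched as follows; the function and variable names (`make_windows`, `vocabs`) are illustrative, not from an actual implementation:

```python
import numpy as np

def make_windows(cont, disc, vocabs, L):
    """Slice a recording into fixed-length windows (X, Y) of length L.

    cont: (T, d_c) float array of continuous channels.
    disc: (T, d_d) int array of discrete category ids.
    vocabs: list of d_d sets, the legal vocabulary V_j of each channel.
    """
    T = cont.shape[0]
    windows = []
    for start in range(0, T - L + 1, L):
        X = cont[start:start + L]                  # (L, d_c) continuous window
        Y = disc[start:start + L]                  # (L, d_d) discrete window
        for j, V in enumerate(vocabs):             # legality: y_t^(j) in V_j
            assert set(Y[:, j]).issubset(V), f"illegal token in channel {j}"
        windows.append((X, Y))
    return windows

rng = np.random.default_rng(0)
cont = rng.normal(size=(128, 3))                   # 3 continuous channels
disc = rng.integers(0, 4, size=(128, 2))           # 2 discrete channels
wins = make_windows(cont, disc, vocabs=[set(range(4))] * 2, L=32)
```

The legality check mirrors the by-construction requirement $$\hat{y}^{(j)}_t\in\mathcal{V}_j$$: it is asserted on the data, never repaired by rounding.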
A key empirical and methodological tension in ICS synthesis is that temporal realism and marginal/distributional realism can compete when optimized monolithically: sequence models trained primarily for regression often over-smooth heavy tails and intermittent bursts, while purely distribution-matching objectives can erode long-range structure. Diffusion models provide a principled route to rich distribution modeling through iterative denoising, but they do not, by themselves, resolve (i) the need for a stable low-frequency temporal scaffold, nor (ii) the discrete legality constraints for supervisory variables. [2,8] Recent time-series diffusion work further suggests that separating coarse structure from stochastic refinement can be an effective inductive bias for long-horizon realism. [6,7]
Motivated by these considerations, we propose Mask-DDPM, organized in the following order:
1. Transformer trend module: learns the dominant temporal backbone of continuous dynamics via attention-based sequence modeling [1].
2. Residual DDPM for continuous variables: models distributional detail as stochastic residual structure conditioned on the learned trend [2, 6].
3. Masked diffusion for discrete variables: generates discrete ICS states with an absorbing/masking corruption process and categorical reconstruction [3,4].
4. Type-aware decomposition: a type-aware factorization and routing layer that assigns variables to the most appropriate modeling mechanism and enforces deterministic constraints where warranted.
This ordering is intentional. The trend module establishes a macro-temporal scaffold; residual diffusion then concentrates capacity on micro-structure and marginal fidelity; masked diffusion provides a native mechanism for discrete legality; and the type-aware layer operationalizes the observation that not all ICS variables should be modeled with the same stochastic mechanism. Importantly, while diffusion-based generation for ICS telemetry has begun to emerge, existing approaches remain limited and typically emphasize continuous synthesis or augmentation; in contrast, our pipeline integrates (i) a Transformer-conditioned residual diffusion backbone, (ii) a discrete masked-diffusion branch, and (iii) explicit type-aware routing for heterogeneous variable mechanisms within a single coherent generator. [10,11]
---
Transformer trend module for continuous dynamics
We instantiate the temporal backbone as a causal Transformer trend extractor, leveraging self-attention's ability to represent long-range dependencies and cross-channel interactions without recurrence. [1] Compared with recurrent trend extractors (e.g., GRU-style backbones), a Transformer trend module offers a direct mechanism for modeling delayed effects and multivariate coupling, which are common in ICS, where control actions may influence downstream sensors with nontrivial lags and regime-dependent propagation. [1,12] Crucially, in our design the Transformer is not asked to be the entire generator; instead, it serves a deliberately restricted role: providing a stable, temporally coherent conditioning signal that later stochastic components refine.
For continuous channels $$X$$, we posit an additive decomposition
$$X = S + R$$ ,
where $$S\in\mathbb{R}^{L\times d_c}$$ is a smooth trend capturing predictable temporal evolution, and $$R\in\mathbb{R}^{L\times d_c}$$ is a residual capturing distributional detail (e.g., bursts, heavy tails, local fluctuations) that is difficult to represent robustly with a purely regression-based temporal objective. This separation reflects an explicit division of labor: the trend module prioritizes temporal coherence, while diffusion (introduced next) targets distributional realism at the residual level—a strategy aligned with “predict-then-refine” perspectives in time-series diffusion modeling. [6,7]
We parameterize the trend $$S$$ using a causal Transformer $$f_\phi$$. With teacher forcing, we train $$f_\phi$$ to predict the next-step trend from past observations:
$$\hat{S}_{t+1} = f_\phi(X_{1:t}), \qquad t=1,\dots,L-1,$$
using the mean-squared error objective
$$\mathcal{L}_{trend}(\phi) = \frac{1}{(L-1)d_c}\sum_{t=1}^{L-1}\left\| \hat{S}_{t+1} - X_{t+1}\right\|_2^2.$$
At inference, we roll out the Transformer autoregressively to obtain $$\hat{S}$$ , and then define the residual target for diffusion as $$R = X - \hat{S}$$. This setup intentionally “locks in” a coherent low-frequency scaffold before any stochastic refinement is applied, thereby reducing the burden on downstream diffusion modules to simultaneously learn both long-range structure and marginal detail. In this sense, our use of Transformers is distinctive: it is a conditioning-first temporal backbone designed to stabilize mixed-type diffusion synthesis in ICS, rather than an end-to-end monolithic generator. [1,6,10]
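A minimal sketch of the teacher-forced objective $$\mathcal{L}_{trend}$$, with a trailing moving average standing in for the Transformer $$f_\phi$$ (the moving-average predictor is an assumption for illustration only):

```python
import numpy as np

def causal_ma_predict(X_past, width=4):
    """Stand-in for f_phi: predict the next step as the mean of the
    last `width` observations (causal: uses only the past)."""
    return X_past[-width:].mean(axis=0)

def trend_loss(X, predict):
    """L_trend = (1 / ((L-1) d_c)) * sum_t || S_hat_{t+1} - X_{t+1} ||^2."""
    L, d_c = X.shape
    sq_err = 0.0
    for t in range(1, L):                  # teacher forcing: predict X[t] from X[:t]
        s_hat = predict(X[:t])
        sq_err += np.sum((s_hat - X[t]) ** 2)
    return sq_err / ((L - 1) * d_c)

rng = np.random.default_rng(1)
X = np.cumsum(rng.normal(size=(64, 3)), axis=0)   # a smooth-ish random walk
loss = trend_loss(X, causal_ma_predict)
```

At inference the same predictor would be rolled out autoregressively on its own outputs to form $$\hat{S}$$, after which $$R = X - \hat{S}$$ is the residual target.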
DDPM for continuous residual generation
We model the residual $$R$$ with a denoising diffusion probabilistic model (DDPM) conditioned on the trend $$\hat{S}$$. [2] Diffusion models learn complex data distributions by inverting a tractable noising process through iterative denoising, and have proven effective at capturing multimodality and heavy-tailed structure that is often attenuated by purely regression-based sequence models. [2,8] Conditioning the diffusion model on $$\hat{S}$$ is central: it prevents the denoiser from re-learning the low-frequency scaffold and focuses capacity on residual micro-structure, mirroring the broader principle that diffusion excels as a distributional corrector when a reasonable coarse structure is available. [6,7]
Let $$K$$ denote the number of diffusion steps, with a noise schedule $$\{\beta_k\}_{k=1}^K$$, $$\alpha_k = 1-\beta_k$$, and $$\bar{\alpha}_k=\prod_{i=1}^k \alpha_i$$ . The forward corruption process is:
$$q(r_k\mid r_0)=\mathcal{N}\left(\sqrt{\bar{\alpha}_k}r_0,\ (1-\bar{\alpha}_k)\mathbf{I}\right)$$
equivalently,
$$r_k = \sqrt{\bar{\alpha}_k}r_0 + \sqrt{1-\bar{\alpha}_k}\epsilon,\qquad \epsilon\sim\mathcal{N}(0,\mathbf{I})$$
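The closed-form forward corruption can be sketched with a linear $$\beta$$ schedule (schedule values are illustrative, not tuned):

```python
import numpy as np

# Linear beta schedule and cumulative products for the forward chain.
K = 100
betas = np.linspace(1e-4, 0.02, K)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)                 # abar_k = prod_{i<=k} alpha_i

def q_sample(r0, k, rng):
    """Draw r_k ~ q(r_k | r_0) via the closed form
    r_k = sqrt(abar_k) r_0 + sqrt(1 - abar_k) eps."""
    eps = rng.normal(size=r0.shape)
    r_k = np.sqrt(alpha_bar[k]) * r0 + np.sqrt(1.0 - alpha_bar[k]) * eps
    return r_k, eps

rng = np.random.default_rng(2)
r0 = rng.normal(size=(32, 3))                  # a residual window (L=32, d_c=3)
r_mid, _ = q_sample(r0, k=K // 2, rng=rng)
r_last, _ = q_sample(r0, k=K - 1, rng=rng)
```

As $$k$$ grows, $$\bar{\alpha}_k$$ decays toward zero and $$r_k$$ approaches pure Gaussian noise, which is what the learned reverse process inverts.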
The learned reverse process is parameterized as
$$p_{\theta}(r_{k-1}\mid r_k,\hat{S})=\mathcal{N}\left(\mu_{\theta}(r_k,k,\hat{S}),\ \Sigma_{\theta}(k)\right)$$
where $$\mu_\theta$$ is implemented by a Transformer denoiser that consumes (i) the noised residual $$r_k$$, (ii) a timestep embedding for $$k$$, and (iii) conditioning features derived from $$\hat{S}$$. This denoiser architecture is consistent with the growing use of attention-based denoisers for long-context time-series diffusion, while our key methodological emphasis is the trend-conditioned residual factorization as the object of diffusion learning. [2,7]
We train the denoiser using the standard DDPM $$\epsilon$$-prediction objective:
$$\mathcal{L}_{\text{cont}}(\theta)
=
\mathbb{E}_{k,r_0,\epsilon}
\left[
\left \|
\epsilon - \epsilon_{\theta}(r_k,k,\hat{S})
\right \|_2^2
\right]$$
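One Monte-Carlo term of this objective can be sketched as follows; the zero-predicting denoiser is a placeholder for the Transformer denoiser $$\epsilon_\theta(r_k, k, \hat{S})$$ and is an assumption for illustration only:

```python
import numpy as np

K = 100
betas = np.linspace(1e-4, 0.02, K)
alpha_bar = np.cumprod(1.0 - betas)

def zero_denoiser(r_k, k, S_hat):
    """Placeholder for epsilon_theta: always predicts zero noise."""
    return np.zeros_like(r_k)

def ddpm_loss_one_step(r0, S_hat, k, rng):
    """One sampled term of the epsilon-prediction MSE objective."""
    eps = rng.normal(size=r0.shape)
    r_k = np.sqrt(alpha_bar[k]) * r0 + np.sqrt(1.0 - alpha_bar[k]) * eps
    eps_hat = zero_denoiser(r_k, k, S_hat)     # denoiser sees (r_k, k, S_hat)
    return np.mean((eps - eps_hat) ** 2)

rng = np.random.default_rng(5)
r0 = rng.normal(size=(32, 3))
loss = ddpm_loss_one_step(r0, S_hat=None, k=10, rng=rng)
```

In training, $$k$$ would be drawn uniformly per batch and the expectation estimated by averaging such terms.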
Because diffusion optimization can exhibit timestep imbalance (i.e., some timesteps dominate gradients), we optionally apply an SNR-based reweighting consistent with Min-SNR training:
$$\mathcal{L}^{\text{snr}}_{\text{cont}}(\theta)
=
\mathbb{E}_{k,r_0,\epsilon}
\left[
w(k)\left\|
\epsilon - \epsilon_{\theta}(r_k,k,\hat{S})
\right\|_2^2
\right],
\qquad
w(k)=\frac{\mathrm{SNR}_k}{\mathrm{SNR}_k+\gamma}$$
where $$\mathrm{SNR}_k=\bar{\alpha}_k/(1-\bar{\alpha}_k)$$ and $$\gamma>0$$ is a cap parameter. [5]
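The reweighting can be computed directly from the schedule; $$\gamma = 5$$ below is an illustrative cap value, not a tuned choice:

```python
import numpy as np

K = 100
betas = np.linspace(1e-4, 0.02, K)
alpha_bar = np.cumprod(1.0 - betas)

snr = alpha_bar / (1.0 - alpha_bar)            # SNR_k, decreasing in k
gamma = 5.0                                    # cap parameter (illustrative)
w = snr / (snr + gamma)                        # Min-SNR weights w(k) in (0, 1)
```

Early (low-noise) timesteps have large $$\mathrm{SNR}_k$$ and weights near 1; late timesteps are down-weighted, so no single regime dominates the gradient.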
After sampling $$\hat{R}$$ by reverse diffusion, we reconstruct the continuous output as
$$\hat{X} = \hat{S} + \hat{R}$$ .
Overall, the DDPM component serves as a distributional corrector on top of a temporally coherent backbone, which is particularly suited to ICS where low-frequency dynamics are strong and persistent but fine-scale variability (including bursts and regime-conditioned noise) remains important for realism. Relative to prior ICS diffusion efforts that primarily focus on continuous augmentation, our formulation elevates trend-conditioned residual diffusion as a modular mechanism for disentangling temporal structure from distributional refinement. [10,11]
Masked diffusion for discrete ICS variables
Discrete ICS variables must remain categorical, making Gaussian diffusion inappropriate for supervisory states and mode-like channels. While one can attempt continuous relaxations or post-hoc discretization, such strategies risk producing semantically invalid intermediate states (e.g., “in-between” modes) and can distort the discrete marginal distribution. Discrete-state diffusion provides a principled alternative by defining a valid corruption process directly on categorical variables. [3,4] In the ICS setting, this is not a secondary detail: supervisory tags often encode control logic boundaries (modes, alarms, interlocks) that must remain within a finite vocabulary to preserve semantic correctness. [12]
We therefore adopt masked (absorbing) diffusion for discrete channels, where corruption replaces tokens with a special $$\texttt{[MASK]}$$ symbol according to a schedule. [4] For each variable $$j$$, define a masking schedule $$\{m_k\}_{k=1}^K$$ (with $$m_k\in[0,1]$$) increasing in $$k$$. The forward corruption process is
$$q(y^{(j)}_k \mid y^{(j)}_0)=
\begin{cases}
y^{(j)}_0, & \text{with probability } 1-m_k,\\
\texttt{[MASK]}, & \text{with probability } m_k,
\end{cases}$$
applied independently across $$j$$ and $$t$$. Let $$\mathcal{M}$$ denote the set of masked positions at step $$k$$. The denoiser $$h_{\psi}$$ predicts a categorical distribution over $$\mathcal{V}_j$$ for each masked token, conditioned on (i) the corrupted discrete sequence, (ii) the diffusion step $$k$$, and (iii) continuous context. Concretely, we condition on $$\hat{S}$$ and $$\hat{X}$$ to couple supervisory reconstruction to the underlying continuous dynamics:
$$p_{\psi}\left(y^{(j)}_0 \mid y_k, k, \hat{S}, \hat{X}\right)
= h_{\psi}(y_k,k,\hat{S},\hat{X}).$$
This conditioning choice is motivated by the fact that many discrete ICS states are not standalone; they are functions of regimes, thresholds, and procedural phases that manifest in continuous channels. [12]
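The absorbing corruption for one window can be sketched with a linear masking schedule and a sentinel id standing in for $$\texttt{[MASK]}$$ (both assumptions for illustration):

```python
import numpy as np

MASK = -1                                      # sentinel id for [MASK] (illustrative)

K = 50
m = np.linspace(0.02, 1.0, K)                  # m_k increasing in k, m_K = 1

def mask_corrupt(y0, k, rng):
    """q(y_k | y_0): keep each token w.p. 1 - m_k, absorb to MASK w.p. m_k,
    independently across positions and channels."""
    drop = rng.random(y0.shape) < m[k]
    return np.where(drop, MASK, y0)

rng = np.random.default_rng(3)
y0 = rng.integers(0, 5, size=(64, 2))          # L=64, d_d=2, vocabulary {0..4}
y_k = mask_corrupt(y0, k=K // 2, rng=rng)      # partially masked
y_K = mask_corrupt(y0, k=K - 1, rng=rng)       # fully masked at the final step
```

Because corruption only ever replaces a legal token with $$\texttt{[MASK]}$$, no intermediate state can fall outside $$\mathcal{V}_j\cup\{\texttt{[MASK]}\}$$.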
Training uses a categorical denoising objective:
$$\mathcal{L}_{\text{disc}}(\psi)
=
\mathbb{E}_{k}
\left[
\frac{1}{|\mathcal{M}|}\sum_{(j,t)\in\mathcal{M}}
\mathrm{CE}\left(
h_\psi\left(y_k,k,\hat{S},\hat{X}\right)_{j,t},\ y^{(j)}_{0,t}
\right)
\right]$$
where $$\mathrm{CE}(\cdot,\cdot)$$ is the cross-entropy. At sampling time, we initialize all discrete tokens as $$\texttt{[MASK]}$$ and iteratively unmask them using the learned conditionals, ensuring that every output token lies in its legal vocabulary by construction. This discrete branch is a key differentiator of our pipeline: unlike typical continuous-only diffusion augmentation in ICS, we integrate masked diffusion as a first-class mechanism for supervisory-variable legality within the same end-to-end synthesis workflow. [4,10]
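The iterative unmasking loop can be sketched with a dummy uniform denoiser in place of $$h_\psi$$; the commit-fraction heuristic below is one simple illustrative choice, not a prescribed sampler:

```python
import numpy as np

MASK = -1                                       # sentinel for [MASK] (illustrative)

def dummy_denoiser(y_k, vocab_size, rng):
    """Stand-in for h_psi: one categorical distribution per position."""
    return rng.dirichlet(np.ones(vocab_size), size=y_k.shape)   # (L, d_d, |V|)

def unmask_sample(L, d_d, vocab_size, rounds, rng):
    """Start fully masked; each round commits ~1/step of the remaining
    masked positions to tokens drawn from the denoiser's categoricals."""
    y = np.full((L, d_d), MASK)
    for step in range(rounds, 0, -1):           # step plays the role of k
        probs = dummy_denoiser(y, vocab_size, rng)
        masked = np.argwhere(y == MASK)
        n_commit = int(np.ceil(len(masked) / step))
        for i, j in masked[:n_commit]:
            y[i, j] = rng.choice(vocab_size, p=probs[i, j])
    return y

rng = np.random.default_rng(4)
y_hat = unmask_sample(L=32, d_d=2, vocab_size=4, rounds=8, rng=rng)
```

Every committed token is drawn from a distribution over the vocabulary, so the final sequence is legal by construction and contains no residual masks.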
Type-aware decomposition as factorization and routing layer
Even with a trend-conditioned residual DDPM and a discrete masked-diffusion branch, a single uniform modeling treatment can remain suboptimal because ICS variables are generated by qualitatively different mechanisms. For example, program-driven setpoints exhibit step-and-dwell dynamics; controller outputs follow control laws conditioned on process feedback; actuator positions may show saturation and dwell; and some “derived tags” are deterministic functions of other channels. Treating all channels as if they were exchangeable stochastic processes can misallocate model capacity and induce systematic error concentration on a small subset of mechanistically distinct variables. [12]
We therefore introduce a type-aware decomposition that formalizes this heterogeneity as a routing and constraint layer. Let $$\tau(i)\in\{1,\dots,6\}$$ assign each variable $$i$$ to a type class. The type assignment can be initialized from domain semantics (tag metadata, value domains, and engineering meaning), and subsequently refined via an error-attribution workflow described in the Benchmark section. Importantly, this refinement does not change the core diffusion backbone; it changes which mechanism is responsible for which variable, thereby aligning inductive bias with the variable-generating mechanism while preserving overall coherence.
We use the following taxonomy:
- Type 1 (program-driven / setpoint-like): externally commanded, step-and-dwell variables. These variables can be treated as exogenous drivers (conditioning signals) or routed to specialized change-point / dwell-time models, rather than being forced into a smooth denoiser that may over-regularize step structure.
- Type 2 (controller outputs): continuous variables tightly coupled to feedback loops; these benefit from conditional modeling where the conditioning includes relevant process variables and commanded setpoints.
- Type 3 (actuator states/positions): often exhibit saturation, dwell, and rate limits; these may require stateful dynamics beyond generic residual diffusion, motivating either specialized conditional modules or additional inductive constraints.
- Type 4 (process variables): inertia-dominated continuous dynamics; these are the primary beneficiaries of the Transformer trend + residual DDPM pipeline.
- Type 5 (derived/deterministic variables): algebraic or rule-based functions of other variables; we enforce deterministic reconstruction $$\hat{x}^{(i)} = g_i(\hat{X},\hat{Y})$$ rather than learning a stochastic generator, improving logical consistency and sample efficiency.
- Type 6 (auxiliary/low-impact variables): weakly coupled or sparse signals; we allow simplified modeling (e.g., calibrated marginals or lightweight temporal models) to avoid allocating diffusion capacity where it is not warranted.
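At its simplest, the routing layer reduces to a typed lookup plus deterministic rules for Type 5 channels; the channel names and the derived-tag rule below are hypothetical examples, not taken from a real ICS tag list:

```python
# Type class -> generation mechanism (illustrative labels).
TYPE_TO_MECHANISM = {
    1: "exogenous_driver",      # program-driven setpoints (step-and-dwell)
    2: "conditional_model",     # controller outputs (feedback-conditioned)
    3: "conditional_model",     # actuator states (saturation, dwell, rate limits)
    4: "trend_plus_residual",   # process variables (main diffusion path)
    5: "deterministic",         # derived tags: x_hat_i = g_i(X_hat, Y_hat)
    6: "lightweight",           # auxiliary / low-impact channels
}

# Hypothetical channel -> type assignment tau(i).
channel_types = {"SP_101": 1, "CO_101": 2, "MV_101": 3,
                 "LIT_101": 4, "FLOW_DELTA": 5, "AUX_TEMP": 6}

def route(channel):
    """Return the mechanism responsible for generating this channel."""
    return TYPE_TO_MECHANISM[channel_types[channel]]

def g_flow_delta(x_in, x_out):
    """Example Type 5 rule: a derived tag reconstructed algebraically
    from other generated channels, never sampled stochastically."""
    return x_in - x_out
```

Routing in this form makes the capacity-allocation argument concrete: only channels routed to `trend_plus_residual` contribute to the diffusion losses, while Type 5 channels are reconstructed exactly.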
Type-aware decomposition improves synthesis quality through three mechanisms. First, it improves capacity allocation by preventing a small set of mechanistically atypical variables from dominating gradients and distorting the learned distribution for the majority class (typically Type 4). Second, it enables constraint enforcement by deterministically reconstructing Type 5 variables, preventing logically inconsistent samples that purely learned generators can produce. Third, it improves mechanism alignment by attaching inductive biases consistent with step/dwell or saturation behaviors where generic denoisers may implicitly favor smoothness.
From a novelty standpoint, this layer is not merely an engineering “patch”; it is an explicit methodological statement that ICS synthesis benefits from typed factorization—a principle that has analogues in mixed-type generative modeling more broadly, but that remains underexplored in diffusion-based ICS telemetry synthesis. [9,10,12]
Joint optimization and end-to-end sampling
We train the model in a staged manner consistent with the above factorization, which improves optimization stability and encourages each component to specialize in its intended role. Specifically: (i) we train the trend Transformer $$f_{\phi}$$ to obtain $$\hat{S}$$; (ii) we compute residual targets $$R=X-\hat{S}$$ for the continuous variables routed to residual diffusion; (iii) we train the residual DDPM $$p_{\theta}(R\mid \hat{S})$$ and masked diffusion model $$p_{\psi}(Y\mid \text{masked}(Y), \hat{S}, \hat{X})$$; and (iv) we apply type-aware routing and deterministic reconstruction during sampling. This staged strategy is aligned with the design goal of separating temporal scaffolding from distributional refinement, and it mirrors the broader intuition in time-series diffusion that decoupling coarse structure and stochastic detail can mitigate “structure vs. realism” conflicts. [6,7]
我们采用与上述分解一致的分阶段训练方式以提升优化稳定性并促使各组件专注于其目标角色。具体而言i训练趋势 Transformer $$f_{\phi}$$ 得到 $$\hat{S}$$ii对路由至残差扩散的连续变量计算残差目标 $$R=X-\hat{S}$$iii训练残差 DDPM $$p_{\theta}(R\mid \hat{S})$$ 与掩蔽扩散模型 $$p_{\psi}(Y\mid \text{masked}(Y), \hat{S}, \hat{X})$$iv在采样阶段施加类型感知路由与确定性重构。该分阶段策略与“将时间支架与分布细化分离”的设计目标一致也呼应了时间序列扩散中的一般直觉解耦粗结构与随机细节能够缓解“结构 vs. 逼真”之间的冲突。[6,7]
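The first two stages of this factorization can be illustrated with a toy, dependency-free sketch. Here a centered moving average stands in for the trend Transformer $$f_{\phi}$$, and stage (iii) is stubbed out: the function only returns the scaffold $$\hat{S}$$ and residual targets $$R$$ that the residual DDPM and masked diffusion model would consume. The function names and window shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_trend(X, k=5):
    """Toy stand-in for the trend Transformer f_phi: centered moving average
    per channel of a (time, channels) window."""
    pad = k // 2
    Xp = np.pad(X, ((pad, pad), (0, 0)), mode="edge")
    kern = np.ones(k) / k
    cols = [np.convolve(Xp[:, j], kern, mode="valid") for j in range(X.shape[1])]
    return np.stack(cols, axis=1)

def staged_training_targets(X):
    """Stages (i)-(ii): trend scaffold S_hat, then residual targets R = X - S_hat.
    Stage (iii) would fit p_theta(R | S_hat) and the masked diffusion model on
    these targets; here we only return the quantities each trainer consumes."""
    S_hat = fit_trend(X)
    R = X - S_hat
    return S_hat, R
```

For a constant window the moving-average scaffold absorbs everything and the residual target is identically zero, which matches the intent that the residual branch models only what the trend branch cannot explain.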
A simple combined objective is
$$\mathcal{L} = \lambda\mathcal{L}_{\text{cont}} + (1-\lambda)\mathcal{L}_{\text{disc}}$$,
with $$\lambda\in[0,1]$$ controlling the balance between continuous and discrete learning. Type-aware routing determines which channels contribute to which loss and which are excluded in favor of deterministic reconstruction. In practice, this routing acts as a principled guardrail against negative transfer across variable mechanisms: channels that are best handled deterministically (Type 5) or by specialized drivers (Type 1/3, depending on configuration) are prevented from forcing the diffusion models into statistically incoherent compromises.
其中 $$\lambda\in[0,1]$$ 用于控制连续与离散学习的权衡。类型感知路由决定哪些通道参与哪一部分损失哪些通道应被排除并转为确定性重构。在实践中该路由充当跨机制负迁移的原则性护栏对确定性处理更合适的通道Type 5或应由外生驱动/机制对齐模块承担的通道(根据配置可能为 Type 1/3不会迫使扩散模型在统计上做出不一致的折中。
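A minimal sketch of this routed objective is given below. The numeric type codes and the default $$\lambda$$ are placeholders (the paper specifies the taxonomy qualitatively); the point is that deterministically reconstructed or driver-handled channels simply never enter either loss term.

```python
import numpy as np

# Illustrative type codes only. In the paper's taxonomy, Type 5 channels are
# deterministically reconstructed and (configuration-dependent) Type 1/3
# channels go to specialized drivers, so none of those contribute to the
# diffusion losses.
CONT_DIFFUSION = {4}   # channels trained by the residual DDPM
DISC_DIFFUSION = {2}   # channels trained by masked diffusion

def combined_loss(channel_losses, channel_types, lam=0.7):
    """L = lam * L_cont + (1 - lam) * L_disc, with type-aware routing masks."""
    cont = [l for l, t in zip(channel_losses, channel_types) if t in CONT_DIFFUSION]
    disc = [l for l, t in zip(channel_losses, channel_types) if t in DISC_DIFFUSION]
    L_cont = float(np.mean(cont)) if cont else 0.0
    L_disc = float(np.mean(disc)) if disc else 0.0
    return lam * L_cont + (1.0 - lam) * L_disc
```

With per-channel losses `[1.0, 2.0, 3.0]` and types `[4, 2, 5]`, the Type 5 channel is excluded entirely and only the first two channels shape the objective.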
At inference time, generation follows the same structured order: (i) trend $$\hat{S}$$ via the Transformer, (ii) residual $$\hat{R}$$ via DDPM, (iii) discrete $$\hat{Y}$$ via masked diffusion, and (iv) type-aware assembly with deterministic reconstruction for routed variables. This pipeline produces $$(\hat{X},\hat{Y})$$ that are temporally coherent by construction (through $$\hat{S}$$), distributionally expressive (through $$\hat{R}$$ denoising), and discretely valid (through masked diffusion), while explicitly accounting for heterogeneous variable-generating mechanisms through type-aware routing. In combination, these choices constitute our central methodological contribution: a unified Transformer + mixed diffusion generator for ICS telemetry, augmented by typed factorization to align model capacity with domain mechanism. [2,4,10,12]
推理阶段的生成遵循同样的结构化顺序i用 Transformer 生成趋势 $$\hat{S}$$ii用 DDPM 生成残差 $$\hat{R}$$iii用掩蔽扩散生成离散序列 $$\hat{Y}$$iv类型感知装配并对被路由变量做确定性重构。该管线使输出 $$(\hat{X},\hat{Y})$$ 在时间上按构造一致(由 $$\hat{S}$$ 保证)、在分布上具有表达力(由 $$\hat{R}$$ 去噪提供)、在离散上保持合法(由掩蔽扩散保证),同时通过类型路由显式刻画异质变量机理。综合而言,这些选择构成了本文的核心方法贡献:一个面向 ICS 遥测的统一 Transformer + 混合扩散生成器,并以类型化分解将模型容量与领域机理对齐。[2,4,10,12]
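The four-stage sampling order can be captured by a small orchestration function. The component callables here are hypothetical stand-ins (lambdas over NumPy arrays), not the trained models; the sketch only fixes the structured order in which the real components would be invoked.

```python
import numpy as np

def sample_window(trend_fn, residual_fn, discrete_fn, assemble_fn):
    """Structured sampling: (i) trend, (ii) residual, (iii) discrete,
    (iv) type-aware assembly with deterministic reconstruction."""
    S_hat = trend_fn()                  # (i)  Transformer trend scaffold
    R_hat = residual_fn(S_hat)          # (ii) residual DDPM, conditioned on S_hat
    X_hat = S_hat + R_hat               #      continuous channels
    Y_hat = discrete_fn(S_hat, X_hat)   # (iii) masked diffusion, discrete channels
    return assemble_fn(X_hat, Y_hat)    # (iv)  typed routing / reconstruction

# Stand-in components for a (96, 4) continuous / (96, 2) discrete window.
X_hat, Y_hat = sample_window(
    trend_fn=lambda: np.zeros((96, 4)),
    residual_fn=lambda S: 0.1 * np.ones_like(S),
    discrete_fn=lambda S, X: np.zeros((96, 2), dtype=int),
    assemble_fn=lambda X, Y: (X, Y),
)
```

Keeping the order fixed in one place makes the dependency structure explicit: the residual model never sees raw data at sampling time, only the scaffold, and the discrete model conditions on both the scaffold and the assembled continuous draft.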
References for the Methodology Section
[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. Attention Is All You Need. Advances in Neural Information Processing Systems (NeurIPS), 30, 2017.
🔗 https://arxiv.org/abs/1706.03762 | https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
[2] Ho, J., Jain, A., & Abbeel, P. Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems (NeurIPS), 33, 2020.
🔗 https://arxiv.org/abs/2006.11239 | https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf
[3] Austin, J., Johnson, D. D., Ho, J., Tarlow, D., & van den Berg, R. Structured Denoising Diffusion Models in Discrete State-Spaces. Advances in Neural Information Processing Systems (NeurIPS), 34, 2021.
🔗 https://arxiv.org/abs/2107.03006 | https://proceedings.neurips.cc/paper/2021/hash/958c530554f78bcd8e97125b70e6973d-Abstract.html
[4] Shi, J., Han, K., Wang, Z., Doucet, A., & Titsias, M. K. Simplified and Generalized Masked Diffusion for Discrete Data. arXiv preprint arXiv:2406.04329, 2024.
🔗 https://arxiv.org/abs/2406.04329
[5] Hang, T., Gu, S., Li, C., Bao, J., Chen, D., Hu, H., Geng, X., & Guo, B. Efficient Diffusion Training via Min-SNR Weighting Strategy. IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7407-7417, 2023.
🔗 https://arxiv.org/abs/2303.09556 | https://openaccess.thecvf.com/content/ICCV2023/html/Hang_Efficient_Diffusion_Training_via_Min-SNR_Weighting_Strategy_ICCV_2023_paper.html
[6] Kollovieh, M., Ansari, A. F., Bohlke-Schneider, M., Zschiegner, J., Wang, H., & Wang, Y. Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting. Advances in Neural Information Processing Systems (NeurIPS), 36, 2023.
🔗 https://arxiv.org/abs/2307.11494 | https://proceedings.neurips.cc/paper_files/paper/2023/hash/5a1a10c2c2c9b9af1514687bc24b8f3d-Abstract-Conference.html
[7] Sikder, M. F., Ramachandranpillai, R., & Heintz, F. TransFusion: Generating Long, High Fidelity Time Series using Diffusion Models with Transformers. arXiv preprint arXiv:2307.12667, 2023.
🔗 https://arxiv.org/abs/2307.12667
[8] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B. Score-Based Generative Modeling through Stochastic Differential Equations. International Conference on Learning Representations (ICLR), 2021.
🔗 https://arxiv.org/abs/2011.13456 | https://openreview.net/forum?id=PxTIG12RRHS
[9] Shi, J., Xu, M., Hua, H., Zhang, H., Ermon, S., & Leskovec, J. TabDiff: a Mixed-type Diffusion Model for Tabular Data Generation. International Conference on Learning Representations (ICLR), 2025.
🔗 https://arxiv.org/abs/2410.20626 | https://openreview.net/forum?id=swvURjrt8z
[10] Yuan, Y., Sha, Y., Zhao, W., & Zhang, K. CTU-DDPM: Generating Industrial Control System Time-Series Data with a CNN-Transformer Hybrid Diffusion Model. Proceedings of the 2025 International Symposium on Artificial Intelligence and Computational Social Sciences (ACM AICSS '25), pp. 123132, 2025. DOI:10.1145/3776759.3776845.
🔗 https://dl.acm.org/doi/10.1145/3776759.3776845
[11] Sha, Y., Yuan, Y., Wu, Y., & Zhao, H. DDPM Fusing Mamba and Adaptive Attention: An Augmentation Method for Industrial Control Systems Anomaly Data. SSRN Electronic Journal, posted January 10, 2026. SSRN ID: 6055903. DOI:10.2139/ssrn.6055903.
🔗 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055903
[12] Stouffer, K., Lightman, S., Pillitteri, L., Abrams, M., Hahn, A., & Smith, J. Guide to Operational Technology (OT) Security (NIST Special Publication 800-82 Rev. 3). National Institute of Standards and Technology, September 2023.
🔗 https://csrc.nist.gov/pubs/sp/800/82/r3/final
Benchmark
We evaluate the proposed pipeline on feature sequences derived from the HAI Security Dataset, using fixed-length windows (L=96) that preserve the mixed-type structure of ICS telemetry. The goal of this benchmark is not only to report “overall similarity”, but to justify why the proposed factorization is a better fit for protocol feature synthesis: continuous channels must match physical marginals, discrete channels must remain semantically legal, and both must retain short-horizon dynamics that underpin state transitions and interlocks.
我们在从 HAI Security Dataset 导出的特征序列上评估所提出的流程采用定长窗口L=96以保留 ICS 遥测的混合类型结构。该基准测试不仅报告“整体相似度”,更旨在解释为何本文的因子化更适合协议特征合成:连续通道需要匹配物理边际分布,离散通道需要保持语义合法性,而两者都必须保留支撑状态切换与联锁逻辑的短时域动态。
This emphasis reflects evaluation practice in time-series generation, where strong results are typically supported by multiple complementary views (marginal fidelity, dependency/temporal structure, and downstream plausibility), rather than a single aggregate score. In the ICS setting, this multi-view requirement is sharper: a generator that matches continuous marginals while emitting out-of-vocabulary supervisory tokens is unusable for protocol reconstruction, and a generator that matches marginals but breaks lag structure can produce temporally implausible command/response sequences.
这种强调也呼应了时间序列生成领域的常见评估方式:高质量结果通常需要多个互补视角(边际分布、依赖/时间结构、下游可用性)共同支撑,而非依赖单一汇总分数。在 ICS 场景下,这一要求更为尖锐:若生成器虽匹配连续边际却产生越界的监督 token则无法用于协议重构若仅匹配边际却破坏滞后结构则会产生时间上不可信的命令/响应序列。
Positioning against prior evaluation practice
Recent ICS time-series generators often emphasize aggregate similarity scores and utility-driven evaluations (e.g., anomaly-detection performance) to demonstrate realism, which is valuable but can under-specify mixed-type protocol constraints. Our benchmark complements these practices by making mixed-type legality and per-feature distributional alignment explicit: discrete outputs are evaluated as categorical distributions (JSD) and are constrained to remain within the legal vocabulary by construction, while continuous channels are evaluated with nonparametric distribution tests (KS). This combination provides a direct, protocol-relevant justification for the hybrid design, rather than relying on a single composite score that may mask discrete failures.
与既有评估范式的对照
近期的 ICS 时间序列生成工作常以汇总相似度分数与效用导向评估例如异常检测性能来证明“逼真度”这类证据很有价值但可能不足以刻画混合类型协议约束。本文的基准测试对其形成补充我们将混合类型的合法性与特征级分布对齐显式化——离散输出以类别分布JSD评估并按构造限制在合法词表内连续通道则用非参数分布检验KS评估。该组合能够从协议特征角度直接支撑混合设计的必要性而不是依赖可能掩盖离散失败的单一复合分数。
Evaluation metrics
For continuous channels, we measure distributional alignment using the KolmogorovSmirnov (KS) statistic computed per feature between the empirical distributions of real and synthetic samples, and then averaged across features. For discrete channels, we quantify marginal fidelity with JensenShannon divergence (JSD) between categorical distributions per feature, averaged across discrete variables. To assess temporal realism, we compare lag-1 autocorrelation at the feature level and report the mean absolute difference between real and synthetic lag-1 autocorrelation, averaged across features. In addition, to avoid degenerate comparisons driven by near-constant tags, features whose empirical standard deviation falls below a small threshold are excluded from continuous KS aggregation; such channels carry limited distributional information and can distort summary statistics.
评估指标
对连续通道,我们在每个特征上比较真实与合成样本的经验分布,计算 KolmogorovSmirnovKS统计量并在特征维度上取平均以得到总体分布对齐程度。对离散通道我们在每个离散特征上比较类别分布并计算 JensenShannon divergenceJSD再在离散变量维度上取平均以衡量边际保真性。对时间逼真性我们比较特征级的 lag-1 自相关,并报告真实与合成 lag-1 自相关的平均绝对差(在特征维度上取平均)。此外,为避免近常量标签导致的退化比较,我们在连续 KS 汇总时排除经验标准差低于阈值的特征,因为此类通道携带的分布信息有限,容易扭曲汇总统计。
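The three metric families can each be computed per feature with short NumPy routines. This is a dependency-free sketch (equivalent to `scipy.stats.ks_2samp` for the KS statistic, up to ties handling); note that the low-variance exclusion floor below is a placeholder value, since the text only specifies "a small threshold".

```python
import numpy as np

def ks_statistic(real, synth):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    grid = np.sort(np.concatenate([real, synth]))
    cdf_r = np.searchsorted(np.sort(real), grid, side="right") / len(real)
    cdf_s = np.searchsorted(np.sort(synth), grid, side="right") / len(synth)
    return float(np.max(np.abs(cdf_r - cdf_s)))

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two categorical distributions (natural log)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a single feature series."""
    x = np.asarray(x, float) - np.mean(x)
    denom = float(np.sum(x * x))
    return float(np.sum(x[:-1] * x[1:]) / denom) if denom > 0 else 0.0

def mean_ks(real_cols, synth_cols, std_floor=1e-6):
    """Average KS over continuous features, excluding near-constant channels
    (which carry little distributional information and distort the summary)."""
    vals = [ks_statistic(r, s) for r, s in zip(real_cols, synth_cols)
            if np.std(r) >= std_floor]
    return float(np.mean(vals))
```

As sanity checks: disjoint continuous samples give KS = 1, identical samples give KS = 0, disjoint one-hot categorical distributions give JSD = ln 2, and a near-constant real channel is dropped from the continuous aggregate.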
Quantitative results
Across three runs, the mean continuous KS is 0.3311 (std 0.0079) and the mean discrete JSD is 0.0284 (std 0.0073), indicating that the generator preserves both continuous marginals and discrete semantic distributions at the feature level. Temporal consistency is similarly stable across runs, with a mean lag-1 autocorrelation difference of 0.2684 (std 0.0027), suggesting that the synthesized windows retain short-horizon dynamical structure instead of collapsing to marginal matching alone. The best-performing instance (by mean KS) attains 0.3224, and the small inter-seed variance shows that the reported fidelity is reproducible rather than driven by a single favorable initialization.
定量结果
在三个独立运行上,连续 KS 的均值为 0.3311(标准差 0.0079),离散 JSD 的均值为 0.0284(标准差 0.0073表明生成器在特征层面同时较好地保留了连续边际与离散语义分布。时间一致性同样稳定lag-1 自相关差的均值为 0.2684(标准差 0.0027),说明合成窗口能够保留短时域动力学结构,而不仅仅是边际分布匹配。在三次运行中,按均值 KS 衡量的最佳实例达到 0.3224,表明该保真度具有可重复性,并非由单次有利初始化所驱动。
| Metric | Aggregation | Lower is better | Mean ± Std (3 seeds) |
|---|---|---:|---:|
| KS (continuous) | mean over continuous features | ✓ | 0.3311 ± 0.0079 |
| JSD (discrete) | mean over discrete features | ✓ | 0.0284 ± 0.0073 |
| Abs Δ lag-1 autocorr | mean over features | ✓ | 0.2684 ± 0.0027 |
Table: Summary of benchmark metrics (three independent seeds).
Benchmark 指标汇总(三个独立随机种子)。
![Benchmark overview figure (workflow, feature fidelity, dataset shift, and robustness).](figures/benchmark_panel.svg)
Benchmark 综合图(流程、特征级分布保真、训练集分布漂移与跨种子鲁棒性)。
![Seed robustness summary across three independent runs.](figures/benchmark_metrics.svg)
跨三次独立运行的鲁棒性汇总图。
![KS outlier attribution (top-K features and average KS after removing worst features).](figures/ranked_ks.svg)
KS 离群归因图Top-K 误差特征与“移除最差特征后”的平均 KS 变化)。
![CDF alignment for a representative set of high-KS continuous features: P1_B4002, P1_PIT02, P1_FCV02Z, P1_B3004.](example/results/cdf_P1_B4002.svg)
代表性高 KS 连续特征的 CDF 对齐P1_B4002。
![CDF alignment for P1_PIT02.](example/results/cdf_P1_PIT02.svg)
P1_PIT02 的 CDF 对齐图。
![CDF alignment for P1_FCV02Z.](example/results/cdf_P1_FCV02Z.svg)
P1_FCV02Z 的 CDF 对齐图。
![CDF alignment for P1_B3004.](example/results/cdf_P1_B3004.svg)
P1_B3004 的 CDF 对齐图。
![All continuous features distribution comparison (empirical CDF grid: generated vs real).](figures/cdf_grid.svg)
所有连续特征的分布对比(经验 CDF 网格:生成 vs 原始)。
![Discrete features categorical distribution comparison (dot plot: generated vs real).](figures/disc_points.svg)
离散特征的类别分布对比(点图:两种颜色分别代表生成与原始)。
![Generated line series (normalized by real min/max) for four representative features: P1_B4002, P1_PIT02, P1_FCV02Z, P1_B3004.](figures/lines.svg)
四个代表性特征的生成序列折线图(按真实 min/max 归一化)。
Why this benchmark highlights where the method helps
To make the benchmark actionable (and comparable to prior work), we report type-appropriate, interpretable statistics instead of collapsing everything into a single similarity score. This matters in mixed-type ICS telemetry: continuous fidelity can be high while discrete semantics fail, and vice versa. By separating continuous (KS), discrete (JSD), and temporal (lag-1) views, the evaluation directly matches the design goals of the hybrid generator: distributional refinement for continuous residuals, vocabulary-valid reconstruction for discrete supervision, and trend-induced short-horizon coherence.
为何该基准测试能够凸显方法优势
为使基准测试具备可操作性并便于与既有工作对比,我们报告与数据类型匹配且可解释的统计量,而非将所有差异压缩为单一相似度分数。这一点在混合类型的 ICS 遥测中尤为关键连续分布可能看似很接近但离散语义可能崩坏反之亦然。将连续KS、离散JSD与时间lag-1三个视角分离后评估即可与混合生成器的设计目标逐一对齐对连续残差的分布细化、对离散监督变量的词表合法重构以及由趋势分支诱导的短时域一致性。
In addition, the seed-averaged reporting mirrors evaluation conventions in recent diffusion-based time-series generation studies, where robustness across runs is increasingly treated as a first-class signal rather than an afterthought. In this sense, the small inter-seed variance is itself evidence that the factorized training and typed routing reduce instability and localized error concentration, which is frequently observed when heterogeneous channels compete for the same modeling capacity.
此外,对随机种子取均值的报告方式也与近年来扩散式时间序列生成研究的评估惯例一致:跨运行鲁棒性正逐渐被视为一项一等信号,而非事后补充。在这一意义上,跨种子的低方差本身也可视为证据:分解式训练与类型路由降低了不稳定性与局部误差集中,而这类现象常见于异质通道争夺同一建模容量的情形。
Error-attribution refinement for type-aware decomposition
To connect evaluation back to the typed factorization described in Methodology, we use feature-level metric attribution as a refinement signal for the type assignment. Starting from an initial semantic typing (value domain, operational role, and tag metadata), we compute per-feature divergences and inspect persistent outliers across runs. Features identified as near-deterministic (e.g., derived tags or low-variance indicators) are re-routed to deterministic reconstruction (Type 5) or simplified handling (Type 6), while step-and-dwell or saturation-prone channels are treated as exogenous drivers or routed to mechanism-aligned types (Type 1/3). This workflow does not change the overall generation order; it reallocates responsibility across typed mechanisms so that stochastic components focus on variables that benefit from probabilistic modeling, improving global fidelity while reducing localized error concentration.
类型感知分解的误差归因细化
为将评估与方法部分的类型化因子分解对应起来我们将特征级指标的归因作为类型分配的细化信号。具体地从初始语义类型取值域、运行角色与标签元信息出发我们计算每个特征的偏差并关注跨运行持续存在的离群项。被识别为近确定性的特征如派生标签或低方差指示量将被路由到确定性重构Type 5或简化处理Type 6而阶跃—驻留或易饱和通道则作为外生驱动或被路由到机制对齐的类型Type 1/3。该流程不改变整体生成顺序而是在类型机制之间重新分配“责任”使随机组件专注于真正受益于概率建模的变量从而在降低局部误差集中的同时提升整体保真度。
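A minimal sketch of this refinement rule follows. The numeric thresholds, type codes, and the "flagged in all three runs" criterion are hypothetical concretizations, since the text describes the workflow qualitatively; only the two routing decisions (near-deterministic channels to Type 5, persistent high-KS channels to mechanism-aligned handling) come from the description above.

```python
# Hypothetical thresholds; the paper specifies this workflow qualitatively.
STD_NEAR_DETERMINISTIC = 1e-3   # below this, treat the channel as near-constant
KS_OUTLIER = 0.6                # per-feature KS above this counts as an outlier
MIN_RUNS_FLAGGED = 3            # require persistence across independent runs

def refine_types(types, feature_std, feature_ks, runs_flagged):
    """Re-route channels across typed mechanisms based on metric attribution.

    types:        {feature: current type code}
    feature_std:  {feature: empirical std on real data}
    feature_ks:   {feature: per-feature KS statistic}
    runs_flagged: {feature: number of runs in which the feature was an outlier}
    """
    new_types = dict(types)
    for f in types:
        if feature_std[f] < STD_NEAR_DETERMINISTIC:
            new_types[f] = 5   # deterministic reconstruction
        elif feature_ks[f] > KS_OUTLIER and runs_flagged[f] >= MIN_RUNS_FLAGGED:
            new_types[f] = 1   # exogenous driver / mechanism-aligned handling
    return new_types
```

Because the rule only reassigns type codes, the generation order is untouched: the same pipeline runs, but the stochastic components no longer spend capacity on channels that are better served deterministically or by a driver.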
Future work
Conclusion