Polish paper text, add refs and remove template
Edit arxiv-style/main.tex for clarity and wording (abstract and methodology), remove stray template comments, and make minor copy edits; append several bibliography entries to arxiv-style/references.bib; delete the unused arxiv-style/template.tex file; add texput.log (LaTeX compilation output). Primarily editorial and bibliography updates, plus cleanup of the template artifact.
arxiv-style/main.tex
@@ -24,16 +24,14 @@
 \usepackage{caption} % Better caption spacing
 \usepackage{float} % Precise figure placement
 
-% (garbled Chinese comment)
 \title{Mask-DDPM: Transformer-Conditioned Mixed-Type Diffusion for Semantically Valid ICS Telemetry Synthesis}
 
-% (garbled Chinese comment)
 \date{}
 
 \newif\ifuniqueAffiliation
 \uniqueAffiliationtrue
 
-\ifuniqueAffiliation % (garbled Chinese comment)
+\ifuniqueAffiliation
 \author{
 Zhenglan Chen \\
 Aberdeen Institute of Data Science and Artificial Intelligence\\
@@ -61,10 +59,8 @@
 }
 \fi
 
-% (garbled Chinese comment)
 \renewcommand{\shorttitle}{\textit{arXiv} Template}
 
-%%% PDF (garbled Chinese comment)
 \hypersetup{
 pdftitle={Your Paper Title},
 pdfsubject={cs.LG, cs.CR},
@@ -76,10 +72,9 @@ pdfkeywords={Keyword1, Keyword2, Keyword3},
 \maketitle
 
 \begin{abstract}
-Industrial control systems (ICS) security research is increasingly constrained by the scarcity and non-shareability of realistic traffic and telemetry, especially for attack scenarios. To mitigate this bottleneck, we study synthetic generation at the protocol feature/telemetry level, where samples must simultaneously preserve temporal coherence, match continuous marginal distributions, and keep discrete supervisory variables strictly within valid vocabularies. We propose Mask-DDPM, a hybrid framework tailored to mixed-type, multi-scale ICS sequences. Mask-DDPM factorizes generation into (i) a causal Transformer trend module that rolls out a stable long-horizon temporal scaffold for continuous channels, (ii) a trend-conditioned residual DDPM that refines local stochastic structure and heavy-tailed fluctuations without degrading global dynamics, (iii) a masked (absorbing) diffusion branch for discrete variables that guarantees categorical legality by construction, and (iv) a type-aware decomposition/routing layer that aligns modeling mechanisms with heterogeneous ICS variable origins and enforces deterministic reconstruction where appropriate. Evaluated on fixed-length windows ($L=96$) derived from the HAI Security Dataset, Mask-DDPM achieves stable fidelity across seeds with mean KS = 0.3311 $\pm$ 0.0079 (continuous), mean JSD = 0.0284 $\pm$ 0.0073 (discrete), and mean absolute lag-1 autocorrelation difference = 0.2684 $\pm$ 0.0027, indicating faithful marginals, preserved short-horizon dynamics, and valid discrete semantics. The resulting generator provides a reproducible basis for data augmentation, benchmarking, and downstream ICS protocol reconstruction workflows.
+Industrial control systems (ICS) security research is increasingly constrained by the scarcity and limited shareability of realistic communication traces and process measurements, especially for attack scenarios. To mitigate this bottleneck, we study synthetic generation at the protocol-feature and process-signal level, where samples must simultaneously preserve temporal coherence, match continuous marginal distributions, and keep discrete supervisory variables strictly within valid vocabularies. We propose Mask-DDPM, a hybrid framework tailored to mixed-type, multi-scale ICS sequences. Mask-DDPM factorizes generation into (i) a causal Transformer trend module that rolls out a stable long-range temporal scaffold for continuous channels, (ii) a trend-conditioned residual DDPM that refines local stochastic structure and heavy-tailed fluctuations without degrading global dynamics, (iii) a masked (absorbing) diffusion branch for discrete variables that guarantees valid symbol generation by construction, and (iv) a type-aware decomposition/routing layer that aligns modeling mechanisms with heterogeneous ICS variable origins and enforces deterministic reconstruction where appropriate. Evaluated on fixed-length windows ($L=96$) derived from the HAI Security Dataset, Mask-DDPM achieves stable fidelity across seeds with mean KS = 0.3311 $\pm$ 0.0079 (continuous), mean JSD = 0.0284 $\pm$ 0.0073 (discrete), and mean absolute lag-1 autocorrelation difference = 0.2684 $\pm$ 0.0027, indicating faithful marginals, preserved short-horizon dynamics, and valid discrete semantics. The resulting generator provides a reproducible basis for data augmentation, benchmarking, and downstream ICS protocol reconstruction workflows.
 \end{abstract}
 
-% (garbled Chinese comment)
 \keywords{Machine Learning \and Cyber Defense \and ICS}
 
 % 1. Introduction
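The abstract's factorization for continuous channels (a trend "scaffold" plus a refined residual, with deterministic reconstruction where appropriate) can be sketched for a single channel. This is an illustrative stand-in only, not the paper's implementation: a centered moving average replaces the causal Transformer rollout, and the names `decompose`/`recombine` are hypothetical.

```python
import numpy as np

def decompose(x, k=9):
    """Split one continuous channel into a smooth trend 'scaffold' plus a
    residual. A centered moving average stands in for the paper's causal
    Transformer trend module (illustrative only)."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    trend = np.convolve(padded, np.ones(k) / k, mode="valid")
    return trend, x - trend

def recombine(trend, residual):
    """Generation mirrors the split: scaffold + (diffusion-refined) residual.
    With the true residual this reconstructs the input deterministically."""
    return trend + residual

# A 96-step window, matching the paper's L = 96 setting.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 6.0, 96)) + 0.1 * rng.normal(size=96)
trend, residual = decompose(x)
assert trend.shape == x.shape
assert np.allclose(recombine(trend, residual), x)
```

In the paper the residual would come from the trend-conditioned DDPM rather than being the exact difference; the point here is only the decompose/recombine contract.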
@@ -102,7 +97,7 @@ Diffusion models exhibit good fit along this path: DDPM achieves high-quality sa
 
 Looking further into the mechanism complexity of ICS: its channel types are inherently mixed, containing both continuous process trajectories and discrete supervision/status variables, and discrete channels must be ``legal'' under operational constraints. The aforementioned progress in time series diffusion has mainly occurred in continuous spaces, but discrete diffusion has also developed systematic methods: D3PM improves sampling quality and likelihood through absorption/masking and structured transitions in discrete state spaces \citep{austin2023structureddenoisingdiffusionmodels}, subsequent masked diffusion provides stable reconstruction on categorical data in a more simplified form \citep{Lin_2020}, multinomial diffusion directly defines diffusion on a finite vocabulary through mechanisms such as argmax flows \citep{hoogeboom2021argmaxflowsmultinomialdiffusion}, and Diffusion-LM demonstrates an effective path for controllable text generation by imposing gradient constraints in continuous latent spaces \citep{li2022diffusionlmimprovescontrollabletext}. From the perspectives of protocols and finite-state machines, coverage-guided fuzz testing emphasizes the criticality of ``sequence legality and state coverage'' \citep{meng2025aflnetyearslatercoverageguided,godefroid2017learnfuzzmachinelearninginput,she2019neuzzefficientfuzzingneural}, echoing the concept of ``legality by construction'' in discrete diffusion: preferentially adopting absorption/masking diffusion on discrete channels, supplemented by type-aware conditioning and sampling constraints, to avoid semantic invalidity and marginal distortion caused by post hoc thresholding.
 
-From the perspective of high-level synthesis, the temporal structure is equally indispensable: ICS control often involves delay effects, phased operating conditions, and cross-channel coupling, requiring models to be able to characterize low-frequency, long-range dependencies while also overlaying multi-modal fine-grained fluctuations on them. The Transformer series has provided sufficient evidence in long-sequence time series tasks: Transformer-XL breaks through the fixed-length context limitation through a reusable memory mechanism and significantly enhances long-range dependency expression \citep{dai2019transformerxlattentivelanguagemodels}; Informer uses ProbSparse attention and efficient decoding to balance span and efficiency in long-sequence prediction \citep{zhou2021informerefficienttransformerlong}; Autoformer robustly models long-term seasonality and trends through autocorrelation and decomposition mechanisms \citep{wu2022autoformerdecompositiontransformersautocorrelation}; FEDformer further improves long-period prediction performance in frequency domain enhancement and decomposition \citep{zhou2022fedformerfrequencyenhanceddecomposed}; PatchTST enhances the stability and generalization of long-sequence multivariate prediction through local patch-based representation and channel-independent modeling \citep{2023}. Combining our previous positioning of diffusion, this chain of evidence points to a natural division of labor: using attention-based sequence models to first extract stable low-frequency trends/conditions (long-range skeletons), and then allowing diffusion to focus on margins and details in the residual space; meanwhile, discrete masking/absorbing diffusion is applied to supervised/pattern variables to ensure vocabulary legality by construction. This design not only inherits the advantages of time series diffusion in distribution fitting and uncertainty characterization \citep{rasul2021autoregressivedenoisingdiffusionmodels,tashiro2021csdiconditionalscorebaseddiffusion,wen2024diffstgprobabilisticspatiotemporalgraph,liu2023pristiconditionaldiffusionframework,kong2021diffwaveversatilediffusionmodel,11087622}, but also stabilizes the macroscopic temporal support through the long-range attention of Transformer, enabling the formation of an operational integrated generation pipeline under the mixed types and multi-scale dynamics of ICS.
+From the perspective of high-level synthesis, the temporal structure is equally indispensable: ICS control often involves delay effects, phased operating conditions, and cross-channel coupling, requiring models to be able to characterize low-frequency, long-range dependencies while also overlaying multi-faceted fine-grained fluctuations on them. The Transformer series has provided sufficient evidence in long-sequence time series tasks: Transformer-XL breaks through the fixed-length context limitation through a reusable memory mechanism and significantly enhances long-range dependency expression \citep{dai2019transformerxlattentivelanguagemodels}; Informer uses ProbSparse attention and efficient decoding to balance span and efficiency in long-sequence prediction \citep{zhou2021informerefficienttransformerlong}; Autoformer robustly models long-term seasonality and trends through autocorrelation and decomposition mechanisms \citep{wu2022autoformerdecompositiontransformersautocorrelation}; FEDformer further improves long-period prediction performance in frequency domain enhancement and decomposition \citep{zhou2022fedformerfrequencyenhanceddecomposed}; PatchTST enhances the stability and generalization of long-sequence multivariate prediction through local patch-based representation and channel-independent modeling \citep{2023}. Combining our previous positioning of diffusion, this chain of evidence points to a natural division of labor: using attention-based sequence models to first extract stable low-frequency trends/conditions (long-range skeletons), and then allowing diffusion to focus on margins and details in the residual space; meanwhile, discrete masking/absorbing diffusion is applied to supervised/pattern variables to ensure vocabulary legality by construction. This design not only inherits the advantages of time series diffusion in distribution fitting and uncertainty characterization \citep{rasul2021autoregressivedenoisingdiffusionmodels,tashiro2021csdiconditionalscorebaseddiffusion,wen2024diffstgprobabilisticspatiotemporalgraph,liu2023pristiconditionaldiffusionframework,kong2021diffwaveversatilediffusionmodel,11087622}, but also stabilizes the macroscopic temporal support through the long-range attention of Transformer, enabling the formation of an operational integrated generation pipeline under the mixed types and multi-scale dynamics of ICS.
 
 % 3. Methodology
 \section{Methodology}
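The "legality by construction" idea for discrete channels — absorbing/masking diffusion in the spirit of D3PM — can be made concrete with a toy sketch. Assumptions beyond the text: a linear mask schedule, a uniform stand-in for the learned denoiser, and the hypothetical names `absorb`/`reverse_step`; the real model would use learned categorical probabilities per position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Valid vocabulary for one discrete supervisory channel (illustrative
# labels; real vocabularies come from the dataset's discrete columns).
VOCAB = np.array([0, 1, 2])
MASK = len(VOCAB)  # absorbing [MASK] state, deliberately outside VOCAB

def absorb(x, t, T):
    """Forward corruption of the absorbing (masking) variant: each token
    independently falls into MASK with probability t/T (linear schedule)."""
    return np.where(rng.random(x.shape) < t / T, MASK, x)

def reverse_step(x_t, predict_probs):
    """One reverse step: only MASK positions are refilled, and samples are
    drawn strictly from VOCAB -- every emitted symbol is in-vocabulary by
    construction, with no post hoc thresholding."""
    out = x_t.copy()
    for i in np.flatnonzero(x_t == MASK):
        out[i] = rng.choice(VOCAB, p=predict_probs(i))
    return out

# Demo with a uniform stand-in for the learned denoiser.
x0 = rng.choice(VOCAB, size=16)
x_T = absorb(x0, t=10, T=10)                     # fully masked at t = T
uniform = lambda i: np.ones(len(VOCAB)) / len(VOCAB)
x_hat = reverse_step(x_T, uniform)
assert np.isin(x_hat, VOCAB).all()               # legal by construction
```

A continuous-space sampler would instead have to round or threshold its outputs back onto the vocabulary, which is exactly the marginal distortion the paragraph above warns against.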
@@ -411,12 +406,8 @@ Our main contributions are: (i) a causal Transformer trend module that provides
 We evaluated the approach on windows derived from the HAI Security Dataset and reported mixed-type, protocol-relevant metrics rather than a single aggregate score. Across seeds, the model achieves stable fidelity with mean KS = 0.3311 $\pm$ 0.0079 on continuous features, mean JSD = 0.0284 $\pm$ 0.0073 on discrete features, and mean absolute lag-1 autocorrelation difference 0.2684 $\pm$ 0.0027, indicating that Mask-DDPM preserves both marginal distributions and short-horizon dynamics while maintaining discrete legality.
 
 Overall, Mask-DDPM provides a reproducible foundation for generating shareable, semantically valid ICS feature sequences suitable for data augmentation, benchmarking, and downstream packet/trace reconstruction workflows. Building on this capability, a natural next step is to move from purely legal synthesis toward controllable scenario construction, including structured attack/violation injection under engineering constraints to support adversarial evaluation and more comprehensive security benchmarks.
-% (garbled Chinese comment)
 \bibliographystyle{unsrtnat}
 \bibliography{references}
 
 \end{document}
 
-
-
-
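The reported metrics (per-feature KS on continuous channels, JSD on discrete channels, and the absolute lag-1 autocorrelation difference) can be computed roughly as follows. The commit does not show the paper's exact estimators, so this is one plausible reading; note that scipy's `jensenshannon` returns a distance, squared here to obtain a divergence in $[0, 1]$ with base-2 logs.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import jensenshannon

def ks_continuous(real, synth):
    """Two-sample KS statistic for one continuous channel (lower is better)."""
    return ks_2samp(real, synth).statistic

def jsd_discrete(real, synth, vocab):
    """Jensen-Shannon divergence between empirical category frequencies
    of one discrete channel (scipy returns the distance; square it)."""
    p = np.array([(real == v).mean() for v in vocab])
    q = np.array([(synth == v).mean() for v in vocab])
    return jensenshannon(p, q, base=2) ** 2

def lag1_acf_diff(real, synth):
    """Absolute difference of lag-1 autocorrelations: a simple probe of
    short-horizon dynamics preservation."""
    acf = lambda x: np.corrcoef(x[:-1], x[1:])[0, 1]
    return abs(acf(real) - acf(synth))
```

The paper's headline numbers would then be means of these per-channel values across features and seeds.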
arxiv-style/references.bib
@@ -553,3 +553,55 @@ Reference for Benchmark
 year={2001},
 publisher={Elsevier}
 }
 
+@misc{austin2023structureddenoisingdiffusionmodels,
+  title={Structured Denoising Diffusion Models in Discrete State-Spaces},
+  author={Jacob Austin and Daniel D. Johnson and Jonathan Ho and Daniel Tarlow and Rianne van den Berg},
+  year={2023},
+  eprint={2107.03006},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG},
+  url={https://arxiv.org/abs/2107.03006}
+}
+
+@article{10.1145/1151659.1159928,
+  author = {Vishwanath, Kashi Venkatesh and Vahdat, Amin},
+  title = {Realistic and responsive network traffic generation},
+  year = {2006},
+  issue_date = {October 2006},
+  publisher = {Association for Computing Machinery},
+  address = {New York, NY, USA},
+  volume = {36},
+  number = {4},
+  issn = {0146-4833},
+  url = {https://doi.org/10.1145/1151659.1159928},
+  doi = {10.1145/1151659.1159928},
+  journal = {SIGCOMM Comput. Commun. Rev.},
+  month = aug,
+  pages = {111--122},
+  numpages = {12},
+  keywords = {burstiness, energy plot, generator, internet, modeling, structural model, traffic, wavelets}
+}
+
+@inproceedings{NEURIPS2020_4c5bcfec,
+  author = {Ho, Jonathan and Jain, Ajay and Abbeel, Pieter},
+  booktitle = {Advances in Neural Information Processing Systems},
+  editor = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
+  pages = {6840--6851},
+  publisher = {Curran Associates, Inc.},
+  title = {Denoising Diffusion Probabilistic Models},
+  url = {https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf},
+  volume = {33},
+  year = {2020}
+}
+
+@misc{song2021scorebasedgenerativemodelingstochastic,
+  title={Score-Based Generative Modeling through Stochastic Differential Equations},
+  author={Yang Song and Jascha Sohl-Dickstein and Diederik P. Kingma and Abhishek Kumar and Stefano Ermon and Ben Poole},
+  year={2021},
+  eprint={2011.13456},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG},
+  url={https://arxiv.org/abs/2011.13456}
+}
arxiv-style/template.tex (deleted)
@@ -1,214 +0,0 @@
-\documentclass{article}
-
-\usepackage{arxiv}
-
-\usepackage[utf8]{inputenc} % allow utf-8 input
-\usepackage[T1]{fontenc} % use 8-bit T1 fonts
-\usepackage{hyperref} % hyperlinks
-\usepackage{url} % simple URL typesetting
-\usepackage{booktabs} % professional-quality tables
-\usepackage{amsfonts} % blackboard math symbols
-\usepackage{nicefrac} % compact symbols for 1/2, etc.
-\usepackage{microtype} % microtypography
-\usepackage{cleveref} % smart cross-referencing
-\usepackage{lipsum} % Can be removed after putting your text content
-\usepackage{graphicx}
-\usepackage{natbib}
-\usepackage{doi}
-
-\title{A template for the \emph{arxiv} style}
-
-% Here you can change the date presented in the paper title
-%\date{September 9, 1985}
-% Or remove it
-%\date{}
-
-\newif\ifuniqueAffiliation
-% Comment to use multiple affiliations variant of author block
-\uniqueAffiliationtrue
-
-\ifuniqueAffiliation % Standard variant of author block
-\author{ \href{https://orcid.org/0000-0000-0000-0000}{\includegraphics[scale=0.06]{orcid.pdf}\hspace{1mm}David S.~Hippocampus}\thanks{Use footnote for providing further
-information about author (webpage, alternative
-address)---\emph{not} for acknowledging funding agencies.} \\
-Department of Computer Science\\
-Cranberry-Lemon University\\
-Pittsburgh, PA 15213 \\
-\texttt{hippo@cs.cranberry-lemon.edu} \\
-%% examples of more authors
-\And
-\href{https://orcid.org/0000-0000-0000-0000}{\includegraphics[scale=0.06]{orcid.pdf}\hspace{1mm}Elias D.~Striatum} \\
-Department of Electrical Engineering\\
-Mount-Sheikh University\\
-Santa Narimana, Levand \\
-\texttt{stariate@ee.mount-sheikh.edu} \\
-%% \AND
-%% Coauthor \\
-%% Affiliation \\
-%% Address \\
-%% \texttt{email} \\
-%% \And
-%% Coauthor \\
-%% Affiliation \\
-%% Address \\
-%% \texttt{email} \\
-%% \And
-%% Coauthor \\
-%% Affiliation \\
-%% Address \\
-%% \texttt{email} \\
-}
-\else
-% Multiple affiliations variant of author block
-\usepackage{authblk}
-\renewcommand\Authfont{\bfseries}
-\setlength{\affilsep}{0em}
-% box is needed for correct spacing with authblk
-\newbox{\orcid}\sbox{\orcid}{\includegraphics[scale=0.06]{orcid.pdf}}
-\author[1]{%
-\href{https://orcid.org/0000-0000-0000-0000}{\usebox{\orcid}\hspace{1mm}David S.~Hippocampus\thanks{\texttt{hippo@cs.cranberry-lemon.edu}}}%
-}
-\author[1,2]{%
-\href{https://orcid.org/0000-0000-0000-0000}{\usebox{\orcid}\hspace{1mm}Elias D.~Striatum\thanks{\texttt{stariate@ee.mount-sheikh.edu}}}%
-}
-\affil[1]{Department of Computer Science, Cranberry-Lemon University, Pittsburgh, PA 15213}
-\affil[2]{Department of Electrical Engineering, Mount-Sheikh University, Santa Narimana, Levand}
-\fi
-
-% Uncomment to override the `A preprint' in the header
-%\renewcommand{\headeright}{Technical Report}
-%\renewcommand{\undertitle}{Technical Report}
-\renewcommand{\shorttitle}{\textit{arXiv} Template}
-
-%%% Add PDF metadata to help others organize their library
-%%% Once the PDF is generated, you can check the metadata with
-%%% $ pdfinfo template.pdf
-\hypersetup{
-pdftitle={A template for the arxiv style},
-pdfsubject={q-bio.NC, q-bio.QM},
-pdfauthor={David S.~Hippocampus, Elias D.~Striatum},
-pdfkeywords={First keyword, Second keyword, More},
-}
-
-\begin{document}
-\maketitle
-
-\begin{abstract}
-\lipsum[1]
-\end{abstract}
-
-
-% keywords can be removed
-\keywords{First keyword \and Second keyword \and More}
-
-
-\section{Introduction}
-\lipsum[2]
-\lipsum[3]
-
-
-\section{Headings: first level}
-\label{sec:headings}
-
-\lipsum[4] See Section \ref{sec:headings}.
-
-\subsection{Headings: second level}
-\lipsum[5]
-\begin{equation}
-\xi _{ij}(t)=P(x_{t}=i,x_{t+1}=j|y,v,w;\theta)= {\frac {\alpha _{i}(t)a^{w_t}_{ij}\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}{\sum _{i=1}^{N} \sum _{j=1}^{N} \alpha _{i}(t)a^{w_t}_{ij}\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}}
-\end{equation}
-
-\subsubsection{Headings: third level}
-\lipsum[6]
-
-\paragraph{Paragraph}
-\lipsum[7]
-
-
-\section{Examples of citations, figures, tables, references}
-\label{sec:others}
-
-\subsection{Citations}
-Citations use \verb+natbib+. The documentation may be found at
-\begin{center}
-\url{http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf}
-\end{center}
-
-Here is an example usage of the two main commands (\verb+citet+ and \verb+citep+): Some people thought a thing \citep{kour2014real, keshet2016prediction} but other people thought something else \citep{kour2014fast}. Many people have speculated that if we knew exactly why \citet{kour2014fast} thought this\dots
-
-\subsection{Figures}
-\lipsum[10]
-See Figure \ref{fig:fig1}. Here is how you add footnotes. \footnote{Sample of the first footnote.}
-\lipsum[11]
-
-\begin{figure}
-\centering
-\fbox{\rule[-.5cm]{4cm}{4cm} \rule[-.5cm]{4cm}{0cm}}
-\caption{Sample figure caption.}
-\label{fig:fig1}
-\end{figure}
-
-\subsection{Tables}
-See awesome Table~\ref{tab:table}.
-
-The documentation for \verb+booktabs+ (`Publication quality tables in LaTeX') is available from:
-\begin{center}
-\url{https://www.ctan.org/pkg/booktabs}
-\end{center}
-
-
-\begin{table}
-\caption{Sample table title}
-\centering
-\begin{tabular}{lll}
-\toprule
-\multicolumn{2}{c}{Part} \\
-\cmidrule(r){1-2}
-Name & Description & Size ($\mu$m) \\
-\midrule
-Dendrite & Input terminal & $\sim$100 \\
-Axon & Output terminal & $\sim$10 \\
-Soma & Cell body & up to $10^6$ \\
-\bottomrule
-\end{tabular}
-\label{tab:table}
-\end{table}
-
-\subsection{Lists}
-\begin{itemize}
-\item Lorem ipsum dolor sit amet
-\item consectetur adipiscing elit.
-\item Aliquam dignissim blandit est, in dictum tortor gravida eget. In ac rutrum magna.
-\end{itemize}
-
-
-\bibliographystyle{unsrtnat}
-\bibliography{references} %%% Uncomment this line and comment out the ``thebibliography'' section below to use the external .bib file (using bibtex) .
-
-
-%%% Uncomment this section and comment out the \bibliography{references} line above to use inline references.
-% \begin{thebibliography}{1}
-
-% \bibitem{kour2014real}
-% George Kour and Raid Saabne.
-% \newblock Real-time segmentation of on-line handwritten arabic script.
-% \newblock In {\em Frontiers in Handwriting Recognition (ICFHR), 2014 14th
-% International Conference on}, pages 417--422. IEEE, 2014.
-
-% \bibitem{kour2014fast}
-% George Kour and Raid Saabne.
-% \newblock Fast classification of handwritten on-line arabic characters.
-% \newblock In {\em Soft Computing and Pattern Recognition (SoCPaR), 2014 6th
-% International Conference of}, pages 312--318. IEEE, 2014.
-
-% \bibitem{keshet2016prediction}
-% Keshet, Renato, Alina Maor, and George Kour.
-% \newblock Prediction-Based, Prioritized Market-Share Insight Extraction.
-% \newblock In {\em Advanced Data Mining and Applications (ADMA), 2016 12th International
-% Conference of}, pages 81--94,2016.
-
-% \end{thebibliography}
-
-
-\end{document}
texput.log (new file, 21 lines)
@@ -0,0 +1,21 @@
+This is pdfTeX, Version 3.141592653-2.6-1.40.28 (MiKTeX 25.12) (preloaded format=pdflatex 2026.4.14) 14 APR 2026 14:30
+entering extended mode
+ restricted \write18 enabled.
+ %&-line parsing enabled.
+**
+
+! Emergency stop.
+<*>
+
+End of file on the terminal!
+
+
+Here is how much of TeX's memory you used:
+ 2 strings out of 467871
+ 14 string characters out of 5435199
+ 433733 words of memory out of 5000000
+ 28986 multiletter control sequences out of 15000+600000
+ 627721 words of font info for 40 fonts, out of 8000000 for 9000
+ 1141 hyphenation exceptions out of 8191
+ 0i,0n,0p,1b,6s stack positions out of 10000i,1000n,20000p,200000b,200000s
+! ==> Fatal error occurred, no output PDF file produced!