diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/U-AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.txt b/papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.bib
similarity index 100%
rename from papers/Topic2 Protocol-aware generation & fuzzing/U-AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.txt
rename to papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.bib
diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.md b/papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.md
new file mode 100644
index 0000000..051aa8c
--- /dev/null
+++ b/papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.md
@@ -0,0 +1,47 @@
+# AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing

**Question 1**: Summarize the paper — research background and problem, objective, methods, main results, and conclusions — in 150-300 words, using the paper's own terminology and concepts.

Protocol implementations are stateful and message-driven: the same message can yield different responses in different internal states, which makes them hard to test effectively with traditional coverage-guided greybox fuzzing (e.g., AFL). The paper provides an extended technical discussion and a large-scale empirical evaluation of AFLNet (the first code- and state-coverage-guided protocol fuzzer) and reflects on its impact over the past five years. Methodologically, AFLNet uses a message sequence as a seed, builds the initial corpus via pcap record/replay, learns the implemented protocol state machine (IPSM) online, identifies states via response codes, and tracks #fuzz/#selected/#paths statistics. Seed selection interleaves AFL queue order with state heuristics to steer toward progressive states; each sequence is split into M1/M2/M3, with protocol-aware and byte-level mutation applied to M2; branch coverage and state-transition coverage are maintained in the same bitmap to decide what is interesting. Results: state feedback alone significantly outperforms black-box fuzzing on some subjects; adding state feedback improves state coverage by 35.67× on average, while the gain in code coverage is mostly insignificant; the interleaved seed-selection strategy is the most robust on combined code/state coverage. Conclusion: state feedback substantially expands exploration of the protocol state space, but the definition of "state" and fuzzing throughput remain key challenges.

**Question 2**: Extract the paper's abstract verbatim; it normally appears after "Abstract" and before the Introduction.

Abstract—Protocol implementations are stateful which makes them difficult to test: Sending the same test input message twice might yield a different response every time. Our proposal to consider a sequence of messages as a seed for coverage-directed greybox fuzzing, to associate each message with the corresponding protocol state, and to maximize the coverage of both the state space and the code was first published in 2020 in a short tool demonstration paper. AFLNet was the first code- and state-coverage-guided protocol fuzzer; it used the response code as an indicator of the current protocol state. Over the past five years, the tool paper has gathered hundreds of citations, the code repository was forked almost 200 times and has seen over thirty pull requests from practitioners and researchers, and our initial proposal has been improved upon in many significant ways. In this paper, we first provide an extended discussion and a full empirical evaluation of the technical contributions of AFLNet and then reflect on the impact that our approach and our tool had in the past five years, on both the research and the practice of protocol fuzzing.
+ +**第三个问题**:请列出论文的全部作者,按照此格式:`作者1, 作者2, 作者3`。 + +Ruijie Meng, Van-Thuan Pham, Marcel Böehme, Abhik Roychoudhury + +**第四个问题**:请直接告诉我这篇论文发表在哪个会议或期刊,请不要推理或提供额外信息。 + +文段未给出会议或期刊信息。 + +**第五个问题**:请详细描述这篇论文主要解决的核心问题,并用简洁的语言概述。 + +核心问题是:如何将面向“单输入/近似无状态程序”的coverage-guided greybox fuzzing扩展到网络协议这种stateful、需要message sequence驱动且状态空间巨大的目标上,并同时兼顾code coverage与state space coverage。传统做法要么靠手工协议模型的stateful blackbox fuzzing(依赖不完备的状态/数据模型,且不保留“有趣”用例继续进化),要么把消息序列拼成文件交给AFL(无法聚焦关键消息、易生成大量无效序列)。论文围绕AFLNet提出并系统评估的一套解法:以消息序列为seed、在线推断IPSM并把状态反馈纳入引导与“interesting”判定,从而更系统地探索协议实现的状态与代码。简洁概述:让灰盒模糊测试“看见并利用协议状态”,从而可有效fuzz stateful protocols。 + +**第六个问题**:请告诉我这篇论文提出了哪些方法,请用最简洁的方式概括每个方法的核心思路。 + +(1) 消息序列作为seed的SCGF:把sequence of messages而非单文件输入作为进化种子,适配stateful server。(2) 录制/回放驱动(pcap→parse→send):从真实流量提取初始语料并可重复回放以执行fuzzing迭代。(3) 轻量协议学习IPSM(implemented protocol state machine):从response序列抽取state transitions,在线增量构建/更新状态机并维护#fuzz/#selected/#paths统计。(4) 面向progressive states的引导:按“盲点/新近/高产出”启发式选state,再在到达该state的子语料上做AFL式优先级选序列。(5) 交织式seed-selection:在coverage plateau时切换到state-heuristic重策略,否则按AFL队列顺序,兼顾吞吐与导向。(6) 三段式序列变异M1/M2/M3:固定前缀M1保证到达目标state,只在候选段M2做变异并继续执行后缀M3以观察传播效应。(7) 协议感知变异算子:对消息做replacement/insertion/duplication/deletion并与byte-level mutation堆叠。(8) 统一bitmap记录code+state覆盖:为state transition预留bitmap区域(SHIFT_SIZE),用分支与状态转移共同定义interesting seeds。 + +**第七个问题**:请告诉我这篇论文所使用的数据集,包括数据集的名称和来源。 + +基准为ProFuzzBench(Natella & Pham, ISSTA 2021工具/基准论文:ProFuzzBench: A benchmark for stateful protocol fuzzing),论文在其默认集成的网络协议实现(如Bftpd、DNSmasq、OpenSSH、TinyDTLS、Live555、ProFTPD、Pure-FTPd、Exim、DCMTK、Kamailio、forked-daapd、lightFTP等)上进行评测。 + +**第八个问题**:请列举这篇论文评估方法的所有指标,并简要说明这些指标的作用。 + +(1) Code coverage:以branch coverage(分支覆盖数)衡量探索到的代码范围,“未覆盖代码无法触发漏洞”。(2) State space coverage:以IPSM中构建的state transitions数量(以及状态数量/覆盖)衡量探索到的协议状态空间。(3) Vargha-Delaney effect size(Â12):衡量两组独立实验结果的优势概率/效应量,用于判断差异是否具有“显著优势”(文中以Â12≥0.71或≤0.29作为显著门槛)。(4) 时间维度覆盖趋势:branch covered over time(24小时曲线)用于对比不同变体达到同等覆盖所需时间(如提到“约6×/4×更快达到相同分支数”)。 + 
**Question 9**: Summarize the experimental results, including concrete numbers and conclusions.

RQ1 (state feedback only): AFLNetDARK (state feedback only) significantly outperforms AFLNetBLACK (no code/state feedback) in code coverage on 6 of the 12 ProFuzzBench subjects: Bftpd, DNSmasq, Kamailio, lightFTP, ProFTPD, Pure-FTPd; on OpenSSH and TinyDTLS it reaches the same branch count as BLACK about 6× and 4× faster, respectively; on subjects with very few states (e.g., DCMTK ends with only 3 states) the gain is negligible. Conclusion: when there are enough states, state feedback is an effective guide where code instrumentation is unavailable. RQ2 (state+code vs. code only, Table 1): AFLNetQUEUE improves average branch coverage over AFLNetCODE by only +0.01%, but average state coverage by +35.67×; e.g., OpenSSH state count rises from 93.5 to 30,480.9 (+325.00×, Â12=1.00), DNSmasq from 282.5 to 27,364.0 (+95.85×, Â12=1.00), Bftpd from 170.5 to 334.0 (+0.96×, Â12=1.00). Conclusion: additional state feedback greatly expands state-space exploration but brings no overall significant gain in code coverage. RQ3 (seed-selection strategies, Tables 2/3): the interleaved strategy AFLNet performs best overall; its average branch coverage is -0.52% relative to AFLNetQUEUE but +1.65% relative to AFLNetIPSM, while its state coverage is +5.77% over AFLNetQUEUE and +12.77% over AFLNetIPSM. Conclusion: interleaved seed selection is the most robust overall, and state coverage does not simply correlate with code coverage.

**Question 10**: Clearly describe the work, listing the motivation, the contributions, and the main innovations.

Motivation: network protocol implementations are stateful reactive systems whose inputs are message sequences rather than single files; AFL-style coverage-guided fuzzing is unaware of state and sequence structure, while stateful blackbox fuzzing depends on hand-written models and does not evolve retained seeds, so neither explores the state space deeply enough to find implementation deviations, hidden transitions, and bugs.

Contributions: (1) A systematic account of AFLNet: bringing message sequences into the greybox evolutionary framework and using state coverage together with code coverage as feedback. (2) Online lightweight IPSM learning and guidance: building the implemented protocol state machine from response codes and driving state selection with #fuzz/#selected/#paths statistics. (3) Target-state-oriented sequence splitting and mutation (M1/M2/M3): guaranteeing reachability while focusing mutation on the critical region. (4) A unified bitmap encoding branch and state-transition coverage: reserving bitmap space for state transitions so that the "interesting" verdict considers both state and code. (5) A large-scale evaluation five years on, with configuration guidance: 24h × 10 runs on ProFuzzBench, separately evaluating state feedback, state+code synergy, and seed-selection strategies, reporting Â12 effect sizes.

Main innovation: using state feedback plus online state-machine learning to extend greybox fuzzing from stateless programs to exploring a protocol implementation's state space, with reproducible ablation-style empirical conclusions and best-practice recommendations.
\ No newline at end of file
diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/U-AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.pdf b/papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.pdf
similarity index 100%
rename from papers/Topic2 Protocol-aware generation & fuzzing/U-AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.pdf
rename to papers/Topic2 Protocol-aware generation & fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing/AFLNet Five Years Later On Coverage-Guided Protocol Fuzzing.pdf
diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/U-Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.bib b/papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.bib
similarity index 100%
rename from papers/Topic2 Protocol-aware generation & fuzzing/U-Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.bib
rename to papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.bib
diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.md b/papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.md
new file mode 100644
index 0000000..8a7653c
--- /dev/null
+++ b/papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.md
@@ -0,0 +1,47 @@
+# Learn&Fuzz Machine Learning for Input Fuzzing

**Question 1**: Summarize the paper — research background and problem, objective, methods, main results, and conclusions — in 150-300 words, using the paper's own terminology and concepts.

The paper targets the key bottleneck of grammar-based fuzzing: input grammars must be written by hand, which is time-consuming and error-prone, yet complex structured formats such as PDF depend on this kind of fuzzing the most. The goal is to use neural-network-based statistical learning to automatically produce a grammar/generative model usable for input fuzzing, and to resolve the learn&fuzz tension (learning favors well-formed inputs, while fuzzing must break structure to reach error-handling code and unexpected paths). Methodologically, about 63,000 non-binary PDF objects are used for unsupervised training of a seq2seq RNN (LSTM) that learns a character-level probability distribution; three sampling strategies — NoSample/Sample/SampleSpace — are proposed, plus SampleFuzz, an algorithm that uses the learned distribution to guide where to fuzz. Experiments target the Microsoft Edge PDF parser, evaluated via instruction coverage, pass rate, and bugs detected under AppVerifier: SampleSpace reaches a 97% pass rate at 50 epochs; Sample-40e achieves the best overall coverage; among learn+fuzz combinations, SampleFuzz achieves the highest coverage at 567,634 instructions with a 68.24% pass rate, beating several random-fuzzing baselines, and a longer experiment found (and led to a fix for) a stack-overflow bug. Conclusion: statistical generative models can learn input structure automatically and use probability information to fuzz more intelligently and increase coverage.

**Question 2**: Extract the paper's abstract verbatim; it normally appears after "Abstract" and before the Introduction.

Abstract. Fuzzing consists of repeatedly testing an application with modified, or fuzzed, inputs with the goal of finding security vulnerabilities in input-parsing code. In this paper, we show how to automate the generation of an input grammar suitable for input fuzzing using sample inputs and neural-network-based statistical machine-learning techniques. We present a detailed case study with a complex input format, namely PDF, and a large complex security-critical parser for this format, namely, the PDF parser embedded in Microsoft’s new Edge browser. We discuss (and measure) the tension between conflicting learning and fuzzing goals: learning wants to capture the structure of well-formed inputs, while fuzzing wants to break that structure in order to cover unexpected code paths and find bugs. We also present a new algorithm for this learn&fuzz challenge which uses a learnt input probability distribution to intelligently guide where to fuzz inputs.
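The SampleFuzz idea summarized above — sample from the learned distribution, but inject the least likely character where the model is most confident — can be sketched as follows. This is a toy illustration under stated assumptions: `dist_fn` stands in for the trained seq2seq model, and the parameter names (`t_fuzz`, `p_threshold`) are illustrative, not the paper's exact interface.

```python
import random

def sample_fuzz(dist_fn, length, t_fuzz=0.9, p_threshold=0.9, seed=0):
    """Generate `length` characters from a learned character distribution.
    At each position: normally sample from the distribution; but if a coin
    toss fires (p_fuzz > t_fuzz) AND the model is highly confident in its
    best character, emit the lowest-probability character instead, planting
    an anomaly exactly where it is least expected."""
    rng = random.Random(seed)
    out = []
    for _ in range(length):
        dist = dist_fn("".join(out))  # dict: char -> probability
        chars, probs = zip(*dist.items())
        best = max(dist, key=dist.get)
        p_fuzz = rng.random()  # coin toss deciding whether to fuzz here
        if p_fuzz > t_fuzz and dist[best] > p_threshold:
            # High-confidence position: inject the least likely character.
            out.append(min(dist, key=dist.get))
        else:
            out.append(rng.choices(chars, weights=probs)[0])
    return "".join(out)
```

In the paper's setting the distribution comes from the LSTM decoder; here any callable returning a character distribution works, which makes the trade-off between well-formedness and anomaly injection easy to experiment with.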
+ +**第三个问题**:请列出论文的全部作者,按照此格式:`作者1, 作者2, 作者3`。 + +Patrice Godefroid, Hila Peleg, Rishabh Singh + +**第四个问题**:请直接告诉我这篇论文发表在哪个会议或期刊,请不要推理或提供额外信息。 + +arXiv:1701.07232v1 + +**第五个问题**:请详细描述这篇论文主要解决的核心问题,并用简洁的语言概述。 + +论文要解决的核心问题是:如何在无需人工编写格式规范的前提下,从sample inputs自动学习出“足够像grammar”的生成式输入模型,用于grammar-based fuzzing复杂结构化输入(以PDF为代表),并进一步在“生成尽量well-formed以深入解析流程”和“刻意引入ill-formed片段以触达异常/错误处理代码”之间取得可控平衡。传统黑盒/白盒fuzz对复杂文本结构格式不如grammar-based有效,但后者依赖手工grammar;已有grammar/automata学习方法对PDF对象这种“相对扁平但token/键值组合极多”的格式并不理想。本文用seq2seq RNN学习字符序列的概率分布作为统计grammar,并利用该分布在高置信位置定点“反向扰动”以实现learn&fuzz。简洁概述:用神经网络从样本自动学输入结构,并用学到的概率分布指导更有效的结构化fuzz。 + +**第六个问题**:请告诉我这篇论文提出了哪些方法,请用最简洁的方式概括每个方法的核心思路。 + +(1) seq2seq RNN统计输入建模:把PDF object当作字符序列,训练encoder-decoder(LSTM)学习p(x_t|x_p_t)且掷币触发(p_fuzz>t_fuzz),则用分布中最低概率字符替换(argmin),在“最不该出错的位置”注入异常以诱导解析器走入错误处理/意外路径。 (7) PDF对象嵌入整文件的host-append机制:将新对象按PDF增量更新规则附加到well-formed host(更新xref与trailer)以便对Edge PDF parser进行端到端测试。 + +**第七个问题**:请告诉我这篇论文所使用的数据集,包括数据集的名称和来源。 + +(1) PDF训练语料:从534个PDF文件中抽取约63,000个non-binary PDF objects;这534个PDF由Windows fuzzing team提供,且是对更大PDF集合做seed minimization后的结果;更大集合来源包括公开Web与历史fuzz用PDF。(论文未给该数据集专有名称)(2) 目标程序/基准:Microsoft Edge browser内嵌的Edge PDF parser(通过Windows团队提供的单进程test-driver执行)。(3) Host PDF集合:从上述534个PDF中选取最小的3个作为host1/host2/host3(约26Kb/33Kb/16Kb)用于将生成对象附加成完整PDF。 + +**第八个问题**:请列举这篇论文评估方法的所有指标,并简要说明这些指标的作用。 + +(1) Coverage(instruction coverage):统计执行过的唯一指令集合(dll-name, dll-offset标识),集合并集衡量一组测试的覆盖范围,是fuzzing有效性的核心指标。 (2) Pass rate:通过grep解析日志中是否有parsing-error来判定pass/fail,pass表示被解析器视为well-formed;主要用来估计学习质量与“结构保持程度”。 (3) Bugs:在AppVerifier监控下捕获内存破坏类缺陷(如buffer overflow、异常递归导致的stack overflow等),衡量真实漏洞发现能力。 + +**第九个问题**:请总结这篇论文实验的表现,包含具体的数值表现和实验结论。 + +基线覆盖(host与baseline):三份host单独覆盖约353,327(host1)到457,464(host2)条唯一指令,三者并集host123为494,652;将1,000个真实对象附加到host后,baseline123覆盖为553,873,且所有host自身pass rate为100%。学习质量(pass rate):Sample在10 epochs时pass rate已>70%;SampleSpace整体更高,50 epochs最高达97% pass 
rate。覆盖表现(学习不加fuzz):不同host对覆盖影响明显;总体覆盖最佳为Sample-40e(host123场景下胜出),且Sample-40e的覆盖集合几乎是其他集合的超集(相对SampleSpace-40e仅缺1,680条指令)。学习+fuzz对比(30,000个PDF/组,图8):SampleFuzz覆盖567,634、pass rate 68.24%为最高覆盖;次优Sample+Random覆盖566,964、pass rate 41.81%;Sample-10K覆盖565,590、pass rate 78.92%;baseline+Random覆盖564,195、pass rate 44.05%;SampleSpace+Random覆盖563,930、pass rate 36.97%。结论:存在coverage与pass rate张力,随机fuzz提升覆盖但显著降低通过率;SampleFuzz在约65%–70%通过率附近取得更佳折中并带来最高覆盖。漏洞:常规实验未发现bug(目标已被长期fuzz);但更长实验(Sample+Random,100,000对象/300,000 PDF,约5天)发现并修复一个stack-overflow bug。 + +**第十个问题**:请清晰地描述论文所作的工作,分别列举出动机和贡献点以及主要创新之处。 + +动机:grammar-based fuzzing对复杂结构化输入最有效,但手工编写input grammar“劳累/耗时/易错”,限制了在真实大型解析器(如浏览器PDF解析)上的应用;同时学习生成“规范输入”与fuzzing“破坏结构找漏洞”目标冲突,需要可控融合。 + +贡献点:(1) 首次将neural-network-based statistical learning(seq2seq RNN/LSTM)用于从样本自动学习可生成的输入模型,以自动化grammar生成用于fuzzing。 (2) 针对PDF这种超复杂格式,明确限定范围为non-binary PDF objects,并给出端到端工程方案(把生成对象按PDF增量更新规则附加到host形成完整PDF)以真实驱动Edge PDF parser评测。 (3) 系统分析并量化learn&fuzz张力:用pass rate刻画学习质量、用instruction coverage刻画fuzz有效性,展示两者此消彼长。 (4) 提出SampleFuzz:利用learned input probability distribution在模型高置信位置用低概率字符替换,实现“智能选择where to fuzz”,在覆盖与通过率之间取得更优折中并获得最高覆盖。 + +主要创新:把“概率语言模型”的不确定性/置信度直接转化为fuzzing决策信号(高置信处注入反常),从而在保持足够结构可深入解析的同时,更系统地触达错误处理与意外路径。 \ No newline at end of file diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/U-Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.pdf b/papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.pdf similarity index 100% rename from papers/Topic2 Protocol-aware generation & fuzzing/U-Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.pdf rename to papers/Topic2 Protocol-aware generation & fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing/Learn&Fuzz Machine Learning for Input Fuzzing.pdf diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/U-NEUZZ Efficient 
Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.bib b/papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.bib similarity index 100% rename from papers/Topic2 Protocol-aware generation & fuzzing/U-NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.bib rename to papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.bib diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.md b/papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.md new file mode 100644 index 0000000..ee79fc7 --- /dev/null +++ b/papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.md @@ -0,0 +1,60 @@ +# NEUZZ Efficient Fuzzing with Neural Program Smoothing + + + +**第一个问题**:请对论文的内容进行摘要总结,包含研究背景与问题、研究目的、方法、主要结果和结论,字数要求在150-300字之间,使用论文中的术语和概念。 + +论文指出传统graybox fuzzing多依赖evolutionary guidance,易陷入随机变异的低效序列,难以触发深层分支与hard-to-trigger bugs;而直接用gradient-guided optimization又会被真实程序的discontinuities、plateaus与ridges卡住。研究目的在于通过program smoothing构造可微的surrogate function,使梯度方法可用于覆盖导向fuzzing。方法上,NEUZZ用feed-forward NN学习输入到edge bitmap(分支/边覆盖)的平滑近似,采用binary cross-entropy训练并做label降维(合并总是共现的边);再基于∇x f_i(θ,x)选取top-k高梯度字节并按梯度符号生成变异,同时通过coverage-based filtration进行incremental learning以避免遗忘。主要结果:在10个真实程序上,NEUZZ在24小时内相对10个SOTA fuzzers达到最高edge coverage(最高达3×,文中亦报告对AFL最高约10×);发现31个此前未知bug(含2个CVE),并在LAVA-M与DARPA CGC上分别找到更多注入/真实漏洞。结论是:神经网络程序平滑+梯度引导变异能显著提升fuzz效率与覆盖,并具备良好可扩展性。 + 
**Question 2**: Extract the paper's abstract verbatim; it normally appears after "Abstract" and before the Introduction.

Abstract—Fuzzing has become the de facto standard technique for finding software vulnerabilities. However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs. Most popular fuzzers use evolutionary guidance to generate inputs that can trigger different bugs. Such evolutionary algorithms, while fast and simple to implement, often get stuck in fruitless sequences of random mutations. Gradient-guided optimization presents a promising alternative to evolutionary guidance. Gradient-guided techniques have been shown to significantly outperform evolutionary algorithms at solving high-dimensional structured optimization problems in domains like machine learning by efficiently utilizing gradients or higher-order derivatives of the underlying function. However, gradient-guided approaches are not directly applicable to fuzzing as real-world program behaviors contain many discontinuities, plateaus, and ridges where the gradient-based methods often get stuck. We observe that this problem can be addressed by creating a smooth surrogate function approximating the target program’s discrete branching behavior. In this paper, we propose a novel program smoothing technique using surrogate neural network models that can incrementally learn smooth approximations of a complex, real-world program’s branching behaviors. We further demonstrate that such neural network models can be used together with gradient-guided input generation schemes to significantly increase the efficiency of the fuzzing process. Our extensive evaluations demonstrate that NEUZZ significantly outperforms 10 state-of-the-art graybox fuzzers on 10 popular real-world programs both at finding new bugs and achieving higher edge coverage. NEUZZ found 31 previously unknown bugs (including two CVEs) that other fuzzers failed to find in 10 real-world programs and achieved 3X more edge coverage than all of the tested graybox fuzzers over 24 hour runs. Furthermore, NEUZZ also outperformed existing fuzzers on both LAVA-M and DARPA CGC bug datasets.

**Question 3**: List all authors of the paper in this format: `Author1, Author2, Author3`.

Dongdong She, Kexin Pei, Dave Epstein, Junfeng Yang, Baishakhi Ray, Suman Jana

**Question 4**: State directly which conference or journal this paper was published in, without reasoning or extra information.

arXiv:1807.05620v4

**Question 5**: Describe in detail the core problem this paper addresses, then summarize it concisely.

The core problem: coverage-guided fuzzing is essentially an optimization problem (maximize new edge coverage/bugs), but a real program's branching behavior is a highly discrete, non-smooth function of its input, so both mainstream approaches hit bottlenecks — evolutionary algorithms cannot exploit gradient structure and stall inefficiently, while gradient-guided optimization, though efficient, gets stuck where gradients are unavailable or unreliable at the program's discontinuities, plateaus, and ridges. NEUZZ therefore asks: without expensive whitebox smoothing such as symbolic execution, how can program branching behavior be turned into a differentiable, gradient-friendly approximation whose gradients genuinely steer mutations toward uncovered edges and hidden bugs? In short: smooth program branching with a differentiable neural surrogate model so that gradient-guided mutation becomes usable, and more efficient, on real programs.

**Question 6**: What techniques does the paper propose? Summarize the core idea of each as concisely as possible.

(1) Neural program smoothing: train a feed-forward surrogate NN that maps input byte sequences to a smooth approximation of the edge bitmap, making the objective differentiable. (2) Edge-label dimensionality reduction: keep only edges seen in the training set and merge edges that always co-occur to mitigate multicollinearity, shrinking the output dimension (from about 65,536 to ~4,000). (3) Gradient-guided mutation (Algorithm 1): compute ∇x f_i(θ,x) for a selected edge output neuron, take the top-k bytes by gradient magnitude as critical bytes, and increment/decrement them according to the gradient sign, clipping to [0,255]. (4) Exponentially growing mutation targets: start from a few bytes and enlarge the number of mutated bytes each round, covering more of the input space while keeping each search effective. (5) Incremental learning with coverage-based filtration: add inputs that trigger new edges, keep only a coverage-preserving summary of old data to bound its size, and retrain iteratively to improve the surrogate without catastrophic forgetting. (6) Magic-check assistance (LAVA/CGC setting): instrument magic-byte checks with a custom LLVM pass; use NN gradients to locate the critical bytes, then locally enumerate the adjacent bytes (4×256) to trigger multi-byte conditions efficiently.

**Question 7**: What datasets does the paper use? Give their names and sources.

(1) Ten real-world programs: binutils-2.30 (readelf -a, nm -C, objdump -D, size, strip), harfbuzz-1.7.6, libjpeg-9c, mupdf-1.12.0, libxml2-2.9.7, zlib-1.2.11 (listed in Table IIb; sourced from the corresponding open-source projects/versions). (2) LAVA-M bug dataset: a subset of the LAVA project (base64, md5sum, uniq, who, with injected magic-number-triggered bugs; cited as [28] LAVA). (3) DARPA CGC dataset: Cyber Grand Challenge binaries/services (50 binaries randomly selected for the evaluation; cited as [26], the CGC repository). (4) Training data: AFL-2.52b is run for 1 hour to produce the initial seed corpus and edge-coverage labels for training the NN (about 2K training inputs per program on average).

**Question 8**: List all evaluation metrics used in the paper and briefly explain their purpose.

(1) Bugs found / crashes: the number of real bugs and crashes, measuring vulnerability-finding ability; memory issues are deduplicated by AddressSanitizer stack traces, and integer overflows are confirmed by manual analysis plus UBSan. (2) Edge coverage (new control-flow edges): the number of new edges per AFL's edge-coverage report, the core effectiveness metric for coverage-guided fuzzing. (3) Coverage growth within a time budget: coverage over 24h (real programs), 5h (LAVA-M), and 6h (CGC), capturing how quickly new edges are reached. (4) Training overhead/time (NEUZZ train(s), training time in seconds): the cost of the learning component (e.g., when comparing against an RNN fuzzer). (5) Coverage under a fixed mutation budget (e.g., 1M mutations): comparing methods/models with the mutation count controlled, removing training-time differences. (6) NN prediction accuracy (~95% test accuracy on average): the surrogate's quality at predicting branching behavior, which indirectly determines how useful its gradients are.

**Question 9**: Summarize the experimental results, including concrete numbers and conclusions.

Real programs (24h): NEUZZ achieves the highest edge coverage on all 10 programs (Table VI examples: readelf -a 4,942; harfbuzz 6,081; nm -C 2,056; libxml 1,596; mupdf 487; zlib 376), often adding more than 1,000 new edges within the first hour; the paper reports edge-coverage advantages over AFL of roughly 6×, 1.5×, 9×, 1.8×, 3.7×, 1.9×, 10×, 1.3×, and 3× on 9 of the 10 programs, and roughly 4.2×, 1.3×, 7×, 1.2×, and 2.5× over the second-best fuzzer. Real bugs (Table III): in the six-fuzzer comparison NEUZZ finds 60 bugs in total (AFL 29, AFLFast 27, VUzzer 14, KleeFL 26; the AFL-laf-intel total is unclear from the summarized table, which breaks results down per project/type); NEUZZ covers 5 bug classes and received 2 CVEs: CVE-2018-19931/19932. LAVA-M (5h): NEUZZ finds 48 bugs in base64, 60 in md5sum, 29 in uniq, and 1,582 in who (Table IV), overall beating Angora and the other baselines. CGC (6h, 50 binaries): NEUZZ triggers crashes in 31 vulnerable binaries versus 21 for AFL and 25 for Driller, covering everything AFL/Driller found plus 6 more (Table V). Versus an RNN fuzzer (1M mutations, Table VII): about 8.4×/4.2×/6.7×/3.7× more edge coverage on readelf/libjpeg/libxml/mupdf, with roughly 20× lower training cost. Model ablation (Table VIII): a linear model lags clearly; incremental learning helps further (e.g., readelf -a: 1,723 → 1,800 → 2,020). Conclusion: neural smoothing plus gradient-directed mutation significantly beats many state-of-the-art fuzzers on both coverage and bug finding, with manageable training/execution overhead and scalability to large programs.

**Question 10**: Clearly describe the work, listing the motivation, the contributions, and the main innovations.

Motivation: evolutionary fuzzing is inefficient on deep logic and sparse bugs; gradient optimization is stronger on high-dimensional structured problems but is blocked by the non-differentiable, discontinuous behavior induced by discrete branching; existing program-smoothing approaches rely on symbolic execution or abstract interpretation, which are expensive and unscalable.

Contributions: (1) The core insight that program smoothing is essential for gradient-guided fuzzing, formalizing fuzzing as an optimization problem and motivating a smooth surrogate. (2) The first scalable surrogate-NN program smoothing: a feed-forward NN learns a smooth approximation from inputs to the edge-coverage bitmap, made trainable via label dimensionality reduction. (3) A coverage-oriented gradient-guided mutation strategy: use ∇x f_i(θ,x) to locate critical bytes and mutation directions, generating high-value mutations rather than uniformly random ones. (4) An incremental-learning pipeline with coverage-based filtration that keeps correcting the surrogate with new-coverage data while avoiding catastrophic forgetting. (5) The NEUZZ implementation and large-scale comparisons on real programs, LAVA-M, and CGC, showing clear wins over 10 state-of-the-art fuzzers in both bug counts and edge coverage.

Main innovation: smoothing discrete branching behavior into a differentiable function via a surrogate model, then turning gradients directly into decisions about where and how to mutate, achieving stronger exploration at far lower cost than heavyweight symbolic or taint analyses.
\ No newline at end of file
diff --git a/papers/Topic2 Protocol-aware generation & fuzzing/U-NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.pdf b/papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.pdf
similarity index 100%
rename from papers/Topic2 Protocol-aware generation & fuzzing/U-NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.pdf
rename to papers/Topic2 Protocol-aware generation & fuzzing/NEUZZ Efficient Fuzzing with Neural Program Smoothing/NEUZZ Efficient Fuzzing with Neural Program Smoothing.pdf
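The NEUZZ gradient-guided mutation step summarized above can be sketched as follows. This is a minimal illustration: the `grad` vector stands in for the surrogate NN's gradient of one edge-output neuron with respect to the input bytes, and `k`/`step` are illustrative parameters, not NEUZZ's exact schedule.

```python
def gradient_mutations(x, grad, k=2, step=16):
    """Sketch of NEUZZ-style gradient-guided mutation: rank input bytes by
    gradient magnitude, then nudge the top-k critical bytes in the direction
    of the gradient sign, clipping results to the valid byte range [0, 255]."""
    ranked = sorted(range(len(x)), key=lambda i: abs(grad[i]), reverse=True)
    out = list(x)
    for i in ranked[:k]:
        direction = 1 if grad[i] > 0 else -1
        out[i] = min(255, max(0, out[i] + direction * step))
    return bytes(out)
```

A full fuzzer would iterate this with exponentially growing `k` and feed inputs that trigger new edges back into the surrogate's training set.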