[Keynote] Is "AI" useful for fuzzing?



Published Sept. 16, 2024 by FUZZING 2024.


4 stars (1 review)

Discussion of AI and its applications to security seems unavoidable nowadays, and, alas, this keynote is no exception. But is it actually useful for problems we care about, like fuzzing? In classic academic fashion I will answer “maybe” at great length, but hopefully with enough concrete examples and references to actual code that the talk will be worth listening to. I will cover: 1) Places where it seems obviously misguided (input generation in the fuzzing loop); 2) Areas where it seems to have demonstrable benefits (harness generation); and 3) Promising future directions (generating input seeds, evolving input seed generators).

1 edition

A nice summary of the space

4 stars

As someone who sees a lot of LLM & security research, this keynote is a nice summary of where LLMs will likely add value (or already have), and where they will never help, regardless of LLM ability.

In short, using LLMs to generate inputs is orders of magnitude too slow to outpace the sheer speed of random/semi-random mutation. Using LLMs to generate fuzzing harnesses, and to build generator logic that produces inputs, will pay off: LLMs can ingest specs and code, and revise their output to get around coverage blocks.
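The tradeoff the reviewer describes can be sketched in a few lines of Python: a blind bit-flip mutator is extremely cheap per iteration, but almost none of its outputs survive even a simple checksum, while a small structure-aware generator (the kind of code an LLM could plausibly write from a format spec) emits valid inputs every time. The record format, `parse`, and `generate_seed` below are all hypothetical, invented purely for illustration.

```python
import random
import struct
import zlib

def parse(data: bytes) -> bool:
    """Accept a well-formed record: 4-byte big-endian length, payload, CRC32.
    (Hypothetical format, standing in for a real parser under test.)"""
    if len(data) < 8:
        return False
    (length,) = struct.unpack(">I", data[:4])
    if len(data) != 4 + length + 4:
        return False
    payload = data[4:4 + length]
    (crc,) = struct.unpack(">I", data[-4:])
    return crc == zlib.crc32(payload)

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Blind single-bit flip, as in a classic mutational fuzzing loop."""
    buf = bytearray(data)
    i = rng.randrange(len(buf))
    buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

def generate_seed(rng: random.Random) -> bytes:
    """Structure-aware generator of the kind an LLM could write from a spec:
    it builds the length prefix and CRC, so every output parses."""
    payload = rng.randbytes(rng.randrange(1, 32))
    return (struct.pack(">I", len(payload)) + payload
            + struct.pack(">I", zlib.crc32(payload)))

rng = random.Random(0)
seed = generate_seed(rng)
assert parse(seed)  # every generated seed clears the length and CRC checks

# A single bit flip always breaks either the length field or the CRC
# (CRC32 detects all single-bit errors), so blind mutation never gets past
# the parser's front door from this seed:
survivors = sum(parse(mutate(seed, rng)) for _ in range(10_000))
# survivors == 0
```

The point is not that mutation is useless (real fuzzers pair it with coverage feedback and dictionaries), but that per-iteration cost matters: a mutation is nanoseconds, an LLM call is seconds, so the LLM's leverage is in writing the generator once, not in sitting inside the loop.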