Harnessing the Hive Mind: How Semgrep and Nuclei Are Shaping the Future of Security Engineering

Security experts exert leverage at scale and across the planet.

Crowd-sourced, open source, customization-first tools continue to make noise in the security tooling marketplace. These are not new developments; we, of course, know about projects like FindBugs and select projects from OWASP. However, new entrants like Semgrep from r2c and Nuclei from ProjectDiscovery feel different. What has attracted VC funding to these projects, and what's behind the excitement and buzz among practitioners in the security space?

To dig in - and perhaps to confirm suspicions - here are the collected results of discussions with old friends who use these tools on a daily basis. These key topics drove those conversations and surfaced the signal:

Variant analysis was an easy idea for industry analysts to latch onto, but there's more to it than that. What makes these tools attractive, in brief, revolves around the advantages of first-class customization support, not cost. Some of the best uses for security tooling aren't what a vendor's product people had in mind, and the best rules for that tooling can only come from tailoring to the development of real software.

The stack

The interviews focused on the open source static analysis tool Semgrep and the open source dynamic analysis tool Nuclei. Universally, this group uses these tools to augment other open source software - discovery tools like ZMap, other offerings from ProjectDiscovery like httpx, and of course OWASP ZAP all fit into the strategy. Today, these tools mostly augment commercial offerings from market incumbents like SonarQube and Burp Suite.

Semgrep is an open source tool for source code analysis, forked from Facebook's Pfff, a collection of static analysis tools released under LGPL 2.1. Semgrep is maintained by r2c and the community. r2c commercially offers a platform for Semgrep's integration into a software lifecycle, quality-of-life tools for rule authors, closed source versions of its engine, and other services.

Nuclei is one of several open source tools run by ProjectDiscovery; its focus is fast vulnerability scanning that works by testing running software over a network. ProjectDiscovery offers most of its tools under permissive open source licenses, and maintains a suite of products addressing managed execution, scaling, integration, and other operationalization concerns.

Most respondents simply run these tools ad hoc; for scale or automation, they put the tools into containers and manage their execution with automation tools like Jenkins. Several engineers mentioned that running Nuclei or Semgrep was fully automated, with results appearing in a ticket queue or pull request. Several more highlighted the need to collect direct feedback on checkers, forming a feedback loop with impacted engineering teams. And finally, some form of central checker or rule management is common, with one respondent using a simple Git repo to source and manage contributions.
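As a concrete sketch of that automation, here is a hypothetical pull-request job in a YAML-based CI system (the respondents used Jenkins, but the shape is the same): run each tool's official container, emit machine-readable results, and hand them to whatever opens tickets or comments on pull requests. The image names, paths, target list, and flags are assumptions and may differ by version.

```yaml
# A hypothetical GitHub Actions-style workflow; respondents ran the same
# pattern under Jenkins. Rule paths and target files are placeholders.
name: security-scans
on: [pull_request]
jobs:
  semgrep:
    runs-on: ubuntu-latest
    container: semgrep/semgrep          # official image; pin a version in practice
    steps:
      - uses: actions/checkout@v4
      # JSON output feeds whatever files tickets or PR comments downstream
      - run: semgrep scan --config ./rules --json --output semgrep-results.json
  nuclei:
    runs-on: ubuntu-latest
    container: projectdiscovery/nuclei  # official image; pin a version in practice
    steps:
      - uses: actions/checkout@v4
      # -l: newline-delimited targets file; -t: your curated template repo
      - run: nuclei -l targets.txt -t ./templates -jsonl -o nuclei-results.jsonl
```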

With the combination of Nuclei and Semgrep, you can do wonders as a security engineer.

Pros and cons

Key advantages of Semgrep are its dead-simple rule language, which most folks can learn in under a week, and its speed - orders of magnitude faster than commercial tools. Nuclei also has a simple format for writing new checks and a community of hundreds of active contributors. Both tools are open source; power users can deeply integrate them with the assurance that they won't suddenly lose key components of their security platforms or be left waiting on a vendor's product team to address key demands.
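To make "dead-simple" concrete: a complete Semgrep rule fits in a handful of YAML lines. A minimal sketch (the rule id and message here are ours, not from any published ruleset):

```yaml
rules:
  - id: ban-eval
    languages: [python]
    severity: ERROR
    message: eval() on data you don't control is code execution.
    # "..." matches any arguments; the pattern reads like the code it finds.
    pattern: eval(...)
```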

That's pretty clear - the major advantages of these tools over incumbents are customizability, open source licensing, and speed; Ty Bross shared that Nuclei is fast enough that platforms will often rate-limit his scans. These advantages may not sound huge, but this flexibility means the tools are useful in contexts beyond those imagined by the product managers at incumbent security tool vendors. The second advantage of this flexibility is that existing rules can be modified, sets of rules or checks can be curated, and new rules and checks can be written and tailored to the way your populations of engineers deliver value. Parsia Hakimian further elaborated that Semgrep can often work in conditions where commercial tools fail - it can be run on small chunks of a project's full code-base, even on code that does not build at all.

General purpose defect discovery tools traditionally mined CVE and exploit details, characterized defects found in private assessments, or carried out exhaustive searches of undefined/insecure behavior in programming languages, execution environments, libraries, and frameworks. They did so by extrapolating from common use cases - generalizing, for example, from industry 'best practices' around cryptography - with large teams of security researchers generating checks for commercial tools. Now the world has changed: those same groups are struggling to keep up with the volume of libraries produced, and continue to struggle to understand how populations of engineers actually use technology.

The cons of these tools are that they can lack the canned checks (including decades of legacy checks), analysis capability, polish, Gartner quadrant scoring, and audit-pleasing, standards-criteria-satisfying properties of incumbent commercial tools; Tim Michaud mentioned that one agency cited FedRAMP approval as a reason to keep an incumbent commercial tool in place. Parsia highlighted that with open source Semgrep he loses features like inter-file analysis. The sponsoring organizations' commercial offerings have started addressing these gaps at price points that look attractive to anyone considering a multi-million-dollar contract renewal with an incumbent vendor, with flexibility that appeals to power users and familiarity to a productive cohort of security practitioners.

Incumbent tools evolved their analysis engines and checks together. They grew up and were architected in an era of magic, and they carry that maturity with them, good and bad. New tools benefit from all of that prior effort - including a modern understanding of categories of defects and their root causes - and an understanding of crowdsourcing dynamics, but they haven't yet accreted the person-hours to be drop-in replacements for the use-cases incumbent tools excel at.

Core value

The tailorability of rules (and the relatively sparse set of rules and checks these open source tools shipped with) meant that many of them were first marketed as serving a 'variant analysis' capability. That framing has given way to community adoption: private security engineering teams with no commercial interest in owning checker intellectual property have freely turned their rules over to improve everyone's experience of using these tools. Security and engineering groups also encourage the adoption of open or in-house risk-eliminating practices like secure-by-default architectures. And finally, due to their flexibility, these tools are useful outside their apparent remit of finding problems in software.

Variant analysis is the elimination of a type of defect across a set of software - it usually means you've identified a single expression of a problem and can generalize a check or rule for that expression across that population. An accessible example of variant analysis with Semgrep (a rule sketch follows the list):

  1. FooBar Bank's login page suffered from SQL injection; the expression is the dataflow and lines of code responsible,
  2. A pattern is identified: information in GET parameters reaching a string used as a SQL query, and
  3. Semgrep is used to look for that pattern across all the Java code at FooBar Bank.
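A minimal sketch of what that rule could look like, using Semgrep's taint mode (the rule id and the exact source/sink choices are our assumptions, not FooBar Bank's real rule):

```yaml
rules:
  - id: request-param-reaches-sql
    languages: [java]
    severity: ERROR
    mode: taint
    message: A request parameter flows into a string used as a SQL query.
    pattern-sources:
      # Step 2's source: information arriving in GET parameters
      - pattern: $REQ.getParameter(...)
    pattern-sinks:
      # Step 2's sink: a string handed to a SQL query API
      - pattern: $STMT.executeQuery(...)
```

Pointing `semgrep scan --config` at this file and the bank's Java repositories is the whole of step 3.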

And a similar example for Nuclei (a template sketch follows the list):

  1. DNS servers are compromised with guessed credentials.
  2. A pattern is identified: FooBar Bank's outsourced DNS experts keep setting up servers with the administrative password BigFour123!,
  3. Nuclei is used to look for SSH servers that accept root/BigFour123! across FooBar Bank infrastructure. (Or those lacking MFA, or privileged access management, or …)
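SSH-style checks use Nuclei's network-level protocols; the more common HTTP template format is easier to show. Here is a sketch of the same idea against a hypothetical web admin panel - the endpoint, request body, and success signals are all assumptions:

```yaml
id: foobar-dns-default-creds
info:
  name: FooBar DNS admin panel default credentials
  author: foobar-security        # placeholder
  severity: critical

http:
  - method: POST
    path:
      - "{{BaseURL}}/login"      # hypothetical login endpoint
    headers:
      Content-Type: application/x-www-form-urlencoded
    body: "username=root&password=BigFour123%21"
    matchers-condition: and
    matchers:
      - type: status
        status:
          - 302                  # assumed redirect-on-success behavior
      - type: word
        part: header
        words:
          - "session="           # assumed session cookie on success
```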

Closely related to the variant analysis use-case is sharing rules for error-prone parts of platforms or frameworks; the formats of Nuclei templates and Semgrep rules are starting to dominate the open exchange of ideas between industry professionals. Similar to the distribution of YARA rules by the infrastructure security set, fast community prototyping and evaluation of Semgrep rules for the reachability of a new CVE or defect type is gaining ground. Participating in this ecosystem means you're better able to run with the herd.

…complexity sells better.

You used to have to beg a vendor to support finding a particular CVE, wait for someone else to figure out the combinations of sources, sinks, and mistakes across the particular set of libraries and frameworks your development teams use, or wrestle with arcane rule languages. Now you can do this in house, in reaction to bug bounties, penetration testing reports, or in-house threat modeling and development stack assessments. Increasingly, the teams who work closely with real developers and real code are open sourcing the checks they create in-house, putting open source tooling more and more in direct competition with market incumbents. Anshuman mentioned that his team puts Nuclei templates into their bug bounty response process, in one case forking an existing template in response to a bug bounty report and tailoring it to his organization's code-bases. With that template in place, their automated tooling can pick up variants or regressions in the future.

These tools are also enabling one of the highest-return security activities: using secure-by-default architectural components. Secure-by-default eliminates problems, and eliminated problems don't need to be looked for, accounted for, or managed to remediation - yet secure-by-default remains relatively rare. Open source tools solve a tricky element of encouraging component adoption: they can detect when developers have failed to use secure-by-default approaches, and then nudge those teams back towards safer pastures with whatever incentives and disincentives work best for your organization. For example, maybe you want development teams to eliminate the risk of cross-site-scripting vulnerabilities by using HTML templates instead of writing raw HTML responses in golang. Semgrep might be used to look for instances where an http.ResponseWriter receives output from something other than template.Execute, and a finding can then trigger an array of things: dropping the developer a Slack message about how cool HTML templates are, flagging a particular build for more stringent XSS checks and security delays, or just marking a particular project as more error-prone. Tim shared an example where his developers were working on an authentication and authorization package; as they developed their secure-by-default library, they began introducing Semgrep rules that gradually removed direct access to the functionality the library provided. Anshuman shared that as good as it is to have secure-by-default frameworks, getting developers to use them is another issue; his tooling nudges developers towards safe-by-default frameworks when a push to a repository happens.
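One way such a nudge rule might look - a heuristic sketch, not a complete policy (the rule id and regex are ours; a real rule would likely pair this with checks for template.Execute usage):

```yaml
rules:
  - id: go-raw-html-response
    languages: [go]
    severity: WARNING
    message: >
      Hand-built HTML is being formatted into the response; prefer
      html/template's Execute so output is contextually auto-escaped.
    # "=~/.../" matches the format-string literal against a regex;
    # here, any literal that opens an HTML tag. A nudge, not a proof.
    pattern: fmt.Fprintf($W, "=~/<[a-zA-Z]/", ...)
```

A WARNING severity fits the nudge use case: findings surface in review without blocking a build outright.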

Most interesting are the unanticipated use cases. Security tools commonly emit artifacts, hold intermediate results, or have technical capabilities that could be leveraged for reasoning about populations of code and coders, if only they were unlocked. Open source tools don't have these limitations: Semgrep and Nuclei can be used for portfolio-level efforts to understand code, coding practices, and coding risk in ways that drive interesting and effective behavior by security teams. One typical use case is software inventory and risk analysis - sure, you can ask a development team whether their project uses, for example, credit card services, but with Semgrep you can just write a query for them. With a set of checks that tell you which software properties express risk, you can realize many benefits of threat modeling in a totally automated way: use that information to automatically risk-score applications in your inventory, and respond with automation to address risk. Nuclei can similarly be used along with its Wappalyzer template set to determine, experimentally, attacker-facing application stacks - a useful list to have when deciding what an initiative should focus on next.
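Inventory queries can be ordinary rules with informational severity. A sketch, assuming a hypothetical paymentsdk package standing in for whatever card-processing SDK your teams actually use:

```yaml
rules:
  - id: inventory-card-processing
    languages: [go]
    severity: INFO
    message: This service appears to call a card-processing SDK.
    # paymentsdk is a hypothetical package name; substitute your own.
    pattern: paymentsdk.$FUNC(...)
```

Severity INFO keeps these findings out of blocking gates while still landing them wherever your inventory tooling reads results.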

One potential of these tools is teased by Google's Tricorder project, an effort to replace big-box static analysis tools with a largely in-house pipeline. One interesting observation was that engineers contributing to libraries or frameworks had begun writing and distributing their own checks and rules to enforce safe use of their APIs. In some cases those engineers even simplified APIs to make issues easier for tooling to catch. Instead of burying pitfalls in documentation that developers need to read or memorize, guidance can be shipped as code. Those engineers gain more agency over the safety, or error-prone-ness, of their own code, and seem to want to make that code less risky to build on. Tim's firm has a solid infusion of ex-Google folks, and unsurprisingly, those engineering teams are already writing their own Semgrep checks, and his infrastructure folks incorporate automated Nuclei checks to enforce their own standards. Open source projects with simple rule languages like Nuclei and Semgrep are in a great position to become the lingua franca of modern engineers, distributed alongside popular development frameworks to realize similar dynamics outside the halls of super-high-tech firms.

Operation

Starting with these tools doesn't take much more than pulling copies down and running them against raw targets. Those squeezing good value from these tools all had to develop a few capabilities we can learn from:

  1. Automated, containerized execution, managed by pipeline tools like Jenkins,
  2. Routing of results into the places engineers already work - ticket queues and pull requests,
  3. Feedback loops with impacted engineering teams on the quality of checkers, and
  4. Central management and curation of rules and checks, even if it's just a shared Git repo.

These capability areas have their own levels of maturity and reflect predominant, established use-cases. They're 'table stakes' before getting into nascent capabilities like automated inventory risk assessment, which probably means mashing tool output up with your inventory and pursuing further integrations with governance-as-code elements.

Insights

These tools can scale a bit of knowledge trapped inside an individual security engineer across a portfolio of code, or the world's code. Their flexibility means you can count on them to tell you more about your software than merely whether it has a particular security problem.

Realizing secure by default, automated inventory, risk assessments, and threat modeling means developing patterns and seeing which systems of incentives work best with different engineering populations. It's a lot of work for any one organization - you'll want to establish and cultivate ways of effectively collaborating in the commons on building these datasets.

Community-sourced checks and rules are one way to keep pace with the growth of platforms, libraries, frameworks, and development paradigms. Investors have started to recognize these dynamics: closed source business models without credible network effects can't keep up with checker and rule-set scale, last-mile customizability is a real advantage, and skilled customers who are also engineers will take the products to interesting and unanticipated places.

Everyone involved with security has run into burnout caused by the mismatch between commercial security tools and development practice, massive run-times driven by the need to reduce false positive rates in the face of outrageously risky programming practices, the traps of point-solution-driven penetrate-and-patch behavior, and those tools' inflexibility when you try to address these problems yourself. Give yourself, your initiative, and your community of security engineers the ability to overcome these problems by investing in, becoming familiar with, and leveraging these promising open source tools.

The best way to predict the future is to invent it.

Acknowledgements

I’ve long been curious about how the dynamics of open source tooling have been playing out in industry, and without the illuminating conversations I’ve had with old colleagues this article would not exist.

Thanks to Sammy Migues and Mike Doyle who provided extensive feedback and edits.