Radamsa is our attempt to make state-of-the-art automatic black-box robustness testing more accessible to vendors and independent software developers. Many programs, even old and mature ones, have serious issues in the parts exposed to data from potentially malicious sources. Some vendors and projects have sophisticated in-house tools for testing their products, and correspondingly crackers have their own tools tailored to their needs. Radamsa is an attempt at a general-purpose test case generator which others can easily adopt as a step towards more secure software development. The tool generates a wide variety of malformed input documents for programs in order to expose errors easily and early in the development life cycle, before they are found and possibly exploited with one of the plethora of similar ad-hoc tools.
From a practical point of view, Radamsa is a command line tool: given a set of sample input files, it generates similar output files with varying kinds and degrees of similarity to the samples. The tool analyzes the input files using several different techniques and uses the results to produce interesting test cases, as opposed to just randomly corrupting the documents. The output files can be fed to the program being tested, and the ones causing interesting failures can be set aside for further analysis or reporting. No BNF grammars, DSLs or fuzzer hacking required, just a set of sample files.
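As an illustration, a typical invocation might look like the following sketch. The sample and output file names here are made up for the example; the `-n` and `-o` flags are used to generate a given number of test cases into numbered output files:

```shell
# Generate one fuzzed variant of a single sample, written to stdout
radamsa sample.jpg > fuzzed.jpg

# Generate 100 test cases from a directory of samples into numbered files
# (%n in the output pattern is replaced by the test case number)
radamsa -n 100 -o fuzzed-%n.jpg samples/*.jpg
```

The generated files can then be fed one by one to the program under test.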
Usage and installation instructions are available in our public Google Code repository.
We at OUSPG have worked on various aspects of information security for many moons. In the early days we reported and fixed individual bugs in individual programs, and from there the path led to finding bugs affecting many programs, whole protocol families and other, more general metalevels. Some vendors welcome error reports and try to fix them; some are not interested and often seem to fail to understand the security implications, while others turn outright hostile.
Programs implemented in low-level languages tend to fail horribly when the handling of malformed input data goes wrong or lets the data reach too deep into the program. We have written many tools for this purpose, and some have been quite good at finding errors in various programs processing various forms of data. The sad fact, however, is that writing a fuzzer which finds errors in some existing programs is not rocket science. Just about any modification to the data, including flipping bits at random, has in our tests exposed bugs in several real-world programs, many of which are probably exploitable.
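To make the point concrete, even a mutator as naive as the following has historically been enough to crash real parsers. This is a minimal Python sketch for illustration only, not part of Radamsa itself:

```python
import random

def flip_random_bits(data, n_flips=8, seed=None):
    """Return a copy of data (bytes) with n_flips randomly chosen bits inverted.

    A fixed seed makes the mutation reproducible, which matters when you
    want to re-run a crashing test case later.
    """
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        if not buf:
            break
        pos = rng.randrange(len(buf))   # pick a random byte
        buf[pos] ^= 1 << rng.randrange(8)  # invert one random bit in it
    return bytes(buf)
```

Feeding the output of such a function to a file parser in a loop is already a crude but surprisingly effective robustness test; Radamsa's analysis-based mutations aim to do considerably better than this baseline.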
Given that writing somewhat effective fuzzers is so easy, it is no surprise that the net and security conferences are full of ad-hoc, more or less black-box, more or less useful fuzzers. They are often hard to install, use and customize for the task at hand. Having seen some of these and written some ourselves, we decided to make a tool which would combine the coverage of exhaustive input testing and the effectiveness of our best model-assisted yet black-box techniques into one tool that is easy for developers to use. Instead of just running such tests ourselves, we want to allow security-conscious vendors and developers to run similar tests themselves.
General Issues in Security
Complexity - Model Inference and Pattern Recognition
We work under the premises of unmanageable growth in software and system complexity and of emergent behavior. Unanticipated features, as opposed to intentionally designed ones, play a major role in any modern non-trivial system. We have previously taken natural-science approaches to understanding artificial information processing systems, developing and applying model inference and pattern recognition to both the content and the causality of signaling between different parts of systems. Radamsa is an experiment in applying a similar black-box approach, inferring models out of the unknown in a robustness testing context.
Quality - Building Security In and Secure Software Development Life Cycle (SDL)
Software quality problems, wide-impact vulnerabilities, phishing, botnets and criminal enterprises have proven that software and system security is not just an add-on, despite the past focus of the security industry (antivirus software, firewalls, intrusion detection systems, data loss prevention systems). Security, trust, dependability and privacy are issues that have to be considered over the whole life cycle of system and software development, from requirements all the way to operations and maintenance. This is further emphasized by the fact that large intelligent systems are emergent and do not follow a traditional development life cycle. Building security in not only makes us safer and more secure but also improves overall system quality and development efficiency. Security and safety are transformed from inhibitors into enablers.
We have developed and applied black-box testing methods to set quantitative robustness criteria. International recognition of the Secure Development Life Cycle has given us a way to map our research onto different security aspects. Radamsa provides a lightweight test generation framework for the actors in the secure SDL, improving their awareness of possible issues and allowing them to improve the quality of the systems they develop or use.
Awareness - Vulnerability Life Cycle
Intelligent systems are born with security flaws and vulnerabilities. New ones are introduced and old ones are eliminated as the system evolves. Any deployment of system components comes in generations, each with a different set of vulnerabilities. Technical, social, political and economic factors all affect this process. We have developed and applied processes for handling the vulnerability life cycle, and this work has been adopted in critical infrastructure protection. Awareness of vulnerabilities, and processes to handle them, among developers, users and society at large all increase the survivability of emergent intelligent systems.
Although the initial emphasis of Radamsa was on the research and application of structure inference techniques, the current version is also becoming a practical tool for fuzz test generation. As such it has found and will find real problems (vulnerabilities), and sometimes while using it you may find problems in systems made and operated by other real, living and breathing (still, if you act responsibly) people. We strongly recommend studying our disclosure publications and discussion tracking library on the different aspects of responsible vulnerability disclosure and handling to figure out what to do in such cases.
Radamsa and Secure Development
There are many approaches to improving the robustness of applications. Ideally all software would be expressed in formally specified languages and verified with automatic theorem proving against a known-correct model given in the specification. Sadly the real world, and even some troublesome corners of the mathematical world, make such an approach less viable. Theorem provers do work for some areas, and terminating ones are already put to good use in the type systems of modern programming languages, but in practice most programs are currently made more reliable using techniques such as static source code analysis, unit testing, model checking, black- and white-box fuzzing and hired beards (penetration testers and the like). Each approach has its benefits, and they should be combined for best results.
The role of Radamsa is to be an easy way to bring robustness testing into the development process. It can be used on its own to gain some insight into the robustness of an application, or set up to run alongside existing techniques to see whether one of its modules comes up with something the existing test setup missed.
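As a sketch of what such a setup might look like, the following shell loop generates test cases and collects the inputs that crash the program under test. The `samples/` directory, the output paths and the `./target` binary are assumptions made up for this example:

```shell
#!/bin/sh
# Hypothetical smoke-test loop: fuzz the samples, then keep any input
# that makes ./target die on a signal.
mkdir -p fuzzed crashes
radamsa -n 100 -o fuzzed/case-%n.bin samples/*
for f in fuzzed/case-*.bin; do
    ./target "$f" > /dev/null 2>&1
    # An exit status above 127 usually means the process was killed by
    # a signal (e.g. 139 for SIGSEGV under most shells).
    if [ "$?" -gt 127 ]; then
        cp "$f" crashes/
    fi
done
```

Anything that ends up in `crashes/` is a reproducible test case worth triaging, and can be re-run under a debugger or a memory error detector for analysis.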
We hope that by lowering the bar for robustness testing, at least a few bugs will be found before they are found, and possibly exploited, by black hats.