pStorage, A PersistJS Wrapper for AJAX

I recently came across PersistJS when I wanted to build a caching layer for AJAX responses in the browser. Being a Chrome-only guy, I automatically reached for sessionStorage.

Unfortunately, neither sessionStorage nor localStorage works reliably across browsers such as the mighty (giggle) IE. So I went looking for something generic that would work cross-browser. Eventually I found PersistJS, and after some testing I found that it fits my needs: it wraps several client-side storage techniques, which keeps things browser agnostic.

Since I could not find a simple wrapper that would meet my needs, and since I saw TONS of requests for this kind of thing during my search, I decided to build a simple class for anyone who may need it.

PersistJS shortcomings

What I found a bit annoying is that PersistJS lacks:

  • Content verification, to make sure the user gets refreshed content whenever the server content changes
  • Time To Live (TTL) – if you build local storage to cache your AJAX responses, you need to be able to refresh them when relevant


The solution I came up with uses the wonderful CryptoJS for MD5 hashing, and PersistJS for client storage. So I wrote a class that can be embedded in your JavaScript and adds that missing functionality. I called it pStorage.

Note: if you want to make the client storage a bit more tamper proof, you can include a salt, and add server-side code that generates the "salt" parameter in pStorage every time a session begins, or even force it to change after the TTL expires. In my example here, I wanted to stay generic.
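As a minimal sketch of what the salt buys you (a dummy stand-in replaces the MD5 hash so the snippet runs on its own; in pStorage the hash is CryptoJS.MD5, and the salt value here is purely illustrative):

```javascript
// Dummy stand-in hash so this runs without CryptoJS; pStorage uses CryptoJS.MD5.
function hash(v) { return v.length + ':' + v; }

var salt = 'a1b2c3d4';   // illustrative -- a real salt would be generated server-side
var entry = { value: '{"count":42}', hash: hash('{"count":42}' + salt) };

// On read, the hash is recomputed with the salt and compared:
console.log(entry.hash === hash(entry.value + salt));   // → true  (untouched)
entry.value = '{"count":999}';                          // tampered in storage
console.log(entry.hash === hash(entry.value + salt));   // → false (rejected)
```

Without knowing the salt, a tampered value cannot be paired with a matching hash, so the entry is simply treated as a cache miss.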

Here is the code:

/*
 * pStorage
 * --------
 * a wrapper around PersistJS and CryptoJS to allow TTL and Content Validation
 * by Barry Shteiman, 2014
 */
var pStorage = new function() {
    this.uid = 'my_pStorage';
    this.salt = '';
    this.datastore = new Persist.Store(this.uid);
    this.get = function get(key) {
        var entry = JSON.parse(this.datastore.get(key) || "0");
        if (!entry) return null;
        if (entry.hash !== this.hash(entry.value + this.salt)) {
            return null;    // content changed or was tampered with
        }
        if (entry.ttl && entry.now + entry.ttl < this.now()) {
            return null;    // entry expired
        }
        return entry.value;
    };
    this.set = function set(key, value, ttl) {
        this.datastore.set(key, JSON.stringify({
            ttl   : ttl || 0,
            now   : this.now(),
            hash  : this.hash(value + this.salt),
            value : value
        }));
    };
    this.del = function del(key) {
        var entry = JSON.parse(this.datastore.get(key) || "0");
        if (!entry) return null;
        else {
            this.datastore.remove(key);
            return null;
        }
    };
    this.now = function now() { return +new Date; };
    this.hash = function hash(value) { return CryptoJS.MD5(value).toString(); };
};

Note: you can and should change the uid parameter to be unique to your application (for example, replace 'my_pStorage' with your application's name).

The functions implemented are:

  • pStorage.get(key) – retrieve data; returns null if nothing valid is found
  • pStorage.set(key,value,ttl) – save data in the storage
  • pStorage.del(key) – delete an item
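To illustrate the intended call pattern (shown here against a tiny in-memory stand-in so the snippet is self-contained; in a real page you would call the pStorage object above, which persists via the PersistJS backends):

```javascript
// Tiny in-memory stand-in mirroring pStorage's TTL behavior (no PersistJS/CryptoJS here).
var store = {};
var cache = {
    set: function (key, value, ttl) {
        store[key] = JSON.stringify({ ttl: ttl || 0, now: +new Date, value: value });
    },
    get: function (key) {
        var entry = JSON.parse(store[key] || "0");
        if (!entry) return null;                                        // nothing cached
        if (entry.ttl && entry.now + entry.ttl < +new Date) return null; // TTL expired
        return entry.value;
    },
    del: function (key) { delete store[key]; }
};

cache.set('users', '{"count":42}', 60000);   // keep for 60 seconds
console.log(cache.get('users'));             // → {"count":42}
cache.del('users');
console.log(cache.get('users'));             // → null
```

The key and TTL are whatever makes sense for your application; a TTL of 0 means the entry never expires.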


To use the complete mechanism you will need to include both CryptoJS (either directly from Google Code or a downloaded copy) and PersistJS.

<script src=""></script>
<script src="ext/js/persist-min.js"></script>

The call itself is very easy, and a simple Ajax function can be built to incorporate the functionality.


Here is a simple example of an AJAX call (via jQuery) that looks for the data in client storage; if the TTL has passed, the content has changed, or the item simply is not cached – the request goes out to the server.

function cachedJsonAjax(url, ttl, params) {
    var output;
    if (pStorage.get(url) == null) {
        output = $.ajax({
            url: '/path/' + url + '.php',
            type: 'POST',
            data: params,
            dataType: 'json',
            async: false
        }).responseText;                // cache the raw response body
        pStorage.set(url, output, ttl);
    } else {
        output = pStorage.get(url);
    }
    return output;
}

All that is left is to call this function and point it at the right server resource to populate the data, relying on this caching layer in front.
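A usage sketch of the flow (pStorage and jQuery's $ are stubbed below purely so the snippet runs standalone; the endpoint name and response body are made up):

```javascript
// Hypothetical stand-ins -- in a real page these come from pStorage and jQuery.
var fakeStore = {};
var pStorage = {
    get: function (k) { return (k in fakeStore) ? fakeStore[k] : null; },
    set: function (k, v, ttl) { fakeStore[k] = v; }
};
var serverHits = 0;
var $ = { ajax: function (opts) { serverHits++; return { responseText: '{"count":42}' }; } };

function cachedJsonAjax(url, ttl, params) {
    var output;
    if (pStorage.get(url) == null) {
        output = $.ajax({
            url: '/path/' + url + '.php', type: 'POST',
            data: params, dataType: 'json', async: false
        }).responseText;                 // fetch and cache the response body
        pStorage.set(url, output, ttl);
    } else {
        output = pStorage.get(url);      // served from client storage
    }
    return output;
}

console.log(cachedJsonAjax('getUsers', 60000, {}));  // → {"count":42}  (fetched)
console.log(cachedJsonAjax('getUsers', 60000, {}));  // → {"count":42}  (cached)
console.log(serverHits);                             // → 1
```

The second call never touches the server: the first response is served back out of the cache until the TTL lapses or the content hash no longer matches.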

While this is all extremely simple, I hope this helps someone and saves you time :)

EXIF Cross Site Scripting, PHP Fun

As most security folks know, there are numerous ways to infect computers via image files that contain binary payloads, which are read/invoked by different image readers or the libraries that render them.

From my perspective, looking into web application security, I had an idea…

Many websites and applications nowadays look into a picture's EXIF information to extract valuable data such as size, date, exposure, geolocation – basically any piece of metadata. Well, some of those attributes are writable :)

I decided to design an experiment to check whether we can inject code directly into an EXIF tag, and whether application interpreters will execute it. I am sure this sort of experiment has been done in the past, but learning through experimenting is what I live for.

The Experiment

The experiment breaks into two parts:

  1. Will a standard JPG accept the characters that are required for a simple Cross Site Scripting attack?
  2. Will industry-standard interpreters execute the EXIF tags when read?

The first thing I did was install exiftool, read the metadata off a picture I took with my smartphone, and look for an updatable EXIF field. For the sake of this experiment, I went for the "Comment" tag and simply wrote "barry" into it.


As you can see, the field was updated. To conclude the first part of the experiment, I then updated the field with a simple XSS test string to verify that it is accepted. It was.
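For reference, the two writes can be reproduced with exiftool like this (a sketch of the commands; the file name is illustrative):

```shell
# write a benign marker into the Comment tag, then read it back
exiftool -Comment="barry" photo.jpg
exiftool -Comment photo.jpg

# write a classic XSS probe string -- exiftool stores it verbatim
exiftool -Comment="<script>alert('XSS')</script>" photo.jpg
exiftool -Comment photo.jpg
```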

We can conclude that the EXIF field accepts any character, and specifically a string that is required for an XSS attack.

To check the second part of the experiment, I used a PHP server on a commercial hosting service (running what they consider their best-practice PHP deployment), uploaded the manipulated image, and created a short page to display the EXIF information:
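The page boiled down to something like the following (a reconstruction, since the original was shown as a screenshot; the file name is illustrative):

```php
<?php
// Read all EXIF headers into an array and echo each tag raw --
// no output encoding, which is exactly what lets a stored script execute.
$exif = exif_read_data('photo.jpg');
foreach ($exif as $tag => $value) {
    if (!is_array($value)) {
        echo "$tag: $value<br/>";
    }
}
?>
```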



It is important to note that exif_read_data is a standard PHP function that reads the EXIF information into an array; I used echo to loop through the output.

Here is the result:


We have therefore proved the second part of the experiment: there was no problem displaying an unchecked string from EXIF and running it as a standard script. The XSS vector is valid.

Who Should Care?

With many websites accepting image uploads (some through mobile interfaces) and analyzing EXIF information to determine resolution, location, owner and other pieces of data – there is a risk of a web application attack. In this case we achieved an XSS (Cross Site Scripting) attack, but this information could just as well be extracted and inserted into a database – enabling an SQLi (SQL Injection) attack on the database.

The underlying problem does not reside in a simple echo of the malicious string. Metadata is usually saved in the application database of any application that parses this kind of information (take any image broker, including some of your favorite mobile apps); they keep this metadata for indexing and search purposes. This effectively means that one could inject a persistent XSS or SQLi string into the database and have it execute under certain conditions. Not a good thing.

I believe it is important to run filter engines on EXIF metadata, just as if it were a normal web attack or script injection vector.


CMS Hacking, Risks and Trends.

Yesterday, I presented in Imperva's (my employer and source of joy) monthly webinar the results of a trend analysis that I recently completed, including the research results in detail.

We explored the trends of CMS adoption in the market, a threat analysis of hacker trends around 3rd-party code/software and CMSs specifically, and went on to show what a hack campaign looks like, whether run by an individual or by a botnet for industrialized cybercrime.

Today Imperva uploaded the presentation for those of you who missed the live webinar but are still interested: here it is. Also check Imperva's website for the recording of the webinar.

The thing with 0-day Java Vulnerabilities is…

A few days ago, KrebsOnSecurity published a very well written article discussing a new 0-day Java vulnerability and its effect on the Bit9 hack. Now let's be honest – Java vulnerabilities are the hottest thing in security news right now; if a week goes by without a new 0-day, someone is slipping or sleeping.

I would like to look at this from a different angle for a second: the threat landscape and its trends.

It is very interesting to see the change in hackers' approach: from reversing protocols and platforms for vulnerabilities that are usually platform dependent, to relying more and more on overarching architectures such as Java and Flash, which are platform agnostic.

This creates an interesting threat landscape with a multiplatform effect. It is easy to misread reports saying that Microsoft platforms or others are now less vulnerable than in the past; I believe it is just a matter of hackers changing focus.

Taking a deep look at industrialized hacking, this fits the model well: by hunting for the latest and greatest vulnerabilities – perhaps buying them from a vulnerability broker that deals in 0-days – hacker groups can leverage indirect campaigns and hit large numbers of infections or data theft. Just look at the latest attack that hit Facebook, Twitter, Apple, Microsoft and others: it was definitely not a directed attack but an attempt to hit numbers. Interestingly enough, it used a Java 0-day as the infection vector, and it hit many organizations with different platforms and methods of securing themselves.

Java is what everyone blames now, but yesterday it was the Microsoft platforms, and tomorrow it will be something else. It's not about what's more secure; it's about what hackers focus on. At the end of the day, we should always look at the shortest funnels to cash, just like hackers do. Until then… keep your cup of coffee clean.


Tomcat Enumerator (for MSF)

The Tomcat Enumerator for Metasploit Framework

I gave myself a speed-coding challenge – you have to introduce fun into coding or I will never do it. I wanted to create something for the community that allows fast enumeration of a Tomcat application server via the Metasploit Framework. I set myself two hours this time, since it required building an environment, testing several installations and wiping out bugs myself (I should teach my wife some Ruby!). Challenge complete.

enum_tomcat, What does it do?

enum_tomcat is a post exploitation module (in the MSF repository as post/windows/gather/enum_tomcat) that operates as an enumerator over a meterpreter session on Windows. It evaluates the server for existing Tomcat installations and then enumerates the ports, users and main application (ROOT). Services are reported to the service repository, and everything else to the loot repository.

How can you get a copy?

As a Metasploit pen tester, you should have access to update the repository every now and then, and therefore a simple msfupdate should do the trick. To use the module, you first need to obtain a meterpreter session (I am not going to dive into that; it is part of the pen-testing scope of work and knowledge), then issue the following command: run post/windows/gather/enum_tomcat.

As always, this is an open source contribution to the project and is therefore available to everyone who wishes to use it.

enum_db Screenshot

Database Enumeration Module (for MSF)

Introducing enum_db for Metasploit Framework

Alright, so over the weekend I had time to convert some of my old scripts into Ruby, because you have to keep your mind sharp one way or another… when it occurred to me that I hadn't contributed to any open source project in a long, long, looooooooong time. So with the help of some VMs and a few spare hours, I converted a script I wrote back in the day, which I used quite a lot for pen-testing purposes in different projects. I have committed it to the Metasploit Framework repository and it is now publicly available for the community to use.

enum_db, What does it do?

enum_db is a post exploitation module (in the MSF repository as post/windows/gather/enum_db) that operates as an enumerator over a meterpreter session on Windows, and evaluates which Database flavors are installed on the host, and which Instances and Ports are available on them.

It supports MSSQL, MySQL, Oracle, Sybase and DB2, and uses the vendor-specific methods of identifying database installations, instances and connection ports.

Three outputs are available once databases are enumerated: on-screen results, loot of the enumeration process, and a service report that adds the discovered services to the MSF service table.

How do I get it and run it?

As a Metasploit pen tester, you should have access to update the repository every now and then, and therefore a simple msfupdate should do the trick.

To use the module, you first need to obtain a meterpreter session (I am not going to dive into that; it is part of the pen-testing scope of work and knowledge), then issue the following command: run post/windows/gather/enum_db.
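In msfconsole, that flow looks roughly like this (the session number is illustrative):

```
msf > sessions -l                                 # list open meterpreter sessions
msf > sessions -i 1                               # interact with session 1
meterpreter > run post/windows/gather/enum_db
```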

Final Words

This is of course an open source contribution to the project and is therefore available to everyone who wishes to use it. Use for good, not for evil.

AppDoS Defined

This morning, an outline article that I wrote defining AppDoS was published on Imperva's Blog.

Have a read, would you? Link here.

The article explores the differences between classic Denial of Service, and Application Oriented Denial of Service in a simplified manner.

LinkedIn’s No-CSO and my Personal Data.

The one thing that kept me thinking on a flight the other day was the latest news about LinkedIn's 6.5 million password breach, and the follow-up lawsuit against it for violating its own privacy user agreement ensuring the security of one's identity – not the story itself specifically, but some of the aftermath that came out of LinkedIn afterwards.

LinkedIn has admitted to having no individual holding a CSO/CISO title and managing security for what is currently the biggest online social network for work relationships and job hunting. This was quite a shock to me, since I really thought that any organization of such size (in terms of consumers and customers) would have someone officially owning the security role.

This creates an interesting gap. LinkedIn is not regulated (there is no current regulation for social networking, and perhaps there should be, to protect personal information), which means that when a hack like this happens – who do you blame? Who takes the fall? In other words: if data I hold as an employer is breached, who is responsible?

Now, I'm pretty sure that due to the publication of that fact, LinkedIn will resolve this soon enough, or at least they should. But it does make you think – where else did I put personal data online, and how is it protected, if at all? Are techies in charge of keeping my data safe, or is there someone who actually holds that responsibility?

I encourage each and every one of you who wishes to put their data online to make sure, or at least check, that those sites/companies are properly secured… hell… check on LinkedIn whether they have a CSO :-)

Virtual Gold and Bot Threats

For the past couple of weeks, I have been away visiting family in a country far, far away, with an excellent internet connection and boring midnight hours, so I decided to play some games to help myself fall asleep. Since this became quite an interest for me, I started looking in different sources at game stability, hacking and, most importantly, bot crafting for this game and others. And so it began…

The game I chose to play is Diablo 3 (I carry a flash drive with my favorites installed, everywhere). Now… for those of you who don't know, games that involve collecting items and improving characters online usually also involve earning virtual gold, which you can then use to buy, repair, improve, etc. – but that's just game mechanics.

Virtual Gold becomes Real Money

Many of today's games – whether betting games or online multiplayer games (and maybe your favorite Facebook games) – enable users to purchase virtual gold for real money, usually at an agreed exchange rate.

This means that virtual gold gets an actual dollar value, and users can trade, sell and buy.

In come the Bots

By now, we have all heard of people in the Far East who play constantly just to harvest virtual gold and then sell it for real money to western i-wanna-improve-immediately gamers. That took quite a lot of time, for what I would call medium-level gain.

From the old days, even when I was a kid, we used to write bots that would play the game for us, or bend it so we made more of that virtual gold faster and faster, making the game easy for us. When I looked into what is going on with people bending my latest favorite game, I came to realize how far this industry has gone…

More advanced and slightly more skilled bot makers today use easy-to-script tools such as AutoIt to create bots that will actually walk the game, play it out, collect items and sell them later on, maximizing virtual gold harvesting. This means you can set a machine to harvest virtual gold 24/7 and then sell it!

Security Problems

Well, there are several security problems here. Let's outline them:

  1. External scripting does not manipulate the game
  2. Bot Nets

External Scripting does not manipulate the Game, and Evades Detection

Let's start with #1. In the old days, when bots were written, they used to manipulate the in-game memory to change or reveal values. This meant it was fairly easy for game makers to create software to detect manipulation (I believe that Blizzard, the maker of Diablo 3, currently has the best manipulation-detection software – Warden, developed in-house).

The way AutoIt works is by issuing commands based on nothing more than pixel detection and the like, and sending input back to the game as if it came from a real keyboard and mouse. For example: move mouse to (x,y), click mouse, press "1", wait 500 ms.

Mitigating Bots

The problem this introduces is the potential inability of standard gaming software to detect such automation-tool scripts, making those bots more common – not just breaking the game for everyone (unfair advantage), but actually scamming the game for profit, which I believe is, or should be considered, a crime.

This can be mitigated fairly easily, but that creates a problem for the gaming vendors: they would cross the line between a gaming product and a moderated security product that enforces rules, like a host IPS or AV, that can say "if you run this tool in the background, the game won't start".

I would actually encourage that approach for any game that involves real-money interaction. It has become quite popular to introduce fraud prevention to online systems such as banking and online brokering; I see no reason why gaming should be any different, whether it is browser/thin-client based or a desktop-installed advanced game.

Bot Nets

There is a joke that loosely translates from Hebrew as "the open window calls the thief", which I believe applies here. If a platform introduces money exchange, hackers – organized or not – will find a way to exploit the system for profit, and are most likely to hurt both the game vendor and the players who pay to play and pay to improve their gear (some games have millions of users, considered addicts, who will spend quite a lot of money to advance in a game).

While bots such as those I mentioned are fairly simple, imagine the following scenario: a botnet.

Imagine a shoe factory, with many low-paid workers making shoes for the man, who sells them for profit… now imagine the security-world version, where an organized hacking group builds a farm of computers, all running the game with a bot, farming virtual gold and sending it to a central computer that then sells it.

Imagine even worse… a virus bot that infests your own computer and is then controlled by a C&C botnet that does just the same, leveraging your own computer and your own account for profit.

Traditional Bot Net vs the Gaming Bot Net Potential

Many think that botnets are meant for espionage, stealing credentials, reading emails or maybe stealing credit card information. But what is the difference? There is none!

Think of the goal: I want to make money off a botnet. To me, that means farming via a game that allows it and stealing credit cards or transacting money are the same thing.

In the end, awareness will be key here. Gaming vendors, along with HIPS/AV and fraud-prevention vendors, should look into this, making sure they create content in time as these threats rise.

Some Hulk Afterthoughts

It was great to see how much attention the HULK tool got over a short period of time – it was even considered malware by some folks who didn't read the fine print of the "educational experiment" labeling. I got tons of questions and improvement suggestions, while some missed the main idea behind it: it is not meant to evade a security control, but to bypass caching engines and hit the server directly.

What got me most interested was the growing interest from the hacktivist community in such tools (TONS and TONS of emails…), but also the fact that some researchers went on to analyze mitigations in case someone abused the tool maliciously. Needless to say, I used a library called urllib2 that is very easy to fingerprint (for application-aware security products), as this was designed for a lab experiment and not to break the laws of physics.

One individual sent me a nice test he ran with the tool, using a virtualized grid: running the tool from 300 hosts in a rate-limited LAN (to simulate the bandwidth constraints of a WAN link) against a server cluster running Apache with an Apache caching server in front – and he got the same results as I did.

I believe the goal was achieved: an individual or company that wishes to see how their application copes with application denial of service has a tool to run the check, while still being able to prevent anyone else from using it against them. I do believe that the only way to prepare for and mitigate a threat is knowing it exists. Hope this helped.