Cross-Site Trust Exploitation (XSTE)

Wednesday, May 21, 2014

XSTE... Catchy title, right? Truth be told, it's a fancy name that a colleague of mine and I gave to a Content Spoofing attack we conducted on a penetration test last week. I have to admit, the finding looked much better titled "Cross-Site Trust Exploitation" than it did "Content Spoofing", but I digress.

For those unfamiliar with Content Spoofing, it is a flaw similar to Cross-Site Scripting (XSS) in which a payload placed into user-controlled input is reflected back to the user by the application, but rather than injecting script payloads, the attacker injects a payload that defaces the page. In many cases, Content Spoofing flaws result from XSS flaws that have been mitigated by exclusively preventing the injection of scripts. Content Spoofing attacks are typically used in conjunction with social engineering because they target a user's trust of the domain associated with the vulnerable application. Let me explain.

Let's say you come across a web page that looks something like this.

When I see something like this, I immediately think XSS. The page has clearly reflected something we control back to us.

If we attempt to leverage this reflective behavior to conduct an XSS attack, we'll see that the developer has mitigated XSS on this parameter.

  • Payload: <script>alert(42)</script>
  • Result: [screenshot]

In this case, HTML output encoding was used to mitigate XSS. This is an essentially foolproof way to prevent XSS, and it's the point where most testers move along with the test. But there is still danger lurking here. We may not be able to inject an XSS payload, but what prevents us from using the available character set to create a payload that supports a social engineering attack?
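To see why output encoding stops XSS but not Content Spoofing, here's a minimal sketch in Python. The page and error message are hypothetical stand-ins for the vulnerable application, not its actual code:

```python
import html

# A hypothetical vulnerable page that reflects user input after
# HTML-encoding it. Encoding blocks script injection...
def render_error(user_input):
    # html.escape neutralizes <, >, &, and quotes, so injected
    # tags render as inert text instead of executing
    return "The path '{}' was not found.".format(html.escape(user_input))

# An XSS payload is neutralized: the tag characters are encoded
page = render_error("<script>alert(42)</script>")
assert "<script>" not in page

# ...but plain text survives untouched, so a spoofed message
# renders exactly as the attacker wrote it
page = render_error("x. Security Alert!!! Please authenticate through the backup proxy")
assert "Security Alert!!!" in page
```

The encoding is doing its job perfectly; the problem is that the full printable character set is still available to craft believable spoofed content.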

  • Payload'' was not found \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ Security Alert!!! This Splunk Server Is Currently Under Attack. The Server's Secret Key May Be Compromised. Please Authenticate Through The Backup Proxy @ \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\ The path ''/en-US/secure_login
  • Result: [screenshot]

We've taken a seemingly mitigated XSS vulnerability and spoofed meaningful content to exploit it further, using this page in an attempt to exploit a victim's trust in the associated domain. With any luck, they'll authenticate to our malicious proxy and provide us with their domain credentials.

Recon-ng Update

Friday, May 16, 2014

For those who have been following along on social media, I have been referencing the "next version of Recon-ng" for quite some time. I've made claims of new features, new modules, and increased usefulness. All of these promises come to fruition in the release of Recon-ng v4.0.0.

The sweeping changes of this revision come as a result of the revelation that the Metasploit Framework model of data storage and manipulation doesn't fit well into the reconnaissance methodology. Therefore, Recon-ng's approach to reconnaissance has changed, and users will notice that Recon-ng has begun to move away from the feel of the Metasploit Framework toward a structure and system that better fits the demands of a solid reconnaissance framework. Below is a summary of the changes users can expect to see in the new version of the Recon-ng framework.

Global Option Changes

One of the first things users will notice is that there are significantly fewer global options in the new version of Recon-ng. This is a result of the global options used as starting points (domain, company, netblock, etc.) being moved into the database. More on this in a moment. Another change that users will notice is the addition of the "STORE_TABLES" global option. This option sets a flag that tells the framework to store every ASCII table that is created by a framework module to the database. At the time of this update, the only modules that are impacted by this are the jigsaw/purchase_contact and pwnedlist/domain_ispwned modules.

Enhanced Framework Seeding

The most frequent feature request I've received since the release of Recon-ng has been the ability to use more than one domain, company name, netblock, or location as a starting point, or "seed". In the new version of Recon-ng, the seed information has been moved from the global options to independent tables in the database, allowing for multiples of each seed. This change allows me to introduce a new concept to the framework: every piece of information stored in the database is a potential input "seed" from which new information can be harvested. This supports Recon-ng's new approach to information harvesting: transforming information into other types of information, similar to the approach of Maltego with its "transforms". In addition to the seed information types, other tables have also been added to the database for storing vulnerability and port scan information.

Now that users can no longer "set" the seed information as global options, the "add" command has been added to the framework to compensate. The "add" command allows users to add a record to any table in the database without the use of SQL. Users will now use the "add" command to add initial records to the database which will become seed information for the modules.
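Conceptually, the "add" command is just an insert into a workspace table without hand-written SQL. Here is a sketch of the idea using Python's sqlite3; the table and column names are illustrative, not Recon-ng's actual schema:

```python
import sqlite3

# An in-memory stand-in for a Recon-ng workspace database
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE domains (domain TEXT)")

def add(table, value):
    # insert a seed record; the value is passed as a bound parameter
    db.execute("INSERT INTO {} VALUES (?)".format(table), (value,))
    db.commit()

# seed the workspace with multiple domains, which v4 now supports
add("domains", "example.com")
add("domains", "example.net")

rows = [r[0] for r in db.execute("SELECT domain FROM domains")]
print(rows)  # both seeds are now available as module input
```

Because seeds live in tables rather than global options, any number of them can accumulate and feed downstream modules.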

The "del" command has also been added to the framework to assist with deleting records. The "del" command requires the table name and "ROWID" of a record in the database. In order to facilitate this requirment, the SQLite built-in primary key "ROWID" column has been added to the show <table> command output.

Flexible Input Options

Originally, for this release of Recon-ng, I wanted the framework to only interact with the database for IO. However, personal preference and the requests of others convinced me to develop a system which allows for flexibility in what modules can use as inputs. Previously, some modules had an option named "SOURCE" which allowed users to specify the source information in the form of the database, a single string, a text file, or a custom SQL query. Users will notice that the "SOURCE" option is now present in every module. This is a result of a change in the way modules are developed. Module developers are now required to provide a default SQL query which serves as the default input source for the module. The framework takes that data and dynamically creates the "SOURCE" option, allowing the user to also take advantage of the other input options: single string, text file, or custom SQL query.

I've been asked, "Why provide a custom SQL query input option if the database is already the default?" The reason is that there will be times when users have 15 domains in the domains table but may only want to use 2 of them as input to a module. The custom SQL query source option allows the user to set the "SOURCE" option to something like "query select domain from domains where rowid between 1 and 2", which sets the input of the module to the domain column of the first 2 records in the domains table. It must be understood that this advanced usage of the "SOURCE" option requires knowledge of SQL. Additional information has been added to the show info command which explains the available "SOURCE" options and displays the default query set by the developer.
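The rowid filtering behind that example is plain SQLite behavior, which can be demonstrated directly with Python's sqlite3 (the schema and domain names here are made up for illustration):

```python
import sqlite3

# Seed an illustrative domains table with four records
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE domains (domain TEXT)")
for d in ("alpha.com", "bravo.com", "charlie.com", "delta.com"):
    db.execute("INSERT INTO domains (domain) VALUES (?)", (d,))

# The custom SOURCE query: select only the first two records
# using SQLite's implicit rowid primary key (rowids start at 1)
source = db.execute(
    "SELECT domain FROM domains WHERE rowid BETWEEN 1 AND 2"
).fetchall()

print([row[0] for row in source])
```

Only the first two domains reach the module as input; the other two stay in the table untouched.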

Module Changes

One of the issues with managing an open source project that consists largely of community-developed pieces is that the project manager is held responsible for bugs that arise in the contributed code. When contributors are unresponsive to requests to fix their code, the project manager is held accountable for the bad code, and the reputation of both the project and the project manager suffers. There have been many cases where I have received bug reports for modules that I didn't write, don't use, or perceive to have limited value, and I have elected to begin removing these modules from the framework. If I have removed a module that was previously useful, I will consider adding it back on a case-by-case basis or providing the code so that the module can be used locally.

Some new modules have been added in the new version of Recon-ng. The BuiltWith API has added the ability to enumerate contacts for target domains, so the builtwith contacts module was added. A module replicating the hash reversing script PyBozoCrack has also been added to the framework as another means to reverse harvested hashes. The pybozocrack module has been enhanced from the original script to support any type of hash supported by the Python "hashlib" library.
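To illustrate the "any hashlib hash" enhancement, here is a simplified sketch of BozoCrack-style hash reversal. It checks candidate plaintexts against every algorithm whose digest length matches the target; the real module recovers candidates from search results rather than a local wordlist, and this is not Recon-ng's actual code:

```python
import hashlib

# Algorithms to try; digest length narrows down which ones apply
ALGORITHMS = ("md5", "sha1", "sha256", "sha512")

def reverse_hash(target, candidates):
    target = target.lower()
    for name in ALGORITHMS:
        # skip algorithms whose hex digest length can't match
        if len(hashlib.new(name, b"").hexdigest()) != len(target):
            continue
        for word in candidates:
            if hashlib.new(name, word.encode()).hexdigest() == target:
                return name, word
    return None

# md5("password") used as a known target for demonstration
print(reverse_hash("5f4dcc3b5aa765d61d8327deb882cf99",
                   ["letmein", "password", "qwerty"]))
```

Because hashlib exposes all of its algorithms through one interface, supporting additional hash types costs nothing beyond listing them.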

I mentioned previously that the nature of the framework has changed from collecting information to transforming it. This required a restructuring of the module tree to provide visibility into what information is expected as input and what type of information results from each module. Therefore, the recon branch of the module tree now follows the path structure recon/<input table>-<output table>/<module>. This makes it simple to determine which modules are available for the action the user wants to take next. If the user wants to see all of the modules which accept a domain as input, they can simply search for the input table name "domains" followed by a dash: search domains-. If the user wants to see all of the modules which result in harvested hosts, they can simply search for the output table name "hosts" with a preceding dash: search -hosts.
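The naming convention makes module discovery a plain substring match, which a few lines of Python can demonstrate. The module names below are made up to show the pattern, not an actual module listing:

```python
# Hypothetical module paths following recon/<input>-<output>/<module>
modules = [
    "recon/domains-hosts/brute_hosts",
    "recon/domains-contacts/whois_pocs",
    "recon/netblocks-hosts/shodan_net",
    "recon/hosts-hosts/resolve",
]

def search(term):
    # a search is just a substring match over module paths
    return [m for m in modules if term in m]

print(search("domains-"))  # modules that consume domains
print(search("-hosts"))    # modules that produce hosts
```

The input-dash form selects by what a module consumes, and the dash-output form selects by what it produces, with no extra metadata needed.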

Changes to the framework have impacted some modules' behavior as well. The HTML reporting module is now much more comprehensive, as all of the data in the database is included in the report: static and dynamically generated tables. Also, modules which result in vulnerability or port information, such as the shodan_net, shodan_hostname, punkspider, and xssed modules, have been modified to add the respective information to the database.

Other Framework Changes

Several less impactful changes have also been made to Recon-ng. API key data is no longer stored in a JSON file. The JSON file has been replaced with a SQLite database and all of the framework methods have been updated to compensate.

Auto migration has been implemented into the framework. Beginning with this version of Recon-ng, any required migrations will be conducted automatically the first time a workspace is loaded into the new version of the framework. Users should be advised that the new workspaces are not backwards compatible, so it is recommended that users backup workspaces before allowing the migration to take place.

I have received several feature requests to allow for more workspace manipulation from within the framework. Therefore, the "workspace" command has been changed to "workspaces", and a set of subcommands has been added to list, add, select, and delete workspaces.

Data Flow

The changes to Recon-ng require users to understand the new flow of information through the framework. For example, users will want to make sure they have harvested all possible domains before they begin to run modules which use domains as input. Otherwise, repeated runs of modules will be required, exhausting API quotas or requiring complex custom SQL queries to prevent duplicate "SOURCE" inputs. Below is a step-by-step approach developed by using the new version of Recon-ng on several assessments. WARNING: The following example is not 100% complete. Please use it as a guide, not as an official methodology.

  • Add known seed information (domains, netblocks, company names, locations, etc.).
  • Run modules that leverage known netblocks. This exposes other domains and hosts from which domains can be harvested.
    • search netblocks-
  • Add new domains gleaned from the results if they have not automatically been added.
  • Run modules that conduct DNS brute forcing of TLDs and SLDs against current domains.
  • Have the list of domains validated by the client.
  • Remove out-of-scope domains with the "del" command or generate a query which only selects the scoped domains as input.
  • Run modules that conduct DNS brute forcing of hosts against all domains.
  • Run host gathering modules. The timeout global option may need to be extended for the ssl_san, shodan_*, and vpnhunter modules.
    • search -hosts
  • Resolve IP addresses.
  • Run vhost enumeration modules.
  • Run port scan data harvesting modules.
    • search -ports
  • Use JOIN queries for data analysis.
    • query select hosts.ip_address, ports.port from hosts join ports using (ip_address)
  • Run vulnerability harvesting modules.
    • search -vulnerabilities
  • Resolve geolocations of harvested hosts.
  • Add distinct locations to the db.
    • query select distinct(latitude || ',' || longitude) as locations from hosts where locations not null
  • Run contact harvesting modules.
    • search -contacts
  • Mangle contacts into email addresses.
  • Run modules that convert email addresses into full contacts.
  • Run credential harvesting modules.
    • search -creds
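The JOIN step in the data flow above is standard SQL, and can be sketched against an illustrative hosts/ports schema with Python's sqlite3 (the schema and records are made up; consult the framework for the actual table layouts):

```python
import sqlite3

# Illustrative workspace tables: harvested hosts and port scan data
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hosts (host TEXT, ip_address TEXT)")
db.execute("CREATE TABLE ports (ip_address TEXT, port INTEGER)")
db.execute("INSERT INTO hosts VALUES ('www.example.com', '203.0.113.5')")
db.execute("INSERT INTO ports VALUES ('203.0.113.5', 80)")
db.execute("INSERT INTO ports VALUES ('203.0.113.5', 443)")

# Pair each harvested host with its discovered ports by joining
# the two tables on the shared ip_address column
rows = db.execute(
    "SELECT hosts.ip_address, ports.port "
    "FROM hosts JOIN ports USING (ip_address)"
).fetchall()
print(rows)
```

The same pattern extends to any pair of tables that share a column, which is what makes the database-centric design useful for analysis, not just storage.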

Developer Changes

Many of the changes discussed above impact the way that modules are now developed. Therefore, developers will need to account for these changes. Below is a quick list of them. See the Development Guide for more details.

  • Sensitive module options such as usernames and passwords have been moved to the API key processing system.
  • The module template has been changed to satisfy the default query requirement.
  • New methods have been added to support framework changes: get_tables, add_<table>, summarize, debug
  • Methods have been removed: api_guard


Be sure to back up the ~/.recon-ng folder prior to using the new version of the framework, as the migrated workspaces may not be backwards compatible. Also, this is by far the largest revision the framework has undergone to date, so bugs are sure to exist. Please report any bugs to the issue tracker so that they can be resolved in a timely manner.

If you're interested in contributing to the framework, please see the issues page for module ideas, feature requests, and bug reports. All contributions are welcome from individuals with any level of Python experience, including no experience. I manage this project not only to provide a tool to the community, but to share my love of coding, mentor developers, and learn from others. Thanks again, and enjoy the framework.

Recon-ng Home Page

Raspberry Pi - Pianobar

Sunday, May 11, 2014

I tweeted a while back that I am using a Raspberry Pi and Pianobar to stream music to my whole-home audio system. I received a lot of requests to publish how I configured my system. At the time I didn't have any organized notes, so I didn't publish anything. However, the Pianobar developer changed some stuff recently that broke my old install, so I had to troubleshoot and rebuild. This time I took good notes and put this article together. The notes below are hastily thrown together and often use links in place of raw data, so if things seem confusing and you have questions, please hit me up on Twitter and I'll see what I can do to help.

I'm more comfortable in Debian environments, so I use Raspbian with my Raspberry Pi. Here are a few resources I used to get mine up and running. Rather than one resource giving me everything I needed to get started, I found bits and pieces from the various resources worked best.

With the Raspberry Pi up and running, I updated Raspbian and installed screen. Screen comes in handy during some of the lengthy steps involved with the manual install of Pianobar. Plus, screen is always a good tool to have around when working with a remote terminal.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install screen

I then proceeded to install Pianobar manually using the following steps.

# install dependencies
sudo apt-get install git libao-dev libgcrypt11-dev libgnutls-dev libfaad-dev libmad0-dev libjson0-dev make pkg-config
# install FFmpeg manually from source
git clone
cd FFmpeg
make clean
./configure
# 'make' can take several hours
make
sudo make install
cd ..
# install pianobar manually from source
git clone
cd pianobar
make clean
make
sudo make install
# configure alsa
sudo nano /usr/share/alsa/alsa.conf
# pcm.front cards.pcm.front => pcm.front cards.pcm.default

I used the following resource to configure Pianobar.

Notice the TLS fingerprint directive in the configuration file. For whatever reason, it is critical. If it is not correct, Pianobar will throw TLS errors and will not function. If I encounter TLS errors, the first thing I do is use the following command to check whether Pandora has changed their TLS certificate fingerprint.

openssl s_client -connect < /dev/null 2> /dev/null | openssl x509 -noout -fingerprint | tr -d ':' | cut -d'=' -f2

If you didn't already notice, the previous link was actually the walkthrough to configuring the Android client for Pianobar. This is what I use to control Pianobar on my Raspberry Pi, but it isn't required. However, the configuration example still applies, with the exception of the "eventcommand" directive.

The Pianobar remote is only available for Android, so it doesn't work when I want to control music from my MacBook, iPad, or wife's iPhone. Therefore, I use an open source implementation of AirPlay called ShairPort to enable AirPlay streaming to my Raspberry Pi. Below is a good resource for configuring ShairPort on a Raspberry Pi.

Here is a summary of the commands I used to install ShairPort.

sudo apt-get install git libao-dev libssl-dev libcrypt-openssl-rsa-perl libio-socket-inet6-perl libwww-perl avahi-utils libmodule-build-perl
git clone
cd perl-net-sdp
perl Build.PL
sudo ./Build
sudo ./Build test
sudo ./Build install
cd ..
git clone
cd shairport
make
sudo make install
sudo cp shairport.init.sample /etc/init.d/shairport
cd /etc/init.d
sudo chmod a+x shairport
sudo update-rc.d shairport defaults
sudo nano shairport
sudo reboot

The Raspberry Pi audio sounded pretty terrible in its default configuration, so I used the alsamixer tool to tune the sound. I've found that a setting of 78 sounds really good with my system and allows me to elevate the volume to a reasonably high level before distortion occurs.

Let it be known that Raspberry Pis do not handle power outages well. After countless hours of troubleshooting and rebuilding due to power outage induced corruption, I finally got smart and decided it was time to make an image of a complete install for recovery purposes. Below is the process for doing so on OSX.

# gracefully shutdown the RPi
sudo shutdown -h now
# plug the USB drive/SD card into OSX
diskutil list
# note the device ID i.e. /dev/disk2 of the Raspberry Pi media
# using rdisk is preferable (quicker) as it's the raw device
sudo dd if=/dev/rdisk2 of=backup.img bs=1m

So there you have it. Whew.