Category Archives: Security

BackStab Attack Takes Indirect Route To Mobile Data

Attack technique takes advantage of weak protections around mobile user’s backup files.

While there are plenty of mobile device vulnerabilities just waiting for bad guys to pick up on, some of the lowest hanging fruit for mobile-oriented attackers isn’t on the device itself. Instead, the softest target comes in the form of insecure back-ups stored on a traditional desktop or laptop.

Palo Alto Networks’ Unit 42 research team calls the technique “BackStab.” In a report out today, researchers with the team explain that this indirect route can net attackers text messages, photos, geolocation data and just about anything else that’s been stored on a mobile device.

“While the technique is well-known, few are aware of the fact that malicious attackers and data collectors have been using malware to execute BackStab in attacks around the world for years,” writes report author Claud Xiao. “iOS devices have been the primary target, as default backup settings in iTunes® have left many user backups unencrypted and easily identified, but other mobile platforms are also at risk.”

According to the report, Unit 42 has found over 700 recent flavors of Trojans, adware and other hacking tools designed to target Windows and Mac systems and harvest user data from iOS and BlackBerry backup files. Several of the malware families discovered by the researchers have been around for at least five years. They explain that there are tons of public articles and video tutorials detailing how to carry out a BackStab attack. And unlike a lot of mobile device attacks, it doesn’t require the targeted user to have a jailbroken device.

In the case of iOS attacks, BackStab is often made possible by default iTunes settings that leave backed-up data unencrypted.
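Administrators and users can check for this condition themselves. A minimal Python sketch, assuming the backup follows the usual iTunes layout, where the `IsEncrypted` key in a backup folder's `Manifest.plist` records whether backup encryption was enabled:

```python
import plistlib
from pathlib import Path

def backup_is_encrypted(backup_dir: str) -> bool:
    """Report whether an iTunes backup was created with encryption.

    iTunes keeps one folder per device (e.g. under
    %APPDATA%\\Apple Computer\\MobileSync\\Backup on Windows or
    ~/Library/Application Support/MobileSync/Backup on macOS); each
    folder's Manifest.plist carries an IsEncrypted flag.
    """
    manifest = Path(backup_dir) / "Manifest.plist"
    with open(manifest, "rb") as fp:
        return bool(plistlib.load(fp).get("IsEncrypted", False))
```

An unencrypted backup (flag `False`) is exactly the soft target the report describes; turning on “Encrypt local backup” in iTunes closes it.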

The report today detailed some of the most common tools that employ BackStab, including USBStler, a dropped portable executable often used in concert with the DarkComet remote access Trojan. Interestingly, the researchers also showed how RelevantKnowledge, a tool developed by Internet research firm comScore, leans on BackStab techniques to spy on consumers.

“We found that many RelevantKnowledge samples contain code to collect users’ iPhone and BlackBerry data through these mobile devices’ backup archives,” Xiao wrote. “During their execution, these samples will search for files under the Windows iTunes backup directory, collect information, compress it into a file and upload it to (comScore’s) web server.”

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Can a virtual machine “Hack” another VM running on the same physical machine?

Of course it is possible to exploit another VM running on the same hardware, given a working exploit; and such exploits can exist.

The exploits that are used in this context are naturally different from ones that function when you’re running on the same machine you are trying to exploit a service on, and they tend to be quite a bit harder due to the increased isolation. However, some general approaches that can be used to accomplish such an exploit include:

  • Attack the hypervisor. If you can get a sufficiently privileged shell on the hypervisor given a VM, you can gain control over any VM on the system. The way to approach this is to look for data flows that exist from the VM into the hypervisor, and are highly hypervisor-dependent; things like paravirtualized drivers, clipboard sharing, display output, and network traffic tend to create this type of channel. For instance, a malicious call to a paravirtualized network device might lead to arbitrary code execution in the hypervisor context responsible for passing that traffic to the physical NIC driver.
  • Attack the hardware on the host. Many devices allow for firmware updates, and if it happens to be possible to access the mechanism for that from a VM, you could upload new firmware that favours your intentions. For instance, if you are permitted to update the firmware on the NIC, you could cause it to duplicate traffic bound for one MAC address (the victim’s), but with another destination MAC address (yours). For this reason many hypervisors filter such commands where possible; ESXi filters CPU microcode updates when they originate from a VM.
  • Attack the host’s architecture. The attack you cited, essentially yet another timing-based key disclosure attack, does this: it exploits the caching mechanism’s impact on operation timing to discern the data being used by the victim VM in its operations. At the core of virtualization is the sharing of components; where a component is shared, the possibility of a side channel exists. To the extent that another VM on the same host is able to influence the behaviour of the hardware while it runs in the victim VM’s context, the attacker can partially influence the victim VM. The referenced attack makes use of the attacker VM’s ability to control the behaviour of the CPU cache (essentially shared universal state) so that the victim’s memory access times more accurately reveal the data it is accessing; wherever shared global state exists, the possibility of a disclosure exists also. To step into the hypothetical and give examples, imagine an attack which massages ESXi’s VMFS and makes parts of virtual volumes reference the same physical disk addresses, or an attack which makes a memory ballooning system believe some memory can be shared when in fact it should be private (this is very similar to how use-after-free or double-allocation exploits work). Consider a hypothetical CPU MSR (model-specific register) which the hypervisor ignores but allows access to; this could be used to pass data between VMs, breaking the isolation the hypervisor is supposed to provide. Consider also the possibility that compression is used so that duplicate components of virtual disks are stored only once; a (very difficult) side channel might exist in some configurations where an attacker can discern the contents of other virtual disks by writing to its own and observing what the hypervisor does. Of course a hypervisor is supposed to guard against all of this, and the hypothetical examples would be critical security bugs, but sometimes these things slip through.
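The cache side channel can be illustrated with a toy model. The sketch below simulates a prime+probe attack through a shared, direct-mapped cache; all names are illustrative, and a real attacker cannot inspect cache state directly, instead inferring a miss from the access latency:

```python
# Toy model of a prime+probe attack through a shared, direct-mapped
# CPU cache. All names are illustrative; a real attacker cannot read
# cache tags and instead infers a miss from the access latency.

class DirectMappedCache:
    """Each line remembers the address that last mapped to it."""
    def __init__(self, n_lines=8):
        self.n = n_lines
        self.tags = [None] * n_lines

    def access(self, addr):
        line = addr % self.n
        hit = self.tags[line] == addr
        self.tags[line] = addr       # fill the line on a miss
        return hit

cache = DirectMappedCache()

# 1. Prime: the attacker VM touches one address per cache line.
attacker_addrs = list(range(cache.n))
for a in attacker_addrs:
    cache.access(a)

# 2. The victim VM runs; which line it touches depends on a secret.
secret = 5
cache.access(cache.n + secret)       # evicts the attacker's line 5

# 3. Probe: the attacker re-touches its addresses; any miss reveals
#    a line the victim used, leaking information about the secret.
evicted = [a for a in attacker_addrs if not cache.access(a)]
print(evicted)                       # -> [5]
```

The real attacks replace the explicit hit/miss check with careful timing measurements, and must contend with noise from every other tenant on the host, but the information flow is the same.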
  • Attack the other VM directly. If your VM sits close to the victim VM on the network, you may be able to take advantage of relaxed access controls or intentional inter-VM communication, depending on how the host is configured and what assumptions are made when deploying access control. This is only slightly relevant, but it does bear mention.

Specific attacks will arise and be patched as time goes on, so it isn’t ever valid to classify some particular mechanism as being exploitable, exploitable only in lab conditions, or unexploitable. As you can see, the attacks tend to be involved and difficult, but which ones are feasible at a particular time is something that changes rapidly, and you need to be prepared.

That said, the vectors I’ve mentioned above (with the possible exception of the last one in certain cases of it) simply don’t exist in bare-metal environments. So yes, given that security is about protecting against the exploits you don’t know about and that aren’t in the wild as well as the ones which have been publicly disclosed, you may gain a little security by running in bare metal or at least in an environment where the hypervisor doesn’t host VMs for all and sundry.

In general, an effective strategy for secure application programming would be to assume that a computer has other processes running on it that might be attacker-controlled or malicious and use exploit-aware programming techniques, even if you think you are otherwise assuring no such process exists in your VM. However, particularly with the first two categories, remember that he who touches the hardware first wins.
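One such exploit-aware technique is comparing secrets in constant time, so that a co-resident process measuring response times cannot recover a secret byte by byte. A minimal Python sketch using the standard library's `hmac.compare_digest`:

```python
import hmac

def check_token(supplied: str, expected: str) -> bool:
    """Constant-time secret comparison.

    A naive `supplied == expected` returns at the first differing
    byte, so an attacker timing many attempts could recover the
    secret byte by byte. hmac.compare_digest's running time does not
    depend on where the inputs differ.
    """
    return hmac.compare_digest(supplied.encode(), expected.encode())

print(check_token("s3cret", "s3cret"))   # True
print(check_token("guess!", "s3cret"))   # False
```

The same principle (don't let secret-dependent work show up in observable timing) generalizes to cryptographic implementations, which is exactly what the cache-timing attacks above exploit when it is violated.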

The Healthcare Security Conundrum

It seems like ages ago that the HIPAA guidelines were adopted. Things got a bit more complex as the HITECH requirements and their financial implications grew, and Meaningful Use Stage 2, encryption and the like have created additional technical challenges. The goal has been to protect patient data and secure it using the best practices your organization can muster. Fast-forward to today: all of the rules still apply, but the game has changed. Hacking and breaches by unidentified and even foreign organizations, whose intent is murkier still, have raised the ante. They know the value of healthcare records, and they have had some success at capturing them.

There was a 1976 Dustin Hoffman movie, ‘Marathon Man’ (yes, I am exposing my vintage); the antagonist’s simple question was ‘Is it safe?’ Poor Dustin Hoffman did not know what, where, how, why or when. He, as well as the audience, was on the receiving end of the pain and fear. We find ourselves in a similar situation; instead of diamonds, it is our health records at risk. There is financial value in our health records, but the bad actors may not be out only for financial gain; a breach also affects brand value and reputation. The risks and stakes are high, and the intruders may already be in our systems, just looking around for something interesting.

So the ‘fear, uncertainty and doubt’ routine has reached our executives, and they want to know, ‘What can we do to prevent this from happening to us?’ Our teams are doing their best to train our consumers of IT services not to ‘click on that link’. The ingenuity and creativity of the hackers are sometimes unbelievable.

There are many examples both inside healthcare and in other industries; however, healthcare is a target since the value of a health record is more than just a credit card number. In case you are interested, see the HHS Breach Report. The net result: the top ten breaches over roughly the last three years account for 136 million records. At a value of $150 per record, that is a potential street value of over $20 billion.
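The back-of-the-envelope arithmetic behind that figure:

```python
# Rough valuation of the breached records cited above.
records = 136_000_000        # records in the top ten breaches (HHS data)
price_per_record = 150       # assumed street value per record, USD
total = records * price_per_record
print(f"${total / 1e9:.1f} billion")   # $20.4 billion
```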

Hence the fact that healthcare is a target.

How does VMware approach this area?

First, it is not a product; it is an approach, a layered approach that involves different organizations. No one company can solve this complex problem alone.

Our approach starts with an assessment to help you understand your security risks. We also work with several organizations that can help you assess your risk, and we provide free tools that offer some immediate feedback. We follow that with a ‘Hardening Guide’, a step-by-step approach to remediating the risks to your virtual environments. One of the capabilities allows workloads to be better isolated through a distributed firewall. This approach may include hardware, software and/or services.

We have just completed a white paper for you to explore the VMware concept of Security and Network Virtualization for Healthcare (VMware Healthcare Security Whitepaper), and although we may not be able to catch the villain of this story, we can ‘protect our house.’

Web Application Defense

Attackers are relentlessly looking to find and exploit any vulnerabilities that exist within web applications. Every web application has value for some criminal element. Cybercrime syndicates prize established sites’ customer credit card data, which is often improperly stored by e-commerce sites. The target of opportunity is typically a site with a large customer base.

They will use the site as a distribution platform, booby-trapping the sites with exploit kits, malware or malicious scripts. One of the most common modes of attack is to inject malicious code into legitimate JavaScript already present on the compromised websites. This perpetuates the spread of a large percentage of malware.

The JavaScript is automatically loaded by the HTML webpages and inherits the reputation of the main site and the legitimate JavaScript. If the illicit source code is detected by software, it is often discarded as a false positive. If Administrators manually check their site’s source code, however, the malicious code can be easily spotted.

It only takes a few moments as an Administrator to look over your web page and check for suspicious elements:

  1. Browser warnings – Does your built-in web browser issue a warning when you visit your site? If your browser does alert you that your site isn’t to be trusted, take its advice seriously and manually check your source code.
  2. Something looks wrong – Scammers can create a perfect-looking copy of your website. But often, through either incompetence or laziness, they’ll leave out graphics, features or links which you know should be there. Sometimes they will simply produce a basic password entry form or a pop-up window. Trust your instincts: if it doesn’t “feel” right, check your code.
  3. Wrong address – Phishers use tricks to disguise suspicious addresses. Sometimes the tricks are undetectable to the naked eye. So if your site’s login page appears to move from yoursite.com to yourste571-net.cn, alarm bells should be ringing (check your code).
  4. Insecure Connection – If your site uses a secure connection, “HTTPS” appears before the web address; check your browser for it. If you see only a regular “HTTP” connection, or nothing at all, you know the connection isn’t secure and your page may well be compromised (check your code).
  5. Check the Certificate – If your site uses high-security (Extended Validation) web certificates, as reputable online services do, make sure the green bar in your browser’s address field is present, confirming the name of the company that owns the page.
  6. Wants Too Much Information – Check your web login (when applicable) to make sure intruders can’t learn the entirety of a user’s login information by watching a single login.
  7. No SiteKey – If your web site uses SiteKey to confirm users are logging into a trusted site (by showing a piece of information that only that site ought to have access to – typically a graphic and a phrase chosen by the user), make sure it is shown every time your users log in. Make sure no process simply skips over this step. If you realize that your SiteKey information isn’t being shown at the appropriate time, check your source code.
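Some of the checks above can be partially automated. A rough Python sketch that scans a page's source for a few red flags; the patterns here are illustrative examples only, not a real signature set:

```python
import re

# Illustrative red-flag patterns only; a real scanner would use an
# HTML parser and a curated signature set, not these toy regexes.
RED_FLAGS = {
    "plain-http resource": re.compile(r'src\s*=\s*["\']http://', re.I),
    "obfuscated eval":     re.compile(r'eval\s*\(\s*unescape', re.I),
    "hidden iframe":       re.compile(r'<iframe[^>]*(?:width|height)\s*=\s*["\']?0', re.I),
}

def scan_page_source(html: str) -> list:
    """Return the names of any red-flag patterns found in the page."""
    return [name for name, pat in RED_FLAGS.items() if pat.search(html)]

page = '<html><iframe width="0" height="0" src="http://evil.example"></iframe></html>'
print(scan_page_source(page))
```

A hit from a scan like this is a prompt to manually review the source, not a verdict; as the article notes, injected code is designed to blend in with legitimate JavaScript.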

Hacktivists may want to knock your site offline with a denial of service attack. Diverse groups have diverse end goals but they all share the common methodology of relentlessly enumerating and exploiting weaknesses in target web infrastructures.

Your most prudent course of action is finding and fixing all your vulnerabilities before the bad guys do. There are different methods and tools to identify web application vulnerabilities, each with varying degrees of accuracy and coverage. Static analysis tools inspect the application’s source code, while dynamic analysis tools interact with the live, running web application in its normal environment. The ideal remediation strategy from an accuracy and coverage perspective is for organizations to identify and correct vulnerabilities within the source code of the web application itself. Unfortunately, in many real-world business scenarios, modifying the source code of a web application is not easy, expeditious or cost effective. Web applications fall into two main development categories: internal and external (which includes both commercial and open source applications). These categories directly impact the time-to-fix metrics for remediating vulnerabilities.

Here is a look at some of the most common roadblocks found in the two main categories for updating web application source code.

Internally Developed Applications

The top challenge with remediating identified vulnerabilities for internally developed web applications is a simple lack of resources. Again, business owners must weigh the potential risk of an application compromise against the tangible cost of initiating a new project to remediate the identified vulnerabilities. When weighing these two options against each other, many organizations choose to gamble, leaving code issues unfixed and hoping no one exploits the vulnerabilities.

Many organizations come to realize that the cost of identifying the vulnerabilities often pales in comparison to that of actually fixing issues. This is especially true when vulnerabilities are found (not early in the design or testing phases but rather) after an application is already in production. In these situations, an organization usually decides that it is just too expensive to recode the application.

Externally Developed Applications

If a vulnerability is identified within an externally developed web application (either commercial or open source), the user most likely will be unable to modify the source code. In this situation, the user is essentially at the mercy of vendors, because he or she must wait for official patches to be released. Vendors usually have rigid patch release dates, which means an officially supported patch may be unavailable for an extended period of time.

Even in a situation where an official patch is available, or a source code fix could be applied, the normal patching process of most organizations is extremely time-consuming. This is usually due to the extensive regression testing required after code changes. It is not uncommon for these testing gates to be measured in weeks and months.

Another common scenario is when an organization is using a commercial application and the vendor has gone out of business, or it is using a version that the vendor no longer supports. In these situations, legacy application code can’t be patched. A common reason for an organization to use outdated vendor code is that in-house custom-coded functionality has been added to the original vendor code. This functionality is often tied to a mission-critical business application, and prior upgrade attempts may have broken it.

Virtual Patching

The term virtual patching was coined by intrusion prevention system (IPS) vendors a number of years ago. It is not application-specific and can be applied to other protocols, but today it is most often associated with web application firewalls (WAFs). Virtual patching is a security policy enforcement layer that prevents the exploitation of a known vulnerability.

The virtual patch works because the security enforcement layer analyzes transactions and intercepts attacks in transit, so malicious traffic never reaches the web application. The result is that the application’s source code is not modified, and the exploitation attempt does not succeed.

Virtual patching’s aim is to reduce the exposed attack surface of the vulnerability. Depending on the vulnerability type, it may or may not be possible to completely remediate the flaw; for more complicated flaws, the best a virtual patch can do is identify if or when someone attempts to exploit the flaw. The main advantage of a virtual patch is the speed of risk reduction: it provides quick mitigation until a more complete source code fix is pushed into production.
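Conceptually, a virtual patch is a filter sitting in front of the application. A minimal sketch, where the rule name, parameter and pattern are hypothetical stand-ins for a rule targeting one known SQL-injection flaw:

```python
import re

# Hypothetical virtual patch: the rule name, parameter and pattern
# are illustrative, standing in for a rule targeting one known flaw.
VIRTUAL_PATCHES = [
    ("CVE-XXXX-sqli-id", "id",
     re.compile(r"('|--|\bunion\b|\bselect\b)", re.I)),
]

def inspect_request(params):
    """Return the first virtual-patch rule a request violates, or
    None if the request may be passed through to the application."""
    for name, param, pattern in VIRTUAL_PATCHES:
        if pattern.search(params.get(param, "")):
            return name          # blocked in transit; app never sees it
    return None

print(inspect_request({"id": "42"}))                        # None
print(inspect_request({"id": "42 union select password"}))  # CVE-XXXX-sqli-id
```

Because the rule inspects traffic in transit, the application’s source code is untouched, which is exactly the trade-off described above: fast risk reduction, but narrow protection scoped to the one flaw the rule targets.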

The use of virtual patching in your remediation strategy has many benefits but it shouldn’t be used as a replacement for fixing vulnerabilities in the source code. Virtual patching is an operational security process used as a temporary mitigation option.

It can be compared to military battlefield triage. When Marines, Soldiers, Sailors or Airmen are injured in combat, Corpsmen or Medics (and sometimes their buddies) attend to them quickly. Their purpose is to treat the injury, stabilize the subject and keep the subject alive until the subject can be transported to a full medical facility for comprehensive care. In this analogy the Corpsman or Medic is the virtual patch. If your web application has a vulnerability, you need to take the application to the “hospital” and have the developers fix the root cause. You wouldn’t send your troops into battle without medical support. The medical staff serves an important purpose on the battlefield, and the virtual patch serves an important purpose in your web production environment.