Saturday, January 10, 2009

The All New Polished Vista, Introducing "The Windows Mista"

What's all the excitement behind Windows 7? Do you remember this:


Ballmer: Windows 7 is Vista, just 'a lot better' [link]


"Windows Vista is good, Windows 7 is Windows Vista with clean-up in user interface [and] improvements in performance," Ballmer said. [link]


The event helped me to crystallize my thoughts. So, to crystallize them... is it an UPGRADE or an UPDATE?? Microsoft prefers to call it a MAJOR UPGRADE to SELL MORE and, of course, to get people to finally move on from Windows XP. For Vista users it can be called a MINOR UPGRADE, or better yet it should be an UPDATE, or to put it more accurately, a service pack.


For Microsoft it's a fairly significant upgrade, but for Vista users it's not an overhaul of the operating system, rather a significant update. But let's put a question to Mr. Ballmer: are they planning to sell Windows 7 to UPGRADERS and offer it as a free service-pack download for UPDATERS??


If you don't understand, I can't explain it to you any better... the OS was not built for the user, it was built for Microsoft to make more $$ at your expense... get used to paying for the true definition of "slack"ware... it should be called Microsoft Mista. They Missed another chance to do the right thing and make it secure.


Isn't it just the same chocolate with new packaging and a new brand name?


"Polish doesn't change quartz into a diamond"

Wednesday, January 07, 2009

VMware Consolidated Backup Design Preparation and Understanding for Backup Administrators

While I am working on designing a Virtual Infrastructure Solution, I thought of penning down a few lessons learned for my future reference as well for consultants who are planning to design a similar solution. Backup is one important area to be considered. One of the advantages of purchasing VMware Infrastructure Enterprise (VI 3.5) is that along with the flagship ESX hypervisor there are additional licensed features and products included that are necessary to create business continuity for virtual machines (VMs). VMware Consolidated Backup (VCB) is one of these products. Often misunderstood as the complete answer for a virtual data center, VCB requires some preparation and understanding for backup administrators currently used to the traditional physical enterprise backup solution.

VCB is not the entire backup solution for virtual infrastructure
It is very rare that VCB allows administrators to completely remove all backup agents from virtualized servers. This is because VMware Consolidated Backup does not:
  • Perform specialized application backups (like Microsoft Exchange Information Store or Windows Server System State)
  • Perform file-level backups of non-Windows VMs
  • Provide management, cataloging or archiving of backup files
  • Provide direct file restores to virtual machines
VCB is a framework of scripts that needs to be integrated with a third-party backup application to provide these features.

VCB should be installed on a dedicated Windows Server
It is recommended VCB be installed on its own server. Also known as the VCB Proxy Server, this system has the following requirements:
  • Microsoft Windows Server 2003 Service Pack 1 (32‐bit or 64‐bit) or higher
  • Media repository managed by the third-party backup application's management server
  • The same storage protocol access as the ESX hosts to the VMFS LUNs where the VMs are stored (e.g., host bus adapters (HBAs) for access to Fibre Channel storage or initiator configuration for iSCSI storage). Depending on the version of Windows Server used, automatic partition mounting will have to be disabled before attaching the VCB server to the VMFS LUNs
  • Dedicated disk storage for the VCB Holding Tank where backup and restore files are written
  • Third-party backup agent

VCB needs a large disk volume for a Holding Tank
Along with the shared access to the ESX LUNs, VCB also needs a large disk volume formatted as NTFS, which will become the Holding Tank for backup images. This volume can be on the SAN or the local VCB server's disks. The Holding Tank volume is where full VM images are placed both during backups and restores.

Therefore, the size of the Holding Tank is critical in the design. For example, if a virtual infrastructure consists of VMs that take up 1 TB of disk space and the expectation is that a full VM backup is to be taken nightly, then the Holding Tank volume needs to be large enough to support 1 TB of backups. Another scenario would be to alternate groups of full VM backups in order to decrease the required size of the volume. In this case, administrators still need to make sure the Holding Tank is large enough to hold the VM using the most disk space.
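As a back-of-the-envelope check, the sizing logic above can be sketched in a few lines of Python. The function name and the VM sizes are made up for illustration; this is not a VMware tool, just the arithmetic behind the two scenarios described:

```python
# Rough Holding Tank sizing sketch (hypothetical numbers, not VMware guidance):
# the volume must hold every full VM image staged during one backup window.

def holding_tank_gb(vm_sizes_gb, groups=1):
    """Return the minimum Holding Tank size in GB.

    vm_sizes_gb -- allocated disk size of each VM to be backed up
    groups      -- number of alternating backup groups; with one group,
                   every VM image is staged in the same nightly window.
    """
    if groups <= 1:
        return sum(vm_sizes_gb)              # all images staged at once
    # Greedily pack VMs into groups and size for the largest group.
    buckets = [0] * groups
    for size in sorted(vm_sizes_gb, reverse=True):
        buckets[buckets.index(min(buckets))] += size
    return max(buckets)

vms = [100, 250, 400, 250]                   # four VMs totalling 1 TB
print(holding_tank_gb(vms))                  # nightly full of everything -> 1000
print(holding_tank_gb(vms, groups=2))        # alternating halves -> 500
```

Note that even with alternating groups, the volume can never be smaller than the largest single VM, which matches the rule of thumb above.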


The role of the third-party backup agent
The third-party backup application does the actual backing up and management of the files. Once VCB copies a VM image to the Holding Tank it is then up to the third-party backup application to move those files to whatever media repository is in use. It is also the function of the agent to clear out the Holding Tank so that the next scheduled job has available disk space to complete.

In the case of file-level backups, VCB also mounts the copied VM image (like a thumb drive) so that the backup agent can see the VM's file system. The backup agent can then perform full, incremental or differential file-level backups to the media repository. In some scenarios, the single agent on the VCB server can replace the multiple agents on the VMs.
VMware maintains a compatibility guide for supported third-party backup applications. Many of these supported applications have VCB integration modules that coordinate the scheduling of the VCB scripts and the agent backup from within the application's GUI.


Understanding VCB restore jobs
Restoring files leverages the third-party backup agent's ability to move files from the media repository back to the Holding Tank. Once the VM image is back, it can be copied in full to a VMFS volume or mounted like a thumb drive again so that individual files can be restored. An administrator must manually copy files to the restore location in both scenarios.
VMware Converter, most often used to migrate physical servers to virtual machines, can also create VMs from VCB images. Therefore, VMware Converter can be a more effective full VM restore tool in some cases. Check out VMware's Virtual Machine Backup Guide for more detailed information on implementing VCB.

Thursday, September 04, 2008

Security "Best" Practices

Do you think that just following security best practices will keep you and your users safe? Think again.

Recently, I've found two examples where following security best practices can actually expose you to security vulnerabilities if you don't think them through.


Example no. 1 - NoScript


Everyone who uses Firefox and is concerned about security and privacy uses NoScript. Unfortunately, for customers of the PhishMe.com service, using NoScript will actually expose their private login credentials.


According to an eWeek article: "PhishMe, a new security SAAS offering from the Intrepidus Group, enables companies to launch mock phishing attacks against their own employees in the name of improving e-mail security...PhishMe does not collect sensitive information...JavaScript on the Web site overrides anything users actually input into fields during tests."


So, basically, using NoScript disables the JavaScript that would have scrubbed the input, and the user's sensitive information actually gets sent over the network.


Now, both of the teams here play fair in this game. Intrepidus Group follows a kind of privacy best practice by changing the HTML form so it does not send the user's private information over the network, and NoScript does its own security best practice by disabling JavaScript on an unknown website.


But put together: the PhishMe.com service tries to phish users' credentials using pages that are not in the trusted domain, NoScript then disables the JavaScript on the fake phishing page, and the users caught by the fake phishing attack end up exposing their real credentials.


Example no. 2 - Plain Text Emails


From "forgot my password" to "Johnny Depp wants to be added to your friends list", many services today send notification emails to their users. Security best practices wave a big "no, no" at HTML emails and suggest that you read your email messages in plain text. Some services already do the job for you and send their messages in plain text.


Unfortunately, what most of those services forget is that in a plain-text email, text which begins with either a URL protocol handler (e.g. http://, https://, etc.) or "www." is automatically turned into a clickable link by most, if not all, mail clients.


This becomes a big issue when the plain-text message contains user-generated content. The exact problem is described in an advisory on the TwitPwn website.


Twitter sends its users a notification each and every time another user starts following them on Twitter. This email contains the following template:


Hi, *Your full name*.

*Follower's full name* (*Follower's username*) is now following your updates on Twitter.

Check out *Follower's username*'s profile here:

http://twitter.com/*Follower's username*

You may follow *Follower's username* as well by clicking on the "follow" button.

Best,

Twitter

Now, both the follower's username and full name can be altered by the attacker, as they are saved in his own profile. The username was restricted to alphanumeric characters and therefore could not be used for the attack. But the full name was only restricted in size, to around 25 characters, which is enough to hold the attacker's malicious http://www.evil.com link. All the attacker had to do was run a bot which automatically followed people, then wait for the victims to click on the links in the mails sent by Twitter.


This vulnerability has been fixed by Twitter, and now you cannot use the dot character in the full name.
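A sanitization check along these lines could have caught the problem on the service side. This is a hypothetical sketch (the function name, pattern, and 25-character limit are my own illustration, not Twitter's actual fix): reject any display name that a mail client would auto-link.

```python
import re

# Tokens a typical mail client auto-links in plain text:
# URL protocol handlers or a "www." prefix.
LINKIFIABLE = re.compile(r'(?:https?://|ftp://|www\.)\S+', re.IGNORECASE)

def is_safe_display_name(name: str, max_len: int = 25) -> bool:
    """Reject display names that would render as clickable links
    inside a plain-text notification email."""
    return len(name) <= max_len and not LINKIFIABLE.search(name)

print(is_safe_display_name("Johnny Depp"))          # True
print(is_safe_display_name("www.evil.com"))         # False
print(is_safe_display_name("see http://evil.com"))  # False
```

Blocking the dot character, as Twitter did, is a blunter version of the same idea: without a dot, neither "www." nor a domain name can form.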


Conclusion


This post was not intended to get people to stop following security "best" practices. On the contrary, I encourage you all to follow them. All I'm saying is that following these and other security "best" practices will not make you and your users bulletproof. You still need to be careful and think about other attack vectors too...

Friday, August 01, 2008

Enterprise organizations must patch the Kaminsky DNS flaw NOW!!!

If you haven't heard about the current DNS vulnerability, here is a Reader's Digest-like summary. Security guru Dan Kaminsky found a vulnerability that could give the bad guys a relatively easy way to redirect Internet traffic. For example: You might think you are logging on to Bank of America's Web site. But instead, some hacker may have just exploited a domain name system vulnerability and is now in control of your identity.

Kaminsky deserves credit for finding this flaw and alerting the Internet community so it could fix the problem. This effort is well under way, but according to an article in yesterday's New York Times, Kaminsky believes that 41 percent of all DNS servers are still vulnerable, meaning that no one has patched these systems with new software that closes this gaping security hole.

The danger here is that most of the world will shrug its collective shoulders and dismiss this as a technology problem. The truth is that this is the Internet equivalent of the bridge collapse on Interstate 35W in Minneapolis. That disaster demonstrated that a critical piece of infrastructure was badly in need of repair. Unfortunately, the same is true of DNS, a critical but rickety technology.

Clearly the folks who control most of the Internet infrastructure get this. Comcast and Verizon have already patched their DNS servers, while AT&T is in the process of doing so. Great, but what about all of the companies with a large Internet presence? This is where the Internet may be most vulnerable, folks. According to ESG Research, 48 percent of large organizations (i.e. 1,000 employees or more) experienced at least one DNS outage in the past 12 months. What's more, 42 percent of these companies consider patching and upgrading DNS a time-consuming operational process. Given these statistics, my guess is that a lot of enterprises believe that the DNS problem doesn't really impact them, that it is really an Internet infrastructure problem. This is a misguided and dangerous perspective.

DNS anchors all Internet communications, thus it should be considered critical infrastructure. It's time that enterprise organizations realized this and started treating it accordingly. Hopefully Kaminsky's discovery will act as a change agent to fix the problem. Otherwise, we could all be in trouble.

Saturday, June 02, 2007

Basic RAID Levels defined

The various RAID types used in the storage world are defined by Level numbers. At the basic level, we have RAID Level 0 through 6. We also have various composite RAID types comprised of multiple RAID levels. Note that people often drop the word “Level” when referring to RAID types and this has become an accepted practice. Also note that even though same-sized hard drives are not technically required, RAID normally uses hard drives of similar size. Any implementation that uses different sized hard drives will result in wasted capacity.

RAID Level 0:
RAID Level 0 is the cluster-level implementation of data striping and it is the only RAID type that doesn’t care about fault tolerance. Clusters can vary in size and are user-definable but they are typically blocks of 64 thousand bytes. The clusters are evenly distributed across multiple hard drives. It’s used by people who don’t care about data integrity if a single drive fails. This RAID type is sometimes used by video editing professionals who are only using the drive as a temporary work space. It’s also used by some PC enthusiasts who want maximum throughput and capacity.

RAID Level 1:
RAID Level 1 is the pure implementation of data mirroring. In a nutshell RAID Level 1 gives you fault tolerance but it cuts your usable capacity in half and it offers excellent throughput and I/O performance. This RAID level is often used in servers for the system partition for enhanced reliability but PC enthusiasts can also get a nice performance boost from RAID Level 1. Using multiple independent RAID Level 1 volumes can offer the best performance for database storage.

RAID Level 2:
RAID Level 2 is a bit-level implementation of data striping with error correction. The bits are evenly distributed across multiple hard drives, and one or more dedicated drives store Hamming-code ECC data rather than simple parity, so the capacity overhead is typically more than a single drive. It's interesting to note that this RAID level is almost forgotten and is very rarely used.

RAID Level 3:
RAID Level 3 is a byte-level implementation of data striping with parity. The bytes are evenly distributed across multiple hard drives and one of the drives in the RAID is designated to store parity. Out of an array with “N” number of drives, the total capacity is equal to the sum of “N-1″ hard drives. For example, an array with 4 equal sized hard drives will have the combined capacity of 3 hard drives. This RAID level is not so commonly used and is rarely supported.

RAID Level 4:
RAID Level 4 is a cluster-level implementation of data striping with parity. Clusters can vary in size and are user-definable but they are typically blocks of 64 thousand bytes. The clusters are evenly distributed across multiple hard drives and one of the drives in the RAID is designated to store parity. Out of an array with “N” number of drives, the total capacity is equal to the sum of “N-1″ hard drives. For example, an array with 8 equal sized hard drives will have the combined capacity of 7 hard drives. This RAID level is not so commonly used and is rarely supported.

RAID Level 5:
RAID Level 5 is a cluster-level implementation of data striping with DISTRIBUTED parity for enhanced performance. Clusters can vary in size and are user-definable but they are typically blocks of 64 thousand bytes. The clusters and parity are evenly distributed across multiple hard drives and this provides better performance than using a single drive for parity. Out of an array with “N” number of drives, the total capacity is equal to the sum of “N-1″ hard drives. For example, an array with 7 equal sized hard drives will have the combined capacity of 6 hard drives. This is the most common implementation of data striping with parity.

RAID Level 6:
RAID Level 6 is a cluster-level implementation of data striping with DUAL distributed parity for enhanced fault tolerance. It’s very similar to RAID Level 5 but it uses the equivalent capacity of two hard drives to store parity. RAID Level 6 is used in high-end RAID systems but it’s slowly becoming more common as the technology becomes commoditized. Dual parity allows ANY two hard drives in the array to fail without data loss, which is unique among the basic RAID types. If a drive fails in a RAID Level 5 array, you had better hope there is a hot spare that quickly restores the array to a healthy state within a few hours, and that you don’t get a second failure during that recovery window. RAID Level 6 tolerates that second drive failure during recovery and is considered the ultimate RAID Level for fault tolerance. Out of an array with “N” number of drives, the total capacity is equal to the sum of “N-2″ hard drives. For example, an array with 8 equal sized hard drives will have the combined capacity of 6 hard drives.
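The capacity rules for the levels above can be collected into one small helper. This is a sketch covering the commonly used levels only (Level 2 is omitted since its ECC overhead doesn't fit the simple N-1 formula), with capacity expressed in drive-equivalents as the article does:

```python
def usable_drives(level: int, n: int) -> int:
    """Usable capacity of an array of n equal-sized drives,
    expressed as a number of drives' worth of space."""
    if level == 0:
        return n          # pure striping, no redundancy
    if level == 1:
        return n // 2     # mirroring halves capacity
    if level in (3, 4, 5):
        return n - 1      # one drive's worth of parity
    if level == 6:
        return n - 2      # two drives' worth of parity
    raise ValueError("level not covered by this sketch")

print(usable_drives(5, 7))   # 6, matching the RAID 5 example above
print(usable_drives(6, 8))   # 6, matching the RAID 6 example above
```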

RAID Level 10 (composite of 1 and 0):
RAID Level 10 (sometimes called 1+0) is probably the most common composite RAID type used on the market both in the server and home/enthusiast market. For example, there are plenty of cheap consumer-grade RAID controllers that might support RAID Level 0, 1, and 10 that don’t support Level 5. The most common and recommended implementation of mirroring and striping is that mirroring is done before striping. This provides better fault tolerance because it can statistically survive more often with multiple drive failures and performance isn’t degraded as much when a single drive has failed in the array. RAID Level 0+1 which does striping before mirroring is considered an inferior form of RAID and is not recommended. RAID Level 10 is very commonly used in database applications because it provides good I/O performance when the application can’t distribute its own data across multiple storage volumes. But when the application knows how to evenly distribute data across multiple volumes, independent pairs of RAID Level 1 provides superior performance.

RAID storage

Since I’ve been doing a lot of work on storage technology both for the SABIC and for my home lately, I thought I should write an explanation of what RAID storage is. I won’t go in to every RAID type under the sun, I just want to cover the basic types of RAID and what the benefits and tradeoffs are.

RAID was originally defined as a Redundant Array of Inexpensive Disks, but RAID setups were traditionally very expensive, so the “I” came to stand for Independent. The costs have recently come down significantly because of commoditization, and RAID features are now embedded on most higher-end motherboards. Storage RAID was primarily designed to improve fault tolerance, offer better performance, and simplify management by presenting multiple hard drives as a single storage volume. Before we start talking about the different RAID types, I’m going to define some basic concepts first.

Fault tolerance defined:
Basic fault tolerance in the world of storage means your data is intact even if one or more hard drives fail. Some of the more expensive RAID types permit multiple hard drive failures without loss of data. There are also more advanced forms of fault tolerance in the enterprise storage world called path redundancy (AKA multi-path), which allow different storage controllers and the connections to the hard drives to fail without loss of service. Path redundancy isn’t considered a RAID technology but it is a form of storage fault tolerance.

Storage performance defined:
There are two basic metrics of performance in the world of storage. They are I/O performance and throughput. In general, read performance is more valued than write performance because storage devices spend the majority of their time reading data. I/O (Input/Output) performance is the measure of how many small random read/write requests can be processed in a single second and it is very important in the server world, especially database type applications. IOPS (I/O per second) is the common unit of measurement for I/O performance.

Throughput is the measurement of how much data can be read or written in a single second and it is important in certain server applications and very desirable for home use. Throughput is typically measured in MB/sec (megabytes transferred per second) though mbps (megabits per second) is sometimes also used to describe storage communication speeds. There is sometimes confusion between megabits versus megabytes since they sound alike. For example, 100 megabit FastEthernet might sound faster than a typical hard drive that gets 70 MB/sec but this would be like thinking that 100 ounces weighs more than 70 pounds. In reality, the hard drive is much faster because 70 MB/sec is equivalent to 560 mbps.
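The megabits-versus-megabytes confusion above is resolved by a single factor of eight, shown here as a trivial sketch:

```python
def mbytes_to_mbits(mb_per_s: float) -> float:
    """Convert throughput in MB/sec to megabits/sec (8 bits per byte)."""
    return mb_per_s * 8

# The hard drive from the example above easily outruns 100 mbps Fast Ethernet:
print(mbytes_to_mbits(70))   # 560.0
```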

RAID techniques defined:
There are three fundamental RAID techniques and the various RAID types can use one or more of these techniques. The three fundamental techniques are:

  • Mirroring
  • Striping
  • Striping with parity

Mirroring:
Data mirroring stores the same data across two hard drives, which provides redundancy and read speed. It’s redundant because if a single drive fails, the other drive still has the data. It’s great for read I/O performance and read throughput because it can independently process two read requests at the same time. In a well-implemented RAID controller that uses mirroring, the read IOPS and read throughput (for two tasks) can be twice that of a single drive. Write IOPS and write throughput aren’t any faster than a single hard drive because writes can’t be processed independently: data must be written to both hard drives at the same time. The downside to mirroring is that your usable capacity is only half of the total capacity of all your hard drives, so it’s expensive.

Striping:
Data striping distributes data across multiple hard drives. Striping scales very well on read and write throughput for single tasks but it has less read throughput than data mirroring when processing multiple tasks. A good RAID controller can produce single-task read/write throughput equal to the total throughput of all the individual drives. Striping also produces better read and write IOPS, though it’s not as effective on read IOPS as data mirroring. You also get a large consolidated drive volume equal to the total capacity of all the drives in the RAID array. Striping is rarely used by itself because it provides zero fault tolerance: a single drive failure destroys not only the data on that drive but the entire RAID array. Striping is often used in conjunction with data mirroring or with parity.

Striping with parity:
Because striping alone provides no fault tolerance, striping with parity solves the reliability problem at the expense of some capacity and a big hit to write IOPS and write throughput compared to plain data striping. Data is striped across multiple hard drives just like normal data striping, but parity is generated and stored on one or more hard drives. Parity data allows a RAID volume to be reconstructed if one (sometimes two) hard drives fail within the array. Generating parity can be done in the RAID controller hardware or in software (at the driver level, OS level, or in an add-on volume manager) using the general-purpose processor. The hardware method results in an expensive RAID controller and/or poor throughput performance. The software method is computationally expensive, though that’s no longer a problem with fast multi-core processors. Despite the performance and capacity penalty, parity uses far less capacity than data mirroring while still providing drive fault tolerance, making this a very cost-effective form of reliable large-capacity storage.
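The parity idea itself is plain XOR arithmetic. Here is a minimal sketch (not how a real controller lays out stripes): XOR the data blocks together to produce the parity block, then rebuild any single lost block by XOR-ing the survivors with the parity.

```python
# Minimal striping-with-parity sketch: parity = XOR of all data blocks,
# and any one missing block = XOR of the remaining blocks plus parity.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(stripe)            # stored on a fourth drive

lost = stripe[1]                       # say drive 2 fails
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == lost)                 # True
```

This is also why write performance suffers: every small write forces the parity block to be recomputed and rewritten along with the data.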

Wednesday, May 31, 2006

Intel follows AMD's lead again?

Don't you just love provocative headlines like that? The first thing that pops to mind might have been "Ah, he's talking about the new dual-core CPUs that Intel is shipping" or "Oh, that's probably something to do with lower-power CPUs for laptops."

Nope. In this case, I'm talking about Intel following AMD's lead in abandoning 386 and 486 CPUs. AMD abandoned its 486 and 586 line back in 2002. Intel recently announced that it would no longer produce the 386, 486, and certain other embedded processors after September 2007.

Although neither an earth-shattering announcement nor one that will probably shock the computer industry, it's interesting from a couple of angles. First, there's the whole history and End-Of-An-Era thing that the production end means. Secondly, there's the "wow" factor that Intel has still been able to sell these CPU classes 15 years after their peak popularity.

From the historical perspective, the 386-class CPUs changed the entire PC industry. Back before the first George Bush was in office, the 386 CPU ushered in the era of 32-bit computing that we're only now starting to see the end of. By adopting the 386 before IBM, Compaq put itself on the map and led to the overthrow of the entire IBM PC empire. Mated with Windows 3.1 and then Windows 95, the 386/486 securely placed Microsoft at the top of the software industry.

People have long since forgotten about those CPUs. Before the turn of the century, the Pentium line and its successors had made the 386 and 486 essentially obsolete. Yet the chips have lived on as embedded processors, still crunching data bits inside devices such as network controllers and data acquisition devices.

So, even though Intel will still be cranking out the creaky silicon for another year, the end is on the horizon. While we're pounding away on dual-core processors and looking forward to 64-bit and quad-core processors for everyday use, all we can say is "The King is Dead! Long Live the King!"

Wednesday, February 01, 2006

Danish Apology??

The row about the outrageous cartoons of our beloved Prophet Muhammad (peace be upon him) published first in Denmark and then in Norway has taken a new turn with the semblance of an apology from the editor of the offending newspaper. Danish Muslims say that they accept the apology — but it is unlikely to take the sting out of the situation or ease the hurt and resentment that millions of Muslims feel. Carsten Juste’s apology is disingenuous. He says that the cartoons were not intended to offend. The depictions of the Prophet as a terrorist were clearly intended to offend. How could they do otherwise? Juste makes it clear he thinks there was nothing intrinsically wrong with the cartoons; he is apologizing purely because Muslims took offense. “That’s what we’re apologizing for.” A very backhanded apology.

Juste then insists that what he did was perfectly legal. There are many things which are legal, but that does not make them right. Worse, he says he still does not regret publishing the cartoons. Does he not regret doing something that has done immense damage to Danish-Muslim relations? That has resulted in a boycott of Danish goods across the Muslim World? That has probably put Danish troops in Iraq at unique risk from Al-Qaeda? Juste’s bosses should dismiss him for his lack of judgment and the damage it has done to Denmark, politically and economically and all out of spite. That would go some way to calming the situation. Danish Prime Minister Anders Fogh Rasmussen has to act as well. His categorical refusal to apologize on the issue because it would be against the laws on freedom of speech is just as disingenuous. It would be perfectly within the bounds of political propriety for him to say how appalled he was and that Juste should go. That would come well within the bounds of his freedom of speech.

In any event, if it does go against the law, the answer is simple: Change the law. Follow the British example: Outlaw religious hatred. Once Prime Minister Tony Blair gets a new religious hatred bill through Parliament, it will be a criminal offense in the UK to publish cartoons like the Danish ones. No one can say that the UK is any less committed to freedom of speech than Denmark. But Blair understands there are limits to freedom of speech, just as there are to freedom of action; people do not have the right to stir up riots and racial hatred, encourage mass hysteria or heap abuse on religion any more than they do to rob, rape, cheat or kill.

Were Prime Minister Rasmussen to follow Blair’s lead and introduce a similar law in Denmark, Muslim anger would vanish, not least because the UK bill, although protecting all religions from attack, is in fact designed specifically with Islam in mind. Rasmussen has to think about Denmark’s political and economic interests. But he needs to realize what Tony Blair has realized — that Muslims are now an integral part of his country and that to attack their faith is to attack them. Freedom of speech has to be balanced by the freedom not to have one’s faith abused and ridiculed.

Microsoft's OneCare Has Holes

Microsoft's OneCare service has holes.

Microsoft's OneCare (http://www.windowsonecare.com) is a beta service that attempts to be an encompassing security product/service to protect an end-user's PC. Among several things, it provides antivirus and firewall services and policy configuration.

Anyway, I have found the following issues with the service:

1. Any program using the JVM can bypass any OneCare firewall restriction.

2. Any signed program will automatically bypass any firewall restriction.

Both of these issues are a concern to security people. Any blanket security bypass rule is a bad idea. It just invites malicious hackers and other malware goons to exploit it. These settings, if they hold past the beta period, are especially troubling in light of the success that spyware and adware vendors have been having. They already routinely use signed controls to install themselves onto users' PCs, and they will certainly continue to use them to bypass this service.

Deny by default is a good rule of thumb. Allow by default never is. I applaud Microsoft trying to give consumers yet another way to protect their PCs, but blanket security bypass rules aren't part of the solution.

IE7 beta: Read the fine print

Now, we're all used to just clicking "next" when we install new software.

That's a bad idea in general. First, many software programs today have incredibly invasive default settings that give the software permission to do everything short of auditing your tax returns. Second, these days it is probably a good idea to skim the details and know how your private information will or won't be handed over to any government that asks to take a peek.

In that spirit, I gave a quick read of the end-user license agreement that accompanies the public beta of Internet Explorer 7. The IE 7 license agreement is both humorous and disturbing.

Funny is the introduction, which begins by thanking the user for "choosing Microsoft."

"Everyone on the IE team (even the lawyers who reviewed the license terms below) wants to make your web browsing experience safer and easier," the agreement says.

From there, it gets pretty unfunny pretty quick.

Users agree that they will only use the software on a properly licensed Windows XP machine with Service Pack 2. (A couple of steps later, Microsoft requires Windows Genuine Advantage validation to confirm this.)

Even more ominous are Microsoft's warnings and limitations.

"The software is licensed 'as-is,'" Microsoft warns in all-capital letters. "Services and information are provided 'as-is' and 'as available.' You bear the risk of using them."

But you really shouldn't be using the software on a real PC anyway.

"You may not test the software in a live operating environment unless Microsoft permits you to do so under another agreement."

Roughly translated, that means anyone can download the new browser, but most of us aren't supposed to be using it, apparently.

In the spirit of living dangerously, this blog was written in IE 7 (for test purposes only, of course).
