Commercial Windows Azure is now live, along with cloud-based storage

Late yesterday afternoon, right on schedule, Microsoft announced the "general availability" of Windows Azure, its cloud-based hosting service for .NET applications. For a release like this, the "GA" label is somewhat peculiar, especially since the service has actually been in operation for several months. But it does mean that a ribbon has been cut: from now on, new accounts are signed up as commercial licenses, and old accounts are being warned to either convert or jump ship.

Prospective Azure customers can still experiment with a limited amount of compute time, storage, and transaction bandwidth at no charge, but only for a limited time. From now until July 31, testers are allowed free access to 25 hours of a small compute instance with 500 MB of storage and 10,000 storage transactions, plus one SQL Azure database (free only for the first three months). Usage above those levels will be charged at the regular rates: $0.12 per compute hour, $0.15 per GB per month for storage, and $9.99 per month per 1 GB database.

MSDN Premium subscribers are being offered an eight-month promotional trial: 750 hours of Azure compute, 10 GB of storage, and three SQL Azure databases, with data transfer capped at 7 GB in and 14 GB out. Usage over and above those levels will be charged to subscribers, but at slightly reduced rates: $0.114 per hour for small compute instances, scaling up to $0.912 per hour for extra-large instances, plus $0.15 per GB per month for storage and $9.49 per month per 1 GB database.
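To get a feel for how those rates add up, here is a quick back-of-the-envelope calculation in C#. The rates are the pay-as-you-go figures quoted above; the usage numbers are hypothetical, purely to show the arithmetic.

    // Back-of-the-envelope estimate using the pay-as-you-go rates above.
    // The usage figures are hypothetical examples, not real consumption data.
    using System;

    class AzureBillEstimator
    {
        const decimal SmallComputePerHour = 0.12m;  // dollars per compute hour
        const decimal StoragePerGBMonth   = 0.15m;  // dollars per GB per month
        const decimal SqlAzurePerDbMonth  = 9.99m;  // dollars per 1 GB database per month

        static void Main()
        {
            decimal computeHours = 24 * 30;  // one small instance, all month
            decimal storageGB = 5;           // blob storage consumed
            int databases = 1;               // 1 GB SQL Azure databases

            decimal total = computeHours * SmallComputePerHour
                          + storageGB * StoragePerGBMonth
                          + databases * SqlAzurePerDbMonth;

            // 720 x $0.12 + 5 x $0.15 + 1 x $9.99 = $97.14
            Console.WriteLine("Estimated monthly charge: {0:C}", total);
        }
    }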

The surprise came from Microsoft this morning. Up until now, Azure developers have had to access stored data as blobs (binary large objects), which makes perfect sense for apps running on a very large-scale platform. But such apps would likely have to be designed for that platform to begin with, which makes the whole notion of transporting existing .NET apps to the cloud seem a lot less seamless.

Microsoft introduced developers to something it called "Xdrive" last November at PDC: a way for at least some .NET code written for conventional networks to be moved to the cloud with minimal changes, if any. Under this system, stored data resources can continue to be addressed using the familiar single-letter drive names (X:\) employed by everyday NTFS. It took a reworking of the blob concept to make this possible, called the page blob; and while that name reminds me of a great Federal Express commercial from the 1970s, its purpose is to create a blob whose characteristics are compatible with the page structure that non-cloud applications use when storing data.

"The Page Blob can be mounted as a drive only within the Windows Azure cloud, where all non-buffered/flushed NTFS writes are made durable to the drive (Page Blob)," reads a white paper released this week by Brad Calder and Andrew Edwards of the Azure Storage team. "If the application using the drive crashes, the data remains persistent via the Page Blob, and can be remounted when the application instance is restarted or remounted elsewhere for a different application instance to use. Since the drive is an NTFS formatted Page Blob, you can also use the standard blob interfaces to upload and download your NTFS VHDs to the cloud."

Of course, Microsoft couldn't keep the cool name, so "Xdrive" is now being called "Windows Azure Drive." The company won't bill for it as a device, the way it does for SQL Azure databases, but it will charge for the storage space consumed and the read/write transactions performed. Since page blobs are limited to 1 TB in size, Azure Drives are also capped at 1 TB.

Employing an Azure Drive is not completely seamless from a .NET developer's perspective; you don't just mount the X:\ drive, load up your old code, and have it run. The drive's creation, mounting, unmounting, copying, and snapshot-taking (for backups and rollbacks) still require Azure-specific code. But that code may reside in procedures that are separated from the .NET code that performs the actual transactions with the drive, so at least a sizable chunk of some .NET applications no longer needs to be rewritten.
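Here is a minimal sketch of that separation, assuming the CloudDrive class from the SDK's cloud drive assembly (names as I recall them, and a hypothetical page blob URI): the first method is the Azure-specific plumbing, while the second is ordinary .NET file I/O that only sees a drive path.

    // Azure-specific plumbing (create and mount the drive) kept separate from
    // the ordinary .NET file I/O that uses it. Based on the SDK's CloudDrive
    // API as I recall it; the page blob URI below is hypothetical.
    using System;
    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;  // CloudDrive, CreateCloudDrive()

    class DriveHost
    {
        // Azure-specific: runs only inside a role instance (or the dev fabric).
        static string MountDataDrive(CloudStorageAccount account)
        {
            CloudDrive drive = account.CreateCloudDrive(
                "http://myaccount.blob.core.windows.net/drives/appdata.vhd");

            try { drive.Create(1024); }              // size in MB
            catch (CloudDriveException) { }          // drive already exists: fine

            // Returns a path such as "X:\" that ordinary file APIs can use.
            return drive.Mount(0, DriveMountOptions.None);
        }

        // Plain .NET: nothing Azure-specific beyond the path it is handed.
        static void AppendLog(string drivePath, string message)
        {
            File.AppendAllText(Path.Combine(drivePath, "app.log"),
                               message + Environment.NewLine);
        }
    }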

Believe it or not, though, Azure Drive is not the surprise here. When Azure storage engineers created the page blob concept, they also worked out a way to cache the blob's contents on a local disk accessible to the Azure virtual machine. The trick takes data stored in the cloud and caches it, literally, in another part of the cloud, in a way that drives costs down.

"Caching the data on the local drive can reduce the read traffic to the page blob, which will lower the transaction cost," wrote Brad Calder this morning. "There is no additional charge for reads that are to the local disk cache. The transactions against the Page Blob will be counted towards billing. Note, even when the cache is enabled, all non-buffered and flushed writes are committed transactions to the Page Blob in durable storage."

Azure customers wishing to test Azure Drive, with or without the cool name, must sign up for the new February release of the Azure SDK, and upgrade their instances to run the new Guest OS 1.1.
