Wednesday, 24 February 2010

Azure Computing Usage, Metering etc. and how MS gets richer

As per the MS release, Azure usage is calculated based on the 'compute instance'. $0.12 per hour, it appears, is the rate for the Small compute instance size; for the other sizes, multiply that by the number of CPU cores:

    Instance size    Cores    Rate per hour
    Small            1        $0.12
    Medium           2        $0.24
    Large            4        $0.48
    Extra Large      8        $0.96

Caveats

Based on your requirements, you could go for the Small/Medium/Large/Extra Large instance size. Things sound good until we dig a bit deeper. There are a few weird bits about the compute metering:

1.) If your application instance has 10 roles (worker/web...), each role adds to the billed hours!

E.g.: if you have 10 roles active simultaneously for 1 hour, you are charged for 10 hours.

2.) There is no partial-hour calculation. Even if you had your application active in Azure for 5 minutes and then deleted it, you would end up paying the charge for 1 full hour! This also applies to the roles:

E.g.: if you have 10 roles active simultaneously for 5 minutes, you are charged for 10 hours (yes, 10 hours).

3.) The compute usage is not determined by the actual computing/processing usage. Even if your role/instance is idle for 50 minutes of the hour, you are still charged for the full hour! This means that as soon as your application is deployed, you get billed - regardless of its actual compute usage.


Simplifying it down, your usage charge for a month = number of roles in your instance * 24 (hours in a day) * 31 (days in a month) * Subscription-factor * $0.12 (the Small-instance hourly rate).


Where Subscription-factor = 1 for Small, 2 for Medium, 4 for Large, 8 for Extra Large.
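
To make that concrete, here is a minimal C# sketch of the formula (the method name and defaults are my own; the rates come from the table above - and per the update below, confirm any real figures with the Azure team):

    using System;

    class AzureCostEstimate
    {
        // Monthly charge = roles * 24 hrs * days * Subscription-factor * $0.12
        static decimal MonthlyCharge(int roleCount, decimal subscriptionFactor,
                                     int daysInMonth = 31,
                                     decimal smallHourlyRate = 0.12m)
        {
            return roleCount * 24 * daysInMonth * subscriptionFactor * smallHourlyRate;
        }

        static void Main()
        {
            // 10 Small roles, running round the clock for a 31-day month:
            Console.WriteLine(MonthlyCharge(10, 1));   // 892.80
            // 2 Extra Large roles:
            Console.WriteLine(MonthlyCharge(2, 8));    // 1428.48
        }
    }

And per caveat 2 above, even 10 Small roles alive for just 5 minutes would still bill 10 role-hours, i.e. $1.20.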


4.) Another interesting bit is the alignment to clock hours - billing appears to be per clock hour, so if you deploy your application at 3:50pm and take it down at 4:10pm, you pay for two full hours (3pm-4pm and 4pm-5pm).


5.) Are you a developer? There is no developer account/scheme yet wherein you could test out your application in Azure for free.


Anything that adds to the MS revenue is good (for them).




Update 15.05.2010
For the various queries raised in this regard, the answers from the Azure support team haven't been direct, but rather confusing. Please do not rely on this post for deducing the cost - contact the Azure team directly.


Quickly calculate Azure ROI/TCO

Interested in quickly calculating the ROI/TCO for your application once it is deployed in Azure? Check out these two:

1.) http://www.microsoft.com/windowsazure/tco/

2.) http://neudesic.cloudapp.net/azureroi.aspx

View/Query tables/data in Azure Dev Storage

When deploying an application on the development fabric, you would usually need to actually view the dev storage - say, check out the tables, write a couple of queries against it, etc. Out of the box, there isn't any support for this in VS2010 or the tools from MS. Note that the development fabric is different from Azure Storage in the cloud: the development fabric and dev storage reside on your local machine.
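
If you just want a quick programmatic peek meanwhile, the SDK's StorageClient library can be pointed at the dev storage directly. A minimal sketch (assuming the Azure SDK 1.x with a reference to Microsoft.WindowsAzure.StorageClient, and the dev fabric/storage running locally):

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class ListDevTables
    {
        static void Main()
        {
            // Shortcut account that points at the local development storage.
            CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
            CloudTableClient tableClient = account.CreateCloudTableClient();

            // List every table currently in dev storage.
            foreach (string tableName in tableClient.ListTables())
                Console.WriteLine(tableName);
        }
    }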

For a richer, UI-driven option, a very good tool you could use to access the dev storage for free (in addition to Azure storage in the cloud if you are a registered user) is Cloud Storage Studio from Cerebrata. Check more here:

http://www.cerebrata.com/Products/CloudStorageStudio/Default.aspx

Do let me know if you come across any more free/thin/sleek/nifty tools that work.

Friday, 5 February 2010

Concurrency & .NET

With each version of the Windows programming libraries, the options for concurrent programming seem to be on the rise. Gone are the days when you had to start with a plain CreateThread() Win32 call (remember setting all those security attributes?). Then came the wrappers, from CWinThread in MFC to TThread in VCL (a more elegant version - Delphi ruled those days).

With earlier versions of .NET, you had the Thread class, the BackgroundWorker class and the highly recommended ThreadPool class (let's not worry about all the sync objects that came along).
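
For contrast, a minimal sketch of that pre-4.0 style - a dedicated Thread next to a pooled work item:

    using System;
    using System.Threading;

    class OldSchoolThreading
    {
        static void Main()
        {
            // Dedicated thread: you create, start and join it yourself.
            Thread worker = new Thread(() => Console.WriteLine("On a dedicated thread"));
            worker.Start();
            worker.Join();

            // ThreadPool: hand over the work item and let the pool schedule it.
            ThreadPool.QueueUserWorkItem(_ => Console.WriteLine("On a pool thread"));
            Thread.Sleep(100); // crude wait so the demo doesn't exit early
        }
    }

With multi-core machines all around, the possibilities in .NET 4.0 are endless: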

a.) Parallel Extensions (PLINQ + TPL)

Integrating parallelism right into the framework design while exploiting extension methods has made expressing concurrency easier. Had a loop that you wanted to execute in parallel? Just use Parallel.For().
1 core? 2 cores? n cores? Not sure how to exploit them? Just use the framework provided by the TPL (Task Parallel Library) - your applications would scale (not worrying about the internal design/syncs for the moment) based on the number of cores. Nice. The best part is, the C# language and the supporting framework appear to move towards the functional programming paradigm - wherein you are not worried about how to do the job but more about what to do. LINQ, the TPL, the Parallel Extensions etc. seem to be inspired by this functional paradigm, as in Haskell (my current interest area) / F#.
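
A quick taste of both (a minimal sketch against the .NET 4.0 System.Threading.Tasks and PLINQ APIs):

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class ParallelDemo
    {
        static void Main()
        {
            // The classic loop, spread across the available cores.
            Parallel.For(0, 10, i => Console.WriteLine("Processing item {0}", i));

            // The same declarative flavour with PLINQ: sum of squares in parallel.
            int sumOfSquares = Enumerable.Range(0, 1000)
                                         .AsParallel()
                                         .Select(n => n * n)
                                         .Sum();
            Console.WriteLine(sumOfSquares);
        }
    }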

Want to dig real deep with some great samples? Check this out: http://code.msdn.microsoft.com/ParExtSamples

b.) Axum

A very interesting .NET programming language from the MS research yard to check out. A language built with concurrency as the primary design objective. You have 'agents' (think of a block of code executing independently, like a thread) talking with each other through 'channels' using 'messages' (think of all the sync objects you used to get two threads talking to each other, but easier). Very promising - you could write your core domain objects in C# and use them within Axum, wherein you would have laid out your concurrency logic.
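
Axum aside, the agent/channel flavour can be approximated in plain C# 4.0 - a rough analogy only (this is not Axum syntax), with a BlockingCollection standing in for the channel and two tasks playing the agents:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class AgentChannelSketch
    {
        static void Main()
        {
            // The 'channel': a thread-safe pipe between the two 'agents'.
            var channel = new BlockingCollection<string>();

            // Producer agent posts 'messages' into the channel.
            Task producer = Task.Factory.StartNew(() =>
            {
                foreach (var msg in new[] { "hello", "from", "an", "agent" })
                    channel.Add(msg);
                channel.CompleteAdding();   // signal end-of-messages
            });

            // Consumer agent receives until the channel is closed.
            foreach (var msg in channel.GetConsumingEnumerable())
                Console.WriteLine(msg);

            producer.Wait();
        }
    }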

Check out http://msdn.microsoft.com/en-us/devlabs/dd795202.aspx and http://en.wikipedia.org/wiki/Axum_(programming_language)

c.) DirectCompute

Would you like to exploit the massive processing power of your GPU? Check out the DirectCompute library - a DirectX 11/10 based framework that lets you offload tasks onto the GPU. Awesome. Along similar lines, also check out the Brahma framework written by my ex-colleague Ananth at http://brahma.ananthonline.net

Don't miss the DirectCompute session video (http://microsoftpdc.com/Sessions/P09-16), which also showed some cool applications. It was amazing to see a computationally intensive job being done by the GPU while the CPU stayed at ~0% utilization!

d.) Dryad

Yet another product from the MS research arsenal, Dryad appears to be targeted at making it easier to write distributed applications. Need to check this out in detail - once I find an HPC server to do the installation, then perhaps port DES to it?

Check it out further at http://research.microsoft.com/en-us/projects/dryad/