Friday 17 April 2009

DES Uploaded to CodePlex

Have uploaded the initial version of my hobby project, the Distributed Execution System, to http://des.codeplex.com/

Check the homepage at http://des.codeplex.com/ for a brief summary of the project.

Tuesday 24 March 2009

Boxed & Secured Execution Of a .NET Type

One of the usual needs in an application developer's world is to instantiate a .NET type in a boxed/contained/isolated environment with zero impact on the current application's process space. How do we do that? This article solves this in a simple, easily adoptable manner.

Usual Solution

The immediate answer is to use an application domain - create a new application domain and instantiate the type within it. Sounds straightforward. Sadly, no. If you thought the following lines of code would just work, you are mistaken. To confirm that it does not work, try unloading the app domain and then deleting the loaded assembly from Windows Explorer. It does not let you delete the assembly file. What happened here?

//create the application domain and create an instance of the object
AppDomain clientDomain = AppDomain.CreateDomain("ClientTaskDomain");
object executionObject = clientDomain.CreateInstanceAndUnwrap("ABC.Test", "ABC.Test.MyTest");

//find the Execute method and call it
MethodInfo executionMethod = executionObject.GetType().GetMethod("Execute");
object returnData = executionMethod.Invoke(executionObject, null);


As the type ABC.Test.MyTest does not descend from MarshalByRefObject, the instance (and its assembly) ends up loaded into the main application domain. Had ABC.Test.MyTest descended from MarshalByRefObject, the type would have been instantiated in the 'remote' application domain and only a proxy handed back. That's the way it is designed.
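A quick, hedged way to verify this after the CreateInstanceAndUnwrap call above (assumes the assembly is named ABC.Test; needs using System.Linq):

//check whether ABC.Test ended up in the *current* domain;
//if it did, unloading clientDomain will not release the lock on ABC.Test.dll
bool loadedHere = AppDomain.CurrentDomain.GetAssemblies()
    .Any(a => a.GetName().Name == "ABC.Test");
Console.WriteLine("ABC.Test loaded into main domain: " + loadedHere);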

Easy Way Out

Create a proxy type in your application that descends from MarshalByRefObject. Instantiate this proxy in the new application domain and call a routine on it which then instantiates the real type. Because the proxy already lives in the new application domain, the real type gets created in the new application domain as well.

AppDomain clientDomain = AppDomain.CreateDomain("ClientTaskDomain");
try
{
    //the proxy is created inside clientDomain; only a proxy reference crosses back
    AssemblyLoader _aLoader = (AssemblyLoader)clientDomain.CreateInstanceAndUnwrap("XYZ.Test", "XYZ.Test.AssemblyLoader");
    object returnData = _aLoader.LoadAndRun("ABC.Test", "ABC.Test.MyTest");
}
finally
{
    AppDomain.Unload(clientDomain);
}

where AssemblyLoader is defined as a MarshalByRefObject (MBR) descendant:


public class AssemblyLoader : MarshalByRefObject
{
    //loads the client assembly and invokes the Execute method of the given type,
    //all from within the application domain this proxy was created in
    public object LoadAndRun(string assemblyName, string typeName)
    {
        Assembly _assembly = Assembly.Load(assemblyName);
        Type _type = _assembly.GetType(typeName);
        MethodInfo _method = _type.GetMethod("Execute");
        return _method.Invoke(Activator.CreateInstance(_type), null);
    }
}


Using this approach, we have made sure that MyTest is always instantiated in the new application domain.

Impersonate for Security

All good until now, but how do you make sure the code executes under a user-supplied account? Pretty simple, if you know how to authenticate a username/password/domain. Sadly, there is no direct way to perform Windows authentication in .NET. Not sure why there isn't a "bool WindowsPrincipal.Authenticate(userName, passWord, domain)" routine? No clues. We could go the LogonUser P/Invoke route, but it appears to have certain permission issues on NT/2000-based machines. Hence, let's write one using NegotiateStream.



public static class SSPIHelper
{
    //requires: System.Net, System.Net.Sockets, System.Net.Security,
    //System.Security.Principal and System.Threading

    enum AuthenticationState { Unknown, Success, Failure }

    public static WindowsPrincipal LogonUser(NetworkCredential credential)
    {
        //listen on a loopback port chosen by the OS
        TcpListener tcpListener = new TcpListener(IPAddress.Loopback, 0);
        tcpListener.Start();

        WindowsIdentity id = null;
        AuthenticationState authState = AuthenticationState.Unknown;

        //server side: authenticate the incoming credentials via SSPI/Negotiate
        tcpListener.BeginAcceptTcpClient(delegate(IAsyncResult asyncResult)
        {
            using (NegotiateStream serverSide = new NegotiateStream(
                tcpListener.EndAcceptTcpClient(asyncResult).GetStream()))
            {
                try
                {
                    serverSide.AuthenticateAsServer(CredentialCache.DefaultNetworkCredentials,
                        ProtectionLevel.None, TokenImpersonationLevel.Impersonation);
                    id = (WindowsIdentity)serverSide.RemoteIdentity;
                    authState = AuthenticationState.Success;
                }
                catch (Exception)
                {
                    authState = AuthenticationState.Failure;
                }
            }
        }, null);

        //client side: present the user-supplied credentials
        using (NegotiateStream clientSide = new NegotiateStream(new TcpClient("localhost",
            ((IPEndPoint)tcpListener.LocalEndpoint).Port).GetStream()))
        {
            try
            {
                clientSide.AuthenticateAsClient(credential, "",
                    ProtectionLevel.None, TokenImpersonationLevel.Impersonation);
            }
            catch (Exception)
            {
                //only flag failure here; success is decided by the server side,
                //which is the one that actually obtains the remote identity
                authState = AuthenticationState.Failure;
            }
        }

        //wait until the server-side callback has finished authenticating
        while (authState == AuthenticationState.Unknown)
            Thread.Sleep(10);

        tcpListener.Stop();
        return authState == AuthenticationState.Success ? new WindowsPrincipal(id) : null;
    }
}




OK, we have a Windows principal. Now what? Impersonate it to execute the code under this principal, which happens to be the easy bit.


WindowsIdentity newId = (WindowsIdentity)windowsPrincipal.Identity; //the one received from SSPIHelper
WindowsImpersonationContext impersonatedUser = newId.Impersonate();


This makes sure that the code following the Impersonate() call above runs under the provided identity. Once we want to revert to the original identity, we just call Undo() (see below).

So effectively what we now have is an isolated and safe execution of a client-provided type, using the credentials the client supplied. To summarize, the code should look similar to this:


//authenticate the client-supplied credentials
WindowsPrincipal windowsPrincipal = SSPIHelper.LogonUser(credentials);
if (windowsPrincipal == null)
    throw new UnauthorizedAccessException("Authentication failed");
WindowsIdentity newId = (WindowsIdentity)windowsPrincipal.Identity;

//impersonate
WindowsImpersonationContext impersonatedUser = newId.Impersonate();
try
{
    //create the application domain and create an instance of the object
    AppDomain clientDomain = AppDomain.CreateDomain("ClientTaskDomain");
    try
    {
        //use the proxy MBR object
        AssemblyLoader _aLoader = (AssemblyLoader)clientDomain.CreateInstanceAndUnwrap("XYZ.Test", "XYZ.Test.AssemblyLoader");
        object returnData = _aLoader.LoadAndRun("ABC.Test", "ABC.Test.MyTest"); //call the client's method
    }
    finally
    {
        AppDomain.Unload(clientDomain);
    }
}
finally
{
    impersonatedUser.Undo(); //back to the normal account
    File.Delete(assemblySaveLocation); //clean up the client's assembly too
}


Thursday 16 October 2008

Chrome crashed


In fact, I thought Chrome would never crash when one of its tabs got screwed up, since each tab executes within a different process - a behaviour easily verified by browsing to about:memory and cross-checking the PIDs against the Windows process IDs.

Sadly, this near-perfect browser crashed today :(

I should have noted down the steps that caused this. I have now enabled Chrome to generate crash dumps as detailed here.

While you are here, also check out Chrome's process model here.

Sunday 12 October 2008

WCF Service and a Silverlight 2 beta 2 Application

While trying to call a simple WCF service from a Silverlight application, not everything goes smoothly from within VS 2008. Though the coding itself did not take more than half an hour, hacking the whole thing into working shape took me around a day.
You would usually need to get past the following hurdles:

1.) The default binding for the WCF service is wsHttpBinding, which is secured and not currently supported by Silverlight. This means editing the web.config to use basicHttpBinding instead, along the lines of the snippet below.
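A hedged example of that web.config change; the service and contract names (NumberService / INumberService) are placeholders for whatever your project actually generated:

<system.serviceModel>
  <services>
    <!-- switch the endpoint from wsHttpBinding to basicHttpBinding -->
    <service name="NumberService">
      <endpoint address="" binding="basicHttpBinding" contract="INumberService" />
    </service>
  </services>
</system.serviceModel>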

2.) Even if the WCF service project and the Silverlight application are in the same solution, you will not be able to call the WCF service directly, since Silverlight does not support cross-domain calls. This appears to be a security check on the Silverlight end. To get past this, specify from the project properties that both projects (the WCF service and the Silverlight host web application) use IIS virtual directories on the same domain (check 'Use IIS Web Server' under the Web tab of the project properties).

In my case, I used "http://localhost/NumberService" for the WCF service and "http://localhost/SilverLightApplication1Web" for the web application.

The above changes should let the Silverlight application successfully call the WCF service from within the VS 2008 IDE. In case you still get cross-domain errors, say when your WCF service is on a different host, host the web applications directly under IIS and make sure you have created a 'clientaccesspolicy.xml' file in the root of the virtual directory with the following content:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>

The above just makes sure that cross-domain access is allowed from all URLs (check out the allow-from tag). I could not get this to work while debugging from within the VS IDE, though.

If you are still having trouble making cross-domain calls, the easiest option is to create a proxy web service within the same domain as the web application. This proxy web service can then make the actual call to the web service on the other domain, as in the sketch below.
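A minimal sketch of such a pass-through service, assuming a hypothetical RemoteNumberServiceClient proxy generated against the remote service:

using System.ServiceModel;

//hosted in the same domain as the Silverlight application; the cross-domain
//hop happens server-to-server, where no client access policy is required
[ServiceContract]
public class NumberProxyService
{
    [OperationContract]
    public int GetNumber()
    {
        RemoteNumberServiceClient remote = new RemoteNumberServiceClient();
        try
        {
            return remote.GetNumber(); //forward the call to the real service
        }
        finally
        {
            remote.Close();
        }
    }
}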

3.) When trying to host the Silverlight web application under IIS, you might get an IIS error about the unknown MIME type 'xap'. In that case, just create a new entry under the MIME types within IIS mapping '.xap' to 'application/x-silverlight-2-b2'.

4.) Strangely, once you have set up the above two projects, VS 2008 fails when you later try to reload the solution, each time with a System.Runtime.InteropServices.COMException. The only way I found to get the solution to load properly is to run VS 2008 under administrator credentials: in Vista, right-click the VS 2008 shortcut and click 'Run As Administrator'.

Further Silverlight Notes:
1.) As of Silverlight 2 beta 2, it is quite easy to get a simple timer working: just use the System.Windows.Threading.DispatcherTimer class (see the sketch below).

2.) All calls to the WCF service are asynchronous by default. The service reference within the Silverlight application always generates an async proxy (the methods with the 'Async' suffix), which is also shown in the sketch below.
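A hedged sketch combining both notes. NumberServiceClient, GetNumberAsync, GetNumberCompleted and GetNumberCompletedEventArgs are hypothetical members/types of the proxy generated by 'Add Service Reference' for the NumberService used earlier:

using System;
using System.Windows.Controls;
using System.Windows.Threading;

public partial class Page : UserControl
{
    private readonly DispatcherTimer _timer = new DispatcherTimer();

    public Page()
    {
        InitializeComponent();

        _timer.Interval = TimeSpan.FromSeconds(5);
        _timer.Tick += delegate
        {
            //only the async pattern is generated for Silverlight proxies
            NumberServiceClient client = new NumberServiceClient();
            client.GetNumberCompleted += delegate(object sender, GetNumberCompletedEventArgs e)
            {
                if (e.Error == null)
                {
                    //e.Result is delivered back on the UI thread
                }
            };
            client.GetNumberAsync();
        };
        _timer.Start();
    }
}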

Sunday 9 March 2008

Bounce that spam please!

Looking at the number of spam emails I get (approximately 3 spam emails for every good one!), something needs to make sure that the same sender cannot keep sending me spam. To handle this, we could have an auto-bounce feature: once a spam is manually marked for auto-bounce, the next email from the same sender results in an 'invalid email address' or 'inbox full' kind of response.

The only issues we might face:

1.) This mixes up a user requirement with the underlying protocol! No purist would like this.

2.) The spam generators would get more intelligent about sending spam - they might start using different 'from' addresses and messages against the same inbox; if either email was delivered, they can safely assume that the email address is valid. This could be resolved by letting the email server perform the auto-bounce using the same logic it currently uses for spam detection, rather than requiring messages to be explicitly marked for auto-bounce.


3.) In the case of the server handling it automatically, we might lose a few good emails that were wrongly identified as spam and got auto-bounced.

Can this work? How else can I keep that spam from reaching my mailbox and avoid skimming through hundreds of spam emails in search of the one good one?

Wednesday 5 March 2008

Injections

Dependency Injection:

Dependency Injection refers to the process by which functional components ('concerns', in AOP terms) are injected into an object so that the object can use their functionality. Say you had a Customer Management module, and one of its functions was to audit the name of every user who updated a customer record. In the simplest terms, we might have the following classes:


MyBusinessObject - A base class for each of the business entity classes.
Customer - The business entity containing the customer details - name, DOB, address etc. - derived from MyBusinessObject.
CustomerManager - Manages all business functions related to the customer: adding, deleting, searching, modifying etc.
MySimpleAudit - A class which audits the various operations.

In the simplest case, the CustomerManager class would directly instantiate the MySimpleAudit class and call the appropriate audit function. All good. If we have more audit classes - say StackTraceAudit (which audits the stack trace too... why? I don't know) and ObjectStreamAudit (which audits the current state of the object) - standard design logic calls for an interface to separate the functionality out; in our case, an IAudit interface.

public interface IAudit
{
    void AuditDelete(MyBusinessObject entity);
    void AuditCreate(MyBusinessObject entity);
    void AuditModify(MyBusinessObject entity);
}

We would then make sure that all our audit classes (MySimpleAudit, StackTraceAudit, ObjectStreamAudit) implement this interface; a sample implementation follows. The only confused class is CustomerManager, which does not know which IAudit implementation to use. Of course, it could depend on a configuration entry to get the audit class name, or it could just hard-code one of the classes, etc.
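For illustration, one of those implementations might look like this (the bodies are placeholders; only the class and interface names come from the example above):

public class MySimpleAudit : IAudit
{
    public void AuditDelete(MyBusinessObject entity) { /* write a 'deleted' audit record */ }
    public void AuditCreate(MyBusinessObject entity) { /* write a 'created' audit record */ }
    public void AuditModify(MyBusinessObject entity) { /* write a 'modified' audit record */ }
}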

What if we could tell the CustomerManager directly which audit object to use? That is the crux of dependency injection: injecting an object instance (the IAudit instance) into another object (CustomerManager) such that the injectee (the object that got injected into - sic indeed) can use the functionality of the injected object (the IAudit instance).

You can pass in the injected object in three standard ways: as part of the constructor, via a property, or via an interface definition.

For our customer example, passing the object via a constructor would be along these lines:

class CustomerManager
{
    private readonly IAudit _audit;

    public CustomerManager(IAudit audit)
    {
        _audit = audit; //the audit implementation is injected by the caller
    }

    public void DeleteCustomer(Customer customer)
    {
        //...delete the customer...
        _audit.AuditDelete(customer); //Customer derives from MyBusinessObject
    }
}

In this case, when the CustomerManager class is instantiated, the right audit instance is passed along - e.g. new CustomerManager(new StackTraceAudit()).

Stuff noted:
1.) CustomerManager is not disturbed by any changes or additions to the IAudit implementations.
2.) Any new IAudit implementation can be created without affecting the consuming class.
3.) What is depicted is effectively 'Inversion of Control' - the control of locating and creating the audit class is inverted, handed over to a different object.
4.) This pattern decouples the logic of which object to use, and where it comes from, out of the consumer object.


Policy Injection

For a scenario similar to the above, assume you had a single Audit object consumed by CustomerManager. Now, if there is a new requirement to include the functionality of StackTrace and ObjectLogging in the audit system too, what would you do?


Though there are numerous immediate solutions to get the stuff working (modify the existing Audit class to call the other audits as well, create yet another master class which calls all the audit objects, etc.), Policy Injection calls for creating a proxy class for the currently available Audit class. It is this proxy class which gets used by the CustomerManager object instead of the Audit object.

The CustomerManager object might end up using a Factory pattern or a dependency injection pattern (!) to get the right proxy audit class (reread this line until it makes sense).

Now, interestingly, what the Audit proxy class does is this:
1.) On the way in (when the request for audit arrives), it calls the 'Pre' step routines of all registered audit handlers (StackTrace, ObjectStream etc.) in sequence and finally calls the original Audit class routines.
2.) On the way back (when the audit request is done), it can call further 'Post' step routines on each of the handlers, in reverse order.

As seen from the point of view of the CustomerManager, it is dealing with only one object - the new Audit proxy. Whenever it calls the proxy to audit, all the handlers perform their part (in the pre/post routines) and finally the original Audit object is called. A minimal sketch of such a proxy follows.
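This is only a sketch of the idea, assuming a hypothetical IAuditHandler interface for the pre/post hooks; IAudit and MyBusinessObject come from the earlier example (requires System.Collections.Generic):

public interface IAuditHandler
{
    void Pre(MyBusinessObject entity);
    void Post(MyBusinessObject entity);
}

public class AuditProxy : IAudit
{
    private readonly IAudit _inner;                  //the original Audit object
    private readonly IList<IAuditHandler> _handlers; //StackTrace, ObjectStream, ...

    public AuditProxy(IAudit inner, IList<IAuditHandler> handlers)
    {
        _inner = inner;
        _handlers = handlers;
    }

    public void AuditDelete(MyBusinessObject entity)
    {
        //on the way in: run each handler's Pre step in registration order
        foreach (IAuditHandler handler in _handlers)
            handler.Pre(entity);

        _inner.AuditDelete(entity); //the original audit call

        //on the way back: run the Post steps in reverse order
        for (int i = _handlers.Count - 1; i >= 0; i--)
            _handlers[i].Post(entity);
    }

    //AuditCreate / AuditModify would follow the same pre/inner/post pattern
    public void AuditCreate(MyBusinessObject entity) { _inner.AuditCreate(entity); }
    public void AuditModify(MyBusinessObject entity) { _inner.AuditModify(entity); }
}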


Monday 3 March 2008

Revisiting Workflows in SharePoint - Part 2

Creating workflows using SharePoint Designer
It should be noted that any workflow you create using SharePoint Designer is effectively a sequential workflow, wherein you define the sequence of steps to be performed, each step being composed of actions and conditions. Any new functionality requires you to create a new custom activity in VS.NET and deploy it as an action so that it can be used from within SharePoint Designer.

Some of the stuff noted in the case of SharePoint Designer (SD):
a.) It does not allow for coding/scripting. The best you can define are rules.
b.) The association with a list/library/content type is done immediately, when the workflow is first created.
c.) Unlike workflows created in VS.NET, SD does not allow modification of an active workflow.

Breaking out from SD
In case the limitations of SharePoint Designer get to you, you might want to export the workflow from SharePoint Designer to VS.NET. For this:

a.) Export the workflow as an FWP from within the designer.
b.) Rename .FWP to .CAB.
c.) Extract the .CAB file.
d.) Open the extracted .XOML, rules and other files from within VS.NET.


VS.NET namespaces
In addition to the activities which WWF provides, MOSS adds to this list by enhancing the Base Activity Library (BAL) with custom activities. These activities are available in Microsoft.SharePoint.WorkflowActions & Microsoft.SharePoint.Workflow.

Interaction Points
From a developer's perspective, there are usually four points in the life of a workflow task where the end user can interact with the workflow. Effectively, it is these four interaction points that can be handled/customised by the developer.

Association : When the workflow template is associated the first time with a library/document/content type.
Initialization : When the workflow instance is initialized/created.
Completion : When each user completes their task/step.
Modification : When the workflow itself is modified.

All of the above interactions require you to create custom ASPX pages (for both WSS and MOSS). In the case of MOSS, you also have the option to create these forms in InfoPath, which tends to be a lot easier to develop. The table at this blog should clear up any queries regarding which event to handle at each of the interaction points.

Developer Notes
1.) Unlike a plain WWF workflow, the base class of a SharePoint workflow is either SharePointSequentialWorkflowActivity or SharePointStateWorkflowActivity, based on your workflow type (sequential or state). See the sketch after this list.
2.) OnWorkflowActivated needs to be the first activity of a SharePoint workflow.
3.) The correlation token (we discussed this in an earlier post) needs to be set the same for all related activities.
4.) While in WWF you would have used ExternalDataExchangeAttribute, in MOSS most of the data exchange is handled internally. (Is this a correct assumption?)
5.) DependencyProperty makes it simpler to access workflow properties which only become available after activation.
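Putting notes 1, 2 and 5 together, a hedged skeleton might look like the following. The class and property names are illustrative, the base class and namespaces are the ones named in these notes, and in practice the VS.NET designer generates most of this wiring (the OnWorkflowActivated activity and its correlation token are dropped onto the design surface rather than written by hand):

using System.Workflow.ComponentModel;
using Microsoft.SharePoint.Workflow; //namespace as per the notes above

public sealed class SampleApprovalWorkflow : SharePointSequentialWorkflowActivity
{
    //note 5: a DependencyProperty exposing the activation properties,
    //which are populated once OnWorkflowActivated (the first activity) has run
    public static readonly DependencyProperty WorkflowPropertiesProperty =
        DependencyProperty.Register("WorkflowProperties",
            typeof(SPWorkflowActivationProperties), typeof(SampleApprovalWorkflow));

    public SPWorkflowActivationProperties WorkflowProperties
    {
        get { return (SPWorkflowActivationProperties)GetValue(WorkflowPropertiesProperty); }
        set { SetValue(WorkflowPropertiesProperty, value); }
    }
}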