Wednesday, 28 November 2007
MS Sync Framework
To prove that they are everywhere, Microsoft has now brought out a framework just for synchronising stuff between two stores. Based on a provider model, it can be extended to sync any kind of data (files, tables and so on). Sync Services for ADO.NET is one provider you could readily use to sync data from your client machine to the DB server; the other one readily available is the file services provider.
Check out the classes in Microsoft.Synchronization.Data.SqlServerCe to get started, and also check out the ADO.NET BOL (Books Online).
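As a quick taste, here is a minimal structural sketch of a client-to-server sync using those classes. The connection string, the 'Orders' table and the provider wiring are hypothetical placeholders, and the server-side provider would still need its sync adapters and anchor commands configured before this would actually run.

using System;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.Server;
using Microsoft.Synchronization.Data.SqlServerCe;

class SyncSketch
{
    static void Main()
    {
        SyncAgent agent = new SyncAgent();

        // Client side: a SQL Server Compact database (hypothetical file name).
        agent.LocalProvider = new SqlCeClientSyncProvider("Data Source=Client.sdf");

        // Server side: needs SyncAdapters and anchor commands set up against the DB server.
        agent.RemoteProvider = new DbServerSyncProvider();

        // Tell the agent which table to sync and how (hypothetical 'Orders' table).
        SyncTable orders = new SyncTable("Orders");
        orders.CreationOption = TableCreationOption.DropExistingOrCreateNewTable;
        orders.SyncDirection = SyncDirection.DownloadOnly;
        agent.Configuration.SyncTables.Add(orders);

        SyncStatistics stats = agent.Synchronize();
        Console.WriteLine("Downloaded {0} changes", stats.TotalChangesDownloaded);
    }
}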
SLP Services
Security services in .NET go to the next level with the introduction of these services. Obfuscation has, until now, been one of the most commonly used methods of hiding your source code, while SLP appears to use a new approach altogether.
The new set of keywords to learn for the day includes:
SLP (Software Licensing and Protection) - The service itself.
SVML (Secure Virtual Machine Language) - Similar to MSIL; bits of code which have been transformed.
SVM (Secure Virtual Machine) - To achieve code transformation, you select the parts of the application you want to secure. What SLP does at this point is include an SVM, along with the many transformed SVML blocks, as part of the application. When the assemblies are consumed by the client, each SVML block executes in its own SVM. To complicate things further for our hacker, each software vendor supposedly gets an SVM with a unique permutation; effectively, the same code transformed by one vendor would not be readable by another vendor. Definitely something to watch out for.
In addition to the code protection, MS appears to have integrated product licensing and feature-level activation into this service, making it a complete security solution for .NET applications.
Code Protector SDK - You could use this SDK to transform your code into SVML using your custom permutation. Check out the Microsoft.Licensing namespace.
SLP Server - This server application can be used to manage 'feature' activation through a web service and also to perform customisation of packages. Instead of buying (and maintaining) this product, you could subscribe to the SLP online service provided by MS.
Monday, 15 October 2007
Audience in MOSS
Concept
The audience feature in MOSS should not be confused with security features or trimming of data based on user credentials and the resulting authorization. Instead, an 'audience' lets you filter out irrelevant data for the current user context. For example, you would want to hide the sales data on the home page if the current user is not part of the sales team; this makes sure the marketing (and other) teams are not overloaded with unwanted information.
In MOSS, the idea is to set up a user/audience list such that web parts/lists can later show/hide the required information. This is supported out of the box.
Setting Up an Audience
From a Shared Service Provider, you can add new audiences by creating rules using Windows groups, distribution lists or any of the properties available against a user (name, address, department, manager, etc.).
To extend this audience definition, we could have a custom property (say 'Day/Night Shift') added against a user from the Shared Services -> User Profile settings. This lets us create rules based on the new custom property, for example to show items specific to employees working in the day shift.
Consuming an Audience
Out of the box, SharePoint lets you apply audience targeting to the following items:
a.) WebParts - While designing the web part, you can specify the target audience for it. In this case, the web part is rendered only if the current user is part of the target audience; in all other cases, the web part is not rendered at all.
b.) Lists - While designing a list, you can specify that the list should have audience targeting available. In this case, you could use the Content Query Web Part to filter out data per audience.
Consuming Audience programmatically
AudienceManager class - Acts as the entry point into the entire audience functionality in MOSS. This class also implements the IRuntimeFilter interface to perform the targeting for web parts/lists.
For a custom user control to consume the audience functionality, one way is to set up an audience list for the custom control (a new property, perhaps) such that at runtime the control can use the AudienceManager.IsCurrentUserInAudienceOf() method to check whether the active user is part of the audience previously set up. Note that in this case it is up to the control developer to implement the required hiding/filtering of data.
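To make that last point concrete, here is a minimal sketch of a custom control doing its own audience check. The control, the 'Day Shift' audience name and the exact IsCurrentUserInAudienceOf overload (taking an AudienceLoader) are assumptions for illustration - verify the signature against your SDK.

using System.Web.UI;
using Microsoft.Office.Server.Audience;

public class DayShiftNoticeControl : Control
{
    // Hypothetical audience name set up earlier in the Shared Service Provider.
    public string TargetAudience = "Day Shift";

    protected override void Render(HtmlTextWriter writer)
    {
        AudienceLoader loader = AudienceLoader.GetAudienceLoader();

        // Render the body only when the active user belongs to the target audience;
        // the hiding/filtering is entirely the control developer's responsibility.
        if (AudienceManager.IsCurrentUserInAudienceOf(loader, TargetAudience, false))
        {
            writer.Write("Day-shift specific announcements go here.");
        }
    }
}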
Sunday, 22 July 2007
Talking with Exchange Server
The usual requirements of talking with Outlook can be handled by the rather extensive object model which Outlook provides. Now, if you want to interact with Outlook from a server-based application (say ASP.NET or a remoting host), using the object model might not be the right solution, since you need the Outlook client installed, you might have to configure individual profiles, and so on. A better approach could be to talk directly to the Exchange server.
To talk with Exchange, the following approaches seem to be available (beware: even after you select your preferred API and transport, you could easily get lost in the n versions of the library, one for each of the Outlook versions):
1.) CDO-EX objects:
Of the various versions of CDO, the version for Exchange - CDOEX - can be used to manage components on the Exchange server. The only issue here is that the application consuming CDOEX needs to be on the same machine as the server. CDO 1.2.1 does seem to let you access Exchange servers remotely, but I could not get it to install on a machine without Outlook 2007 :(
Note that as of Outlook 2007, CDO appears to be provided as a separate download.
2.) WebDAV
The slowest of the lot and the most difficult to understand: it uses plain HTTP requests carrying XML to perform each action. Still, the convenience of this method (you can use it remotely, too) usually outweighs the speed penalty and the learning curve.
WebDAV notifications using HTTPU are interesting in that you get notifications from the remote server via UDP messages. A simple explanation with an example is available at infinitec.de.
3.) Exchange OLE DB
An OLE DB provider for Exchange sounds like the best possible way to talk with the Exchange server. Sadly, your happiness ends when MSDN tells you that the application consuming this driver needs to be on the same server as Exchange. Err!
Effectively, if performance is your main concern, your preference should be CDO, then OLE DB, then WebDAV (in that order). Perhaps a future release of the Exchange API/SDK might contain a Microsoft.Exchange.Server.Core assembly to talk directly and easily.
I shall talk about using WebDAV from a C# application in detail in one of the upcoming posts.
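Until then, here is a minimal sketch of the idea: a WebDAV SEARCH request fired at an Exchange mailbox folder using plain HttpWebRequest. The server name, mailbox and credentials are made-up placeholders.

using System;
using System.IO;
using System.Net;
using System.Text;

class WebDavSearchSketch
{
    static void Main()
    {
        // Hypothetical mailbox folder URL and credentials - replace with real values.
        string folderUrl = "http://exchserver/exchange/jdoe/Inbox/";

        string searchBody =
            "<?xml version=\"1.0\"?>" +
            "<D:searchrequest xmlns:D=\"DAV:\">" +
            "<D:sql>SELECT \"urn:schemas:httpmail:subject\" FROM \"" + folderUrl + "\"</D:sql>" +
            "</D:searchrequest>";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(folderUrl);
        request.Method = "SEARCH";                                  // WebDAV verb over plain HTTP
        request.ContentType = "text/xml";
        request.Credentials = new NetworkCredential("jdoe", "password", "DOMAIN");

        byte[] payload = Encoding.UTF8.GetBytes(searchBody);
        request.ContentLength = payload.Length;
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(payload, 0, payload.Length);
        }

        // The response is an XML multistatus document listing the subjects found.
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}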
Saturday, 14 July 2007
SSW Code Auditor - a review
A quick search for code-standards review tools for C# led me to SSW Code Auditor. Among others (FxCop, Standards Master 2005, FMS Total...), this tool appeared to be something an average developer could use from day one.
Details
Once the trial version is downloaded, the first thing that strikes you is the pictures of all kinds of fruit (yes, an apple, the sign of health, to start with). The GUI tries to be very straightforward with a wizard-style interface, but is not effective: it takes at least another ten minutes before you realise that the 'database' is effectively a kind of project to which you add each subsection to be tested as a 'job'. Not sure why this isn't just a project file with a list of jobs within it, such that I could create multiple projects using File->New.
Anyway, once you add the list of folders and files which need to be audited, you get to select the rules you want tested. The trial version appears to have 147 rules of all kinds enabled. Perhaps new standard rules will be added periodically by SSW as a rule-update file?
I could not add a new rule or edit a rule in this trial version. It would have been good if the trial version let you create at least one custom rule, just to check things out. A fully functional version which works for a limited time period is the recommended way of bringing out trial versions of utilities.
Within a normal wizard layout, the usual tendency is to click next, next, next... finish. One non-standard UI design choice was the start/skip buttons within one of the wizard pages; these are the buttons which check the selected files against the selected rules. A better design would have been to bring the start/stop/skip buttons down in place of the back/next/cancel buttons.
The browser-rendered result page tells you whether the application is healthy or not (with images varying from an apple to a burger to denote this!) along with the detailed list of issues it located. Thankfully, the results can be arranged by file name, so I can see all the issues a particular class has.
What strikes you while using this application is the language of the messages in the forms and the reports. It is simple and communicates well to the developer: the report tells you what is wrong in plain English with a quick tip. Great, when you think about the rather cryptic messages from FxCop.
The other nifty functions include emailing the results, scheduling the tests (again, not available in the trial version), creating a batch file which you can execute from the command line, and testing a file as soon as it is checked in (with Team Foundation Server). That last feature would be really good - the developer gets the list of issues with his file as soon as it is checked in.
In addition to the standalone application, the VS.NET plugin is what you would use on a daily basis. The plugin makes the distinction between FxCop and Code Auditor obvious when it lets you select assemblies for FxCop and source code for itself. Sadly, I could not get it to test just my active source file; it had to run against the whole project each time.
The VS.NET plugin also appears to add two files (one for FxCop, another for itself) into an individual solution-items folder for each project in the solution, which definitely clutters up the solution explorer. A lot better would be a single solution item holding all the files for all the projects within the active solution.
To summarise, once you get the hang of this no-nonsense tool, it should be a pretty good companion during your daily development activity. Perhaps the next versions might also fix the obvious errors automatically.
All those fruits, from apples and bananas to strawberries, have definitely made me hungry! I think I am off to the kitchen.
Sunday, 24 June 2007
SQL Server + CLR -> Stuff noted
There appear to be many limitations and rules to follow while using the CLR to write a SQL object:
. While using the context connection via SqlConnection("context connection=true"), make sure only one connection object with this context is open at any point in time. 'using' should be your friend here (see the sketch after this list).
. If you want to initialize application data which needs to be reused each time the function/trigger is called, you would want to use static variables. SQL Server lets you define read-only static variables only (this effectively means that all initialization has to happen in the field initializers or a static constructor).
. Want to access a different database from within a function or any other CLR object? Make sure the assembly has EXTERNAL_ACCESS granted.
. When your code starts to use locking, or objects which use locking (most trace listeners do), SQL Server complains like mad. At this point, you have no way out other than to make sure that the login you created with the asymmetric key on the master DB has been granted UNSAFE assembly permission.
. To make sure that debugging works OK, make sure that the symbols file (.pdb) is added to the assembly with an ALTER ASSEMBLY ... ADD FILE statement.
. If you are accessing a different DB, the transaction of the current context will not be usable there since the CLR will not let you; you would need to force a new TransactionScope in this case.
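Putting the first couple of points together, here is a minimal sketch of a CLR scalar function using the context connection inside a tightly scoped using block. The Orders table is a made-up example.

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class UserDefinedFunctions
{
    [SqlFunction(DataAccess = DataAccessKind.Read)]
    public static int CountOrders()
    {
        // Only one open context connection is allowed at a time,
        // so keep its lifetime as short as possible with 'using'.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.Orders", conn))   // hypothetical table
            {
                return (int)cmd.ExecuteScalar();
            }
        }
    }
}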
As can be seen, it takes quite a few hacks to get things working in the SQL CLR environment. The first draft of the code you write is guaranteed not to work with SQL CLR :)
May the CLR be with you.
Monday, 28 May 2007
SQL Server 2005 & A Global Trigger
With the introduction of the CLR into SQL Server 2005, the programmer can now write functionality exploiting the C# language and the vast .NET library. A closer look at integrating CLR trigger handlers reveals quite a few interesting things:
Classes of concern
The Microsoft.SqlServer.Server namespace contains all of the classes we would use in a SQL CLR object. While writing triggers, we would be exposed to:
SqlContext - Gives you back the execution context to get more information. In the case of triggers, we would use the SqlTriggerContext to get the TriggerAction (delete/insert/update), the columns updated, etc.
SqlPipe - Acts as the channel/pipe for sending stuff back to SQL Server, ranging from debug messages to recordsets, using the Send method.
SqlTrigger attribute - This attribute appears to be used only by the Visual Studio environment while deploying the CLR object (I think!), since most of its parameters (Target, Event) can also be specified using DDL statements.
Writing Trigger in CLR & Integrating
Once you create a DB project in SQL Server 2005, writing a trigger is quite simple: create a new class, then create a new public static method with the SqlTrigger attribute applied to it. Within the method you would want to use the SqlTriggerContext and perhaps the SqlPipe object (as described above).
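As a minimal sketch (the class and trigger names are made up), the trigger body could look like this; note that no Target is specified in the attribute, which is what makes the 'global trigger' trick below possible:

using Microsoft.SqlServer.Server;

public class Triggers
{
    // No Target specified - the same method can be bound to any table
    // later via CREATE TRIGGER ... EXTERNAL NAME.
    [SqlTrigger(Name = "AuditTrigger", Event = "FOR INSERT, UPDATE, DELETE")]
    public static void AuditTrigger()
    {
        SqlTriggerContext context = SqlContext.TriggerContext;

        // Send a debug message back to SQL Server through the pipe.
        SqlContext.Pipe.Send("Trigger fired, action = " + context.TriggerAction);

        if (context.TriggerAction == TriggerAction.Update)
        {
            // Report which column ordinals were touched by the update.
            for (int i = 0; i < context.ColumnCount; i++)
            {
                if (context.IsUpdatedColumn(i))
                {
                    SqlContext.Pipe.Send("Column " + i + " was updated");
                }
            }
        }
    }
}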
Once everything compiles, go to your T-SQL editor and perform the following [you could also deploy the assembly directly from within VS.NET; but then, where is the fun? :)]
a.) Create an assembly within SQL Server pointing to our .NET DLL file. You would need to specify EXTERNAL ACCESS if you are looking at cross-database updates from within the trigger, which also needs special permissions.
b.) Create the trigger against your table using CREATE TRIGGER, with EXTERNAL NAME pointing to assembly.namespace.class.method.
c.) That's it!
Global Trigger
It's interesting to note that if you haven't specified a Target in the SqlTrigger attribute, you can reuse the same assembly object for any number of tables. You just have to create a trigger against the table in question using our assembly object. This way, you effectively have a single trigger codebase running against all of the tables. Easy to maintain :)
Accessing the Table Name from within Trigger
Unless you are writing a trigger for a single table, you might want to know the table name for which the trigger is executing. There appears to be no straightforward property/method for this (why couldn't the context object just return it?), and the best that could be figured out is:
SELECT object_name(resource_associated_entity_id) FROM sys.dm_tran_locks WHERE request_session_id = @@spid and resource_type = 'OBJECT'
This basically gets the name of the object which is locked in this session (a very dirty hack, unless someone has figured out a better way?).
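Wrapped up as a helper callable from within the trigger body above, the hack looks roughly like this (same caveat - it simply picks up the object locked by the current session):

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class TriggerHelper
{
    public static string GetTriggerTableName()
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand(
                "SELECT object_name(resource_associated_entity_id) " +
                "FROM sys.dm_tran_locks " +
                "WHERE request_session_id = @@spid AND resource_type = 'OBJECT'", conn))
            {
                // Returns the name of the object locked in this session,
                // i.e. the table the trigger is currently firing for.
                return cmd.ExecuteScalar() as string;
            }
        }
    }
}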
Saturday, 19 May 2007
The Jukebox Issue
Problem at hand
What we had in place was a jukebox system wherein the client application on user machines would request songs from the main playlist on the server. The server would pick up the next song in this queue and play it using Media Player on the server machine. The sound output was directed to the music system such that everyone around could listen to the many songs requested by others.
Everything appeared to go well until the non-jukie folks started complaining - they did not want to listen to these songs, usually in a different language/dialect. The music system had to go :(
Requirement
Now how do we fix this problem? Music enthusiasts needed to be able to listen to songs requested by others from their individual machines, perhaps using headphones, such that the non-jukies are not bothered.
Solution
Instead of directing the output from Media Player to the music system, use Windows Media Encoder 9 Series on the jukebox server machine. Windows Media Encoder can encode the music coming out of the sound card into a live stream.
We needed a Win2003 box with Windows Media Services 9 Series installed (it comes with SP2) which can broadcast streams. Once we have this, set up Windows Media Services to feed from the media encoder stream off the jukebox server. What remains is simple: ask all our dear enthusiasts to use Windows Media Player to listen to the live stream off the 2003 box. This way, the enthusiasts listen to the songs requested by other enthusiasts without disturbing the non-jukies.
Peace reigns once again.
Thursday, 19 April 2007
WWF - Persisting WorkFlow
Why Persist?
The workflow host, which could be anything from a console application to a full-fledged service application like MOSS, is not expected to maintain the state of a workflow instance in memory all the time. This is simply to save server resources and make them available, considering that workflows could be running for days.
Persisting - The common path
To save/persist/dehydrate/stream/serialize (yes, all denote the same idea conceptually) workflows, you usually use a workflow persistence service object such as the SqlWorkflowPersistenceService. This object can either be consumed directly within your own hosting application codebase or set up via a config file. In either case, you can ask for an automatic save when the workflow is 'idle' using the UnloadOnIdle entry.
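For the code-based route, a minimal hosting sketch could look like the following; the connection string is a placeholder pointing at a database prepared with the SqlPersistenceService schema and logic scripts that ship with WF.

using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class PersistentHost
{
    static void Main()
    {
        // Hypothetical persistence database prepared with the WF persistence scripts.
        string connectionString =
            "Initial Catalog=WorkflowPersistence;Data Source=localhost;Integrated Security=SSPI;";

        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            runtime.AddService(new SqlWorkflowPersistenceService(
                connectionString,
                true,                        // unloadOnIdle - persist and unload idle workflows
                TimeSpan.FromMinutes(1),     // instance ownership duration
                TimeSpan.FromSeconds(10)));  // polling interval for expired timers

            runtime.StartRuntime();
            // ... create and start workflow instances here ...
            runtime.StopRuntime();
        }
    }
}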
What needs to be noted here is that all objects used by our workflow should be serializable in order for the host to persist the workflow (and the related objects) correctly. An exception is guaranteed otherwise.
When does the save happen?
The workflow runtime appears to persist the workflow at these 'persist points':
Against an activity, when it gets completed. (Check out the PersistOnCloseAttribute declared against Activities.)
When the workflow is completed or idle (delays, event waits)
When the workflow is forcefully unloaded.
Writing Custom Persistence Layer
Overriding a few functions in a class descending from WorkflowPersistenceService makes it easy to write a custom persistence class. Further, this new class can be made active against the workflow via the config file. But most of us should be happy with the out-of-the-box SqlWorkflowPersistenceService, which does seem to do the job well.
Persistence under MOSS
MOSS as a host has its own persistence service, which uses the SPWinOePersistenceService object by default [haven't tried forcing a different persistence object via the config, though]. Waiting for external actions - delays, waiting for events to fire, etc. - causes the workflow to be persisted/saved to the DB. The workflow appears to be serialized to the Workflow table (check out the InstanceData column) in the content DB for the site.
Wednesday, 18 April 2007
Sharepoint - Integrating MOSS+WWF+ASPX - Part 3
Exposing our WWF workflow to MOSS
There are two more bits we need to do in the WWF workflow application to make it available to MOSS: we need to declare the feature and the workflow XML.
feature.xml
MOSS introduced features for developers to create site items/functionality which can later be linked with SharePoint collections/sites. Within this XML file, you also state which XML file contains the feature-specific details - in our case, workflow.xml.
workflow.xml
Describes our workflow to SharePoint, including its name, description, id, etc. It also defines the pages which would be used for workflow instantiation, association and modification. We shall have a look at an example of instantiation later in a different blog entry. Modifying a workflow at runtime (say you want to add more reviewers) needs a few extra steps, and that is when the modification page comes into effect.
Defining Custom Pages for Task Initiation
For ease, all three of these pages need to derive from Microsoft.SharePoint.WebControls.LayoutsPageBase, with SharePoint master pages (~/_layouts/application.master) being used in the ASPX definition. MOSS provides many master pages which give the consistent look and feel of standard MOSS pages; the content placeholders within the master pages need to be filled in by us to define the various entries of the page. Since the master pages are not usually available on the developer machine, designing these pages is not the easiest of tasks. I did try copying the pages locally to my machine, but VS.NET does not want to pick them up, no matter what.
What we want to do within the initiation page is serialize all the user-entered data and call Web.Site.WorkflowManager.StartWorkflow with the serialized data. It is this data which the OnWorkflowActivated event in the WWF workflow receives (refer to part 2 of this series).
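A hedged sketch of such a handler is below. The query string parameter names, the association lookup by name ('Review Workflow') and the serialized payload are all assumptions for illustration; the key call is SPWorkflowManager.StartWorkflow.

using System;
using System.Globalization;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;
using Microsoft.SharePoint.Workflow;

public class WorkflowInitiationPage : LayoutsPageBase
{
    protected void StartWorkflow_Click(object sender, EventArgs e)
    {
        // Hypothetical query string parameters identifying the list and item.
        SPList list = Web.Lists[new Guid(Request.QueryString["List"])];
        SPListItem item = list.GetItemById(Convert.ToInt32(Request.QueryString["ID"]));

        // Hypothetical association name - the workflow we attached to this list.
        SPWorkflowAssociation association =
            list.WorkflowAssociations.GetAssociationByName("Review Workflow", CultureInfo.CurrentCulture);

        // Serialize the user-entered form data; a plain XML string is enough here.
        string initiationData = "<ReviewData><Reviewer>DOMAIN\\jdoe</Reviewer></ReviewData>";

        // This is the data the OnWorkflowActivated activity receives in the WWF workflow.
        Web.Site.WorkflowManager.StartWorkflow(item, association, initiationData);

        Response.Redirect(list.DefaultViewUrl);
    }
}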
The important points to note here are the layouts page, the master page, the calls to SharePoint functions, and the way the page transfers data to WWF via MOSS.
We have one more VS.NET task remaining: creating the task-updating page. This page would be used by users to approve/reject tasks. It works a bit differently from the three pages listed above; it exploits content types. Next blog.
Thursday, 5 April 2007
Sharepoint - Integrating MOSS+WWF+ASPX - Part 2
Starting with SharePoint Workflow Project
Once you have the SharePoint extensions installed, the default project based on these templates has the OnWorkflowActivated activity placed as the first activity in the workflow. When SharePoint initiates our workflow, this is the activity it calls as the first step - acting as the entry point into our workflow.
Data Sharing
SharePoint shares data with the workflow in one of the following ways:
1.) An instance of SPWorkflowActivationProperties contains the workflow properties at any point, the most important of which is the initiation data. Initiation data can basically be any custom string passed in when you start the workflow from SharePoint. If you happen to have a custom initiation page (discussed later) which explicitly initiates the workflow within SharePoint, it could, for example, serialise a data class instance based on the form data, which we can then read back via this activation property.
2.) Events - Each of the OnXxx events against an activity receives event properties applicable to the context.
3.) Activity properties - Each activity has certain properties which map to a workflow property/variable, which you usually set up at design time.
e.g. the CreateTask activity has two properties, TaskId and TaskProperties. If you refer to the same variable from a different activity's property (say CompleteTask.TaskId), you are effectively referring to the same task. Also, any data you set on these variables is passed back to SharePoint. What you achieve here is sharing/relating data items between activities and between the workflow and SharePoint.
Correlation-Token
Nearly all activities have a correlation token, which is an identifier for the workflow context. This is how SharePoint understands the context in which the activity is working. E.g., you would use the same correlation token for the CreateTask, OnTaskChanged and CompleteTask activities to specify that all of these belong to the same workflow context.
Does sharepoint persist the workflow?
Of course it does. There is no way SharePoint can keep workflows running for days in memory (say the task has not been looked at yet by the user). In these cases, when SharePoint is explicitly waiting for an event (during a While activity, during a Delay activity, etc.), it automatically serializes the workflow. When the related event happens, SharePoint deserializes the workflow and returns control back to it. At this point, it is the correlation token which is used to relate the item in question with the workflow context [e.g. identify the task context].
Example
So, to try out some basic stuff, add the basic activities: a CreateTask activity, then an OnTaskChanged activity inside a While activity, and finally a CompleteTask activity. If your property bindings and the correlation tokens are right, there is nothing more to be defined in the workflow definition designer.
To define what happens at each activity (some code finally - a hedged sketch follows these steps):
At the CreateTask activity, set up values for the task properties, such as who the recipient is, the description, the task id, etc.
At the While activity, check whether the task has been completed - perhaps using a public variable which was in turn set in the OnTaskChanged activity.
And at the CompleteTask activity, set the task status to 'complete'.
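The sketch below shows what that code-behind could look like; the class, field and handler names are assumptions that would be bound to the activity properties in the designer, and CompleteTask typically just needs its TaskId bound to the same taskId variable.

using System;
using System.Workflow.Activities;           // ExternalDataEventArgs, ConditionalEventArgs
using Microsoft.SharePoint.Workflow;        // SPWorkflowTaskProperties

public partial class ReviewWorkflow : SequentialWorkflowActivity
{
    // Fields bound to the activity properties in the workflow designer.
    public Guid taskId = default(Guid);
    public SPWorkflowTaskProperties taskProperties = new SPWorkflowTaskProperties();
    public SPWorkflowTaskProperties afterProperties = new SPWorkflowTaskProperties();
    private bool taskCompleted = false;

    // CreateTask: decide who gets the task and what it says.
    private void createTask_MethodInvoking(object sender, EventArgs e)
    {
        taskId = Guid.NewGuid();
        taskProperties.Title = "Please review the document";
        taskProperties.AssignedTo = @"DOMAIN\jdoe";     // hypothetical reviewer
    }

    // OnTaskChanged: read back what the user did with the task.
    private void onTaskChanged_Invoked(object sender, ExternalDataEventArgs e)
    {
        taskCompleted = (afterProperties.PercentComplete == 1.0f);
    }

    // Condition for the While activity: keep looping until the task is complete.
    private void notCompleted(object sender, ConditionalEventArgs e)
    {
        e.Result = !taskCompleted;
    }
}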
Guess we are done defining the Workflow.
Next, let's define the custom pages which we would use for initiating the task, and also relate these pages to the workflow. Please wait for part 3 :)
Tuesday, 3 April 2007
Sharepoint - Integrating MOSS+WWF+ASPX - Part 1
MOSS 2007 brings the capability to use workflows defined with WWF, though not in the very easiest of ways - the number of steps involved is a bit extensive. SharePoint by default does let you define simple workflows using SharePoint Designer, but you would definitely have to go down the WWF path if there are many custom business actions which need to be performed during the workflow. Think about it: you get the full flexibility of the C# language and the .NET library once you start using WWF - two steps away from heaven.
Stuff you would need to define the workflow
WWF extension to VS.NET - to get the workflow designer and basic workflow project templates.
Sharepoint 2007 SDK - to get the sharepoint workflow templates and activities.
Quickest intro to WWF
Activities interact in sequence (sequential workflow) or via triggers/state changes (state machine workflow) to complete the workflow. Activities are the building blocks of a workflow; once the WWF extensions are installed, you get a good set of activities to work with - While, IfElse, Delay, Code, etc.
Custom activities can be defined by the user (check out System.Workflow.Activities) such that they can be plugged into the workflow. It is interesting to note that the workflow definition can very easily be kept in an XML file (a XOML file, actually); it is the same XML definition which gets depicted as interconnected boxes in the designer.
Once you have your workflow defined, what remains is hosting it. The easiest option to test a workflow is to write a console application which initiates the runtime (System.Workflow.Runtime.WorkflowRuntime) and starts the workflow by creating an instance of the previously defined workflow using WorkflowRuntime.CreateWorkflow. Simple? Try it out... :)
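A minimal console host along those lines (MyWorkflow stands in for whatever workflow type you defined in the designer):

using System;
using System.Threading;
using System.Workflow.Runtime;

class Host
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += delegate { done.Set(); };
            runtime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
            {
                Console.WriteLine(e.Exception.Message);
                done.Set();
            };

            // MyWorkflow is the (hypothetical) workflow class created in the designer.
            WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
            instance.Start();

            done.WaitOne();     // block the console until the workflow finishes
        }
    }
}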
Workflow for Sharepoint using WWF
Once you have the SharePoint SDK installed, you get two new project templates and custom activities specific to SharePoint, such as CreateTask, DeleteTask, OnTaskChanged, etc. We shall use these items in the next session to create a SharePoint workflow and later integrate this workflow into SharePoint.
Until then.
Sharepoint BDC - Easy LOB Integration
BDC
MOSS 2007 brings new capabilities for integrating third-party data sources and line-of-business apps into the SharePoint environment. Once integrated, these data sources act 'quite' similar to the standard sources, letting you apply the various SharePoint functions such as lists, search, etc.
There is no need to write complex custom handlers (nobody wants to write one - not recommended by the MOSS team either!) or IFilters, now that the Business Data Catalog (BDC) has been introduced.
The idea is very simple: you can integrate any data source which has an adapter, via the ADO.NET path or a web service. Once you dig out the adapter, the only remaining step is to define the BDC. The BDC definition is an XML file conforming to the BDCMetaData.xsd schema. Tip - when you start with a new XML file in VS.NET 2005, go to the properties of the XML file and point the 'schema' to BDCMetaData.xsd; this enables IntelliSense while editing the XML file.
Common items which you define in the BDC :
LobSystemInstance - Defines where the data source is and the adapter to use for the connection; think of it as the connection string you normally provide.
Entities - When you expose the data source, what you definitely need to tell MOSS is which items you want to make available from your data source. Say, from the default SQL Server [pubs] DB, you might want to expose the employee items only. For each entity, you need to define the properties (employee id, name, etc.) and the identifier (employee id) at the bare minimum.
Methods and method instances - These define the actions you can perform against the entity. You define the method, say by using an SQL command string with its parameters (parameter types can be .NET types, say System.Data.IDataReader, System.String, etc.).
Method instances are an interesting concept: the same method definition (a template) can play different roles (method instances) under different scenarios. Method types define the role the method plays. E.g., when you need to define both a SpecificFinder and a Finder type, a single method template should suffice.
Some of the method types defined are quite smart; the AccessChecker method type can be used to apply a custom access filter to items in MOSS just before they are shown to the user (say, just before search results are shown within MOSS). You could write stored procedures in the backend which tell MOSS whether the data should be shown to the specific user, and then link them up as an AccessChecker in the BDC definition file. I think that's cool.
BDCMetaMan - A very handy tool where you define the connection, entities, etc. visually and the XML file is generated for you. The free version can be used as a draft for more hands-on tweaking.
Check out BDC covered extensively at MSDN.
Survival tips for the 'common programmer'
Learn Learn
For the 'common programmer' [a variant of R.K. Laxman's 'Common Man'], the importance of a good foundation in computer science and of continuously upgrading your knowledge cannot be emphasised enough. The reason for this post is the many interviews conducted over the past few days, which have been very disappointing, and a few talks with my colleagues.
In addition to computer science fundamentals, what definitely appears to be missing from many software professionals is the passion to learn, the desire to look into the details to know how things work.
First, the basic foundation, which should have at least covered computer architecture, OS fundamentals, networking fundamentals, and programming concepts and constructs (for a more exhaustive list, check out the syllabus of any B.Tech or B.Sc Computer Science course), seems to be missing.
Second, you need to be aware of what is around and happening in this field; now how would I do that? Subscribe to postings via a good RSS reader - Google Reader is a good option - since nearly all websites support RSS subscriptions. Most importantly, make sure you read through them periodically.
Information Overload?
Now, while reading through all of this, how do you make sure it is relevant to you? There is no way anyone could read and understand every topic (that would take 25 hours daily... behind bars, perhaps?). An easy option is not to go into the details of the implementation, but to be aware of the concept; as in, know the fundas - unless the posting itself is of an interesting nature and you want to go deeper.
The same logic applies to newsgroup postings: subscribe to newsgroups which appear interesting, but be aware of what needs a closer read. The experts appear to 'read between the lines'; you can skip paragraphs and sentences to skim through an article and get an overall idea. If it does appear interesting, go back and read all the lines.
Overall, just make sure you stay updated - make the above two steps a habit :)
Look Further
Now, when you learn something new, make sure you delve a bit deeper than the surface to understand the hows and the whys. These two questions should clear up a lot of doubts about why the stuff is there in the first place and how it solves the problem.
e.g. most of us know that the foreach construct in C# lets you loop through each item in a collection (that answers the 'why' part). All good. Now, how does it do it, and how can I make my own System.Object descendant usable within the foreach construct? Enter the IEnumerable interface.
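A tiny illustration of the 'how' (a made-up Team class that foreach can walk over):

using System;
using System.Collections;

public class Team : IEnumerable
{
    private string[] members = { "Asha", "Binu", "Chris" };

    public IEnumerator GetEnumerator()
    {
        // 'yield return' makes the compiler build the enumerator for us.
        foreach (string member in members)
        {
            yield return member;
        }
    }
}

// Usage: foreach (string name in new Team()) Console.WriteLine(name);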
Another one - threads in C# let me run jobs in parallel (the 'why' part). Now, how does the CLR manage user threads? Did you know that a new OS thread need not be created each time you want to run something in the background? Enter the thread pool managed by the CLR.
What needs to be stressed is the importance of going deeper into anything you learn, by answering the above two questions each time.
All the best, fellow programmers. I would welcome comments on how you folks learn and keep yourselves updated.