Tuesday, 26 February 2008

Webparts in MOSS 2007 - Part 2

In the first part, we talked about creating the simplest of WebParts and the classes of concern. This time, let's render something meaningful in our WebPart. How about the list of all the users in the system? We will use the MOSS libraries to get this information.

Steps
As usual, create a blank class derived from System.Web.UI.WebControls.WebParts.WebPart and override the Render method:

protected override void Render(HtmlTextWriter writer)
{
    // Emit the HTML built up in GetHTML()
    writer.Write(GetHTML());
}

private string GetHTML()
{
    string result = "<table border=\"0\">";
    try
    {
        // Get the server context for the current request and the profile manager built on it
        ServerContext context = ServerContext.GetContext(Context);
        UserProfileManager profileManager = new UserProfileManager(context);

        foreach (UserProfile profile in profileManager)
        {
            // Render one row per user, linking the account name to the user's public profile page
            if (profile.PublicUrl != null)
            {
                result += "<tr><td><a href=\"" + SPEncode.HtmlEncode(profile.PublicUrl.AbsoluteUri) + "\">" +
                          SPEncode.HtmlEncode(profile[PropertyConstants.AccountName].ToString()) + "</a></td></tr>";
            }
        }
        result += "</table>";
    }
    catch (Exception ex)
    {
        result += "<tr><td>" + ex.ToString() + "</td></tr></table>";
    }
    return result;
}


As seen, the GetHTML function returns the HTML string which needs to be rendered in the Render() method. GetHTML uses the server context together with the UserProfileManager to get all the user profiles in the SharePoint system. In addition, we render a link for each user which takes you to that user's home page.

You would need to include the following namespaces in the 'using' section if not already done: Microsoft.Office.Server, Microsoft.Office.Server.UserProfiles, Microsoft.SharePoint.Utilities, Microsoft.SharePoint.
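For reference, the 'using' block of the class file would look like this:

using System;
using System.Web.UI;
using Microsoft.Office.Server;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;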

Follow either of the two steps mentioned in the previous post to register this WebPart on the server and test it out.

More rendering with data from the DB

Let's create another WebPart which renders data from the DB onto a DataGrid. The core idea remains the same: you perform the render on the items that you know, and for your child controls (DataGrid, Label etc.), you ask them to render themselves. The crux of the code is contained in the following:

// Child controls rendered by this web part
private Label lblSubmit;
private DataGrid gridProductList;

protected override void Render(HtmlTextWriter writer)
{
    EnsureChildControls(); // makes sure the child controls were created

    LoadData();

    writer.RenderBeginTag("table");

    // First row: the label
    writer.RenderBeginTag("tr");
    writer.RenderBeginTag("td");
    lblSubmit.RenderControl(writer);
    writer.RenderEndTag(); // td
    writer.RenderEndTag(); // tr

    // Second row: the grid
    writer.RenderBeginTag("tr");
    writer.RenderBeginTag("td");
    gridProductList.RenderControl(writer);
    writer.RenderEndTag(); // td
    writer.RenderEndTag(); // tr

    writer.RenderEndTag(); // end table
}

protected override void CreateChildControls()
{
    base.CreateChildControls();

    lblSubmit = new Label();
    lblSubmit.Text = "Employee List";
    Controls.Add(lblSubmit);

    gridProductList = new DataGrid();
    Controls.Add(gridProductList);
}


Note the usage of RenderBeginTag and RenderEndTag, which generate the matching start and end tags. Within each of the tags, we render the specific control. As seen, the DataGrid (gridProductList) gets rendered in the appropriate table column within an HTML row.

The LoadData() function referred to in Render() basically loads the data into the DataGrid using standard data access calls. Any child control the WebPart uses should ideally be created in the CreateChildControls method. Though we might as well do this when the WebPart gets created, writing it here ensures that EnsureChildControls() calls it when required.
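As an illustration, a minimal LoadData() could look like the following - the connection string, table and columns are purely hypothetical, and you would need System.Data and System.Data.SqlClient in the 'using' section:

private void LoadData()
{
    // Hypothetical connection string and query - replace with your own data source
    string connectionString = "Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI";
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        SqlDataAdapter adapter = new SqlDataAdapter("SELECT EmployeeId, Name FROM Employees", connection);
        DataTable employees = new DataTable();
        adapter.Fill(employees); // Fill opens and closes the connection for us

        // Bind the result set to the grid created in CreateChildControls()
        gridProductList.DataSource = employees;
        gridProductList.DataBind();
    }
}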

I hope this gives you the gist of how WebParts are created and used. In the next parts, we shall look at custom editors and inter-WebPart communication.

Monday, 25 February 2008

Webparts in MOSS 2007 - Part 1

WebParts can be considered as reusable widgets (similar to the Yahoo! ones) which work independently (though they can communicate) and are individually configurable. This makes it easier to show different logical sections on the same web page, especially while doing independent development. First brought out with SharePoint 2003, they were then supported extensively in ASP.NET and are now in MOSS 2007.

OOB WebParts
Out of the box, MOSS provides you with numerous WebParts which can be used with minimal configuration. You would want to check out the WebPart gallery, which lists these down.

Some of the interesting OOB WebParts:
Image WebPart : To display images from sites/from another WebPart
Site Aggregator : To display sites of your choice
RSS Viewer : To get feeds from any RSS source
Content Query : To display a content type
Business Data List : List from an LOB system/web service configured in the BDC

Development Concerns
As soon as you start to swim through WebPart documentation, you are faced with the dilemma of two parent WebPart classes which provide more or less the same functionality. Which one do you use? From what I could figure out, System.Web.UI.WebControls.WebParts.WebPart is the most commonly used for WebPart development and the simplest.


The immediate descendant class defined in Microsoft.SharePoint.WebPartPages.WebPart was intentionally provided for compatibility with SharePoint 2003 and also for complex WebPart functionality such as cross-page WebPart communication, communication between WebParts not in the same WebPartZone, data caching etc.

The good part is, if you use the System.Web.UI... WebPart, you could reuse the WebPart in an ASP.NET application too, assuming your WebPart does not have any MOSS-specific calls. To summarise, stick to the System.Web...WebPart for most of your regular requirements.


Classes of concern
WebPartManager - Microsoft (and now me) loves manager and provider classes. In this case, WebPartManager acts as the point of entry to access the various features of the various WebParts on a page. This means that there is exactly one WebPartManager per web page. If you are using the master page provided by MOSS 2007, the WebPartManager is already available to you (things get different when you do plain ASP.NET development). A small sketch of getting hold of it from code follows the class descriptions.


WebPartZone - This is the physical place/zone on the page where WebParts reside.
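As an aside, if you ever need a handle to the WebPartManager from your own code (say inside a control sitting on a WebPart page), the static helper on the ASP.NET class can be used - a minimal sketch:

using System.Web.UI.WebControls.WebParts;

// Inside a control or page that lives on a web part page
WebPartManager manager = WebPartManager.GetCurrentWebPartManager(this.Page);
if (manager != null)
{
    int count = manager.WebParts.Count; // e.g. how many web parts are on this page
}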

Writing the first WebPart

Now that we have decided on the parent WebPart class to use for our WebPart, the main job we have is to tell it what needs to be rendered. Simple enough, override the Render function :)

public class MyFirstWebPart : System.Web.UI.WebControls.WebParts.WebPart
{
    private string displayText = "MOSS Rocks";

    [WebBrowsable(true), Personalizable(true), FriendlyName("Display Text")]
    public string DisplayText
    {
        get { return displayText; }
        set { displayText = value; }
    }

    protected override void Render(System.Web.UI.HtmlTextWriter writer)
    {
        writer.Write("Typed Text is " + displayText);
    }
}


OK, so who is going to talk about the attributes? Here goes:

WebBrowsable - makes sure that this property is listed in the property editor when the WebPart is configured.

Personalizable - Setting a value of true on this attribute makes sure that the property value is maintained for individual users.
FriendlyName - The easiest of the lot, this displays the friendly text for this property in the property editor.

Deploying a WebPart
There are two ways to deploy a WebPart to the server:
a.) Direct copy/register: Copy the WebPart DLL to the _app_bin folder of your application and mark the assembly as safe in web.config (a sample SafeControl entry is shown below). There you go, the WebPart is all set up in your current application and ready to use.



b.) CAB: This is the recommended way to deploy WebParts in a live environment. What you do is create a DWP (dashboard web part) XML file together with a manifest file and wrap them up in a CAB project. Use this CAB together with the stsadm command line tool at the server to register your WebPart (a sample DWP and stsadm call are shown below). Check out more about DWP files here.
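To make both options concrete, here are rough sketches - the assembly name, namespace, public key token and file names are all placeholders. For option (a), the 'safe' registration is a SafeControl entry under the SafeControls section in web.config:

<SafeControl Assembly="MyWebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123placeholder"
             Namespace="MyWebParts" TypeName="*" Safe="True" />

For option (b), a bare-bones DWP file describing the WebPart looks roughly like this (ASP.NET-style WebParts can also use the newer .webpart format), after which the CAB is registered at the server with stsadm:

<?xml version="1.0"?>
<WebPart xmlns="http://schemas.microsoft.com/WebPart/v2">
  <Title>My First Web Part</Title>
  <Assembly>MyWebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123placeholder</Assembly>
  <TypeName>MyWebParts.MyFirstWebPart</TypeName>
</WebPart>

stsadm -o addwppack -filename MyWebPartPackage.cab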

In the next post, we shall look at rendering some meaningful stuff in the Render method, property editors, and inter-WebPart communication.

Wednesday, 28 November 2007

MS Sync Framework

To prove that they are everywhere, Microsoft has now brought out a framework just for synchronising stuff across two domains. Based on a provider model, it could be extended to sync any data (files, tables, etc.). Sync Services for ADO.NET is one provider which you could readily use to sync data from your client machine to the DB server. The other one readily available is the file services provider.

Check out the classes in Microsoft.Synchronization.Data.SqlServerCe to start with this and also check out the ADO.NET BOL.
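As a rough sketch of what a client-to-server sync with the ADO.NET provider boils down to (the server-side provider setup is assumed to be done elsewhere, and the client database path is a placeholder):

using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServerCe;

// client.sdf is a hypothetical SQL Server CE database on the client machine
SyncAgent agent = new SyncAgent();
agent.LocalProvider = new SqlCeClientSyncProvider("Data Source=client.sdf");
agent.RemoteProvider = serverProvider; // an already configured DbServerSyncProvider pointing at the DB server
SyncStatistics stats = agent.Synchronize(); // performs the actual sync and returns the statistics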

SLP Services

Security services in .NET go to the next level now with the introduction of these services. Obfuscation has until now been one of the most commonly used methods to hide your source code, while SLP appears to use a new approach altogether.

The new set of keywords to learn for the day include:

SLP (Software Licensing and Protection) - The service itself.

SVML (Secure Virtual Machine Language) - Similar to MSIL; bits of code which have been transformed.

SVM (Secure Virtual Machine) - To achieve code transformation, you select the parts of the application you want to secure. What SLP does at this point is include an SVM, with the many transformed SVML blocks, as part of the application. When the assemblies are consumed by the client, these SVML blocks execute in their own SVM. To complicate things further for our hacker, each software vendor would supposedly get an SVM with a unique permutation. Effectively, the same code transformed by one vendor would not be readable by another vendor. Definitely something to watch out for.

In addition to the code protection, MS appears to have integrated product licensing and feature-level activation into this service, making it a complete security solution for .NET applications.

Code Protector SDK - You could use this SDK to transform your code into SVML using your custom permutation. Check out the Microsoft.Licensing namespace.

SLP Server - This server application could be used to manage the 'feature' activation through a web service and also perform customisation of packages. Instead of buying (and maintaining) this product, you could instead subscribe to the SLP online service provided by MS.

Monday, 15 October 2007

Audience in MOSS

Concept

The Audience feature in MOSS should not be mixed up with security features or trimming of data based on user credentials and the resulting authorization. Instead, 'audience' lets you filter out undesirable data for the current user context. E.g. you would want to hide the sales data on the home page if the current user is not part of the sales team; this makes sure the marketing (or other) teams are not overloaded with unwanted information.

In MOSS, the idea is to set up a user/audience list such that WebParts/lists can later show/hide the required information. This is supported out of the box.

Setting Up an Audience

From a Shared Service Provider, you could add new audiences by creating rules using Windows groups, distribution lists or any of the properties available against a user (name, address, department, manager etc.).

To extend this audience definition, we could have a custom property (say 'Day/Night Shift') added against a user from the Shared Services->User Profile settings. This makes sure that we can create rules based on this new custom property to show items specific to employees working in the day shift.

Consuming an Audience

Out of the box, SharePoint lets you apply audience targeting for the following items:

a.) WebParts - While designing the WebPart, you could specify the target audience for it. In this case, the WebPart would be rendered only if the current user is part of the target audience. In all other cases, the WebPart would not be visible/rendered.

b.) Lists - While designing a list, you could specify that the list needs to have audience targeting available. In this case, you could use the Content Query WebPart to filter out data.


Consuming Audience programmatically

AudienceManager class - Acts as the entry point into the entire audience functionality in MOSS. This class also implements the IRuntimeFilter interface to perform the targeting functionality for WebParts/lists.

For a custom user control to consume the audience functionality, one of the ways is to set up an audience list for the custom control (a new property perhaps) such that at runtime, the control can use the AudienceManager.IsCurrentUserInAudienceOf() method to check if the active user is part of the audience previously set up. Note that in this case, it's up to the control developer to implement the required functionality of hiding/filtering data etc.
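As a rough sketch of that idea (the audience name is hypothetical, and the exact API shape should be verified against the Microsoft.Office.Server.Audience documentation):

using System.Web;
using Microsoft.Office.Server;
using Microsoft.Office.Server.Audience;
using Microsoft.SharePoint;

// Get the audience manager for the current request
ServerContext context = ServerContext.GetContext(HttpContext.Current);
AudienceManager audienceManager = new AudienceManager(context);

// 'Day Shift Employees' is a hypothetical audience created earlier from the SSP
Audience dayShift = audienceManager.GetAudience("Day Shift Employees");
bool show = dayShift != null && dayShift.IsMember(SPContext.Current.Web.CurrentUser.LoginName);

// It is up to the control to hide/filter its data based on 'show'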

Sunday, 22 July 2007

Talking with Exchange Server

The usual requirements of talking with Outlook can be handled by the rather extensive object model which Outlook provides. Now, if you want to interact with Outlook from a server-based application (say ASP.NET or a remoting host), using the object model might not be the right solution since you need the Outlook client installed, you might have to configure individual profiles, etc. A better approach could be to talk directly to the Exchange server.

To talk with Exchange, the following approaches seem to be available (beware: even after you select your preferred API and talking channel, you could easily get lost in the n versions of the library, one for each of the Outlook versions):

1.) CDO-EX objects:
Of the various versions of CDO, the version for Exchange - CDOEX - could be used to manage components in the Exchange server. The only issue here is that the application consuming CDOEX needs to be on the same machine as the server. CDO 1.2.1 does seem to let you access Exchange servers remotely, but I could not get it to install on a machine without Outlook 2007 :(

Note that as of Outlook 2007, it appears CDO is being provided as a separate download.

2.) WebDAV
The slowest of the lot and the most difficult to understand; it uses plain HTTP requests in an XML format to perform each action. The convenience of this method (you can use it remotely too) usually outweighs the speed and the learning curve.

WebDAV notifications using HTTPU are interesting in that you get notifications from the remote server via UDP messages. A simple explanation with an example is available at infinitec.de

3.) Exchange OLE DB
An OLE DB provider for Exchange sounds like the best possible way to talk with the Exchange server. Sadly, your happiness ends when MSDN tells you that the application consuming this driver needs to be on the same server as Exchange. Err!

Effectively, if performance is your main concern, your preference should be to go for CDO/OLE DB/WebDAV (in that order). Perhaps a future release of the Exchange API/SDK might contain a Microsoft.Exchange.Server.Core assembly to talk directly and easily.

I shall talk about using WebDAV within a C# application in detail in one of the upcoming posts.
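In the meantime, here is a minimal sketch of the WebDAV approach from (2) - issuing a SEARCH request against a mailbox folder with HttpWebRequest. The server URL, mailbox, credentials and query are placeholders, and error handling is omitted:

using System;
using System.IO;
using System.Net;
using System.Text;

// Hypothetical mailbox folder URL on the Exchange server
string folderUrl = "http://exchangeserver/exchange/jdoe/Inbox/";

// WebDAV SEARCH body - asks for the subject of each item in the folder
string searchRequest =
    "<?xml version=\"1.0\"?>" +
    "<D:searchrequest xmlns:D=\"DAV:\">" +
    "<D:sql>SELECT \"urn:schemas:httpmail:subject\" FROM \"" + folderUrl + "\"</D:sql>" +
    "</D:searchrequest>";

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(folderUrl);
request.Credentials = new NetworkCredential("jdoe", "password", "DOMAIN");
request.Method = "SEARCH";
request.ContentType = "text/xml";

// Write the XML body onto the request stream
byte[] body = Encoding.UTF8.GetBytes(searchRequest);
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(body, 0, body.Length);
}

// The response is a multistatus XML document to be parsed for the subjects
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string resultXml = reader.ReadToEnd();
}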


Saturday, 14 July 2007

SSW Code Auditor - a review

A quick search for code standards review tools for C# led me to SSW Code Auditor. Among others (FxCop, Standards Master 2005, FMS Total...), this tool appeared to be something easy for an average developer to use from the first day.

Details

Once the trial version is downloaded, the first thing which strikes you is the pictures of all kinds of fruits (yes! apple, the sign of health, to start with). The GUI tries to be very straightforward using a wizard kind of interface but is not effective. It would take at least another 10 minutes before you realise that the 'database' is effectively a kind of project where you add each subsection to be tested as a 'job'. Not sure why this isn't just a project file with a list of jobs within it, such that I can create multiple projects using File->New?

Anyway, once you add your list of folders and files which need to be audited, you get to select the rules you want to be tested. The trial version appears to have 147 rules of all kinds enabled. Perhaps new standard rules will be added periodically by SSW as a rule-update file?

I could not add a new rule or edit a rule in this trial version. It would have been good if the trial version let you create one custom rule - just to check out things. A fully functional version which works for a particular time period is recommended for bringing out trial versions of utilities.

Within a normal wizard layout, the usual tendency is to click next, next, next... finish. One non-standard UI design was the start/skip buttons within one of the wizard pages. These buttons are the ones which check the selected files against the selected rules. A better UI design would have been to put the start/stop/skip buttons in place of the back/next/cancel buttons.

The browser-rendered result page tells you whether the application is healthy or not (with images varying from an apple to a burger to denote this!) and gives the detailed list of issues it located. Thankfully, the results can be arranged by file names such that I can see all the issues my particular class has.

What strikes you while you use this application is the language of the messages contained in the forms and the reports. It's just simple and communicates well to the developer. The report tells you what is wrong in plain, simple English with a quick tip. Great, when you think about the rather complex messages from FxCop.

The other nifty functions included emailing the results, scheduling the tests (again not available in the trial version), creating a batch file which you could execute from the command line, and also performing a test of the just checked-in file (with Team Foundation Server). This feature would be really good - the developer would get the list of issues with his file as soon as it's checked in - great.

In addition to the standalone application, the VS.NET plugin is what you would use on a daily basis. The plugin makes the distinction between FxCop and Code Auditor obvious when it lets you select assemblies with FxCop and source code with itself. Sadly, I could not get it to test just my active source file; it had to perform the test on the whole project each time.

The VS.NET plugin also appears to add two files (one for FxCop, another for itself) into an individual solution item folder for each of the projects in the solution. This definitely appears to clutter up the solution explorer. What would have been a lot better is a single solution item with all the files for all the projects within the active solution.

To summarise, once you get the hang of this no-nonsense tool, it should be a pretty good companion during your daily development activity. Perhaps the next versions might also fix the obvious errors automatically.

All those fruits, from apples and bananas to strawberries, have definitely made me hungry! I think I am off to the kitchen.