Posted on 9. August 2014

SharePoint Integration Reloaded – Part 2

Part 1 of this series described how the SharePoint server-to-server integration (new to CRM2013 SP1) works from the client interface perspective. In this post I'd like to share a bit more about how it all works from the server side.

Authentication

When this feature was introduced my first question was about authentication. Having created a fair number of solutions that integrated SharePoint with Dynamics CRM on the server side, I knew that this is a tricky area. Because this feature is only available for CRM Online to SharePoint Online within the same tenant, authentication is slightly simpler: there is already an existing trust in place between the two servers which allows Dynamics CRM to authenticate with SharePoint and act as the calling user. The integration component inspects the HTTP request that comes from the client and uses the UPN to authenticate with SharePoint as the same user, rather than as a service account. Acting on behalf of the user is critical because when documents are created, checked in/out or queried, the operation must be performed under the account of the user and not a system account. Perhaps even more importantly, when CRM queries for documents it will only return those that the user has access to as configured in SharePoint.

If this feature is made available for On Prem customers I would expect that a configuration would have to be made available to provide the user's SharePoint username and password to use when performing server side operations.

Query SharePoint Documents

The new SharePoint sub grid that is rendered by CRM actually uses exactly the same query mechanism as any other entity – but rather than the query being sent to the CRM database, it is handled by the SharePoint query handler. If you fire up Advanced Find you'll see a new entity named 'Documents', but if you query against this entity you will get an error:

The error occurs because the FetchXml-to-CAML conversion only works if a specific regarding object is provided – this means that you can only return records for a specific folder, rather than all documents in all document locations. When the refresh button is clicked on the client sub-grid, FetchXml similar to the following is executed:

<fetch distinct="false" no-lock="true" mapping="logical" page="1" count="50" returntotalrecordcount="true" >
    <entity name="sharepointdocument" >
        <attribute name="documentid" />
        <attribute name="fullname" />
        <attribute name="relativelocation" />
        <attribute name="sharepointcreatedon" />
        <attribute name="ischeckedout" />
        <attribute name="filetype" />
        <attribute name="fullname" />
        <attribute name="modified" />
        <attribute name="sharepointmodifiedby" />
        <attribute name="relativelocation" />
        <attribute name="documentid" />
        <attribute name="modified" />
        <attribute name="fullname" />
        <attribute name="title" />
        <attribute name="author" />
        <attribute name="sharepointcreatedon" />
        <attribute name="sharepointmodifiedby" />
        <attribute name="sharepointdocumentid" />
        <attribute name="filetype" />
        <attribute name="readurl" />
        <attribute name="editurl" />
        <attribute name="ischeckedout" />
        <attribute name="absoluteurl" />
        <filter type="and" >
            <condition attribute="regardingobjecttypecode" operator="eq" value="1" />
            <condition attribute="regardingobjectid" operator="eq" value="{1EF22CCD-9F19-E411-811D-6C3BE5A87DF0}" />
        </filter>
        <order attribute="relativelocation" descending="false" />
    </entity>
</fetch>

The interesting part here is that we can filter not only by regarding object but could also add our own filters for name or document type. Initially I was confused because running this Fetch query in the XrmToolBox FetchXml tester gave no results, but it turns out that tool uses ExecuteFetchRequest rather than RetrieveMultiple, and this new SharePoint integration is only implemented on the latter.
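To run such a query yourself you must therefore go through RetrieveMultiple. A minimal sketch, assuming an already-connected IOrganizationService (`service`) and the regarding record Guid from the query above:

```csharp
// Query SharePoint documents for a specific regarding record.
// The FetchXml is converted to CAML by the server-side query handler,
// so a regarding object condition is mandatory.
string fetchXml = @"
<fetch>
  <entity name='sharepointdocument'>
    <attribute name='fullname' />
    <attribute name='relativelocation' />
    <filter type='and'>
      <condition attribute='regardingobjecttypecode' operator='eq' value='1' />
      <condition attribute='regardingobjectid' operator='eq'
                 value='{1EF22CCD-9F19-E411-811D-6C3BE5A87DF0}' />
    </filter>
  </entity>
</fetch>";

// RetrieveMultiple (not ExecuteFetchRequest) is the code path that the
// SharePoint integration implements.
EntityCollection documents = service.RetrieveMultiple(new FetchExpression(fetchXml));

foreach (Entity doc in documents.Entities)
{
    Console.WriteLine(doc.GetAttributeValue<string>("fullname"));
}
```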

Internal Execute Messages

This new server-to-server functionality is exposed by a set of internal messages that are not documented in the SDK, but by using Fiddler (the tool that gives you super powers!) you can see these messages being called from the client when operations such as Check In/Check Out are invoked. Here is a list of these internal messages:

RetrieveMultipleRequest (sharepointdocument)
Returns a list of document names from SharePoint for a specific document location. The query is converted into SharePoint CAML on the server and supports basic filter criteria and sorting.

NewDocumentRequest
Creates a new document location and matching SharePoint folder; called when the documents sub grid is first shown with no document locations configured. Parameters:
FileName – name of the file to create, including the extension (e.g. NewDocument.docx)
RegardingObjectId – Guid of the record that the document location belongs to
RegardingObjectTypeCode – object type code of the record that the document location belongs to
LocationId – ID of the document location to add the new document to (in case there are multiple)

CheckInDocumentRequest / CheckOutDocumentRequest / DisregardDocumentCheckoutRequest
These perform the check in/out operations on a specific document in SharePoint. Parameters:
Entity – the document to check in/out, with the 'documentid' property populated with the List Id in SharePoint
CheckInComments – comments to record against the check in
RetainCheckOut – whether the document should remain checked out after check in

CreateFolderRequest
Creates a new document location in CRM and the corresponding SharePoint folder. Parameters:
FolderName – the name to give to the SharePoint folder
RegardingObjectId – Guid of the record that the document location belongs to
RegardingObjectTypeCode – object type code of the record that the document location belongs to

Now, before you get too excited, you can't use these requests on the server because you will get the error 'The request <Request Name> cannot be invoked from the Sandbox.' (Yes, I did try!) This is expected, since the sandbox does not have access to the HTTP context that contains the information about the calling user, and so the authentication with SharePoint cannot take place.

I proved this using a Custom Workflow Activity that tried to call 'CreateFolder', which produced exactly that error.
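For reference, the attempt looked something like this – a sketch using a generic OrganizationRequest, with the message and parameter names taken from the table above (these messages are undocumented, and this call fails with the sandbox error when run from a plugin or workflow activity):

```csharp
// Attempt to call the internal CreateFolder message from a sandboxed
// workflow activity. The values below are illustrative; 'service' is the
// IOrganizationService obtained from the execution context.
var request = new OrganizationRequest("CreateFolder")
{
    Parameters = new ParameterCollection
    {
        { "FolderName", "My Custom Folder" },
        { "RegardingObjectId", new Guid("1EF22CCD-9F19-E411-811D-6C3BE5A87DF0") },
        { "RegardingObjectTypeCode", 1 } // account
    }
};

// Throws: 'The request CreateFolder cannot be invoked from the Sandbox.'
service.Execute(request);
```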

These requests can, however, be called easily from JavaScript, which opens up some interesting possibilities (if a little unsupported, because these messages are not actually documented in the SDK at the moment):

  1. Automatically create a document location that uses a different naming convention from the standard one, via JavaScript in the onload event of a record, if one doesn't already exist.
  2. Provide a custom view of SharePoint documents using FetchXml – this could even be filtered to show just a particular file type by adding a condition similar to <condition attribute="filetype" operator="eq" value="jpeg"/>
  3. Provide custom buttons to create documents in SharePoint.

I hope you've found this interesting. Next time I'll show you how to get around the sandbox limitation to perform server-side operations on SharePoint from a CRM plugin or workflow activity.

@ScottDurow

Posted on 9. August 2014

Early Binding vs Late Binding Performance (Revisited)

After an interesting debate on the CRM Community forums about the performance of Early versus Late Bound entities, my friend Guido Preite pointed me at a good blog post on this subject by James Wood named 'CRM 2011 Early Binding vs Late Binding Performance'. I have always been an advocate of Early Bound types, but it is true that the SDK still states in 'Best Practices for Developing with Microsoft Dynamics CRM':

"…use of the Entity class results in slightly better performance than the early-bound entity types"

However, it also states that the disadvantage of using the late-bound Entity type is:

"…you cannot verify entity and attribute names at compile time"

I've seen many bugs introduced into code through the use of late bound types, because typos can easily creep into the strings that determine the entity and attribute logical names. Due to the productivity gains that come with Early Bound types, I always recommend their use if the schema of your entities is known at compile time. There are times when this is not true, or when you are creating code that must run in a configurable way on many different entities or attributes, in which case the late bound Entity type is the only choice.

So what about performance?

  1. The SDK states that Late Bound types give 'slightly' better performance, citing 'serialization costs' as the reason.
  2. James' post states a 30% increase in speed for 200 Create operations, and a <5% increase for 1500 operations.

So addressing each of these points in turn:

Before CRM2011 there were serialization costs in using early bound types, because they were serialized as part of the web service call. With CRM2011/2013, however, the early bound types just inherit from the Entity class, and the early bound attribute properties simply set/get values in the underlying attribute collection. The serialization when making SDK calls is therefore effectively the same for both early and late binding. The main difference is the extra work that the OrganizationServiceProxy has to do when converting the early bound type to the Entity type, and then back again when the response is received from the server. This is done using reflection: first to find the assembly containing the early bound classes, and then to find the class that matches the logical name received. This obviously has a cost, but it appears to be incurred once per service channel and then cached to avoid any further cost.
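You can see why the serialization is identical by looking at what a generated early bound property actually does. This is a simplified sketch, not the exact output of CrmSvcUtil, but the shape is the same:

```csharp
// Simplified sketch of a generated early bound class: the typed property is
// just a wrapper over the same attribute collection the Entity class uses,
// so the payload sent over the wire is identical.
public class Account : Entity
{
    public const string EntityLogicalName = "account";

    public Account() : base(EntityLogicalName) { }

    public string Name
    {
        get { return GetAttributeValue<string>("name"); }
        set { SetAttributeValue("name", value); }
    }
}
```

In other words, `new Account { Name = "Test" }` and `new Entity("account")` with `e["name"] = "Test"` produce the same attribute collection; only the proxy's reflection-based type conversion differs.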

James' tests are interesting but perhaps a bit misleading because the initial cost of this additional work is included in his overall speed calculations. This is probably why the overall percentage cost of early binding goes down as the number of records increases.

To remove this initial cost from the equation I adapted his code to introduce a warm-up period. In my tests I couldn't categorically show any performance difference between Late Bound and Early Bound once the OrganizationService was 'warmed up', with all reflection done and cached. In fact, sometimes the tests showed that early bound was quicker, which leads me to believe that the main influencing factor lies somewhere else, such as database or server performance. To make the results easier to interpret I have simply shown the average operation time after the warm-up period. I also separated the tests so that the Early Bound types were not compiled into, and picked up by, the Late Bound test.

Each test consisted of a warm-up of creating 400 records and a run of creating 500 records.

Conclusions

Whilst it is true that using Early Bound classes incurs the cost of some additional 'plumbing', assuming that you are caching the WCF service channels (which Microsoft.Xrm.Client.Services.OrganizationService does for you) the difference in speed is so small that there really is no reason not to use the Early Bound classes, unless you have performance-related issues and want to eliminate this as the cause.

If you are interested, here is the code I used (based on James' code):

using System;
using System.Diagnostics;
using Microsoft.Xrm.Client;
using Microsoft.Xrm.Client.Services;
using Microsoft.Xrm.Sdk;

static void Main(string[] args)
{       
    int warmupCount = 400;
    int runCount = 500;
 
    CrmConnection connection = new CrmConnection("Xrm");
    var service = new OrganizationService(connection);
    
    CreateAccounts("Early Bound Test", warmupCount, runCount, () =>
    {
        Account a = new Account();
        a.Name = "Test Early Vs Late";
        service.Create(a);
    });


    CreateAccounts("Late Bound Test", warmupCount, runCount, () =>
    {
        Entity e = new Entity("account");
        e["name"] = "Test Early Vs Late";
        service.Create(e);
    });

    TidyUp(); // removes the accounts created by the tests (from James' original sample)

    Console.WriteLine("Finished");
    Console.ReadKey();
}

static void CreateAccounts(String name, int warmup, int runs, Action action)
{
    Console.WriteLine("\n" + name);
    // Warm Up
    for (int i = 1; i <= warmup; i++)
    {
        if (i % 10 == 0)
            Console.Write("\r{0:P0}     Warmup   ", (((decimal)i / warmup) ));
        action();
    }

    // Run Test
    double runningTotal = 0;
    Stopwatch stopwatch = new Stopwatch();
    for (int i = 1; i <= runs; i++)
    {
        stopwatch.Reset();
        stopwatch.Start();
        action();
        stopwatch.Stop();
        runningTotal += stopwatch.ElapsedMilliseconds;
      
        double runningAverage = runningTotal / i;
        if (i % 10 == 0)
            Console.Write("\r{0:P0}     {1:N1}ms   ", (((decimal)i / runs)), runningAverage);
    }
}
Posted on 4. August 2014

SharePoint Integration Reloaded – Part 1

Back in the day when CRM2011 was first in beta I blogged about the exciting SharePoint integration and how it works. This post is about the new server-side SharePoint integration that is now available as part of CRM2013 SP1 Online.

There have already been some good posts on how to set up SharePoint server-side sync, but in this series I'm going to explain how the server-to-server integration works in more detail and run through some scenarios of how it can be used in custom solutions.

CRM List Component Integration

Before CRM2013 SP1 was released, the only option for SharePoint integration was to use the CRM List Component. Each document location was surfaced on a record form via an IFRAME that placed a SharePoint page inside the CRM page by way of the list component aspx page. This SharePoint page rendered the document library's default view with a CRM theme and provided the upload/download commands.

  1. The CRM form page is displayed in the browser and includes an IFRAME that requests the configured document library page from SharePoint.
  2. The IFRAME shows the SharePoint document library styled to look like CRM. This requires the user to be independently authenticated with SharePoint.
  3. Using any of the actions in the page (New/Upload etc.) sends requests directly to SharePoint.

Changing Landscape

This approach worked well, but since the user was accessing SharePoint directly within the IFRAME they'd sometimes encounter authentication issues – they had to be authenticated with SharePoint first – and sometimes SharePoint needed to be configured to allow the inclusion of its content in IFRAMEs. In addition, the list component required a sandbox host to run, a feature that is being phased out in SharePoint Online.

Server to Server SharePoint Integration (S2S)

With the introduction of CRM2013 SP1, a new type of integration has been developed that provides direct server-to-server integration between SharePoint and CRM, thus removing the need for the user to be pre-authenticated with SharePoint on the client.

  1. The record page includes a standard sub grid that is populated using the CRM entity query object model. CRM converts a RetrieveMultiple request on the SharePoint Document entity into a SharePoint CAML (Collaborative Application Markup Language) query and sends it to the SharePoint web services. The important part here is that this query is run in the context of the currently logged-on user, and so they only see the documents that they have access to in SharePoint (more on how this works in part 2 of this series).
  2. Documents are rendered inside the CRM Form HTML as a standard sub grid in the same way that any other record might be displayed.
  3. Using the New/Upload command bar buttons sends a request to CRM by way of an Execute Request in the same way that any other command bar buttons might do.
  4. CRM uses the SharePoint Web Service API to execute the requests and refreshes the sub grid.

This server-to-server integration only works for CRM Online/SharePoint Online combinations in the same tenant, due to the nature of the server-to-server authentication, and can be turned on in the Document Management Settings using the 'Enable server-based SharePoint integration' option. There is a note that states that sandboxed solutions will not be supported in the future for SharePoint Online.

Differences between List Component and Server-to-Server

Once S2S integration is enabled you'll see a similar view to the list component, but it looks far more CRM2013-like. Apart from a slicker interface there are a few other differences:

Folder Support

The S2S sub grid doesn't support folders within the document library, so all documents are flattened down underneath the document location folder. The Location column does give you the folder name, which you can sort by to allow grouping by folder.

Custom Views

The great thing about having the documents queried by CRM is that you can create custom views of documents in the same way you would for any other entity in CRM. When using the list component, the default view in SharePoint was rendered in the IFRAME, meaning that to get new columns you had to have list customisation privileges in SharePoint, and all users would see the changes. With the new server-to-server integration you can select SharePoint columns to include in your own views and even add your own filters using the CRM Advanced Find interface. If you think about it – this is very cool!

Item Actions

The List Component was by nature very similar to the SharePoint list user interface, and so it had more or less full support for the actions that can be performed from SharePoint (with the exception of workflow operations). The server-to-server sub-grid provides all the main functions, but some options such as Alert Me, Send Shortcut, View History and Download a Copy are unavailable.

The S2S integration gives the following command bar actions:

This is in comparison to the List Component actions that are as shown below.

Inline Dialogs

With CRM2013's single-page user experience, pop-out windows are supposed to be kept to a minimum. When using the list component, operations such as check-in/out would always pop out a new window, but with S2S integration an inline dialog is shown instead. This really makes it feel tightly integrated and slick.

Of these differences, the lack of folder support is the only one that has had any significant effect on my solutions, but it can actually be seen as an advantage if you were using sub-folders to hold child entity documents: all documents are now visible from the parent record's document view, rather than relying on the user drilling down into each folder to see the content.

That's all for now but in the next article in this series I'll show you more of how this functionality works under the covers.

Read Part 2

@ScottDurow

Posted on 2. August 2014

Polymorphic Workflow Activity Input Arguments

I often find myself creating 'utility' custom workflow activities that can be used on many different types of entity. One of the challenges with writing this kind of workflow activity is that InArguments can only accept a single type of entity (unlike activity regarding object fields).

The following code works well for accepting a reference to an account, but if you want to accept account, contact or lead you'd need to create three input arguments. If you wanted the parameter to accept a custom entity type that you don't know about when writing the workflow activity, you'd be stuck!

[Input("Entity Reference")]
[ReferenceTarget("account")]
public InArgument<EntityReference> EntityReference { get; set; }

There are a number of workarounds to this that I've tried over the years, such as starting a child workflow and using the workflow activity context, or creating an activity record and using its regarding object field – but I'd like to share with you the best approach I've found.

Dynamics CRM workflows and dialogs have a neat feature of being able to add hyperlinks to records in emails/dialog responses etc., which is driven by a special attribute called 'Record Url(Dynamic)'.

This field can also be used to provide all the information we need to pass an entity reference.

The sample I've provided is a simple workflow activity that accepts the Record Url and returns the Guid of the record as a string, together with the entity logical name. This isn't much use on its own, but you'll be able to use the DynamicUrlParser.cs class in your own workflow activities.

[Input("Record Dynamic Url")]
[RequiredArgument]
public InArgument<string> RecordUrl { get; set; }
The DynamicUrlParser class can then be used as follows:

var entityReference = new DynamicUrlParser(RecordUrl.Get<string>(executionContext));
RecordGuid.Set(executionContext, entityReference.Id.ToString());
EntityLogicalName.Set(executionContext, entityReference.GetEntityLogicalName(service));
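The full DynamicUrlParser class is in the sample download, but the core of it is just query string parsing. Here is a minimal sketch of the idea (the class and member names below are illustrative, not the exact sample code):

```csharp
// Minimal sketch of what DynamicUrlParser does. A 'Record Url (Dynamic)'
// value looks like https://org.crm.dynamics.com/main.aspx?etc=1&id=%7B...%7D
// so we pull out the 'id' (record Guid) and 'etc' (object type code)
// query string parameters.
public class ParsedRecordUrl
{
    public Guid Id { get; private set; }
    public int ObjectTypeCode { get; private set; }

    public ParsedRecordUrl(string url)
    {
        string query = new Uri(url).Query.TrimStart('?');
        foreach (string pair in query.Split('&'))
        {
            string[] parts = pair.Split('=');
            if (parts.Length != 2) continue;

            string value = Uri.UnescapeDataString(parts[1]);
            if (parts[0] == "id")
                Id = new Guid(value); // the Guid constructor accepts the {…} braces
            else if (parts[0] == "etc")
                ObjectTypeCode = int.Parse(value);
        }
    }
}
```

Resolving the object type code to an entity logical name requires a metadata query against the organization service, which is why GetEntityLogicalName in the sample takes the service as a parameter.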

 

The full sample can be found in my MSDN Code Gallery.

@ScottDurow