Posted on 6. June 2014

Monitor, Monitor, Monitor

I once heard someone say that "the great thing about Dynamics CRM is that it just looks after itself". Whilst CRM2013 is certainly very good at performing maintenance tasks automatically, if you have a customised system it is important to Monitor, Monitor, Monitor! There are some advanced ways of setting up monitoring using tools such as System Center, but even a few regular, simple monitoring tasks will go a long way for very little investment on your part:

1) Plugin Execution Monitoring

There is a super little entity called 'Plug-in Type Statistics' that often seems to be overlooked in the long list of Advanced Find entities. This entity is invaluable for tracking down issues before they cause problems for your users. As defined by the SDK, it is "used by the Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online platforms to record execution statistics for plug-ins registered in the sandbox (isolation mode)."

The key here is that it only records statistics for your sandboxed plugins. Unless there is a good reason not to (security access etc.) I would recommend that all of your plugins be registered in sandbox isolation. Of course, Dynamics CRM Online only allows sandboxed plugins anyway, so you don't want to put up barriers to moving to the cloud.

To monitor this you can use Advanced Find to show a list sorted by execution time or failure count descending:

If you spot any issues you can then proactively investigate them before they become a problem. In the screenshot above there are a few plugins that are taking more than 1000ms (1 second) to execute, but their execution count is low. I look for plugins that have a high execution count and high execution time, or those that have a high failure percentage.
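If you would rather pull these statistics programmatically than via Advanced Find, here is a minimal sketch using the SDK's QueryExpression (not from the original post – it assumes 'service' is an existing IOrganizationService and uses the plugintypestatistic attribute logical names from the SDK, which you should verify against your organisation's metadata):

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Retrieve plug-in statistics, slowest average execution time first
var query = new QueryExpression("plugintypestatistic")
{
    ColumnSet = new ColumnSet("plugintypeid", "averageexecutetimeinmilliseconds",
                              "executecount", "failurecount", "failurepercent")
};
query.AddOrder("averageexecutetimeinmilliseconds", OrderType.Descending);

foreach (var stat in service.RetrieveMultiple(query).Entities)
{
    Console.WriteLine("{0}: avg {1}ms over {2} executions, {3} failures",
        stat.GetAttributeValue<EntityReference>("plugintypeid").Name,
        stat.GetAttributeValue<int>("averageexecutetimeinmilliseconds"),
        stat.GetAttributeValue<int>("executecount"),
        stat.GetAttributeValue<int>("failurecount"));
}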

2) Workflow & Asynchronous Job Execution Monitoring

We all know that workflows can start failing for various reasons. Because of their asynchronous nature these failures can go unnoticed by users until it's too late and you have thousands of issues to correct. To proactively monitor this you can create a view (and even add it to a dashboard) of System Jobs filtered by Status = Failed or Waiting and where the Message contains data. The Message attribute contains the full error description and stack trace, whereas the Friendly Message just contains the information that is displayed at the top of the workflow form in the notification box.
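If you would rather define such a view in FetchXML, a sketch along these lines should work – the status code values (10 = Waiting, 31 = Failed) are the standard asyncoperation values documented in the SDK, so verify them on your system:

<fetch>
  <entity name="asyncoperation">
    <attribute name="name" />
    <attribute name="statuscode" />
    <attribute name="message" />
    <attribute name="startedon" />
    <filter type="and">
      <condition attribute="message" operator="not-null" />
      <filter type="or">
        <condition attribute="statuscode" operator="eq" value="10" />
        <condition attribute="statuscode" operator="eq" value="31" />
      </filter>
    </filter>
    <order attribute="startedon" descending="true" />
  </entity>
</fetch>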

3) Client Latency & Bandwidth Monitoring

Now that you've got the server side under control, you should also look at the client connectivity of your users. There is a special hidden diagnostics page that can be accessed by using a URL of the following format:

http://<YourCRMServerURL>/tools/diagnostics/diag.aspx

As described by the implementation guide topic, "Microsoft Dynamics CRM is designed to work best over networks that have the following elements:

  • Bandwidth greater than 50 KB/sec
  • Latency under 150 ms"

After you click 'Run' on this test page you will get results similar to those shown below. You can see that this user is just above these requirements!

You can read more about the Diagnostic Page in Dynamics CRM. You can also monitor the client side using the techniques I describe in my series on Fiddler.

If you take these simple steps to proactively monitor your Dynamics CRM solution then you are much less likely to have a problem that goes unnoticed until you get 'that call'!

@ScottDurow

Posted on 15. April 2014

Fiddler2: The tool that gives you Superpowers - Part 2

This post is the second post in the series 'Fiddler – the tool that gives you superpowers!'

Invisibility

This time it's the superpower of Invisibility! Wow I hear you say!

Fiddler is a web debugger that sits between you and the server, and so is in the unique position of being able to listen for requests for a specific file and, rather than returning the version on the server, return a version from your local disk instead. This is called an 'AutoResponder' and sounds like a superhero itself – or perhaps a transformer (robots in disguise).

If you are supporting a production system then the chances are that at some point your users have found an issue that you can't reproduce in Development/Test environments. AutoResponders can help by allowing us to update any web resource (HTML/JavaScript/Silverlight) locally and then test it against the production server without actually deploying it. The AutoResponder sees the request from the browser for the specific web resource and, rather than returning the currently deployed version, it gives the browser your local updated version so you can test that it works before other users are affected.

Here are the steps to add an auto responder:

1) Install Fiddler (if you've not already!)

2) Switch to the 'Auto Responders' tab and check the two checkboxes 'Enable automatic responses' and 'Unmatched requests pass-through'

3) To ensure that the browser requests the web resource rather than using a cached version from the server, you'll need to clear the browser cache using the convenient 'Clear Cache' button on the toolbar.

4) You can ensure that no versions get subsequently cached by selecting Rules-> Performance-> Disable Caching.

5) You can now use 'Add Rule' to add an auto responder rule. Enter a regular expression to match the web resource name

regex:(?insx).+/<Web Resource Name>([?a-z0-9-=&]+\.)*

then enter the file location of the corresponding webresource in your Visual Studio Developer Toolkit project.
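For example, if the web resource were named 'new_myscript.js' (a hypothetical name and local path), the rule and its mapped file might look like:

regex:(?insx).+/new_myscript.js([?a-z0-9-=&]+\.)*
C:\Dev\MyCrmSolution\WebResources\new_myscript.js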

You are now good to go: when you refresh your browser, your web resource will be loaded directly from your Visual Studio project. No need to publish a file to the server and affect other users.

There is one caveat to this – if the script that you are debugging updates data, then this approach is probably not a good idea until you have fully tested the script in a non-production environment. Only once it has been QAed and is ready to deploy can it be used against the production environment to check that the specific user's issue is fixed before you commit to deploying it to all users.

Read the next post on how to be faster than a speeding bullet!

@ScottDurow

 

Posted on 15. April 2014

Fiddler2: The tool that gives you Superpowers – Part 3

This post is the third post in the series 'Fiddler – the tool that gives you superpowers!'

Faster than a Speeding Bullet

If you have done any development of web resources with Dynamics CRM then I'm certain that you'll have become impatient whilst waiting to first deploy your solution and then publish it before you can test any changes. Every time you need to make a change you need to go round this loop, which can slow down the development process considerably. Using the AutoResponders I described in my previous post (Invisibility) you can drastically speed up this development process by using Fiddler to ensure changes you make to a local file in Visual Studio are reflected inside Dynamics CRM without waiting for deployment and publishing. You make the changes inside Visual Studio, simply save and refresh your browser and voilà!

Here are some rough calculations on the time it could save you on a small project:

Time to Deploy                                    15   seconds
Time to Publish                                   15   seconds
Debug iterations                                  20
Number of web resources                           30
Development Savings                                5   hours

Time to reproduce live data in test/development    1   hour
Number of issues to debug in live                 10
Testing Savings                                   10   hours

Total Savings for small project                   15   hours

(Saving 30 seconds of deploy-and-publish time per iteration, over 20 iterations for each of 30 web resources, gives 5 hours; avoiding a 1 hour live-data reproduction for each of 10 live issues gives 10 hours.)

What is perhaps more important about this technique is that it saves the frustration caused by having to constantly wait for web resource deployment, and ensures that you stay in the development zone rather than being distracted by the latest cute kitten pictures posted on Facebook!

Do remember to deploy and publish your changes once you've finished your development. It seems obvious but it is easily forgotten and you're left wondering why your latest widget works on your machine but not for others!

More information on AutoResponders can be found in the Fiddler documentation.

@ScottDurow

Posted on 15. April 2014

Fiddler2: The tool that gives you Superpowers – Part 4

This post is the fourth and final post in the series 'Fiddler – the tool that gives you superpowers!'

Ice Man

Perhaps Ice Man is the most tenuous superpower claim, but it concerns a very important topic – HTTP caching. Having a good caching strategy is key to good client performance and to not overloading your network with unnecessary traffic. Luckily Dynamics CRM gives us an excellent caching mechanism – but there are situations where it can be accidentally and unknowingly bypassed:

  1. Not using relative links in HTML webresources
  2. Loading scripts/images dynamically without using the cache key directory prefix
  3. Not using the $webresource: prefix in ribbon/sitemap xml.

Luckily we can use Fiddler to keep our servers running ice cold by checking for files that are not being cached when they should be. There are two types of caching that you need to look for:

Absolute expiration

These web resources will not show in Fiddler at all because the browser has located a cached version of the file with an absolute cache expiration date and so it doesn't need to request anything from the server. By default CRM provides an expiration date of 1 year from the date requested, but if the web resource is updated on the server then the Url changes and so a new version is requested. This is why you see a request similar to /OrgName/%7B635315305140000046%7D/WebResources/somefile.js. Upon inspection of the response you will see an HTTP header similar to:

HTTP/1.1 200 OK
Cache-Control: public
Content-Type: text/jscript
Expires: Tue, 14 Apr 2015 21:18:35 GMT

Once the web resource is downloaded on the client it is not requested again until 14 April 2015, unless a new version is published, in which case CRM will request the file using a new cache key (the number between the organization name and the WebResources directory). You can read more about this mechanism in my post about web resource caching.

ETag Cached Files

These resources are usually images and static JavaScript files that are assigned an ETag value by the server. When the resource changes on the server it is assigned a different ETag value. When the browser requests the file it sends the previous ETag value; if the file hasn't been modified, the server responds with a 304 response, meaning that the browser can use the locally cached file.
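A typical exchange looks something like this (the file name and ETag value here are illustrative):

GET /OrgName/WebResources/new_logo.png HTTP/1.1
If-None-Match: "70e392813cecd41:0"

HTTP/1.1 304 Not Modified
ETag: "70e392813cecd41:0"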

Files that use ETag caching will show in grey in Fiddler with a response code of 304:

During your web resource testing it is a good idea to crack open Fiddler and perform your unit tests – you should look for any non-304 requests for files that don't need to be downloaded every time they are needed.

Another way to ensure that your servers are running cool as ice is to look at the request execution length. Occasionally code can be written that accidentally returns far more data than required – perhaps all attributes are included or a where criteria is missing. These issues don't always present themselves when working on a development system that responds to queries very quickly, but as soon as you deploy to a production system with many users and large datasets, you start to see slow performance.
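As an illustration using the CRM 2011 OData endpoint (the query itself is an example, not taken from the original post), compare:

GET /XRMServices/2011/OrganizationData.svc/AccountSet
    (returns every attribute of every account - far more data than needed)

GET /XRMServices/2011/OrganizationData.svc/AccountSet?$select=Name&$filter=StateCode/Value eq 0
    (returns only the Name attribute of active accounts)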

There are a number of ways you can test for this using Fiddler:

Visualise Request Times

The order in which your scripts, SOAP and REST requests are executed can greatly affect the performance experienced by the user, so you can use Fiddler's Timeline visualiser to see which requests are running in series and which are running in parallel. It also shows you the length of time the requests are taking to download, so that you can identify the longest running requests and focus your optimisation efforts on those first.

    Simulate Slow Networks

    If you know that your users will be using a slow network to access CRM, or you would just like to see how the application responds when requests start to take longer because of larger datasets, you can tell Fiddler to add an artificial delay into the responses. To do this you can use the built-in Rules->Performance->Simulate Modem Speeds, but this usually results in an unrealistically slow response time. If you are using AutoResponders you can right-click on the rule and use 'Set Latency' – but this won't work for Organization Service/REST calls. The best way I've found is to use the Fiddler Script:

    1) Select the 'Fiddler Script' Tab

    2) Select 'OnBeforeRequest' in the 'Go to' drop down

    3) Add the following line to the OnBeforeRequest event handler.
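    The line uses Fiddler's response-trickle-delay session flag and, given the 50 ms per kB behaviour described below, should be something like:

    oSession["response-trickle-delay"] = "50";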

    This will add a 50 millisecond delay for every kB requested from the server which, assuming no server processing time, would result in ~160 kbps downloads.

    If you've not used Fiddler for your Dynamics CRM development yet, I hope these posts are enough to convince you that you should give it a try – I promise you'll never look back!

    @ScottDurow

    Posted on 15. April 2014

    Fiddler2: The tool that gives you Superpowers – Part 1

    The next few posts are for those who saw me speaking at the most recent CRMUG UK Chapter meeting about Fiddler2 and wanted to know more (and as a thank you to those who voted for me in X(rm) factor!). I've been using Fiddler for web debugging for as long as I can remember, and I can honestly say that I could not live without it when developing Dynamics CRM extensions, as well as when supporting and diagnosing issues with existing solutions. I first blogged about it in connection with SparkleXRM development, but this post elaborates further on the superpowers it gives you!

    What is a Web Debugger?

    Fiddler2 is a web debugger, which basically means that it sits between your browser and the server just like any normal proxy; the difference is that it shows you all the HTTP traffic going back and forwards, allows you to visualise it in an easy to read format, and also allows you to 'fiddle' with it – hence the name.

    You can easily install Fiddler for free by downloading it from http://www.telerik.com/fiddler.

    The following posts describe the superpowers that Fiddler can give you whilst you are developing solutions or supporting your end users.

    X-Ray Vision

    When you perform any actions in your browser whilst Fiddler is running, each and every request/response is logged for your viewing pleasure. This log is incredibly useful when you need to see what requests your JavaScript or Silverlight is sending to the server. It shows you the error details even when the user interface may simply report that an 'Error has occurred' without any details. The prize for the most unhelpful error goes to Silverlight with its 'Not Found' message – the actual error can only be discovered with a tool like Fiddler2 by examining the response from the server to see the true exception that is hidden by Silverlight. The HTTP error code is your starting point, and Fiddler makes it easy to see these at a glance through its colour coding of request status codes – the most important of which are HTTP 500 requests, which are coloured red. For any solution you are developing, the bare minimum you should look for is any 404 or 500 responses.

    If you wanted to diagnose a problem that a user was having with CRM that you could not reproduce, then try following these steps:

    1. Ask the user experiencing the issue to install Fiddler2 (this may require administrator privileges if their workstation is locked down).
    2. Get to the point where they can reproduce the problem – just before they click the button or run the query, or whatever!
    3. Start Fiddler
    4. Ask the user to reproduce the issue
    5. Ask the user to click File->Save->All Sessions and send you the file.
    6. Once you've got the file you can load it into your own copy of Fiddler to diagnose the issue.

    If the user has IE9 or above and they are not using the Outlook client, then the really neat thing about the latest version of Fiddler is that it can import the F12 network trace. This allows you to capture a trace without installing anything on the client and then inspect it using Fiddler's user interface. To capture the network traffic using IE:

    1. Get to the point where they are about to reproduce the issue
    2. Press F12
    3. Press Ctrl-4
    4. Press F5 (to start the trace)
    5. Reproduce the issue
    6. Switch back to the F12 debugger window by selecting it
    7. Press Shift-F5 to stop the trace
    8. Ask the user to click the 'Export Captured Traffic' button and send you the file

    Now you can load this file into Fiddler using File->Import Sessions->IE's F12 NetXML format file.

    Once you have found the requests that you are interested in, you can use the inspectors to review the contents – the request is shown on the top and the response on the bottom half of the right panel. Both the request and response inspectors give you a number of tabs to visualise the content in different ways depending on the content type. If you are looking at JavaScript, HTML or XML your best bet is the SyntaxView tab, which even has a 'Format Xml' and 'Format Script/JSON' option on the context menu. This is great for looking at SOAP requests and responses that are sent from JavaScript, to make sure they are correctly formatted.

    The following screenshot shows a SOAP request from JavaScript and the inspectors in SyntaxView with 'Format Xml' selected.

    This technique is going to save you lots of time when trying to work out what is going on over the phone to your users!

    Next up is Invisibility!

    @ScottDurow

     

    Posted on 28. January 2014

    Chrome Dynamics CRM Developer Tools

    Chrome already provides a fantastic set of developer tools for HTML/JavaScript, but now, thanks to Blake Scarlavai at Sonoma Partners, we have the Chrome CRM Developer Tools.

    This fantastic Chrome add-in provides lots of gems to make debugging forms and testing FetchXML really easy:

    Form Information
    - Displays the current form’s back-end information
    - Entity Name
    - Entity Id
    - Entity Type Code
    - Form Type
    - Is Dirty
    - Ability to show the current form’s attributes’ schema names
    - Ability to refresh the current form
    - Ability to enable disabled attributes on the current form (System Administrators only)
    - Ability to show hidden attributes on the current form (System Administrators only)


    Current User Information
    - Domain Name
    - User Id
    - Business Unit Id


    Find
    - Ability to open advanced find
    - Set focus to a field on the current form
    - Display a specific User and navigate to the record (by Id)
    - Display a specific Privilege (by Id)


    Test
    - Ability to update attributes from the current form (System Administrators only)
    - This is helpful when you need to update values for testing but the fields don’t exist on the form


    Fetch
    - Execute any Fetch XML statement and view the results


    Check it out in the Chrome Web Store - https://chrome.google.com/webstore/detail/sonoma-partners-dynamics/eokikgaenlfgcpoifejlhaalmpeihfom

    Posted on 27. September 2013

    Microsoft.Xrm.Client (Part 3b): Configuration via app/web.config

    In this series we have been looking at the Developer Extensions provided by the Microsoft.Xrm.Client assembly:

    Part 1 - CrmOrganizationServiceContext and when should I use it?

    Part 2 - Simplified Connection Management & Thread Safety

    Part 3a – CachedOrganizationService

    So far in this series we have learnt how to use the Microsoft.Xrm.Client developer extensions to simplify connecting to Dynamics CRM, handling thread safety and client side caching.

    This post shows how the configuration of the Dynamics CRM connection and associated OrganizationService context can be configured using an app.config or web.config file. This is especially useful with ASP.NET portals and client applications that don't need the facility to dynamically connect to different Dynamics CRM servers.

    If you only want to configure the connection string, then you would add an app.config or web.config with the following entry:

    <connectionStrings>
      <add name="Xrm" connectionString="Url=http://<server>/<org>"/>
    </connectionStrings>

    The connection string can take the following forms:

    On Prem (Windows Auth)

    Url=http://<server>/<org>;

    On Prem (Windows Auth with specific credentials)

    Url=http://<server>/<org>; Domain=<domain>; Username=<username>; Password=<password>;

    On Prem (Claims/IFD)

    Url=https://<server>; Username=<username>; Password=<password>;

    On Line (Windows Live)

    Url=https://<org>.crm.dynamics.com; Username=<email>; Password=<password>; DeviceID=<DeviceId>; DevicePassword=<DevicePassword>;

    On Line (O365)

    Url=https://<org>.crm.dynamics.com; Username=<email>; Password=<password>;

    You can find a full list of connection string parameters in the SDK.

    You can then easily instantiate an OrganizationService using:

    var connection = new CrmConnection("Xrm");
    var service = new OrganizationService(connection);

    If you want to simplify creation of the ServiceContext, and make it much easier to handle thread safety – you can use a configuration file that looks like:

    <configuration>
      <configSections>
        <section name="microsoft.xrm.client"
                 type="Microsoft.Xrm.Client.Configuration.CrmSection, Microsoft.Xrm.Client"/>
      </configSections>
      <microsoft.xrm.client>
        <contexts>
          <add name="Xrm" type="Xrm.XrmServiceContext, Xrm" serviceName="Xrm"/>
        </contexts>
        <services>
          <add name="Xrm"
               type="Microsoft.Xrm.Client.Services.OrganizationService, Microsoft.Xrm.Client"
               instanceMode="[Static | PerName | PerRequest | PerInstance]"/>
        </services>
      </microsoft.xrm.client>
    </configuration>

    You can then instantiate the default context simply by using:

    var context = CrmConfigurationManager.CreateContext("Xrm") as XrmServiceContext;

    The most interesting part of this is the instanceMode. It can be Static, PerName, PerRequest or PerInstance. By setting it to PerRequest, you will get an OrganizationService per ASP.NET request in a portal scenario – making your code more efficient and thread safe (provided you are not using asynchronous threads in ASP.NET).

    The examples above are the configurations I find most common, although you can also specify multiple contexts, with optional caching for specific contexts if required – the SDK has a full set of configuration file examples. Using the configuration file technique can simplify your code and help ensure it is thread safe.

    On the topic of thread safety, it was recently brought to my attention that there appears to be a bug with the ServiceConfigurationFactory such that if you are instantiating your OrganizationService passing the ServiceManager to the constructor in conjunction with EnableProxyTypes, you can occasionally get a threading issue with the error "Collection was modified; enumeration operation may not execute". The workaround is to ensure that the call to EnableProxyTypes is wrapped in the following:

    if (proxy.ServiceConfiguration.CurrentServiceEndpoint.EndpointBehaviors.Count == 0)
    {
        proxy.EnableProxyTypes();
    }

    More information can be found in my forum post.

    Next up in this series are the Utility and extension functions in the Microsoft.Xrm.Client developer extensions.

    @ScottDurow

     

    Posted on 9. September 2013

    Do you understand MergeOptions?

    If you use LINQ queries with the OrganizationServiceContext then understanding MergeOptions is vital. At the end of this post I describe the most common 'gotcha' that comes from not fully understanding this setting.

    The OrganizationServiceContext implements a version of the 'Unit of Work' pattern (http://martinfowler.com/eaaCatalog/unitOfWork.html) that allows us to make multiple changes on the client and then submit them with a single call to 'SaveChanges'. The MergeOption property alters the way that the OrganizationServiceContext handles the automatic tracking of objects returned from queries. It is important to understand what's going on, since by default LINQ queries may not return you the most recent version of the records from the server, but rather a 'stale' version that is currently being tracked.

    What is this 'Merge' they speak of?!

    The SDK entry on MergeOptions talks about 'Client side changes being lost' during merges.

    The term 'merge' has nothing to do with the merging of contacts/leads/accounts – it describes what happens when the server is re-queried within an existing context and results from a previous query are returned rather than new copies of each record. It is a record-ID based combination, not an attribute merge – so a record is either re-used from the current context, or a new instance is returned that represents the version on the server.

    In order to describe the different options, consider the following code:

    // Query 1
    var contacts = (from c in context.ContactSet
                    select new Contact
                    {
                        ContactId = c.ContactId,
                        FirstName = c.FirstName,
                        LastName = c.LastName,
                        Address1_City = c.Address1_City
    
                    }).Take(1).ToArray();
    
    // Update 1
    Contact contact1 = contacts[0];
    contact1.Address1_City = DateTime.Now.ToLongTimeString();
    context.UpdateObject(contact1);
    
    // Query 2
    var contacts2 = (from c in context.ContactSet
                     select c
              ).Take(2).ToArray();
    
    // Update 2
    var contact2 = contacts2[0];
    contact2.Address1_City = DateTime.Now.ToLongTimeString();
    context.UpdateObject(contact2);
    
    // Save Changes
    context.SaveChanges();
    

    MergeOption.NoTracking

    Perhaps the best place to start is the behaviour with no tracking at all.

    • Query 1 – Will return all matching contacts but not add them to the tracking list
    • Update 1 – Will throw an exception because the contact is not being tracked. You would need to use context.Attach(contact) to allow this update to happen
    • Query 2 – This query will pull down new copies of all contacts from the server, including a new version of contact 1
    • Update 2 – We now have two versions of the same contact with different city attribute values. The UpdateObject will fail without Attach first being called. If you attempt to attach contact2 after attaching contact1 you will receive the error 'The context is already tracking a different 'contact' entity with the same identity' because contact1 is already tracked and has the same ID.

    MergeOption.AppendOnly (Default Setting)

    When using the OrganizationServiceContext, by default it will track all objects returned from LINQ queries. This means that the second query will return the instances of the contacts that were already returned from Query 1. Critically, this means that any changes made on the server between Query 1 and Query 2 (or any additional attributes queried using projection) will not be returned.

    • Query 1 – Will return all matching contacts and add them to the tracking list
    • Update 1 – Will succeed because the contact is being tracked
    • Query 2 – Will return the same instances that are already being tracked. The only records returned from the server will be those that are not already being tracked. This is the meaning of 'AppendOnly'. The query still returns the data from the server, but the OrganizationServiceContext redirects the results to the instances already in the tracking list, meaning that any changes made on the server since Query 1 will not be reflected in the results.
    • Update 2 – Will succeed since contact1 and contact2 are the same object. Calling UpdateObject on the same instance more than once is acceptable.

    MergeOption.PreserveChanges

    PreserveChanges is essentially the same as AppendOnly except:

    • Query 2 – Will return the same instances that are already being tracked, provided they have an EntityState not equal to Unchanged. This means that contact2 will be the same instance as contact1 because it has been updated, but other instances in the contacts and contacts2 results will be new instances.

    The result of this is that queries will not pick up the most recent changes on the server if a tracked version of that record has been edited in the current context.

    MergeOption.OverwriteChanges

    With a MergeOption of OverwriteChanges, the query behaviour will effectively be as per NoTracking, however the tracking behaviour is like AppendOnly and PreserveChanges:

    • Query 1 – Will return all matching contacts and add each one to the tracking list (as per AppendOnly and PreserveChanges)
    • Update 1 – Will succeed because the contact is being tracked (as per AppendOnly and PreserveChanges)
    • Query 2 – This query will pull down new copies of all contacts from the server, including a new version of contact 1 (as per NoTracking). The previously tracked contact1 will no longer be tracked, but the new version (contact2) will be.
    • Update 2 – Will succeed, and the values on contact1 will be lost.

    The MergeOption has a subtle but important effect on the OrganizationServiceContext, and without truly understanding each setting you might see unexpected results if you stick with the default 'AppendOnly'. For instance, you might update a value on the server between queries, but because the record is already tracked, re-querying will not bring down the latest values. Remember that all of this behaviour is only true within the same context – if you create a new context then any previously tracked/modified records will no longer be tracked.

    LINQ Projection 'Gotcha'

    The most common issue I see from not fully understanding MergeOptions (and yes, I made this mistake too!) is the use of the default AppendOnly setting in conjunction with LINQ projection. In our code example, Query 1 returns a projected version of the contact that only contains 4 attributes. When we re-query in Query 2 we might expect to see all attribute values, but because we are already tracking the contacts our query will only return the previously queried 4 attributes! This can hide data from your code and cause some very unexpected results!

    In these circumstances, unless you really need tracking and fully understand MergeOptions, I recommend changing the MergeOption to 'NoTracking'.
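    A minimal sketch of that change (assuming an existing IOrganizationService called 'service' and an early-bound Contact class – not code from the original post):

    var context = new OrganizationServiceContext(service);
    context.MergeOption = MergeOption.NoTracking;

    // Queries now always return fresh copies from the server, but entities
    // must be attached to the context before they can be updated
    var contact = (from c in context.CreateQuery<Contact>()
                   select c).First();
    context.Attach(contact);
    contact.Address1_City = "Cambridge";
    context.UpdateObject(contact);
    context.SaveChanges();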

    @ScottDurow

    Posted on 27. August 2013

    Microsoft.Xrm.Client (Part 3a): CachedOrganizationService

    In this series we have been looking at the Developer Extensions provided by the Microsoft.Xrm.Client assembly:

    Part 1 - CrmOrganizationServiceContext and when should I use it?

    Part 2 - Simplified Connection Management & Thread Safety

    This 3rd part in the series demonstrates when and how to use the CachedOrganizationService.

    When writing client applications and portals that connect to Dynamics CRM there are many situations where you need to retrieve data and use it in multiple places. In these situations it is common practice to implement a caching strategy which, although it can be easily implemented using custom code, can quickly add complexity to your code if you're not careful.

    The CachedOrganizationService provides a wrapper around the OrganizationServiceProxy with a caching service that is essentially transparent to your code, with the cache being automatically invalidated when records are updated by the client. The CachedOrganizationService inherits from OrganizationService and uses the same CrmConnection instantiation, so you can almost swap your existing OrganizationService with a CachedOrganizationService and your code will benefit from caching without any changes. There are always some pieces of data that you don't want to cache, though, so you will need to plan your caching strategy carefully.

    Using a CachedOrganizationService

    You have two choices when it comes to instantiating the objects required:

    1. Manual Instantiation – Full control over the combination of the CrmConnection, OrganizationService & OrganizationServiceContext
    2. Configuration Manager Instantiation – App/Web.config controlled instantiation using a key name.

    Part 3b will show how to use the Configuration Manager, but for now we'll explicitly instantiate the objects so you can understand how they work together.

    CrmConnection connection = new CrmConnection("CRM");       
    using (OrganizationService service = new CachedOrganizationService(connection))
    using (CrmOrganizationServiceContext context = new CrmOrganizationServiceContext(service))
    {
    …
    }
    

    Using the CachedOrganizationService to create your service context gives your application automatic caching of queries. Each query result is stored against the query used and, when performing further queries, if there is a matching query the results are returned from the cache rather than via a server query.

    Cached Queries

    In the following example, the second query will not result in any server request, since the same query has already been executed.

    QueryByAttribute request = new QueryByAttribute(Account.EntityLogicalName);
    request.Attributes.Add("name");
    request.Values.Add("Big Account");
    request.ColumnSet = new ColumnSet("name");
    
    // First query will be sent to the server
    Account acc1 = (Account)service.RetrieveMultiple(request).Entities[0];
    
    // This query will be returned from cache
    Account acc2 = (Account)service.RetrieveMultiple(request).Entities[0];
    

    If another query is executed that requests different attribute values (or has different criteria), then the query is executed to get the additional values:

    QueryByAttribute request2 = new QueryByAttribute(Account.EntityLogicalName);
    request2.Attributes.Add("name");
    request2.Values.Add("Big Account");
    request2.ColumnSet = new ColumnSet("name", "accountnumber");
    
    // This query will be sent to the server because the query is different
    Account acc3 = (Account)service.RetrieveMultiple(request2).Entities[0];

    Cloned or Shared

    By default, the CachedOrganizationService will return a cloned instance of the cached results, but it can be configured to return the same instances:

    ((CachedOrganizationService)service).Cache.ReturnMode =
            OrganizationServiceCacheReturnMode.Shared;
    
    QueryByAttribute request = new QueryByAttribute(Account.EntityLogicalName);
    request.Attributes.Add("name");
    request.Values.Add("Big Account");
    request.ColumnSet = new ColumnSet("name");
    
    // First query will be sent to the server
    Account acc1 = (Account)service.RetrieveMultiple(request).Entities[0];
    
    // This query will be returned from cache
    Account acc2 = (Account)service.RetrieveMultiple(request).Entities[0];
    Assert.AreSame(acc1, acc2);
    

    The assertion will pass because a ReturnMode of 'Shared' returns the existing instances in the cache rather than the cloned copies provided by the default behaviour.

    Automatic Cache Invalidation on Update

    If you then go on to update or delete an entity that exists in a cached query result, the cache is automatically invalidated, resulting in a refresh the next time the query is executed.
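    Continuing the 'Big Account' example above, a sketch of this behaviour:

    // Updating a record through the same service invalidates the cached query results
    acc1.Name = "Big Account Renamed";
    service.Update(acc1);

    // The same query is now sent to the server again rather than answered from the cache
    Account refreshed = (Account)service.RetrieveMultiple(request).Entities[0];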

    Coupling with CrmOrganizationServiceContext

    In Part 1 we saw that the CrmOrganizationServiceContext provides a 'lazy load' mechanism for relationships; however, it executes a metadata request and a query every time the relationship entity set is queried. Coupling this with the CachedOrganizationService gives us the complete solution. In the following example we perform two LINQ queries against the Account.contact_customer_accounts relationship; the first returns all the related contacts (all attributes), and the second simply retrieves the results from the cache. You don't need to worry about what is loaded and what is not.

    // Query 1
    Console.WriteLine("Query Expected");
    Xrm.Account acc = (from a in context.CreateQuery<Account>()
                       where a.Name == "Big Account"
                       select new Account
                       {
                           AccountId = a.AccountId,
                           Name = a.Name,
                       }).Take(1).FirstOrDefault();
    
    // Get the contacts from the server
    Console.WriteLine("Query Expected");
    var contacts = (from c in acc.contact_customer_accounts
                    select new Contact
                    {
                        FirstName = c.FirstName,
                        LastName = c.LastName
                    }).ToArray();
    
    // Get the contacts again - from the cache this time
    Console.WriteLine("No Query Expected");
    var contacts2 = (from c in acc.contact_customer_accounts
                     select new Contact
                     {
                         FirstName = c.FirstName,
                         LastName = c.LastName,
                         ParentCustomerId = c.ParentCustomerId
                     }).ToArray();

    Thread Safety

    Provided you are using the CachedOrganizationService with the CrmConnection class, all the same multi-threading benefits apply. Client authentication is only performed initially and when the token expires, and the cache automatically handles locking when accessed by multiple threads.

    Pseudo Cache with OrganizationServiceContext.MergeOption = AppendOnly

    The OrganizationServiceContext has built-in client-side tracking of objects and acts as a sort of cache when using a MergeOption mode of AppendOnly (the default setting). With MergeOption=AppendOnly, once an entity object has been added to the context it will not be replaced by a new instance on subsequent LINQ queries. Instead, the existing object is re-used so that any changes made on the client remain. This means that even if a new attribute is requested and the CachedOrganizationService executes the query accordingly, it will look as though there hasn't been a new query because the OrganizationServiceContext still returns the object that it is currently tracking.

    // Query 1
    Xrm.Account acc = (from a in context.CreateQuery<Account>()
                       where a.Name == "Big Account"
                       select new Account
                       {
                           AccountId = a.AccountId,
                           Name = a.Name,
                       }).Take(1).FirstOrDefault();
    
    Assert.IsNull(acc.AccountNumber); // We didn’t request the AccountNumber
    
    // Query 2
    // Because there is an additional attribute value requested, this will query the server
    Xrm.Account acc2 = (from a in context.CreateQuery<Account>()
                        where a.Name == "Big Account"
                        select new Account
                        {
                            AccountId = a.AccountId,
                            Name = a.Name,
                            AccountNumber = a.AccountNumber
                        }).Take(1).FirstOrDefault();
    
    Assert.AreSame(acc, acc2); // MergeOption=AppendOnly preserves existing objects and so the first tracked object is returned
    Assert.IsNull(acc.AccountNumber); // Account Number will be null even though it was returned from the server

    This can lead to the conclusion that the CachedOrganizationService isn't detecting that we want to return a new attribute that wasn't included in the first query, but it actually isn't anything to do with caching, since the OrganizationServiceContext behaves like this even if no CachedOrganizationService is in use. If you were to use a MergeOption of NoTracking, PreserveChanges or OverwriteChanges you wouldn't see the above behaviour, because the second query would always return a new instance of the account with the AccountNumber attribute value loaded.

    Next in this series I'll show you how to configure the CachedOrganizationService and OrganizationServiceContext using the web/app.config.

    @ScottDurow

    Posted on 22. August 2013

    Multi-Entity Search with SparkleXRM

    The new tablet client for Dynamics CRM 2013 has a fantastic-looking multi-entity search, but it is not yet available in the web client.

    I thought this would be a good opportunity to create another SparkleXRM sample to achieve a similar feature with Dynamics CRM 2011.

    You can check out the sample by installing the managed solutions.

    Once installed, it creates a sitemap entry that shows the Multi Entity Search HTML web resource. By default it'll show the Account, Contact, Lead, Activity & Opportunity entities, but it can show other entities by passing parameters.

    Each entity grid shows the 'Quick Find' view for the given entity and displays the header and column names in the user's chosen language. The grids are fixed widths, but will wrap according to the screen width available.

    The sample shows the following features:

    1. Using the Metadata Query SDK to retrieve entity & attribute types and display names in the user's chosen language.
    2. Grids displaying the page size defined in the user's settings.
    3. Using the grid data binder and parsing fetchxml/layoutxml
    4. Rendering grids with clickable links to entity records.
    5. MVVM binding with asynchronous queries

    It achieves all of this in very few lines of code. You can take a look at the sample code on GitHub.

    @ScottDurow