Posted on 11. November 2017

executionContext hits the big time!

You've seen the executionContext in the event registration dialog and you might even have used it on occasion. Well, with the release of Dynamics 365 Customer Engagement Version 9, it has been elevated to be the replacement for Xrm.Page.

The document describing the replacement for Xrm.Page refers to it as ExecutionContext.getFormContext – the capitalisation of ExecutionContext implies that it is a global object, when in fact it must be passed as a parameter to the event handler by checking the 'Pass execution context as first parameter' checkbox.

Oddly, it's still unchecked by default given its importance!

So why the change?

Imagine that we want to create a client side event on the Contact entity that picks up the parent account's telephone number and populates the contact's telephone when 'Use Account Phone' is set to Yes. We add the event code to both the form field on change and the editable grid on change for the 'Use Account Phone' field.

If we were to use the 'old' Xrm.Page.getAttribute method – it would work on the form, but it wouldn't work within the grid on change event handler.

This is where the executionContext shines – it can provide a consistent way of getting to the current entity context irrespective of where the event is being fired from (form or grid).

Show me the code!

The following event handler is written using TypeScript – but it's essentially the same in JavaScript without the type declarations.

The important bit is that the executionContext is defined as an argument to the event handler, and attribute values are retrieved from the context returned by its getFormContext() method.

static onUseCompanyPhoneOnChanged(executionContext: Xrm.Page.EventContext) {
    var formContext = executionContext.getFormContext();
    const company = <Xrm.Page.LookupAttribute>formContext.getAttribute("parentcustomerid");
    const usePhone = <Xrm.Page.BooleanAttribute>formContext.getAttribute("dev1_useaccounttelephone");
    const parentcustomeridValue = company.getValue();

    // If usePhone then set the phone from the parent customer
    if (usePhone.getValue() &&
        parentcustomeridValue != null &&
        parentcustomeridValue[0].entityType === "account") {
        const accountid = parentcustomeridValue[0].id;
        Xrm.WebApi.retrieveRecord("account", accountid, "?$select=telephone1")
            .then(result => {
                formContext.getAttribute("telephone1").setValue(result.telephone1);
            });
    }
}

Some Additional Notes:

  1. All the attributes that are used in the event must be included in the subgrid row (the parent customer attribute in this case).
  2. You can access the parent form container attributes using

Xrm.Page still works in Version 9 but it's a good idea to start thinking about giving executionContext the attention it deserves!

Hope this helps!

Posted on 24. June 2017

Not all Business Process Flow entities are created equal

As you probably know by now, when you create Business Process Flows in 8.2+ you'll get a new custom entity that is used to store running instances (if not then read my post on the new Business Process Flow entities).

When your orgs are upgraded to 8.2 from a previous version, the business process flow entities will be created automatically for you during the upgrade. They are named according to the format:

new_BPF_xxx

Notice that the prefix is new_. This bothered me when I first saw it because if you create a Business Process Flow as part of a solution then the format will be:

myprefix_BPF_xxx

Here lies the problem. If you import a pre-8.2 solution into an 8.2 org, then the Business Process Flows will be prefixed with the solution prefix – but if the solution is in-place upgraded then they will be prefixed with new.

Why is this a problem?

Once you've upgraded the pre-8.2 org to 8.2, the Business Process Flows will keep the new_ prefix and will be included in the solution. When you then import an update into the target org, the names will conflict with each other and you'll get the error:

"This process cannot be imported because it cannot be updated or does not have a unique name."

The upgrade scenario looks like this:

  1. The source 8.1 org contains a solution with the prefix myprefix_.
  2. The solution is imported into an empty 8.2 target org and the BPF entity is created as myprefix_BPF_xxx.
  3. The source org is then in-place upgraded to 8.2 and the BPF entity is created as new_BPF_xxx.
  4. An update to the solution is exported from the upgraded source org and imported into the target org. new_BPF_xxx conflicts with myprefix_BPF_xxx and the import fails with:

"This process cannot be imported because it cannot be updated or does not have a unique name."


How to solve

Unfortunately, there isn't an easy way out of this situation. There are two choices:

  1. If you have data in the target org that you want to keep – you'll need to recreate the BPFs in the source org so that they have the myprefix_ prefix - you can do this by following the steps here -
  2. If you are not worried about data in the target org you can delete those BPFs and re-import the solution exported from the upgraded 8.2 source org.

The good news is that this will only happen to those of you who have source and target orgs upgraded at different times – if you upgrade your DEV/UAT/PROD orgs at the same time, you'll get BPF entities all prefixed with new_.


Posted on 14. March 2017

There is something rather different about Dynamics 365 Business Process Flows!

The new business process flow designer in Dynamics 365 is lovely! However, I'm not going to talk about that since it's rightly had lots of love by others already.

For me the biggest change in Dynamics 365 is the fact that running Business Process Flows (BPFs) are now stored as entity records. Instance details are no longer held as fields on the associated record. I first visited this topic back in the CRM2013 days with the introduction of Business Process Flows, where I described how to programmatically change the process.

Previously, when a BPF was started, all of the state about its position was stored in fields on the record it was run on:

  • Process Id: The ID of the BPF running
  • Stage Id: The ID of the BPF step that was active
  • Traversed Path: A comma separated string listing the GUIDs of the current path of steps taken through the BPF. This is to support BPFs with branching logic.
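As a side note, if you ever need to inspect that path from script, it's just a comma separated string of stage GUIDs. A minimal sketch (the GUIDs below are placeholders for illustration):

```javascript
// Sketch: split the traversed path (a comma separated list of stage GUIDs)
// into the ordered list of stages the record has passed through.
function parseTraversedPath(traversedPath) {
    if (!traversedPath) {
        return [];
    }
    return traversedPath.split(",");
}

// Placeholder GUIDs for illustration
var path = parseTraversedPath("stage-guid-1,stage-guid-2,stage-guid-3");
console.log(path.length); // 3
```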

With the new Dynamics 365 BPFs, each process that is activated automatically has an entity created for it that looks just like any other custom entity. The information about the processes running on any record is now stored as instances of this entity, with an N:1 relationship to the parent record and any subsequent related entities. This BPF entity has similar attributes to those that were stored on the parent entity, but with the following additions:

  • Active Stage Id: The ID of the BPF step that is active – replaces the Stage Id attribute.
  • Active Stage Started On: The Date Time that the current step was started on – this allows calculation of the amount of time it has been active for.
  • State & Status: Each BPF instance has its own state, which allows it to be finished or abandoned independently of other BPFs running on the record.
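Because Active Stage Started On is just an attribute on the instance record (the Web API logical name activestagestartedon is assumed here), working out how long the current stage has been active is simple date arithmetic. A minimal sketch:

```javascript
// Sketch: compute how long the active stage has been running, given a BPF
// instance record as returned by the Web API (attribute name assumed).
function activeStageDurationMs(bpfInstance, now) {
    var started = new Date(bpfInstance.activestagestartedon);
    return now.getTime() - started.getTime();
}

var instance = { activestagestartedon: "2017-03-01T09:00:00Z" };
var minutesActive = activeStageDurationMs(instance, new Date("2017-03-01T10:30:00Z")) / 60000;
console.log(minutesActive); // 90
```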


In addition to making migration of data with running BPFs a little easier - this approach has the following advantages:

  1. You can control access to BPFs using standard entity role privileges
  2. You can have multiple BPFs running on the same record
  3. You can see how long the current stage has been active for
  4. You can Abandon/Finish a BPF

BPF Privileges

Prior to Dynamics 365, you would have controlled which roles could access your BPF using the Business Process Flow Role Check list. In Dynamics 365, when you click the 'Enable Security Roles' button on your BPF, you are presented with a list of Roles that you can open up to define access in the 'Business Process Flow' tab:

Multiple BPFs on the same record

Switching BPFs no longer overwrites the previous active step – meaning that you can 'switch' back to a previously started BPF and it will carry on from the same place. This means that BPFs can run in parallel on the same record.

  • If a user does not have access to the running BPF they will see the next running BPF in the list (that they have access to).
  • If the user has no access to any BPF that is active – then no BPF is shown at all.
  • If user has read only access to the BPF that is running, then they can see it, but not change the active step.
  • When a new record is created, the first BPF that the user has create privileges on is automatically started.

When you use the Switch Process dialog, you can now see if the Business Process Flow is already running, who started it and when it was run.
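Switching can also be driven from script via the client API's setActiveProcess. The sketch below runs against a hand-rolled stub of the process API (the process ID is a placeholder) just to show the shape of the call:

```javascript
// Sketch: switch the record to a different (already running or new) BPF.
// The real API is Xrm.Page.data.process.setActiveProcess(processId, callback).
function switchToProcess(processApi, processId, onDone) {
    processApi.setActiveProcess(processId, function (status) {
        onDone(status === "success");
    });
}

// Stub standing in for Xrm.Page.data.process
var stubProcessApi = {
    activeProcessId: null,
    setActiveProcess: function (id, callback) {
        this.activeProcessId = id;
        callback("success");
    }
};

switchToProcess(stubProcessApi, "placeholder-approval-bpf-id", function (ok) {
    console.log(ok, stubProcessApi.activeProcessId);
});
```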

NOTE: Because the roles reference the BPF entities – you must also include the system generated BPF entities in any solution you intend to export and import into another system.

Active Step timer

Now that we have the ability to store additional data on the running BPF instance, we have the time that the current step was started on. This also means that when switching between processes, we can see the time spent in each step of parallel running BPFs.


Since each BPF has its own state fields, a business process can be marked as Finished – or Abandoned at which point it becomes greyed out and read only.

When you 'Abandon' or 'Finish' a BPF it is moved into the 'Archived' section of the 'Switch Process' dialog.

NOTE: You might think that this means you could then run the BPF a second time, but in fact a record can only have a single instance per BPF – and you must 'Reactivate' it to use it again.

  • Reactivating an Abandoned BPF will start at the previously active step
  • Reactivating a Finished BPF will start it from the beginning again.
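Those two reactivation rules can be summed up as a tiny pure function (the field names here are made up for the sketch):

```javascript
// Sketch of the reactivation behaviour: abandoned instances resume at the
// previously active stage, finished instances restart from the beginning.
function stageOnReactivate(instance) {
    if (instance.status === "abandoned") {
        return instance.activeStageId; // carry on where it left off
    }
    if (instance.status === "finished") {
        return instance.firstStageId;  // start from the beginning again
    }
    return instance.activeStageId;     // still active - nothing changes
}

console.log(stageOnReactivate({ status: "abandoned", activeStageId: "stage2", firstStageId: "stage1" })); // stage2
console.log(stageOnReactivate({ status: "finished", activeStageId: "stage3", firstStageId: "stage1" }));  // stage1
```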


Imagine your business has a sales process that requires approval by a Sales Manager. At a specific step in that sales process, you could run a workflow to start a parallel BPF that only the Sales Managers have access to. When they view the record, making the Approval BPF higher in the ordered list of BPFs will mean that they see the Approval BPF instead of the main Sales Process. They can then advance the steps to 'Approved' and mark it as Finished. This could in turn start another Workflow that updates a field on the Opportunity. Using this technique in combination with Field Level Security gives a rather neat solution for custom approval processes.

When I first saw this change I admit I was rather nervous because it was such a big system change. I've now done a number of upgrades to Dynamics 365 and the issues I found have all been resolved.
I'm really starting to like the new possibilities that Parallel BPFs brings to Dynamics 365.


Posted on 11. March 2017

Simplified Connection Management & Thread Safety (Revisited)

There is one certainty in the world and that is that things don't stay the same! In the Dynamics 365 world, this is no exception, with new features and SDK features being released with a pleasing regularity. Writing 'revisited' posts has become somewhat of a regular thing these days.

In my previous post on this subject back in 2013 we looked at how you could use a connection dialog or connection strings to get a service reference from the Microsoft.Xrm.Client library and how it can be used in a thread safe way.


For a while now there has been a replacement for the Microsoft.Xrm.Client library – the Microsoft.Xrm.Tooling library. It can be installed from NuGet using:

Install-Package Microsoft.CrmSdk.XrmTooling.CoreAssembly

When you use the CrmServerLoginControl, the user interface should look very familiar because it's the same login dialog that is used in all the SDK tools, such as the Plugin Registration Tool.

The sample in the SDK shows how to use this WPF control.

The WPF control works slightly differently to the Xrm.Client ShowDialog() method – since it gives you much more flexibility over how the dialog should behave and allows embedding inside your WPF application rather than always having a popup dialog.

Connection Strings

Like the dialog, Xrm.Tooling also has a new version of connection string management – the new CrmServiceClient accepts a connection string in its constructor. You can see examples of these connection strings in the SDK.

CrmServiceClient crmSvc = new CrmServiceClient(ConfigurationManager.ConnectionStrings["Xrm"].ConnectionString);

For Dynamics 365 Online, the connection string would be:

    <add name="Xrm" connectionString="AuthType=Office365;Username=user@contoso.onmicrosoft.com;Password=passcode;Url=https://contoso.crm.dynamics.com" />

Thread Safety

The key to understanding performance and thread safety of calling the Organization Service is the difference between the client proxy and the WCF channel. As described by the 'Improve service channel allocation performance' topic from the best practice entry in the SDK, the channel should be reused because creating it involves time consuming metadata download and user authentication.

The old Microsoft.Xrm.Client was thread safe and would automatically reuse the WCF channel that was already authenticated. The Xrm.Tooling CrmServiceClient is no exception. You can create a new instance of CrmServiceClient, and an existing service channel will be reused if one is available on that thread. Any calls made on the same service channel will be locked to prevent threading issues.

To demonstrate this, I first used the following code that ensures that a single CrmServiceClient is created per thread.

// username, password, region and orgName are supplied elsewhere
Parallel.For(1, numberOfRequests,
    new ParallelOptions() { MaxDegreeOfParallelism = maxDop },
    () =>
    {
        // This is run once per thread to create the thread-local client
        var client = new CrmServiceClient(username, password, region, orgName,
               useUniqueInstance: false,
               useSsl: false,
               isOffice365: true);
        return client;
    },
    (index, loopState, client) =>
    {
        // Make a large request that takes a bit of time
        QueryExpression accounts = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet(true)
        };
        client.RetrieveMultiple(accounts);
        return client;
    },
    (client) => { });
With a Degree of Parallelism of 4 (the number of threads that can be executing in parallel) and a request count of 200, there will be a single CrmServiceClient created for each thread and the fiddler trace looks like this:

Now to prove that the CrmServiceClient handles thread concurrency automatically, I moved the instantiation into the loop so that every request would create a new client:

Parallel.For(1, numberOfRequests,
    new ParallelOptions() { MaxDegreeOfParallelism = maxDop },
    (index) =>
    {
        // This is run for every request - a new client each time
        var client = new CrmServiceClient(username, password, region, orgName,
               useUniqueInstance: false,
               useSsl: false,
               isOffice365: true);
        // Make a large request that takes a bit of time
        QueryExpression accounts = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet(true)
        };
        client.RetrieveMultiple(accounts);
    });

Running this still shows a very similar trace in fiddler:

This proves that the CrmServiceClient is caching the service channel and returning a pre-authenticated version per thread.

In contrast to this, if we set the useUniqueInstance property to true on the CrmServiceClient constructor, we get the following trace in fiddler:

So now each request is re-running the channel authentication for each query – far from optimal!

The nice thing about the Xrm.Tooling library is that it is used exclusively throughout the SDK – where the old Xrm.Client was a satellite library that came from the legacy ADX portal libraries.

Thanks to my friend and fellow MVP Guido Preite for nudging me to write this post!


Posted on 21. November 2015

Option-Set, Lookup or Autocomplete

In the constant struggle to improve data quality it is common to avoid using free-text fields in favour of select fields. This approach has the advantage of ensuring that data is entered consistently such that it can easily be searched and reported upon.

There are a number of choices of approaches to implementing select fields in Dynamics CRM. This post aims to provide all the information you need to make an informed choice of which to use on a field by field basis. The options are:

  • Option-Set Field - stored in the database as an integer value which is rendered in the user interface as a dropdown list. The labels can be translated into multiple languages depending on the user's language selection.
  • Lookup Field - a search field that is stored in the database as a GUID ID reference to another entity record. The Lookup field can only search a single entity type.
  • Auto Complete Field - a 'free text' field that has an auto complete registered using JavaScript to ensure that text is entered in a consistent form. The term 'autocomplete' might be a bit misleading since the field is not automatically completed but instead you must select the suggested item from the list. This is a new feature in CRM 2016 that you can read more about in the preview SDK.

The following list outlines the aspects that this post discusses for each option:
  • Number of items in list – The larger the list and the likelihood that it will grow, the more this becomes important.
  • Filtering based on user's business unit - This is especially important where you have different values that apply to different parts of the business and so the list of options must be trimmed to suit.
  • Adding new Items - Ease of adding values frequently by business users.
  • Removing values – Ease of removing values without affecting existing records that are linked to those values.
  • Multi-Language – Having options translated to the user's selected language
  • Dependent/Filtered options - This is important where you have one select field that is used to filter another such as country/city pairs.
  • Additional information stored against each option - This is important if you have information that you need to store about the selected item such as the ISO code of a country.
  • Mapping between entities - Is the option on multiple entity types? Often the same list of items is added as a field in multiple places and multiple entities. This can be important when using workflows/attribute maps to copy values between different entity types.
  • Number of select fields - The more select fields you have across all your entities, the more important this becomes.
  • Filters, Reports and Advanced Find - When creating advanced finds and views, a user should be able to select from a list of values rather than type free text.
  • Configure once, deploy everywhere – One key design goal should be that once a field is configured, it should be easily used across the web, outlook, tablet and phone clients.

Option-Set Fields

Option-Sets are the default starting point for select fields.

Number of items in list (Option-sets)

No more than ~100 for performance reasons. All items are downloaded into the user interface which will cause performance problems for large lists – especially where there are lots of option-sets on the same form.

Filtering based on user's business unit (Option-sets)

Requires JavaScript to filter items dynamically based on the user's role or business unit.
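For example, a rough sketch of that filtering (run against a stubbed option-set control rather than the real Xrm control, which exposes the same getOptions()/removeOption() pair):

```javascript
// Sketch: remove option-set items the current business unit shouldn't see.
function filterOptionsForBusinessUnit(control, allowedValues) {
    control.getOptions().forEach(function (option) {
        if (allowedValues.indexOf(option.value) === -1) {
            control.removeOption(option.value);
        }
    });
}

// Stub mimicking the option-set control API used above
function stubOptionSetControl(options) {
    return {
        getOptions: function () { return options.slice(); },
        removeOption: function (value) {
            options = options.filter(function (o) { return o.value !== value; });
        },
        visibleValues: function () { return options.map(function (o) { return o.value; }); }
    };
}

var control = stubOptionSetControl([{ value: 1 }, { value: 2 }, { value: 3 }]);
filterOptionsForBusinessUnit(control, [1, 3]); // e.g. the values allowed for the Sales BU
console.log(control.visibleValues()); // [1, 3]
```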

Ease of adding values frequently by business users (Option-sets)

Option-Sets require a metadata configuration change and a re-publish that would usually be done out of hours by an administrator. It is best practice to do this on a development environment and then import a solution into production. Adding new values to the list isn't something that can be done by business users.

Removing values over time (Option-sets)

Removing items causes data loss in old records. Items can be removed using JavaScript to preserve old records, but Advanced Find will still show the values.

Multi-Language Options (Option-sets)

Each option-set item can be translated into multiple languages.

If you need to have the select field multi-language then an option-set is probably your best choice unless it is going to be a long list, in which case you'll need to make a compromise.

Dependent/Filtered options (Option-sets)

Requires JavaScript to filter options.

Additional information stored against each option (Option-sets)

It is not possible to store additional data other than the label and integer value of the option-set. You would need to store it somewhere else, in a lookup table format.

Mapping between entities (Option-sets)

Use a global option-set that can be defined once and used by multiple option-set fields.

Number of select fields (Option-sets)

You can have as many select fields as your entity forms will allow. The more fields you have the slower the form will load and save. 

Search/Filtering (Option-sets)

Option-sets are always presented as a drop-down in advanced find and view filters.

Configure once, deploy everywhere (Option-sets)

Works across all clients including phone and tablet native apps.

Option-sets are the most 'native' choice for select fields and will work in all deployment types without much consideration.


Lookup Fields with Custom Entity

Lookup fields allow selecting a single reference to a custom entity using a free text search.

Number of items in list (Lookup)

Unlimited list items, subject to database size. Since list items are not all downloaded into the user interface (unlike option-sets), the user can quickly search for the item they need.

Filtering based on user's business unit (Lookup)

Security Roles can be used in combination with a user-owned lookup entity so that lookup records are visible to a subset of users.

Ease of adding values frequently by business users (Lookup)

New records can easily be added by users using the 'Add New' link. Control over who can add new items can be done using Security Roles.

If you need business users to add values regularly then a Lookup field is a good choice. The Configuration Migration tool in the SDK can be used to easily move values between environments.


Removing values over time (Lookup)

Old items can be easily deactivated and will no longer show in lookup fields (including in advanced finds) however existing records will retain their selected value (unlike when option-set items are removed).

If you need to make changes constantly to the list and remove items without affecting previous records then a lookup field is most likely your best choice.


Multi-Language Options (Lookup)

Not possible without complex server-side plugin code to dynamically return the name in the current user's language.

Dependent/Filtered options (Lookup)

Lookup filtering options can be added in the form field properties or can be added via JavaScript for more complex scenarios.

Lookups are the easiest and quickest way to set up dependent lists without writing code. This filtering will also work on tablet/mobile clients without further consideration.
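For the JavaScript route, the lookup control exposes addPreSearch()/addCustomFilter(). The sketch below wires a country/city style filter against a stub control (the dev1_ attribute names are made up for illustration):

```javascript
// Sketch: apply a custom FetchXML filter to a city lookup so that only
// cities in the selected country are returned when the user searches.
function filterCityByCountry(cityControl, countryId) {
    cityControl.addPreSearch(function () {
        var filter = "<filter type='and'>" +
                     "<condition attribute='dev1_countryid' operator='eq' value='" + countryId + "' />" +
                     "</filter>";
        cityControl.addCustomFilter(filter, "dev1_city");
    });
}

// Stub standing in for the real lookup control
var stubLookupControl = {
    preSearchHandlers: [],
    appliedFilters: [],
    addPreSearch: function (handler) { this.preSearchHandlers.push(handler); },
    addCustomFilter: function (xml, entity) { this.appliedFilters.push({ xml: xml, entity: entity }); },
    search: function () { this.preSearchHandlers.forEach(function (h) { h(); }); }
};

filterCityByCountry(stubLookupControl, "{00000000-0000-0000-0000-000000000001}");
stubLookupControl.search(); // simulates the user opening the lookup
console.log(stubLookupControl.appliedFilters.length); // 1
```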

Additional information stored against each option (Lookup)

Additional information can be stored as attributes on the lookup entity records. Lookup views can show up to 2 additional attributes within the inline lookup control.

If you are integrating with another system that requires a special foreign key to be provided, lookup entities are good way of storing this key.


Mapping between entities (Lookup)

Lookups can easily be mapped between records using attribute maps/workflows or calculated fields.

Number of select fields (Lookup)

CRM Online is limited to 300 custom entities.

This is an important consideration. If you are using CRM Online, you'll likely have to use a combination of lookups and option-sets due to the limit of 300 custom entities – don't take the decision to make all your select fields lookups.


Search/Filtering (Lookup)

Lookups are presented as search fields in Advanced Find and Filters.

Configure once, deploy everywhere (Lookup)

Works across all clients including phone and tablet native apps. If working offline, however, not all lookup values may be available.

Text Field Auto Completes (CRM 2016)

Autocompletes are a free text field with an on key press event added to show an autocomplete flyout. The great thing about autocompletes is that they can show icons and additional action links. See below for an example of how to use autocompletes in JavaScript.

Number of items in list (Autocomplete)

An autocomplete field will only show the items you return to it, and you'll want to apply a limit to the number returned for performance reasons.

If you need the select field to be more like a combo-box where users can type their own values or select from predefined items then autocomplete text fields are a good choice.


Filtering based on user's business unit (Autocomplete)

You can add any search filtering you need using JavaScript.


Ease of adding values frequently by business users (Autocomplete)

If the autocomplete is using a lookup entity to store the data displayed then the same considerations would apply as for Lookup Entities. If the values are hard coded into the JavaScript then this would be more akin to the Option-Set solution import.

Removing values over time (Autocomplete)

Since the actual field is stored as a text field there is no issue with removing values. Existing data will still be preserved.

Multi-Language Options (Autocomplete)

You can detect the user interface language and return a specific language field to the user via JavaScript; however, the value will be stored in the textbox and other users will not see it in their language (unlike an option-set). One solution would be to use the autocomplete for data entry and then use a web resource to present the field value in the local user's language.
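A sketch of that idea – choosing which language-specific attribute to display from the user's LCID (the real call would be Xrm.Page.context.getUserLcid(); the dev1_ attribute names are hypothetical):

```javascript
// Sketch: map the user's LCID to a language-specific attribute name,
// falling back to English when the language isn't catered for.
function displayFieldForLcid(lcid) {
    var fieldByLcid = {
        1033: "dev1_name_en", // English
        1036: "dev1_name_fr", // French
        1031: "dev1_name_de"  // German
    };
    return fieldByLcid[lcid] || "dev1_name_en";
}

console.log(displayFieldForLcid(1036)); // dev1_name_fr
console.log(displayFieldForLcid(9999)); // dev1_name_en
```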

Dependent/Filtered options (Autocomplete)

You can apply whatever filtering you need using JavaScript.

Additional information stored against each option (Autocomplete)

If you use the autocomplete to search a custom entity you can store additional data as additional attributes. The autocomplete flyout can display multiple values for each result row.

Autocomplete fields have the advantage that they can show an icon specific to each record (e.g. the flag of the country). If you need this feature, then autocompletes are a good choice.


Search/Filtering (Autocomplete)

If you use a free text autocomplete it's advisable to additionally populate a backing lookup field to facilitate searching/filtering. This would also allow you to ensure that 'unresolved' values cannot be saved by using an OnSave event to check that the text field matches a hidden lookup field that is populated in the OnChange event.
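A sketch of that OnSave guard, using stub event args in place of the real execution context:

```javascript
// Sketch: block the save when the visible text doesn't match the name of the
// hidden backing lookup (i.e. the value was never 'resolved' by the autocomplete).
function onSaveValidateCountry(textValue, backingLookupValue, saveEventArgs) {
    var resolvedName = backingLookupValue && backingLookupValue[0] && backingLookupValue[0].name;
    if (textValue && textValue !== resolvedName) {
        saveEventArgs.preventDefault(); // unresolved free text - stop the save
        return false;
    }
    return true;
}

// Stub event args standing in for the real save event
var saveArgs = { prevented: false, preventDefault: function () { this.prevented = true; } };
console.log(onSaveValidateCountry("Franc", [{ id: "1", name: "France" }], saveArgs)); // false
console.log(saveArgs.prevented); // true
```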

Configure once, deploy everywhere (Autocomplete)

Autocomplete does not work on phone/tablet native apps yet.

Show me the Code!

I have added support for the Auto Complete SDK extensions in CRM2016 to SparkleXRM. To show a country autocomplete lookup, you'd add onload code similar to:

public static void OnLoad()
{
    Control control = Page.GetControl("dev1_countryautocomplete");
    control.AddOnKeyPress(OnCountrySearch);
}

public static void OnCountrySearch(ExecutionContext context)
{
    string searchTerm = Page.GetControl("dev1_countryautocomplete").GetValue<string>();
    string fetchXml = String.Format(@"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
                              <entity name='dev1_country'>
                                <attribute name='dev1_countryid' />
                                <attribute name='createdon' />
                                <attribute name='dev1_name' />
                                <attribute name='dev1_longname' />
                                <attribute name='dev1_iso' />
                                <attribute name='entityimage_url' />
                                <order attribute='dev1_name' descending='false' />
                                <filter type='and'>
                                  <condition attribute='dev1_name' operator='like' value='{0}%' />
                                </filter>
                              </entity>
                            </fetch>", searchTerm);

    // We use an async call so that the user interface isn't blocked whilst we are searching for results
    OrganizationServiceProxy.BeginRetrieveMultiple(fetchXml, delegate(object state)
    {
        try
        {
            EntityCollection countries = OrganizationServiceProxy.EndRetrieveMultiple(state, typeof(Entity));
            AutocompleteResultSet results = new AutocompleteResultSet();

            // The Autocomplete can have an action button in the footer of the results flyout
            AutocompleteAction addNewAction = new AutocompleteAction();
            addNewAction.Id = "add_new";
            addNewAction.Icon = @"/_imgs/add_10.png";
            addNewAction.Label = "New";
            addNewAction.Action = delegate()
            {
                OpenEntityFormOptions windowOptions = new OpenEntityFormOptions();
                windowOptions.OpenInNewWindow = true;
                Utility.OpenEntityForm2("dev1_country", null, null, windowOptions);
            };
            results.Commands = addNewAction;
            results.Results = new List<AutocompleteResult>();
            // Add the results to the autocomplete parameters object
            foreach (Entity country in countries.Entities)
            {
                AutocompleteResult result = new AutocompleteResult();
                result.Id = country.Id;
                result.Icon = country.GetAttributeValueString("entityimage_url");
                result.Fields = new string[] { country.GetAttributeValueString("dev1_name"),
                                               country.GetAttributeValueString("dev1_iso"),
                                               country.GetAttributeValueString("dev1_longname") };
                ArrayEx.Add(results.Results, result);
            }
            if (results.Results.Count > 0)
            {
                // Only show the autocomplete if there are results
                Page.GetControl("dev1_countryautocomplete").ShowAutoComplete(results);
            }
            else
            {
                // There are no results so hide the autocomplete
                Page.GetControl("dev1_countryautocomplete").HideAutoComplete();
            }
        }
        catch (Exception ex)
        {
            Utility.AlertDialog("Could not load countries: " + ex.Message, null);
        }
    });
}

This would result in JavaScript:

ClientHooks.Autocomplete = function ClientHooks_Autocomplete() {
};
ClientHooks.Autocomplete.onLoad = function ClientHooks_Autocomplete$onLoad() {
    var control = Xrm.Page.getControl('dev1_countryautocomplete');
    control.addOnKeyPress(ClientHooks.Autocomplete.onCountrySearch);
};
ClientHooks.Autocomplete.onCountrySearch = function ClientHooks_Autocomplete$onCountrySearch(context) {
    var searchTerm = Xrm.Page.getControl('dev1_countryautocomplete').getValue();
    var fetchXml = String.format("<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>\r\n                                      <entity name='dev1_country'>\r\n                                        <attribute name='dev1_countryid' />\r\n                                        <attribute name='createdon' />\r\n                                        <attribute name='dev1_name' />\r\n                                        <attribute name='dev1_longname' />\r\n                                        <attribute name='dev1_iso' />\r\n                                        <attribute name='entityimage_url' />\r\n                                        <order attribute='dev1_name' descending='false' />\r\n                                        <filter type='and'>\r\n                                          <condition attribute='dev1_name' operator='like' value='{0}%' />\r\n                                        </filter>\r\n                                      </entity>\r\n                                    </fetch>", searchTerm);
    Xrm.Sdk.OrganizationServiceProxy.beginRetrieveMultiple(fetchXml, function(state) {
        try {
            var countries = Xrm.Sdk.OrganizationServiceProxy.endRetrieveMultiple(state, Xrm.Sdk.Entity);
            var results = {};
            var addNewAction = {};
            addNewAction.id = 'add_new';
            addNewAction.icon = '/_imgs/add_10.png';
            addNewAction.label = 'New';
            addNewAction.action = function() {
                var windowOptions = {};
                windowOptions.openInNewWindow = true;
                Xrm.Utility.openEntityForm('dev1_country', null, null, windowOptions);
            };
            results.commands = addNewAction;
            results.results = [];
            var $enum1 = ss.IEnumerator.getEnumerator(countries.get_entities());
            while ($enum1.moveNext()) {
                var country = $enum1.current;
                var result = {};
                result.id = country.id;
                result.icon = country.getAttributeValueString('entityimage_url');
                result.fields = [ country.getAttributeValueString('dev1_name'), country.getAttributeValueString('dev1_iso'), country.getAttributeValueString('dev1_longname') ];
                Xrm.ArrayEx.add(results.results, result);
            }
            if (results.results.length > 0) {
                Xrm.Page.getControl('dev1_countryautocomplete').showAutoComplete(results);
            }
            else {
                Xrm.Page.getControl('dev1_countryautocomplete').hideAutoComplete();
            }
        }
        catch (ex) {
            Xrm.Utility.alertDialog('Could not load countries: ' + ex.message, null);
        }
    });
};





Posted on 26. October 2013

Manage your SDK assemblies the easy way

NuGet has become the de facto way of managing assembly references from within Visual Studio. Using the Package Manager you can easily download, install, update and uninstall referenced libraries automatically from the ever-growing NuGet library repository.

In the past there was an unofficial package, but with the release of CRM2013, the SDK team have made the official assemblies available on NuGet. You can install the references to the CRM2013 SDK assemblies by Right Clicking on your Project from within Visual Studio and selecting Manage NuGet Packages…

If you search for Microsoft.CrmSdk in the Online tab, you will see a list of the official SDK assemblies.

Here are 5 really great things about using NuGet:

  1. All referenced assemblies are stored in a single solution folder named packages so that each project will reference the same assembly version inside your solution.
  2. You can update to the latest version using the 'Updates' tab inside the NuGet Package manager (or a specific version using the command line package manager).
  3. Any dependency assembly packages (e.g. IdentityModel) will automatically be downloaded and referenced in your solution.
  4. You don't need to commit your dependency packages to Source Control. If you commit only your code to source control and then try and build the project on a new environment, you can easily re-download the assemblies from within the Package Manager using the 'Restore' button.

    If you want to automatically restore these files on build – you can Right Click on your solution and select 'Enable NuGet Package Restore'.

    See the article on the NuGet site for more information.
  5. You can use either the package manager to discover and install your packages, or use the NuGet Console to quickly install/uninstall if you know what you're looking for (Tools -> Library Package Manager -> Package Manager Console)

The package inter-dependencies with their names as referenced in NuGet are as follows:

You can see all of them listed at

Using the Package Manager Console you could install the latest Core Assemblies for CRM2013 using:

Install-Package Microsoft.CrmSdk.CoreAssemblies

Or you could install the CRM 2011 version using:

Install-Package Microsoft.CrmSdk.CoreAssemblies -Version 5.0.17

If you want to uninstall them, you can use:

Uninstall-Package Microsoft.CrmSdk.CoreAssemblies

Start using NuGet if you are not already and you'll never look back!


Posted on 9. September 2013

Do you understand MergeOptions?

If you use LINQ queries with the OrganizationServiceContext then understanding MergeOptions is vital. At the end of this post I describe the most common 'gotcha' that comes from not fully understanding this setting.

The OrganizationServiceContext implements a version of the 'Unit of Work' pattern that allows us to make multiple changes on the client and then submit them with a single call to 'SaveChanges'. The MergeOption property alters the way that the OrganizationServiceContext handles the automatic tracking of objects returned from queries. It is important to understand what's going on, since by default LINQ queries may not return the most recent version of the records from the server, but rather a 'stale' version that is currently being tracked.

What is this 'Merge' they speak of?!

The SDK entry on MergeOptions talks about 'Client side changes being lost' during merges.

The term 'merge' is nothing to do with merging of contacts/leads/accounts – but describes what happens when the server is re-queried within an existing context and results from a previous query are returned rather than new copies of each record. It is a record ID based combination, not an attribute merge – so a record is either re-used from the current context, or a new instance is returned that represents the version on the server.

In order to describe the different options, consider the following code:

// Query 1
var contacts = (from c in context.ContactSet
                select new Contact
                {
                    ContactId = c.ContactId,
                    FirstName = c.FirstName,
                    LastName = c.LastName,
                    Address1_City = c.Address1_City
                }).ToList();

// Update 1
Contact contact1 = contacts[0];
contact1.Address1_City = DateTime.Now.ToLongTimeString();
context.UpdateObject(contact1);

// Query 2
var contacts2 = (from c in context.ContactSet
                 select c).ToList();

// Update 2
var contact2 = contacts2[0];
contact2.Address1_City = DateTime.Now.ToLongTimeString();
context.UpdateObject(contact2);

// Save Changes
context.SaveChanges();


MergeOption.NoTracking

Perhaps the best place to start is the behaviour with no tracking at all.

  • Query 1 – Will return all matching contacts but not add them to the tracking list
  • Update 1 – Will throw an exception because the contact is not being tracked. You would need to use context.Attach(contact) to allow this update to happen
  • Query 2 – This query will pull down new copies of all contacts from the server, including a new version of contact 1
  • Update 2 – We now have two versions of the same contact with different city attribute values. The UpdateObject will fail without Attach first being called. If you attempt to attach contact2 after attaching contact1 you will receive the error 'The context is already tracking a different 'contact' entity with the same identity' because contact1 is already tracked and has the same ID.

MergeOption.AppendOnly (Default Setting)

When using the OrganizationServiceContext, by default it will track all objects that are returned from LINQ queries. This means that the second query will return the instance of the contacts that have already been returned from query 1. Critically this means that any changes made on the server between query 1 and query 2 (or any additional attributes queried using projection) will not be returned.

  • Query 1 – Will return all matching contacts and add them to the tracking list
  • Update 1 – Will succeed because the contact is being tracked
  • Query 2 – Will return the same instances that are already being tracked. The only records that will be returned from the server will be those that are not already being tracked. This is the meaning of 'AppendOnly'. The query still returns the data from the server, but the OrganizationServiceContext redirects the results to the instances already in the tracking list meaning that any changes made on the server since Query 1 will not be reflected in the results.
  • Update 2 – Will succeed since contact1 and contact2 are the same object. Calling UpdateObject on the same instance more than once is acceptable.


MergeOption.PreserveChanges

PreserveChanges is essentially the same as AppendOnly except:

  • Query 2 – Will return the same instances that are already being tracked provided they have an EntityState not equal to Unchanged. This means that contact2 will be the same instance as contact1 because it has been updated, but other instances in the contacts and contacts2 results will be new instances.

The result of this is that queries will not pick up the most recent changes on the server if a tracked version of that record has been edited in the current context.


MergeOption.OverwriteChanges

With a MergeOption of OverwriteChanges, the query behaviour will effectively be as per NoTracking, however the tracking behaviour is like AppendOnly and PreserveChanges:

  • Query 1 – Will return all matching contacts and add each one to the tracking list (as per AppendOnly and PreserveChanges)
  • Update 1 – Will succeed because the contact is being tracked (as per AppendOnly and PreserveChanges)
  • Query 2 – This query will pull down new copies of all contacts from the server, including a new version of contact 1 (as per NoTracking). The previously tracked contact1 will no longer be tracked, but the new version (contact2) will be.
  • Update 2 – Will succeed, and the values on contact1 will be lost.

The MergeOption has a subtle but important effect on the OrganizationServiceContext, and without truly understanding each setting you might see unexpected results if you stick with the default 'AppendOnly'. For instance, you might update a value on the server between queries, but because the record is already tracked, re-querying will not bring down the latest values. Remember that all of this behaviour is only true within the same context – if you create a new context, any previously tracked/modified records will no longer be tracked.

LINQ Projection 'Gotcha'

The most common issue I see from not fully understanding MergeOptions (and yes, I made this mistake too!) is the use of the default AppendOnly setting in conjunction with LINQ projection. In our code example, Query 1 returns a projected version of the contact that only contains 4 attributes. When we re-query in Query 2 we might expect to see all attribute values, but because we are already tracking the contacts our query will only return the previously queried 4 attributes! This can hide data from your code and cause some very unexpected results!
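The tracking behaviour behind this gotcha is essentially an identity map keyed on record ID. This is not the actual OrganizationServiceContext implementation – just a minimal JavaScript sketch of the idea, with made-up record shapes:

```javascript
// Toy identity map illustrating the AppendOnly 'merge' behaviour.
function TrackingContext() {
    this.tracked = {}; // tracked instances, keyed by record id
}

// AppendOnly-style merge: if a record with this id is already tracked,
// return the tracked instance and ignore the fresh copy from the 'server'.
TrackingContext.prototype.merge = function(serverRecord) {
    var existing = this.tracked[serverRecord.id];
    if (existing) {
        return existing; // the stale, possibly projected, instance wins
    }
    this.tracked[serverRecord.id] = serverRecord;
    return serverRecord;
};

// Query 1 returns a projection containing only some attributes
var context = new TrackingContext();
var projected = context.merge({ id: '1', firstname: 'Joe' });

// Query 2 returns the full record from the 'server'...
var full = context.merge({ id: '1', firstname: 'Joe', lastname: 'Bloggs' });

// ...but the context hands back the original projected instance,
// so lastname is still missing.
console.log(full === projected); // true
console.log(full.lastname);      // undefined
```

The real context does the same thing with query results: the server is still queried, but the results are redirected to the instances already in the tracking list.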

In these circumstances, unless you really need tracking and fully understand MergeOptions, I recommend changing the MergeOptions to 'NoTracking'.


Posted on 6. December 2011

Dynamics CRM DateTimes - the last word?

The subject of DateTimes in Dynamics CRM 2011 seems to raise its ugly head on every project – so I thought I'd *try* and create a guide for developers on future projects on how to deal with DateTimes in Dynamics CRM 2011.

Time Zones

Dynamics CRM stores Date/Time fields in the database as a SQL datetime field that is always converted to UTC. Each user has a Time Zone Code associated with their user settings. To list all the available Time Zone Codes, you can use the following query against the MSCRM database:

SELECT TimeZoneCode, UserInterfaceName FROM TimeZoneDefinition order by UserInterfaceName

To list each user's selected Time Zone Code you can use:

Select SystemUserId, TimeZoneCode from UserSettings

There are a number of functions available in the MSCRM database that allow converting to and from UTC to local dates. The following function accepts a utc date and converts it to a local date based on the time zone code matching those in TimeZoneDefinition.


Dates are stored as UTC

Consider the following: Joe is in the New York office and creates an appointment in CRM with a scheduled start of '26 Nov 2001 13:00'. Karen is in the Paris office and opens up the same appointment created by Joe, and observes that the start time is '26 Nov 2001 19:00'.

  1. 26 Nov 2001 13:00 – value entered by Joe in New York office as scheduled start date/time. New York is in EST (UTC-5) – i.e. UTC minus 5 hours
  2. 26 Nov 2001 18:00 – value stored in the Database by CRM – converted to UTC date/time by adding 5 hours.
  3. 26 Nov 2001 19:00 – value viewed by Karen in the Paris office – CRM converts from UTC to Karen's local time of UTC+1 by adding 1 hour.

The following SQL shows this example in action:

PRINT 'Non-Daylight Saving Test'
DECLARE @utc datetime = '2001-11-26 18:00:00'

--(GMT-05:00)+1 Eastern Time (US & Canada) New York EST 
-- Entered as 2001-11-26 13:00:00
PRINT 'New York (GMT-05:00)+1	' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,35),120)
PRINT 'UTC						' + CONVERT(nvarchar(30),@utc,120)
--(GMT+01:00) Brussels, Copenhagen, Madrid, Paris
PRINT 'Paris (GMT+1)+1			' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,105),120)

Daylight Saving adjustments

The following scenario is similar to the above except, the date is now in the summer, and subject to daylight saving adjustments. Read this article for more info on daylight saving adjustments.

Joe is in the New York office and creates an appointment in CRM recorded as '26 June 2001 13:00'. Karen is in the Paris office and opens up the same appointment created by Joe, and observes that the start time is '26 June 2001 19:00'.

  1. 26 June 2001 13:00 – value entered by Joe in New York office as scheduled start date/time. New York is in EST (UTC-5) – but daylight saving (EDT) is also in effect, which adds 1 hour.
  2. 26 June 2001 17:00 – value stored in the Database by CRM – converted to UTC date/time by adding 4 hours (5 hours less the 1 hour of daylight saving).
  3. 26 June 2001 19:00 – value viewed by Karen in the Paris office – CRM converts from UTC to Karen's local time of UTC+1 by adding 1 hour, and then another hour for the daylight saving adjustment.

The important thing to understand is that daylight saving adjustments are based upon the date being entered, and not the current date/time at the point of entry. So if a date of 26 June was entered on 26 November, the daylight saving adjustment would still be made. This ensures that datetimes are always constant in the same time zone – you wouldn't want the time of an appointment to change depending on what time of year you viewed the record.
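You can see the date-dependent adjustment outside of CRM too. As an aside (my example, not from the original post), JavaScript's Intl API applies the offset that was in force on the date being formatted – EDT (UTC-4) in June, EST (UTC-5) in November:

```javascript
// Format a UTC instant as HH:mm in a named time zone
// (requires an ICU-enabled runtime such as Node.js).
function hourIn(timeZone, isoUtc) {
    var fmt = new Intl.DateTimeFormat('en-GB', {
        timeZone: timeZone, hour: '2-digit', minute: '2-digit', hour12: false
    });
    return fmt.format(new Date(isoUtc));
}

// June: daylight saving is in effect, so 17:00 UTC is 13:00 in New York (EDT, UTC-4)
console.log(hourIn('America/New_York', '2001-06-26T17:00:00Z')); // "13:00"

// November: 18:00 UTC is 13:00 in New York (EST, UTC-5)
console.log(hourIn('America/New_York', '2001-11-26T18:00:00Z')); // "13:00"
```

Both UTC values map back to the same 13:00 wall-clock time that Joe entered, mirroring the SQL example below.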

The following SQL shows this example in action:

PRINT 'Daylight Saving Test'
DECLARE @utc datetime = '2001-06-26 17:00:00'

--(GMT-05:00)+1 Eastern Time (US & Canada) New York EST 
-- Entered as 2001-06-26 13:00:00
PRINT 'New York (GMT-05:00)+1	' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,35),120)
PRINT 'UTC						' + CONVERT(nvarchar(30),@utc,120)
--(GMT+01:00) Brussels, Copenhagen, Madrid, Paris
PRINT 'Paris (GMT+1)+1			' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,105),120)

Dynamics CRM *always* stores a time element with dates

Dynamics CRM doesn't support storing just dates; they will always have a time element even if it's not displayed in the User Interface or exports. This can cause issues for dates such as 'date of birth' – consider the following:

  1. 26 Nov 2001 – Date of birth entered by Karen in the Paris office.
  2. 26 Nov 2001 00:00 - Date of birth sent to the Web Server by the form submit. Note that the time element is set to zero-hundred hours if a date time field is configured to only show the date element.
  3. 25 Nov 2001 23:00 - Date stored in the database converted to UTC by subtracting 1 hour – since Karen's local time zone is UTC+1.
  4. 25 Nov 2001 – Date shown to Bob who is in London on GMT (UTC+0)

So a date of birth entered correctly by Karen in Paris is showing as the wrong date to Bob in London due to the time zone UTC conversion.
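The same shift is easy to reproduce in JavaScript (illustrative only, not CRM code): formatting the stored UTC value in each user's time zone gives Karen and Bob different calendar dates.

```javascript
// Format a UTC instant as dd/mm/yyyy in a named time zone
// (requires an ICU-enabled runtime such as Node.js).
function dateIn(timeZone, isoUtc) {
    return new Intl.DateTimeFormat('en-GB', {
        timeZone: timeZone, year: 'numeric', month: '2-digit', day: '2-digit'
    }).format(new Date(isoUtc));
}

var storedUtc = '2001-11-25T23:00:00Z'; // what CRM stored for Karen's entry

console.log(dateIn('Europe/Paris', storedUtc));  // "26/11/2001" - Karen sees the date she entered
console.log(dateIn('Europe/London', storedUtc)); // "25/11/2001" - Bob sees the day before
```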

The following SQL shows this example in action:

PRINT 'Date of birth test'
DECLARE @utc datetime = '2001-11-25 23:00:00'

--(GMT+01:00) Brussels, Copenhagen, Madrid, Paris
-- Entered as 2001-11-26 (sent as 2001-11-26 00:00:00)
PRINT 'Paris (GMT+1)			' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,105),120)
PRINT 'UTC						' + CONVERT(nvarchar(30),@utc,120)
--(GMT-05:00)+1 Eastern Time (US & Canada) New York EST 
PRINT 'New York (GMT-05:00)	' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,35),120)

Absolute Date solutions

There are a number of solutions to this absolute date issue:

  1. Adjust the date/time at point of entry (JavaScript or PlugIn) and convert to mid-day (12:00) so that any time conversion will not move it over the date line. This will only work if you don't have any offices that are more than 12 hours apart.
  2. Write a plugin that intercepts any Retrieve/RetrieveMultiple messages and adjust the time to correct for the time zone offset. This would only work when a date is displayed in a Form or Data Grid – it would not work with SQL based reports or when dates are compared within an advanced find search criteria.
  3. Store the date of birth as a string or 3 option sets for year, month and day – this is in fact the only way to completely avoid the time zone conversion issue for absolute date fields.

You can see the 12:00 date correction in action here:

  1. 26 Nov 2001 – Date of birth entered by Karen in the Paris office.
  2. 26 Nov 2001 12:00 - Date of birth sent to the Web Server by the form submit (or adjusted in a PlugIn pipeline).
  3. 26 Nov 2001 11:00 - Date stored in the database converted to UTC by subtracting 1 hour – since Karen's local time zone is UTC+1.
  4. 26 Nov 2001 – Date shown to Bob who is in London on GMT (UTC+0) Correct!

PRINT 'Date of birth test ( 12:00 corrected)'
DECLARE @utc datetime = '2001-11-26 11:00:00'

--(GMT+01:00) Brussels, Copenhagen, Madrid, Paris
-- Entered as 2001-11-26 (sent as 2001-11-26 12:00:00)
PRINT 'Paris (GMT+1)			' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,105),120)
PRINT 'UTC						' + CONVERT(nvarchar(30),@utc,120)
--(GMT-05:00)+1 Eastern Time (US & Canada) New York EST 
PRINT 'New York (GMT-05:00)	' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,35),120)

The downside of this is that if two offices are in time zones more than 12 hours apart, the conversion will still take the date over the date line, and will not show the birth date correctly. At this point, your only option is a text date field.

PRINT 'Date of birth test ( 12:00 corrected - timezone problem)'
DECLARE @utc datetime = '2001-11-26 17:00:00'

--(GMT-05:00)+1 Eastern Time (US & Canada) New York EST 
PRINT 'New York (GMT-05:00)		' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,35),120)
PRINT 'UTC						' + CONVERT(nvarchar(30),@utc,120)
--Fiji (GMT+12)
PRINT 'Fiji (GMT+12)			' + CONVERT(nvarchar(30),dbo.fn_UTCToTzCodeSpecificLocalTime(@utc,285),120)
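The 12:00 correction itself is simple to apply in client script before the value is submitted. A hedged sketch of the idea (plain JavaScript, not tied to any CRM API):

```javascript
// Shift a date-only value to midday local time so that the UTC conversion
// cannot move it across the date line (valid for offices within +/-12h).
function toNoon(date) {
    return new Date(date.getFullYear(), date.getMonth(), date.getDate(), 12, 0, 0);
}

var entered = new Date(2001, 10, 26);  // 26 Nov 2001, midnight local time
var corrected = toNoon(entered);       // 26 Nov 2001, 12:00 local time

console.log(corrected.getHours()); // 12
console.log(corrected.getDate());  // 26
```

In CRM this adjustment would typically be done in a form OnSave handler or in a plugin in the pre-operation pipeline, as option 1 above describes.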

SDK Web Services DateTime Gotcha

The SDK Web Services can accept either a local datetime or a UTC datetime when performing a create/update, but will always return a UTC date on Retrieve/RetrieveMultiple. For this reason, you must be very careful if you retrieve a value, update it and then send it back.

Using the SOAP endpoint, you will always get a UTC date. To convert it to a local date on the client (assuming that the client has its time zone set correctly) use DateTime.ToLocalTime; if you can't guarantee the time zone settings, use the LocalTimeFromUtcTimeRequest.

Contact contact = (from c in ctx.CreateQuery<Contact>()
                   where c.LastName == lastName
                   select c).FirstOrDefault();
Console.WriteLine("UTC Time " + contact.BirthDate.ToString());
Console.WriteLine("Local Time (Converted on Client) " + contact.BirthDate.Value.ToLocalTime().ToString());

LocalTimeFromUtcTimeRequest convert = new LocalTimeFromUtcTimeRequest
{
    UtcTime = contact.BirthDate.Value,
    TimeZoneCode = 85 // Time zone of the user
};
LocalTimeFromUtcTimeResponse response = (LocalTimeFromUtcTimeResponse)_service.Execute(convert);
Console.WriteLine("Local Time (Converted on Server) " + response.LocalTime.ToString());


If you want to update the date, you need to ensure you specify if it's a local datetime or a UTC datetime.

newContact.BirthDate = new DateTime(2001, 06, 21, 0, 0, 0, DateTimeKind.Utc);
// or
newContact.BirthDate = new DateTime(2001, 06, 21, 0, 0, 0, DateTimeKind.Local);


If you are using the REST endpoint, then you would set a UTC date using the following format:

<d:BirthDate m:type="Edm.DateTime">2001-06-20T23:00:00Z</d:BirthDate>


To set a local date time, which will be converted to UTC on the server, simply omit the trailing Z. The 'Z' comes from the Navy and Aviation use of 'Zulu' time, which is equivalent to UTC (but shorter!)

<d:BirthDate m:type="Edm.DateTime">2001-06-20T23:00:00</d:BirthDate>
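The same 'Z' distinction applies when parsing dates in JavaScript (my aside, not from the SDK): an ISO 8601 string with a trailing 'Z' is interpreted as UTC, while one without is interpreted as local time.

```javascript
var utc = new Date('2001-06-20T23:00:00Z');  // 'Z' => interpreted as UTC
var local = new Date('2001-06-20T23:00:00'); // no 'Z' => interpreted as local time

console.log(utc.toISOString()); // "2001-06-20T23:00:00.000Z"
// utc and local only represent the same instant when the runtime's
// time zone happens to be UTC.
```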

GMT 'Time-bomb' Gotcha

The problem with being in the UK is that for half the year the time zone is the same as UTC (GMT+0), which means date time conversion issues are often not spotted if the development takes place while British Summer Time (BST) daylight saving is not in effect – any dates entered into CRM are stored as UTC, which is the same as the local date. As soon as daylight saving comes into effect the problem is (hopefully!) spotted, because dates are an hour out in reports and integrations with other systems – the UTC date is being read from the database or SDK and not converted to local time.

Key Points:

So in summary, here are the key points to remember:

  1. Date/Times are always stored in the MSCRM database as UTC dates.
  2. When querying the Base table or views for an entity (e.g. ContactBase or Contact), the dates will be UTC.
    E.g. the following dates will be in UTC
    Select birthdate From ContactBase
    Select birthdate From Contact
  3. When querying the Filtered Views, dates will be in the local time specified in the current user's settings. There is another field, suffixed by 'utc', that provides the raw date without any conversion.
    E.g. The first date will be in the user's local time zone, and the second field will always be UTC
    Select birthdate, birthdateutc from FilteredContact
  4. When sending a date/time in SOAP SDK Message (e.g. create/update), the date will default to local time if you use a DateTime.Parse – and if you want to send a UTC time, you must set the DateTimeKind to UTC.
  5. Important: When querying the SOAP SDK, any date/times will be returned as UTC dates, and must be converted to local time using DateTime.ToLocalTime if you know that the locale of the current process is set correctly, or the LocalTimeFromUtcTimeRequest SDK message.
  6. When importing & updating data via the Import Wizard, date/times must be specified in the local time of the user who is importing them.

With any luck, that should settle the matter!

Posted on 18. May 2011

CRM Developer ‘Must Know’ #2: Web Resource Caching

With the introduction of Web Resources in CRM 2011, the task of adding custom user interface functionality (beyond simple JavaScript) has become a whole lot easier to build and deploy. The fact that web resources are part of the solution means that there is no need to have custom deployment routines to create sites in the ISV folder.

You always want your users to have the fastest experience possible, so it is important in each of these situations to ensure that your web resources are being cached by the client browser to avoid downloading a fresh copy with each request.

How Web Resources are cached

CRM 2011 uses a very simple but effective means of ensuring not only that Web Resources are cached by the browser, but also that when you update any of them the cache is invalidated and the new version is downloaded.

When a Web Resource is referenced directly, the Url shown on the Web Resource form is something like:


If you request this url in a browser, and use Internet Explorer's F12 Developer Tab Network monitor, you'll see that the response has the following Headers:

Cache-Control:    private
Expires:     <Today's Date Time>

This instructs the browser/proxy server that it should never cache this file, and that every time the browser asks for it, it should get the latest version from the server. This may not seem so bad for the odd file, but if you add up all the files the browser needs for every page request and then multiply by the number of users you have, it introduces a considerable network load and download time.

So how are they cached?

When the Web Resource is referenced by a CRM 2011 page, the following format is used:

http://crm/Contoso/%7B634411504110000000%7D/WebResources/new_/custom_page.htm

If you look at the response headers you'll now see:

Cache-Control:    public
Expires:     <Today's Date Time Plus One Year>

So the browser/proxy server will keep a copy of this web resource for a year and use that before it goes to get the latest version.

So how is the client cache cleared when I publish a new version of the web resource?

The additional %7B634411504110000000%7D is the 'cache directory', and it is updated to a new number every time the customisations are re-published. Since the browser/proxy cache is keyed on the url of the file, if the url changes the cache is no longer valid, and the file is considered a new one altogether to be downloaded.
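A browser (or proxy) cache is keyed on the full URL, so changing the cache-directory token is enough to force a fresh download. A toy sketch of the principle (the URLs and fetch function are made up for illustration):

```javascript
// Toy model of a URL-keyed cache - illustrative only.
var cache = {};

function fetchWithCache(url, fetchFromServer) {
    if (!(url in cache)) {
        cache[url] = fetchFromServer(url); // cache miss - download from the server
    }
    return cache[url]; // cache hit - no network round trip
}

var downloads = 0;
var get = function(url) { downloads++; return 'contents of ' + url; };

// Same cache directory: the second request is served from the cache
fetchWithCache('/%7B100%7D/WebResources/new_/page.htm', get);
fetchWithCache('/%7B100%7D/WebResources/new_/page.htm', get);
console.log(downloads); // 1

// Publishing customisations changes the token, so the URL (and cache key) changes
fetchWithCache('/%7B200%7D/WebResources/new_/page.htm', get);
console.log(downloads); // 2
```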

So how do we ensure that this cache strategy is always used?

Luckily, in most cases CRM 2011 handles this for us providing we play by its rules:

Among the options for showing a Web Resource are:

1) Embedding in an Entity form
2) Showing from a site map link
3) Ribbon button image
4) A Popup dialogue from a Ribbon button via a JavaScript function.

Here are the ways to ensure caching in each of these scenarios:

1) Embedding in an Entity form

If you embed a web resource in an Entity form, the cache directory is automatically used for you. However, if that web resource is an html page, ensure that you use relative paths in any javascript/css/image links so that you always stay in the cache directory.

E.g. If you have the following webresources:

  • new_/Account/AssignAccount.htm
  • new_/scripts/Common.js

In your AssignAccount.htm page, reference relatively using:

<script src="../scripts/Common.js" type="text/javascript"></script>

Do not use absolute:

<script src="/Webresources/scripts/Common.js" type="text/javascript"></script>

2) Showing from a site map link

When showing an html page from a site map link, you can ensure that CRM uses the cache directory by avoiding absolute paths and use the $webresource token in the SubArea definition:


3) Ribbon button image

When referencing images from site maps/ribbons, use the $webresource token as above rather than using the absolute path of the web resource.

4) A Popup dialogue from a Ribbon button via a JavaScript function

UR8 or later 

UR8 introduced a new utility function called 'openWebResource' - this will ensure that if you need to show a popup window with a webresource, caching is used rather than having to provide an absolute path:


This results in a url being used along the lines of:


If you do need to manually construct a Web Resource Url, you can use the constant 'WEB_RESOURCE_ORG_VERSION_NUMBER' – but bear in mind that this is not a supported/documented SDK constant.

Before UR8

Prior to UR8, this final one poses a bit more of a challenge. If you want to show a popup window that references a web resource page, there was no way in the customisations to achieve this – it has to be JavaScript, which means you need to construct the cache directory web resource location yourself. This is the biggest area where I see developers writing code that results in the browser downloading the file with every request.

To avoid this, you can find the current cache directory from another web resource that is currently loaded using something similar to:

function GetCacheDirectory(webresourceUrl) {
    var resourceCache = '';
    var scripts = document.getElementsByTagName("script");
    for (var i = 0; i < scripts.length; i++) {
        var url = scripts[i].src;
        var p1 = url.indexOf("/%7B");
        if (p1 > 0) {
            var p2 = url.indexOf("%7D/", p1) + 4;
            resourceCache = url.substr(p1, (p2 - p1));
            break;
        }
    }
    return "/" + Xrm.Page.context.getOrgUniqueName() + resourceCache + webresourceUrl;
}

And calling it using:

// Get cache directory resource url
var url = GetCacheDirectory('/WebResources/new_/Account/AssignAccount.htm');

// Open Window

Of course – I would recommend you upgrade to the latest rollup if you don't have UR8 installed yet!

If you follow these steps, you will ensure that your users' browsers only download files when they need to, resulting in less network load and faster load times.

Happy caching!