Polymorphic Workflow Activity Input Arguments

I often find myself creating 'utility' custom workflow activities that can be used on many different types of entity. One of the challenges with writing this kind of workflow activity is that InArguments can only accept a single type of entity (unlike activity regarding object fields). The following code works well for accepting a reference to an account but if you want to accept account, contact or lead you'd need to create 3 input arguments. If you wanted to make the parameter accept a custom entity type that you don't know about when writing the workflow activity then you're stuck!

[Input("Account")]
[ReferenceTarget("account")]
public InArgument<EntityReference> EntityReference { get; set; }

There are a number of workarounds to this that I've tried over the years, such as starting a child workflow and using the workflow activity context, or creating an activity and using its regarding object field – but I'd like to share with you the best approach I've found. Dynamics CRM workflows and dialogs have a neat feature of being able to add hyperlinks to records into emails/dialog responses etc., which is driven by a special attribute called 'Record Url(Dynamic)'.

This field can also be used to provide all the information we need to pass an Entity Reference. The sample I've provided is a simple Workflow Activity that accepts the Record Url and returns the Guid of the record as a string and the Entity Logical Name – this isn't much use on its own, but you'll be able to use the DynamicUrlParser.cs class in your own Workflow Activities.

[Input("Record Dynamic Url")]
[RequiredArgument]
public InArgument<string> RecordUrl { get; set; }

The DynamicUrlParser class can then be used as follows:

var entityReference = new DynamicUrlParser(RecordUrl.Get<string>(executionContext));
RecordGuid.Set(executionContext, entityReference.Id.ToString());
EntityLogicalName.Set(executionContext, entityReference.GetEntityLogicalName(service));
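To give a feel for what the parser does before you download the sample, the core idea can be sketched as follows. This is a simplified illustration, not the sample code itself: it assumes the record URL carries the record id in the `id` query parameter and the entity type code in `etc`, which is how the 'Record Url(Dynamic)' value is typically formed.

```csharp
using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

// Simplified, sandbox-friendly sketch of a record-URL parser.
// A Record Url (Dynamic) looks something like:
//   https://org.crm.dynamics.com/main.aspx?etc=1&id=00000000-0000-0000-0000-000000000000
public class DynamicUrlParserSketch
{
    public Guid Id { get; private set; }
    public int EntityTypeCode { get; private set; }

    public DynamicUrlParserSketch(string url)
    {
        // Parse the query string by hand so no System.Web reference is
        // needed (it isn't available to sandboxed plugins/activities).
        var query = new Uri(url).Query.TrimStart('?')
            .Split('&')
            .Select(p => p.Split('='))
            .ToDictionary(p => p[0], p => Uri.UnescapeDataString(p[1]));

        Id = new Guid(query["id"]);
        EntityTypeCode = int.Parse(query["etc"]);
    }

    public string GetEntityLogicalName(IOrganizationService service)
    {
        // Resolve the object type code to a logical name via metadata.
        // Consider caching the result - this is an expensive call.
        var response = (RetrieveAllEntitiesResponse)service.Execute(
            new RetrieveAllEntitiesRequest { EntityFilters = EntityFilters.Entity });
        return response.EntityMetadata
            .First(e => e.ObjectTypeCode == EntityTypeCode)
            .LogicalName;
    }
}
```

The DynamicUrlParser.cs class in the sample is more robust than this sketch, so prefer it in real projects.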

The full sample can be found in my MSDN Code Gallery. @ScottDurow

Monitor, Monitor, Monitor

I once heard someone say that "the great thing about Dynamics CRM is that it just looks after itself". Whilst CRM2013 is certainly very good at performing maintenance tasks automatically, if you have a customised system it is important to Monitor, Monitor, Monitor! There are some advanced ways of setting up monitoring using tools such as System Center, but even some regular simple monitoring tasks will go a long way for very little investment on your part:

1) Plugin Execution Monitoring

There is a super little entity called 'Plug-in Type Statistics' that often seems to be overlooked in the long list of advanced find entities. This entity is invaluable for tracking down issues before they cause problems for your users and, as defined by the SDK, it is "used by the Microsoft Dynamics CRM 2011 and Microsoft Dynamics CRM Online platforms to record execution statistics for plug-ins registered in the sandbox (isolation mode)." The key here is that it only records statistics for your sandboxed plugins. Unless there is a good reason not to (security access etc.) I would recommend that all of your plugins be registered in sandbox isolation. Of course, Dynamics CRM Online only allows sandboxed plugins anyway, so you don't want to put up barriers to moving to the cloud. To monitor this you can use advanced find to show a list sorted by execution time or failure count descending:
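If you would rather pull the same statistics from code than advanced find, a query along the following lines should work. This is a sketch: it assumes an existing IOrganizationService named service, and the late-bound logical name plugintypestatistic with the attribute names shown, which you should verify against your organization's metadata.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Sketch: list sandbox plug-in statistics, slowest first.
// Attribute names are assumptions - check them in your metadata browser.
var query = new QueryExpression("plugintypestatistic")
{
    ColumnSet = new ColumnSet(
        "plugintypeidname",
        "averageexecutetimeinmilliseconds",
        "executecount",
        "failurecount")
};
query.AddOrder("averageexecutetimeinmilliseconds", OrderType.Descending);

EntityCollection stats = service.RetrieveMultiple(query);
foreach (Entity stat in stats.Entities)
{
    // Use the loosely-typed indexer since attribute types can vary by version.
    Console.WriteLine("{0}: {1}ms average over {2} executions",
        stat["plugintypeidname"],
        stat["averageexecutetimeinmilliseconds"],
        stat["executecount"]);
}
```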

If you spot any issues you can then proactively investigate them before they become a problem. In the screen shot above there are a few plugins that are taking more than 1000ms (1 second) to execute, but their execution count is low. I look for plugins that have a high execution count and high execution time, or those that have a high failure percent.

2) Workflow & Asynchronous Job Execution Monitoring

We all know workflows can often start failing for various reasons. Because of their asynchronous nature these failures can go unnoticed by users until it's too late and you have thousands of issues to correct. To proactively monitor this you can create a view (and even add it to a dashboard) of System Jobs filtered by Status = Failed or Waiting and where the Message contains data. The Message attribute contains the full error description and stack trace, whereas the Friendly Message contains just the information that is displayed at the top of the workflow form in the notification box.
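The same view can be built in code against the asyncoperation entity. A sketch, again assuming an IOrganizationService named service; the status option values used here (31 = Failed, 10 = Waiting) are the out-of-the-box defaults and should be verified against your organization:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Sketch: find failed or waiting system jobs that carry an error message.
var query = new QueryExpression("asyncoperation")
{
    ColumnSet = new ColumnSet("name", "message", "friendlymessage", "statuscode")
};
query.Criteria.AddCondition("statuscode", ConditionOperator.In, 31, 10);
query.Criteria.AddCondition("message", ConditionOperator.NotNull);

foreach (Entity job in service.RetrieveMultiple(query).Entities)
{
    Console.WriteLine("{0}: {1}",
        job.GetAttributeValue<string>("name"),
        job.GetAttributeValue<string>("friendlymessage"));
}
```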

3) Client Latency & Bandwidth Monitoring

Now that you've got the server side under control, you should also look at the client connectivity of your users. There is a special hidden diagnostics page that can be accessed by using a URL of the format: http://<YourCRMServerURL>/tools/diagnostics/diag.aspx. As described by the implementation guide topic, "Microsoft Dynamics CRM is designed to work best over networks that have the following elements:

Bandwidth greater than 50 KB/sec
Latency under 150 ms"

After you click 'Run' on this test page you will get results similar to those shown below. You can see that this user is just above these requirements!

You can read more about the Diagnostic Page in Dynamics CRM. You can also monitor the client side using the techniques I describe in my series on Fiddler:

Part 1: X-Ray vision
Part 2: Invisibility
Part 3: Faster than a speeding bullet!
Part 4: Ice Man

If you take these simple steps to proactively monitor your Dynamics CRM solution then you are much less likely to have a problem that goes unnoticed until you get 'that call'! @ScottDurow

Chrome Dynamics CRM Developer Tools

Chrome already provides a fantastic set of developer tools for HTML/Javascript, but now, thanks to Blake Scarlavai at Sonoma Partners, we have the Chrome CRM Developer Tools. This fantastic Chrome add-in provides lots of gems to make debugging forms and testing FetchXml really easy:

Form Information
- Displays the current form's back-end information: Entity Name, Entity Id, Entity Type Code, Form Type, Is Dirty
- Ability to show the current form's attributes' schema names
- Ability to refresh the current form
- Ability to enable disabled attributes on the current form (System Administrators only)
- Ability to show hidden attributes on the current form (System Administrators only)

Current User Information
- Domain Name
- User Id
- Business Unit Id

Find
- Ability to open advanced find
- Set focus to a field on the current form
- Display a specific User and navigate to the record (by Id)
- Display a specific Privilege (by Id)

Test
- Ability to update attributes from the current form (System Administrators only) – helpful when you need to update values for testing but the fields don't exist on the form

Fetch
- Execute any Fetch XML statement and view the results

Check it out in the Chrome web store - https://chrome.google.com/webstore/detail/sonoma-partners-dynamics/eokikgaenlfgcpoifejlhaalmpeihfom

Microsoft.Xrm.Client (Part 3b): Configuration via app/web.config

In this series we have been looking at the Developer Extensions provided by the Microsoft.Xrm.Client assembly:

Part 1 - CrmOrganizationServiceContext and when should I use it?
Part 2 - Simplified Connection Management & Thread Safety
Part 3a – CachedOrganizationService

So far in this series we have learnt how to use the Microsoft.Xrm.Client developer extensions to simplify connecting to Dynamics CRM, handling thread safety and client-side caching. This post shows how the configuration of the Dynamics CRM connection and associated OrganizationService context can be provided in an app.config or web.config file. This is especially useful with ASP.NET portals and client applications that don't need the facility to dynamically connect to different Dynamics CRM servers. If you only want to configure the connection string, then you would add an app.config or web.config with the following entry:

<connectionStrings>
  <add name="Xrm" connectionString="Url=http://<server>/<org>"/>
</connectionStrings>

The connection string can take on the following forms:

On Prem (Windows Auth)

Url=http://<server>/<org>;

On Prem (Windows Auth with specific credentials)

Url=http://<server>/<org>; Domain=<domain>; Username=<username>; Password=<password>;

On Prem (Claims/IFD)

Url=https://<server>; Username=<username>; Password=<password>;

On Line (Windows Live)

Url=https://<org>.crm.dynamics.com; Username=<email>; Password=<password>; DeviceID=<DeviceId>; DevicePassword=<DevicePassword>;

On Line (O365)

Url=https://<org>.crm.dynamics.com; Username=<email>; Password=<password>;

You can find a full list of connection string parameters in the SDK. You can then easily instantiate an OrganizationService using:

var connection = new CrmConnection("Xrm");
var service = new OrganizationService(connection);

If you want to simplify creation of the ServiceContext, and make it much easier to handle thread safety, you can use a configuration file that looks like:

<configuration>
  <configSections>
    <section name="microsoft.xrm.client" type="Microsoft.Xrm.Client.Configuration.CrmSection, Microsoft.Xrm.Client"/>
  </configSections>
  <microsoft.xrm.client>
    <contexts>
      <add name="Xrm" type="Xrm.XrmServiceContext, Xrm" serviceName="Xrm"/>
    </contexts>
    <services>
      <add name="Xrm" type="Microsoft.Xrm.Client.Services.OrganizationService, Microsoft.Xrm.Client" instanceMode="[Static | PerName | PerRequest | PerInstance]"/>
    </services>
  </microsoft.xrm.client>
</configuration>

You can then instantiate the default context simply by using:

var context = CrmConfigurationManager.CreateContext("Xrm") as XrmServiceContext;

The most interesting part of this is the instanceMode. It can be Static, PerName, PerRequest or PerInstance. By setting it to PerRequest, you will get an OrganizationService per ASP.NET request in a portal scenario – making your code more efficient and thread safe (provided you are not using asynchronous threads in ASP.NET). I find the examples above are the most common configurations, although you can also specify multiple contexts with optional caching for specific contexts if required – the SDK has a full set of configuration file examples. Using the configuration file technique can simplify your code and ensure your code is thread safe.
On the topic of thread safety, it was recently brought to my attention that there appears to be a bug with the ServiceConfigurationFactory such that if you are instantiating your OrganizationService passing the ServiceManager to the constructor in conjunction with EnableProxyTypes, you can occasionally get a threading issue with the error "Collection was modified; enumeration operation may not execute". The workaround is to ensure that the call to EnableProxyTypes is wrapped in the following:

if (proxy.ServiceConfiguration.CurrentServiceEndpoint.EndpointBehaviors.Count == 0)
{
    proxy.EnableProxyTypes();
}

More information can be found in my forum post. Next up in this series are the utility and extension functions in the Microsoft.Xrm.Client developer extensions. @ScottDurow

Do you understand MergeOptions?

If you use LINQ queries with the OrganizationServiceContext then understanding MergeOptions is vital. At the end of this post I describe the most common 'gotcha' that comes from not fully understanding this setting. The OrganizationServiceContext implements a version of the 'Unit of Work' pattern (http://martinfowler.com/eaaCatalog/unitOfWork.html) that allows us to make multiple changes on the client and then submit them with a single call to 'SaveChanges'. The MergeOption property alters the way that the OrganizationServiceContext handles the automatic tracking of objects returned from queries. It is important to understand what's going on, since by default LINQ queries may not return you the most recent version of the records from the server, but rather 'stale' versions that are currently being tracked.

What is this 'Merge' they speak of?! The SDK entry on MergeOptions talks about 'client side changes being lost' during merges. The term 'merge' is nothing to do with merging of contacts/leads/accounts – it describes what happens when the server is re-queried within an existing context and results from a previous query are returned rather than new copies of each record. It is a record-ID-based combination, not an attribute merge – so a record is either re-used from the current context, or a new instance is returned that represents the version on the server. In order to describe the different options, consider the following code:

// Query 1
var contacts = (from c in context.ContactSet
                select new Contact
                {
                    ContactId = c.ContactId,
                    FirstName = c.FirstName,
                    LastName = c.LastName,
                    Address1_City = c.Address1_City
                }).Take(1).ToArray();

// Update 1
Contact contact1 = contacts[0];
contact1.Address1_City = DateTime.Now.ToLongTimeString();
context.UpdateObject(contact1);

// Query 2
var contacts2 = (from c in context.ContactSet
                 select c).Take(2).ToArray();

// Update 2
var contact2 = contacts2[0];
contact2.Address1_City = DateTime.Now.ToLongTimeString();
context.UpdateObject(contact2);

// Save Changes
context.SaveChanges();

MergeOption.NoTracking

Perhaps the best place to start is the behaviour with no tracking at all.

Query 1 – Will return all matching contacts but not add them to the tracking list.
Update 1 – Will throw an exception because the contact is not being tracked. You would need to use context.Attach(contact1) to allow this update to happen.
Query 2 – This query will pull down new copies of all contacts from the server, including a new version of contact 1.
Update 2 – We now have two versions of the same contact with different city attribute values. The UpdateObject will fail without Attach first being called. If you attempt to attach contact2 after attaching contact1 you will receive the error 'The context is already tracking a different 'contact' entity with the same identity' because contact1 is already tracked and has the same ID.
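The Attach step described above can be sketched as follows – a minimal illustration assuming a generated service context with a ContactSet, as in the code sample earlier:

```csharp
// With MergeOption.NoTracking, queried entities are not tracked, so they
// must be attached before UpdateObject is allowed.
context.MergeOption = MergeOption.NoTracking;

var contact = (from c in context.ContactSet
               select c).Take(1).ToArray()[0];

contact.Address1_City = "Cambridge";
context.Attach(contact);        // start tracking this instance
context.UpdateObject(contact);  // now the update succeeds
context.SaveChanges();
```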

MergeOption.AppendOnly (Default Setting)

When using the OrganizationServiceContext, by default it will track all objects that are returned from LINQ queries. This means that the second query will return the instances of the contacts that have already been returned from query 1. Critically, this means that any changes made on the server between query 1 and query 2 (or any additional attributes queried using projection) will not be returned.

Query 1 – Will return all matching contacts and add them to the tracking list.
Update 1 – Will succeed because the contact is being tracked.
Query 2 – Will return the same instances that are already being tracked. The only records returned as new objects will be those that are not already being tracked. This is the meaning of 'AppendOnly'. The query still returns the data from the server, but the OrganizationServiceContext redirects the results to the instances already in the tracking list, meaning that any changes made on the server since Query 1 will not be reflected in the results.
Update 2 – Will succeed since contact1 and contact2 are the same object. Calling UpdateObject on the same instance more than once is acceptable.

MergeOption.PreserveChanges

PreserveChanges is essentially the same as AppendOnly except:

Query 2 – Will return the same instances that are already being tracked, provided they have an EntityState not equal to Unchanged. This means that contact2 will be the same instance as contact1 because it has been updated, but the other instances in the contacts and contacts2 results will be new instances.

The result of this is that queries will not pick up the most recent changes on the server if a tracked version of that record has been edited in the current context.

MergeOption.OverwriteChanges

With a MergeOption of OverwriteChanges, the query behaviour will effectively be as per NoTracking, however the tracking behaviour is like AppendOnly and PreserveChanges:

Query 1 – Will return all matching contacts and add each one to the tracking list (as per AppendOnly and PreserveChanges).
Update 1 – Will succeed because the contact is being tracked (as per AppendOnly and PreserveChanges).
Query 2 – This query will pull down new copies of all contacts from the server, including a new version of contact 1 (as per NoTracking). The previously tracked contact1 will no longer be tracked, but the new version (contact2) will be.
Update 2 – Will succeed, and the values on contact1 will be lost.

The MergeOption has a subtle but important effect on the OrganizationServiceContext, and without truly understanding each setting you might see unexpected results if you stick with the default 'AppendOnly'. For instance, you might update a value on the server between queries, but because the record is already tracked, re-querying will not bring down the latest values. Remember that all of this behaviour only holds true for the same context – if you are creating a new context then any previously tracked/modified records will no longer be tracked.

LINQ Projection 'Gotcha'

The most common issue I see from not fully understanding MergeOptions (and yes, I made this mistake too!) is the use of the default AppendOnly setting in conjunction with LINQ projection. In our code example, Query 1 returns a projected version of the contact that only contains 4 attributes. When we re-query in Query 2 we might expect to see all attribute values, but because we are already tracking the contacts our query will only return the previously queried 4 attributes! This can hide data from your code and cause some very unexpected results! In these circumstances, unless you really need tracking and fully understand MergeOptions, I recommend changing the MergeOption to 'NoTracking'. @ScottDurow
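A minimal sketch of that fix, assuming the same context and ContactSet used in the earlier samples:

```csharp
// Avoid the projection 'gotcha' by switching off tracking so each
// query returns fresh copies from the server, set BEFORE querying.
context.MergeOption = MergeOption.NoTracking;

var refreshed = (from c in context.ContactSet
                 select c).Take(2).ToArray();

// refreshed now contains new instances with all attribute values,
// not the previously tracked 4-attribute projections. Remember that
// with NoTracking you must call context.Attach(entity) before
// UpdateObject if you want to save changes.
```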

Microsoft.Xrm.Client (Part 3a): CachedOrganizationService

In this series we have been looking at the Developer Extensions provided by the Microsoft.Xrm.Client assembly:

Part 1 - CrmOrganizationServiceContext and when should I use it?
Part 2 - Simplified Connection Management & Thread Safety

This 3rd part in the series demonstrates when and how to use the CachedOrganizationService. When writing client applications and portals that connect to Dynamics CRM there are many situations where you need to retrieve data and use it in multiple places. In these situations it is common practice to implement a caching strategy, which, although it can be easily implemented using custom code, can quickly add complexity to your code if you're not careful. The CachedOrganizationService provides a wrapper around the OrganizationServiceProxy with a caching service that is essentially transparent to your code, with the cache being automatically invalidated when records are updated by the client. The CachedOrganizationService inherits from OrganizationService and uses the same CrmConnection instantiation, so you can almost swap your existing OrganizationService with a CachedOrganizationService so that your code can benefit from caching without any changes. There are always some pieces of data that you don't want to cache, so you will need to plan your caching strategy carefully.

Using a CachedOrganizationService

You have two choices when it comes to instantiating the objects required:

Manual Instantiation – Full control over the combination of the CrmConnection, OrganizationService & OrganizationServiceContext Configuration Manager Instantiation – App/Web.config controlled instantiation using a key name.

Part 3b will show how to use the Configuration Manager, but for now we'll explicitly instantiate the objects so you can understand how they work together.

CrmConnection connection = new CrmConnection("CRM");
using (OrganizationService service = new CachedOrganizationService(connection))
using (CrmOrganizationServiceContext context = new CrmOrganizationServiceContext(service))
{
    …
}

Using the CachedOrganizationService to create your service context gives your application automatic caching of queries. Each query result is stored against the query used, and when performing further queries, if there is a matching query, the results are returned from the cache rather than using a server query.

Cached Queries

In the following example, the second query will not result in any server request, since the same query has already been executed.

QueryByAttribute request = new QueryByAttribute(Account.EntityLogicalName);
request.Attributes.Add("name");
request.Values.Add("Big Account");
request.ColumnSet = new ColumnSet("name");

// First query will be sent to the server
Account acc1 = (Account)service.RetrieveMultiple(request).Entities[0];

// This query will be returned from cache
Account acc2 = (Account)service.RetrieveMultiple(request).Entities[0];

If another query is executed that requests different attribute values (or has different criteria), then the query is executed against the server to get the additional values:

QueryByAttribute request2 = new QueryByAttribute(Account.EntityLogicalName);
request2.Attributes.Add("name");
request2.Values.Add("Big Account");
request2.ColumnSet = new ColumnSet("name", "accountnumber");

// This query will be sent to the server because the query is different
Account acc3 = (Account)service.RetrieveMultiple(request2).Entities[0];

Cloned or Shared

By default, the CachedOrganizationService will return a cloned instance of the cached results, but it can be configured to return the same instances:

((CachedOrganizationService)service).Cache.ReturnMode = OrganizationServiceCacheReturnMode.Shared;

QueryByAttribute request = new QueryByAttribute(Account.EntityLogicalName);
request.Attributes.Add("name");
request.Values.Add("Big Account");
request.ColumnSet = new ColumnSet("name");

// First query will be sent to the server Account acc1 = (Account)service.RetrieveMultiple(request).Entities[0];

// This query will be returned from cache
Account acc2 = (Account)service.RetrieveMultiple(request).Entities[0];
Assert.AreSame(acc1, acc2);

The assertion will pass because a ReturnMode of 'Shared' will return the existing values in the cache and not cloned copies (the default behaviour).

Automatically Invalidated Cache on Update

If you then go on to update/delete an entity that exists in a cached query result, the cache is automatically invalidated, resulting in a refresh the next time it is requested.

Coupling with CrmOrganizationServiceContext

In Part 1 we saw that the CrmOrganizationServiceContext provided a 'Lazy Load' mechanism for relationships; however it would execute a metadata request and query every time the relationship entity set was queried. When this is coupled with the CachedOrganizationService it gives us the complete solution. In the following example, we perform two LINQ queries against the Account.contactcustomeraccounts relationship; the first returns all the related contacts (all attributes), and the second simply retrieves the results from the cache. You don't need to worry about what is loaded and what is not.

// Query 1
Console.WriteLine("Query Expected");
Xrm.Account acc = (from a in context.CreateQuery<Account>()
                   where a.Name == "Big Account"
                   select new Account
                   {
                       AccountId = a.AccountId,
                       Name = a.Name,
                   }).Take(1).FirstOrDefault();

// Get the contacts from the server
Console.WriteLine("Query Expected");
var contacts = (from c in acc.contactcustomeraccounts
                select new Contact
                {
                    FirstName = c.FirstName,
                    LastName = c.LastName
                }).ToArray();

// Get the contacts again - from cache this time
Console.WriteLine("No Query Expected");
var contacts2 = (from c in acc.contactcustomeraccounts
                 select new Contact
                 {
                     FirstName = c.FirstName,
                     LastName = c.LastName,
                     ParentCustomerId = c.ParentCustomerId
                 }).ToArray();

Thread Safety

Provided you are using the CachedOrganizationService with the CrmConnection class, all the same multi-threading benefits apply. The client authentication will only be performed initially and when the token expires, and the cache automatically handles locking when being accessed by multiple threads.

Pseudo Cache with OrganizationServiceContext.MergeOption = AppendOnly

The OrganizationServiceContext has built-in client side tracking of objects and a sort of cache when using a MergeOption mode of AppendOnly (the default setting). With MergeOption=AppendOnly, once an entity object has been added to the context it will not be replaced by a new instance on subsequent LINQ queries. Instead, the existing object is re-used so that any changes made on the client remain. This means that even if a new attribute is requested and the CachedOrganizationService executes the query accordingly, it will look as though there hasn't been a new query, because the OrganizationServiceContext still returns the object that it is currently tracking.

// Query 1
Xrm.Account acc = (from a in context.CreateQuery<Account>()
                   where a.Name == "Big Account"
                   select new Account
                   {
                       AccountId = a.AccountId,
                       Name = a.Name,
                   }).Take(1).FirstOrDefault();

Assert.IsNull(acc.AccountNumber); // We didn’t request the AccountNumber

// Query 2
// Because there is an additional attribute value requested, this will query the server
Xrm.Account acc2 = (from a in context.CreateQuery<Account>()
                    where a.Name == "Big Account"
                    select new Account
                    {
                        AccountId = a.AccountId,
                        Name = a.Name,
                        AccountNumber = a.AccountNumber
                    }).Take(1).FirstOrDefault();

Assert.AreSame(acc, acc2); // MergeOption=AppendOnly preserves existing objects and so the first tracked object is returned
Assert.IsNull(acc.AccountNumber); // AccountNumber will be null even though it was returned from the server

This can lead to the conclusion that the CachedOrganizationService isn't detecting that we want to return a new attribute that wasn't included in the first query, but it actually isn't anything to do with caching, since the OrganizationServiceContext will behave like this even if there were no CachedOrganizationService in use. If you were to use a MergeOption of NoTracking, PreserveChanges or OverwriteChanges you wouldn't see the above behaviour, because the 2nd query would always return a new instance of the account with the AccountNumber attribute value loaded. Next in this series I'll show you how to configure the CachedOrganizationService and OrganizationServiceContext using the web/app.config. @ScottDurow