Fiddler2: The tool that gives you Superpowers - Part 2

This post is the second post in the series 'Fiddler – the tool that gives you superpowers!'

Part 1: X-Ray vision
Part 2: Invisibility
Part 3: Faster than a speeding bullet!
Part 4: Ice Man

Invisibility

This time it's the superpower of Invisibility! Wow, I hear you say! Fiddler is a web debugger that sits between you and the server, and so is in the unique position of being able to listen for requests for a specific file and, rather than returning the version on the server, return a version from your local disk instead. This is called an 'AutoResponder', and it sounds like a superhero itself – or perhaps a transformer (robots in disguise).

If you are supporting a production system then the chances are that at some point your users have found an issue that you can't reproduce in Development/Test environments. AutoResponders can help by allowing us to update any web resource (HTML/JavaScript/Silverlight) locally and then test it against the production server without actually deploying it. The AutoResponder sees the request from the browser for the specific web resource and, rather than returning the currently deployed version, gives the browser your local updated version so you can test that it works before other users are affected.

Here are the steps to add an AutoResponder:

1) Install Fiddler (if you've not already!)
2) Switch to the 'Auto Responders' tab and check the two checkboxes 'Enable automatic responses' and 'Unmatched requests pass-through'

3) To ensure that the browser requests the version of the web resource from the server rather than a cached copy, clear the browser cache using the convenient 'Clear Cache' button on the toolbar.
4) You can ensure that no versions get subsequently cached by selecting Rules -> Performance -> Disable Caching.
5) You can now use 'Add Rule' to add an AutoResponder rule. Enter a regular expression to match the web resource name:

regex:(?insx).+/<Web Resource Name>([?a-z0-9-=&]+.)*

then enter the file location of the corresponding web resource in your Visual Studio Developer Toolkit project. You are now good to go, so when you refresh your browser the version of your web resource will be loaded directly from your Visual Studio project. No need to publish a file to the server and affect other users.

There is one caveat to this – if the script that you are debugging updates data then this approach is probably not a good idea until you have fully tested the script in a non-production environment. Only once it has been QAed and is ready to deploy can it be used against the production environment, to check that the specific user's issue is fixed before you commit to deploying it to all users.

Read the next post on how to be faster than a speeding bullet! @ScottDurow
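A footnote on the AutoResponder pattern above: you can sanity-check what a rule like it will match using plain JavaScript. This is just a sketch – the resource name is a hypothetical example, and Fiddler's .NET inline flags (?insx) are approximated here with JavaScript's 'i' (case-insensitive) flag:

```javascript
// Check what URLs the AutoResponder-style pattern would match.
// 'new_myscript.js' is a hypothetical web resource name.
var resourceName = "new_myscript.js";
var rule = new RegExp(".+/" + resourceName.replace(/\./g, "\\.") + "([?a-z0-9-=&]+.)*", "i");

console.log(rule.test("https://crm.example.com/%7B1234%7D/WebResources/new_myscript.js")); // true
console.log(rule.test("https://crm.example.com/WebResources/some_other_file.js"));         // false
```

The trailing group is what lets the rule tolerate a query string appended to the resource name.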

Fiddler2: The tool that gives you Superpowers – Part 1

The next few posts are for those who saw me speaking at the most recent CRMUG UK Chapter meeting about Fiddler2 and wanted to know more (and as a thank you to those who voted for me in X(rm) Factor!). I've been using Fiddler for web debugging for as long as I can remember, and I can honestly say that I could not live without it when developing Dynamics CRM extensions, as well as when supporting and diagnosing issues with existing solutions. I first blogged about it in connection with SparkleXRM development, but this post elaborates further on the superpowers it gives you!

What is a Web Debugger?

Fiddler2 is a web debugger, which basically means that it sits between your browser and the server just like any normal proxy. The difference is that it shows you all the HTTP traffic going back and forwards, allows you to visualise it in an easy-to-read format, and lets you 'fiddle' with it – hence the name.

You can easily install Fiddler for free by downloading it from the Fiddler website. The following posts describe the superpowers that Fiddler can give you whilst you are developing solutions or supporting your end users.

Part 1: X-Ray vision
Part 2: Invisibility
Part 3: Faster than a speeding bullet!
Part 4: Ice Man

X-Ray Vision

When you perform any actions in your browser whilst Fiddler is running, each and every request/response is logged for your viewing pleasure. This log is incredibly useful when you need to see what requests your JavaScript or Silverlight is sending to the server. It shows you the error details even when the user interface simply reports that an 'Error has occurred' without any details. The prize for the most unhelpful error goes to Silverlight with its 'Not Found' message – the actual error can only be discovered with a tool like Fiddler2, by examining the response from the server to see the true exception that is hidden by Silverlight. The HTTP error code is your starting point, and Fiddler makes it easy to see these at a glance through its colour coding of request status codes – the most important being HTTP 500 responses, which are coloured red. For any solution you are developing, the bare minimum you should look for is any 404 or 500 responses. If you want to diagnose a problem that a user is having with CRM that you cannot reproduce, try following these steps:

1) Ask the user experiencing the issue to install Fiddler2 (this may require administrator privileges if their workstation is locked down).
2) Get to the point where they can reproduce the problem – just before they click the button, run the query, or whatever!
3) Start Fiddler.
4) Ask the user to reproduce the issue.
5) Ask the user to click File -> Save -> All Sessions and send you the file.

Once you've got the file you can load it into your own copy of Fiddler to diagnose the issue.

If the user has IE9 or above and they are not using the Outlook client, the really neat thing about the latest version of Fiddler is that it can import the F12 network trace. This allows you to capture a trace without installing anything on the client and then inspect it using Fiddler's user interface. To capture the network traffic using IE:

1) Get to the point where they are about to reproduce the issue.
2) Press F12.
3) Press Ctrl-4.
4) Press F5 (to start the trace).
5) Reproduce the issue.
6) Switch back to the F12 debugger window by selecting it.
7) Press Shift-F5 to stop the trace.
8) Click the 'Export Captured Traffic' button and have the user send you the file.

Now you can load this file into Fiddler using File -> Import Sessions -> IE's F12 NetXML format file. Once you've found the requests that you are interested in, you can use the inspectors to review the contents – the request is shown on the top and the response on the bottom half of the right panel. Both the request and response inspectors give you a number of tabs to visualise the content in different ways depending on the content type. If you are looking at JavaScript, HTML or XML, your best bet is the SyntaxView tab, which even has 'Format Xml' and 'Format Script/JSON' options on the context menu. This is great for looking at SOAP requests and responses sent from JavaScript, to make sure they are correctly formatted. The following screen shows a SOAP request from JavaScript and the inspectors in SyntaxView with 'Format Xml' selected.
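For reference, the shape of the SOAP request you would be inspecting here can be sketched as below. This is only an illustration – buildExecuteEnvelope is a hypothetical helper and the request body is elided – although the namespaces and the endpoint noted in the comments are the standard CRM2011 Organization service values:

```javascript
// Hypothetical helper that wraps a request body in the SOAP envelope that
// CRM form JavaScript typically sends to the Organization service.
function buildExecuteEnvelope(requestBodyXml) {
  return "<s:Envelope xmlns:s='http://schemas.xmlsoap.org/soap/envelope/'>" +
           "<s:Body>" +
             "<Execute xmlns='http://schemas.microsoft.com/xrm/2011/Contracts/Services'>" +
               requestBodyXml +
             "</Execute>" +
           "</s:Body>" +
         "</s:Envelope>";
}

// The envelope is POSTed to <orgUrl>/XRMServices/2011/Organization.svc/web with
// the SOAPAction header set to:
// http://schemas.microsoft.com/xrm/2011/Contracts/Services/IOrganizationService/Execute
console.log(buildExecuteEnvelope("<!-- request elided -->"));
```

Paste a captured request into the SyntaxView inspector and the 'Format Xml' option will pretty-print exactly this kind of envelope.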

This technique is going to save you lots of time when you are trying to work out over the phone what is going on for your users! Next up is Invisibility! @ScottDurow

‘Ghost’ Web Resources & Client Metadata Caching

When I blogged about my CRM 2013 Start Menu solution I said I would also post about how it caches metadata on the client – so here is that post!

Metadata is information that describes other data. In the case of CRM 2013, metadata describes entities, relationships and attributes, as well as other configuration elements such as the sitemap. The Start Menu solution needed to read the sitemap in order to dynamically display it in the drop-down command bar button. Initially the solution read the sitemap from the server every time the menu was displayed. This wasn't too bad, but the solution also then had to make additional requests for entity metadata to retrieve the localised display name, image and object type code. What's more, the solution then had to retrieve the user's privileges and iterate over the sitemap to decide whether the user had access or not. This is a common scenario with any web resource development for Dynamics CRM – it could be option set labels or view layoutxml – all needed by the client every time the page is displayed. Since this metadata doesn't change very often, it makes a very good candidate for caching.

Caching isn't the problem

Whenever you design a cache solution, the first thing to think about is how to invalidate that cache. It's no good being able to store something for quick access if you can't tell when it is stale and needs to be refreshed – this could lead to problems much worse than poor performance! Dynamics CRM neatly provides us with the client-side caching mechanism it already uses for web resources. I blogged about this back in the CRM 2011 days – and it really hasn't changed with CRM 2013. The general principle is that if you request a web resource in the following format then you will get HTTP headers from the server that mean the browser/proxy server can cache the file:
http://server/%7B000000000000000000%7D/WebResources/SomeFile.js

The net result is that the next time the browser requests this file, provided it has the same number before the WebResources folder, the file will not be requested from Dynamics CRM but served from the cache. Every time you publish a web resource, the number used to request web resources changes, so the client browser gets a new copy and then caches that until the next change of number.

So we have a very effective caching mechanism for static content – but how do we make use of it for dynamic content? What we need is a way of storing the sitemap and all the other metadata in a web resource so that it will be cached – but without having to update a web resource with this information. We need something similar to ASP.NET, where we can dynamically generate the web resource when it is requested and then cache the result on the client.

Dynamic 'Ghost' Web Resources

The magic happens in a plugin that is registered on RetrieveMultiple of WebResource. When the plugin detects that the request is for the SiteMap web resource, all this metadata is retrieved server side, converted into JSON and then added dynamically to the output collection as a web resource record. The user's LCID is also read from the web resource name and used to pick the correct localised labels. The interesting thing here is that the web resources being requested by the JavaScript don't actually exist – they are 'ghost' web resources. If the plugin wasn't there, the platform would return the usual 404 File Not Found, but as long as the plugin intercepts the query and provides an output, the platform doesn't mind that we are requesting something that doesn't exist. This technique provides a host of opportunities for adding additional request information to the name of the web resource, which can then be used to determine the contents of the web resource.
This allows the web resource contents to vary depending on the request:

- Varying content by language/LCID
- Varying content by record id
- Varying content by user
- Varying content by other parameters such as date
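To make the idea concrete, here is a client-side sketch of how the URL for such a cached 'ghost' resource might be built. All the names here (the dev1_ publisher prefix, the SiteMap_ resource name) are hypothetical illustrations rather than the solution's actual identifiers:

```javascript
// Build the URL for a 'ghost' web resource so the generated JSON is cached.
// cacheKey is the token CRM puts before /WebResources/ – it changes whenever
// any web resource is published, which is what invalidates the client cache.
function getGhostResourceUrl(clientUrl, cacheKey, userLcid) {
  // The LCID is baked into the (non-existent) resource name so the plugin
  // can read it back and return the correctly localised labels.
  return clientUrl + "/" + cacheKey + "/WebResources/dev1_/js/SiteMap_" + userLcid + ".js";
}

console.log(getGhostResourceUrl("https://org.crm.example.com", "%7B605191990%7D", 1033));
```

The browser caches whatever the plugin returns for this URL until the cache key changes.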

Check out the code for this web resource plugin to see how it's done.

Invalidating the cache

Since we can now cache our server-side generated JSON on the client, we need to know how to clear the cache when something changes. In the case of the Start Menu solution, that something is the sitemap.xml or the entity names. The cache key number used by the client changes whenever a web resource is added or updated, so to clear the client cache we simply need to update a dummy web resource. Any solution publish that contains a sitemap or entity name change should always include a web resource update so that the client will reflect the updates.

Be careful with caching of sensitive data

Caching of metadata is the most common use of this technique, but it could also be used for caching commonly used reference data such as products or countries. This cache can be invalidated easily by making a simple request for the most recent modified-on date – but make sure you don't cache any sensitive data, since it would be accessible to anyone with access to the client machine.

Making things easier

A future update of SparkleXRM will contain a client-side metadata caching framework that I'm working on that uses the technique I describe here, but in the meantime I hope this helps you get better performance from your client-side code. @ScottDurow

‘Start Menu’ style navigation for CRM2013

When Windows 8 didn't have a 'Start Menu' there was so much fuss that we saw it return with Windows 8.1 (sort of). If you miss the navigation style of CRM 2011 you might find my CRM 2013 Start Menu solution very helpful. On most screens it provides a 'Start menu' with a drop-down, security-trimmed sitemap and a link to Advanced Find from wherever you are (very useful!):

  It also provides form navigation when you are on a record form – this is similar to the way the navigation would have looked in CRM2011:

The solution is a SparkleXRM sample if you are interested – I'm going to do a post soon on the techniques I've used to provide client side metadata caching.

Installation

1) First you'll need to install SparkleXRM 0.1.4 or later
2) Then you can install the Start Menu Managed Solution (be sure to leave the activate checkbox checked upon import of the solution)

Localisation

Any language resources that are accessible via the SDK will be used automatically, but the resources in the sitemap are not all accessible to code, so if you want to provide translations you just add a new web resource with the name /dev1/js/QuickNavigationResources_<LCID>.js. You can use QuickNavigationResources_1033.js as a template to translate. If you do create a translation, please let me know so we can add it to the solution.

Known Issues

- There is a short delay on first use as the sitemap is cached
- When a user doesn't have access to some elements of the sitemap, the links are removed; but if all subareas are removed, the parent group isn't.

Thanks to Jukka Niiranen, Damian Sinay & Mitch Milam for testing and feedback.

Ribbon Workbench updated - getServerUrl is removed in CRM2013 UR2

With CRM2013 UR2 being released very soon, I have made an update to the Ribbon Workbench that you'll be prompted to install by the auto-update when you next open the Ribbon Workbench. I strongly advise you to install this update before you install UR2, otherwise the Ribbon Workbench will no longer work and you'll need to re-download and re-install it. This is because when I updated the Ribbon Workbench for CRM2013 I retained the use of getServerUrl – the update now uses getClientUrl, because UR2 has removed getServerUrl altogether. getServerUrl was deprecated back in CRM2011 UR12, so it's probably about time it was removed anyway!
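If your own customisations still call the removed method, a defensive sketch like this (getOrganizationUrl is a hypothetical helper name, not part of the SDK) prefers the newer API and only falls back where the old one still exists:

```javascript
// Prefer getClientUrl (available since CRM2011 UR12); fall back to the
// deprecated getServerUrl only on organisations where it still exists.
function getOrganizationUrl(context) {
  if (typeof context.getClientUrl === "function") {
    return context.getClientUrl();
  }
  return context.getServerUrl(); // removed in CRM2013 UR2
}
```

In a form script this would be called as getOrganizationUrl(Xrm.Page.context).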

Real Time Workflow or Plugin?

I have been asked "should I use Real Time Workflows instead of Plugins?" many times since CRM2013 first introduced this valuable new feature. Real Time Workflows (RTWFs) certainly have many attractive benefits over Plugins including:

- Uses the same interface as standard workflows, making it very quick & simple to perform straightforward tasks
- Can be written & extended without coding skills
- Easily extended using custom workflow activities that can be re-used in many places

If we can use custom workflow activities to extend the native workflow designer's functionality (e.g. making updates to records over 1:N and N:N relationships), it raises the question: why should we use plugins at all? I am a big fan of RTWFs – they add considerable power to the framework, which ultimately makes a solution more effective and lowers implementation costs. That said, I still believe that considering plugins is an important part of any Dynamics CRM solution design. The reasons can be split into the following areas:

Performance

Performance is one of those areas where it is very tempting to get bogged down too early in 'premature optimisation'. In most cases performance should initially be addressed by adhering to best practices (e.g. not querying or updating more data than needed) and by ensuring that you use supported & documented techniques. If we ensure that our system is structured in an easy-to-follow and logical way, Dynamics CRM will usually look after performance for us and enable us to scale up and out when needed (Dynamics CRM Online will even handle this scaling for you!). It is true there are some 'gotchas' that are the exception to this rule, but on the whole I think it is better to design for maintainability and simplicity over performance in the first instance. Once you have a working design you can identify the areas that are going to be the main bottlenecks and focus optimisation efforts on those areas alone. If we try to optimise all parts of the system from the start when there is no need, I find that it can actually end up reducing overall performance and leaving an overly complex system with a higher total cost of ownership. It is true that plugins will outperform RTWFs in terms of throughput, but if the frequency of the transactions is going to be low then this usually will not be an issue.
If you have logic that is going to fire at high frequency for many concurrent users, that is the time to consider selecting a plugin over a RTWF. In some simple tests I found that RTWFs took twice (x2) as long as a plugin when performing a simple update. The main reason for this is that the plugin pipeline is a very efficient mechanism for inserting logic inside the platform transaction. A RTWF inserts an additional transaction inside that parent transaction, containing many more database queries required to set up the workflow context. The component-based nature of workflow activity steps means the same data must be read multiple times to make each step work in isolation from the others. Additionally, if you update a record in a RTWF, it will apply an update immediately to the database within this inner transaction. This database update is in addition to the overall plugin update. Using a plugin there will be only a single update, since the plugin updates the 'in transit' pipeline target rather than the database. Another reason that the RTWF takes longer to complete the transaction is that it appears to always retrieve the entire record from the database, even when using only a single value.

Pre/Post Images

When using plugins you have very fine control over determining what data is being updated, and you can see what the record looked like before the transaction and what it will look like after the transaction. RTWFs don't offer you this same control, so if you need to determine whether a value has changed from one specific value to another (say a specific state transition), it is harder to determine what the value was before the workflow started. When a RTWF reads a record from the database, it will load all values, but with a plugin you can select only a small number of attributes to include in the pipeline or query.
Impersonation

RTWFs allow you to select whether to run as the calling user or the owner of the workflow, whereas a plugin gives you full control to execute an SDK call as the system user or an impersonated user at different points during the transaction.

Code vs Configuration

With RTWFs your business logic tends to become fragmented over a much larger surface area of multiple child RTWFs. This makes unit testing and configuration control much harder. With a plugin you can write unit tests and check the code into TFS. With every change you can quickly see the differences from the previous version and easily see the order in which the code is executed. I find it good practice to have a single RTWF per triggering event/entity combination and call child workflows, rather than have many RTWFs kicked off by the same trigger – otherwise there is no way of determining the order in which they will execute. A very compelling reason to use plugins is the deterministic nature of their execution. You can control the order in which your code executes simply by sequencing the logic in a single plugin, and then unit test this sequence of events using a mock pipeline completely outside the Dynamics CRM runtime.

So just tell me which is best?!

Each has its strengths and weaknesses, so the answer is "it depends" (as it so often is!). After all this is said and done, a very compelling reason to use RTWFs is the fact that business users can author and maintain them in a fraction of the time it takes to code and deploy a plugin, and they often result in far fewer bugs due to the component-based building blocks. As a rule of thumb, I use two guiding principles:

- If there is already custom plugin code on an entity (or it is planned), then use a plugin over a RTWF to keep the landscape smaller and deterministic.
- If you know up-front that you need very high throughput for a particular function with a high degree of concurrency, then consider a plugin over a RTWF for this specific area.


Multi-Entity Search: Paging versus Continuous Scrolling with SparkleXRM

If there is one thing that's for sure, it's that user interfaces are forever changing. I fondly remember grey pages with rainbow <HR>'s:

Continuous scrolling data sets are a common mechanism seen in news feed sites such as Twitter and Facebook. As you scroll down the page and reach the bottom, more results are loaded dynamically as needed. Dynamics CRM has traditionally shown paged datasets, and although this is mostly still true we are seeing some areas shift to a more continuous style, such as the Tablet App and Social Feeds (although the news feed does require you to select 'More').

With this in mind I decided to implement a continuous scrolling Data View for SparkleXRM. The original Multi Entity Search sample used the standard EntityDataViewModel, but I've now added VirtualPagedEntityDataViewModel to the sample project, which shows how this virtual paging can be accomplished. Under the covers it is still paging using the FetchXML paging cookie, but as the user scrolls, additional pages are retrieved in order to show them. Once the pages are loaded, they are cached in the same way as in the EntityDataViewModel. I have also added support for showing the entity image next to the record, so the end result is very similar to the Tablet App search results. In fact, the entities that are shown are read from the same Tablet search configuration.
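The trigger for fetching the next page can be reduced to a tiny pure function. This is only a sketch of the general continuous-scrolling idea – shouldLoadNextPage and its threshold parameter are hypothetical, not names from SparkleXRM:

```javascript
// Decide whether to request the next page: we are near the bottom of the
// scrollable area and no page request is already in flight.
function shouldLoadNextPage(scrollTop, clientHeight, scrollHeight, loading, threshold) {
  return !loading && (scrollTop + clientHeight >= scrollHeight - threshold);
}

console.log(shouldLoadNextPage(800, 200, 1000, false, 200)); // true – at the bottom
console.log(shouldLoadNextPage(0, 200, 1000, false, 200));   // false – still at the top
console.log(shouldLoadNextPage(800, 200, 1000, true, 200));  // false – a fetch is in flight
```

In a grid's scroll handler, a true result would kick off the next FetchXML request carrying the paging cookie returned with the previous page.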

You can access the new search via an updated 'Advanced Find' button that shows a pull down on the dashboard home page:

Both the paged and continuous scrolling multi entity searches are in the solution, but only the continuous scrolling version is added to the ribbon button. To install the solution you'll need to:

1) Install the latest build of the SparkleXRM managed solution
2) Install the sample managed solution

I'll be rolling a version of the VirtualPagedEntityDataViewModel into the core SparkleXRM soon – so let me know if you have any particular uses for it in mind. Have fun! @ScottDurow

Chrome Dynamics CRM Developer Tools

Chrome already provides a fantastic set of developer tools for HTML/JavaScript, but now thanks to Blake Scarlavai at Sonoma Partners we have the Chrome CRM Developer Tools. This fantastic Chrome add-in provides lots of gems to make debugging forms and testing FetchXML really easy:

Form Information
- Displays the current form's back-end information: Entity Name, Entity Id, Entity Type Code, Form Type, Is Dirty
- Ability to show the current form's attributes' schema names
- Ability to refresh the current form
- Ability to enable disabled attributes on the current form (System Administrators only)
- Ability to show hidden attributes on the current form (System Administrators only)

Current User Information
- Domain Name
- User Id
- Business Unit Id

Find
- Ability to open Advanced Find
- Set focus to a field on the current form
- Display a specific User and navigate to the record (by Id)
- Display a specific Privilege (by Id)

Test
- Ability to update attributes from the current form (System Administrators only) – helpful when you need to update values for testing but the fields don't exist on the form

Fetch
- Execute any FetchXML statement and view the results

Check it out in the Chrome Web Store.

Spot the difference!

If there is one way of keeping an application looking fresh, it's by frequently tweaking the user interface. This approach is adopted by Facebook with success and stops their user interface from feeling old hat. With the latest Dynamics CRM Online update you might have spotted some little differences:

Before:






A constantly changing user interface isn't for everyone, but personally I'm really pleased with the new paint job! @ScottDurow

Form File->Properties dialog in CRM 2013

One of the lesser known features of CRM 2011 was the File->Properties dialog that you could view on a record form. It would look something like:

This dialog was very useful for finding out the effective permissions of the current user on a particular record, but in CRM 2013 it is no longer present in the user interface – although it is still there in the background! If you used to use this dialog in CRM 2011, I've created a managed solution (2.39 kb) that provides you with a Properties button on the Command Bar, showing the CRM 2013 version of this dialog. (The usual disclaimer applies.) After installing the solution, you should see a new button in the Command Bar overflow menu:

The dialog looks like this:

The only downside is that clicking OK gives the 'Are you sure you want to navigate away from this page' dialog. Hope this helps. @ScottDurow