Posted on 15. April 2014

Fiddler2: The tool that gives you Superpowers – Part 3

This post is the third post in the series 'Fiddler – the tool that gives you superpowers!'

Faster than a Speeding Bullet

If you have done any development of Web Resources with Dynamics CRM then I'm certain that you'll have become impatient whilst waiting to first deploy your solution and then publish it before you can test any changes. Every time you need to make a change you need to go round this loop, which can slow down the development process considerably. Using the Auto Responders I described in my previous post (Invisibility) you can drastically speed up this development process: Fiddler ensures that changes you make to a local file in Visual Studio are reflected inside Dynamics CRM without waiting to deploy and publish. You make the changes inside Visual Studio, simply save and refresh your browser, and voilà!

Here are some rough calculations on the time it could save you on a small project (each debug iteration of each web resource costs one deploy and one publish):

Time to Deploy: 15 seconds
Time to Publish: 15 seconds
Debug iterations: 20
Number of web resources: 30
Development Savings: 5 hours

Time to reproduce live data in test/development: 1 hour
Number of issues to debug in live: 10
Testing Savings: 10 hours

Total Savings for small project: 15 hours

What is perhaps more important about this technique is that it saves the frustration caused by constantly waiting for web resource deployment and ensures that you stay in the development zone rather than being distracted by the latest cute kitten pictures posted on Facebook!

Do remember to deploy and publish your changes once you've finished your development. It seems obvious but it is easily forgotten and you're left wondering why your latest widget works on your machine but not for others!

More information can be found on this at the following locations:

@ScottDurow

Posted on 15. April 2014

Fiddler2: The tool that gives you Superpowers - Part 2

This post is the second post in the series 'Fiddler – the tool that gives you superpowers!'

Invisibility

This time it's the superpower of Invisibility! Wow I hear you say!

Fiddler is a web debugger that sits between you and the server and so is in the unique position of being able to listen for requests for a specific file and, rather than returning the version on the server, return a version from your local disk instead. This is called an 'AutoResponder' and it sounds like a super-hero itself – or perhaps a Transformer (robots in disguise).

If you are supporting a production system then the chances are that at some point your users have found an issue that you can't reproduce in Development/Test environments. Auto Responders can help by allowing us to update any web resource (HTML/JavaScript/Silverlight) locally and then test it against the production server without actually deploying it. The Auto Responder sees the request from the browser for the specific web resource and, rather than returning the currently deployed version, it gives the browser your local updated version so you can test it works before other users are affected.

Here are the steps to add an auto responder:

1) Install Fiddler (if you've not already!)

2) Switch to the 'Auto Responders' tab and check the two checkboxes 'Enable automatic responses' and 'Unmatched requests pass-through'

3) To ensure that the browser requests the web resource rather than using a cached version, you'll need to clear the browser cache using the convenient 'Clear Cache' button on the toolbar.

4) You can ensure that no versions get subsequently cached by selecting Rules-> Performance-> Disable Caching.

5) You can now use 'Add Rule' to add an auto responder rule. Enter a regular expression to match the web resource name

regex:(?insx).+/<Web Resource Name>([?a-z0-9-=&]+\.)*

then enter the file location of the corresponding web resource in your Visual Studio Developer Toolkit project.
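
For example (the web resource name and local file path here are hypothetical), a rule mapping a script web resource named dev1_/js/somescript.js to your local copy might look like:

Rule:   regex:(?insx).+/dev1_/js/somescript.js([?a-z0-9-=&]+\.)*
Action: C:\Projects\MyCrmProject\WebResources\dev1_\js\somescript.js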

You are now good to go: when you refresh your browser, your web resource will be loaded directly from your Visual Studio project. No need to publish a file to the server and affect other users.

There is one caveat to this – if the script that you are debugging updates data then this approach is probably not a good idea until you have fully tested the script in a non-production environment. Only once it has been QAed and is ready to deploy can it be used against the production environment to check that the specific user's issue is fixed before you commit to deploying it to all users.

Read the next post on how to be faster than a speeding bullet!

@ScottDurow

 

Posted on 15. April 2014

Fiddler2: The tool that gives you Superpowers – Part 4

This post is the fourth and final post in the series 'Fiddler – the tool that gives you superpowers!'

Ice Man

Perhaps Ice Man is the most tenuous superpower claim, but it relates to a very important topic – HTTP caching. Having a good caching strategy is key to good client performance and to not overloading your network with unnecessary traffic. Luckily Dynamics CRM gives us an excellent caching mechanism – but there are situations where it can be accidentally and unknowingly bypassed:

  1. Not using relative links in HTML webresources
  2. Loading scripts/images dynamically without using the cache key directory prefix (see the sketch after this list)
  3. Not using the $webresource: prefix in ribbon/sitemap XML.
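
As a rough illustration of point 2, here is a minimal sketch – the file names are made up, and it assumes the script runs inside an HTML web resource that was itself loaded via the cache-keyed path:

// Bad: an absolute path skips the cache key directory, so the image is requested
// without the long cache expiry every time it is needed.
var icon = new Image();
icon.src = "/WebResources/dev1_/img/icon.png";

// Better: a relative path resolves against the page's own cache-keyed URL
// (/Org/{token}/WebResources/...), so the browser can cache the image until a
// new version is published.
var cachedIcon = new Image();
cachedIcon.src = "../img/icon.png";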

Luckily we can use Fiddler to keep our servers running ice cold by checking for files that are not being cached when they should be. There are two types of caching that you need to look for:

Absolute expiration

These web resources will not show in Fiddler at all because the browser has located a cached version of the file with an absolute cache expiration date and so it doesn't need to request anything from the server. By default CRM provides an expiration date of 1 year from the date requested, but if the web resource is updated on the server then the Url changes and so a new version is requested. This is why you see a request similar to /OrgName/%7B635315305140000046%7D/WebResources/somefile.js. Upon inspection of the response you will see an HTTP header similar to:

HTTP/1.1 200 OK
Cache-Control: public
Content-Type: text/jscript
Expires: Tue, 14 Apr 2015 21:18:35 GMT

Once the web resource is downloaded on the client it is not requested again until 14 April 2015 unless a new version is published, in which case CRM will request the file using a new cache key (the number between the organization name and the WebResources directory). You can read more about this mechanism on my post about web resource caching.

ETAG Cached files

These resources are usually images and static JavaScript files that are assigned an ETAG value by the server. When the resource changes on the server it is assigned a different ETAG value. When the browser requests the file it sends the previous ETAG value; if the file hasn't been modified then the server responds with a 304 response, meaning that the browser can use the locally cached file.
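
As an illustration (the file path and ETAG value below are made up), the exchange looks something like this:

GET /OrgName/_imgs/somestaticfile.png HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"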

Files that use ETAG caching will show in grey in Fiddler with a response code of 304:

During your web resource testing it is a good idea to crack open Fiddler and perform your unit tests – you should look for any non-304 requests for files that don't need to be downloaded every time they are needed.

Another way to ensure that your servers are running cool as ice is to look at the request execution length. Occasionally code can be written that accidentally returns far more data than required - perhaps all attributes are included or a where clause is missing. These issues don't always present themselves when working on a development system that responds to queries very quickly, but as soon as you deploy to a production system with many users and large datasets, you start to see slow performance.

There are a number of ways you can test for this using Fiddler:

Visualise Request Times

The order in which your scripts, SOAP and REST requests are executed can greatly affect the performance experienced by the user, so you can use Fiddler's Timeline visualiser to see which requests are running in series and which are running in parallel. It also shows you how long the requests take to download so that you can identify the longest running requests and focus your optimisation efforts on those first.

    Simulate Slow Networks

If you know that your users will be using a slow network to access CRM, or you would just like to see how the application responds when requests start to take longer because of larger datasets, you can tell Fiddler to add an artificial delay into the responses. To do this you can use the built-in Rules->Performance->Simulate Modem Speeds, but this usually results in an unrealistically slow response time. If you are using Auto Responders you can right-click on the rule and use 'Set Latency' – but this won't work for Organization Service/REST calls. The best way I've found is to use FiddlerScript:

    1) Select the 'Fiddler Script' Tab

    2) Select 'OnBeforeRequest' in the 'Go to' drop down

    3) Add the following line to the OnBeforeRequest event handler.
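
A minimal sketch of that line, using Fiddler's response-trickle-delay session flag (the value of 50 can be tuned):

oSession["response-trickle-delay"] = "50"; // delay each kB of the response by 50 ms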

This will add a 50 millisecond delay for every kB requested from the server which, assuming no server processing time, would result in ~160 kbps downloads.

    If you've not used Fiddler for your Dynamics CRM Development yet I hope these posts are enough to convince you that you should give it a try – I promise you'll never look back!

    @ScottDurow

    Posted on 15. April 2014

    Fiddler2: The tool that gives you Superpowers – Part 1

The next few posts are for those who saw me speaking at the most recent CRMUG UK Chapter meeting about Fiddler2 and wanted to know more (and as a thank you to those who voted for me in X(rm) factor!). I've been using Fiddler for web debugging for as long as I can remember and I can honestly say that I could not live without it when developing Dynamics CRM extensions, as well as when supporting and diagnosing issues with existing solutions. I first blogged about it in connection with SparkleXRM development but this post elaborates further on the superpowers it gives you!

    What is a Web Debugger?

Fiddler2 is a web debugger, which basically means that it sits between your browser and the server just like any normal proxy, but the difference is that it shows you all the HTTP traffic going back and forwards, allows you to visualise it in an easy-to-read format and lets you 'fiddle' with it – hence the name.

    You can easily install fiddler for free by downloading it from http://www.telerik.com/fiddler.

    The following posts describe the superpowers that Fiddler can give you whilst you are developing solutions or supporting your end users.

    X-Ray Vision

When you perform any actions in your browser whilst Fiddler is running, each and every request/response is logged for your viewing pleasure. This log is incredibly useful when you need to see what requests your JavaScript or Silverlight is sending to the server. It shows you the error details even when the user interface may simply report that an 'Error has occurred' without any details. The prize for the most unhelpful error goes to Silverlight with its 'Not Found' message – the actual error can only be discovered with a tool like Fiddler2 by examining the response from the server to see the true exception that is hidden by Silverlight. The HTTP error code is your starting point and Fiddler makes it easy to see these at a glance by its colour coding of request status codes - the most important of which are HTTP 500 requests that are coloured red. For any solution you are developing, the bare minimum you should look for is any 404 or 500 responses.

If you want to diagnose a problem that a user is having with CRM that you cannot reproduce, then try following these steps:

    1. Ask the user experiencing the issue to install Fiddler2 (this may require administrator privileges if their workstation is locked down).
    2. Get to the point where they can reproduce the problem – just before they click the button or run the query, or whatever!
    3. Start Fiddler
    4. Ask the user to reproduce the issue
    5. Ask the user to click File->Save->All Sessions and send you the file.
    6. Once you've got the file you can load it into your own copy of Fiddler to diagnose the issue.

If the user has IE9 or above and they are not using the Outlook client then the really neat thing about the latest version of Fiddler is that it can import the F12 network trace. This allows you to capture a trace without installing anything on the client and then inspect it using Fiddler's user interface. To capture the network traffic using IE:

    1. Get to the point where they are about to reproduce the issue
    2. Press F12
    3. Press Ctrl-4
    4. Press F5 (to start the trace)
    5. Reproduce the issue
    6. Switch back to the F12 debugger window by selecting it
    7. Press Shift-F5 to stop the trace
8. Click the 'Export Captured Traffic' button and have the user send you the file

    Now you can load this file into fiddler using File->Import Sessions->IE's F12 NetXML format file.

Once you have found the requests that you are interested in you can then use the inspectors to review the contents – the request is shown on the top and the response on the bottom half of the right panel. Both the request and response inspectors give you a number of tabs to visualise the content in different ways depending on the content type. If you are looking at JavaScript, HTML or XML your best bet is the SyntaxView tab, which even has a 'Format Xml' and 'Format Script/JSON' option on the context menu. This is great for looking at SOAP requests and responses that are sent from JavaScript to make sure they are correctly formatted.

The following screen shows a SOAP request from JavaScript and the inspectors in SyntaxView with 'Format Xml' selected.

This technique is going to save you lots of time when trying to work out over the phone what is going on with your users!

    Next up is Invisibility!

    @ScottDurow

     

    Posted on 6. March 2014

    ‘Ghost’ Web Resources & Client Metadata Caching

    When I blogged about my CRM 2013 Start Menu solution I said I would also post about how it caches metadata on the client – so here is that post!

Metadata is the information that describes other data and in the case of CRM 2013 metadata describes entities, relationships and attributes as well as other configuration elements such as the sitemap. The Start Menu solution needed to read the sitemap in order to dynamically display it in the drop down command bar button. Initially the solution read the site map from the server every time the menu was displayed. This wasn't too bad but the solution also then had to make additional requests for entity metadata to retrieve the localised display name, image and object type code. What's more, the solution then had to retrieve the user's privileges and iterate over the sitemap to decide if the user had access or not. This is a common scenario with any webresource development for Dynamics CRM – it could be option set labels or view layoutxml - all needed by the client every time the page is displayed. Since this metadata doesn't change very often it makes it a very good candidate for caching.

    Caching isn't the problem

Whenever designing a cache solution – the first thing to think about is how to invalidate that cache. It's no good being able to store something for quick access if you can't tell whether it is stale and needs to be refreshed – this could lead to problems much worse than poor performance!

Dynamics CRM already neatly provides us with the client-side caching mechanism it uses for Web Resources. I blogged about this back in the CRM 2011 days – but it really hasn't changed with CRM 2013. The general principle is that if you request a Web Resource in the following format then you will get HTTP headers from the server that mean the browser/proxy server can cache the file.

    http://server/%7B000000000000000000%7D/WebResources/SomeFile.js

    The net result is that the next time that the browser requests this file, provided it has the same number before the WebResources folder then the file will not be requested from Dynamics CRM but served from the cache.

    Every time you publish a web resource the number that is used to request web resources is changed so that the client browser gets a new copy and then caches that until the next change of number.

    So we have a very effective caching mechanism for static content – but how do we make use of this for dynamic content? What we need is a way of storing the sitemap and all the other metadata needed into a web resource so that it will be cached – but we don't want to have to update a webresource with this information – what we need is something similar to ASP.Net where we can dynamically generate the web resource when requested and then cache the result on the client.

    Dynamic 'Ghost' Web Resources

The magic happens in a Plugin that is registered on Retrieve Multiple of webresource. When the Plugin detects that the request is for the SiteMap web resource, all this metadata is retrieved server side, converted into JSON and then added dynamically to the output collection as a webresource record. The user's LCID is also read from the web resource name and used to pick the correct localised labels. The interesting thing here is that the web resources being requested by the JavaScript don't actually exist – they are 'ghost' web resources. If the plugin wasn't there, then the platform would return the usual 404 File Not Found, but as long as the Plugin intercepts the query and provides an output then the platform doesn't mind that we are requesting something that doesn't exist. This technique provides a host of opportunities for adding additional request information to the name of the web resource that can then be used to determine the contents of the web resource. This allows the web resource contents to vary depending on the request:

    • Varying content by language/LCID
    • Varying content by record id
    • Varying content by user
    • Varying content by other parameters such as date
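
As a rough sketch of the client side of this technique (the resource name and JSON payload below are made up for illustration; the real solution would also request the resource through the cache-keyed path described above so that the generated JSON is cached):

// 'dev1_/sitemap_<LCID>.js' does not exist as a web resource record – the plugin
// registered on Retrieve Multiple of webresource intercepts the query, reads the
// LCID from the name and returns the generated JSON instead of a 404.
var lcid = Xrm.Page.context.getUserLcid();
var url = Xrm.Page.context.getClientUrl() + "/WebResources/dev1_/sitemap_" + lcid + ".js";
var req = new XMLHttpRequest();
req.open("GET", url, false);
req.send();
var sitemap = JSON.parse(req.responseText);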

     

    Check out the code for this web resource plugin to see how it's done.

    Invalidating the cache

Since we can now cache our server-side generated JSON on the client, we need to know how to clear the cache when something changes. In the case of the 'Start Menu' solution that something is the sitemap.xml or entity names. The cache key number that is used by the client will change whenever a web resource is added or updated, so to clear the client cache we simply need to update a dummy web resource. Any solution publish that contains a sitemap or entity name change should therefore always include a web resource update so that the client will reflect the updates.

    Be careful with caching of sensitive data

    Caching of metadata is the most common use of this technique, but it could also be used for caching commonly used reference data such as products or countries. This cache can be invalidated easily by making a simple request for the most recent modified on date – but make sure you don't cache any sensitive data since this would be accessible by anyone with access to the client machine.

    Making things easier

A future update of SparkleXRM will contain a client side metadata caching framework that I'm working on that uses the technique I describe here, but in the meantime I hope this helps you get better performance from your client side code.

    @ScottDurow

    Posted on 28. February 2014

    ‘Start Menu’ style navigation for CRM2013

    When Windows 8 didn't have a 'Start Menu' there was so much fuss that we saw it return with Windows 8.1 (sort of). If you miss the navigation style of CRM 2011 you might find my CRM 2013 Start Menu solution very helpful.

The solution provides a 'Start menu' on most screens, giving a drop down menu with a security-trimmed sitemap and a link to Advanced Find from wherever you are (very useful!):

     

    It also provides form navigation when you are on a record form – this is similar to the way the navigation would have looked in CRM2011:

    The solution is a SparkleXRM sample if you are interested – I'm going to do a post soon on the techniques I've used to provide client side metadata caching.

    Installation

    1. First you'll need to install SparkleXRM 0.1.4 or later
    2. Then you can install the Start Menu Managed Solution (be sure to leave the activate checkbox checked upon import of the solution)

    Localisation

    Any language resources that are accessible via the SDK will be automatically used but the resources in the sitemap are not all accessible to code and so if you want to provide translations you just add a new web resource with the name - /dev1_/js/QuickNavigationResources_LCID.js. You can use the QuickNavigationResources_1033.js one as a template to translate. If you do create a translation, please let me know so we can add it to the solution.

    Known Issues

    1. There is a short delay on first use as the site map is cached
    2. When a user doesn't have access to some elements of the sitemap, the links are removed, but if all subareas are removed, the parent group isn't.
Thanks to Jukka Niiranen, Damian Sinay & Mitch Milam for testing and feedback.

Posted on 28. February 2014

    Ribbon Workbench updated - getServerUrl is removed in CRM2013 UR2

    With CRM2013 UR2 being released very soon I have made an update to the Ribbon Workbench that you'll be prompted to install by the auto update when you next open the Ribbon Workbench. I strongly advise you to install this update before you install UR2 otherwise the Ribbon Workbench will no longer work, and you'll need to re-download and re-install.

    This is because when I updated the Ribbon Workbench for CRM2013 I retained the use of getServerUrl – the update now uses getClientUrl because UR2 has removed getServerUrl altogether.

    getServerUrl was deprecated back in CRM2011 with UR12 so it's probably about time it was removed anyways!

    Posted on 28. February 2014

    Real Time Workflow or Plugin?

    I have been asked "should I use Real Time Workflows instead of Plugins?" many times since CRM2013 first introduced this valuable new feature. Real Time Workflows (RTWFs) certainly have many attractive benefits over Plugins including:

• Uses the same interface as standard workflows, making it very quick & simple to perform straightforward tasks.
• Can be written & extended without coding skills.
• Easily extended using custom workflow activities that can be re-used in many places.

If we can use custom workflow activities to extend the native workflow designer functionality (e.g. making updates to records over 1:N and N:N relationships) then it raises the question: why should we use Plugins at all?

I am a big fan of RTWFs – they add considerable power to the framework that ultimately makes the solution more effective and lowers the implementation costs. That said, I believe that considering plugins is still an important part of any Dynamics CRM solution design - the reasons can be split into the following areas:

    Performance

Performance is one of those areas where it is very tempting to get bogged down too early in 'premature optimisation'. In most cases performance should be considered initially by adhering to best practices (e.g. not querying or updating more data than needed) and ensuring that you use supported & documented techniques. If we ensure that our system is structured in an easy to follow and logical way then Dynamics CRM will usually look after performance for us and enable us to scale up and out when it is needed (Dynamics CRM Online will even handle this scaling for you!). It is true there are some 'gotchas' that are the exception to this rule, but on the whole I think it is better to design for maintainability and simplicity over performance in the first instance. Once you have a working design you can then identify the areas that are going to be the main bottlenecks and focus optimisation efforts on those areas alone. If we try to optimise all parts of the system from the start when there is no need, I find that it can actually end up reducing overall performance and producing an overly complex system that has a higher total cost of ownership.

It is true that Plugins will out-perform RTWFs in terms of throughput, but if the frequency of the transactions is going to be low then this usually will not be an issue. If you have logic that is going to be fired at a high frequency by many concurrent users then this is the time to consider selecting a plugin over a RTWF.

    In some simple tests I found that RTWFs took twice (x2) as long as a Plugin when performing a simple update. The main reason for this is that the plugin pipeline is a very efficient mechanism for inserting logic inside of the platform transaction. A RTWF inserts an additional transaction inside that parent transaction that contains many more database queries required to set up the workflow context. The component based nature of workflow activity steps means the same data must be read multiple times to make each step work in isolation from the others. Additionally, if you update a record in a RTWF, it will apply an update immediately to the database within this inner transaction. This database update is in addition to the overall plugin update. Using a plugin there will only be a single update since the plugin is updating the 'in transit' pipeline target rather than the database.

    Another reason that the RTWF takes longer to complete the transaction is that it appears to always retrieve the entire record from the database even when using only a single value.

    Pre/Post Images

When using plugins you have very fine control over determining what data is being updated and can see what the record looked like before the transaction and what it would look like after the transaction. RTWFs don't offer you this same control, so if you need to determine whether a value has changed from one specific value to another (say a specific state transition) it is harder to determine what the value was before the workflow started. When a RTWF reads a record from the database, it will load all values, but with a plugin you can select only a small number of attributes to include in the pipeline or query.

    Impersonation

    RTWFs allow you to select whether to run as the calling user or the owner of the workflow, where a Plugin gives you full control to execute an SDK call as the system user or an impersonated user at different points during the transaction.

    Code vs Configuration

    With RTWFs your business logic tends to become fragmented over a much higher surface area of multiple child RTWFs. This makes unit testing and configuration control much harder. With a plugin you can write unit tests and check the code into TFS. With every change you can quickly see the differences from the previous version and easily see the order that the code is executed.

    I find that it is good practice to have a single RTWF per triggering event/entity combination and call child workflows rather than have many RTWFs kicked off by the same trigger, otherwise there is no way of determining the order that they will execute in.

    A very compelling reason to use plugins is the deterministic nature of their execution. You can control the order that your code executes in by simply sequencing the logic in a single plugin and then unit test this sequence of events by using a mock pipeline completely outside of the Dynamics CRM Runtime.

    So just tell me which is best?!

Which is best? Each has its strengths and weaknesses so the answer is "it depends" (as it so often is!).

After all this is said and done – a very compelling reason to use RTWFs is the fact that business users can author and maintain them in a fraction of the time it takes to code and deploy a plugin, and they often result in far fewer bugs due to the component-based building blocks.

    As a rule of thumb, I use two guiding principles:

    1. If there is already custom plugin code on an entity (or it is planned), then use a plugin over a RTWF to keep the landscape smaller and deterministic.
    2. If you know up-front that you need a very high throughput for a particular function with a high degree of concurrency, then consider a plugin over a RTWF for this specific area.

    @ScottDurow

    Posted on 14. February 2014

    Subliminal Moshlings…

Those of you who've attended an event that I've been speaking at will know that I quite often talk about my kids…My daughter is absolutely mad about Moshi Monsters. Right about the time I was dreaming up SparkleXRM, my daughter was just beginning her obsession and insisting that I spend quality time with her debating the relative merits of each character. It seems I may have been subliminally influenced by one particular Moshling…

    Meet Roxy the Moshling

    Any similarities are not intentional and are a mere coincidence. Coincidence or Subliminal? I'll leave you to decide ;)

    @Scottdurow

    Posted on 14. February 2014

    Multi-Entity Search: Paging verses Continuous Scrolling with SparkleXRM

If there is one thing that's for sure it's that user interfaces are forever changing. I fondly remember grey pages with rainbow <HR>'s:

Continuous scrolling data sets are a common mechanism seen on news feed sites such as Twitter and Facebook. As you scroll down the page and reach the bottom, more results are loaded dynamically as needed. Dynamics CRM has traditionally shown paged datasets and although this is mostly still true we are seeing some areas shift to a more continuous style, such as the Tablet App and Social Feeds (although the news feed does require you to select 'More').

With this in mind I decided to implement a continuous scrolling Data View for SparkleXRM. The original Multi Entity Search sample used the standard EntityDataViewModel but I've now added VirtualPagedEntityDataViewModel to the sample project which shows how this virtual paging can be accomplished. Under the covers it is still paging using the fetchxml paging cookie, but as the user scrolls, additional pages are retrieved in order to show them. Once the pages are loaded, they are cached in the same way that the EntityDataViewModel does. I have also added support for showing the entity image next to the record so the end result is very similar to the Tablet app search results. In fact the entities that are shown are read from the same Tablet search configuration.

    You can access the new search via an updated 'Advanced Find' button that shows a pull down on the dashboard home page:

Both the paged and continuous scrolling multi entity searches are in the solution but only the continuous scrolling version is added to the ribbon button.

    To install the solution you'll need to:

    1. Install the latest build of SparkleXRM managed solution - SparkleXrm_0_1_2_managed.zip
    2. Install the sample managed solution - MultiEntitySearch_2_0_managed.zip

I'll be rolling a version of the VirtualPagedEntityDataViewModel into the core SparkleXRM soon – so let me know if you have any particular uses for it in mind.

    Have fun!

    @ScottDurow