Posted on 11. August 2017

Solution Packager and global optionset enum support in spkl Task Runner

I’ve published version 1.0.9 of spkl to NuGet - this adds the following new features:

  1. Global optionset enum generation for early bound classes.
  2. Solution Packager support

Global Optionset enum generation

This was a tricky one because CrmSvcUtil doesn't make it easy to prevent duplicate enums being output when a global optionset is used by more than one entity, but you can now add the following to the early bound section of your spkl.json to generate global optionset enums.

{
  "earlyboundtypes": [
    {
      ...
      "generateOptionsetEnums": true,
      ...
    }
  ]
}

In a future update, I’ll add the ability to filter out the enums to only those used.
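
To give an idea of what this setting produces, here is a hedged sketch of a generated global optionset enum and how it might be used – the optionset name and values below are made up for illustration:

using Microsoft.Xrm.Sdk;

// Hypothetical shape of a generated enum for a global optionset called 'new_accounttype'.
// The actual enum name and values come from your own optionset metadata.
public enum new_accounttype
{
    Customer = 100000000,
    Partner = 100000001
}

public static class OptionSetExample
{
    public static OptionSetValue GetPartnerValue()
    {
        // The generated enum is simply cast to int wherever an OptionSetValue is required.
        return new OptionSetValue((int)new_accounttype.Partner);
    }
}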

Solution Packager Support

The Solution Packager allows you to manage your Dynamics metadata inside a Visual Studio project by extracting the solution into separate xml files. When you need to combine updates from multiple code commits, you can then use the packager to re-pack the files and import them into Dynamics. To configure the solution packager task, you can add the following to your spkl.json:

{
  /*
  The solutions section defines a solution that can be extracted to individual xml files to make
  versioning of Dynamics metadata (entities, attributes etc.) easier
  */
  "solutions": [
    {
      "profile": "default,debug",
      /*
      The unique name of the solution to extract, unpack, pack and import
      */
      "solution_uniquename": "spkltestsolution",
      /*
      The relative folder path to store the extracted solution metadata xml files
      */
      "packagepath": "package",
      /*
      Set to 'true' to increment the minor version number before importing from the xml files
      */
      "increment_on_import": false
    }
  ]
}

There are two .bat files provided that will call:

spkl unpack

This will extract the solution specified in the spkl.json into the packagepath folder as multiple xml files

spkl import

This will re-pack the xml files and import into Dynamics - optionally increasing the version number of the solution to account for the new build.

Posted on 15. May 2017

Continuous Integration using spkl Task Runner

This is the third video in a series showing you how to quickly set up VSTS Continuous Integration with spkl.

Watch in youtube

1. Learn more about the spkl task runner

2. Learn how to deploy plugins with the spkl task runner

3. Learn how to deploy webresources with the spkl task runner

Posted on 15. May 2017

Deploying Webresources using spkl Task Runner

This is the second video in a series showing you how to get up and running with spkl with no fuss!

Watch in youtube

1. Learn more about the spkl task runner

2. Learn how to deploy plugins with the spkl task runner

Posted on 12. May 2017

Deploying Plugins using spkl Task Runner

Following on from my last blog post on the spkl Task Runner, this is the first video in a series showing you how to get up and running with spkl with no fuss!

 

Posted on 3. May 2017

Simple, No fuss, Dynamics 365 Deployment Task Runner

Why?

I've used the Dynamics Developer Toolkit since it was first released by MCS for CRM4! I love the functionality it brings, however the latest version is still in beta, it isn't supported on VS2017 and there isn't a date when it's likely to be either (yes, you can hack it to make it work but that's not the point!).

Rather than using an add-in Visual Studio project type, I've been attracted by the VS Code style simple project approach and so I decided to create a 'no-frills' alternative that uses a simple json config file (and that can be used in VS2017).

What?

  1. Deploy Plugins & Workflow Activities - Uses reflection to read plugin registration information directly from the assembly. This has the advantage that the plugin configuration is in the same file as the code. You can use the 'instrument' task to pull down the plugin configuration from Dynamics and add the metadata to your classes if you already have an existing project.
  2. Deploy Web Resources – deploy webresources from file locations defined in the spkl.json configuration. You can use the 'get-webresources' task to create the spkl.json if you already have webresources deployed.
  3. Generate Early Bound Types – Uses the spkl.json to define the entities to generate each time the task is run to make the process repeatable.
  4. Profile management – An optional profile can be supplied to select a different set of configuration from spkl.json. E.g. debug and release build profiles.

How?

Let's assume you have a project in the following structure:

Solution
    |-Webresources
    |    |-html
    |    |    |-HtmlPage.htm
    |    |-js
    |    |    |-Somefile.js
    |-Plugins
    |    |-MyPlugin.cs
    |-Workflows
    |    |-MyWorkflowActivity.cs

On both the Plugins and Workflows projects, run the following from the NuGet Package Manager Console:

Install-Package spkl

This will add spkl to the packages folder together with CrmPluginRegistrationAttribute.cs, which is used to mark up your classes so that spkl can deploy them. Some simple batch files are also included that you can use to get started.

If you already have plugins deployed, you can run the following command line in the context of the Plugins folder:

spkl instrument

This will prompt you for a Dynamics Connection, and then search for any deployed plugins and their matching .cs file. If the MyPlugin.cs plugin is already deployed it might end up with the following Attribute metadata:

[CrmPluginRegistration("Create","account",
    StageEnum.PreValidation,ExecutionModeEnum.Synchronous,
    "name,address1_line1", "Create Step",1,IsolationModeEnum.Sandbox,
    Description ="Description",
    UnSecureConfiguration = "Some config")]
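
For context, here is a hedged sketch of how that attribute sits on a plugin class – the Execute body is purely illustrative and is not something spkl generates:

using System;
using Microsoft.Xrm.Sdk;

[CrmPluginRegistration("Create","account",
    StageEnum.PreValidation,ExecutionModeEnum.Synchronous,
    "name,address1_line1", "Create Step",1,IsolationModeEnum.Sandbox,
    Description ="Description",
    UnSecureConfiguration = "Some config")]
public class MyPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // spkl only reads the attribute above via reflection when deploying;
        // the plugin logic itself goes here.
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
    }
}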

A spkl.json file will be created in the project directory, similar to:

{
  "plugins": [
    {
      "solution": "Test",
      "assemblypath": "bin\\Debug"
    }
  ]
}

If you now build your plugins, you can then run the following to deploy

spkl plugins

You can run instrument for the workflow project using the same technique which will result in code similar to the following being added to your workflow activity classes:

[CrmPluginRegistration(
        "WorkflowActivity", "FriendlyName","Description",
        "Group Name",IsolationModeEnum.Sandbox)]

…and then run the following to deploy:

spkl workflow			
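
Again for context, here is a minimal sketch of the workflow activity class that the attribute above might decorate – the class body is illustrative only:

using System.Activities;

[CrmPluginRegistration(
        "WorkflowActivity", "FriendlyName","Description",
        "Group Name",IsolationModeEnum.Sandbox)]
public class MyWorkflowActivity : CodeActivity
{
    protected override void Execute(CodeActivityContext context)
    {
        // spkl reads the attribute above to register the activity;
        // the workflow activity logic goes here.
    }
}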

To get any currently deployed webresources matched to your project files you can run the following from the Webresource project folder:

spkl get-webresources /s:new			

    Where new is the solution prefix you've used

This will create a spkl.json similar to the following:

{
  "webresources": [
    {
      "root": "",
      "files": [
        {
          "uniquename": "new_/js/somefile.js",
          "file": "js\\somefile.js",
          "description": ""
        },
        {
          "uniquename": "new_/html/HtmlPage.htm",
          "file": "html\\HtmlPage.htm",
          "description": ""
        }
      ]
    }
  ]
}

You can then deploy using:

spkl webresources

Profiles

For Debug/Release builds you can define multiple profiles that can be triggered using the /p:<profilename> parameter.

{
  "plugins": [
    {
      "profile": "default,debug",
      "assemblypath": "bin\\Debug"
    },
    {
      "profile": "release",
      "solution": "Test",
      "assemblypath": " bin\\Release"
    }
  ]
 
}

The default profile will be used if no /p: parameter is supplied. You can specify a profile using:

spkl plugins /p:release			

Referencing a specific assembly rather than searching the folder

If you have multiple plugins in a single deployment folder and you just want to deploy one, you can explicitly provide the path rather than using the folder search. E.g.

{
  "plugins": [
    {
      "assemblypath": "bin\\Debug\\MyPlugin.dll"
    }
  ]
}

Adding to a solution

If you'd like to automatically add the items deployed to a solution after deployment you can use:

{
  "webresources": [
    {
      "root": "",
      "solution": "Test",
      "files": […]
    }
  ]
}

Combining spkl.json

Perhaps you want to have a single spkl.json rather than multiple ones per project. You can simply add them all together:

{
  "webresources": […],
  "plugins": […]
}

Multiple project deployments

Since the spkl.json configuration files are searched from the current folder, you can deploy multiple plugins/webresources using a single spkl call from a root folder.

I'll be updating the github documentation page as things move forwards.

Posted on 14. March 2017

There is something rather different about Dynamics 365 Business Process Flows!

The new business process flow designer in Dynamics 365 is lovely! However, I'm not going to talk about that since it's rightly had lots of love by others already.

For me the biggest change in Dynamics 365 is the fact that running Business Process Flows (BPFs) are now stored as entity records. Instance details are no longer held as fields on the associated record. I first visited this topic back in the CRM2013 days with the introduction of Business Process Flows, where I described how to programmatically change the process.

Previously, when a BPF was started, all of the state about its position was stored in fields on the record it was run against:

  • Process Id: The ID of the BPF running
  • Stage Id: The ID of the BPF step that was active
  • Traversed Path: A comma separated string listing the GUIDs of current path of steps taken through the BPF. This is to support BPFs with branching logic.

With the new Dynamics 365 BPFs, each activated process automatically has an entity created for it that looks just like any other custom entity. The information about the processes running on any record is now stored as instances of this entity, with an N:1 relationship to the parent record and any subsequent related entities. This BPF entity has attributes similar to those previously stored on the parent entity, but with the following additions (see the query sketch after this list):

  • Active Stage Id: The ID of the BPF step that is active – replaces the Stage Id attribute.
  • Active Stage Started On: The date/time that the current step was started – this allows calculation of how long it has been active.
  • State & Status: Each BPF instance has its own state, which allows it to be finished or abandoned independently of any other BPFs that are running.
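
To illustrate how these instances can be worked with, here is a hedged sketch that queries running BPF instances using the SDK – the entity and attribute names ('opportunitysalesprocess', 'activestageid', 'activestagestartedon') are assumptions based on the out-of-box Opportunity Sales Process entity and will differ for your own BPFs:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class BpfInstanceExample
{
    public static void ListActiveInstances(IOrganizationService service)
    {
        // Assumed entity/attribute names – custom BPFs get their own system-generated entity.
        var query = new QueryExpression("opportunitysalesprocess")
        {
            ColumnSet = new ColumnSet("activestageid", "activestagestartedon", "statecode")
        };
        query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0); // 0 = Active

        foreach (var instance in service.RetrieveMultiple(query).Entities)
        {
            var stage = instance.GetAttributeValue<EntityReference>("activestageid");
            var started = instance.GetAttributeValue<DateTime?>("activestagestartedon");
            Console.WriteLine("Stage '{0}' active since {1}", stage != null ? stage.Name : "(none)", started);
        }
    }
}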

     

In addition to making migration of data with running BPFs a little easier - this approach has the following advantages:

  1. You can control access to BPFs using standard entity role privileges
  2. You can have multiple BPFs running on the same record
  3. You can see how long the current stage has been active for
  4. You can Abandon/Finish a BPF

BPF Privileges

Prior to Dynamics 365, you would have controlled which roles could access your BPF using the Business Process Flow role check list. In Dynamics 365, when you click the 'Enable Security Roles' button on your BPF, you are presented with a list of roles for which you can open up and define access on the 'Business Process Flow' tab:

Multiple BPFs on the same record

Switching BPFs no longer overwrites the previous active step – meaning that you can 'switch' back to a previously started BPF and it will carry on from the same place. This means that BPFs can run in parallel on the same record.

  • If a user does not have access to the running BPF they will see the next running BPF in the list (that they have access to).
  • If the user has no access to any BPF that is active – then no BPF is shown at all.
  • If a user has read-only access to the BPF that is running, they can see it but not change the active step.
  • When a new record is created, the first BPF that the user has create privileges on is automatically started.

When you use the Switch Process dialog, you can now see if the Business Process Flow is already running, who started it and when it was run.

NOTE: Because the roles reference the BPF entities – you must also include the system generated BPF entities in any solution you intend to export and import into another system.

Active Step timer

Now that we have the ability to store additional data on the running BPF instance, we have the time that the current step was started. This also means that when switching between processes, we can see the time spent in each step of parallel running BPFs.

Abandon/Finish

Since each BPF has its own state fields, a business process can be marked as Finished or Abandoned, at which point it becomes greyed out and read-only.

When you 'Abandon' or 'Finish' a BPF it is moved into the 'Archived' section of the 'Switch Process' dialog.

NOTE: You might think that this means you could then run the BPF a second time, but in fact a record can only have a single instance per BPF – you must 'Reactivate' it to use it again.

  • Reactivating an Abandoned BPF will start at the previously active step
  • Reactivating a Finished BPF will start it from the beginning again.

Example

Imagine your business has a sales process that requires an approval by a Sales Manager. At a specific step in that sales process you could run a workflow to start a parallel BPF that only the Sales Managers have access to. When they view the record, making the Approval BPF higher in the ordered list of BPFs will mean that they see the Approval BPF instead of the main Sales Process. They can then advance the steps to 'Approved' and mark it as Finished. This could then in turn start another workflow that updates a field on the Opportunity. Using this technique in combination with Field Level Security gives a rather neat solution for custom approval processes.

When I first saw this change I admit I was rather nervous because it was such a big system change. I've now done a number of upgrades to Dynamics 365 and the issues I found have all been resolved.
I'm really starting to like the new possibilities that Parallel BPFs brings to Dynamics 365.

@ScottDurow

Posted on 11. March 2017

Simplified Connection Management & Thread Safety (Revisited)

There is one certainty in the world and that is that things don't stay the same! In the Dynamics 365 world, this is no exception, with new features and SDK features being released with a pleasing regularity. Writing 'revisited' posts has become somewhat of a regular thing these days.

In my previous post on this subject back in 2013 we looked at how you could use a connection dialog or connection strings to get a service reference from the Microsoft.Xrm.Client library and how it can be used in a thread safe way.

Microsoft.Xrm.Tooling

For a while now there has been a replacement for the Microsoft.Xrm.Client library – the Microsoft.Xrm.Tooling library. It can be installed from NuGet using:

Install-Package Microsoft.CrmSdk.XrmTooling.CoreAssembly

When you use the CrmServerLoginControl, the user interface should look very familiar because it's the same control that is used in all the SDK tools, such as the Plugin Registration Tool.

The sample in the SDK shows how to use this WPF control.

The WPF control works slightly differently from the Xrm.Client ShowDialog() method – it gives you much more flexibility over how the dialog should behave and allows embedding inside your WPF application rather than always having a popup dialog.

Connection Strings

Like the dialog, the Xrm.Tooling also has a new version of the connection string management – the new CrmServiceClient accepts a connection string in the constructor. You can see examples of these connection strings in the SDK.

CrmServiceClient crmSvc = new CrmServiceClient(ConfigurationManager.ConnectionStrings["Xrm"].ConnectionString);

For Dynamics 365 online, the connection would be:

<connectionStrings>
    <add name="Xrm" connectionString="AuthType=Office365;Username=jsmith@contoso.onmicrosoft.com; Password=passcode;Url=https://contoso.crm.dynamics.com" />
</connectionStrings>
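
Once constructed, the client can be used directly against the Organization Service. Here is a minimal usage sketch, assuming the 'Xrm' connection string above:

using System;
using System.Configuration;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Tooling.Connector;

class Program
{
    static void Main()
    {
        var crmSvc = new CrmServiceClient(ConfigurationManager.ConnectionStrings["Xrm"].ConnectionString);

        // IsReady indicates whether connection and authentication succeeded.
        if (crmSvc.IsReady)
        {
            var response = (WhoAmIResponse)crmSvc.Execute(new WhoAmIRequest());
            Console.WriteLine("Connected as user {0}", response.UserId);
        }
        else
        {
            Console.WriteLine(crmSvc.LastCrmError);
        }
    }
}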

Thread Safety

The key to understanding performance and thread safety of calling the Organization Service is the difference between the client proxy and the WCF channel. As described by the 'Improve service channel allocation performance' topic from the best practice entry in the SDK, the channel should be reused because creating it involves time consuming metadata download and user authentication.

The old Microsoft.Xrm.Client was thread safe and would automatically reuse the WCF channel that was already authenticated. The Xrm.Tooling CrmServiceClient is no exception. You can create a new instance of CrmServiceClient and existing service channels will be reused if one is available on that thread. Any calls to the same service channel will be locked to prevent threading issues.

To demonstrate this, I first used the following code that ensures that a single CrmServiceClient is created per thread.

Parallel.For(1, numberOfRequests,
    new ParallelOptions() { MaxDegreeOfParallelism = maxDop },
    () =>
    {
        // This is run for each thread
        var client = new CrmServiceClient(username,
               CrmServiceClient.MakeSecureString(password),
               "EMEA",
               orgname,
               useUniqueInstance: false,
               useSsl: false,
               isOffice365: true);
        
        return client;
    },
    (index, loopState, client) =>
    {
        // Make a large request that takes a bit of time
        QueryExpression accounts = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet(true)
        };
        client.RetrieveMultiple(accounts);
        return client;
    },
    (client) =>
    {
    });

With a Degree of Parallelism of 4 (the number of threads that can be executing in parallel) and a request count of 200, there will be a single CrmServiceClient created for each thread and the fiddler trace looks like this:

Now to prove that the CrmServiceClient handles thread concurrency automatically, I moved the instantiation into the loop so that every request would create a new client:

Parallel.For(1, numberOfRequests,
    new ParallelOptions() { MaxDegreeOfParallelism = maxDop },
    (index) =>
    {
        // This is run for every request
        var client = new CrmServiceClient(username,
               CrmServiceClient.MakeSecureString(password),
               "EMEA",
               orgname,
               useUniqueInstance: false,
               useSsl: false,
               isOffice365: true);
        // Make a large request that takes a bit of time
        QueryExpression accounts = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet(true)
        };
        client.RetrieveMultiple(accounts);
    });

Running this still shows a very similar trace in fiddler:

This proves that the CrmServiceClient is caching the service channel and returning a pre-authenticated version per thread.

In contrast to this, if we set the useUniqueInstance property to true on the CrmServiceClient constructor, we get the following trace in fiddler:

So now each request is re-running the channel authentication for each query – far from optimal!
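
For reference, the only change from the thread-local example above is the useUniqueInstance flag passed to the constructor:

// Same constructor as before, but with useUniqueInstance set to true –
// each client now gets its own service channel and re-authenticates.
var client = new CrmServiceClient(username,
       CrmServiceClient.MakeSecureString(password),
       "EMEA",
       orgname,
       useUniqueInstance: true,
       useSsl: false,
       isOffice365: true);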

The nice thing about the Xrm.Tooling library is that it is used exclusively throughout the SDK – where the old Xrm.Client was a satellite library that came from the legacy ADX portal libraries.

Thanks to my friend and fellow MVP Guido Preite for nudging me to write this post!

@ScottDurow

Posted on 9. December 2016

Dynamics 365 Data Export Service

If you've moved to Dynamics CRM/365 Online then the likelihood is that you've come up against the limitation of not being able to query the SQL database directly to perform more complex reporting or for custom integrations. Many on-premises deployments rely on querying the backend databases, and in the past this has been a blocker to moving to the cloud – or at least it has meant a complex and costly integration to copy the data from Dynamics 365 to an on-premises SQL database.

The introduction of the Data Export Service is a real game changer, with the possibility to replicate your data from Dynamics CRM/365 online to an Azure SQL database in your own Azure subscription. Once you have your data in a SQL database you can then use Power BI, integrate with other systems and create a data warehouse. I've found that the speed of the replication is impressive, being minutes/seconds and not hours.

There are a number of prerequisites to enabling this, which you can read about on TechNet: https://technet.microsoft.com/en-us/library/mt744592.aspx

  • Azure Active Directory linked to Office 365
  • Azure SQL Database and user with correct permissions
  • Azure KeyVault created (using PowerShell script provided)
  • Dynamics CRM Online 8.1 or later
  • Data Export Service solution installed from App Source
  • Change tracking enabled for custom entities you want to sync
  • You must be a System Administrator to create the export profiles

The PowerShell script requires that you install the Azure cmdlets – see https://docs.microsoft.com/en-gb/powershell/azureps-cmdlets-docs/

Here is a video that demonstrates this new service and how to set it up

Posted on 22. August 2016

Debugging JavaScript in the Interactive Service Hub (Part 1)

Those who regularly read my blog and follow my work with Sparkle XRM will know I'm a massive fan of using Fiddler to debug JavaScript. One of the most productive 'superpowers' that Fiddler gives us is the ability to change JavaScript on disk and not have to upload/publish – we can simply refresh the form and the new script will be used.

The Interactive Service Hub (ISH) was first introduced in CRM2016 and has been improved with more support for customisations in CRM2016 Update 1.

I see the purpose of the ISH at this stage not as replacing the main user interface, but rather as a testing ground for the principle of bringing the MoCA mobile/tablet native client platform to the web client. I think of it as similar to the introduction of the Polaris UI back in CRM2011 – there are many similarities in that they both support only a limited set of entities and have limited customisation features. The main difference is that the ISH is being incrementally improved with each release, whereas the Polaris UI was more of a throwaway proof of concept. At this stage the ISH only supports 'case' oriented operations, but I'm sure it'll eventually graduate to support all Sales, Service and Marketing features.

So why the new approach to the UI?

Surely it would be better to improve the existing UI incrementally rather than replace it?

One of the key drivers for the Dynamics CRM Team over the last few releases has been 'configure once deploy everywhere'. This allows us to configure business rules that can be run on all devices/platforms reliably without having to perform separate testing and perhaps re-write to target different clients. The maintenance of having multiple user interface platforms is considerable so it's a natural step to try and achieve some degree of convergence between the mobile/tablet/web/outlook interfaces.

A little background on how the ISH loads metadata

I think we are all fairly comfortable with the normal Web 2.0 paradigm of loading resources. This is where, with each operation, the client requests an html page and then the browser requests all the additional resources (JavaScript, CSS etc.) that are referenced by that page. JavaScript can then make additional XHR/Ajax requests to the server to display further dynamic content. The CRM2016 UI is very similar on this front, as can be seen below. I documented the CRM2013 script loading sequence, which hasn't significantly changed even in CRM2016.

Page Load Sequence Diagram

Each time you open the web client, the homepage.aspx or Main.aspx has to request the metadata for the specific resource (view or form) and then combine it with the requested data. Although there is browser and server-side caching in place, this is still costly in terms of the requests and the rendering overhead in the browser. The 'turbo forms' update in CRM2015 Update 1 has really helped with the speed of this since it minimises the resources requested with each navigation; however, fundamentally it is still limited by the page-per-browser-request architecture.

ISH works very differently…

The ISH is more what we would call a 'single page application'. The sequence is very different in that there is an initial download of metadata and then subsequently all user interactions only request the actual data using the Organization.svc and OrganizationData.svc.

New Page Load Sequence Diagram

This single page approach has the advantage that it makes navigation super slick, but with the rather annoying drawback that there is an initial wait each time the ISH is opened while the metadata changes are checked. The first time you open the ISH all the metadata is downloaded, but from then on only the differences since the last open are downloaded. If there haven't been any changes then it's super quick because all the metadata is stored in the browser's Indexed Database, but if you've done a publish then the next open can take a while. Furthermore, the new metadata won't be downloaded until you close and re-open the ISH – this is different to the Web 2.0 UI and can lead to the client working with stale metadata for a time. The Indexed Database is one of the significant differences between an HTML5 single page app and a more traditional Web 2.0 architecture.

Note: For now the ISH mostly uses the SOAP/Xml based Organization.svc rather than the new JSON based Web API.

The speed of the metadata sync can be helped further by using the 'Prepare Client Customizations' button on the solution, since this will pre-prepare the download package rather than waiting for the first person to open the ISH to detect the changes in the metadata. The difference between the MoCA client and the ISH is that the MoCA asks if the user wants to download the updates – presumably because you may be on a low bandwidth connection.

So where does that leave us with respect to JavaScript debugging?

If you've been keeping up so far (you have, right?) you'll have realised that the metadata (which includes JavaScript) is all stored in the browser's Indexed DB rather than relying on the browser cache. As a result, we can't simply prevent the files from being cached and download the latest version with each page load as we used to do with Fiddler. We're back to the uncomfortable debug cycle of having to make a change to a JavaScript web resource, upload it to CRM, publish, close and re-start the ISH – urgh!

To preserve our collective sanity, I've created a little debug utility solution that you can use to clear the cache of specific web resources, so that you can quickly make changes to JavaScript on your local disk and then reload it in the ISH without doing a full publish cycle. Here is how:

  1. Install the latest build of SparkleXRM
  2. Install the Interactive Service Hub Debug Helper Solution
  3. Set up Fiddler's AutoResponder to point to your local webresource file as per my instructions.
  4. Start the ISH to load your JavaScript
  5. Make a local change to your JavaScript
  6. Open the ISH Debug Utility Solution configuration page, enter the name of your script and then click 'Refresh JavaScript Webresource'
     Note: You can enter only part of the webresource name and it will use a regular expression to match.
  7. Use Ctrl-F5 on your ISH page; when it reloads, the web resource will use the new version since the debug utility has forced a new download and updated the Indexed DB storage.
Sweet – but what about the MoCA client?

Obviously this technique is not going to work for the mobile client running on an iPad, iPhone etc. The good news is that you can run the MoCA client in the Chrome browser in the same way you can run the ISH – just navigate to:

<crmserver>/nga/main.htm?org=<orgname>&server=<crmserver>

Note: You must be pre-authenticated for this to work.

OnPrem

http://dev03/nga/main.htm?org=Contoso&server=http://dev03/Contoso

OnPrem IFD

https://myorg.contoso.com/nga/main.htm?org=myorg&server=https://myorg.contoso.com

Online

https://myorg.crm4.dynamics.com/nga/main.htm?org=myorg&server=https://myorg.crm4.dynamics.com

Since the ISH and the MoCA client are built using the same platform, you can now use the ISH Debug Helper from the same browser session to perform the same script refresh! This is actually an excellent way of testing out your scripts on the MoCA client! For more information, check out the comments in this tip of the day.

Looking forwards to the future

I'd really encourage you to check out the ISH and use the New CRM Suggestions site to record anything you find that you would like to see in subsequent releases. Whilst I suspect that the existing 'refreshed' UI will be available for some releases to come, it is likely at some point to become the new 'legacy' UI, with on-going investment being made in the ISH-style UI instead.

In part 2 we'll look at some limitations of the ISH and how to get around them.

Any comments, just tweet me! @ScottDurow

Posted on 25. April 2016

Ribbon Workbench 2016 Beta

A couple of weeks ago I had the privilege of a most enjoyable hour on CRM Audio chatting with George, Joel and Shawn about the Ribbon Workbench and SparkleXRM. You'll have heard me mention that I'd be posting details on how to get involved with the beta of the new Ribbon Workbench 2016, which is written using HTML and JavaScript rather than Silverlight – so here it is!

I've had a fruitful relationship with Silverlight over the years and it has been the enabler in many successful rich client Dynamics CRM customisations, but things have moved on! In July 2015 the time had come to say goodbye, in part because there was no Silverlight in Windows 10's Edge browser. My main blocker for writing pure HTML and JavaScript web resources in the past had always been a lack of productivity tooling, but that had moved on as well, not least because of SparkleXRM, my framework for building rich user interface Dynamics CRM web resources. The Ribbon Workbench 2016 is written using SparkleXRM (although it comes pre-packaged in the solution) and, if I'm honest, I think one of my original drivers for working so hard on that project was the inevitability of having to re-write the Ribbon Workbench in HTML one day. Without the framework it would have been a bridge too far, but as it happened I was pleasantly surprised by how easily the conversion went and I am really pleased with how it's turned out. Here are some highlights…

Drag and Drop Flyout Editing

Delete Undo

Drag and Drop Command Editing

Can you help with Beta Testing?

You can download the beta version by signing up to beta test today!

Please report issues and bugs via UserVoice! Thank you!