Using Azure Durable Functions to implement SharePoint Reusable Workflows

Background

Durable Functions are an extension of Azure Functions that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. While Microsoft Flow and Azure Logic Apps allow you to create workflows in a visual environment, Durable Functions allow you to create stateful, long-running workflows in C#.

This document, and the accompanying code samples, show how you can use Azure Durable Functions to implement SharePoint workflows. For this sample, we have a document library called Drafts and a document library called Published. An approval workflow, built with Azure Durable Functions, copies a document from the Drafts library to the Published library once it is approved. The Drafts library has a Person or Group column called DocumentOwner and a Person or Group column called Stakeholders. The workflow is set up so that the DocumentOwner must approve the document first. If the DocumentOwner approves, tasks are created for the Stakeholders to approve. The stakeholders have one minute to approve their tasks; if they do not approve in that timeframe, the document is automatically approved. If any stakeholder rejects the document, the workflow stops and the document is not copied to the Published library.

This solution consists of an Azure Durable Function application (https://github.com/russgove/DurableFunctionsDemo) and an SPFX project (https://github.com/russgove/DurableFunctionsDemoSPFX) that has a List View Command Set that lets the user initiate a workflow, and a webpart that lets the user see the status of all workflows and optionally terminate them.

The Durable Function Application

The Durable Function application consists of two classes. The Orchestrator class contains the functions necessary to initiate and run the workflow itself; the TaskNotifications class contains a SharePoint webhook and the functions necessary to send events to the Orchestrator class signaling that a task has been approved or rejected.

The Orchestrator Class

The Orchestrator class in the Durable functions project has the following methods:

ApprovalStart: This function is triggered by an HTTP request from the SPFX List View Command Set. The extension passes in the ID of the item that approval has been requested on and the email address of the person requesting the approval. The function also receives a DurableOrchestrationClient from the Azure runtime, which allows it to send messages to durable functions and initiate orchestrations. This function simply reads the input passed to it and starts the "Publish" orchestration, passing it the item ID and initiator email.

Publish: The Publish function is the core of the workflow in that it orchestrates the activities of all the other functions. It receives a DurableOrchestrationContext parameter from the runtime, which allows it to initiate activity functions and to wait for external events (among other things).

The first thing the Publish function does is call the GetListitemData activity function to get information about the document being approved, passing in the parameters it received from the HTTP trigger (which include the item ID). Note that the Publish function (the orchestrator, in Durable Functions terminology) does not actually call GetListitemData (the activity function, in Durable Functions terminology) directly. Instead it calls the CallActivityAsync method of the DurableOrchestrationContext, which writes an entry to a queue, which in turn triggers the GetListitemData function. The Publish function then stops. When GetListitemData completes and returns the ListItemData, another message is written to a queue, which triggers re-execution of the Publish function with the ListItemData. This is explained more fully in the Durable Functions documentation.
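The queue-driven suspend/replay cycle can be modeled in a few lines. This is a toy TypeScript illustration of the pattern, not the actual Durable Task framework: completed activity results are recorded in a history, and every re-execution of the orchestrator replays recorded results instead of re-running the activities.

```typescript
// Toy model of Durable Functions checkpoint-and-replay (illustration only; the
// real framework uses Azure Storage queues and a history table).
type History = Map<string, unknown>;

class Suspend extends Error {} // thrown to stop the orchestrator at an incomplete activity

function callActivity<T>(history: History, name: string): T {
  if (history.has(name)) return history.get(name) as T; // replay from history
  throw new Suspend(); // activity not done yet: checkpoint and stop
}

// The orchestrator body is re-executed from the top on every message.
function publishOrchestrator(history: History): string {
  const itemData = callActivity<string>(history, "GetListitemData");
  return `got: ${itemData}`;
}

const history: History = new Map();

// First execution: GetListitemData has not completed, so the orchestrator suspends.
let suspended = false;
try {
  publishOrchestrator(history);
} catch (e) {
  suspended = e instanceof Suspend;
}

// The activity completes out-of-band and its result is queued back...
history.set("GetListitemData", "list item data");

// ...triggering re-execution; this time the call replays and the function finishes.
const result = publishOrchestrator(history);
console.log(suspended, result);
```

This is why orchestrator code must be deterministic: it runs many times, and only the activity calls are allowed to have side effects.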

After receiving the ListItemData, the Publish function triggers the ScheduleDocOwnerApproval function. It needs to pass that function the InstanceID of the currently running workflow (so the workflow can be restarted after the DocumentOwner approves or rejects) and the numeric ID of the DocumentOwner (so the task can be assigned to that person). Activity functions can take only a single parameter (because of the way they are invoked via a queue), so we create a class called ScheduleDocOwnerApprovalParms that lets us pass these two values to the function in a single parameter. Note that the value passed to an activity function and the return value from an activity function must be JSON-serializable. You cannot reliably pass or return a SharePoint list item from an activity function, as it is not fully serializable to JSON.
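In TypeScript terms (the sample's class is C#, but the constraint is the same), the wrapper and the JSON round-trip it must survive look like this; the property names here are assumptions based on the description above:

```typescript
// Sketch of the single-parameter wrapper (names assumed from the description).
// Activity inputs and outputs travel through a queue as JSON, so everything
// must survive a JSON round-trip.
interface ScheduleDocOwnerApprovalParms {
  instanceId: string;       // orchestration instance to signal after approval/rejection
  documentOwnerId: number;  // numeric SharePoint user id for the task's AssignedTo
}

const parms: ScheduleDocOwnerApprovalParms = {
  instanceId: "abc123",
  documentOwnerId: 42,
};

// What the runtime effectively does when invoking the activity:
const roundTripped: ScheduleDocOwnerApprovalParms = JSON.parse(JSON.stringify(parms));
console.log(roundTripped.documentOwnerId); // 42

// A raw SharePoint ListItem is not safe to pass this way: client-object-model
// state (contexts, methods, lazily loaded fields) does not survive serialization.
```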

The ScheduleDocOwnerApproval function will create an item in the Tasks list for the DocumentOwner to approve. When the DocumentOwner approves or rejects the task, an event notification will be sent to the approval workflow signaling that action.

The Publish function then waits for either the DocOwnerApproved or the DocOwnerRejected external event. As noted above, these events are signaled when the DocumentOwner approves or rejects the task.

If the DocumentOwner rejects the task, the Publish function receives a DocOwnerRejected event, triggers execution of the DocownerRejected function to send an email, and completes.

If the DocumentOwner approves the task, the Publish function receives a DocOwnerApproved event, which causes it to trigger execution of ScheduleStakeholderApproval for each of the stakeholders and to create an external event wait for that particular stakeholder (the event name is StakeholderApproval: plus the ID of the stakeholder). After all the stakeholder tasks and events have been created, it creates a single task (stakeHolderApprovalTask) that will complete once ALL StakeholderApproval events have been raised. It also creates a stakeHolderRejectionTask that will complete when ANY stakeholder rejects his or her task.

At this point the Publish function has two tasks it can wait on (stakeHolderApprovalTask and stakeHolderRejectionTask). We also need a task that will fire after one minute. This is done by creating a CancellationTokenSource (ctx) and calling the DurableOrchestrationContext CreateTimer method, passing in the duration we want to wait and the token of the CancellationTokenSource we created. We assign this task to a variable, timeouttask.

Now the Publish function waits for any of these three tasks to complete. Once any of them has completed, the function continues and determines which task actually completed. If the rejection task completed, the function does nothing and terminates. Otherwise, either the stakeHolderApprovalTask or the timeout task completed, and the function copies the document to the Published library.
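The three-way wait can be illustrated with plain promises. This is a sketch of the pattern (Task.WhenAny over an all-approvals task, an any-rejection task, and a timer), not the Durable Functions API:

```typescript
// The orchestrator proceeds when ALL stakeholders approve, ANY rejects,
// or the timer fires, whichever comes first.
type Outcome = "approved" | "rejected" | "timeout";

function waitForStakeholders(
  approvals: Promise<void>[],   // one per stakeholder, resolves on approval
  anyRejection: Promise<void>,  // resolves when any stakeholder rejects
  timeoutMs: number
): Promise<Outcome> {
  const allApproved = Promise.all(approvals).then<Outcome>(() => "approved");
  const rejected = anyRejection.then<Outcome>(() => "rejected");
  const timedOut = new Promise<Outcome>((resolve) =>
    setTimeout(() => resolve("timeout"), timeoutMs)
  );
  // Like Task.WhenAny: the first completed task decides the outcome.
  return Promise.race([allApproved, rejected, timedOut]);
}

// Example: both stakeholders approve immediately and nobody rejects.
const outcome: Promise<Outcome> = waitForStakeholders(
  [Promise.resolve(), Promise.resolve()],
  new Promise<void>(() => {}), // never settles: nobody rejects
  50 // the sample uses one minute; shortened here so the example exits quickly
);
outcome.then((o) => console.log(o)); // "approved"
```

Note that the timeout resolving counts as approval in the sample's logic, which is why only the rejection branch terminates the workflow without copying the document.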

GetListitemData: The GetListitemData Activity function uses OfficeDevPnP.Core.AuthenticationManager to connect to the SharePoint site using the SiteUrl and credentials stored in the function’s configuration (local.settings.json if running locally or the App Settings if running in Azure).

It then fetches the item from the Drafts library using the item ID passed in from the Publish function, and extracts the numeric DocumentOwner ID and the numeric IDs of the stakeholders. It then creates a new ListItemData object to return these values to the Publish function. Again, parameters passed to, and return values from, an activity function must be JSON-serializable. If we try to return the native SharePoint list item, the people fields will not be serialized correctly and will not be accessible in the Publish function.

ScheduleDocOwnerApproval: This activity function creates a task in the Tasks list, setting the AssignedTo and the workFlowId passed in from the Publish function. It also sets the Action field in the task list to "DocOwnerApproval" so that the webhook can send the correct notifications to the Publish function.

DocOwnerRejected: This activity function is called by the Publish function when the DocumentOwner rejects a task. It simply sends out an email. It has no effect on the workflow and is included only for demonstration.

ScheduleStakeholderApproval: This activity function works the same way as the ScheduleDocOwnerApproval function: it creates a task in the Tasks list, setting the AssignedTo to the stakeholder and the WorkFlowId passed in from the Publish function. It also sets the Action field in the task list to "StakeholderApproval" so that the webhook can send the correct notifications to the Publish function.

GetAllStatus: This function is called by the ManageFunctionInstances webpart to get a list of all running workflows.

TerminateInstance: This function is called by the ManageFunctionInstances webpart to terminate a running workflow.

The TaskNotifications Class

The TaskNotifications class in the Durable Functions project implements a SharePoint webhook (as an Azure Function) that listens for updates to items in the Tasks list.

The ReceiveApproval method gets called by SharePoint when items in our Tasks list are changed. It calls the ProcessChanges method for each notification. Note that this function also receives a DurableOrchestrationClient as a parameter so that it can send events to the orchestrator.

When an item changes, the ProcessChanges method checks that the task has at least two versions (versioning must be enabled on the Tasks list for this application to work) and that the approval status changed between the last two versions. If the item has been deleted, there are not at least two versions, or the status has not changed, it does nothing.
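The guard logic can be sketched as a small function (the sample is C#, and the field and type names here are assumptions, but the check is the same):

```typescript
// Sketch of the ProcessChanges guard: act only when the task has at least two
// versions and the approval status changed between them.
interface TaskVersion {
  approvalStatus: string; // e.g. "Pending", "Approved", "Rejected"
}

function statusChanged(versions: TaskVersion[]): boolean {
  // versions[0] is the current version and versions[1] the previous one,
  // matching SharePoint's newest-first version ordering.
  if (versions.length < 2) return false; // versioning off or first save: do nothing
  return versions[0].approvalStatus !== versions[1].approvalStatus;
}

console.log(statusChanged([{ approvalStatus: "Approved" }])); // false
console.log(
  statusChanged([{ approvalStatus: "Approved" }, { approvalStatus: "Pending" }])
); // true
```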

If the Action field in the task list is "DocOwnerApproval", it sends a "DocOwnerApproved" or "DocOwnerRejected" (depending on the status) event to the instance of the Publish workflow using the RaiseEventAsync method of the DurableOrchestrationClient, causing the Publish workflow to continue.

If the Action field in the task list is "StakeholderApproval" and the status is "Rejected", it sends a "StakeHolderRejected" event to the instance of the Publish workflow using the RaiseEventAsync method of the DurableOrchestrationClient. (The Publish workflow will continue when it receives a single StakeHolderRejected event.)

If the Action field in the task list is "StakeholderApproval" and the status is "Approved", it sends a "StakeHolderApproved:xx" (where xx is the numeric ID of the stakeholder) event to the instance of the Publish workflow using the RaiseEventAsync method. (The Publish workflow will continue when it has received StakeHolderApproved:xx events from ALL stakeholders.)
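The webhook's dispatch decision across all three cases above can be sketched as a pure function (a TypeScript illustration of the C# logic; the names are taken from the text):

```typescript
// Sketch of the event the webhook raises for a changed task.
// Returns null when no event should be raised.
function eventToRaise(
  action: string,       // the task's Action field
  status: string,       // "Approved" or "Rejected"
  stakeholderId: number // numeric id of the task's assignee
): string | null {
  if (action === "DocOwnerApproval") {
    return status === "Approved" ? "DocOwnerApproved" : "DocOwnerRejected";
  }
  if (action === "StakeholderApproval") {
    if (status === "Rejected") return "StakeHolderRejected";
    // Per-stakeholder event name: the orchestrator registered one wait per
    // stakeholder and continues only when all of them have been raised.
    return `StakeHolderApproved:${stakeholderId}`;
  }
  return null;
}

console.log(eventToRaise("StakeholderApproval", "Approved", 12)); // "StakeHolderApproved:12"
```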

   

  

The SPFX Application

The SPFX application contains a List View Command Set with a single command, "Start Approval", that is enabled when a single item is selected in the Drafts library. The command does an HTTP POST to the ApprovalStart endpoint in the Orchestrator class of the durable function to initiate the workflow.
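That POST can be sketched as a small helper that builds the request. The route and body shape shown here are assumptions based on the description above (an Azure Functions HTTP trigger is exposed at /api/&lt;FunctionName&gt; by default):

```typescript
// Hedged sketch of the request the List View Command Set sends; the URL and
// payload property names are illustrative, not taken from the sample code.
interface StartApprovalRequest {
  url: string;
  body: string; // JSON payload POSTed to the ApprovalStart HTTP trigger
}

function buildStartApprovalRequest(
  functionBaseUrl: string,
  itemId: number,
  initiatorEmail: string
): StartApprovalRequest {
  return {
    url: `${functionBaseUrl}/api/ApprovalStart`,
    body: JSON.stringify({ itemId, initiator: initiatorEmail }),
  };
}

const req = buildStartApprovalRequest(
  "http://localhost:7071", // the local Functions host while debugging
  7,
  "user@contoso.com"
);
console.log(req.url); // "http://localhost:7071/api/ApprovalStart"
```

In the extension itself the request is sent with the SPFX HttpClient, which handles headers and the fetch call.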

The SPFX application also contains a webpart (ManageFunctionInstances) that lets the user see all instances of the workflow. It does this by calling the GetAllStatus endpoint in the Orchestrator class and displaying the results in a DetailsList. The webpart also has a single action in the command bar to terminate a selected instance.

Installing and Testing the sample

Create a new site

Run the following PnP PowerShell commands to create a new modern site:

Connect-PnPOnline -Url https://yourtenant.sharepoint.com/ 

New-PnPTenantSite `
  -Title "Test Durable Functions" `
  -Url "https://yourtenant.sharepoint.com/sites/DurableFunctionsDemo" `
  -Description "Test Durable Functions" `
  -Owner "youremail@your.domain" `
  -Lcid 1033 `
  -Template "STS#3" `
  -TimeZone 0 `
  -Wait  `
  -StorageQuota 900000

Run the script CreateListsForDurableFunctionsSample.ps1 in the DurableFunctionsDemo application to create the required lists and libraries.

Navigate to https://yourtenant.sharepoint.com/sites/DurableFunctionsDemo/_layouts/15/appregnew.aspx to create a new App Registration.

Generate a new Client ID and Client Secret and complete the form as follows:

Update the local.settings.json of the Durable Functions project with the Client ID and Client Secret you just created. Also add the site URL, the allowed origins, and the ID of the Tasks list that was created previously.
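The result is a local.settings.json along these lines. This is an illustrative sketch: the exact key names (clientId, siteUrl, taskListId, and so on) are assumptions; use whatever names the sample project reads from its configuration.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "clientId": "<Client ID generated on appregnew.aspx>",
    "clientSecret": "<Client Secret generated on appregnew.aspx>",
    "siteUrl": "https://yourtenant.sharepoint.com/sites/DurableFunctionsDemo",
    "taskListId": "<GUID of the Tasks list>"
  },
  "Host": {
    "CORS": "https://yourtenant.sharepoint.com"
  }
}
```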

Now navigate to https://yourtenant.sharepoint.com/sites/DurableFunctionsDemo/_layouts/15/appinv.aspx to grant permissions to the app id. Enter the App ID and click Lookup. Then enter the following for the Permission Request XML:

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web"
    Right="Write" />
</AppPermissionRequests>

Next we need to add the webhook. To do so, the Azure Function app needs to be running, so start debugging and wait for startup to complete. You should see text like this indicating it has started:

Next open a command prompt window, change directory to the ngrok folder in Program Files, and enter the command:

ngrok http 7071 --host-header=localhost:7071

You should get a message like this:

This means that all requests to https://0a656ad1.ngrok.io will be routed to your local machine on port 7071.

With that, we can now add a webhook to our Tasks list using the following PnP PowerShell commands:

[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12;

Connect-PnPOnline "https://yourtenant.sharepoint.com/sites/DurableFunctionsDemo" -Credentials o365

Add-PnPWebhookSubscription -List Tasks -NotificationUrl https://0a656ad1.ngrok.io/api/ReceiveApproval

Now switch to the SPFX project in VSCode. Edit the serve.json file and, under serverConfigurations, change the default pageUrl to point to your tenant. Now debug the project. When the browser starts, allow it to run the debug scripts. Open a new tab in the browser, point it to the _layouts/15/workbench.aspx page of the site you are testing in, and add the ManageWorkflowInstances webpart.

Switch back to the browser tab that has the Drafts library open and upload a file. Edit the properties and add a DocumentOwner and a few Stakeholders (these can be anyone). Select the file and then choose the "Start Approval" item from the menu. This will post a message to the ApprovalStart function in the Orchestrator class of your Visual Studio project. After a few seconds the ApprovalStart function will initiate a new "Publish" orchestration for the document you selected.

Switch browser tabs to the tab containing the workbench and refresh the page. You should see a single instance of your workflow running.

Open another browser tab and navigate to the Tasks list in your site. There should be a single task there, assigned to whomever you specified as the DocumentOwner. The workFlowId on this task should match the InstanceID of the workflow in the ManageWorkflowInstances webpart in the Workbench tab.

Go ahead and mark this task as approved. This causes SharePoint to fire the webhook; the webhook sees that a DocOwnerApproval task changed and raises the DocOwnerApproved event on the Publish function. The Publish function then creates tasks for each of the stakeholders. Approve these tasks and the webhook will run for each change, notifying the Publish function of the approvals. If all the approvals are completed within the allotted one minute, the file is copied to the Published library.

Summary

Azure Durable Functions can be used to implement complex business logic in code.

An additional advantage to using Azure Durable Functions is that the workflow is reusable! You just need to grant the Azure Function Write permission to any additional sites, and add the List View Command Set and webhook to any additional lists/libraries you want the workflow to be available on.


Calling On-Prem services from SPFX Components using the Azure Service Bus Relay.

I was working on a project to expose information from an on-premises SharePoint server to PowerApps using the Azure Service Bus Relay. While that effort was unsuccessful, it turns out that the methodology can be used to easily expose on-premises SharePoint (or any other on-premises service) to SPFX components without any changes to your network. As stated in the official documentation: "The Azure Relay service facilitates hybrid applications by enabling you to securely expose services that reside within a corporate enterprise network to the public cloud, without having to open a firewall connection, or require intrusive changes to a corporate network infrastructure. Relay supports a variety of different transport protocols and web services standards."

Using the Azure Service Bus Relay (https://docs.microsoft.com/en-us/azure/service-bus-relay/), you create a namespace on the Azure Service Bus. You then create an on-premises component (a Windows service, perhaps) that listens for messages on the namespace. Then you create a WebAPI hosted in Azure that can receive requests from clients, post them as requests to the Service Bus namespace, and return the results. Any SPFX (or other) component can then call this Azure WebAPI.

Azure Service Bus Relay comes in two flavors: WCF Relays (which use Windows Communication Foundation (WCF) to enable remote procedure calls) and Hybrid Connections (which use the open WebSockets standard, enabling multi-platform scenarios). This post explains the usage of WCF Relays. I will explore Hybrid Connections in a future post.

So the steps to set this up are:

  1. Register the namespace.
  2. Create a Windows service.
  3. Update the Windows service to listen for requests on the namespace, and call SharePoint.
  4. Create an Azure WebAPI to receive HTTP requests and forward them to the namespace.
  5. Create an SPFX component to call the Azure WebAPI.

Register the namespace

  1. Navigate to portal.azure.com and log on.
  2. Click Create a Resource.
  3. Search for Relay.
  4. Select Relay.
  5. Click Create.
  6. Enter a Name, select a Subscription, a Resource Group, and a Location, and click Create.
  7. You'll get a notification that the deployment is in progress.
  8. Wait for a notification that the deployment succeeded.
  9. Navigate to the Resource Group that you created the namespace in.
  10. Click on the namespace to view its properties.
  11. Click on Shared Access Policies in the left navigation, then click on RootManageSharedAccessKey.
  12. Make a note of the Primary Key, as we will need to reference it in our code.

Create a Windows service to listen for requests on the namespace

See https://docs.microsoft.com/en-us/dotnet/framework/windows-services/walkthrough-creating-a-windows-service-application-in-the-component-designer for details on creating a Windows service. Skip the bits about adding a timer service. We just need a basic Windows service with an EventLogger and an Installer. After you have created the basic service, continue here.

Update the Windows service to listen for requests on the namespace, and call SharePoint

The next step is to install the Service Bus and CSOM NuGet packages in the Windows Service project you just created.

  1. Click on Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution.
  2. Click the Browse tab and search for WindowsAzure.ServiceBus.
  3. Select your project name and click the Install button.
  4. Click OK on the Preview Changes screen.
  5. Accept the license agreement.
  6. Follow the same steps as above to install Microsoft.SharePoint2016.CSOM (or whatever version you are running on-prem).

Now we have the DLLs to talk to the service bus and to SharePoint. The next step is to define the service contract. Click on the project name and add a class called RelayDemoServiceContract.cs.

For this demo, we're going to set it up so that the client can pass in a WebId, and we'll return a list of documents stored in the library called "Document Library" for the selected web. In theory we could configure the service so that the client could pass in any REST endpoint URL accessible via an HTTP GET, and the service would call that endpoint and return the results. I'll explore that option in a future post. The contract for our current sample looks like this:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
namespace RelayServiceDemoService
{
    // Define the data contract for the service
    [DataContract]
    // Declare the serializable properties.
    public class DocumentData
    {
        [DataMember]
        public string Title { get; set; }
        [DataMember]
        public string ServerRelativeUrl { get; set; }
        [DataMember]
        public string FileName { get; set; }
        [DataMember]
        public string CreatedDate { get; set; }
        [DataMember]
        public string LastModified { get; set; }
        [DataMember]
        public string Author { get; set; }
        [DataMember]
        public string Editor { get; set; }
        [DataMember]
        public int FileSize { get; set; }
    }
    // Define the service contract.
    [ServiceContract]
    interface IDocuments
    {
        [OperationContract]
        IList<DocumentData> GetDocuments(string WebId);
    }
    interface IDocumentsChannel : IDocuments, IClientChannel
    {
    }

}

Now that we have the contract defined, we will change the service so that it implements the contract. (Note that for a production solution the service contract and the service itself should be created in a separate DLL to make them easier to test, but for this demo we'll keep it simple.) To change the service, right-click on Service1.cs in the Solution Explorer and select View Code. Change the code so that it listens for messages on the namespace we created and implements the IDocuments interface:

using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Configuration;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.ServiceModel;
using System.ServiceProcess;
using System.Text;
using System.Threading.Tasks;

namespace RelayServiceDemoService
{
    public partial class Service1 : ServiceBase, IDocuments
    {
        private ServiceHost serviceHost; // this is the host that will listen for messages

        public Service1()
        {
            InitializeComponent();
            eventLog1 = new System.Diagnostics.EventLog();
            if (!System.Diagnostics.EventLog.SourceExists("RelaySource"))
            {
                System.Diagnostics.EventLog.CreateEventSource(
                    "RelaySource", "RelayLog");
            }
            eventLog1.Source = "RelaySource";
            eventLog1.Log = "RelayLog";
        }

        protected override void OnStart(string[] args)
        {
            eventLog1.WriteEntry("In OnStart");
            try
            {
                serviceHost = new ServiceHost(typeof(Service1));
                eventLog1.WriteEntry("Created Service Host");
                serviceHost.Open();
            }
            catch (Exception e)
            {
                eventLog1.WriteEntry("Error" + e.Message);
            }
            eventLog1.WriteEntry("Service Host Opened");
        }

        protected override void OnStop()
        {
            eventLog1.WriteEntry("In OnStop");
            serviceHost.Close();
            eventLog1.WriteEntry("Service Host Closed");
        }

        public IList<DocumentData> GetDocuments(string WebId)
        {
            eventLog1.WriteEntry("GetDocuments called for webID " + WebId);
            try
            {
                List<DocumentData> docs = new List<DocumentData>();
                using (ClientContext context = new ClientContext(ConfigurationSettings.AppSettings["App.Url"]))
                {
                    eventLog1.WriteEntry("Created Client context");
                    context.RequestTimeout = 24000;
                    string user = ConfigurationSettings.AppSettings["App.User.Id"];
                    string password = ConfigurationSettings.AppSettings["App.User.Password"];
                    string domain = ConfigurationSettings.AppSettings["App.User.Domain"];
                    context.Credentials = new NetworkCredential(user, password, domain);
                    Web web = context.Site.OpenWebById(new Guid(WebId));
                    context.ExecuteQuery();
                    eventLog1.WriteEntry("Opened web");
                    List list = web.Lists.GetByTitle("Document Library"); // change the name to match your library name
                    CamlQuery camlQuery = new CamlQuery();
                    camlQuery.ViewXml = @"<View><RowLimit>100</RowLimit><ViewFields>
                        <FieldRef Name='ID'></FieldRef>
                        <FieldRef Name='Title'></FieldRef>
                        <FieldRef Name='FileRef'></FieldRef>
                        <FieldRef Name='FileLeafRef'></FieldRef>
                        <FieldRef Name='Created_x0020_Date'></FieldRef>
                        <FieldRef Name='Last_x0020_Modified'></FieldRef>
                        <FieldRef Name='Author'></FieldRef>
                        <FieldRef Name='Editor'></FieldRef>
                        <FieldRef Name='File_x0020_Size'></FieldRef>
                        </ViewFields></View>";
                    ListItemCollection collListItem = list.GetItems(camlQuery);
                    context.Load(collListItem);
                    context.ExecuteQuery();
                    eventLog1.WriteEntry("Got ListItems");
                    foreach (ListItem item in collListItem)
                    {
                        eventLog1.WriteEntry(String.Format("ID: {0} \nTitle: {1} ", item.Id, item["Title"]));
                        docs.Add(new DocumentData()
                        {
                            Title = (string)item["Title"],
                            FileName = (string)item["FileLeafRef"],
                            ServerRelativeUrl = (string)item["FileRef"],
                            CreatedDate = (string)item["Created_x0020_Date"], // keep it as a string for PowerApps
                            LastModified = (string)item["Last_x0020_Modified"],
                            Author = ((FieldUserValue)item["Author"]).Email,
                            Editor = ((FieldUserValue)item["Editor"]).Email,
                            FileSize = Convert.ToInt32((string)item["File_x0020_Size"])
                        });
                    }
                }
                eventLog1.WriteEntry("Completed GetDocuments for webID " + WebId);
                return docs;
            }
            catch (Exception e)
            {
                eventLog1.WriteEntry(e.Message);
                return null;
            }
        }
    }
}

In the OnStart method above we open a ServiceHost, which establishes the connection to the service bus. When a message is received, GetDocuments is called, which connects to the SharePoint site using the passed-in WebId and retrieves the documents from the library called "Document Library".

Now we need to update the app.config with connection information for our on-premises SharePoint farm and the service bus. Add the following to the system.serviceModel section of your app.config (being sure to change the key name and the endpoint address):

<services>
        <service name="RelayServiceDemoService.Service1" behaviorConfiguration="debug">
          <endpoint address="sb://yourNameSpace.servicebus.windows.net/documents" binding="netTcpRelayBinding" contract="RelayServiceDemoService.IDocuments" behaviorConfiguration="documents" />
        </service>
      </services>
      <behaviors>
        <endpointBehaviors>
          <behavior name="documents">
            <transportClientEndpointBehavior>
              <tokenProvider>
                <sharedAccessSignature keyName="RootManageSharedAccessKey" key="yourKey" />
              </tokenProvider>
            </transportClientEndpointBehavior>
          </behavior>
        </endpointBehaviors>
        <serviceBehaviors>
          <behavior name="debug">
            <serviceDebug includeExceptionDetailInFaults="true" />
          </behavior>
        </serviceBehaviors>
        
      </behaviors>

And add the following appSettings:

<appSettings>
    <!-- Service Bus specific app setings for messaging connections -->
    <add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://yournamespace.servicebus.windows.net;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=yourkey" />
    <add key="App.Url" value="http://youronprem.sp.site" />
    <add key="App.User.Id" value="username" />
    <add key="App.User.Password" value="password" />
    <add key="App.User.Domain" value="domain" />
    <add key="ClientSettingsProvider.ServiceUri" value="" />
  </appSettings>

Note that when rebuilding the service you need to stop the service, build the project, run installutil with the /u flag to uninstall the old version, run installutil to install the new version, and then restart the service.

Once completed go ahead and install the service and start it.

 

Create an Azure WebAPI to receive HTTP requests and forward them to the Relay

Right-click on your solution and add a new Web Application called RelayServiceProxy.

Select Web API.

Add the WindowsAzure.ServiceBus NuGet package to the RelayServiceProxy project.

Right-click on the Models folder and add a class called Document.cs:

 using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace RelayServiceProxy.Models
{
    public class Documents
    {
        public string Title { get; set; }
        public string FileRef { get; set; }
        public string FileLeafRef { get; set; }
        public string CreatedDate { get; set; }
        public string LastModified { get; set; }
        public string Author { get; set; }
        public string Editor { get; set; }
        public int FileSize { get; set; }
    }
}

Next we need to add the contract that we defined in the Windows service to the proxy, so right-click on the RelayServiceProxy project in the Solution Explorer and click Add Existing Item. Navigate to the RelayServiceDemoService folder and select RelayDemoServiceContract. Be sure to hit the dropdown on the Add button and select Add as Link.

Now we need to add a controller. Right-click on the Controllers folder and select Add -> Controller. Select an empty Web API 2 Controller and click Add.

Name the controller DocumentController.

Add the following variable and methods to the DocumentController class:

static ChannelFactory<IDocumentsChannel> channelFactory;

static DocumentController()
{
    // Create shared access signature token credentials for authentication.
    channelFactory = new ChannelFactory<IDocumentsChannel>(new NetTcpRelayBinding(),
        "sb://[yournamespace].servicebus.windows.net/documents");
    channelFactory.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
    {
        TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", "[youraccesskey]")
    });
}

public IList<DocumentData> Get(string webID)
{
    using (IDocumentsChannel channel = channelFactory.CreateChannel())
    {
        return channel.GetDocuments(webID);
    }
}

Be sure to set the namespace and access key to the values from your own relay service!

Now, change the solution so that it just starts your web app when you start debugging. Right-click on your solution in Solution Explorer and select Properties. On the Startup Project tab select Single startup project, and select RelayServiceProxy as the startup project.

If you run your web app locally now, you should be able to get a list of documents in a web by browsing to http://localhost:13778/api/Document?webID=awebid. (Be sure your service is running and that awebid in the URL points to a web that has a library called "Document Library"!)

Now we can deploy the RelayServiceProxy to Azure. Right-click on RelayServiceProxy in your solution and click Publish. Create a new Azure App Service and click Publish.

Enter an App Name, Subscription, Resource Group, and Hosting Plan, then click Create.

Save the Site URL so that we can reference it in our SPFX app next. We are going to be calling this WebAPI from an SPFX webpart, so we need to enable CORS. Navigate to your WebAPI app in the Azure portal and click CORS. Enter your SharePoint tenant and https://localhost:4321 as allowed origins.

Note that in a real-life situation we would need to enable authentication on the WebAPI. It was left off here to keep the post short. Also note that the identity of the user is not passed to the Windows service. Therefore all authorization would need to be done in the WebAPI!

 

Create an SPFX component to call the Azure WebAPI

Create a new SPFX webpart.

Add a member variable to the webpart class to hold the results of the call:

private results: Array<any>;

Add the following onInit method to your webpart to call the Azure Relay:

public onInit(): Promise<any> {
  const requestHeaders: Headers = new Headers();
  requestHeaders.append('Content-type', 'application/json');
  requestHeaders.append('Cache-Control', 'no-cache');
  const httpClientOptions: IHttpClientOptions = {
    headers: requestHeaders
  };

  const url = "https://yournamespace.azurewebsites.net/api/Document?webID=awebid";
  return this.context.httpClient.get(url, HttpClient.configurations.v1, httpClientOptions)
    .then((response) => {
      return response.json().then((r) => {
        this.results = r;
      });
    })
    .catch((err) => {
      console.log(err);
    });
}

 

Next, change the component props interface to include the results in a field called documents:

export interface IHelloAzureRelayServiceProps {
  description: string;
  documents: Array<any>;
}

 

Change the webpart's render method to pass the results to the component:

 

public render(): void {
  const element: React.ReactElement<IHelloAzureRelayServiceProps> = React.createElement(
    HelloAzureRelayService,
    {
      description: this.properties.description,
      documents: this.results
    }
  );
  ReactDom.render(element, this.domElement);
}
Finally, change the render method of the React component to render the results from the relay service call:
public render(): React.ReactElement<IHelloAzureRelayServiceProps> {
  return (
    <div>
      <div>
        <div>
          <span>Welcome to SharePoint!</span>
          <p>Customize SharePoint experiences using Web Parts.</p>
          <p>{escape(this.props.description)}</p>
          <ul>
            {this.props.documents.map((d, i) => {
              return <li key={i}>{d.Title}</li>;
            })}
          </ul>
        </div>
      </div>
    </div>
  );
}

    Now, when you run your webpart, it posts a message to the service bus; your windows service reads the message, calls out to SharePoint to get the list of documents, and posts the reply back to the service bus. Your webpart gets the response and displays the documents:
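The round trip can be sketched with in-memory queues standing in for the Service Bus relay. Everything here (RelayRequest, handleRequest, getDocuments) is a hypothetical stand-in to show the request/reply flow, not the Azure Relay API:

```typescript
// requestQueue and replyQueue are in-memory stand-ins for the relay's service bus;
// handleRequest stands in for the on-premises windows service.
type RelayRequest = { webID: string };
const requestQueue: RelayRequest[] = [];
const replyQueue: string[][] = [];

// "windows service" side: read a request, query SharePoint (mocked via lookup), post the reply
function handleRequest(lookup: (webID: string) => string[]): void {
  const msg = requestQueue.shift();
  if (msg) { replyQueue.push(lookup(msg.webID)); }
}

// "webpart" side: post a request, then read the reply
function getDocuments(webID: string, lookup: (webID: string) => string[]): string[] {
  requestQueue.push({ webID });
  handleRequest(lookup); // in reality this runs on another machine, behind the relay
  return replyQueue.shift() || [];
}

const docs = getDocuments("awebid", () => ["Doc1.docx", "Doc2.docx"]);
console.log(docs); // prints [ 'Doc1.docx', 'Doc2.docx' ]
```

The point of the relay is that neither side holds a connection to the other; both only talk to the bus, which is why the on-premises service needs no inbound firewall holes.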


    The code for this solution will be posted to the sp-dev-fx-webparts repository.

    As a final note, Azure Relay comes in two flavors: WCF Relays and Hybrid Connections. This post demonstrated WCF Relays. Hybrid Connections use WebSockets as the communications layer rather than WCF. In a future post I’ll demonstrate the same capability using Hybrid Connections.


    Posted in sharepoint, spfx | Leave a comment

    Updating props in a react-based SPFX webpart after initial Render

    The React-based SPFX webparts generated by the Yeoman templates do not allow us to update the properties of the React component after it is initially rendered. It can sometimes be useful to do so.

    To be able to update the properties after initial rendering make the following changes to the webpart class:

    1. Make the local variable element in the render method a class variable.
    2. Declare another class variable called formComponent whose type is the type of your React class.
    3. Save the result of ReactDom.render in the webpart's render method to the formComponent class variable.

    So the class variables look like this for a webpart called ‘Test’:


    private element: React.ReactElement<ITestProps>;
    private formComponent: Test;

    and your render method looks like this:

    public render(): void {
      this.element = React.createElement(
        Test,
        {
          description: this.properties.description
        }
      );
      this.formComponent = ReactDom.render(this.element, this.domElement) as Test;
    }

    Now, in any function in your webpart class you can update the props of the component as shown here:

    private someFunction(): void {
      let newProps: ITestProps = this.element.props;
      newProps.description = "New Description";
      this.element.props = newProps;
      this.formComponent.forceUpdate();
    }

     

    This can be useful in a number of cases. For instance, say you have a React-based SPFX component with several dropdowns whose data comes from different SharePoint lists. You would typically use one of two options. First, you could fetch all the information in the onInit method of your webpart and pass it to your component as props, so the component can render the dropdowns from its props (this can make the initial render slow). Second, you could have the React component call SharePoint directly when it needs to render a list (this can cause a lag when the user clicks the dropdown).

     

    The other option, discussed here, is to render your component initially with just the first few dropdown lists populated in the props. The onInit method gets the values for the first few dropdowns and then calls render. The render method does the render as shown above, then continues to fetch the information for the additional dropdowns. When it's done, it adds them to the props and re-renders the component as shown in someFunction above.
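The keep-a-reference, mutate-props, force-a-re-render pattern can be sketched without React at all. FakeComponent and renderFake below are hypothetical stand-ins for the React/SPFX plumbing; only the sequence of calls matters:

```typescript
// FakeComponent and renderFake are hypothetical stand-ins for React.createElement /
// ReactDom.render; forceUpdate here just counts renders instead of painting the DOM.
interface IDropdownProps { choices: string[][]; }

class FakeComponent {
  public renderCount = 0;
  constructor(public props: IDropdownProps) {}
  public forceUpdate(): void { this.renderCount++; } // stand-in for React's forceUpdate
}

// stand-in for ReactDom.render: creates the component and does the initial render
function renderFake(props: IDropdownProps): FakeComponent {
  const component = new FakeComponent(props);
  component.forceUpdate();
  return component;
}

// Initial render with only the first dropdown's data, as described above.
const staged = renderFake({ choices: [["Draft", "Published"]] });
// Later, when the remaining list data arrives, add it and re-render.
staged.props.choices.push(["Owner A", "Owner B"]);
staged.forceUpdate();
console.log(staged.renderCount); // 2
```

The user sees the form almost immediately after the first render; the second render fills in the slower dropdowns without blocking the initial paint.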

    Posted in react, spfx | Tagged | 1 Comment

    Creating an Outlook Add-in using an SPFX Webpart

    This post demonstrates how you can create an Outlook add-in using an SPFX webpart.

    The first step is to create a page that will host your webpart and be displayed in Outlook. On your site, create a new page (I called mine test.aspx). Add the following code to the Additional Page Head:

    (The original post showed the two lines to add as a screenshot.)

    The first line allows the page to be opened in an iframe (which is how add-ins open). The second line loads the Office.js script needed to talk to Office.

    The second step is to create the webpart that will make up the Office add-in.

    Add the following member variables to the webpart class:

    private from: string;
    private attachments: Array<any>;
    private body: string;
    private office: any; // holds the Office namespace object once initialized
    private subject: string;

    In the onInit method of the SPFX webpart, add the following code:

    public onInit(): Promise<any> {
      return new Promise<any>((resolve: (args: any) => void, reject: (error: Error) => void) => {
        window["Office"]["initialize"] = () => {
          this.office = window["Office"];
          this.attachments = window["Office"].context.mailbox.item.attachments;
          this.from = window["Office"].context.mailbox.item.from.emailAddress;
          this.subject = window["Office"].context.mailbox.item.subject;
          window["Office"].context.mailbox.item.body.getAsync(
            "html",
            { asyncContext: "This is passed to the callback" },
            (result) => {
              this.body = result.value;
              resolve(window["Office"]); // or undefined
            }
          );
        };
      });
    }

    This code loads the Office runtime and sets the attachments, from, and subject member variables from the currently opened email. The full code for the webpart can be found at https://github.com/russgove/fpaoutlookaddin.
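The initialize-then-resolve pattern in that onInit can be exercised with a stand-in host object. Host, waitForOffice, and the simulated initialize call below are all hypothetical; in the real add-in it is the Office.js runtime that invokes window["Office"]["initialize"] once it is ready:

```typescript
// Host stands in for the browser window plus the Office.js runtime.
type Host = { Office?: { initialize?: () => void; subject?: string } };

// Register an initialize callback that resolves a promise, mirroring the onInit above.
function waitForOffice(host: Host): Promise<string> {
  return new Promise<string>((resolve) => {
    host.Office = {
      subject: "Quarterly report", // pretend mailbox data set up by the host
      initialize: () => resolve(host.Office!.subject!),
    };
    // Simulate the Office host firing the initialize callback asynchronously.
    setTimeout(() => host.Office!.initialize!(), 0);
  });
}

waitForOffice({}).then((subject) => console.log(subject)); // prints "Quarterly report"
```

Returning the promise from onInit is what makes SPFX wait for Office to be ready before rendering the webpart.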

    Note that at this point the webpart has a context in the SharePoint site as well as the user's email, so we can exchange data between the two.

    Now that the webpart is built, go ahead and  deploy it and add it to the page created in step one.

    The last thing we need is a manifest file used to load our add-in into Outlook. A sample manifest can be found in the SampleOfficeManifest.xml file in the GitHub repo. Change YOURTENANT in the sample to the name of your tenant, and change ‘sites/fpa/siteassets/test.aspx’ to the page where you added your webpart. Save the file to your local disk.

     

    Now open Outlook in O365 and select Manage Add-ins:

    Select My add-ins –> add a custom add-in –> from file


    And select the manifest file you just created.

    Now that the add-in has been added, open an email in your web Outlook and notice the new icon (my icon is set to localhost in my manifest, so I am getting the default icon in the example below):


    Click on the icon and, voila:


    We have an SPFX webpart running as an Outlook add-in!

    Posted in Add-in, office-ui-fabric-react, sharepoint, spfx, Uncategorized | Tagged , , | Leave a comment

    Cancel Running Workflows Prior to Saving an Item

    This is a problem for workflows that delay until a certain date before sending out reminders. The user creates an item with 1/27/2018 as the due date, and the workflow starts waiting until 1/27/2018 to send out a reminder. If the user then edits the item and changes the due date, no new workflow is started, and the running workflow is unaware the date has changed.

    This is typically resolved by setting up a workflow to run as part of a retention policy. This can sometimes be troublesome if you want to send out multiple reminders (say, 7 days before due and 3 days before due) because you need to set up calculated fields with the reminder dates.

    But there is another way. With a bit of JSOM code, a SharePoint edit form can be made to cancel the running workflow prior to saving the item to the list. This way a new workflow gets started when you save the item (provided you have the workflow configured to run when an item is added AND when an item is changed).

    If you are using an SPFX webpart as your edit form (which is unfortunately not currently supported on modern lists), the following code can be used to cancel the running workflow just prior to saving the new version.

     private async getWorkFlowDefinitionByName(workflowDeploymentService: SP.WorkflowServices.WorkflowDeploymentService, workFlowName: string): Promise<SP.WorkflowServices.WorkflowDefinition> {
        let context = workflowDeploymentService.get_context();
        let wfDefinitions = workflowDeploymentService.enumerateDefinitions(true);
        context.load(wfDefinitions);
        await new Promise((resolve, reject) => {
          context.executeQueryAsync((x) => {
            resolve();
          }, (error) => {
            console.error("an error occurred getting workflow definitions");
            console.log(error);
            reject();
          });
        });
        let foundDefinition: SP.WorkflowServices.WorkflowDefinition;
        let defEnum = wfDefinitions.getEnumerator();
        while (defEnum.moveNext()) {
          const wfDefinition = defEnum.get_current();
          if (wfDefinition.get_displayName() === workFlowName) {
            foundDefinition = wfDefinition;
            break;
          }
        }
        return Promise.resolve(foundDefinition);
      }
      private async getWorkFlowSubscriptionByDefinitionIdListId(workflowSubscriptionService: SP.WorkflowServices.WorkflowSubscriptionService, workFlowDefinitionId: string, listId: string): Promise<SP.WorkflowServices.WorkflowSubscription> {
        let context: SP.ClientRuntimeContext = workflowSubscriptionService.get_context();
        let wfSubscriptions: SP.WorkflowServices.WorkflowSubscriptionCollection =
          workflowSubscriptionService.enumerateSubscriptionsByList(listId);
        context.load(wfSubscriptions);
        await new Promise((resolve, reject) => {
          context.executeQueryAsync((x) => {
            resolve();
          }, (error) => {
            console.error("an error occurred getting workflow subscriptions");
            console.log(error);
            reject();
          });
        });
        if (!wfSubscriptions) {
          alert("Failed to load workflow subscriptions. Running workflows were not cancelled. This can happen if the Office 365 workflow service is unavailable.");
          console.error("Failed to load Workflow instances.");
          return Promise.reject("Failed to load Workflow instances.");
        }
        let foundSubscription: SP.WorkflowServices.WorkflowSubscription;
        let subscriptionEnum = wfSubscriptions.getEnumerator();
        while (subscriptionEnum.moveNext()) {
          const wfSubscription: SP.WorkflowServices.WorkflowSubscription = subscriptionEnum.get_current();
          if (wfSubscription.get_definitionId().toString().toUpperCase() === workFlowDefinitionId.toString().toUpperCase()) {
            foundSubscription = wfSubscription;
            break;
          }
        }
        return Promise.resolve(foundSubscription);
      }
      private async cancelRunningWorkflows(ItemId: number, listId: string, workflowName: string): Promise<void> {
        if (!workflowName) {
          return Promise.resolve();
        }
        var context = SP.ClientContext.get_current();
        // get all the workflow service managers
        var workflowServicesManager: SP.WorkflowServices.WorkflowServicesManager = SP.WorkflowServices.WorkflowServicesManager.newObject(context, context.get_web());
        var workflowInstanceService: SP.WorkflowServices.WorkflowInstanceService = workflowServicesManager.getWorkflowInstanceService();
        var workflowSubscriptionService: SP.WorkflowServices.WorkflowSubscriptionService = workflowServicesManager.getWorkflowSubscriptionService();
        var workflowDeploymentService: SP.WorkflowServices.WorkflowDeploymentService = workflowServicesManager.getWorkflowDeploymentService();
        //Get all the definitions from the Deployment Service, or get a specific definition using the GetDefinition method.
        let wfDefinition: SP.WorkflowServices.WorkflowDefinition = (await this.getWorkFlowDefinitionByName(workflowDeploymentService, workflowName));
        if (!wfDefinition) {
          console.error("Could not find workflow definition for workflow named: " + workflowName);
          alert("Could not find workflow definition for workflow named: " + workflowName);
          return Promise.resolve();
        }
        let wfDefinitionId: string = wfDefinition.get_id();
        // get the subscription for the list
        let wfSubscription: SP.WorkflowServices.WorkflowSubscription =
          await this.getWorkFlowSubscriptionByDefinitionIdListId(workflowSubscriptionService, wfDefinitionId, listId);
        if (!wfSubscription) {
          console.error("Could not find a subscription for workflow named: " + workflowName + " in the TR List");
          alert("Could not find a subscription for workflow named: " + workflowName + " in the TR List");
          return Promise.resolve();
        }
        let wfSubscriptionId: string = wfSubscription.get_id().toString().toUpperCase();
        let wfInstances: SP.WorkflowServices.WorkflowInstanceCollection = workflowInstanceService.enumerateInstancesForListItem(listId, ItemId);
        context.load(wfInstances);
        await new Promise((resolve, reject) => {
          context.executeQueryAsync((x) => {
            resolve();
          }, (error) => {
            console.log(error);
            reject();
          });
        });
        if (!wfInstances) {
          debugger;
          alert("Failed to load workflow instances. Running workflows were not cancelled. This can happen if the Office 365 workflow service is unavailable.");
          console.error("Failed to load Workflow instances.");
          return Promise.resolve();
        }
        var instancesEnum = wfInstances.getEnumerator();
        let runningInstance;
        while (instancesEnum.moveNext()) {
          var instance = instancesEnum.get_current();
          let instanceSubscriptionId = instance.get_workflowSubscriptionId().toString();
          let instanceStatus = instance.get_status();
          if (instanceSubscriptionId.toUpperCase() === wfSubscriptionId && instanceStatus === 1) {
            runningInstance = instance;
          }
        }
        if (runningInstance) {
          workflowInstanceService.terminateWorkflow(runningInstance);
          await new Promise((resolve, reject) => {
            context.executeQueryAsync((x) => {
              console.log("Workflow Termination Successful");
              resolve();
            }, (error) => {
              console.log(error);
              debugger;
              console.error("Failed to terminate workflow.");
              resolve();
            });
          });
        }
      }
    

    With the above methods in place, you just need to call


    if (originalRequiredDate !== tr.RequiredDate) {
      await this.cancelRunningWorkflows(itemId, listId, workflowName).then((x) => {
        console.log("Workflow has been terminated");
      });
    }

    prior to saving your list item. If there is an instance of the workflow already running, it will be canceled and a new workflow will start once your item is saved.

    Posted in react, spfx, Uncategorized | Tagged , | Leave a comment

    Code Editor Property Pane Control for SPFX WebParts

    I submitted a PR to the spfx-property-controls repository today for a new SPFX property pane control – the PropertyFieldCodeEditor control.
    The new control uses the Ace editor under the hood (see https://ace.c9.io/).
    The Ace editor supports editing many types of content with auto-complete, error checking, syntax highlighting, etc.

    The PropertyFieldCodeEditor property pane control allows you to edit Json, Javascript, Sass, Typescript, Plain text, HTML, Handlebars and XML code within a language-aware editor, right from the property pane.

    It can be used to:
    • edit the XML needed to add an SPFX webpart to a page (that’s the reason it was created originally)
    • edit HTML snippets to be shown in an SPFX webpart
    • edit plain text to be shown in an spfx webpart

    • edit a CAML query to be passed to renderListDataAsStream
    • edit JSON values to pass complex data structures to a webpart
    • edit Handlebars templates to be used in an spfx webpart
    • edit JavaScript snippets
    • edit TypeScript code (we could have an Azure job compile it!)
    • and more…

    Merry Christmas!


    Posted in react, sharepoint, spfx, Uncategorized | Tagged , | Leave a comment

    Using Async/Await with JSOM

    Async/Await can make JSOM coding much easier and is simple to set up. All you need to do is wrap your executeQuery calls in a Promise, then you can await them!

    
    await new Promise((resolve, reject) => {
      clientContext.executeQueryAsync((x) => {
        resolve();
      }, (error) => {
        console.log(error);
        reject();
      });
    });
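Outside of SharePoint, the same wrapping pattern can be exercised with a stand-in for the JSOM context. FakeContext, executeQuery, and demo below are hypothetical names; only executeQueryAsync's success/error callback shape mirrors the real SP.ClientContext:

```typescript
// FakeContext mimics the callback shape of SP.ClientContext.executeQueryAsync.
class FakeContext {
  public failWith: string | null = null;
  public executeQueryAsync(onSuccess: () => void, onError: (sender: any, args: any) => void): void {
    setTimeout(() => {
      if (this.failWith) { onError(this, { message: this.failWith }); }
      else { onSuccess(); }
    }, 0);
  }
}

// The wrapping shown above, extracted into a reusable helper.
function executeQuery(ctx: FakeContext): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    ctx.executeQueryAsync(() => resolve(), (_sender, args) => reject(new Error(args.message)));
  });
}

async function demo(): Promise<string> {
  await executeQuery(new FakeContext()); // awaitable instead of nested callbacks
  return "query completed";
}
```

Because the error callback calls reject, a failed query surfaces as a thrown exception at the await, so you can use ordinary try/catch instead of error callbacks.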
    

    Here’s a full example of a method that hides the first webpart on the page:

    
    public async AddWebPartToEditForm(webRelativeUrl: string, editformUrl: string) {
      const clientContext: SP.ClientContext = new SP.ClientContext(webRelativeUrl);
      var oFile = clientContext.get_web().getFileByServerRelativeUrl(editformUrl);
      var limitedWebPartManager = oFile.getLimitedWebPartManager(SP.WebParts.PersonalizationScope.shared);
      let webparts = limitedWebPartManager.get_webParts();
      clientContext.load(webparts, 'Include(WebPart)');
      clientContext.load(limitedWebPartManager);
      await new Promise((resolve, reject) => {
        clientContext.executeQueryAsync((x) => {
          resolve();
        }, (error) => {
          console.log(error);
          reject();
        });
      });
      let originalWebPartDef = webparts.get_item(0);
      let originalWebPart = originalWebPartDef.get_webPart();
      originalWebPart.set_hidden(true);
      originalWebPartDef.saveWebPartChanges();
      await new Promise((resolve, reject) => {
        clientContext.executeQueryAsync((x) => {
          console.log("the webpart was hidden");
          resolve();
        }, (error) => {
          console.log(error);
          reject();
        });
      });
    }
    

     

    Posted in Uncategorized | Leave a comment