Salesforce’s Apex does not support the PATCH method. What can we do when we need to call an endpoint with that verb?

I had a requirement once, a proof of concept: I needed to call a Microsoft Azure endpoint from Salesforce using the PATCH HTTP verb. The problem, as mentioned in the title, is that Apex does not support this verb.

If we are trying to call a Salesforce endpoint, there’s a trick: append ?_HttpMethod=PATCH to the end of the URL. This is a workaround that Salesforce itself supports, but it doesn’t help us here because we are not calling a Salesforce endpoint. Another workaround would be setting the X-HTTP-Method-Override header to PATCH in the request. This is a convention that some servers follow, but it doesn’t guarantee that the server being called will treat our request as a PATCH.
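
In Apex, that header workaround would look something like this (a sketch; the endpoint is hypothetical and would need to be registered as a remote site):

// Send a POST but ask the server to treat it as a PATCH. Whether this works
// depends entirely on the server receiving the request.
HttpRequest req = new HttpRequest();
req.setEndpoint('https://example.com/api/resource/123'); // hypothetical endpoint
req.setMethod('POST');
req.setHeader('X-HTTP-Method-Override', 'PATCH');
req.setHeader('Content-Type', 'application/json');
req.setBody('{"name": "updated value"}');
HttpResponse res = new Http().send(req);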

Let’s write a simple proxy that is hosted on Heroku!

Leveraging a Heroku app in another cloud (technically another Salesforce cloud, since Heroku was acquired by Salesforce in 2010), we are able to forward our request to its final destination.

  1. Salesforce calls our Heroku app endpoint
  2. The app forwards the request with the correct verb
  3. The app receives the response from Azure and forwards it back to Salesforce

To do that, I’m going to use Python with the Flask and requests libraries. Flask will handle the “web app” part, while requests is going to be used to forward our request.

NOTE: I am not going to cover the part where we get Azure’s access token because that doesn’t involve an unsupported verb.

Assuming that our Salesforce code sends a request containing the access token, the payload and the target URL, the request body will probably look like this:

{
    "token": "V2VsbCBhcmVuJ3QgeW91IGN1cmlvdXM/DQoNCiBMb3JlbSBpcHN1bSBkb2xvciBzaXQgYW1ldCwgY29uc2VjdGV0dXIgYWRpcGlzY2luZyBlbGl0LiBOdWxsYW0gcGVsbGVudGVzcXVlIHRvcnRvciBhYyBlbmltIGxhb3JlZXQsIGFjIGVsZW1lbnR1bSB0dXJwaXMgdWx0cmljaWVzLiBJbnRlZ2VyIGludGVyZHVtIHJpc3VzIGxhY3VzLCBlZ2V0IGNvbnNlcXVhdCBsaWd1bGEgZmVybWVudHVtIHZpdGFlLiBFdGlhbSBzb2RhbGVzLCBsaWJlcm8gdml0YWUgZGlnbmlzc2ltIGx1Y3R1cywgbGliZXJvIGFyY3UgdnVscHV0YXRlIHF1YW0sIGF0IG1hdHRpcyBkdWkgbWFnbmEgbmVjIG1hc3NhLiBEb25lYyBpcHN1bSBkb2xvciwgZnJpbmdpbGxhIHZpdGFlIG5pYmggYXQsIHJob25jdXMgc2NlbGVyaXNxdWUgZXN0LiBEb25lYyBuZWMgc29kYWxlcyByaXN1cy4gUGVsbGVudGVzcXVlIHF1aXMgZnJpbmdpbGxhIGVyb3MuIFBlbGxlbnRlc3F1ZSBoYWJpdGFudCBtb3JiaSB0cmlzdGlxdWUgc2VuZWN0dXMgZXQgbmV0dXMgZXQgbWFsZXN1YWRhIGZhbWVzIGFjIHR1cnBpcyBlZ2VzdGFzLiBOYW0gcnV0cnVtIG1ldHVzIG1hdXJpcywgYWMgdWxsYW1jb3JwZXIgdGVsbHVzIGF1Y3RvciBpbi4gVXQgYWNjdW1zYW4gc2NlbGVyaXNxdWUgc29kYWxlcy4gRnVzY2UgdmFyaXVzIG5lcXVlIGVzdCwgc2VkIHB1bHZpbmFyIHNlbSBzY2VsZXJpc3F1ZSBub24uIA==",
    "payload": "IFNlZCB2ZW5lbmF0aXMgZXQgbWV0dXMgbm9uIGx1Y3R1cy4gUGVsbGVudGVzcXVlIGFjIGV1aXNtb2QgbWV0dXMsIG5lYyB0ZW1wb3IgZHVpLiBOYW0gYSB2ZXN0aWJ1bHVtIGZlbGlzLiBOdW5jIG1hZ25hIGxpZ3VsYSwgY29uZ3VlIG5lYyBpbXBlcmRpZXQgdXQsIGNvbmd1ZSB2dWxwdXRhdGUgcXVhbS4gTWFlY2VuYXMgYmxhbmRpdCwgZmVsaXMgbmVjIHNlbXBlciBkYXBpYnVzLCB0ZWxsdXMgaXBzdW0gdm9sdXRwYXQgYXVndWUsIGFjIGVnZXN0YXMgbmlzbCBvcmNpIG5lYyBzYXBpZW4uIEV0aWFtIGEgdnVscHV0YXRlIGVyb3MuIEN1cmFiaXR1ciBsYWNpbmlhIHNjZWxlcmlzcXVlIG5pc2wgc2VkIHZvbHV0cGF0LiBNYXVyaXMgdml0YWUgZXJhdCBwZWxsZW50ZXNxdWUsIGxhY2luaWEgdHVycGlzIHV0LCB0ZW1wb3Igc2FwaWVuLiBJbnRlZ2VyIHZlbCBsb2JvcnRpcyBkdWkuIEN1cmFiaXR1ciBpbXBlcmRpZXQgbWF0dGlzIGZlbGlzLiBQaGFzZWxsdXMgY29tbW9kbyBtYXNzYSBldSB2ZWxpdCBkYXBpYnVzIHRyaXN0aXF1ZSBhIGV1IGxpYmVyby4gRnVzY2UgaW4gcmlzdXMgZW5pbS4gRnVzY2UgZmVybWVudHVtIGV0IHB1cnVzIGV0IGNvbmRpbWVudHVtLiBJbiBzY2VsZXJpc3F1ZSBwb3N1ZXJlIGVsaXQsIHZpdGFlIGludGVyZHVtIHR1cnBpcyBjb25zZWN0ZXR1ciBhdC4g",
    "url": "https://outlook.office.com/api/beta/me/contacts/31d14663-8cf4-4acf-b1c8-556b8e62107d" 
}
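
On the Salesforce side, building and sending that request could look roughly like the sketch below (the Heroku app URL and the class name are hypothetical):

// Sketch of the Apex callout to the proxy. The endpoint is a hypothetical
// Heroku app name; token, payload and URL come from your actual integration.
public with sharing class PatchProxyCaller {
    public static HttpResponse callProxy(String accessToken, String jsonPayload, String targetUrl) {
        Map<String, String> body = new Map<String, String>{
            'token' => accessToken,
            // The proxy expects the payload encoded in Base64
            'payload' => EncodingUtil.base64Encode(Blob.valueOf(jsonPayload)),
            'url' => targetUrl
        };
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://quiet-waters-12345.herokuapp.com/contact/'); // hypothetical
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(body));
        return new Http().send(req);
    }
}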

The app will receive this and interpret it as “okay, I’ve got this encoded payload, and I shall use this token to send it to this endpoint”:

# Import the required libraries
# Flask is the web framework that handles the "web app" part (such as serving the app
# and handling the connections). We import the main Flask class to create the app, plus
# the request object and the Response class to handle requests properly
from flask import Flask, Response, request

# requests is a simple http request library to handle... requests.
import requests

# base64 is a standard module to help us encode/decode Base64 strings
import base64
# json is a standard module to help us handle JSON in Python (converting it from/to
# dictionaries - which are also known as maps in some other languages)
import json
# os is a standard module for dealing with the OS directly (we use it just to check
# an environment variable at the end of the script)
import os


# Let's first create the app. This is an empty app which does nothing.
# The app will do what we want as we define the methods/routes below, with (for example)
# the `app.route` decorator (which specifies the route and allowed methods)
app = Flask(__name__)

# This route defines that the app can receive POST requests in the `/contact/` endpoint. So
# when deployed, if the app is named `quiet-waters-12345`, its Heroku URL will be
# `https://quiet-waters-12345.herokuapp.com/` and we should hit that endpoint, adding the
# `/contact/` at the end.
@app.route('/contact/', methods=['POST'])
def contact():
    # First, let's deserialize the request's JSON data into a dictionary
    # (tolerating a missing/invalid JSON body so we can return a proper error below).
    request_data = request.get_json(silent=True) or {}

    # We check that the required attributes are present
    if 'token' in request_data and 'payload' in request_data and 'url' in request_data:
        try:
            # We try to decode the payload
            payload = base64.b64decode(request_data['payload']).decode('utf-8')

            # Assign the original payload to a new attribute named `original_payload`
            # in our dictionary
            request_data['original_payload'] = payload

            # Define the headers as required by the Azure endpoint
            headers = {
                'Authorization': 'Bearer ' + request_data['token'],
                'Content-Type': 'application/json'
            }

            # Try to call external endpoint using the requests library. Note that we
            # use the `patch` method here.
            azure_request = requests.patch(
                url=request_data['url'],
                data=payload,
                headers=headers
            )
            # When the request is finished, its result is stored in `azure_request`,
            # which we can use to get the JSON response.
            result = {
                "azure_response": azure_request.json()
            }
            # We basically dump the request's result into a new Response and return
            # it to the service that called us in the first place.
            resp = Response(json.dumps(result), status=azure_request.status_code, mimetype='application/json')
            return resp
        except Exception as e:
            resp = Response(json.dumps({'error': e.args}), status=500, mimetype='application/json')
            return resp

    # Returns an error response because there is missing data in the payload.
    return Response(json.dumps({'error': 'Missing token, payload or url data'}), status=400, mimetype='application/json')

# Checks if the `IS_HEROKU` variable is set. If it is (in our dyno) then the app is running on
# Heroku's cloud. Otherwise it is running locally in our machine, so we want it to run in our
# localhost, on port 8080 instead (and with debug mode active).
if not os.environ.get('IS_HEROKU', None) and __name__ == '__main__':
    app.run(host='localhost', port=8080, debug=True)

And with this small web app hosted on Heroku we are not limited to a single URL: it transforms any POST request sent to it into a PATCH request to the target endpoint. I’ve used this to call an Outlook endpoint (hence the app’s route being named /contact/), but it can be renamed as needed.

An idea would be to make all HTTP verbs available as endpoints, such as /post, /get, /delete, etc. This way, though, the app starts to look more like an endpoint bus…

Using IsDeprecated to reduce the number of packages returned by force:package:version:list

As you develop your package, the versions start to pile up on Salesforce’s servers. Suddenly you can find yourself getting dozens of results from force:package:version:list in your DevHub org. If the project lasts for years and you are working in a fast-paced environment, you will surely have a bad time with this list.

This command has some flags to help you filter the package list: by the date the package version was created, by the date it was last modified (both in days), and by its release status. You can also order the results by some fields on the Package2Version object. Just a tip: --orderby Version doesn’t work (because the version shown in the terminal is actually a representation of all four version fields).
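
For example, something along these lines (flag names from the CLI help at the time of writing) narrows the list down to released versions created in the last 30 days, ordered by creation date:

$ sfdx force:package:version:list --targetdevhubusername DevHub --createdlastdays 30 --released --orderby CreatedDate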

But then, what if I just want to list my latest stable released package? This command won’t do. Fortunately for us, Salesforce is “API first”, so we can query the Package2Version object using the Tooling API and filter the results with a SOQL query.

If we want only the released packages, we include IsReleased = TRUE in the WHERE clause.

If we want to filter out the deprecated ones, we add IsDeprecated = FALSE.

We can even get the ones protected by passwords, with the IsPasswordProtected field.

Example query:

$ sfdx force:data:soql:query -t -u DevHub -q "SELECT Id, SubscriberPackageVersionId, IsPasswordProtected FROM Package2Version WHERE Package2.Name = 'MY_AWESOME_PACKAGE_ALIAS' AND IsReleased = TRUE AND IsDeprecated = FALSE"
ID                  SUBSCRIBERPACKAGEVERSIONID  ISPASSWORDPROTECTED
──────────────────  ──────────────────────────  ───────────────────
05iXX000000XXX0XXX  04tXX00000XXXX0XXX
05iXX000000XXX1XXX  04tXX00000XXXX1XXX          true

This is great, but then how can I mark a package version as deprecated?

The documentation for the Package2Version object lists its fields and says that the update call is supported. Also, the IsDeprecated field description contains this snippet:

If you set IsDeprecated to true for a package, the package and all of its child package versions are deprecated.

Package2Version docs

The “if you set to true” wording and the lack of a “read-only” note in that field’s description indicate that the field is writable. We can probably update it using the Tooling API and the Salesforce CLI.


I have two released package versions, and one of them is not password protected because I forgot to manually assign a password to it. Since I’m dealing with an unlocked package here, I don’t want the installation ID (the one starting with 04t) to fall into the wrong hands. I don’t want people to see my precious metadata! So I’ll just deprecate that version, and people won’t be able to install it anymore:

$ sfdx force:data:record:update -s Package2Version -i 05iXX000000XXXXXXX -v "IsDeprecated=true" -u DevHub -t
Successfully updated record: 05iXX000000XXXXXXX.

Great! Now only my password manager and I know the key to install this package. 😌

Let’s build a status page with Salesforce’s Force.com Sites

Disclaimer: Force.com Sites might not be the best way to build a status page because of the daily and monthly allocations. However, I think it’s a nice Visualforce and site development exercise. And since the page is public and we need the guest profile, we also get to deal a little with sharing and visibility.

At the end we should have something like this on our page:


Salesforce has a small snippet on how to add a Lightning component to a Visualforce page (here), but we want to use Lightning Out to add a component to a public website. To allow guest user access, there’s a catch: we need to implement the ltng:allowGuestAccess interface in our app:

<aura:application access="GLOBAL" extends="ltng:outApp" implements="ltng:allowGuestAccess">
    <aura:dependency resource="c:statusComponent"/>
</aura:application>

For starters, thinking about the data structure of a simple status page like this one, we have three obvious objects to deal with: the systems listed, the incidents (per system) and the incident updates. We can assume there’s a master-detail relationship between a system and an incident, and another between the incident and the incident update. A system has a current status (available, degraded performance, or unavailable), while incidents and their updates can change this status.

There is no mystery in how the page works: it is a Visualforce page that uses Lightning Out to load a Lightning app, which in turn declares our component as a dependency. This is a composed component, meaning it uses other components to compose the view.
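
A minimal sketch of that Visualforce page, assuming the app above is saved as c:statusApp (the page structure and the div id are arbitrary):

<apex:page showHeader="false" standardStylesheets="false">
    <apex:includeLightning />
    <!-- Lightning Out renders the component inside this div -->
    <div id="statusPage"></div>
    <script>
        $Lightning.use("c:statusApp", function() {
            $Lightning.createComponent(
                "c:statusComponent",
                {},
                "statusPage",
                function(cmp) {
                    // Component created; it takes over from here
                }
            );
        });
    </script>
</apex:page>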

The main status component (statusComponent) queries the systems and instantiates a component (systemStatus) dedicated to showing info about a specific system. This system status component queries the unresolved incidents of its system and instantiates the systemIncident component, which does pretty much the same thing with a systemIncidentUpdate component.
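
On the server side, each of those components needs Apex to fetch its records. The real code is in the repository linked at the end; a hypothetical controller could look like this (object and field names here are assumptions, not the actual ones):

// Hypothetical controller backing the status components. SystemItem__c,
// Incident__c, Status__c and IsResolved__c are made-up names.
public with sharing class StatusPageController {
    @AuraEnabled
    public static List<SystemItem__c> getSystems() {
        return [SELECT Id, Name, Status__c FROM SystemItem__c ORDER BY Name];
    }

    @AuraEnabled
    public static List<Incident__c> getOpenIncidents(Id systemId) {
        return [
            SELECT Id, Name, Status__c
            FROM Incident__c
            WHERE SystemItem__c = :systemId AND IsResolved__c = false
        ];
    }
}

Remember that the site’s guest user profile needs read access to these objects (and fields) for the queries to return anything.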


Now, talking a little about the back end of things: it would be nice if the user responsible for updating the incident statuses could also update the system status at the same time. This way, when an incident is resolved, the site is instantly updated.


The source code can be found here: https://github.com/renatoliveira/force-com-status-page.

I’d like to list all the test classes in my Salesforce org, can I do that?

Yes, we can! But it is not as easy as typing a command that returns a pretty JSON list with all the class names.

First of all, we can’t query the ApexClass table using an IsTestClass field as criteria, because no such field exists. The second problem is that there are differences between managed and unmanaged classes: the former’s code isn’t accessible, so we can’t just search the class body for the @IsTest annotation.

What do we do, then? We turn to one of Salesforce’s APIs: the Tooling API. It provides us with data that is interesting for developers building things for developers.

Fortunately, the Salesforce CLI supports this API, so we can query the ApexClass table using the Tooling API and get back an object with information about the queried class. For example, the following query:

sfdx force:data:soql:query --query "SELECT Id, SymbolTable FROM ApexClass WHERE Name = 'SObjectUnitOfWorkTest'" --usetoolingapi --targetusername playground --json

It queries the SObjectUnitOfWorkTest class and its symbol table through the Tooling API. Don’t mind the “playground” thing; that’s what I call my personal dev org.

The output of this command should be a JSON document of considerable size (depending on the size of the class). But we are interested in just one part of the response: the “methods” attribute of the SymbolTable attribute. It contains a list of the methods in the class we queried, and looks something like this:

"methods": [
  {
    "annotations": [
      {
        "name": "IsTest"
      }
    ],
    "location": {
      "column": 25,
      "line": 39
    },
    "modifiers": [
      "private",
      "static",
      "testMethod"
    ],
    "name": "test_inserts",
    "parameters": [],
    "references": [],
    "returnType": "void",
    "type": null
  },
  // and more methods below, if the class contains more than one method
]

Using a command-line tool like jq, we can easily extract the methods annotated with IsTest using a command like this:

cat test.apxc | jq '.result.records[0].SymbolTable.methods | .[] | select((.annotations | length > 0) and .annotations[0].name == "IsTest")'

If you just want the methods’ names, the command differs just a bit at the end:

cat test.apxc | jq '.result.records[0].SymbolTable.methods | .[] | select((.annotations | length > 0) and .annotations[0].name == "IsTest") | .name'

For the class I used before, I get the following output:

$ cat test.apxc | jq '.result.records[0].SymbolTable.methods | .[] | select((.annotations | length > 0) and .annotations[0].name == "IsTest") | .name'
"test_inserts"
"test_updates"
"testUnitOfWorkEmail"
"testDerivedUnitOfWork_CommitDMLFail"
"testDerivedUnitOfWork_CommitDoWorkFail"

And that’s it! A not-so-easy way of extracting the test methods of a single class. But at least it can be automated.

But what if I want to get the methods for multiple classes at once?

Just drop the WHERE clause (or filter on a list of class names instead). It might take a while to run the query, but Salesforce definitely delivers a result to your terminal. The biggest issue then is reshaping the JSON result, with jq alone or with a scripting language (like Python or PowerShell).
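
For instance, something along these lines seems to do the trick with jq alone (a sketch, assuming we also select the Name field and save the output to classes.json; not thoroughly tested):

$ sfdx force:data:soql:query --query "SELECT Name, SymbolTable FROM ApexClass" --usetoolingapi --targetusername playground --json > classes.json
$ jq '.result.records[] | {class: .Name, tests: [.SymbolTable.methods[]? | select((.annotations | length > 0) and .annotations[0].name == "IsTest") | .name]} | select(.tests | length > 0)' classes.json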

Creating Workflow Rules Using MetadataService.cls

I had trouble creating this particular type of metadata using only Apex code once, and I’ll try to explain the process so you don’t have to waste two goddamn days of your life trying to figure out why the hell Salesforce is telling you that the field ‘fullName’ is empty.

First of all, in Salesforce the Workflow object isn’t actually a system object, which means you cannot use it in queries (‘SELECT Id FROM Workflow’ – no, you can’t do that). What is it, then? Pure metadata. It just references some of your objects and fields (which aren’t just metadata).

One thing I strongly recommend when dealing with this kind of thing is using the Migration Tool. It can be really useful to see what you currently have in your organization. Some things do not appear in the UI, and you might think there is not much in there: WorkflowFieldUpdates, for instance, do not appear unless there is a WorkflowRule working with them.

Now to our example:

You will need the MetadataService class, which can be obtained here.

Suppose we need to create a WorkflowRule that has the following criteria:

It has to be linked to a custom object that has different RecordTypes, and we trigger the rule based on two fields of this object, compared with the current date. It has to be active (so it can work, duh), and it has to set a value in a field of this custom object.

Well, that seems pretty easy, right? Right. If you were using the UI, that is. But you are using Apex. So it is still easy, but it requires a little more brain work.

<fieldUpdates>
    <fullName>Active_Permission</fullName>
    <description>Actives a permission.</description>
    <field>Active__c</field>
    <literalValue>1</literalValue>
    <name>Active Permission</name>
    <notifyAssignee>false</notifyAssignee>
    <operation>Literal</operation>
    <protected>false</protected>
</fieldUpdates>
<rules>
    <fullName>Manage Active Temporary Permission</fullName>
    <active>true</active>
    <description>Manages when a temporary permission has to be set as Active.</description>
    <formula>IF(RecordType.DeveloperName=&apos;Temporary&apos; &amp;&amp; BeginDate__c > Today(),TRUE,FALSE)</formula>
    <triggerType>onCreateOrTriggeringUpdate</triggerType>
    <workflowTimeTriggers>
        <actions>
            <name>Active_Permission</name>
            <type>FieldUpdate</type>
        </actions>
        <offsetFromField>Permission__c.BeginDate__c</offsetFromField>
        <timeLength>0</timeLength>
        <workflowTimeTriggerUnit>Hours</workflowTimeTriggerUnit>
    </workflowTimeTriggers>
</rules>

The code above comes from the Permission__c.workflow file, retrieved directly from the SF organization using the Migration Tool. I’ve simplified this snippet a little, since there are other fieldUpdates and rules in the file.

If you take a good look at this file, you’ll notice that it contains every bit of information we need to recreate it with our MetadataService class.

public void create_ManageInactiveTemporaryPermission()
{
    MetadataService.MetadataPort ms = MetadataJob.createService();
    MetadataService.WorkflowActionReference workflowActionReference = new MetadataService.WorkflowActionReference();
    MetadataService.WorkflowRule workflowRule = new MetadataService.WorkflowRule();
    MetadataService.WorkflowTimeTrigger workflowTimeTrigger = new MetadataService.WorkflowTimeTrigger();
    MetadataService.WorkflowFieldUpdate workflowFieldUpdate = new MetadataService.WorkflowFieldUpdate();

    workflowFieldUpdate.fullName = 'Permission__c.Inactivate_Permission';
    workflowFieldUpdate.description = 'Inactivates a permission.';
    workflowFieldUpdate.field = 'Active__c';
    workflowFieldUpdate.literalValue = '0';
    workflowFieldUpdate.name = 'Inactivate Permission';
    workflowFieldUpdate.notifyAssignee = false;
    workflowFieldUpdate.operation = 'Literal';
    workflowFieldUpdate.protected_x = false;

    workflowActionReference.name = 'Inactivate_Permission';
    workflowActionReference.type_x = 'FieldUpdate';

    workflowTimeTrigger.offsetFromField = 'Permission__c.EndDate__c';
    workflowTimeTrigger.timeLength = '0';
    workflowTimeTrigger.workflowTimeTriggerUnit = 'Hours';
    workflowTimeTrigger.actions = new MetadataService.WorkflowActionReference[]{workflowActionReference};

    workflowRule.fullName = 'Permission__c.ManageInactiveTemporaryPermission';
    workflowRule.active = true;
    workflowRule.description = 'Manages when a temporary permission has to be set as inactive.';
    workflowRule.formula = 'IF(RecordType.DeveloperName=\'Temporary\' && EndDate__c > Today(),TRUE,FALSE)';
    workflowRule.triggerType = 'onCreateOrTriggeringUpdate';
    workflowRule.actions = new MetadataService.WorkflowActionReference[]{workflowActionReference};
    workflowRule.workflowTimeTriggers = new MetadataService.WorkflowTimeTrigger[]{workflowTimeTrigger};

    MetadataService.Metadata[] theMetadata = new MetadataService.Metadata[]{};

    theMetadata.add(workflowFieldUpdate);
    theMetadata.add(workflowRule);

    MetadataService.SaveResult[] results = ms.createMetadata(theMetadata);

    for (MetadataService.SaveResult sr : results)
    {
        MetadataJob.handleSaveResults(sr);
    }
}

A note about the MetadataJob class: this is a class I created to check the results and show error messages in a simpler way (so I don’t need to go to the debug log every freaking time I try to make it work). It receives a MetadataService.SaveResult object and checks whether it is null or its .success property is true. It also checks whether its errors property is non-empty and, if so, displays the errors to me. Finally, if none of those conditions are met, it throws an exception telling me that the service failed for an unknown reason.
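
That class isn’t reproduced in this post, but based on the description above, a minimal sketch of it could look like this (the exception type and the messages are my assumptions; the service creation follows the usual apex-mdapi pattern):

// Sketch of the MetadataJob helper described above. MetadataJobException
// is a made-up exception type.
public class MetadataJob {
    public class MetadataJobException extends Exception {}

    public static MetadataService.MetadataPort createService() {
        MetadataService.MetadataPort service = new MetadataService.MetadataPort();
        service.SessionHeader = new MetadataService.SessionHeader_element();
        service.SessionHeader.sessionId = UserInfo.getSessionId();
        return service;
    }

    public static void handleSaveResults(MetadataService.SaveResult result) {
        // Null or successful results need no handling
        if (result == null || result.success) {
            return;
        }
        // Surface the errors instead of forcing a trip to the debug log
        if (result.errors != null && !result.errors.isEmpty()) {
            List<String> messages = new List<String>();
            for (MetadataService.Error error : result.errors) {
                messages.add(error.message + ' (' + error.statusCode + ')');
            }
            throw new MetadataJobException(String.join(messages, ' '));
        }
        throw new MetadataJobException('The service failed for an unknown reason.');
    }
}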

Getting back to our code above, you can see that I basically copy the metadata into the fields in the Apex code. A few things are worth noting, though:

At line 9, I have to specify the object’s name before the name of the action I want the rule to perform (updating a field is an action, right?). I have to do this because the WorkflowFieldUpdate does not hold anything it can relate to besides an object in the organization. You’ll notice that I create it before the rule itself (see line 36), and that the rule also has the object name before the rule name, and so does the TimeTrigger.

After creating my action, I’m off to create the rule. For this, I check my XML file. Notice that the WorkflowRule ‘object’ has two fields that reference other things and are interesting to us: actions and workflowTimeTriggers.

Those two have to be created before the rule and referenced from it afterwards. So I create the ActionReference first (line 18), then the TimeTrigger (line 21), and finally the rule itself (at line 26), referencing those two in their respective fields.

I haven’t tested this, but since the actions and workflowTimeTriggers properties are lists, I assume you can have multiple actions to a rule, and multiple timeTriggers too.

When I have everything set, I send the metadata to the createMetadata function, which actually handles the connection and conversion, asking Salesforce to create my things in the organization.

If everything goes right, you will have your workflows created after running this code!

You can even delete it afterwards using the deleteMetadata function, which is even simpler: you just pass the name of the metadata type and a list of full names of the components of that type. So, for my example, it would be something like this:

theMetadataService.deleteMetadata('WorkflowFieldUpdate', new List<String>{'Permission__c.Active_Permission'});

Be warned though: if you are working in an environment with managed packages, you will have to add the namespace before both names, like:

theMetadataService.deleteMetadata('WorkflowFieldUpdate', new List<String>{'myPkgNamespace__Permission__c.myPkgNamespace__Active_Permission'});

If you don’t do that, Salesforce won’t recognize your metadata and will tell you “hey, I didn’t find anything with this name in here!”.