Evaluating an API for Ease-of-Use With Valence

If an API is addressable via HTTP or HTTPS, you can almost always use Valence to talk to it.

How easy that will be depends greatly on how the API is constructed and what sort of features it supports. This guide is a companion to How to Design a Complementary Valence API. That guide covers designing a new API; this one is for evaluating a preexisting API, and gives you a checklist to go through to gauge the level of effort it would take to talk to it from Valence.

This is a guide for someone designing/developing a new Valence Adapter. If an Adapter already exists for the API you need to connect to, you can stop reading. You’re all set. If not, here are the things you should investigate:

Topics of Investigation

Authentication

How does this API handle authentication? Is it an API key? An OAuth flow? Salesforce handles some basic scenarios out of the box with Named Credentials. If the API's authentication is standard, there's almost no effort here. If it's something custom, you will likely need to write code to retrieve tokens or keys to use.
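When custom code is needed, it usually boils down to POSTing credentials to a token endpoint and pulling an access token out of the JSON response. Here is a minimal, language-agnostic sketch in Python of that parsing step (the response shape shown is the standard OAuth 2.0 one; any specific endpoint is an assumption, not a real API):

```python
import json

def extract_access_token(response_body: str) -> str:
    """Pull the access token out of a typical OAuth 2.0 token response.

    Assumes the standard response shape:
    {"access_token": "...", "token_type": "Bearer", "expires_in": 3600}
    """
    payload = json.loads(response_body)
    if "access_token" not in payload:
        raise ValueError("Token endpoint returned an error: %s" % payload)
    return payload["access_token"]

# A sample token response, as a custom endpoint might return it:
sample = '{"access_token": "abc123", "token_type": "Bearer", "expires_in": 3600}'
print(extract_access_token(sample))  # abc123
```

In Apex the same shape of logic would live in the code that prepares outbound requests, caching the token until it expires.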

Schema

How will we know about the schema of the API?

In the best-case scenario, the API has an endpoint (or endpoints) that self-describes it. For example, a common setup is an endpoint to list tables, and another endpoint to list fields, given a table name. Dynamic schema discovery is a huge win, as it allows Valence to surface those details to users and stay current as the API changes over time.
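As a concrete illustration, suppose the API exposed a hypothetical endpoint listing tables and another listing fields per table (both names and response shapes below are invented for the sketch). The adapter's schema work is then just stitching the two responses together:

```python
import json

# Pretend responses from the hypothetical discovery endpoints.
tables_response = '["Account", "Contact"]'
fields_responses = {
    "Account": '[{"name": "Id", "type": "string"}, {"name": "Revenue", "type": "number"}]',
    "Contact": '[{"name": "Id", "type": "string"}, {"name": "Email", "type": "string"}]',
}

def discover_schema():
    """Build a {table: {field: type}} map from the discovery endpoints."""
    schema = {}
    for table in json.loads(tables_response):
        fields = json.loads(fields_responses[table])
        schema[table] = {f["name"]: f["type"] for f in fields}
    return schema

print(discover_schema()["Contact"])  # {'Id': 'string', 'Email': 'string'}
```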

A next-best option that sometimes exists is a static file that represents the schema. Perhaps this is a Swagger or OpenAPI document, or some kind of JSON / XML representation of the schema (like a WSDL). If this is the case, a good pattern is to store that file as a static resource in Salesforce and load it from your Apex code to interpret at runtime when doing schema work for the user.
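For example, given an OpenAPI document stored as a static resource, the runtime schema work amounts to walking the component schemas. A Python sketch of that traversal (the document fragment is invented for illustration):

```python
import json

# A tiny invented fragment of an OpenAPI 3.x document.
openapi_doc = json.loads('''
{
  "components": {
    "schemas": {
      "Invoice": {
        "type": "object",
        "properties": {
          "id": {"type": "string"},
          "amount": {"type": "number"},
          "dueDate": {"type": "string", "format": "date"}
        }
      }
    }
  }
}
''')

def fields_for(schema_name):
    """Return {fieldName: type} for one schema in the OpenAPI document."""
    props = openapi_doc["components"]["schemas"][schema_name]["properties"]
    return {name: spec["type"] for name, spec in props.items()}

print(fields_for("Invoice"))  # {'id': 'string', 'amount': 'number', 'dueDate': 'string'}
```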

A last resort is to either hardcode the schema into the Apex class, or give up completely and leave the schema unknown. An unknown schema means the user will have nothing to map against, but once records flow, Valence will discover the record shape and surface it to users. This is not nearly as good as the other options, but it can help in a pinch.
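That record-shape discovery amounts to inspecting the keys and value types of records as they arrive. A rough Python sketch of the idea (not Valence's actual implementation):

```python
import json

def infer_shape(record_json):
    """Infer a flat {field: typeName} shape from one JSON record."""
    record = json.loads(record_json)
    return {key: type(value).__name__ for key, value in record.items()}

print(infer_shape('{"Id": "003xx", "Amount": 99.5, "Closed": true}'))
# {'Id': 'str', 'Amount': 'float', 'Closed': 'bool'}
```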

Research Checklist

Use this list of questions to gather relevant details to drive your design:

Authentication

  1. What authentication options exist? Basic auth? API key? OAuth? If OAuth, what flows are supported?

Schema

  1. Is the schema retrievable via API? What does that look like?

  2. If not retrievable via API, is the schema stored in a machine-readable format somewhere? WSDL / Swagger / OpenAPI / etc?

  3. If not machine-readable, how is the schema documented?

Reading

  1. Is it possible to fetch records from the API?

  2. In what format are records received (CSV, JSON, XML, etc)?

  3. Can specific columns/fields be selected, or are full records always returned?

  4. Can records be filtered in some way, ideally based on a last-modified timestamp or similar?

  5. Is record fetching synchronous or asynchronous?

  6. How many records can be retrieved at once?

  7. Are read results paginated? If so, what is the mechanism?

  8. If paginated, can all the pages be calculated upfront (ex: from a total count of records), or do we only learn along the way (ex: there is a next page, but we don't know how many more pages there will be)?
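The two answers to question 8 lead to different adapter loops. A hedged Python sketch of both styles, against an invented in-memory stand-in for the remote API:

```python
RECORDS = list(range(95))  # stand-in for the remote dataset
PAGE_SIZE = 40

def fetch_offset_page(offset):
    """Offset pagination: the total count is known upfront."""
    return {"total": len(RECORDS), "records": RECORDS[offset:offset + PAGE_SIZE]}

def fetch_cursor_page(cursor):
    """Cursor pagination: we only learn whether another page exists."""
    start = cursor or 0
    nxt = start + PAGE_SIZE
    return {"records": RECORDS[start:nxt], "next": nxt if nxt < len(RECORDS) else None}

# Offset style: all pages can be planned upfront (and even fetched in parallel).
first = fetch_offset_page(0)
planned_offsets = list(range(0, first["total"], PAGE_SIZE))
print(planned_offsets)  # [0, 40, 80]

# Cursor style: pages must be walked one at a time until "next" runs out.
cursor, fetched = None, []
while True:
    page = fetch_cursor_page(cursor)
    fetched.extend(page["records"])
    cursor = page["next"]
    if cursor is None:
        break
print(len(fetched))  # 95
```

Upfront-calculable pagination is the friendlier answer, since work can be parallelized and progress reported accurately.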

Writing

  1. Is it possible to write records to the API?

  2. In what format are records written (CSV, JSON, XML, etc)?

  3. Do records have to be written one at a time or can they be written in bulk? How many records can be written at once?

  4. Is upsert a supported operation?

  5. What keys/identifiers are supported for upsert/update operations?

  6. Is record writing synchronous or asynchronous?

  7. What does the response payload for a write operation look like? Does it include identifiers when records are created?
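Question 7 matters because the adapter needs to pair each write result back to the record it sent. A sketch of parsing an invented bulk-write response that returns per-record results in order, with identifiers for created records:

```python
import json

# Invented response shape for a bulk upsert of three records.
write_response = '''
[
  {"status": "created", "id": "A-100"},
  {"status": "updated", "id": "A-007"},
  {"status": "error", "message": "Amount is required"}
]
'''

def summarize_results(body):
    """Split per-record results into returned IDs and positional failures."""
    ids, failures = [], []
    for index, result in enumerate(json.loads(body)):
        if result["status"] == "error":
            failures.append((index, result["message"]))
        else:
            ids.append(result["id"])
    return ids, failures

ids, errs = summarize_results(write_response)
print(ids)   # ['A-100', 'A-007']
print(errs)  # [(2, 'Amount is required')]
```

APIs that return results out of order, or without identifiers, make this correlation step much harder.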

Other

  1. What does error handling and error reporting look like?

  2. Compression: Is the Accept-Encoding HTTP header supported, specifically for “gzip”?

  3. Are there any sort of rate or volume limits on the API?

  4. How are dates and date-times formatted? Are they UTC, or some other timezone?
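For question 4 in particular, it is worth checking how the API's timestamps parse and normalize. A small Python check that converts an ISO 8601 date-time with an offset into UTC (the sample value is made up):

```python
from datetime import datetime, timezone

def to_utc(timestamp):
    """Parse an ISO 8601 date-time string and normalize it to UTC."""
    return datetime.fromisoformat(timestamp).astimezone(timezone.utc)

# A made-up API value with a -05:00 offset:
parsed = to_utc("2023-04-01T09:30:00-05:00")
print(parsed.isoformat())  # 2023-04-01T14:30:00+00:00
```

If the API uses a non-standard format or an implicit local timezone, the adapter will need explicit conversion logic, so it pays to find this out early.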