
There are some common patterns you may see when working with some of Aotal's tenant APIs.

Application views

About views

Many of Aotal's tenant APIs present information about job applications in the form of an application view. For example:

In all of these cases, the API passes your app a view. A view is a JSON-formatted package containing all the data that the customer has specified your app should see.

Views are used to give your app just the information that it needs, and only that information.

If you need data added to the view, contact the customer. The customer has access to setup screens in other apps (such as the assessment hub and/or the ATS) that let them control which fields appear in the view, and possibly also rename individual fields to match your expectations.

Information available in a view

A view can hold (at the discretion of the customer) information about any of the following objects:

  • the application itself
  • the candidate
  • the job
  • the job's recruiter
  • the job's hiring manager

Each of these objects can have within it two types of data:

  • system fields (always presented in the same way, e.g. the job's recruiter's first name).
  • custom fields (user-defined fields, potentially different for every tenant, that appear in the view within one of the items arrays)

Example view

POST /applications/views/at/hire/now/byID/{application}/pushes


In this example view:

  • The application itself has a single item (custom field) for HIREDATE
  • The candidate's names, email address and resume are all included (system fields)
  • There are no custom fields visible on the candidate
  • The job's recruiter's email is included (system field)
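Putting the bullets above together, the view might look roughly like the sketch below. The element names and nesting shown here are illustrative assumptions, not the documented schema:

```json
{
  "application": {
    "items": [
      { "name": "HIREDATE", "value": "2016-03-14" }
    ]
  },
  "candidate": {
    "firstName": "Jane",
    "lastName": "Smith",
    "email": "jane.smith@example.com",
    "resume": { "fileName": "resume.pdf" },
    "items": []
  },
  "recruiter": {
    "email": "recruiter@example.com"
  }
}
```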

Making sure the view contains the data your app needs

Since the contents of a view are completely under the control of the customer, a view may be packed with information or virtually empty.

Your app, however, will likely need some minimum set of fields to be present in the view. Typically you'll do the following when building your app:

  1. Decide what fields your app needs in the view. For example if you are writing an onboarding app, then hire date will probably be mandatory for you.
  2. For the custom fields that you need, choose a name for each one (e.g. HIREDATE), using only letters, digits and the - sign, and fewer than 30 characters. You don't need to decide on a name for system fields, as they always appear as named json elements in the view.
  3. Document your required system and custom fields, either in your app's description or somewhere that is linked to. This information will be required by the customer to set up the view. Guiding the customer to set up the view correctly is likely to be one of the most critical parts of successfully onboarding new customers who install your app.
  4. Now code your app. Somewhere in your code, you'll pull the required fields out of the incoming view. For the system fields, you'll simply grab them from the relevant fixed json structure (e.g. recruiter email, name). For the custom fields, you should find them in the items array.
  5. For custom data fields to do with the job application (as opposed to the job, or the job's recruiter or manager) it is important in the previous step to look for your items (e.g. HIREDATE) in both the application and the candidate-scoped items arrays. That's because different ATS's and even different customer setups within the same ATS may decide to store hire date (for example) as an attribute of either the job application or the candidate. So you need to cater for both of these cases.
  6. In your code, if you don't find the fields you need in the view, raise an error. How you do this depends on the API you are interacting with - for an assessment app, you would PATCH the assessment to be in Error status. You should also display clearly to the user why the error occurred - i.e. which specific fields you couldn't find in the view - so that the customer can correct the situation.
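Steps 4-6 above might be sketched as follows. The view structure and field paths are assumptions (matching the example view earlier), and the error handling is left abstract rather than tied to a specific API:

```python
class MissingViewFieldError(Exception):
    """Raised when a required field is absent from the incoming view."""


def find_custom_field(view, name):
    """Look for a custom field in both the application- and candidate-scoped
    items arrays, since different ATS setups may store the field in either."""
    for scope in ("application", "candidate"):
        for item in view.get(scope, {}).get("items", []):
            if item.get("name") == name:
                return item.get("value")
    return None


def extract_required_fields(view):
    # System field: always at a fixed place in the view (path assumed here)
    recruiter_email = view.get("recruiter", {}).get("email")
    # Custom field: search the items arrays under both scopes
    hire_date = find_custom_field(view, "HIREDATE")
    missing = [label for label, value in [("recruiter email", recruiter_email),
                                          ("HIREDATE", hire_date)] if value is None]
    if missing:
        # A real app would surface this to the user and, for an assessment
        # app, PATCH the assessment into Error status.
        raise MissingViewFieldError("fields not found in view: " + ", ".join(missing))
    return {"recruiter_email": recruiter_email, "hire_date": hire_date}
```

Note that find_custom_field deliberately checks both scopes before giving up, per step 5.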

Error handling in tenant APIs

http status codes

Where possible, when indicating errors, API producers should document and use existing, meaningful http error codes.

For example, 409 Conflict could be returned if a consumer tried to create object "foo" but such object already exists.

If there is no suitable code, APIs should just use 400 or 500 as appropriate - we don't currently define new http status codes.

application/problem+json responses

In addition to the status code, APIs responding with errors (other than self-explanatory ones such as 404) should also return a body of type application/problem+json as per Problem Details for HTTP APIs (RFC 7807).

The application/problem+json format uses the "type" field (a URI) to identify the type of error.

For predictable error cases (e.g. creating a job application fails because the job is closed), the API documentation should specify actual values for type. At TAS, we start our type URIs with a standard prefix. For example:

            schema: !include ../schemas/applicationProblem.json
            description: |
              The app producing the API should return one of the following values in the type field where appropriate:
              - - the job is closed
              - - it's after midnight

Apps may also throw errors with undocumented values for type (obviously the consumer won't be able to take any specific action in this case).
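A problem response body might look something like the sketch below. The type URI here is a placeholder assumption, not a documented value:

```json
{
  "type": "https://example.com/problems/job-closed",
  "title": "The job is closed",
  "status": 409,
  "detail": "Job 123 is closed, so no new applications can be created."
}
```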

URLs for complex resources

We try to follow something like this pattern for the URLs of complex resources.

Class - broad family for the resource, e.g. buttons
Who - principal type who can view the resource + actual viewer, e.g., me, anonymous, byID/{id}
Where - significant location, e.g. general, /jobs/{job} - possibly implied by who  
What - specific resource type, e.g. possibles, meta, omitted where obvious
Which - further filtering (over location), e.g. byName/{name}, byApp

e.g.: /items/toCandidate/me/jobs/{job}/itemMetas/byName/{item}

Class - /items
Who - /toCandidate/me
Where - /jobs/{job}
What - /itemMetas
Which - /byName/{item}
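The Class/Who/Where/What/Which assembly above can be sketched as a small helper. The function name and signature are illustrative assumptions:

```python
def complex_resource_url(cls, who, where=None, what=None, which=None):
    """Assemble a URL following the Class/Who/Where/What/Which convention.
    Parts that are implied or obvious may be omitted (passed as None)."""
    parts = [cls, who]
    for part in (where, what, which):
        if part:
            parts.append(part)
    # Normalize leading/trailing slashes so each part contributes cleanly
    return "/" + "/".join(p.strip("/") for p in parts)
```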

SSO conventions

This section has some conventions that most apps that use SSO should follow.

Add the isSignedIn parameter to outgoing links

Some apps have pages that work both when the user is signed in, and when they are not.

For example a career site may have a job landing page like:

When the user is not signed in, the page displays an apply button.

When the user is signed in, the apply button changes to "You've already applied". The page also detects that they are an employee, and a "refer a friend" button appears.

A problem occurs when the user follows a link from one app, where they are signed in, to another app, where they are not signed in (but could be instantly thanks to SSO). For example:

  1. The candidate surfs to the careers site, then into a specific job, then clicks apply. They are redirected to the apply app.
  2. They authenticate and apply.
  3. Finally they click back to the careers site app. The career site app does not know that they are signed in, so it cannot display its contents intelligently (e.g., displaying "you've already applied" instead of an apply button).

One solution to the above would be for the career site to always ask everyone to log in before viewing the page, but that would be a barrier for non-signed in users, and cause SEO problems.

So to handle this we follow this convention:

Whenever an app emits a link which it believes is to another app, and there is a signed in user, it appends the "isSignedIn" hint to it.
The hint indicates to the destination app that the user is most likely already signed in, so it is probably OK to ask them to authenticate since we have good reason to believe the process will be instant and invisible (thanks to SSO).

In the example above, when the apply app links the user back to the job on the career site, it appends the isSignedIn hint as follows:
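Appending the hint to an outgoing link might look like this minimal sketch, using standard URL handling. The parameter value passed for principal type is an assumption:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode


def add_is_signed_in(url, principal_type):
    """Append the isSignedIn hint to an outgoing link when there is a
    signed-in user, preserving any existing query parameters."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query)
    params.append(("isSignedIn", principal_type))
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))
```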

Add a filter to handle the isSignedIn parameter on incoming traffic

Apps that use SSO should incorporate an isSignedIn filter, which behaves as follows:

for all requests with the isSignedIn query parameter
   if the parameter's value == the principal type for this app
      // the user is likely signed in
      force the user to authenticate before visiting the page (with the isSignedIn parameter stripped off)
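The pseudocode above might be realized like this, framework-agnostically. The principal-type value and function names are assumptions; a real filter would sit in your web framework's middleware layer:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

PRINCIPAL_TYPE = "candidate"  # this app's principal type (assumed value)


def handle_is_signed_in(url):
    """Decide whether an incoming request should be forced through
    authentication. Returns (force_auth, url), where url has the
    isSignedIn parameter stripped off when the hint matched."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query)
    hints = [v for k, v in params if k == "isSignedIn"]
    if PRINCIPAL_TYPE in hints:
        # The user is likely already signed in via SSO: authenticate them,
        # then continue to the page without the hint parameter.
        remaining = [(k, v) for k, v in params if k != "isSignedIn"]
        stripped = urlunsplit((scheme, netloc, path, urlencode(remaining), fragment))
        return True, stripped
    return False, url
```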


Replication

Replication is a pattern that allows one or more replication secondary apps to maintain a real-time copy of the master data held on a single replication primary app.

For example:

  • the replication primary app might be an ATS that produces an API like GET /jobs
  • the replication secondary app might be a job board that wants to maintain its own local database of jobs, kept in synch with the jobs held on the ATS

You shouldn't use replication unless you need to. In the example above, the job board might be able to simply call GET /jobs each time a candidate visits the site. However, sometimes replication is required.

The TAS core itself is unaware of the concept of replication. Replication is simply tenant API calls between apps as far as TAS is concerned. Your app can approach replication in any way it wants - there is no need for it to follow this pattern. However if it does, your app will be more likely to interoperate with other apps.


The replication pattern described here:

  • Is real-time (non-polling)
  • Supports partial replication - the secondary can choose to maintain a subset of the instances of the master records (e.g., only jobs that are currently open) or a subset of the properties on the instances (e.g. only the job's title, and not its description or attached documents).
  • Relies on a bulk load phase, where immediately after install, the secondary gradually loads up all of the master data that already exists at the primary. The pattern is best suited to a single-threaded implementation.


Following the standard replication pattern, the ATS and job board apps in the example above would work together as below:

  • A tenant has already installed the ATS app (the primary)
  • The tenant installs the job board app (the secondary)
  • The job board app starts its bulk loading phase
  • The job board repeatedly calls GET /jobs/{} until it has loaded all of the existing jobs from the ATS
  • Since the bulk load phase might take hours, or days (not likely for jobs - but possible for other resources), the job board keeps track of the most recently loaded job in a persistent store (such as the repstate app), so that it can pick up and continue the bulk load phase if the process is disrupted or the app itself is restarted
  • Eventually the job board's bulk load phase is complete
  • The job board now listens for incoming alerts about changes to the master set of jobs
  • At some point, a new job is created inside the ATS
  • The ATS sends a "ping" to the job board to alert it of the new data
  • Unless the ping is for a delete, the job board calls GET /jobs/{} to fetch the new/updated data, and updates its local database

Example API flow

Below is a detailed message sequence diagram showing the API flows between replication primary and secondary and TAS.

In this example, the replication primary is an ATS holding the master set of candidates, and the replication secondary is a new candidate search app called "ferret".

This example also shows the use of the repstate app, which acts as a persistent store for the secondary's state during the bulk load phase.

participant ats
participant ferret
participant repstate
participant TAS
note left of TAS: tenant acme installs the ferret app
TAS->ferret: POST /tenants/acme
note left of ferret: ferret knows it is a replication\nsecondary for candidates,\ninitializes the replication state store
ferret->repstate: POST /repstates/ferret/tas/%2Fcandidates\n{"loading":true,"lastLoaded":null}
note left of ferret: ferret asks for id of first candidate
ferret->ats: GET /candidates?$orderby=id&$select=id&$top=1
note right of ats: ats says 10234
note left of ferret: ferret calls its own onPing() method\nwhich fetches full details from the primary
ferret->ats: GET /candidates/10234?$select=resume
note right of ats: ats passes back resume
ferret->repstate: POST /repstates/ferret/tas/%2Fcandidates\n{"loading":true,"lastLoaded":10234}
note left of ferret: ferret asks for id of next candidate
ferret->ats: GET /candidates?$orderby=id&$select=id&$top=1&$filter=id gt 10234
note right of ats: ats says 10235
ferret->ats: GET /candidates/10235?$select=resume
note right of ats: ats passes back resume
ferret->repstate: POST /repstates/ferret/tas/%2Fcandidates\n{"loading":true,"lastLoaded":10235}
note right of ats: ..time passes..
note right of ats: a database trigger fires in the ats,\nreflecting that a new candidate has been created
note right of ats: this is the first time that ats\nhas tried to broadcast\nto this API for tenant acme,\nso asks TAS who produces that API
ats->TAS: GET /tenants/acme/routes/ats/tas/%2Fm%2Fcandidates%2F%7BcandidateID%7D%2FdeltaPings
note left of TAS: TAS says:\n"items": [{"producer": "ferret",\n"location": ""},..]
note right of ats: ats now sends message to say\nthat a new candidate has been created
ats->ferret: POST /m/candidates/29046/deltaPings\n{"operation": "insert"}
note right of ferret: to decide whether to ignore the ping,\nferret needs to know the current\nstate of the replication secondary.\nIt could have this in memory or\nit may be easier for the API handler\nto fetch it from the replication\nstatus store.
ferret->repstate: GET /repstates/ferret/tas/%2Fcandidates\n{"loading":true,"lastLoaded":10235}
note left of ferret: ferret onPing():\nif (!loading || id <= lastLoaded) absorbPing();\notherwise ignores the ping
note left of ferret: ferret asks for id of next candidate
ferret->ats: GET /candidates?$orderby=id&$select=id&$top=1&$filter=id gt 10235
note right of ats: ats says 404 error
note left of ferret: replication bulk load is complete
ferret->repstate: POST /repstates/ferret/tas/%2Fcandidates\n{"loading":false,"lastLoaded":null}

APIs required for replication

As seen above, for a given type of master data (e.g. jobs), replication requires the following APIs:

  • Produced by the primary and consumed by the secondary
    • Get a master record by its "primary key", e.g. GET /jobs/{jobID}
    • Get the master record with the first/lowest primary key value, e.g. GET /jobs?$orderby=id&$top=1
    • Get the master record with the next primary key value, e.g.: GET /jobs?$filter=id gt 100&$orderby=id&$top=1
  • Produced by the secondary and consumed by the primary
    • Alert secondaries of a change to a master record, e.g. POST /jobs/{jobID}/deltaPings.
      (The primary must queue pings in the event of any secondary being unavailable, until it becomes available again - it might use something like a broadcast service to achieve this).

Primary key properties

The primary key used by the primary must:
  • be immutable
  • be of either integer or alphanumeric type
  • use consistent sorting [specify], so that the secondary can do key comparisons client-side