
API for checklist evaluation.

API function overview

The checklist evaluation API is intended to provide access to the minim-based evaluation of Research Objects, used to test for completeness, executability, repeatability and other desired features. The functionality provided is based on the ro-manager evaluate checklist option:


  • <dir> is the directory containing the RO to be evaluated
  • <level> indicates the level of information detail to be returned
  • <minim> is a URI reference for a minimum information model resource from which the checklist definition is obtained
  • <target> is a target resource with respect to which the evaluation is performed; the default <target> is the RO itself, but a component within the RO may be selected.
  • <purpose> is a keyword indicating the purpose for which the RO or <target> is to be evaluated.

For example:

might evaluate the RO at /workspace/myro using the minim model in file /workspace/minim.rdf.

The Web API is intended to provide remote access to the above functionality using simple HTTP requests.

Research Objects and other data are provided as web resources, and indicated in the API using their URIs.

API usage

Suppose we have:

Note: there is an example of a simple minim model at

The checklist evaluation would then be invoked in a sequence of two HTTP operations:

  1. Client retrieves service document:
  2. Client parses the service document, extracts the URI template for the checklist evaluation service and assembles URI for the desired evaluation result (cf. RFC6570), and issues a second HTTP GET request:

The result from the second request is the checklist evaluation result. The URI shown above has been split over several lines for readability - the actual HTTP request must present it without whitespace. The optional target URI parameter has been omitted in this example on the assumption that the target is the Research Object itself.
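The template-expansion step can be sketched as follows. This is a minimal illustration of RFC 6570 form-style query expansion, not a full implementation; the template string and the variable names (RO, minim, purpose, target) are assumed from the parameter descriptions above, and a real client would normally use a URI-template library.

```python
from urllib.parse import quote

def expand_query_template(template, variables):
    """Minimal RFC 6570 form-style query expansion ({?var1,var2,...}).

    Handles only the {?...} operator, which is all this sketch needs.
    """
    start = template.index("{?")
    end = template.index("}", start)
    names = template[start + 2:end].split(",")
    # Per RFC 6570, variables with no supplied value are omitted
    # (e.g. the optional target parameter).
    pairs = ["%s=%s" % (n, quote(str(variables[n]), safe=""))
             for n in names if variables.get(n) is not None]
    query = ("?" + "&".join(pairs)) if pairs else ""
    return template[:start] + query + template[end + 1:]

# Hypothetical template of the kind a service description might carry:
template = "/evaluate/checklist{?RO,minim,purpose,target}"
uri = expand_query_template(template, {
    "RO": "http://example.org/ro/myro/",
    "minim": "http://example.org/ro/myro/minim.rdf",
    "purpose": "runnable",
    "target": None,   # omitted: the target defaults to the RO itself
})
```

Note how the reserved characters in the RO and minim URIs are percent-encoded, which is why the assembled URI must be presented without added whitespace.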

See also:

Link relations


This relation is generally used in the service description document.

It indicates a relation between a service description and a URI template for RO evaluation results using the described service. The URI template is used to construct a service result URI by:

  1. applying the URI template expansion procedures with caller-supplied RO URI, minim URI, purpose and target URIs, and
  2. resolving the resulting URI-reference to an absolute URI using normal URI resolution rules (e.g. typically, using the service document URI as a base URI)
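The second step, resolving the expanded URI reference against the service document URI, is ordinary URI resolution; in Python it might look like this (the URIs shown are hypothetical):

```python
from urllib.parse import urljoin

# The service document URI acts as the base URI (hypothetical value):
service_doc_uri = "http://example.org/rovalidate/"

# A relative URI reference of the kind produced by template expansion:
result_ref = "evaluate/checklist?RO=http%3A%2F%2Fexample.org%2Fro%2Fmyro%2F&purpose=runnable"

# Normal URI resolution rules (RFC 3986) yield the absolute result URI:
result_uri = urljoin(service_doc_uri, result_ref)
```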

See also:

HTTP methods

The service description is obtained in response to an HTTP GET to a checklist evaluation service URI.

The checklist evaluation service responds to an HTTP GET with the results of a checklist evaluation, using the URI defined by expanding the template provided by the service description.

Resources and formats

Service description

The checklist evaluation service description is an RDF file that contains URI templates for accessing RO related services, including checklist evaluation. The RDF syntax used may be content negotiated. In the absence of content negotiation, RDF/XML should be returned.
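A client wishing to negotiate the RDF syntax might construct its GET request along these lines. The service URI and the media types offered are illustrative; the request is only built here, not sent:

```python
import urllib.request

# Hypothetical service URI; the Accept header prefers Turtle but
# accepts RDF/XML, the default in the absence of negotiation.
req = urllib.request.Request(
    "http://example.org/rovalidate/",
    headers={"Accept": "text/turtle, application/rdf+xml;q=0.9"},
    method="GET",
)
# urllib.request.urlopen(req) would then return the service description.
```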


Research Object


Minim description

NOTE: this section describes the original Minim model structure, which is still fully supported by the software. However, the capabilities described here have been refactored and extended by a revised Minim model, which is described more fully in separate documents.

The MINIM description contains 3 levels of description:

  1. minim:Checklist associates a target and purpose (e.g. runnable RO) to a minim:Model
    to be evaluated.
  2. minim:Model encodes the checklist (list of requirements) to be evaluated.
    (There is provision for MUST / SHOULD / MAY requirements in a checklist to
    cater for limited variation in levels of conformance.)
  3. minim:Requirement is a single requirement (checklist item), which is associated with a rule for evaluating whether or not it is satisfied. Each rule makes reference to a "checklist primitive" function. Additional capabilities can be added (in due course) by expanding the set of available checklist primitives.

These 3 levels are called out in the examples that follow.
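As a rough illustration, the three levels might be modelled as the following data structures. The class and field names here are illustrative only, not the actual Minim vocabulary terms:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:          # cf. minim:Requirement: one checklist item
    rule: str               # name of the checklist primitive invoked
    params: dict            # parameters passed to the primitive
    show_pass: str = ""     # report string when satisfied
    show_fail: str = ""     # report string when not satisfied

@dataclass
class Model:                # cf. minim:Model: the checklist itself
    must: List[Requirement] = field(default_factory=list)
    should: List[Requirement] = field(default_factory=list)
    may: List[Requirement] = field(default_factory=list)

@dataclass
class Checklist:            # cf. minim:Checklist: (target, purpose) -> Model
    target: str
    purpose: str
    model: Model

# A checklist mapping (RO itself, "runnable") to a one-item model:
checklist = Checklist(
    target=".", purpose="runnable",
    model=Model(must=[Requirement(rule="ContentMatch",
                                  params={"exists": "?wf rdf:type wfdesc:Workflow"})]))
```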


Minim Checklist

The "checklist" (previously called "constraint") describes a mapping from target and purpose values to a particular minim:Model to be used as the basis of an evaluation. Relative URI references are resolved relative to the location of the Minim resource. In this example, the Minim resource is taken from the root directory of an RO, so "." refers to the RO itself.

Minim Model (check item list)

A minim model represents a list of check items to be evaluated. It enumerates a number of requirements, each declared at a level of MUST, SHOULD or MAY, which together determine what must be satisfied for the model as a whole to be considered satisfied. This follows a structure for minimum information models proposed by Matthew Gamble.
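One plausible reading of the MUST/SHOULD/MAY structure is sketched below; the grading names used ("fully"/"nominally"/"minimally" satisfied) are an assumption about the intended semantics, not necessarily the terms the Minim vocabulary itself uses:

```python
def satisfaction_level(results):
    """Summarize checklist results as an overall satisfaction level.

    `results` maps each of "must", "should", "may" to a list of booleans,
    one per requirement at that level.  The grading below is one
    plausible reading of the MUST/SHOULD/MAY structure described above.
    """
    if not all(results.get("must", [])):
        return "not satisfied"          # some MUST requirement failed
    if not all(results.get("should", [])):
        return "minimally satisfied"    # all MUSTs pass, a SHOULD failed
    if not all(results.get("may", [])):
        return "nominally satisfied"    # only MAY requirements failed
    return "fully satisfied"
```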

Minim Requirements

Minim Requirements are evaluated using rules, which in turn invoke checklist evaluation primitives with appropriate parameters. This structure allows a relatively wide range of checklist items to be evaluated based on a relatively small number of primitive tests. The examples show the various primitives.

Requirement for an RO to contain a workflow primitive

The minim:ContentMatchRequirementRule is driven by a SPARQL query probe which is evaluated over a merge of all the RO annotations (including the RO manifest). In this case, it simply tests that the query can be satisfied. The minim:showpass and minim:showfail properties indicate strings that are used for reporting the status of the checklist evaluation.
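The "query can be satisfied" test can be illustrated with a toy stand-in for the SPARQL probe; a real implementation would run an actual SPARQL query (e.g. with rdflib) over the merge of the manifest and all annotation bodies, and the annotation data shown is invented:

```python
def probe_satisfied(triples, pattern):
    """Toy stand-in for a SPARQL probe over merged RO annotations.

    `triples` is a set of (s, p, o) tuples; `pattern` uses None as a
    wildcard.  Returns True when any triple matches the pattern.
    """
    def matches(t):
        return all(p is None or p == v for p, v in zip(pattern, t))
    return any(matches(t) for t in triples)

annotations = {
    ("wf1", "rdf:type", "wfdesc:Workflow"),
    ("wf1", "rdfs:label", "Example workflow"),
}
# "Does the RO contain a workflow?" ~ ASK { ?wf a wfdesc:Workflow }
has_workflow = probe_satisfied(annotations, (None, "rdf:type", "wfdesc:Workflow"))
```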

Requirement for workflow output files to be present

This use of a minim:ContentMatchRequirementRule uses the SPARQL query as a probe to find all workflow output files mentioned according to the wfdesc description vocabulary, and for each of these tests that the indicated resource is indeed aggregated by the RO (a weak notion of being "present" in the RO). The URI of the required aggregated resource is constructed by expanding a URI template with query result values. The diagnostic messages can interpolate query result values, as in the case of minim:showfail in this example.
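The per-result pattern (expand a URI template with query result values, test aggregation, interpolate values into the diagnostic) might be sketched as follows; plain string substitution stands in for RFC 6570 expansion, and all names and URIs are illustrative:

```python
from string import Template  # simple $var substitution, not RFC 6570

def check_outputs_aggregated(aggregated, query_results, uri_template):
    """For each query result binding, build the expected resource URI
    from a template and test that it is aggregated by the RO."""
    failures = []
    for bindings in query_results:
        uri = Template(uri_template).substitute(bindings)
        if uri not in aggregated:
            # Diagnostic messages may interpolate query result values:
            failures.append("Output %(outfile)s not aggregated" % bindings)
    return failures

aggregated = {"ro:/results/out1.dat"}       # resources the RO aggregates
query_results = [{"outfile": "out1.dat"}, {"outfile": "out2.dat"}]
failures = check_outputs_aggregated(aggregated, query_results,
                                    "ro:/results/$outfile")
```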

Liveness testing

To test for liveness of a resource, the evaluator will need to attempt to access the resource. If it is a local file, a file existence check should suffice. If it is a web resource, then a success response to an HTTP HEAD request is expected.
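A liveness probe along these lines might look like the following sketch; only file and http(s) URIs are handled, and the HTTP branch is an assumption about how a HEAD-based check would be coded, not the evaluator's actual implementation:

```python
import os
import tempfile
import urllib.request
from urllib.parse import urlparse

def is_live(uri, timeout=10):
    """Liveness probe: file-existence check for local files, HTTP HEAD
    for web resources.  Other URI schemes are treated as not live."""
    parsed = urlparse(uri)
    if parsed.scheme in ("", "file"):
        return os.path.exists(parsed.path or uri)
    if parsed.scheme in ("http", "https"):
        req = urllib.request.Request(uri, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return 200 <= resp.status < 300   # success response
        except OSError:                            # includes URLError
            return False
    return False

# Local-file branch, demonstrated with a temporary file:
with tempfile.NamedTemporaryFile() as f:
    local_ok = is_live(f.name)
missing_ok = is_live("/no/such/file/anywhere")
```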

This varies from the simple aggregation test in that the minim:aggregatesTemplate property is replaced by a minim:isLiveTemplate property.

Software environment testing

A minim:SoftwareEnvironmentRule tests to see if a particular piece of software is available by issuing a command and checking the response against a supplied regular expression. (This test is primarily intended for local use within RO-manager, and may be of limited use on the evaluation service as the command is issued on the host running the evaluation service, not on the host requesting the service.)

Proposed Minim enhancements

In viewing the proposals for liveness and integrity testing, compare with the current framework used for testing that an RO aggregates a specified resource; the following rule example tests that all workflow inputs are aggregated by the evaluated RO:

Integrity testing

In this context, "integrity testing" means checking that a resource content matches some expected (e.g. previously calculated) value.

Integrity testing builds upon liveness testing by adding a reference to a resource with which the tested resource may be compared, by way of the minim:contentMatchTemplate property. Note that this property says nothing about using a hash or checksum value; the plan is that it is simply the URI of a resource with which to compare. That URI may, however, be an ni: URI, in which case, instead of dereferencing the URI and performing a byte-by-byte comparison, the evaluator calculates the appropriate hash function over the resource being tested and compares the result with the value encoded in the ni: URI. To check for a known constant value, a data: URI can be used.
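The ni: URI comparison described above might be sketched as follows. The ni:///&lt;alg&gt;;&lt;base64url-digest&gt; form assumed here follows the named-information URI scheme, and the helper names are illustrative:

```python
import base64
import hashlib

def ni_uri(data, alg="sha-256"):
    """Construct an ni: (named information) URI for some content:
    ni:///<alg>;<base64url digest, unpadded>."""
    digest = hashlib.new(alg.replace("-", ""), data).digest()
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return "ni:///%s;%s" % (alg, b64)

def matches_ni(data, uri):
    """Check resource content against the hash encoded in an ni: URI,
    instead of dereferencing the URI and comparing byte-by-byte."""
    alg = uri.split("/")[-1].split(";")[0]   # e.g. "sha-256"
    return ni_uri(data, alg) == uri

reference = ni_uri(b"workflow output contents")
ok = matches_ni(b"workflow output contents", reference)
bad = matches_ni(b"corrupted contents", reference)
```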

This structure assumes that appropriate annotations have been created to allow a SPARQL query to discover the URI of an appropriate resource with which to match the content. It may turn out that further queries against a separate RO are needed, in which case the above structure may need to be expanded. In the example below, the annotation is a roeval:contentMatch property directly about the resource being integrity-checked.

Resource annotation with checksum values

Integrity checking within a single RO may be achieved by adding checksum annotations to the resources that will subsequently be tested. A new RO-manager command could be provided to facilitate creation of such annotations; e.g.

which would calculate and output an appropriate ni: URI, and could be used as part of an RO command thus:

This flexible approach would allow a checksum to be calculated from one RO and applied as a reference for checking to another. But this combination of commands might be difficult to do in some systems (e.g. Windows), so a less flexible form of command might be offered:

where resource1 would be annotated with the content hash of resource2, using the indicated attribute identifier.

Comparing with a reference object or resource

A proposed requirement is that contents of one RO can be compared with the content of another calibration (or reference) RO.

This might be achieved by adding an annotation to an RO referring to its corresponding calibration RO. It is expected that this information might be provided by RO evolution information, but in the absence of such annotations a new, specialized annotation might be needed for this.

Operationally, the comparison between ROs might be invoked in any of the following ways:

  • One RO contains links to its corresponding calibration RO. This would appear to be quite natural if one RO is derived from another: its evolution trace would contain a reference to the RO from which it is derived, but this would be useful only when that is the RO with which one wants to perform the comparison. It would also limit the scope of ROs that could be compared to those that are explicitly related to each other. Thus, a researcher finding two independent ROs that claim to calculate a common value would not be able to simply compare them without first (copying and) modifying at least one of them.
  • References to the calibration RO are carried in the Minim description. This would limit the range of ROs with which such a Minim description might be used, and would make it harder or impossible to create truly generic checklists that compare different ROs.
  • A new RO might contain references to two other ROs to be compared.
  • The URIs of the ROs to be compared could be provided in the service invocation.

@@TODO - define how to handle this. We need a clearer view of the use-cases to best understand which approach(es) would be most appropriate.

Constraint references to RO and target resources

Currently, the mechanisms used for defining minim:Constraint values contain references to the RO, which makes it difficult to use the same Minim resource with multiple ROs.

Current structure:

Outline of change proposal:

  • Do not use the minim:hasConstraint property when selecting potential constraints; instead, query the Minim description for all resources of type minim:Constraint.
  • As an alternative to minim:onResource for selecting the target resource, also recognize a minim:onResourceTemplate parameter whose value is a string containing a URI template. The URI template is expanded with the variable RoUri bound to the URI of the RO being evaluated, and TargetUri bound to the URI reference of the selected target (which defaults to the RO URI if no specific target is selected). The URI reference resulting from the expansion may be resolved to a full URI using the RO URI as a base URI.
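The proposed minim:onResourceTemplate handling might be sketched like this; simple {+Var} substitution stands in for full RFC 6570 expansion, and the template values are hypothetical:

```python
from urllib.parse import urljoin

def select_target(on_resource_template, ro_uri, target_uri=None):
    """Expand a minim:onResourceTemplate value and resolve the result
    against the RO URI as base."""
    if target_uri is None:
        target_uri = ro_uri              # default target is the RO itself
    expanded = (on_resource_template
                .replace("{+RoUri}", ro_uri)        # '+': no pct-encoding
                .replace("{+TargetUri}", target_uri))
    return urljoin(ro_uri, expanded)     # resolve against the RO URI

ro = "http://example.org/ro/myro/"
# e.g. select a fixed component relative to the RO:
target = select_target("{+RoUri}data/results.csv", ro)
```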

Thus, the above example might become:

A further development might allow an additional SPARQL query to be used, executed against a merge of the RO annotations and resulting in additional variable bindings that can be referenced by the target resource template. Such a query might be used, for example, to locate a workflow template within the RO that uses a particular global database, which in turn might be used to test that the corresponding provenance record indicates use of an up-to-date version of that database.

Checklist evaluation results

Evaluation results are returned as an RDF or JSON document containing a fully detailed description of the results of a checklist evaluation. RDF results are reported using the Minim vocabulary and requirement URIs, together with additional terms to include diagnostic and summary information.

Example of RDF (Turtle syntax here):

These are mainly terms from the current MINIM vocabulary, with a few new ones added to provide more detailed diagnostic information about the evaluation result:

  • minim:testedConstraint is the URI of the constraint
  • minim:testedTarget is the URI of the target of the constraint
  • minim:testedPurpose is the purpose for which the evaluation was performed
  • minim:missingMust refers to a minim:hasMustRequirement rule for the model used that is not satisfied by the RO, with corresponding variable bindings where appropriate
  • minim:missingShould refers to a minim:hasShouldRequirement rule for the model used that is not satisfied by the RO, with corresponding variable bindings where appropriate
  • minim:missingMay refers to a minim:hasMayRequirement rule for the model used that is not satisfied by the RO, with corresponding variable bindings where appropriate
  • minim:satisfied refers to a rule that is satisfied by the RO, with corresponding variable bindings where appropriate
  • minim:tryRule has as its object the URI of a rule in the MINIM model.

URIs that appear as object values above are references to elements of the MINIM model tested, and are defined in the Minim description resource. It is proposed that the result graph should also include the original Minim description, so that all information needed for checklist reporting is obtainable from the result returned.

@@TODO JSON format for results TBD; proposed to use something like a JSON-LD rendering of the RDF.

Cache considerations

Effective (and correct) caching of checklist evaluation results is desirable, as this could prevent unnecessary recalculation of results that have already been calculated. But care is needed to avoid incorrect caching, which could result in incorrect results being returned.

The result of a checklist evaluation should be cacheable, subject to cacheability of the evaluation service document, the Minim description and the RO that is evaluated. The service should return cache-control headers on the evaluation result that require re-evaluation when the criteria for caching the various resources used are no longer satisfied (e.g. the Cache-Control: max-age value returned should be no greater than the minimum of the values associated with all resources used).
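The max-age rule above amounts to taking the minimum over the dependency resources; a trivial sketch:

```python
def result_max_age(dependency_max_ages):
    """Max-age for an evaluation result: no greater than the smallest
    max-age among the resources used (service document, Minim
    description, evaluated RO)."""
    return min(dependency_max_ages)

# e.g. service doc: 3600s, Minim description: 600s, RO: 86400s
header = "Cache-Control: max-age=%d" % result_max_age([3600, 600, 86400])
```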

Note also any Vary: headers or other Cache-Control: values that are returned: if the evaluation result depends on client-supplied header field values, the checklist evaluation response should indicate this.

@@Any other potential gotchas here?

See also:

Security considerations

Checklist evaluation is a read-only function, so there is no obvious risk of unintended data modification.

Checklist evaluation results may expose information about the content of a Research Object. As a general rule, the results of a checklist evaluation should be provided only to agents who themselves have permission to access the RO contents. Mechanisms for implementing such access control are not described here.

See also: @@ref Wf4Ever access control mechanisms


See also:
