
This is the result of one of the discussions held during the meeting on 11/04/2012 at UPM, with Dave, José Manuel, Esteban, Aleix, Dani, Rafa and Oscar. 

Jun suggests specifying very clearly who the intended users of this classification are.

5 Stars

Pros and cons of each level, with examples:

  • 0 stars: pictures, scripts...; runs (SCUFL) on your own PC; not cited/used.
  • 1 star: available/accessible in an explicit manner (including a license); contains at least a workflow. Examples: SCUFL, Galaxy.
  • 2 stars: workflow plus other things; discoverable (has a URI), i.e. citable. Example: myExperiment pack (zip).
  • 3 stars: follows standards to aggregate and publish. Examples: Wf4Ever RO, myExperiment pack (ORE).
  • 4 stars: self-describing/repurposable; intermediate results/replayable; includes example runs. Examples: golden exemplars, beautiful ROs.
  • 5 stars: ...and social: cited/used by others; reproducible (can be rerun in your lab).
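As a rough illustration of how the star levels above could drive something like a portal "status button", here is a minimal sketch. The criterion names are my own labels, not terms from the meeting, and the levels are treated as cumulative:

```python
# Hypothetical sketch: an RO's star rating is the highest level whose
# criterion, and every lower level's criterion, the RO satisfies.
# Criterion names below are invented labels for the levels discussed.
LEVELS = [
    (1, "available"),         # accessible in an explicit manner, with a license
    (2, "discoverable"),      # has a URI, is citable
    (3, "follows_standards"), # uses standards to aggregate and publish
    (4, "self_describing"),   # repurposable, replayable, with example runs
    (5, "social"),            # cited/used by others, reproducible elsewhere
]

def star_rating(satisfied):
    """Return the star level for a set of satisfied criterion names."""
    stars = 0
    for level, criterion in LEVELS:
        if criterion not in satisfied:
            break  # criteria are cumulative: stop at the first unmet one
        stars = level
    return stars
```

For example, `star_rating({"available", "discoverable"})` yields 2, while an RO that is discoverable but not explicitly available stays at 0.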

There was also a thread opened by Graham on a related topic.

I'm thinking this could power the "status button" that Marco has mentioned might appear in the Wf4Ever portal.  It's also prompted in part by discussions with Jun to build a story around our RO evaluation work.

1. Completeness.  All required inputs are identified in the RO manifest.

2. Liveness.  All required inputs are available: either they are actually present in the RO or they remain accessible from external sources.  The execution environment must also provide the necessary facilities.
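Items 1 and 2 lend themselves to a mechanical check. A minimal sketch (the function and field names are assumptions, not the actual RO manifest vocabulary), with the accessibility probe injected so it can be a local-file test or an HTTP request in practice:

```python
def input_status(required_inputs, aggregated, is_accessible):
    """Classify each required input of an RO.

    required_inputs: URIs/paths the workflow needs.
    aggregated:      resources actually bundled in the RO.
    is_accessible:   caller-supplied probe for external resources
                     (e.g. an HTTP HEAD request); injected here so the
                     sketch stays testable offline.
    Returns a dict mapping each input to "bundled", "external" or "missing".
    """
    status = {}
    for uri in required_inputs:
        if uri in aggregated:
            status[uri] = "bundled"
        elif is_accessible(uri):
            status[uri] = "external"
        else:
            status[uri] = "missing"
    return status
```

Completeness then amounts to "no input is absent from the manifest", and liveness to "no input is classified as missing".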

3. Specification.  Jose and Aleix wrote: "We define RO stability as the quality of an RO to remain functionally unchanged with respect to its specification ...".  To assess this, we need a specification, or the whole enterprise is meaningless.  In the first instance, such a specification may be a human researcher's statement of what the RO achieves.  But it would also be good to develop some more concretely testable form of specification; e.g. properties of inputs, related properties of outputs, a statement of requirements for manual review with possible outcomes, ... (The "specification" might also link to the notion of a "purpose", something I've incorporated into my RO completeness evaluation implementation.)

4. Execution.  Actually executing an RO and generating the expected outputs. Capture information that can be verified and then maybe fed back into a specification (e.g. using a provenance log to infer a minimum information model).
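One way to capture verifiable execution information is to record digests of the inputs consumed and outputs produced. A sketch, where `run_workflow` is a placeholder callable standing in for a real workflow engine invocation:

```python
import hashlib

def record_run(inputs, run_workflow):
    """Execute a workflow and capture a trace of input/output digests
    that can later be compared against a re-run or fed back into a
    specification. `run_workflow` stands in for a real engine call."""
    outputs = run_workflow(inputs)
    digest = lambda v: hashlib.sha256(v.encode("utf-8")).hexdigest()
    trace = {
        "inputs": {name: digest(v) for name, v in inputs.items()},
        "outputs": {name: digest(v) for name, v in outputs.items()},
    }
    return outputs, trace
```

A provenance log would carry much richer structure than this, but even a flat digest trace is enough to detect that a later execution diverged.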

5. Correctness.  Checks that results of RO execution do actually satisfy the specification.  These may range from manual inspection and checking to mechanized testing procedures.
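At the mechanized end of that range, items 3 and 5 could meet in a specification expressed as predicates over the artifacts of a run. The clause names and predicates below are invented for illustration only:

```python
# A hypothetical machine-testable specification: named predicates over
# the artifacts an RO execution is expected to produce. The artifact
# names ("alignment.out", "log") are made up for this example.
spec = {
    "non_empty_alignment": lambda arts: len(arts.get("alignment.out", "")) > 0,
    "fasta_input_echoed":  lambda arts: arts.get("log", "").count("read") >= 1,
}

def correctness_failures(spec, artifacts):
    """Return the names of specification clauses the run's artifacts violate."""
    return [name for name, pred in spec.items() if not pred(artifacts)]
```

An empty failure list would mean the run satisfies every machine-checkable clause; clauses requiring manual review would have to be reported separately.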
