
Ideas for the showcase

The idea of this showcase is to create different measures that could be useful to the user, based on stability, the analysis of quality over time, and the roevo trace.

Apart from that, we want to add annotations to the changes captured by the roevo trace that was created in Showcase 66a. These annotations are supposed to be made by the user and record information such as "I want to modify X because of Z" or "I've deleted Y because of W".

The last target of the showcase is to create another example of a LiveRO, with some snapshots, provenance of the workflow execution and a roevo trace, using one of the workflows available in the provenance corpus (Showcase 78).

Ideas for analytics

The roevo information lets us create metrics such as:

  • Quantity of changes (separated by kind).
  • Impact of changes (as a percentage).
  • Frequency of changes or snapshots.
  • Which elements have had more changes than others.
  • Who have been the best/worst users.
  • Who have been the most/least active users.
  • (...open to more ideas...)

Analysis of the roevo trace (without including quality)

We gather different information from the roevo trace in order to provide useful information to the user: statistics related to the number of changes, the kinds of changes, the users, etc.

This analysis is implemented as a REST service that is called with a LiveRO as a parameter: (HOST)/rest/getAnalytics{?LiveRO}
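Following the URI template above, a client would expand the `LiveRO` parameter into a query string. A minimal sketch of building the request URL (the host and RO URI are hypothetical placeholders; the parameter spelling follows the template above):

```python
from urllib.parse import urlencode

def analytics_url(host, live_ro):
    # Expand (HOST)/rest/getAnalytics{?LiveRO} into a concrete request URL.
    # The LiveRO URI must be percent-encoded when passed as a query parameter.
    return host + "/rest/getAnalytics?" + urlencode({"LiveRO": live_ro})
```

The expanded URL can then be fetched with any HTTP client to obtain the analytics result.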

Example

  • liveRO: URI of the Live RO.
  • totals: list of total values.
    • totalChanges: the total number of additions + removals + modifications of the described resource.
    • totalAdditions: the total number of additions of the described resource.
    • totalModifications: the total number of modifications of the described resource.
    • totalRemovals: the total number of removals of the described resource.
  • relatives: list of percentages.
    • relativeAdditions: the percentage of additions of the described resource.
    • relativeModifications: the percentage of modifications of the described resource.
    • relativeRemovals: the percentage of removals of the described resource.
  • snapshots: list of snapshots performed during the lifecycle of the RO.
    • uri: the URI of the snapshot.
    • quantityImpact: the impact of the described element on the whole RO (as a percentage).
  • users: users that have performed snapshots during the lifecycle of the RO.
    • user: username.
    • numSnapshots: percentage of snapshots performed by a user.
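To illustrate how the relative percentages relate to the totals in the fields above, here is a minimal sketch (the function name is illustrative, not part of the service):

```python
def relatives(total_additions, total_modifications, total_removals):
    """Derive the relative* percentages from the total* counts."""
    total = total_additions + total_modifications + total_removals
    if total == 0:
        # No changes recorded yet: all percentages are zero.
        return {"relativeAdditions": 0.0,
                "relativeModifications": 0.0,
                "relativeRemovals": 0.0}
    return {
        "relativeAdditions": 100.0 * total_additions / total,
        "relativeModifications": 100.0 * total_modifications / total,
        "relativeRemovals": 100.0 * total_removals / total,
    }
```

For instance, 2 additions, 1 modification and 1 removal yield 50%, 25% and 25% respectively.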

An example result of this service, using the example created in Showcase 66a:

Analysis of the roevo trace (including quality)

This analysis is an extension of the stability analysis that was implemented and used in the previous showcase. This version additionally captures information about the way a user has influenced the RO, as well as statistics about good and bad snapshot creation.

This service provides needed information for the "clumsy user" detection.

This analysis is implemented as a REST service that is called with a LiveRO, the minim that is going to be used for the evaluation, and a specific purpose as parameters: (HOST)/rest/getStability{?RO,minim,purpose}

Example

In addition to the previous version of the stability service, now we have:

  • listUsers: list of user entries.
  • user: username.
  • probPositive: percentage of the user's snapshots that have improved the previous quality.
  • probNegative: percentage of the user's snapshots that have decreased the previous quality.
  • impactPositive: quality-increasing snapshots done by the user over the total number of snapshots done for the RO.
  • impactNegative: quality-decreasing snapshots done by the user over the total number of snapshots done for the RO.
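These per-user statistics can be derived from the sequence of snapshots and the quality change each one introduced. A sketch under the assumption that the input is a chronological list of (user, quality delta) pairs (the function and field names are illustrative):

```python
def user_stats(snapshots):
    """snapshots: list of (user, quality_delta) pairs in chronological order,
    where quality_delta is the change in checklist quality vs. the previous snapshot."""
    total = len(snapshots)
    counts = {}
    for user, delta in snapshots:
        c = counts.setdefault(user, {"done": 0, "pos": 0, "neg": 0})
        c["done"] += 1
        if delta > 0:
            c["pos"] += 1     # quality-increasing snapshot
        elif delta < 0:
            c["neg"] += 1     # quality-decreasing snapshot
    result = {}
    for user, c in counts.items():
        result[user] = {
            # Probabilities are relative to the user's own snapshots...
            "probPositive": 100.0 * c["pos"] / c["done"],
            "probNegative": 100.0 * c["neg"] / c["done"],
            # ...while impacts are relative to all snapshots of the RO.
            "impactPositive": 100.0 * c["pos"] / total,
            "impactNegative": 100.0 * c["neg"] / total,
        }
    return result
```

A user whose snapshots consistently decrease quality (high probNegative) is a candidate "clumsy user".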

An example result of this service, using the example created in Showcase 66a:

Providing explanation

Apart from the previous services, it is a good idea to provide the user with some explanation of the changes made in a snapshot, so that they can better understand why the quality may have increased or decreased compared to other snapshots.

This analysis is implemented as a REST service that is called with a snapshot as a parameter: (HOST)/rest/getStabilityInfo{?snapshot}

Example

  • ro: URI of the snapshot.
  • author: user that performed the snapshot.
  • date: date of the snapshot.
  • additions: list of additions in the snapshot over the previous version.
  • modifications: list of modifications in the snapshot over the previous version.
  • removals: list of removals in the snapshot over the previous version.
  • resource: resource that has been the subject of additions/modifications/removals.
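The additions/modifications/removals lists can be thought of as the difference between the contents of two consecutive snapshots. A minimal sketch, assuming each snapshot is represented as a mapping from resource name to a content checksum (both the representation and the function name are assumptions for illustration):

```python
def diff_snapshots(previous, current):
    """previous/current: dicts mapping resource name -> content checksum."""
    additions = [r for r in current if r not in previous]
    removals = [r for r in previous if r not in current]
    # A resource present in both snapshots with a different checksum was modified.
    modifications = [r for r in current if r in previous and current[r] != previous[r]]
    return {"additions": additions,
            "modifications": modifications,
            "removals": removals}
```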

Creating a Scenario for the DEMO

Introduction

We have planned to create a bigger scenario than the previous one in order to test the capabilities of the stability service and roevo. This scenario will have 9 different snapshots performed on a LiveRO. The snapshots will be performed by two different users and will include additions, removals and modifications, in order to get different results once they are evaluated by the checklist evaluation service.

There is an additional task related to this showcase: the creation, implementation and testing of a RO Evolution API. In that showcase the planned scenario will be turned into a real RO with its snapshots, creating the roevo trace that will be available through the portal endpoint.

The last goal of this showcase is to create a webpage where users will be able to test the stability service and analyse the roevo of a RO in an easy way. This web page will show the number of changes, the snapshots, the checklist evaluation, etc. in the form of lists and charts.

Scenario

Users: Mr. Clumsy and Mr. Curator.

List of snapshots with a comment done by the user:

1st - Curator

Added wf (Comment: I've added a wf for my new project)

2nd - Curator

Added inputs1, modified wf (Comment: These inputs can be useful with the new version of the wf)

3rd - Curator

Added intermediate, outputs and prov1 (Comment: Run the wf with the inputs)

4th - Clumsy

Removed intermediate, outputs and prov1; modified inputs (Comment: These inputs look better than the old ones)

5th - Curator

Added intermediate, outputs and prov2 (Comment: Run the wf with the new input)

6th - Clumsy

Removed inputs2, intermediate, outputs and prov2 (Comment: I want to start from the beginning again)

7th - Clumsy

Added inputs1 (Comment: Inputs added)

8th - Clumsy

Modified inputs1 to inputs2 (Comment: Those inputs were better)

9th - Curator

Added outputs, intermediate, prov (Comment: I've run the workflow again and stored its results)

Content of the RO in each snapshot:

1st - wf
2nd - wf + inputs1
3rd - wf + inputs1 + intermediate + outputs (prov)
4th - wf + inputs2
5th - wf + inputs2 + intermediate + outputs (prov 2)
6th - wf 
7th - wf + inputs1
8th - wf + inputs2
9th - wf + inputs2 + intermediate + outputs (prov 2)

Idea for evaluation

Three colours are used:

  • Red (when a "must" requirement is not satisfied)
  • Yellow (when a "should" requirement is not satisfied)
  • Green (when ALL "must" and "should" requirements are satisfied)
    [note that the "may" requirements have no impact on the colour representation, as they are optional requirements]

Value:

  • If a MUST requirement fails, the value is never going to be greater than 0.6, where:
    • the % of MUST requirements satisfied has a weight of 0.5
    • the % of SHOULD requirements satisfied has a weight of 0.07
    • the % of MAY requirements satisfied has a weight of 0.03
  • If a SHOULD requirement fails (and all the MUST requirements are satisfied), the value is going to be between 0.6 and 0.9, where:
    • MUST (all satisfied) have a value of 0.6
    • the % of SHOULD requirements satisfied has a weight of 0.25
    • the % of MAY requirements satisfied has a weight of 0.05
  • If all MUST and SHOULD requirements are satisfied, the value is going to be between 0.9 and 1.0, where:
    • MUST (all satisfied) have a value of 0.6
    • SHOULD (all satisfied) have a value of 0.3
    • the % of MAY requirements satisfied has a weight of 0.1
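The three cases above can be expressed directly as a scoring function. A minimal sketch (function names are illustrative; the ratios are the fractions of MUST/SHOULD/MAY requirements satisfied, in [0, 1]):

```python
def checklist_value(must_ratio, should_ratio, may_ratio):
    """Compute the evaluation value from the satisfaction ratios."""
    if must_ratio < 1.0:
        # Some MUST failed: weights 0.5 / 0.07 / 0.03, so the value tops out at 0.6.
        return 0.5 * must_ratio + 0.07 * should_ratio + 0.03 * may_ratio
    if should_ratio < 1.0:
        # All MUST satisfied, some SHOULD failed: value lies in [0.6, 0.9).
        return 0.6 + 0.25 * should_ratio + 0.05 * may_ratio
    # All MUST and SHOULD satisfied: value lies in [0.9, 1.0].
    return 0.9 + 0.1 * may_ratio

def colour(must_ratio, should_ratio):
    """Map the satisfaction ratios to the traffic-light colour."""
    if must_ratio < 1.0:
        return "red"
    if should_ratio < 1.0:
        return "yellow"
    return "green"   # MAY requirements do not affect the colour
```

For example, all MUST satisfied but only half the SHOULD (and no MAY) gives 0.6 + 0.25 × 0.5 = 0.725, shown in yellow.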

User Interface

This is a screenshot of the web user interface that we have created. For now it is running on localhost. It will be available in the sandbox soon.
