Status: sprint finished; API work continues


The main goal of the showcase is to make sure that all Wf4Ever services have proper APIs, in the sense of being RESTful and using Linked Data. This will be done by revisiting the existing or planned APIs, improving them where necessary, producing a quick mock-up implementation, and finally documenting them. The APIs will be investigated in the following order:
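In practice, "RESTful and using Linked Data" means each resource is identified by a URI and clients negotiate the representation they want via HTTP headers. As a minimal sketch of the pattern (the URI and media types here are illustrative, not actual RODL endpoints), a client might ask for an RDF serialization of a Research Object like this:

```python
from urllib.request import Request

# Hypothetical RO URI -- not an actual Wf4Ever/RODL endpoint.
RO_URI = "http://example.org/rodl/ROs/myExperimentPack42/"

def rdf_request(uri, rdf_format="application/rdf+xml"):
    """Build a content-negotiating GET request for a Linked Data resource.

    The server is expected to return the resource (e.g. an RO manifest)
    in the requested RDF serialization, or 406 if it cannot.
    """
    return Request(uri, headers={"Accept": rdf_format})

# Ask for Turtle rather than the default RDF/XML.
req = rdf_request(RO_URI, "text/turtle")
```

The point of the pattern is that the same URI can serve HTML to a browser and Turtle or RDF/XML to a machine client, which is the Linked Data behaviour the showcase aims to establish across all services.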

  1. Checklist API documentation update (Graham)
  2. Evolution API (clarify required functionality) (Piotr?, Raul?)
  3. Workflow transformation API, based on current taverna->RO service (Stian?)
  4. Annotations API to allow myExperiment to add annotations to an RO in RODL (Piotr?)
  5. User management API (Piotr?, Kevin?)
  6. Stability API (Aleix?)
  7. Recommender API (Rafa?)
  8. RO aggregation API (adding new contents, etc.) (Piotr?)

People involved: Graham, Piotr, Kevin, API authors.

Scheduled for: 18-29 June 2012 (confirmed at stand-up 2012-06-20)

There's a GitHub project for collecting API descriptions and examples -

Kick-off meeting agenda and notes

When: 2012-06-18, 14:30 BST / 15:30 CET

Where: Skype chat (with voice if needed)

Who: Graham, Piotr, Stian, Aleix, Rafa, Kevin (lurking)

  1. Set the scene - meeting goals (GK - 5 mins)
  2. Review of REST APIs (GK - 5 mins)
  3. How to document APIs? (GK intro; all discuss - 10 mins)
  4. How to implement APIs? (GK intro; all discuss - 15 mins)
  5. Review sprint plan (all - 10 mins)
  6. Next steps (all - 5 mins)
  7. Wrap up (all - 5 mins)

Summary of actions:

Skype chat log: 2012-06-18 - Skype chat log

Stand-ups and other discussions

Supporting and background materials

Sprint plan

NOTE: in what follows, "Describe API" includes describing or referencing the data models and formats that may be exchanged.

  1. Preparation: Articulate / summarize principles for REST APIs (GK) Done.
  2. Preparation: Prepare template for API description (GK) Done
  3. Preparation: Assemble links for supporting materials (GK) Done
  4. Review and discuss preparation materials (all) Done
  5. Discuss and plan framework for API mock-ups and test cases (all) Done
  6. Review priorities for tackling APIs (all) Done
  7. Draft API for checklists (GK, 2012-06-19) Done -
  8. Draft API for workflow transformation (Stian, 2012-06-19 @@TBC)
  9. Draft API for RO SRS; possible modularization of RO access, aggregation, annotations (Piotr, 2012-06-19) Done -
  10. Draft API for stability (Aleix, 2012-06-19) Done -
  11. Draft API for recommender (Rafa) - probably won't get done within this sprint
  12. Review Draft APIs in pairs or groups (all) Done - various group discussions; includes aggregation and annotations.
  13. Select 3 APIs for initial implementation (all) Done - focusing on just the RO API with aggregation and annotation.
  14. Implement RO SRS sample service (Piotr, 2012-06-29) In progress
  15. Implement RO SRS sample client (Graham, 2012-06-20) In progress
  16. Integration: sample clients working with sample services (@@TBD)

(This plan was revised on 2012-06-28 to reflect the implementation focus on the RO SRS API. The original plan was for 3 distinct APIs to be implemented, but that proved to be more than we could handle within the sprint.)

API description template


Links to API descriptions, implementations and test cases


Testing / sample framework for APIs

The testing/sample framework is intended to serve several purposes:

Within the context of Wf4Ever, it is clear that different services will be implemented in different programming environments (Java+Tomcat+Wicket, Python+Pylons and Ruby+Rails are all being used for different parts of the project). As such, it is probably unreasonable to require that all service test implementations or mock-ups be built in a common framework. Instead, services or service mock-ups are implemented by an appropriate developer in the environment of their choice. We may want to figure out a test environment that allows these diverse services to be run alongside each other.
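A service mock-up does not need to be elaborate to be useful for testing clients. As a sketch (the resource path and Turtle content are made up for illustration, and real mock-ups would follow the drafted API descriptions), a Python stdlib mock serving a single hard-coded RO manifest could look like this:

```python
import http.server
import threading

# Fixed Turtle body standing in for an RO manifest (content is illustrative).
MANIFEST_TTL = (
    b"<http://example.org/ROs/demo/> "
    b"a <http://purl.org/wf4ever/ro#ResearchObject> .\n"
)

class MockROHandler(http.server.BaseHTTPRequestHandler):
    """Serves one hard-coded resource; just enough to exercise a client."""

    def do_GET(self):
        if self.path == "/ROs/demo/":
            self.send_response(200)
            self.send_header("Content-Type", "text/turtle")
            self.end_headers()
            self.wfile.write(MANIFEST_TTL)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep test runs quiet

def start_mock(port=0):
    """Start the mock service on a background thread; returns (server, port).

    port=0 lets the OS pick a free port, so mocks written in different
    environments can run alongside each other on one test host.
    """
    server = http.server.HTTPServer(("127.0.0.1", port), MockROHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Because each mock binds an arbitrary free port, several such services (whatever language they are written in, as long as they speak HTTP) can run side by side on the same test machine.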

Client test implementations, assuming these are generally quite simple, may be written as test suites using any appropriate tools that run against the service implementations. Ideally this would be done with a view to continuous-integration deployment via Jenkins or a similar system.

Review and reflection

Report to project meeting on 2012-06-27

(The sprint was still in progress at the time of this report)

Showcase 68 (APIs) - sprint review report

The showcase started late, and is scheduled to run until this Friday. We intend to have a reflection/review meeting tomorrow, and use the rest of the week to tidy up loose ends.

We have drafts of several APIs in various states. Original priorities have not been followed closely: we are focusing on the RO SRS API with aggregation and annotation capabilities. I plan to attempt a client implementation today. We are a bit behind on implementation because discussions about the RO API took longer than I anticipated, but we have reached a number of very important points of consensus, which are relevant to the overall project architecture. Arguably the SRS is three APIs in one, as it covers aggregation and annotation. I'm hoping to progress ROEVO by the end of the week, second in priority to the RO SRS implementation.


(Nothing noted)

Sprint progress review

Progress summary:

Outstanding issues

Assuming that RO SRS implementation and RO EVO specification draft are completed by Friday:

Sprint process review

The complexity and subtlety of the work needed on the RO SRS API was underestimated. The time required was an order of magnitude or so greater than that needed to specify the checklist API we developed previously. But we've made good and valuable progress here, which will stand us in good stead as we move forward with project-wide integration. Members of the team have found the exercise useful and informative, and the principles and advantages of true REST APIs are now a little clearer to all.

The stand-ups worked pretty well overall. On a couple of occasions we overran our 10-minute time-slot, but I think genuine issues were being raised. It would probably help if we could be more disciplined about dealing only with communication and planning issues in the stand-up, pushing other matters outside that period so that people can choose whether or not to participate beyond the 10-minute slot. That could also help ensure everyone's full attention during the stand-up itself, as they could be confident it wouldn't drag out into the rest of their day.

There were mixed feelings about the extended discussions that took place on the open Skype channel. On one hand, we had some extremely valuable discussions and resolved many important issues that we probably could not have settled as effectively in scheduled meetings. On the other hand, we seemed to spend a lot of time just chatting - was this the best use of our time? Using text chat is slower than voice. There was a sense in the group that the brief exchanges were useful, but that some of the extended discussions would have been better handled as a scheduled voice conference.

It was felt that any future showcase of this nature would benefit from a more focused scope, and maybe fewer people involved: for example, showcasing a single API and its implementation, and perhaps also looking more at its context of use within the overall project.

Related links

(Nothing noted)