
This is a placeholder for the Frequently Asked Questions area of the website, which we will maintain on an ongoing basis. To start the process, I have taken the liberty of including some of Dave's comments from his meeting in Canberra.

  1. How big do you think Research Objects will be?
    1. Dave: somewhere between nano-pubs and the Web? :-) 
    2. Dave: One of the great flexibilities of ROs is that they don't have a standard size the way a paper does, but nor are they necessarily encapsulated granules: ROs can contain other ROs, and there is a "Web-particle duality", i.e. ROs can also be thought of like Web pages. I talked about boundary objects, i.e. ROs with a commonly interpretable core but other pieces attached for use by the particular groups/machines interpreting them. Ultimately the answer is we'll see, as a result of the co-evolution.
    3. José Manuel: Size is a matter of the purpose of the RO. For example, I would assume most ROs dealing with a particular experiment or proof are quite concise and specific to that experiment, and hence usually small (whatever "small" means). An additional question would therefore be what dimensions to use in order to measure RO size: the number of resources? size in MB? the number of users related to the RO? the number of ROs versioning this RO? (A minimal illustrative sketch of these dimensions appears after this FAQ list.) Further on, this leads to thinking about scalability: how large and complex an RO should we be able to support in Wf4Ever? In both cases, what is myExperiment's experience?
    4. Graham: How big is an RO that contains (say) an Affymetrix gene expression assay and several references to a multi-gigabyte bioinformatics database?
    5. Sean: The size depends somewhat on how you're looking at it. It's kind of like asking "how big is a paper?". You could answer that in terms of bytes (e.g. size of the PDF), words, pages, or possibly in terms of its content: contributions, hypotheses, etc. The RO should give us a framework within which to answer those different questions.
    6. Oscar: There are already examples of what we expect to have in ROs (in Astronomy and Genomics), so the size, although hard to account for in terms of bytes, may be accounted for in terms of the components/elements inside the RO (Astronomy: List of components for Astro RO, Biology: Expected content of a Research Object (Bio)).
  2. Do you measure the influence of ROs? 
    1. Dave: we know this is important and we take a sociotechnical perspective (so citation, reputation and incentives matter), and we will make sure we instrument our systems (one of the affordances of working digitally), but I don't know what to say beyond that.
    2. Some experts are now trying to figure out how to measure the influence of social media, not just papers, which makes me think: what can we do to make it easy for others to measure the influence of ROs?
    3. José Manuel: Citation and reuse would be the fundamental parameters for measuring the influence/impact of an RO.
    4. Graham: I'd hope to see something useful come out of Matthew Gamble's work on Bayesian networks and quality evaluation (and provenance?). Forwards to quality evaluation, backwards to influence assessment?
    5. Sean: maybe influence provides a measure of size too
    6. Pinar: ROs, nano-pubs and similar approaches enable the _explicit_ assertion and sharing of research artifacts and findings. In this context they make "measuring re-use as impact" [1] more feasible, in my opinion. [1] Re-use as Impact: How re-assessing what we mean by "impact" can support improving the return on public investment, develop open research practice, and widen engagement, http://altmetrics.org/workshop2011/neylon-v0/
  3. Aren't you broadening the definition of research to include those who are just tinkering?
    1. I recognise this question as one that often arises, perhaps as a reaction against the democratisation angle (the "long tail"), especially now that citizen science comes up. I have nothing against democratisation and tinkering; the answer is to pay attention to quality (and influence, as in the previous question), and we are doing that in Wf4Ever. This is also where our attention to aspects of reproducibility matters: not just can we reproduce the research of citizens, but can they reproduce the work of experts. I could also have given the principled answer that publicly funded research should be publicly accessible and that universities have a public role.
    2. Graham: Convention is "evaluate first, then publish"
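The following is a minimal sketch of how the size dimensions raised in question 1 (number of resources, size in MB, number of related users) might be computed side by side. It assumes a hypothetical manifest.json whose "aggregates" entries carry an optional local file path ("file") and a creator ("createdBy"); these field names are illustrative assumptions, not part of any prescribed RO model.

    import json
    import os
    from dataclasses import dataclass


    @dataclass
    class ROSizeReport:
        resource_count: int      # number of aggregated resources
        total_megabytes: float   # bytes on disk for locally resolvable resources
        distinct_creators: int   # rough proxy for "number of users related to the RO"


    def measure_ro(manifest_path: str) -> ROSizeReport:
        """Derive several size dimensions from a (hypothetical) RO manifest."""
        with open(manifest_path, encoding="utf-8") as f:
            manifest = json.load(f)

        aggregates = manifest.get("aggregates", [])
        total_bytes = 0
        creators = set()

        for entry in aggregates:
            local_file = entry.get("file")    # assumed field: local payload path
            if local_file and os.path.isfile(local_file):
                total_bytes += os.path.getsize(local_file)
            creator = entry.get("createdBy")  # assumed field: resource creator
            if creator:
                creators.add(creator)

        return ROSizeReport(
            resource_count=len(aggregates),
            total_megabytes=total_bytes / (1024 * 1024),
            distinct_creators=len(creators),
        )

For example, measure_ro("manifest.json") would report three of the candidate dimensions at once, leaving versioning and usage counts to whatever RO evolution records and access logs the hosting service keeps.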