Archive for category MIBBI
MIAPE: Gel Informatics is now available for Public Comment
Posted by peanutbutter in bioinformatics, data standards, Journals Publishing, MIBBI, Proteomics on August 18, 2008

The MIAPE: Gel Informatics module, formalised by the Proteomics Standards Initiative (PSI), is now available for Public Comment on the PSI Web site. Typically a lot of this information will be captured by the image analysis software, so we would especially encourage software vendors to review the document. The public comment period enables the wider community to provide feedback on a proposed standard before it is formally accepted, and is thus an important step in the standardisation process.
This message is to encourage you to contribute to the standards development activity by commenting on the material that is available online. We invite both positive and negative comments. Negative comments might address the relevance, clarity, correctness, appropriateness, etc., of the proposal as a whole or of specific parts of it.
If you do not feel well placed to comment on this document, but know someone who may be, please consider forwarding this request. There is no requirement that people commenting should have had any prior contact with the PSI.
If you have comments that you would like to make but would prefer not to make public, please email the PSI editor Norman Paton.
Double standards in Nature Biotechnology
Posted by peanutbutter in bioinformatics, data standards, Journals Publishing, MIBBI on August 7, 2008
OK, so that is a relatively inflammatory and controversial headline, edging towards tabloid sensationalism. What it refers to is a situation I may never find myself in again: in this month's edition of Nature Biotechnology I am an author on two biological standards-related publications.
The first is a letter advertising the PSI’s MIAPE Guidelines for reporting the use of gel electrophoresis in proteomics. This letter is also accompanied by letters referring to the MIAPE guidelines for Mass Spectrometry, Mass Spectrometry Informatics and protein modification data.
The second is a paper on the Minimum Information about a Biomedical or Biological Investigation (MIBBI) registry entitled Promoting coherent minimum reporting guidelines for biological and biomedical investigations: the MIBBI project.
The following press release describes this paper in more detail.
More than 20 grass-roots standardisation groups, led by scientists at the European Bioinformatics Institute (EMBL-EBI) and the Centre for Ecology & Hydrology (CEH), have combined forces to form the “Minimum Information about a Biomedical or Biological Investigation” (MIBBI) initiative. Their aim is to harmonise standards for high-throughput biology, and their methodology is described in a Commentary article, published today in the journal Nature Biotechnology.
Data standards are increasingly vital to scientific progress, as groups from around the world look to share their data and mine it more effectively. But the proliferation of projects to build “Minimum Information” checklists that describe experimental procedures was beginning to create problems. “There was no way of even finding all the current checklist projects without days of googling,” says the EMBL-EBI’s Chris Taylor, who shares first authorship of the paper with Dawn Field (CEH) and Susanna-Assunta Sansone (EMBL-EBI). “As a result, much of the great work that’s going into developing community standards was being overlooked, and different communities were at risk of developing mutually incompatible standards. MIBBI will help to prevent them from reinventing the wheel.”
The MIBBI Portal already offers a one-stop shop for researchers, funders, journals and reviewers searching for a comprehensive list of minimum information checklists. The next step will be to build the MIBBI Foundry, which will bring together diverse communities to rationalise and streamline standardisation efforts. “Communities working together through MIBBI will produce non-overlapping minimal information modules,” says CEH’s Dawn Field. “The idea is that each checklist will fit neatly into a jigsaw, with each community being able to take the pieces that are relevant to them.” Some, such as checklists describing the nature of a biological sample used for an experiment, will be relevant to many communities, whereas others, such as standards for describing a flow cytometry experiment, may be developed and used by a subset of communities.
“MIBBI represents the first new effort taking the Open Biomedical Ontologies (OBO) as its role model”, says Susanna-Assunta Sansone. “The MIBBI Portal operates in a manner analogous to OBO as an open information resource, while the MIBBI Foundry fosters collaborative development and integration of checklists into self-contained modules just like the OBO Foundry does for the ontologies”.
There is a growing understanding of the value of such minimal information standards among biologists and an increased willingness to work together across disciplinary boundaries. The benefits include making experimental data more reproducible and allowing more powerful analyses over diverse sets of data. New checklist communities are encouraged to register with MIBBI and consider joining the MIBBI Foundry.
Press release issued by the EMBL-European Bioinformatics Institute and the Centre for Ecology and Hydrology, UK.
The First MIBBI workshop: Day 2
Posted by peanutbutter in conference, conference report, data standards, MIBBI on April 7, 2008
The second day of the MIBBI workshop was more “free flowing” than the first, focussing on the MIBBI process, house-keeping, infrastructure and the website.
The main topic of the day was what it means to be registered on the MIBBI site and to be a member of the Foundry. As a straw man, rather than starting from scratch, we used the OBO Foundry principles to see if they could be applied to reporting check-lists, and came up with a draft set of principles. A full “official” report of the workshop should be forthcoming.
As a break from the rigorous standards development and the process of reporting check-lists, we went to a local restaurant and experienced anarchy when it came to understanding the menu. The meal itself was brilliant, however some degree of semantic extraction had to be applied to the menu to actually understand what we were ordering. No fancy algorithms were applied here, other than asking the staff to explain it in English! And we think we have problems in the life-sciences! An example of the menu can be seen in the photographs, which also include the delegates, the meal and the venue.
First MIBBI workshop, 2nd-3rd April 2008
The First MIBBI Workshop: Day 1
Posted by peanutbutter in bioinformatics, CARMEN, conference, conference report, data standards, FuGE, MIBBI, neuroinformatics, ontology, open data, open science on April 2, 2008
MIBBI is a registry of reporting guidelines for scientific experiments, with the aim of fostering a foundry of best practice that encourages the modular development and re-use of such guidelines. The first workshop is being held at the EBI on the 2nd and 3rd April 2008 and is relatively closed, limited to the developers of guidelines registered on the site. The schedule for day one is a whistle-stop tour of 5 min talks (adjusting for an academic's interpretation of what 5 minutes means) covering all the existing guidelines, their scope and the people behind them, so I am not going to comment on individual talks. I presented two talks during the day: one on CARMEN and the development of the MINI: Electrophysiology reporting guidelines, and one, standing in for Andy Jones, on FuGE.
I tried sharing these slides via Google Presentation, and they looked quite nice. However, WordPress does not seem to allow them to be embedded, so I put them on SlideShare instead. These set the tone for the discussions for the afternoon and tomorrow.
Do scientists really believe in open science?
Posted by peanutbutter in bioinformatics, CARMEN, data standards, Journals Publishing, MIBBI, neuroinformatics, ontology, open data, open science, Social Media on June 26, 2007
I am writing this post as a collection of the current status of, and opinions on, “Open Science”. The main reason is that I have a new audience; I am working for the CARMEN e-Neuroscience project. This has exposed me, first hand, to a domain of the life sciences in which data sharing and publicly exposing methodologies have not been readily adopted, largely, it is claimed, due to the size of the data in question and sensitive privacy issues.
Ascoli (2006) also endorses this view of neuroscience and offers some further reasons why this is the case. He also uses the example of neuronal morphological data, arguing for the benefits of exposing it and countering the reticence to share this type of data.
Hopefully, as the motivation for the CARMEN project is to store, share and facilitate the analysis of neuronal activity data, some of these issues can be overcome.
With this in mind, I want this post to provide a collection of specific blogs, journal articles, relevant links and opinions which will hopefully be a jumping-off point for understanding the concept of Open Science and embracing future methodologies for pushing the boundaries of scientific knowledge.
What is Open Science?
There is no hard and fast definition, although according to the Wikipedia entry:
“Open Science is a general term representing the application of various Open approaches (Open Source, Open Access, Open Data) to scientific endeavour. It can be partially represented by the Mertonian view of Science but more recently there are nuances of the Gift economy as in Open source culture applied to science. The term is in intermittent and somewhat variable use.”
“Open Science” encompasses the ideals of transparent working practices across all of the life-science domains, to share and further scientific knowledge. It can also be thought of as including complete and persistent access to the original data from which knowledge and conclusions have been extracted, from the initial observations recorded in a lab-book to the peer-reviewed conclusions of a journal article.
The most comprehensive overview is presented by Bill Hooker over at 3 Quarks Daily. He has written three sections under the title “The Future of Science is Open”.
In part 1, as the title suggests, Bill presents an overview of open access publishing and how this can lead to open science (part 2). He suggests that
“For what I am calling Open Science to work, there are (I think) at least two further requirements: open standards, and open licensing.”
I don’t want to repeat the content already contained in these reviews, although I agree with Bill’s statement here. There is no point in having an open science philosophy if the data in question is not described or structured in a form that facilitates its exchange, dissemination and evaluation; hence the requirement for standards.
I am unaware of community-endorsed standard reporting formats within neuroscience. However, the proliferation of standards in biology and bioinformatics is such that it is fast becoming a niche domain in its own right. So much so that there now exists a registry for Minimum Information reporting guidelines, following in the footsteps of MIAME and MIAPE. This registry is called MIBBI (Minimum Information about a Biomedical or Biological Investigation) and aims to act as a “one-stop-shop” for existing life-science standards. MIBBI also provides a foundry where best practice for standards design can be fostered and where disparate domains can integrate and agree on common representations of reporting guidelines for common technologies.
Complementary to standard data structures and minimum reporting requirements is the terminology used to describe the data: the metadata. Efforts are under way to standardise the terminology which describes experiments, essential in an open environment, or simply in a collaboration. This is the goal of the Ontology for Biomedical Investigations (OBI) project, which is developing “an integrated ontology for the description of biological and medical experiments and investigations. This includes a set of ‘universal’ terms, that are applicable across various biological and technological domains, and domain-specific terms relevant only to a given domain”. Already OBI is gaining momentum and currently supports diverse communities from crop science to neuroscience.
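To make the idea of checklist-plus-ontology metadata a little more concrete, here is a minimal, purely illustrative sketch of how an experiment record might carry ontology-style annotations and be checked against a toy minimum-information checklist. It is not part of MIBBI or OBI; the field names and term identifiers are invented placeholders.

```python
# Hypothetical sketch: ontology-annotated experiment metadata checked against
# a toy "minimum information" checklist. Term IDs are invented placeholders,
# not real OBI identifiers.

REQUIRED_FIELDS = {"organism", "sample_type", "assay", "instrument"}

experiment = {
    "organism":    {"label": "Rattus norvegicus",       "term": "EX:0000001"},
    "sample_type": {"label": "hippocampal slice",       "term": "EX:0000002"},
    "assay":       {"label": "extracellular recording", "term": "EX:0000003"},
    # "instrument" is deliberately missing, to show the checklist catching it
}

def missing_fields(record: dict) -> set:
    """Return the checklist fields that are absent from a metadata record."""
    return REQUIRED_FIELDS - record.keys()

gaps = missing_fields(experiment)
print("Record complete" if not gaps else f"Missing fields: {sorted(gaps)}")
```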
Open licensing of data may address the common arguments I hear for not releasing data: that “somebody might use it”, or the point-blank refusal of “not until I publish my paper”. This is an unfortunate side effect of the “publish or perish” system, as commented on by bbgm and by Seringhaus and Gerstein, 2007, and really comes down to due credit. In most cases this prevents real-time assessment of research, complementary analyses or cross-comparisons with other data sets from occurring alongside the generation of the data, which would no doubt reinforce the validity of the research. Assigning computationally amenable licences to data, such as those proposed by Science Commons, may be one way of ensuring that re-use of the data is always credited to the laboratory that generated it. It is a possible paradigm that “data accreditation impact factors” could exist, analogous to the impact factors of traditional peer-reviewed journals.
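As a purely hypothetical sketch of what “computationally amenable” licensing might look like in practice (the field names and licence URI below are illustrative, not a Science Commons specification), a data set could carry machine-readable licence and attribution metadata from which any downstream tool can generate a credit line automatically:

```python
# Hypothetical sketch: machine-readable licence and attribution metadata
# attached to a data set. Field names and the licence URI are illustrative only.

dataset = {
    "title": "Example neuronal activity recordings",
    "generated_by": "Example Lab, Example University",
    "licence_uri": "https://creativecommons.org/publicdomain/zero/1.0/",  # example only
}

def credit_line(record: dict) -> str:
    """Build a credit line that a downstream analysis could attach to any re-use."""
    return f'{record["title"]}, data generated by {record["generated_by"]} ({record["licence_uri"]})'

print(credit_line(dataset))
```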
Open science may not just be about releasing data associated with a peer-reviewed journal; rather, it starts from exposing the daily recordings and observations of an investigation, contained in the lab-book. One aspect of the “open data” movement is “Open Notebook Science”, a movement pioneered by Jean-Claude Bradley and the Useful Chemistry group, where the lab-book is open and accessible on-line. This open notebook method was further discussed in a recent Nature editorial outlining the benefits of the approach. Exposing your lab-book could allow you to link it to the materials and methods section of your publication, proving you actually did the work and facilitating the prospect of other researchers actually being able to repeat your ground-breaking experiments.
Already many funders are considering data management or data sharing policies to be applied to future research proposals. The BBSRC have recently released their data sharing policy, which states that “all research proposals submitted to BBSRC from 26th April 2007 must now include a statement on data sharing. This should include concise plans for data management and sharing or provide explicit reasons why data sharing is not possible or appropriate”. With these types of policies a requirement of research funding, the “future of science is open”.
The “Open Science” philosophy appears to be gaining momentum and is actively being discussed within the scientific blogosphere. This should not really come as a great surprise, as science blogging can be seen as part of the “open science” movement, openly sharing opinions and discourse. Some of the more prominent science blogs focusing on the open science ideal are Open Access News, Michael Eisen’s Open Science Blog, Research Remix, Science Commons and Peter Murray-Rust.
There are, of course, a lot more blogs discussing the issue. Performing an “open science” search on Postgenomic (RSS feeds on search terms please, Postgenomic) produces an up-to-the-minute list of the open science discourse. Although it is early days, maybe even the “open science” group on Scintilla (I am still undecided on Scintilla) will be the place for fostering the open science community in the future.
According to Bowker’s description of the traditional model of scientific publishing, the journal article “forms the archive of scientific knowledge” and therefore there has been no need to hold on to the data after it has been “transformed” into a paper. This, combined with ingrained social fears, born of “publish or perish”, of not letting anybody see the experimental data before the peer-reviewed publication appears, will cripple the open science movement and slow down knowledge discovery. Computationally amenable licences may go some way towards solving this, but raising awareness, and a clear memorandum from the major journal publishers that exposing real-time science and publishing data will not prevent publication in a peer-reviewed journal, can only help.
In synopsis, I will quote Bill again, as I think he presents a better summary than I could:
“My working hypothesis is that open, collaborative models should out-produce the current standard model of research, which involves a great deal of inefficiency in the form of secrecy and mistrust. Open science barely exists at the moment — infancy would be an overly optimistic term for its developmental state. Right now, one of the most important things open science advocates can do is find and support each other (and remember, openness is inclusive of a range of practices — there’s no purity test; we share a hypothesis not an ideology). “