I described in an earlier post that data sharing in Neuroscience is relatively non-existent. Some commentary on the subject has appeared since then via the 2007 SfN Satellite Symposium on Data Sharing, entitled Value Added by Data Sharing: Long-Term Potentiation of Neuroscience Research, published in Neuroinformatics. I was also excited to see an article published last week, Data Sharing for Computational Neuroscience, also in Neuroinformatics. However, there is a caveat or two. Apart from ignoring all the data representation issues addressed in other domains such as bioinformatics, the re-use of data models such as FuGE, and contribution to ontology efforts such as OBI, none of these articles is open access! How ironic, or should that be how embarrassing. Phil also covers this issue in his blog.
Oh well, it looks as if there is still a challenge in the domain of Neuroscience for access to valuable insights into information flow in the brain. Who wants to know how the brain works anyway? You can always pay $32 to Springer if you want to find out.
I have the unenviable task of developing an ontology for the CARMEN project, one which will allow the process of electrophysiology experiments, the generated data, the analysis of the data, and the services that perform the analysis to be described, and in addition to be computationally amenable. Collecting the words required to describe these tasks is relatively trivial. However, getting the scientists to realise they have assigned numerous meanings to the same word or term requires a little more patience on my part.
It also requires me to educate the scientists, in that building an ontology for electrophysiology is a little more complicated than putting some “words” in a text file.
The words in an ontology have to be explicitly defined so as to be completely unambiguous, both to the scientists who generate the data and to the informaticians who want to analyse it, either immediately or several years down the line. The data should be described to an agreed level of detail that no longer requires the informatician to pick up the phone and politely ask “how did you generate this piece of data?”.
The first message I am trying to get across to the scientists is that although they use the same “words”, they often use those words to describe different things in different contexts. This situation generally matters less in a journal publication, but it presents real issues when the words are used to annotate data and infer knowledge.
I have been trying to work out the best way to get this message across and to develop a methodology for collecting agreed definitions for words. I could have always put up a wiki or an issue tracker to do this, but this doesn’t always guarantee contribution. I feel the process needs to be mediated to turn the natural language definitions into more explicit normalised ontological definitions. Taking this into account I have decided to apply crowdsourcing to Ontology development.
Simply put, this means sending out an email entitled “Metadata term of the week”, a process suggested to me by my boss Phil Lord. In this email I pick a word and attempt to define it. If I get it right then there is no need to respond. If you disagree with the definition then you have to respond with an alternative; a discussion ensues and ends with an agreed definition. Through this process the scientists get to see that other scientists within the project define or describe words differently enough that they are no longer talking about the same thing.
The first Metadata term of the week was “spike sorting”, and we received the following definitions:
- Spike sorting is a process of assigning data spikes to sets, where each set is identified with a single neuron
- Spike sorting is a process aiming at separating spikes generated by different cells based on shape discrimination algorithms
- Spike sorting is a technique used in single-cell neural recordings which assigns particular spike shapes to individual neurons
- Spike sorting is a classification procedure. We can think about a forest (time series) where M animals of K different types live (M spikes of K different neurons). All the animals are different, but say two rabbits are a little more similar to each other than a rabbit and a fox. So, we need to classify all M animals and say, for each one, to which of the K classes it belongs.
- Spike sorting is the process of identifying the waveforms associated with action potentials of an individual neuron within time series data.
All are trying to say the same thing, although when taken literally they start to “mean” different things. This led us to define three more terms in order to answer the original question:
a) An action potential is a sudden depolarization of the membrane potential of a cell. [synonym: spike]
b) Spike detection is a data extraction process that classifies the waveforms associated with action potentials and identifies the time point at which each spike event initiates. The input to this process is a continuous waveform. The output is a single sequence of spike event times.
c) Spike sorting is a data extraction process that assigns detected spike event times to individual neurons. The input of this process can be a continuous waveform or a sequence of spike event times. The output of this process is a collection of sets (or categories) of spikes. Each set is assumed to correspond to a single neuron.
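To make the distinction between definitions (b) and (c) concrete, here is a minimal sketch in Python. This is entirely my own illustration, not CARMEN code: detection takes a continuous waveform and returns a sequence of spike event times, while sorting assigns those times to putative neurons. The “shape discrimination” here is a toy split on peak amplitude; real sorters cluster full waveform shapes.

```python
import numpy as np

def detect_spikes(waveform, threshold):
    """Spike detection per definition (b): input is a continuous
    waveform; output is a single sequence of spike event times."""
    above = np.asarray(waveform) > threshold
    # A spike event initiates at an upward threshold crossing.
    return list(np.flatnonzero(above[1:] & ~above[:-1]) + 1)

def sort_spikes(waveform, event_times):
    """Spike sorting per definition (c): assign detected spike event
    times to putative neurons. Toy discrimination on peak amplitude,
    assuming exactly two neurons with well-separated amplitudes."""
    peaks = np.array([waveform[t] for t in event_times])
    # Split amplitudes at the midpoint of their range: one set of
    # spike times per assumed neuron.
    midpoint = (peaks.min() + peaks.max()) / 2
    sets = {0: [], 1: []}
    for t, p in zip(event_times, peaks):
        sets[int(p > midpoint)].append(t)
    return sets

# A synthetic waveform: two neurons firing at different amplitudes.
w = np.zeros(100)
for t, amp in [(10, 1.0), (30, 3.0), (50, 1.1), (70, 2.9)]:
    w[t] = amp

times = detect_spikes(w, threshold=0.5)  # spike event times
units = sort_spikes(w, times)            # one set per putative neuron
```

Note that `sort_spikes` here takes the event times, but definition (c) deliberately allows the continuous waveform as input too, since some sorting algorithms work directly on the raw trace.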
This peer-production process took approximately four days to conclude, and I think it has succeeded in addressing three issues:
- Highlighting the ambiguity in the use of terms, even within a small and enclosed group of scientists within a single project.
- The peer-production of ontology terms and definitions.
- The engagement of the community within the project.
I would love to know people’s comments on this process, or any alternative suggestions. Feel free to comment.
A new book has been released on the Semantic Web and its applications in the life sciences, intuitively entitled Semantic Web: Revolutionizing Knowledge Discovery in the Life Sciences. I have purchased a similar book before, entitled Ontologies for Bioinformatics, and was not overly impressed with its content. This book, however, as pointed out by Duncan on Nodalpoint, is written by people who are actually using the technology. I have met several of the people involved in authoring some of the chapters, and I think I will definitely be purchasing a copy, or rather the lab will be purchasing a copy.
Yes, I am still alive. My blog posting has taken a back seat these last few months as I wrestle with my thesis write-up. The whole write-up process is progressing nicely with no major hiccups, although slightly slower than predicted. I am now located in my write-up retreat in Lausanne, Switzerland, although I hope my discussion chapter doesn’t become too neutral as a result.
I commented previously on EXPO, an ontology of scientific experiments. The journal article “An ontology of scientific experiments” has just been published in the Journal of The Royal Society Interface. Several comments have developed on the New Scientist forum page concerning the article and have echoed my original comments. More comments on the article are posted on NodalPoint.
"Called EXPO, it can be used to translate scientific experiments into a format that can be interpreted by a computer."
Wow! Translate experiments? That’s impressive. I would love to find out how the scrawled handwriting contained inside the standard lab notebook, dog-eared and drenched in all manner of reagents, gets translated into an ontology. That’s more impressive than the ontology itself.
On a more serious note, however, I agree with the concept that something like this should exist. It would definitely help in the dissemination and analysis of data (providing the data is freely provided and in a standard structure or format). But this kind of process, an ontology for all of science, would have to undergo a major open development effort across all scientific disciplines, as FUGO (Functional Genomics Investigation Ontology) is doing, in order to be agreed upon and to justify the claim that it represents all scientific experiments.
Going by the article, and snooping around the EXPO site, a cross-discipline development process doesn’t appear to have taken place, or to be planned for the future (my apologies if this is not the case). According to the article, it has been tested on two use-cases, one on particle physics and one on evolutionary biology; these must be the only sciences that exist these days. It would be interesting to see whether the large-scale collaboration effort of FUGO agrees with, or already has, a similar structure to EXPO.
A quote at the bottom of the article says “Software to speed up this process could be a big boost”. I think he meant to say, “This would be impossible without software being available from the wet-lab bench, to data storage, to the publication process”.