Galaxy Tool Generating Dataset Collections

As part of the Alveo project we've been using the Galaxy workflow engine to provide a user-friendly, web-based interface to some language processing tools. Galaxy was originally developed for bioinformatics researchers, but we've been able to adapt it to language tools quite easily. Galaxy tools are scripts or executable command-line applications that read input data from files and write results out to new files. These files are presented as data objects in the Galaxy interface. Chains of tools can be run one after another to process data from input to final results.

One of the recent additions to Galaxy is the ability to group data objects together into dataset collections. A collection can then form the input to a workflow, which is run once for each object in the collection. This is something we've wanted for Alveo for a long time, since applying the same process to all files in a collection is a common requirement in language processing. After a bit of exploration I've worked out how to write a tool that generates a dataset collection and, since the documentation for this is somewhat sparse and confusing, I thought I'd write up my findings.
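
As a rough illustration of the shape of such a tool (the full write-up has the specifics), the wrapped script just writes one file per collection element into a working directory, and the tool's XML wrapper tells Galaxy how to discover those files as a collection. The sketch below is hypothetical: the script, the file-naming pattern and the discover_datasets wiring mentioned in the comments are my assumptions for illustration, not the recipe from the post.

    # split_text.py -- hypothetical script wrapped by a Galaxy tool that
    # produces a dataset collection. Galaxy can discover the files this
    # script writes (one per collection element) via a <discover_datasets>
    # pattern in the tool's XML wrapper; the names here are illustrative.
    import os
    import sys

    def main(input_path, output_dir):
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)
        with open(input_path) as fh:
            for n, line in enumerate(fh):
                # one output file per input line; the file name becomes
                # the element identifier in the discovered collection
                part = os.path.join(output_dir, "part%03d.txt" % n)
                with open(part, "w") as out:
                    out.write(line)

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])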

DADA Project Update

The DADA project is developing software for managing language resources and exposing them on the web. Language resources are digital collections of language, as audio, video and text, used to study language and to build technology systems. The project has been going for a while, with some initial funding from the ARC to build the basic infrastructure and later funding from Macquarie University for work on the Auslan corpus of Australian Sign Language collected by Trevor Johnston. Recently, two new projects have come along that DADA will be part of, and so the pace of development has picked up a little.

An RDF Realisation of LAF in the DADA Annotation Server

The Linguistic Annotation Framework (LAF) defines a generalised graph-based model for annotation data, intended as an interchange format for transferring annotations between tools. The DADA system uses an RDF-based representation of annotation data and provides a web-based annotation store. The annotation model in DADA can be seen as an RDF realisation of the LAF model. This paper describes the relationship between the two models and makes some comments on how the standard might be stated in a more format-neutral way.

Download PDF: An RDF Realisation of LAF in the DADA Annotation Server

Ingesting the Auslan Corpus into the DADA Annotation Store

Steve Cassidy and Trevor Johnston.

The DADA system is being developed to support collaborative access to and annotation of language resources over the web. DADA provides a web-accessible annotation store that delivers both a human-browsable version of a corpus and a machine-accessible API for reading and writing annotations. DADA implements an abstract model of annotation suitable for storing many kinds of data from a wide range of language resources. This paper describes the process of ingesting data from a corpus of Australian Sign Language (Auslan) into the DADA system. We describe the format of the RDF data used by DADA and the issues raised in converting the ELAN annotations from the corpus. Once ingested, the data is presented in a simple web interface and also via a JavaScript client that makes use of an alternate interface to the DADA server.
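
The paper describes the actual RDF vocabulary DADA uses; as a rough, hypothetical illustration of the kind of mapping involved, the sketch below turns one ELAN-style annotation (a tier name, start and end times in seconds, and a label) into RDF triples using RDFlib. The dada: namespace and property names are invented for the example and are not the DADA schema.

    # Hypothetical sketch: one ELAN-style annotation as RDF triples.
    # The namespace and property names are invented for illustration
    # and are not the actual DADA vocabulary described in the paper.
    from rdflib import Graph, Literal, Namespace, URIRef

    DADA = Namespace("http://example.org/dada/")  # placeholder namespace

    def annotation_to_rdf(graph, ann_uri, tier, start, end, label):
        ann = URIRef(ann_uri)
        graph.add((ann, DADA.tier, Literal(tier)))
        graph.add((ann, DADA.start, Literal(start)))  # seconds
        graph.add((ann, DADA.end, Literal(end)))
        graph.add((ann, DADA.label, Literal(label)))
        return ann

    g = Graph()
    annotation_to_rdf(g, "http://example.org/auslan/ann/1",
                      "gloss", 1.25, 2.70, "example-gloss")
    print(g.serialize(format="turtle"))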

Download PDF: Ingesting the Auslan Corpus into the DADA Annotation Store

Arduino & Physical Computing

I gave a talk last week introducing the Arduino platform to some MQ students and staff. It seemed to go well and there is a bit of interest in carrying on with a regular meetup in the Electronics labs; more details to come when we organise a time. Meanwhile, here are my slides from the talk, not that they're very informative by themselves, but I wanted to try out SlideShare.

Using Robots to Teach Programming

This is a project idea for an Honours student or similar. Please contact me if you’d like to follow this up.

I’ve been having fun with arduino boards lately; these are small single chip development boards which have input output lines that can read sensors and control motors etc. They are programmed in Wiring which is really C with some sugar and libraries added. I’ve been thinking that the arduino would make a nice platform to stimulate some interest in beginning programmers as a break from the usual run of problems that we set them. This project would focus on developing a set of exercises suitable for a first or second programming class (I’m thinking COMP125) to develop some of the ideas explored there (data structures, simple algorithms) in a concrete context. Part of the project would be building a suitable platform (I fancy a Blimpduino) and then perhaps evaluating the use of the platform with real live first year students.

A RESTful interface to Annotations on the Web

Annotation data is stored and manipulated in various formats and there have been a number of efforts to build generalised models of annotation to support sharing of data between tools. This work has shown that it is possible to store annotations from many different tools in a single canonical format and allow transformation into other formats as needed. However, moving data between formats is often a matter of importing or exporting from one tool to another. This paper describes a web-based interface to annotation data that makes use of an abstract model of annotation in its internal store but is able to deliver a variety of annotation formats to clients over the web.
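
The paper has the details of the interface itself; as a hypothetical sketch of the delivery side, a server of this kind can pick the annotation serialisation from the HTTP Accept header. The media types, formats and stand-in data below are assumptions for illustration, not the actual DADA API.

    # Hypothetical sketch: choosing an annotation serialisation from the
    # HTTP Accept header. Formats, media types and data are stand-ins.
    import json

    def to_json(anns):
        return json.dumps(anns)

    def to_plain(anns):
        # trivial line-oriented rendering: start, end, label
        return "\n".join("%s %s %s" % (a["start"], a["end"], a["label"])
                         for a in anns)

    SERIALISERS = [("application/json", to_json),
                   ("text/plain", to_plain)]

    def annotations_app(environ, start_response):
        anns = [{"start": 1.25, "end": 2.70, "label": "example"}]
        accept = environ.get("HTTP_ACCEPT", "application/json")
        for mtype, render in SERIALISERS:
            if mtype in accept:
                break
        else:
            mtype, render = SERIALISERS[0]  # default to JSON
        start_response("200 OK", [("Content-Type", mtype)])
        return [render(anns).encode("utf-8")]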

Presented at The 2nd Linguistic Annotation Workshop (LAW II) at LREC 2008, Marrakech.
Download PDF

Sparql Endpoint for Python WSGI

As part of DADA (and yes, that page is a bit out of date) I wanted to provide a Sparql endpoint to allow experimentation with querying the raw RDF annotation data. So far we've built everything using Redland in Python, but it seems there is no existing Sparql endpoint implementation for this combination. The Sparql protocol document is long, but as far as I can tell the core of the protocol is a simple GET request with an encoded Sparql query; results are returned as raw XML in the special Sparql result format, or as RDF/XML if the return type is a graph. This proves to be very easy to implement on top of Redland, since its query operator returns exactly those result types.
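
In outline, the endpoint can be a small WSGI application: pull the query out of the GET parameters, run it against the Redland model, and serialise the result. The sketch below is just that outline, not the released module; the Redland binding calls (RDF.Query, execute, to_string) are written from memory and should be checked against the librdf documentation.

    # Minimal sketch of a Sparql endpoint as a WSGI app over a Redland
    # model. The Redland Python calls are from memory; check the librdf
    # docs before relying on them.
    import RDF
    from cgi import parse_qs

    def make_endpoint(model):
        def app(environ, start_response):
            params = parse_qs(environ.get("QUERY_STRING", ""))
            if "query" not in params:
                start_response("400 Bad Request",
                               [("Content-Type", "text/plain")])
                return ["missing 'query' parameter"]
            query = RDF.Query(params["query"][0], query_language="sparql")
            results = query.execute(model)
            if results.is_graph():
                # CONSTRUCT/DESCRIBE queries return a graph: RDF/XML
                content_type = "application/rdf+xml"
            else:
                # SELECT/ASK: the special Sparql results XML format
                content_type = "application/sparql-results+xml"
            body = results.to_string()
            start_response("200 OK", [("Content-Type", content_type)])
            return [body]
        return app

Hung off any WSGI server (wsgiref's simple_server, say), that gives a queryable endpoint at /?query=... for experimentation.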

So, I present SparqlEndpoint-0.1, a Python module that provides a WSGI-conformant implementation of a Sparql endpoint for Redland. It almost certainly doesn't implement all of the protocol standard, and it could be improved no end, for example by making it independent of the RDF backend it queries (e.g. using RDFlib).

I’m not putting up a demo endpoint just yet as I’m having severe performance issues with my development server in combination with Redland. The triple store is growing rapidly to the millions of triples and the result is a huge latency (tens of minutes) to perform some queries. Given some recent discussion on the Redland list I’m wondering whether a jump to one of the RDF specific stores is the thing to do. This would probably mean rewriting my code in Java but based on the Berlin Sparql Benchmark numbers, Sesame and Jena have the kind of performance I need (sub second query response times on 100M triples).

Well, enough of that. If you are interested in SparqlEndpoint, please download it and take a look. If there is interest, I'm happy to share it and host development somewhere accessible.

An Evaluation of Portfolio Assessment in an Undergraduate Web Technology Unit

One of the perennial issues raised in student surveys is effective feedback. As part of our ongoing review of teaching, we identified feedback on assessment as a target area for 2007; this paper describes the evaluation of one strategy for improving that feedback, implemented as part of an undergraduate unit.

Paper to be presented at the National UniServe Conference 2007, Sydney, Australia. Download PDF.

Version Control for RDF Triple Stores

RDF, the core data format for the Semantic Web, is increasingly being deployed both from automated sources and via human authoring either directly or through tools that generate RDF output. As individuals build up large amounts of RDF data and as groups begin to collaborate on authoring knowledge stores in RDF, the need for some kind of version management becomes apparent. While there are many version control systems available for program source code and even for XML data, the use of version control for RDF data is not a widely explored area. This paper examines an existing version control system for program source code, Darcs, which is grounded in a semi-formal theory of patches, and proposes an adaptation to directly manage versions of an RDF triple store.
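
The paper spells out the actual adaptation of the patch theory; as a rough, hypothetical illustration of the core idea, a patch over a triple store can be modelled as a pair of triple sets, one to delete and one to add, which makes inverting a patch trivial. The sketch below is my reading of that idea, not the paper's formalism.

    # Hypothetical sketch of a Darcs-style patch over an RDF triple
    # store, modelled as sets of (subject, predicate, object) tuples to
    # delete and to add. An illustration of the idea only, not the
    # paper's actual formalism.
    class TriplePatch(object):
        def __init__(self, deletes=(), adds=()):
            self.deletes = frozenset(deletes)
            self.adds = frozenset(adds)

        def apply(self, store):
            """Apply the patch to a store (here, a set of triples)."""
            return (store - self.deletes) | self.adds

        def invert(self):
            """The inverse patch undoes this one: swap adds and deletes."""
            return TriplePatch(deletes=self.adds, adds=self.deletes)

    store = {("ex:a", "ex:label", "old")}
    p = TriplePatch(deletes=[("ex:a", "ex:label", "old")],
                    adds=[("ex:a", "ex:label", "new")])
    assert p.invert().apply(p.apply(store)) == store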

Paper presented at ICSOFT 2007, Barcelona, Spain, July 2007. Download PDF