SESSION 1: CLOUD COMPUTING (Wednesday, 4:00–5:30 PM, Room 306B)

Building SkyNet – A Data- and Model-Driven Approach to Managing Datacenters

Michael DeHaan, Puppet Labs

There are certain properties that any software managing a large number of computers should obey, but usually doesn't. Industry tends to create expensive systems-management software that is bad for humans and often worse for machines.

The future of enslaving humanity by robots requires that we become smarter about how we build and adopt systems management software today, in order to use our vast armies of machines as efficiently as possible.

This talk will look at a model- and data-driven approach to managing machines, citing numerous examples from Puppet, a system that actually obeys these properties. We’ll cover why you should look at your datacenter in terms of the needs of your data and your other software, and how this inversion of approach is the eventual key to building SkyNet by 2014.

Cloud Computing with the Simple Cloud API

Doug Tidwell, IBM

The Simple Cloud API is a project sponsored by several leading vendors (Zend, GoGrid, IBM, Microsoft, Nirvanix, and Rackspace). In this session we’ll take a look at how to use the API to access different kinds of cloud services in an open, flexible way.

Most cloud APIs require programmers to think about arcane details instead of business logic. The Simple Cloud API lets you write one application that runs with multiple cloud vendors, despite the differences in their APIs. This session will show you how to use it to write elegant, flexible, business-oriented code that insulates your application from the APIs and wire formats underneath.

We’ll wrap up the session with a discussion of how you can add support for other providers by implementing the Simple Cloud API yourself.
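The core idea the abstract describes is a vendor-neutral interface with per-provider adapters behind it. The Simple Cloud API itself is a PHP library; the following is only a conceptual sketch in Python of that adapter pattern, and every name in it (`StorageAdapter`, `InMemoryAdapter`, `backup`) is illustrative rather than part of the real API.

```python
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Common storage interface; one implementation per cloud vendor."""
    @abstractmethod
    def store(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def fetch(self, key: str) -> bytes: ...

class InMemoryAdapter(StorageAdapter):
    """Stand-in for a real vendor adapter (S3, Nirvanix, Rackspace, ...)."""
    def __init__(self):
        self._items = {}
    def store(self, key: str, data: bytes) -> None:
        self._items[key] = data
    def fetch(self, key: str) -> bytes:
        return self._items[key]

def backup(adapter: StorageAdapter, key: str, payload: bytes) -> bytes:
    # Business logic is written once, against the interface only;
    # swapping vendors means swapping the adapter, not this code.
    adapter.store(key, payload)
    return adapter.fetch(key)
```

Implementing the API for a new provider, as the wrap-up discussion covers, amounts to writing another adapter class against the same interface.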


SESSION 2: MEDIA ON THE WEB (Thursday, 1:30–3:00 PM, Room 306B)

Implementing the Media Fragments URI Specification

Davy Van Deursen, Ghent University IBBT
Raphaël Troncy, EURECOM
Erik Mannens, Ghent University IBBT
Silvia Pfeiffer, Vquence
Yves Lafon, W3C
Rik Van de Walle, Ghent University IBBT

In this paper, we describe two approaches to implementing the W3C Media Fragments URI specification, which is currently being developed by the Media Fragments Working Group. The group’s mission is to address media fragments on the Web using Uniform Resource Identifiers (URIs). We describe two scenarios to illustrate these implementations. More specifically, we show how User Agents (UAs) will either be able to resolve media fragment URIs without help from the server, or will make use of a media-fragments-aware server. Finally, we present some ongoing discussions and issues regarding the implementation of the Media Fragments specification.
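To make the first scenario concrete, a user agent resolving a fragment like `video.ogv#t=10,20` client-side just parses the fragment and seeks within the already-addressable resource. Here is a minimal sketch of that parsing step for the temporal dimension only; it handles plain NPT seconds (`t=10,20`, `t=npt:5`) and deliberately ignores the spec's other dimensions (`xywh`, `track`, `id`) and time formats.

```python
from urllib.parse import urlparse

def parse_temporal_fragment(uri: str):
    """Return (start, end) seconds from a Media Fragment URI's #t=... part,
    e.g. 'video.ogv#t=10,20' -> (10.0, 20.0). End is None if open-ended.
    Covers only the plain-seconds NPT subset of the spec."""
    fragment = urlparse(uri).fragment            # e.g. "t=10,20"
    for part in fragment.split("&"):
        if part.startswith("t="):
            value = part[2:]
            if value.startswith("npt:"):         # optional NPT prefix
                value = value[4:]
            start, _, end = value.partition(",")
            return (float(start) if start else 0.0,
                    float(end) if end else None)
    return None                                  # no temporal dimension
```

In the second scenario the same parsing happens on a media-fragments-aware server, which then serves only the requested byte ranges instead of the whole resource.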

Exposing Audio Data to the Web: an API and Prototype

David Humphrey, Seneca College
Corban Brook, Canadian Water Network
Alistair MacDonald, Bocoup

The HTML5 specification introduces the audio and video media elements, and with them the opportunity to dramatically change the way we integrate media on the web. The current API provides ways to play and get limited information about audio and video, but gives no way to programmatically access or create such media. We present an enhanced API for these media elements, as well as a working Firefox prototype, which allows web developers to read and write raw audio data. We will demonstrate how this new audio data API can be leveraged to improve web accessibility, analyze audio streams, build in-browser synthesizers and instruments, process digital signals, create audio-based games, and drive animated visualizations. In tandem, we will explore the code web developers need in order to work with audio data in JavaScript and implement audio algorithms such as the Fast Fourier transform (FFT). Finally, we will entertain further possibilities that such an API would provide, such as text-to-speech, speech-to-text analysis, “seeing” 3D using sound, etc.
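The analysis step the abstract mentions — turning a block of raw samples into a frequency spectrum — is language-agnostic, even though the API itself is JavaScript. As a sketch of what "analyze audio streams" means, here is a naive discrete Fourier transform in Python; real code would use an O(n log n) FFT, but the output is the same.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive O(n^2) discrete Fourier transform over one block of audio
    samples, returning the magnitude of each frequency bin."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# An 8-sample pure sine completing one cycle: its energy concentrates
# in bin 1 (and the mirrored bin n-1), with the other bins near zero.
tone = [math.sin(2 * math.pi * t / 8) for t in range(8)]
mags = dft_magnitudes(tone)
```

With raw sample access in the browser, the same computation drives visualizers, tuners, and the other applications the talk demonstrates.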

Invited Talk: HTML5: Where Are We Now?

Mark Pilgrim, Google

A year ago, Vic Gundotra stood up in front of 4,000 Google I/O attendees and announced that “HTML5 is here.” Where is HTML5 now? I’ll revisit the five HTML5 features from the Google I/O keynote (canvas, video, geolocation, appcache, and web workers) and look at current browser support and Google’s roadmap for further implementation in Chrome and Chrome OS.


SESSION 3: SOCIAL WEB (Thursday, 3:30–5:00 PM, Room 306B)

Enabling WebGL

Catherine Leung and Andor Salga, Seneca College of Applied Arts and Technology

WebGL leverages the power of OpenGL to present accelerated 3D graphics on a web page. The ability to put hardware-accelerated 3D content in the browser will provide a means for creating new web-based applications that were previously the exclusive domain of the desktop environment, and will allow features that standalone 3D applications do not have. While WebGL succeeds in bringing the power and low-level API of OpenGL to the browser, it also demands a lot from web developers, who are used to the DOM and JavaScript libraries like jQuery. This talk will look at how mid-level APIs can help web developers create unique 3D content that is more than a duplicate of a standalone desktop application on a web page. We will present one such web application, named Motionview and built with C3DL, that provides a new means for artists and motion-capture studios to communicate with each other. We will also highlight some upcoming project ideas that make use of 3D browser technology in ways that would not have been possible in a desktop environment.

The Spoken Web – Software Development and Programming through Voice

Arun Kumar, Sheetal K. Agarwal and Priyanka Manwani, IBM Research India

It has been a constant aim of computer scientists, programming-language designers, and practitioners to raise the level of programming abstractions and bring them as close to the user’s natural context as possible. These efforts began with the transition from machine-code programming to assembly language, then to high-level procedural languages, followed by object-oriented programming. Nowadays, service-oriented software development and composition are the norm.

There have also been notable efforts, such as the Alice system from CMU, to simplify the programming experience through the use of 3D virtual worlds. The holy grail has been to enable non-technical users, such as children, to understand and pick up programming and software development easily. We present a novel approach to software development that lets people use their voice to program, or to create new software through composition. We demonstrate some basic programming tasks achieved simply by talking to a system over an ordinary phone. Programs constructed by talking can be created in the user’s local language and do not require IT literacy, or even literacy, as a prerequisite. Our field experiences with such a voice-driven interface for programming tasks have been encouraging. We believe this approach will have a deep impact on software development, especially the development of web software, in the very near future.

“Follow Me”: A Web-based, Location-sharing Architecture for Large, Indoor Environments

Polychronis Ypodimatopoulos and Andrew Lippman, MIT Media Laboratory

We leverage the ubiquity of Bluetooth-enabled devices and propose a decentralized, web-based architecture that allows users to share their location by following each other in the style of Twitter. We demonstrate a prototype, deployed in a large building, that generates a dataset of detected Bluetooth devices at a rate of roughly 30 new devices per day, including the location where each device was last detected. Users then query the dataset using their unique Bluetooth ID and share their current location with their followers by means of unique URIs that they control. Our separation between producers (the building) and consumers (the users) of Bluetooth device location data allows us to create socially aware applications that respect users’ privacy while limiting the software required on mobile devices to just a web browser.
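The producer/consumer split the abstract describes can be sketched as a tiny data model: building sensors write sightings into a directory, and users query it with their own Bluetooth ID. This is purely illustrative — the class and field names below are hypothetical, not taken from the paper's system.

```python
from dataclasses import dataclass, field

@dataclass
class LocationDirectory:
    """Toy model of the architecture: the building (producer) records
    Bluetooth sightings; users (consumers) query by their own device ID."""
    last_seen: dict = field(default_factory=dict)  # bluetooth_id -> room

    def record_sighting(self, bluetooth_id: str, room: str) -> None:
        # Producer side: a building sensor detected a nearby device.
        self.last_seen[bluetooth_id] = room

    def locate(self, bluetooth_id: str):
        # Consumer side: a user looks up where their device was last seen,
        # then decides whether to share that location via a URI they control.
        return self.last_seen.get(bluetooth_id)
```

Because the directory only answers queries and never pushes data, the privacy decision — whether to publish the returned location to followers — stays with the user.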


PROGRAM UPDATE: SEMANTIC WEB (Friday, 11:00 AM–12:00 PM, Room 306B)

Creating your own ARIA-compliant UI widgets with HTML and XHTML

Dave Raggett, W3C

Developers face plenty of challenges in creating cross-browser dynamic web pages. Search engines provide access to plenty of examples of how to achieve particular user-interface effects, but very few take into account what is needed to support assistive technologies for people with disabilities. This talk will present examples showing how easy it is to create a variety of common user-interface widgets that comply with the latest draft proposals for the W3C WAI-ARIA guidelines using clean markup and simple scripts. This do-it-yourself approach provides an alternative to using big libraries, and gives you greater freedom in picking your own look and feel. (http://www.w3.org/2010/Talks/www2010-dsr-diy-aria/)


SESSION 4: SEMANTIC WEB (Friday, 1:30–3:00 PM, Room 306B)

IBM’s Jazz Integration Architecture: Building a Tools Integration Architecture
and Community Inspired by the Web

Scott Rich, IBM

IBM’s Jazz project started five years ago to build a new tools-integration architecture and platform. As we progressed, we realized that the ideal tool-integration architecture would borrow heavily from the architecture of the World Wide Web. We have developed an integration architecture that builds on RESTful principles, shared resource designs, and linked data. Our platform takes this architecture and provides useful integration services, leveraging Web technologies such as RDF, SPARQL, and OAuth, to enable the delivery of new tools built in this architecture. Last year, we shipped our first tools built purely on this architecture, as well as integrations with many other tools using it.

Encouraged by our initial experience with this architecture, we decided that it could be leveraged across the tools industry. We started an open-source project at open-services.net to form domain groups that standardize resource designs and protocols for different tool domains. We will talk about this experience and the insights we gained in achieving a useful least-common-denominator standard for domains like Change Management. These specifications have proven to be extremely powerful in delivering tool integrations.

TWC Data-Gov Corpus: Incrementally Generating Linked Government Data from Data.gov

Li Ding, Dominic DiFranzo, Alvaro Graves, James R. Michaelis, Xian Li,
Deborah L. McGuinness and James A. Hendler
Rensselaer Polytechnic Institute

Increasingly, US government data are being opened for public access (via websites such as Data.gov). In this paper, we present a Semantic Web-based framework for incrementally generating Linked Government Data (LGD) from online US government datasets. Focusing on the tradeoff between high-quality LGD generation (expensive due to heavy expert collaboration) and massive LGD generation (expensive due to the large amount of data), our work is highlighted by the following features: (i) using minimal conversion to lower the cost of massive LGD generation; (ii) using Web 3.0 technologies (e.g. Semantic MediaWiki) to incrementally enhance the quality of LGD, and to reduce development cost by reusing user-contributed cross-dataset mappings that would otherwise be hardcoded in applications.
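The "minimal conversion" idea can be illustrated with a few lines: each CSV row becomes a subject, each column a property, and each cell a literal, with no up-front ontology mapping. The sketch below is my own illustration of that strategy, not the paper's actual converter, and the `example.org` URIs are placeholders.

```python
import csv
import io

def csv_to_triples(csv_text: str, base_uri: str):
    """Minimal tabular-to-RDF conversion: row -> subject, column ->
    property, cell -> literal. Semantic refinement happens later,
    incrementally, rather than blocking the initial publication."""
    rows = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for i, row in enumerate(rows):
        subject = f"{base_uri}/entry/{i}"
        for column, value in row.items():
            triples.append((subject, f"{base_uri}/prop/{column}", value))
    return triples

data = "state,population\nNY,19378102\n"
triples = csv_to_triples(data, "http://example.org/ds1")
```

Cross-dataset mappings (e.g. declaring that two datasets' `state` properties mean the same thing) can then be layered on top of these raw triples by wiki contributors instead of being hardcoded in each application.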

XRX and Dynamic RESTful Services and the Xorian Server

Kurt Cagle, O’Reilly & XMLToday

REST (Representational State Transfer) is fast becoming one of the most powerful development paradigms on the web. Shifting away from complex APIs, the fundamental notion of REST is the confluence of strict HTTP verbs performing CRUD-type operations and a move toward resource-based, rather than method-based, application development. In essence, RESTful services treat the web as if it were a database, an approach that works remarkably well for databases as holders of model-instance information.

XRX stands for XQuery|REST|XForms, and is a programming paradigm ideally suited to working with XML databases through a RESTful service paradigm. XQuery engines such as MarkLogic and eXist-db increasingly support, or are supported by, servlet interfaces, making it possible to write web applications directly in XQuery. XForms technology, used in conjunction with this, can provide a full development circuit.

Dynamic XRX Services takes this to the next level by providing a framework for associating HTTP verbs with specific XQuery processing pipelines. This association makes it possible to build multiple scripts based upon core objects that can be used to control the presentation of content, how content is filtered, query options, sort options and pagination, with similar pipelines controlling input of resources via PUT or POST.
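The verb-to-pipeline association described above is essentially a dispatch table mapping each HTTP method to an ordered list of processing steps. The sketch below shows that shape in Python rather than XQuery; the step names (`authenticate`, `load_resource`) and the request dictionary are hypothetical, invented only to illustrate the pattern.

```python
def authenticate(request):
    # First pipeline stage: attach an identity to the request.
    request["user"] = request.get("user", "anonymous")
    return request

def load_resource(request):
    # Second stage: resolve the path to a content payload.
    request["body"] = f"resource {request['path']}"
    return request

def run_pipeline(verb, request, pipelines):
    """Dispatch an HTTP verb to its configured pipeline and run each
    stage in order, threading the request through."""
    for step in pipelines[verb]:
        request = step(request)
    return request

# GET gets a read pipeline; PUT/POST would get input-validation stages.
pipelines = {"GET": [authenticate, load_resource]}
response = run_pipeline("GET", {"path": "/docs/1"}, pipelines)
```

In the XRX setting each stage would be an XQuery module, so filtering, sorting, and pagination become reusable stages shared across resource types.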

This talk will examine the open-source Xorian system, written by the author, which implements a Dynamic XRX Services platform on top of eXist-db, and illustrate its applicability to document- and data-oriented applications such as library systems, legal-archive management, historical-archive management, news-entity management, and more generic content management systems.


SESSION 5: INVITED TALK (Friday, 3:30–4:30 PM, Room 306B)

Extending Google Wave

Joe Gregorio, Google

This talk is an overview of Google Wave and how you can use it to have discussions, collaboratively edit documents, plan meetings, and more. Then we’ll dive into the extension mechanisms for Wave. Extensions range from robots, which are web services that act as full participants in a wave, to gadgets, which are small packages of HTML and JavaScript that extend the user interface of a wave.