Monday, December 5, 2011

Mobile Semantics, writ large

The call for papers for the "Semantic Web in a Mobile World" special issue of the Journal of Web Semantics lists the following extensive set of topics of interest.

  • RDF/Linked Data storage and processing on mobile devices
  • Data and information management on mobile devices
  • Reasoning on mobile devices
  • Mobile indexing and retrieving of multimedia data such as audio, video, images, and text
  • Pub-/sub-systems and middleware for mobile semantic applications
  • Scalability and performance of semantic mobile technologies
  • Mobile semantic user profiling and context modeling
  • Mobile semantic cloud computing
  • Interoperability of mobile semantic applications
  • Browsing semantic data on mobile devices
  • Mobile semantic annotation and peer tagging
  • Mobile semantic mash-ups
  • Mobile semantic multimedia
  • Mobile applications for the social semantic web
  • Mobile semantic e-learning and collaboration
  • Location-aware mobile semantic applications
  • Mobile semantic eGovernment applications and services
  • Innovative and novel user interfaces for mobile semantic applications
  • Development methods and tools for mobile semantic applications
  • Privacy and security for mobile semantic devices and applications
  • Data sets for the mobile semantic web

What is interesting about this list is that it deals almost exclusively with semantic processing ON the device: storing and processing on the device, information management on the device, semantic browsing on the device, and so on.

But I think this view is too narrow. It ignores the fact that potentially EVERY activity on a mobile device is rich in semantics. Whatever one does on a mobile device takes place at a precise point in a constantly changing space-time continuum, and every action can be situated in the context of other activities. Workflows on mobile devices punctuate our daily activities in ways that are completely different from the way we work at our stationary workstations. This rich network of contextual facts qualifies as semantics in a mobile world, even if the device itself does not come loaded with "semantic apps".

This point became very clear in a class I recently taught, where a student group presented their semester assignment. They had developed a web application which accepted a user's Evernote notes as input and "semantified" them by adding contextually relevant external information: Flickr photos taken nearby, DBpedia facts about entities mentioned in the note, and weather information. The application itself is currently fairly limited, but the potential is obvious. Flickr photos could be selected by date as well as location. Information could be added from social sites, in the way that the iPhone app Roamz already does. More interestingly, the use of the device itself could also serve as context. Did you just use maps before you made the Evernote note (perhaps you are in an unfamiliar part of town)? Did you call someone straight afterwards (perhaps to tell them about your find)? Did you set up a calendar event, or send an email?
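To make the idea concrete, here is a minimal sketch of such an enrichment pipeline. All the function names and data below are hypothetical stand-ins: a real implementation (like the students') would call the actual Evernote, Flickr, and DBpedia APIs, whereas this sketch just shows the shape of the "semantify" step.

```python
from dataclasses import dataclass

@dataclass
class Note:
    """A note with the spatio-temporal context every mobile action carries."""
    text: str
    latitude: float
    longitude: float
    timestamp: str

# Hypothetical stand-ins for external services (real code would hit the
# Flickr and DBpedia APIs here).
def nearby_photos(lat, lon):
    return [f"flickr:photo_near_{lat:.2f}_{lon:.2f}"]

def dbpedia_facts(entities):
    known = {"Eiffel Tower": "dbpedia: wrought-iron lattice tower in Paris"}
    return [known[e] for e in entities if e in known]

def semantify(note, entities):
    """Enrich a note with contextually relevant external information."""
    return {
        "note": note.text,
        "photos": nearby_photos(note.latitude, note.longitude),
        "facts": dbpedia_facts(entities),
    }

note = Note("Amazing view from the top!", 48.8584, 2.2945, "2011-12-05T14:00")
enriched = semantify(note, ["Eiffel Tower"])
```

The interesting design point is that the note's location and timestamp, not its text alone, drive the enrichment; the device-usage signals mentioned above (maps just opened, a call just made) would slot into the same `semantify` step as further context.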

The point, of course, is that everything we do on a mobile device -- a self-contained ecosystem of vital applications situated in space and time -- is potentially oozing with semantics. If we take this broad view, then mobile semantics is not an emerging, esoteric world of phones silently reasoning over ontologies. Instead, it is a new approach that exploits the wealth of existing applications and data with the power of semantics, both on and off the device itself.

Sunday, November 27, 2011


The big news in the iPhone 4S seems to be the arrival of Siri. While many people were disappointed at the lack of a bigger screen or a brand new exterior to make people drool with envy, some recognized the inclusion of Siri as a game-changing innovation.

It is not the voice recognition capability, but the "semantic recognition capability" that impresses the most. For example, here are three simple questions that Siri can answer (from the "Let's talk iPhone" event):

1) “What is the weather like today?” (Siri answered: “Here’s the forecast for today”),
2) “What is the hourly forecast?” (Siri answered: “Here is the weather for today”), and
3) “Do I need a raincoat today?” (Siri answered: “It sure looks like rain today”).

The first two are probably achievable with sophisticated voice recognition alone, but the third is a lot trickier. Siri has to know that in asking about clothing, you are "really" asking about the weather. But how does she know that?

While the details of Siri's technology are proprietary, Tom Gruber, one of Siri's creators, gives us some brilliant insights in this keynote address. These are the essential points:
  1. Task oriented
  2. Context is king
  3. Precise and limited information space
  4. Semantic auto completion / snap to grid rather than general intelligence
These are what make Siri work on functionally focused mobile devices, that is, devices which are most likely to be used for a small number of fairly routine tasks. The first point simply reiterates that the tasks you are most likely to want to perform on a mobile platform are limited. Siri is not about the long tail of human activities, but the "fat head", as Tom calls it! It is the second point, context, that makes it easier to guess what the person wants. Where are they? What time is it? What applications are they wanting to use? Answers to these questions help narrow down the space of possibilities. The third point is about bringing in wider data sources by choosing and modeling external information sources that are likely to be relevant to the possible tasks. That is, the interface between external data and internal task descriptions is precise. These external sources can contribute to the process of guessing the user's intention. Putting these pieces together makes it possible to realize the goal of "auto-completing" the user's request with the most appropriate action. It is this "semantic snap-to-grid" that makes Siri appear to understand a request intelligently. Magic.
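The "semantic snap-to-grid" idea can be illustrated with a toy sketch. This is emphatically not how Siri works internally (its technology is proprietary); the task names, keyword sets, and scoring below are all invented. But it shows the principle: snap a free-form request onto a small, fixed set of task models, and let context break ties when the words alone are not enough.

```python
# A deliberately small "fat head" of routine tasks, each with keywords
# drawn from its precise, limited information space. (Hypothetical data.)
TASKS = {
    "weather": {"weather", "forecast", "rain", "raincoat", "umbrella"},
    "calendar": {"meeting", "schedule", "appointment", "tomorrow"},
    "maps": {"directions", "nearest", "where", "restaurant"},
}

def snap_to_task(utterance, context=None):
    """Snap an utterance to the best-matching task model.

    Scores each task by keyword overlap; if no words match at all,
    falls back on the device context (e.g. the app last used).
    """
    words = set(utterance.lower().replace("?", "").split())
    scores = {task: len(words & keywords) for task, keywords in TASKS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0 and context:
        return context  # no lexical match: let context decide
    return best

snap_to_task("Do I need a raincoat today?")  # "raincoat" snaps to "weather"
```

Note that "Do I need a raincoat today?" snaps to the weather task without any general-purpose reasoning about clothing: the limited task space and keyword model do the work, which is exactly the trade-off Gruber describes.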

The key is precise modeling with semantic technologies on a context-aware mobile platform. Mobile semantics has come of age!

Hello Mobile Semantics

The growing use of smart mobile devices seems like an irreversible trend. This is creating a need for applications that can seamlessly manage information in a constantly changing physical and information environment. This in turn creates great opportunities for new technologies that offer powerful and flexible knowledge management. Semantic technologies are bound to play a leading role in this.

A leading industry conference for semantic technologies has already recognized this, holding a special event to discuss the possibilities.

The Journal of Web Semantics will have a special issue on the subject.

And of course there is Siri, the creation of Semantic Web guru Tom Gruber.