- RDF/Linked Data storage and processing on mobile devices
- Data and information management on mobile devices
- Reasoning on mobile devices
- Mobile indexing and retrieving of multimedia data such as audio, video, images, and text
- Pub-/sub-systems and middleware for mobile semantic applications
- Scalability and performance of semantic mobile technologies
- Mobile semantic user profiling and context modeling
- Mobile semantic cloud computing
- Interoperability of mobile semantic applications
- Browsing semantic data on mobile devices
- Mobile semantic annotation and peer tagging
- Mobile semantic mash-ups
- Mobile semantic multimedia
- Mobile applications for the social semantic web
- Mobile semantic e-learning and collaboration
- Location-aware mobile semantic applications
- Mobile semantic eGovernment applications and services
- Innovative and novel user interfaces for mobile semantic applications
- Development methods and tools for mobile semantic applications
- Privacy and security for mobile semantic devices and applications
- Data sets for the mobile semantic web
What is interesting about this list is that it deals almost exclusively with semantic processing ON the device: storing and processing on the device, information management on the device, semantic browsing on the device, and so on.
But I think this view is too narrow. It ignores the fact that potentially EVERY activity on a mobile device is rich in semantics. Whatever one does on a mobile device takes place at a precise point in a constantly changing space-time continuum, and every action can be situated in the context of other activities. Workflows on mobile devices punctuate our daily activities in ways that are completely different from the way we work at our stationary workstations. This rich network of contextual facts qualifies as semantics in a mobile world, even if the device itself does not come loaded with "semantic apps".
This point became very clear in a class I recently taught, where a student group presented their semester assignment. They had developed a web application which accepted the user's Evernote notes as input and "semantified" them by adding contextually relevant external information: Flickr photos taken nearby, DBpedia facts about entities mentioned in the note, and weather information. The application itself is currently fairly limited, but the potential is obvious. Flickr photos could be selected by date as well as location. Information could be added from social sites, in the way that the iPhone app Roamz already does. But more interestingly, recent use of the device could also serve as context. Did you use maps just before you made the Evernote note (perhaps you are in an unfamiliar part of town)? Did you call someone straight after (perhaps to tell them about your find)? Did you set up a calendar event, or send an email?
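The core of such a mashup is a small enrichment pipeline. Here is a minimal sketch of how it might be structured; all names (`semantify`, `EnrichedNote`, and the injected service callables) are my own hypothetical placeholders, not the students' actual code, and the real entity extraction, Flickr, and DBpedia calls are left as pluggable functions:

```python
# Hypothetical sketch of a note-"semantification" pipeline.
# The concrete services (Evernote input, Flickr photo search,
# DBpedia SPARQL endpoint) are injected as callables, so this
# skeleton runs without any network access.

from dataclasses import dataclass, field

@dataclass
class EnrichedNote:
    text: str
    photos: list = field(default_factory=list)  # e.g. nearby Flickr photos
    facts: dict = field(default_factory=dict)   # entity -> DBpedia fact(s)

def dbpedia_abstract_query(entity: str) -> str:
    """Build a SPARQL query asking DBpedia for the English abstract of an entity."""
    return (
        "SELECT ?abstract WHERE { "
        f"<http://dbpedia.org/resource/{entity}> "
        "<http://dbpedia.org/ontology/abstract> ?abstract . "
        'FILTER (lang(?abstract) = "en") }'
    )

def semantify(note_text, extract_entities, nearby_photos, lookup_facts):
    """Enrich a plain note with contextual data from the injected services."""
    note = EnrichedNote(text=note_text)
    note.photos = nearby_photos()                  # photos by location (and date)
    for entity in extract_entities(note_text):     # named entities in the note
        note.facts[entity] = lookup_facts(entity)  # e.g. via the SPARQL query above
    return note
```

The dependency injection is the point: swapping `nearby_photos` for a device-context source (recent map lookups, calls, calendar events) would extend the same skeleton to the broader notion of context discussed below.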
The point, of course, is that everything we do on a mobile device -- a self-contained ecosystem of vital applications situated in space and time -- is potentially oozing with semantics. If we take this broad view, then mobile semantics is not an emerging, esoteric world of phones silently reasoning over ontologies. Instead, it is a new approach to exploiting the wealth of existing applications and data with the power of semantics, both on and off the device itself.