Here is a website listing my mobile applications on the Apple App Store.
mobilesemantics
Friday, September 28, 2012
Open Eyes. Open Data
The business case for open data (government data in particular) is a hot topic at the moment. There are already several interesting and detailed discussions on the topic, including this thoughtful blog article from Jeni Tennison, who has recently been appointed Technical Director of the Open Data Institute. But in my opinion these discussions are not bold enough. I want to claim that open data is not just a promising new technology trend with the potential to add value to our societies - it plays an essential part in their very survival.
The "finance crisis" of the past few years has shown us the inherent vulnerabilities of both government and the private sector, and destroyed our complacent belief that our way of life will continue indefinitely. Suddenly we found ourselves in a "pathless wood".
The way back to prosperity is not yet clear. But one thing is clear - the solution is going to involve efficient and intelligent government. Yet in a beast as complex and distributed as government can be, how is this intelligence to be achieved? Paradoxically, I believe the answer can be seen in a claim about human consciousness put forward by the Nobel Prize-winning biologist and neuroscientist Francis Crick and his colleague Christof Koch.
Crick believes that consciousness (visual consciousness in particular) evolved in complex organisms by necessity, because their range of behavioral responses to their complex, dynamic environments could not be supported by simple hard-wired responses. Simple organisms like frogs can get by with unconscious reflexes which, for example, make them snap at any small, prey-like object. But more complex organisms with wider behavioral repertoires would need a proliferation of these dedicated responses to cope with a growing range of environmental contingencies - clearly an inefficient arrangement. The answer, according to Crick: "Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation ... and to make this interpretation directly available, for a sufficient time, to the parts of the brain that contemplate and plan voluntary motor output, of one sort or another, including speech."
It is time for governments to become conscious. It is time to replace inefficient, isolated and informationally encapsulated response modules with an efficient organism in which information from one part can be used by another part in a seamless, timely and flexible manner. It is time for a single, open, interoperable format for data representation and exchange. It is time to evolve.
The "business case" for open government data is the same as the "business case" for consciousness: survival.
The "finance crisis" of the past few years has shown us the inherent vulnerabilities of both government and the private sector, and destroyed our complacent belief that our way of life will continue indefinitely. Suddenly we found ourselves in a "pathless wood".
The way back to prosperity is not yet clear. But one thing is clear - the solution is going to involve efficient and intelligent government. Yet in a beast as complex and distributed as governments can be, how is this intelligence going to be achieved? Paradoxically, I believe the answer can be seen in a claim about human consciousness, put forward by the Nobel Prize winning biologist and neuroscientist Francis Crick and his colleague Christof Koch.
Crick believes that consciousness (visual consciousness in particular) evolved in complex organisms by necessity, because their range of behavioral responses to their complex, dynamic environments could not be supported by simple hard-wired responses. Simple organisms like frogs can get by with un conscious reflexes which, for example, make it snap at any small prey-like objects. But more complex organisms with wider behavioral repertoires would need a proliferation of these dedicated responses to cope with a growing range of environmental contingencies. Clearly an inefficient arrangement. The answer according to Crick: "Better to produce a single but complex representation and make it available for a sufficient time to the parts of the brain that make a choice among many different but possible plans for action. This, in our view, is what seeing is about. As pointed out to us by Ramachandran and Hirstein (1997), it is sensible to have a single conscious interpretation of the visual scene, in order to eliminate hesitation ..... and to make this interpretation directly available, for a sufficient time, to the parts of the brain that contemplate and plan voluntary motor output, of one sort or another, including speech."
It is time for governments to become conscious. It is time to replace inefficient, isolated and informationally encapsulated response modules with an efficient organism in which information from one part can be used by another part in a seamless, timely and flexible manner. It is time for a single, open, interoperable format for data representation and exchange. It is time to evolve.
The "business case" for open government data is the same as the "business case" for consciousness: survival.
Friday, May 11, 2012
Mobile semantic apps - where are they?
I have begun collecting a list of mobile semantic applications that are available for the iOS and Android platforms. It is on a separate page on this blog. Please suggest any apps I may have missed.
The point of the list is to catalog the actual, usable, "sellable" applications that exist, to see what they do, and to try to describe how they use semantics.
One problem with compiling such a list is that not all semantic applications advertise this openly. Siri, for example, does not mention the word "semantic" in its description. Conversely, there may be applications which claim to be semantic but which actually do little more than keyword extraction.
It seems to me that a "good" mobile semantic application should have at least these properties:
- The semantics should help rather than hinder the user. The app should not simply present 100 possible links for the user to follow.
- The application should present some clear advantages over non-semantic versions. It should be able to do some clever and useful things that simply cannot be done, without semantics, by competing apps of similar functionality. The semantics should make it the go-to app in its domain.
- It should be usable. Nobody (almost nobody) wants to type SPARQL queries into a 4-inch touchscreen! (A sketch of hiding the query language behind a simple interface follows this list.)
- The semantics should be non-trivial. This is probably the hardest one to defend, but I'll give an example of what I have in mind. In the past I have seen research in which linking a keyword (for example) to a DBPedia (Wikipedia) article was considered to be "semantics". But Google Maps links locations to Wikipedia entries, and I wouldn't say Google Maps is a semantic app.
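On the usability point: the query language can live entirely behind the interface. Here is a minimal sketch, assuming the Python SPARQLWrapper library and the public DBpedia endpoint, in which a tap on a map pin is translated into a SPARQL query the user never sees. The function name and the landmark are my own illustration, not taken from any existing app.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

def facts_about_landmark(landmark_name: str) -> list:
    """Turn a tap on a map pin into a SPARQL query the user never sees."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        SELECT ?abstract WHERE {{
            ?place rdfs:label "{landmark_name}"@en ;
                   dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        }}""")
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["abstract"]["value"] for b in results["results"]["bindings"]]

# The user taps a pin labeled "Brandenburg Gate"; the semantics stay
# under the hood.
print(facts_about_landmark("Brandenburg Gate"))
```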
Monday, December 5, 2011
Mobile Semantics, writ large
The call for papers for the "Semantic Web in a Mobile World" special issue of the Journal of Web Semantics lists the following extensive set of topics of interest:
- RDF/Linked Data storage and processing on mobile devices
- Data and information management on mobile devices
- Reasoning on mobile devices
- Mobile indexing and retrieving of multimedia data such as audio, video, images, and text
- Pub-/sub-systems and middleware for mobile semantic applications
- Scalability and performance of semantic mobile technologies
- Mobile semantic user profiling and context modeling
- Mobile semantic cloud computing
- Interoperability of mobile semantic applications
- Browsing semantic data on mobile devices
- Mobile semantic annotation and peer tagging
- Mobile semantic mash-ups
- Mobile semantic multimedia
- Mobile applications for the social semantic web
- Mobile semantic e-learning and collaboration
- Location-aware mobile semantic applications
- Mobile semantic eGovernment applications and services
- Innovative and novel user interfaces for mobile semantic applications
- Development methods and tools for mobile semantic applications
- Privacy and security for mobile semantic devices and applications
- Data sets for the mobile semantic web
What is interesting about this list is that it deals almost exclusively with semantic processing ON the device: storing and processing on the device, information management on the device, semantic browsing on the device, and so on.
But I think this view is too narrow. It ignores the fact that potentially EVERY activity on a mobile device is rich in semantics. Whatever activities one performs on a mobile device take place at a precise point in a constantly changing space-time continuum, and every action can be situated in the context of other activities. Workflows on mobile devices punctuate our daily activities in ways that are completely different from the way we work on our stationary workstations. This rich network of contextual facts qualifies as semantics in a mobile world, even if the device itself does not come loaded with "semantic apps".
This point became very clear in a class I recently taught, where a student group presented their semester assignment. They developed a web application which accepted as input the user's Evernote notes and "semantified" them by adding contextually relevant external information. This involved adding Flickr photos taken nearby, DBPedia facts about entities mentioned in the note, and weather information. The application itself is currently fairly limited, but the potential is obvious. Flickr photos could be selected by date as well as location. Information could be added from social sites, in the way that the iPhone app Roamz already does. But more interestingly, the use of the device itself could also serve as context. Did you just use maps before you made the Evernote note (perhaps you are in an unfamiliar part of town)? Did you call someone straight after (perhaps to tell them about your find)? Did you set up a calendar event, or send an email?
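To make the enrichment flow concrete, here is a minimal sketch in Python. The helper functions are hypothetical stand-ins for the real Flickr, DBPedia and weather APIs the student application called; only the overall shape of the idea is illustrated.

```python
import datetime

# Hypothetical stand-ins for the real Flickr, DBPedia and weather APIs.
def flickr_photos_near(lat: float, lon: float, when: datetime.datetime) -> list:
    return []  # would query the Flickr search API by location and date

def dbpedia_facts_for(text: str) -> list:
    return []  # would extract entities from the note and look them up

def weather_at(lat: float, lon: float, when: datetime.datetime) -> dict:
    return {}  # would query a weather service for that place and time

def semantify(note_text: str, lat: float, lon: float,
              when: datetime.datetime) -> dict:
    """Enrich a note with contextually relevant external information."""
    return {
        "text": note_text,
        "photos": flickr_photos_near(lat, lon, when),
        "facts": dbpedia_facts_for(note_text),
        "weather": weather_at(lat, lon, when),
        # Device context could be added the same way: the app used just
        # before the note was made, a call placed straight after, etc.
    }

note = semantify("Found a great cafe near the old cathedral",
                 52.52, 13.40, datetime.datetime(2011, 12, 5, 14, 30))
print(note)
```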
The point, of course, is that everything we do on a mobile device -- a self-contained ecosystem of vital applications situated in space and time -- is potentially oozing with semantics. If we take this broad view, then mobile semantics is not an emerging, esoteric world of phones silently reasoning over ontologies. Instead, it is a new approach that exploits the wealth of existing applications and data with the power of semantics, both on and off the device itself.
Sunday, November 27, 2011
Siri
The big news in the iPhone 4S seems to be the coming of Siri. While many people were disappointed at the lack of a bigger screen or a brand new exterior to make people drool with envy, some recognized the inclusion of Siri as a game-changing innovation.
It is not the voice recognition capability, but the "semantic recognition capability" that impresses the most. For example, here are three simple questions that Siri can answer (from the "Let's talk iPhone" event):
1) “What is the weather like today?” (Siri answered: “Here’s the forecast for today”),
2) “What is the hourly forecast?” (Siri answered: “Here is the weather for today”), and
3) “Do I need a raincoat today?” (Siri answered: “It sure looks like rain today”).
The first two are probably easy enough to achieve with sophisticated voice recognition alone, but the third is much trickier. Siri has to know that in asking about clothing, you are "really" asking about the weather. But how does she know that?
While the details of Siri's technology are proprietary, Tom Gruber, one of Siri's creators, gives us some brilliant insights in this keynote address. These are the essential points:
- Task oriented
- Context is king
- Precise and limited information space
- Semantic auto-completion / snap-to-grid rather than general intelligence
The key is precise modeling with semantic technologies on a context-aware mobile platform. Mobilesemantics has come of age!
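Here is a minimal sketch of how the raincoat question could snap to the weather domain under this approach. The domain vocabulary is invented for illustration; Siri's actual models are proprietary, so this shows only the general "snap to grid" idea, not Siri's implementation.

```python
from typing import Optional

# An invented micro-vocabulary mapping surface words to task domains.
DOMAIN_VOCABULARY = {
    "weather": {"weather", "forecast", "rain", "raincoat", "umbrella"},
    "calendar": {"meeting", "appointment", "schedule", "remind"},
}

def snap_to_domain(utterance: str) -> Optional[str]:
    """Snap an utterance onto the closest precisely modeled domain."""
    words = {w.strip("?.,!") for w in utterance.lower().split()}
    for domain, vocab in DOMAIN_VOCABULARY.items():
        if words & vocab:
            return domain
    return None  # outside the limited information space: no answer

# "raincoat" is modeled as a weather-domain concept, so the clothing
# question snaps to a weather query - no general intelligence required.
print(snap_to_domain("Do I need a raincoat today?"))  # weather
```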
Hello Mobile Semantics
The growing use of smart mobile devices seems like an irreversible trend. This is creating a need for applications that can seamlessly manage information in a constantly changing physical and information environment. This in turn creates great opportunities for new technologies that offer powerful and flexible knowledge management. Semantic technologies are bound to play a leading role in this.
A leading industry conference for semantic technologies has already recognized this, holding a special event to discuss the possibilities.
The Journal of Web Semantics will have a special issue on the subject.
And of course there is Siri, the creation of Semantic Web guru Tom Gruber.