GTK WebCore

May 12, 2007

GTK-WebCore, a project to port Apple's WebKit to GTK, is now up and running on my wobbly Ubuntu 7 desktop.

The codebase is still small compared to Moz and others, and the current trunk does not yet have all the features needed for feature parity.


Anyway, the codebase now builds on my devel box, which was a big task in itself: setting up the dependencies to get the trunk compiling.

First Class Browser Elements

Which features go where, and when those features come marching into the code, is really best handled by standards. I'm all for that, simply because it creates a stable foundation.

That's probably why we have projects like gtk-webcore. Innovators, or simply those with time on their hands, can play with outrageous concepts like making some objects first-class browser elements.

The implications for those who grok these concepts are wide-ranging. You, my dear reader, will probably nod in agreement at how cool these ideas are. There is no doubt about it: gtk-webcore has the potential to provide a big playground, fertile ground for innovators to play in.

First-class elements are basically protected by a standard. Imagine the chaos of not having a standard for how elements get rendered; the browser would never become a platform users could rely on.

There needs to be a balance between those who add elements to the browser and those who protect them. The process may not look like a walk in the park, but the resulting product, built on an evolving standard, could benefit a lot.


NLP Markov: Adding Genitive, Dative And Ablative Cases

May 8, 2007

The genitive, dative and ablative are grammatical cases that mark the relationships of nouns to verbs and to other nouns. These cases form the bulk of the written and coded literature published on the internet. Buried in these paragraphs are grammatical cases supporting an idea, an intangible concept, an emotional artifact.

Classifying these embedded grammatical constructs requires a good number of tools still waiting to be developed. Some of those tools are probably unknown at the time of this writing and will only be found later.

One approach is a book-catalog method, which hasn't been designed or implemented yet. It involves deconstructing a document into several basic forms, for example outline and timeline formats, then placing them in a catalog. Each node in the timeline is a taggable item to which elements can attach.

The Markov chain algorithm has proven very useful in this regard, as I was able to set up a probability function for a given lexeme. Granularity is currently at the word level, and it would be interesting to see whether the algorithm performs well at the phrase level, where these three grammatical cases become significant.
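Roughly, the word-level probability function can be sketched like this (a minimal bigram model; the function name and toy corpus are my own illustration, not production code):

```python
from collections import Counter, defaultdict

def word_level_model(text):
    """Count word-to-word transitions, then normalize into probabilities."""
    words = text.split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    # P(next | current) = count(current -> next) / count(current -> anything)
    return {
        current: {nxt: n / sum(followers.values())
                  for nxt, n in followers.items()}
        for current, followers in counts.items()
    }

model = word_level_model("the cat sat on the mat and the cat slept")
print(model["the"])  # 'cat' follows 'the' twice, 'mat' once
```

Each key in the returned table is a lexeme, and its value is the probability distribution over the words observed to follow it.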

One example of a Markov chain implementation is the Google Toolbar. Not too long ago I noticed a new behavior in the search dropdown box: I type in a word and below it appears a list of related words. These related words, I would imagine, are generated by a Markov chain. What I found interesting about the list was the number of words per row. I can't tell what that implies, but it does seem to mean something big.

There is still a disconnect between the output of a Markov algorithm and "phrase-ology", in the sense that the algorithm depends on another variable which I have not identified at the moment.

Solving for the unknown would mean extending the parameters to N, covering a subset of the entire phrase. Tagging may become optional in that case, though.
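Extending the parameters to N amounts to keying the transition table on a tuple of the previous N words instead of a single word, so a whole phrase fragment becomes the state. A sketch under that assumption (order 2 shown; again illustrative names):

```python
from collections import Counter, defaultdict

def ngram_model(text, n=2):
    """Transition counts keyed on the previous n words (the 'state')."""
    words = text.split()
    counts = defaultdict(Counter)
    for i in range(len(words) - n):
        state = tuple(words[i:i + n])      # the previous n words
        counts[state][words[i + n]] += 1   # the word that follows them
    return counts

counts = ngram_model("in the house of the lord in the house of cards", n=2)
print(counts[("in", "the")])  # Counter({'house': 2})
```

With n large enough to span a genitive, dative or ablative construction, the state itself carries the case-marked phrase, which is where explicit tagging could become optional.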

This is definitely an exciting experiment worth looking into. The question is whether to apply the Markov algorithm as a one-brush solution, or to treat it as one tool in a toolbox, combining it with other methods to achieve a finer, richer solution.



Ubuntu Desktop, Webservices, DOM, RDF

May 6, 2007


Setting up the webservice wasn't easy. It took me a good number of hours to figure out how this thing would land in a user's home directory. Running a separate HTTP server specifically for the desktop provides good separation, a layer between it and other HTTP programs.

A new directory was created to house the webservice server, located at ~/webservice. The test webservice program finally ran in the late afternoon. One problem I did encounter was setting up an Ubuntu launcher icon capable of starting the server. I tried many times without success, though the server came up without any problems when started from inside a shell.

That's only one piece of the puzzle in place. The remaining pieces are still out there in the wild, but they will be added later, time permitting.


Basically, here is what I'm after:

  1. A desktop webservice – provides a set of services covering the desktop.
  2. An object model – exposed to JavaScript and the CLR.
  3. An Entity-like client – code resides behind a URL, is delivered via HTTP, then instantiated by a browser-like program, similar to Entity.
  4. RDF-enabled – an agent residing behind the server, connected to the object model and reachable via JavaScript and the CLR.
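As a sketch of item 1, and of the kind of data item 2 would surface to JavaScript, here is one candidate "desktop service" serialized as JSON (the endpoint name and fields are assumptions for illustration, not a spec):

```python
import getpass
import json
import os
import platform

def desktop_info():
    """One candidate desktop service: expose basic session facts as a dict."""
    return {
        "user": getpass.getuser(),
        "home": os.path.expanduser("~"),
        "host": platform.node(),
    }

# Serialized as JSON, this is what browser-side script would receive from,
# say, GET /desktop/info on the local webservice.
payload = json.dumps(desktop_info())
print(payload)
```

A page script would then fetch this over HTTP from the localhost server and work with the parsed object, which is the bridge between items 1 and 2 above.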