GTK WebCore

May 12, 2007

GTK-WebCore, a project porting Apple’s WebKit to GTK, is now up and running on my Ubuntu 7 wobbly desktop.

The codebase is still small compared to Moz and others, and the current trunk does not yet have all the features needed to make it feature-compatible.

WebCore

Anyway, the codebase is now building on my devel box, which is really a big task by itself; most of it was setting up the dependencies to get the trunk compiling.

First Class Browser Elements

This is really not an issue; which features go where, and when those features march into the code, is best handled by standards. I’m all for that, simply because it creates stability.

That’s probably why we have projects like gtk-webcore. Innovators, or simply those with plenty of time on their hands, can play with outrageous concepts like making some objects first-class browser elements.

The implications for those who grok these concepts are wide-ranging. You, my dear reader, will probably nod in agreement at how cool these ideas are. There is no doubt about it: gtk-webcore has the potential to provide a big playground, fertile ground for innovators to play in.

First-class elements are basically protected by a standard. Imagine the chaos of not having a standard for how elements get rendered; without one, the browser would never become a platform for users.

There needs to be a balance between those who add elements to the browser and those who protect them. The process alone may not look like a walk in the park, but a product that tracks a moving standard could benefit a lot from it.


Markov, Search, Mo’ Testing

May 11, 2007

I finally got reconnected with a random blogger whom I once blogged about several years ago. His last entry was made in 2004, and I thought he had decided to walk away and quit. Not that I know this person really well; he seemed happy living a retired life outside the country, adjusted to the economic and social conditions he wrote about at the time. So I wondered whether he was still out there blogging, interpreting the things around him according to his own philosophical views. I tried Google without luck; searching his nickname didn’t return anything except his old blogsite. I then ran his text through a Markov chain to see which words stood out from the pack and used those to query the net. Nothing came back. So I finally gave up the search and decided to just send him an email. It was a shot in the dark; “maybe this guy is already dead?” I wondered. I got a reply from him saying he has had a new blogsite up and running since 2004. So I scratched my head again, sat down, and started figuring out what went wrong in my attempt to use the Markov chain, along with other algorithms, to track him down. Now that we’re reconnected, it’ll be nice to read his opinions again, and maybe use his writing to refine the simple application I’m trying to complete.
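
The word-extraction step was roughly along the lines of the sketch below; the file name and the scoring heuristic are made-up illustrations, not the exact code I ran. It builds a first-order Markov chain (a bigram table) over his old posts and pulls out the most characteristic word pairs to paste into a search query.

    import re
    from collections import defaultdict

    def bigram_counts(text):
        """First-order Markov chain over words: count word-to-word transitions."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = defaultdict(lambda: defaultdict(int))
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
        return counts

    def distinctive_pairs(text, top=10):
        """Most frequent word pairs; a crude stand-in for 'words that stand out'."""
        scored = [(n, cur, nxt)
                  for cur, nexts in bigram_counts(text).items()
                  for nxt, n in nexts.items()]
        scored.sort(reverse=True)
        return [(cur, nxt) for n, cur, nxt in scored[:top]]

    if __name__ == "__main__":
        # old_posts.txt is a hypothetical dump of his pre-2004 entries.
        text = open("old_posts.txt").read()
        for cur, nxt in distinctive_pairs(text):
            print("%s %s" % (cur, nxt))   # feed these pairs into a web search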

Midrange Setup Using PC Thin Clients

In the olden days, big iron was the only game and IBM supplied the monsters. Unix, on the other hand, came in midrange sizes sporting a nice little OS. I remember it took me quite a while to figure out crontab, though; I’m sure the programmer who wrote it was stoned. A terse little man page was all you got to carry out the task, and being macho with a man page is not what I would call fun.

Looking back, it would be nice to re-create the environment I once worked in, a throwback to the days of the old AT&T SVR4 setup, by putting together a bunch of old PCs as thin clients connected to an Ubuntu server. I hear the LTSP project is into this kind of thing, so I’ll head over there and look for more info. At the moment, the current setup is just desktops connected to a hub.

Desktop Webservice

It looks like the idea of implementing a webservice specifically for the desktop could become a hit. I really like the idea of connecting to a desktop as if it were just another website with a webservice. That is a new level of simplification; factoring out the differences and keeping only the common parts really makes sense. Another interesting piece is putting Avahi to work alongside the webservice. The implications of those two working together could mean more toys to play with.
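
As a sketch of what I mean, assuming the avahi-utils package is installed and using a made-up service name and port, the announcement side can be as small as shelling out to avahi-publish while the webservice is up:

    # Announce the desktop webservice over mDNS so other boxes on the LAN can
    # discover it. Assumes avahi-utils; the service name and port are made up.
    import subprocess

    PORT = 8642  # wherever the desktop webservice happens to listen

    # avahi-publish keeps the registration alive for as long as it runs, so
    # hold on to the process and terminate it when the service goes down.
    announce = subprocess.Popen(
        ["avahi-publish", "-s", "my-desktop-webservice", "_http._tcp", str(PORT)])
    try:
        input("service announced; press Enter to withdraw it\n")
    finally:
        announce.terminate()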

GEGL

Nice. I just took my first dive into GEGL on the Edgy distro.


One Terabyte Load-And-Query Perf Test

May 7, 2007

Let me start with how much space one terabyte is. According to this webpage, one terabyte is about 1024 gigabytes; that works out to four 256GB disks if you buy them from BestBuy, or two of the 512GB disks currently selling at Fry’s. For a home enthusiast/hobbyist, buying two 512GB disks would be the right choice for a homebrew PC box. Stick them in the two bays, plug in the cables, and let Ubuntu take care of it. That’s pretty much it, squared away.

512GB Disk

My next point is about data sourcing. The web is basically a network of data, a vast collection of whatever-you-wanna-call-it sitting right there. It’s just there, wow! Imagine the wealth of information you can get from that vast sea, ocean, or even celestial data space. Man, it is simply awesome, mind-blowing just to think about that amount of data. So yeah, data is up there waiting to be mined. That is the source: pure, unadulterated, wide-open wealth of knowledge right at your doorstep.

Here’s my third point, which ties the first paragraph to the second: a data processing system on your home machine. All I’ve got is one terabyte of empty space, waiting to be populated with data collected from the Great Web. I’m guessing one terabyte is enough to perform a simple experiment and generate a very interesting report, which may or may not have any value to anyone except me. The resulting output definitely has huge potential, because I believe in the truism that “the perfect data is the one you have never seen yet.” Casting a wide net over the web and hauling the catch into one terabyte of space for processing will surely capture that hidden gem. The most important part of the process is this thing called synthesis, which refines it all into a cleaner version.

This is my closing for this entry. Some of the tools are already in place; I got them working yesterday, enough to proceed and carry on to the next level of testing. I may have to cough up some dough for the 1024GB of disk, though, as it is not cheap; a 500GB disk is still pretty expensive compared to a 160GB one. It is definitely quite an investment for the small experiment I’d like to perform. Drive and Redland will be the ones doing the heavy lifting.
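
For the record, my rough expectation of the load-and-query part, built on Redland’s Python bindings (the python-librdf package on Ubuntu), looks like the sketch below; the storage options, file name, and query are placeholders rather than settled choices.

    # Load-and-query sketch on top of Redland's Python bindings.
    import RDF

    # A BerkeleyDB-backed hash store, so the triples live on the big disk.
    storage = RDF.Storage(storage_name="hashes", name="webdata",
                          options_string="new='yes',hash-type='bdb',dir='/data/rdf'")
    model = RDF.Model(storage)

    # Load: pull one harvested RDF/XML file into the store.
    parser = RDF.Parser(name="rdfxml")
    parser.parse_into_model(model, "file:./harvested.rdf")

    # Query: a throwaway SPARQL query, just enough to time the round trip.
    query = RDF.Query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10",
                      query_language="sparql")
    for row in query.execute(model):
        print("%s %s %s" % (row["s"], row["p"], row["o"]))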


Ubuntu Desktop, Webservices, DOM, RDF

May 6, 2007

MyDesktop

Setting up the webservice wasn’t easy. It took me a good number of hours to figure out how this thing would land in a user’s home directory. The concept is simple: a separate HTTP server specifically for the desktop provides good separation, a layer apart from other HTTP programs.

A new directory was created to house the webservice server, located at ~/webservice. The test webservice program finally ran in the late afternoon. One problem I did encounter was correctly setting up an Ubuntu launcher icon capable of starting the server. I tried many times without success, though the server came up without any problems when started from inside a shell.
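
For reference, the test program was not much more than the sketch below; the port number and the idea of serving straight out of ~/webservice are just my own arrangement, nothing standard.

    # Minimal per-user desktop webservice rooted at ~/webservice.
    import os
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    SERVICE_DIR = os.path.expanduser("~/webservice")
    PORT = 8642  # arbitrary local port for the desktop webservice

    os.chdir(SERVICE_DIR)  # serve files out of the webservice directory
    httpd = HTTPServer(("127.0.0.1", PORT), SimpleHTTPRequestHandler)
    print("desktop webservice on http://127.0.0.1:%d/" % PORT)
    httpd.serve_forever()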

That’s only one piece of the puzzle in place. The remaining pieces are still out there in the wild, but they will be added later, time permitting.

Design

Basically, here is what I’m after.

  1. A desktop webservice – provides a set of services covering the desktop (sketched below).
  2. An object model – exposed to JavaScript and the CLR.
  3. An Entity-like client – code resides behind a URL, delivered via HTTP, then instantiated by a browser-like program, similar to Entity.
  4. RDF enabled – an agent residing behind the server, connected to the object model and reachable via JavaScript and the CLR.
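
To make that first item a little more concrete, here is a rough sketch of how the desktop webservice could expose an object model to JavaScript clients over plain HTTP; the paths, port, and the tiny service table are placeholders for thinking out loud, not settled design.

    # Sketch: the desktop webservice publishing a small object model as
    # JSON over HTTP. Paths, port, and the service table are placeholders.
    import json
    import platform
    from http.server import HTTPServer, BaseHTTPRequestHandler

    # The "object model": plain Python callables published under /services/<name>.
    SERVICES = {
        "hostname": lambda: platform.node(),
        "uptime":   lambda: open("/proc/uptime").read().split()[0],
    }

    class DesktopService(BaseHTTPRequestHandler):
        def do_GET(self):
            name = self.path.strip("/").split("/")[-1]
            if not self.path.startswith("/services/") or name not in SERVICES:
                self.send_error(404, "unknown service")
                return
            body = json.dumps({name: SERVICES[name]()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8642), DesktopService).serve_forever()

A JavaScript or CLR client would then just issue an HTTP GET against http://127.0.0.1:8642/services/hostname and parse the JSON reply.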