somewhere to talk about random ideas and projects like everyone else


Offline Wiki Redux 30 December 2011

There's just something incredibly alluring about the concept of holding the sum of human knowledge with you at all times. Near-ubiquitous connectivity alleviates the need to a certain extent, but the momentary lapses of networking are incredibly corrosive to an information-dependent mentality. Wikipedia never ceases to amaze me and, while I've tried in the past to encapsulate part of its sheer awesomeness, this marks a much more significant attempt.

The differences start even before the data gets to the application. The preprocessing toolchain was entirely rewritten for a multitude of reasons. First of all, it compresses not the entirety of the English Wikipedia, but rather its most popular subset. Two dumps are distributed at the time of writing: the top 1,000 articles and the top 300,000, requiring approximately 10MB and 1GB, respectively. While ostensibly the top 300k articles are far too few to delve deep into the long tail, this meager 1/25th of the whole consistently surprises me with its breadth and depth. The advantage is that at 1GB, it's relatively easy to fit onto any system. The algorithm which strips extraneous content has been made far more sophisticated than the original series of regular expressions, which enables greater compression and less accidentally omitted content.

On the application end, the app has switched from a GWT-compiled LZMA SDK to a speedy, pure JavaScript decoder. This makes page loads significantly faster and allows greater compression ratios, since individual blocks can be made larger (256KB instead of 100KB). It also now uses typed arrays (from the WebGL spec) to further speed things up, such as when sending data to and from the Web Worker thread.
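
For illustration, here's a minimal sketch (not the app's actual code) of handing a compressed block to a worker as a typed array rather than a string; the decoder.js file name and the message format are assumptions for the example.

    // Hypothetical worker file name; the real decoder worker is part of the app.
    var worker = new Worker('decoder.js');

    worker.onmessage = function (e) {
      // e.data would be the decompressed article markup in this sketch.
      console.log('decoded block', e.data);
    };

    function decodeBlock(arrayBuffer) {
      // Structured clone handles typed arrays directly, so no base64/string
      // round-trip is needed when talking to the worker.
      worker.postMessage({ cmd: 'decode', block: new Uint8Array(arrayBuffer) });
    }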

The interface was redesigned with CSS media queries to transition dynamically between different modes in response to the viewing environment. It consists of two regions: a fixed-position, recessed left panel which holds the page title, a search bar, controls and the page outline, and the article content itself. The panel collapses down to a toolbar header automatically when screen real estate is limited, and it uses an Apple-esque noise texture background.

Downloads happen in little units called chunks (they're half a megabyte for the dump file and about four kilobytes for the index). The local file can be built up out of order. While online, every storage operation first checks the virtual file in IndexedDB or the Web SQL database; if the chunk isn't there, it transparently uses an XMLHttpRequest to fulfill the request and caches the result to disk in the respective persistence mechanism. A bitset keeps track of which chunks are already downloaded and which still need to be.
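
Roughly, the bookkeeping looks something like the sketch below. It's a simplification: the names are made up, an in-memory object stands in for the real IndexedDB/Web SQL storage, and the chunk URL scheme is invented for illustration.

    var downloaded = new Uint8Array(1024);  // bitset: one bit per chunk
    var cache = {};                         // stand-in for IndexedDB / Web SQL storage

    function hasChunk(i)  { return (downloaded[i >> 3] >> (i & 7)) & 1; }
    function markChunk(i) { downloaded[i >> 3] |= 1 << (i & 7); }

    function getChunk(i, callback) {
      if (hasChunk(i)) {
        callback(cache[i]);                 // already on "disk": serve it locally
        return;
      }
      var xhr = new XMLHttpRequest();
      // Hypothetical URL scheme; the real app addresses chunks differently.
      xhr.open('GET', 'dump.bin?chunk=' + i, true);
      xhr.responseType = 'arraybuffer';
      xhr.onload = function () {
        cache[i] = xhr.response;            // persist (here: just in memory)
        markChunk(i);
        callback(xhr.response);
      };
      xhr.send();
    }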

http://offline-wiki.googlecode.com/git/app.html


Why the Chrome Web Store is Bad for the Web. 02 June 2011

Note: I've changed a few things that will hopefully make my point a bit clearer.

Apple got it right in 2007.

If you've read any of the other posts on this blog, you're probably under the impression that I'm a devout follower of the Church of Google. Thus it will probably be quite a surprise to read the headline, something which appears downright sacrilegious: it questions the infallibility of the great Google. But I try very hard to maintain some semblance of objectivity and rationality, and this post is about why I think the Chrome Web Store is bad for the web.

The Chrome Web Store is the applications and extensions gallery for Chrome: Google's centralized repository and directory for discovering Chrome-related things. Just from the name, you can probably tell it's quite inspired by the iOS/iTunes/Mac App Store. That's not because Google is unable to innovate (or maybe it is, but I won't take that view); it's probably the result of the huge App Store boom. Not that what Apple did was particularly innovative either, but somehow it managed to secure billions of dollars for the company, and all its competitors quite rationally want a chunk of that. This, however, isn't about improving the state and future of the web, but rather the indulgence of buzzwords. And this post isn't only about the Web Store, but about the entire Chrome applications and extensions system, from distribution to installation and the user experience afterwards.

Introduction.

There are two types of installable web applications in the Chrome Web Store: hosted apps, in other words "glorified bookmarks", and packaged apps. Glorified bookmarks are relatively hard to create, expensive, and add no real functionality. Packaged applications evade the standardized mechanisms for offline web properties and eliminate many of the advantages of web apps in the first place.

Chrome's developer overview for creating installable web apps describes the system as a solution to one rather insignificant problem: permissions escalation, a technical detail that hardly seems important. Put simply, users get annoyed when they're asked to hit "Okay" on a stream of permissions prompts. And so Google's solution is to invent a certain class of web site with different security properties, where all the permissions are rolled into a single prompt.

To users, however, a web app is the solution to a much different user experience problem: they want to hit nice, large, pretty icons to go to the sites they frequently visit. But somehow, the solution Google opted for to create these large clickable bookmarks is quite terrible. The only user-facing purpose of installable applications is the ability to bookmark a site with a large icon, something that Apple got right with iPhone OS in 2007.

Apple got it right.

I love those four words in that order; it feels so sensationalist and rebellious. But before the Cult of Apple leaps on that statement, notice the wording: "Apple got it right". It doesn't strictly mean that whatever Apple is doing now is right, just that what it did was right. In fact, that's exactly what happened: Apple got it right, then changed it, and Google got it wrong.

First, we need to recall the distant year of 2007. It was quite a while ago, and I won't pretend that my memory is that great, but it was a full year before the first beta release of Google Chrome. The iPhone was released with its plethora of eight apps and no ability to install more. The App Store didn't exist, and the closest semblance was Installer.app for jailbroken devices (Cydia came later). A few months later, Apple released a series of updates, and Steve Jobs signaled what he believed to be the future of iPhone applications: the Web. It shouldn't come as a surprise, given that Apple's Mobile Safari was, and likely still is, (more or less) the best browser on any mobile device.

The important aspect is the way these web applications were installed. You went to Mobile Safari and browsed around. You found a web app and used it the way the web was intended: no installation, you just navigate to a URL and start using it. If you found the app useful and/or awesome, you "bookmarked" it. But instead of an actual browser bookmark, you hit the button right below: "Add to Home Screen". It asks you for a name for the application, automatically prepopulated with the document title. You hit "Add", and you now have a nice, shiny icon on your home screen. The site can hide the browser chrome, and it becomes indistinguishable from the normal native application experience.

That app icon is just an image URL specified with a single link tag. It's totally decentralized in every way, and represents the openness and simplicity that simply makes sense for this platform. All a developer needs to do to turn their web site into a fully fledged web application is add a <link rel="apple-touch-icon" href="/customIcon.png"/> to the head section of the page.

Contrast that with what Google requires: creating a Google Checkout account, entering credit card information, navigating to the Chrome Web Store page and clicking several links in the footer to reach the page where you pay the $5 fee for creating an app, creating several icons, copying the manifest.json template and editing some values to point at the icon locations, going to chrome://extensions, enabling developer mode, adding the unpacked app to make sure that it works, then going back to the original directory, zipping it up, and uploading it to the Chrome Web Store, where you have to write a description, add screenshots, reupload an icon, publish, wait ten minutes, and then spam the internet with that link and edit your site's code to point to that page. It's an awful lot to go through just to create a bookmark.
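
For reference, the manifest.json in question is tiny. A minimal hosted-app manifest looks roughly like this (the name and URLs are placeholders, and the fields are from memory, so treat it as a sketch rather than canonical documentation):

    {
      "name": "Example Hosted App",
      "version": "1.0",
      "description": "A glorified bookmark.",
      "app": {
        "urls": ["http://example.com/"],
        "launch": { "web_url": "http://example.com/index.html" }
      },
      "icons": { "128": "icon_128.png" }
    }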

Apple turns evil.

This subtitle is intentionally misleading. I don’t really think Apple’s evil, but that loaded four letter word is much more concise than the more appropriate phrase “Apple adopts a new platform and shifts ideologically to favor a system which is ultimately in conflict with and entirely inapplicable to the web in its current state or in the foreseeable future”.

Apple's prescience about the power of the web was, sadly, a bit ahead of its time. The web technologies that would enable their vision were not yet ready. The second browser wars hadn't really even begun, and the jailbreakers, despite their handicaps, still managed to develop the platform further than the officially sanctioned web developers could. Browsers were too slow, hardware was too slow, there weren't enough features, not enough could be done, and the paradigm was not well understood.

Apple followed the lead set by the jailbreak community and launched its own native application development and distribution system: the App Store. It was a hit, and it and all that it represented soon became a super huge buzzword: centralized, one-click, micropayment-driven, mobile-advertising-funded, indie-developer weekend novelty apps.

Google gets it wrong.

So there was an App Store craze and everyone wanted one, so it logically followed that Google would build an App Store. But the web had no notion of apps. There were web applications, but they weren't rigidly defined as apps. This is where Google got it wrong: the Chrome Web Store needed to sell apps, and had to carve a dichotomy out of the web in order to do so. It created a distinction between web apps and websites where none had existed and none should ever have existed.

The false dichotomy.

Steve Jobs said that on mobile, people want apps, not websites. Before blindly mimicking the concept of apps on another platform, one should probably explore why users like apps over websites. It's because the mobile app offers a better, mobile-optimized interface to whatever they're doing.

Websites aren't generally designed for mobile; they are often slower and can't make use of persistent user interface elements like a tab bar. Apps aren't popular because the App Store exists; rather, it's because those apps provide additional value that users use the App Store to get them.

However, web apps, just like websites, are optimized for normal computers. On the desktop, web apps are no better than web sites, and when web apps really have nothing extra to provide, their respective web stores are useless.

One purported reason for creating the distinction between apps and websites is to give developers the opportunity to charge for the application in the web store. But why should the ability for an author to receive money for his or her respective works be exclusive to web apps? Why not all web sites?

While it's quite clear that anything built for the web that is meant to supplant a desktop application can be considered a "web app", nearly everything else exists in a sort of gray area. Facebook, Twitter and the other social networking sites are predominantly content focused, but have some app-like characteristics, so they could be considered "web apps" too, even though there aren't really desktop equivalents. But what about sites like the New York Times? Pure content sites would logically seem to be the farthest one can get from the concept of an "application", yet there's no obvious place to draw the line. In practice, any web site can be considered a web app.

Since anything can be considered a web app, the Web Store is a mere directory of a certain number of websites. It's a limited subset of the internet with terrible discoverability, restricted to sites whose owner (or a particularly devout fan) is willing to pay $5 so that a subset of users can bookmark the site. It's proprietary: no other site can have quite the same properties as the Web Store, because Google has the Web Store URLs hard-coded into Chrome somewhere. Searching in the Web Store really isn't that great either, with no ability to search reviews, no PageRank, no search operators, and no ability to search within the content of apps. You would figure that if Google were to clone a subset of the internet, at least they would get search right.

This closed, exclusive and excessively tedious process for creating mere bookmarks attacks some of the web’s traditional benefits and ideals.

Packaged applications.

The above sections dealt with how the "glorified bookmarks" are useless and downright harmful. There is a second class of applications which are similar to the former in that they also get a large, prominent, clickable icon, but different in that they actually provide functionality beyond that of ordinary websites. Their virtues are that they tend to work offline and can do certain things that normal web apps cannot. However, it pretty much stops there.

Packaged apps work offline, but their mechanism sidesteps the standardized HTML5 offline system (the application cache). Rather than promoting the use of standards, they promote a proprietary, nonstandard signed zip package.

As they're "packaged", they aren't really "true" web apps, because they don't actually operate in the scope of the web; practically, they are much closer to desktop apps. They have no URLs, and thus can't be linked to, abandoning the very first two letters of HTML and HTTP: "HyperText". One of the greatest things about web sites is that they can be linked to; they almost always have a universal identifier to share with people. It's universally accessible and one of the few things that actually enables inter-site interoperability.

Conclusion.

While the "glorified bookmark" class of applications, which makes up the vast majority of the Chrome Web Store, could quite easily be fixed by implementing something akin to iPhone OS home screen web apps, the "packaged applications" are a bit more interesting. They are the source of the very problem the applications system was meant to resolve: permissions. As things stand, there is no system for handling multiple permissions on the web aside from flooding the screen with infobars, and even that only partially works. What the web needs is a user-friendly, informative, and useful system for granting additional permissions to web sites.

Along with that, the Web Store handles the selling of applications. Accepting money is a two-part process consisting of authentication and payment. Browsers should handle user identity, since they have the resources to do it right in a nuanced, secure, efficient and user-friendly way. Once that's done, payment would be a logical extension. A developer could drop in a Google Checkout widget and have one-click in-app purchases by tying into the secure browser identity system.

The Chrome Web Store should be reduced to a community maintained directory of useful web applications, something like a wiki, and there shouldn’t be a $5 fee to add applications.

Some people have expressed the idea that the Chrome Web Store is useful because it allows Google, a trusted party, to take down dangerous or malicious applications quickly. While this is true, note that the Web Store is not actually the sole means of installing Chrome applications; a malicious party would most likely exploit the alternate channels, and the only way to combat that is to institute a sort of kill switch, much like the kind that iOS, Kindle, and Android already implement.


drag2up 2.0 - drag and drop uploading for all sites 26 December 2010

drag2up is a browser extension I built a few months ago and recently bumped up to 2.0. The basic idea is to use the HTML5 File API and the drag/drop integration that Chrome and Firefox implement to enable uploads to any website by simply dragging and dropping a file from your computer onto the respective site. Compare that to the usual trouble: opening a new tab, navigating to your favorite file provider, waiting for it to load, pressing the browse button, navigating to the folder with your image, pressing "Open", hitting the submit button, waiting for the upload to finish, copying the link, finding the original tab among the mess of tabs that fills your tab bar, and finally scrolling down, selecting the box and pasting the URL. All to share a three megabyte file. drag2up streamlines the process into a single, swift gesture: you drag the file onto the text entry field, and the link is instantly added while the file uploads in the background. If someone navigates to the link before the upload is done, the page uses the Google App Engine Channel API to stream real-time upload status.

It still has a pretty simple user interface that works with zero initial setup. In addition to using gist and imgur for text files and image uploading respectively, it includes many additional services, configurable through a simple drag and drop interface. The new version also sports Firefox support.

New Features

  • Firefox + Chrome
  • Background Uploading
  • imgur, imm.io, ImageShack
  • Flickr, Picasa
  • gist, pastebin, mysticpaste
  • Chemical Servers, DAFK, Hotfile
  • Dropbox, CloudApp
  • Built-in URL shorteners

Right now, the only service that doesn't quite work is Dropbox, as the application hasn't been approved for production status yet. A number of the hosts also don't work with Chrome 8 because of typed arrays and binary XHR issues.

Try out the extension now :) I would really appreciate any and all feedback.

Technical Information

There's some pretty cool stuff going on in the new release. It's really close to the bleeding edge of what's possible on the web and in browsers. On Chrome, the multipart/form-data XHR used to upload files is pieced together with array buffers and typed arrays, stuff from the WebGL specification. The Firefox version is based on the latest beta release of Mozilla Jetpack (hacked slightly so that the resulting xpi works with 3.6 as well as 4.0). Transferring data between the background page and the individual content scripts (or pageMods, in Jetpack's terms) is done with the createBlobURL (also called createObjectURL) function and binary XMLHttpRequest. On older versions of Firefox, FileReader is used to load files into base64-encoded data URLs. Inter-frame communication is done with postMessage and the native JSON serialization and parsing functions. Picasa and Dropbox support are built on JavaScript implementations of OAuth (based on Clip It Good and Chromepad). So, yeah, there's lots of new and super cool stuff in there.
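
As a rough illustration of the multipart-from-typed-arrays idea (a simplified sketch, not drag2up's actual code; the endpoint, field name and file name are made up):

    // Build a multipart/form-data body by hand from raw bytes, then send it.
    function uploadBytes(fileBytes /* Uint8Array */, callback) {
      var boundary = '----drag2upExample' + Date.now();
      var head = '--' + boundary + '\r\n' +
                 'Content-Disposition: form-data; name="file"; filename="upload.bin"\r\n' +
                 'Content-Type: application/octet-stream\r\n\r\n';
      var tail = '\r\n--' + boundary + '--\r\n';

      // Encode the textual parts as bytes and concatenate everything.
      function toBytes(s) {
        var arr = new Uint8Array(s.length);
        for (var i = 0; i < s.length; i++) arr[i] = s.charCodeAt(i) & 0xff;
        return arr;
      }
      var headBytes = toBytes(head), tailBytes = toBytes(tail);
      var body = new Uint8Array(headBytes.length + fileBytes.length + tailBytes.length);
      body.set(headBytes, 0);
      body.set(fileBytes, headBytes.length);
      body.set(tailBytes, headBytes.length + fileBytes.length);

      var xhr = new XMLHttpRequest();
      xhr.open('POST', 'http://example.com/upload', true); // placeholder endpoint
      xhr.setRequestHeader('Content-Type', 'multipart/form-data; boundary=' + boundary);
      xhr.onload = function () { callback(xhr.responseText); };
      xhr.send(body); // XHR2 accepts a typed array directly
    }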

The content script was more or less rewritten in order to support Gmail, which led to some interesting design changes. Firstly, the new version establishes a sort of hierarchy, a distinction between the root frame and its subordinate frames. Each drag event is sent to the top using postMessage, where the root frame decides whether to render the drop targets or to remove them. Once the decision is made, it trickles down to each of its immediate frames, which in turn pass it down to their own subordinate frames. Whenever a frame is created, a loop detects that one has been added and attempts to access its DOM in order to insert a script element that initializes the code. Interestingly, content scripts (in Chrome) are unable to access child frames, so a script needs to be inserted into the immediate page in order to insert a script into a child page. The scripts also set document.__drag2up to ensure that no frame loads itself twice (interestingly, creating an event, dispatching it, and checking whether an event listener calls preventDefault isn't a reliable indicator. I ideally wanted a system that was mostly undetectable by the parent page, but eventually settled for this simpler and more reliable implementation).
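
The message relay is conceptually something like this sketch (the message names and helpers are assumptions; the real extension's protocol differs):

    // Each frame reports drags upward; the root frame decides and trickles
    // the decision back down through every child frame.
    document.addEventListener('dragenter', function () {
      window.parent.postMessage('drag2up_dragstarted', '*'); // hypothetical message name
    }, false);

    window.addEventListener('message', function (e) {
      var msg = e.data;
      if (msg === 'drag2up_dragstarted') {
        if (window === window.top) {
          renderDropTargets();                  // the root makes the call...
          broadcastDown('trickle_showtargets'); // ...and trickles it down
        } else {
          window.parent.postMessage(msg, '*');  // keep bubbling upward
        }
      } else if (msg === 'trickle_showtargets') {
        renderDropTargets();
        broadcastDown(msg);                     // pass along to our own children
      }
    }, false);

    function broadcastDown(msg) {
      for (var i = 0; i < window.frames.length; i++) {
        window.frames[i].postMessage(msg, '*');
      }
    }

    function renderDropTargets() {
      // Stub: the real extension scans for droppable elements here.
    }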

Once the "trickle_reactivate" message (every message happens to be eighteen characters long, because of certain really weird design ideas) is received by all the frames, they loop through the elements on the page, searching for ones that meet the isDroppable criteria. It does not trigger on elements whose width × height is less than 100, so no really tiny boxes. The element needs to be a text input or a textarea that isn't read-only or disabled, or an element whose contents are editable. Then it checks the positioning of the element using document.elementFromPoint twice (once accounting for scrolling and once not, to differentiate fixed positioning from everything else). The drop target is a div with a massive style attribute. It has rounded corners and CSS transitions, and is positioned either absolutely or fixed based on the positioning of the associated input element. It's centered over the element, or completely covers it for smaller areas. When the user hovers over it, the opacity changes (the behavior is actually reversed from the last version; the original faded out slightly when hovered over, which, after some thought, I realized made no sense). Similar to the drop target is a purple settings box in the lower right corner. I'm actually quite displeased with it. It's not a great user experience, but at least it's noticeable. Primarily, it was added because I have no idea how to get the Preferences button to work with Jetpack, and Google manages to bury the Options button somewhere deep within the depths of Chrome's single menu.
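
The isDroppable test described above boils down to roughly this (a sketch of the stated criteria, not the extension's exact code):

    function isDroppable(el) {
      // Skip really tiny boxes.
      if (el.offsetWidth * el.offsetHeight < 100) return false;

      var tag = el.tagName.toLowerCase();
      if (tag === 'textarea' || (tag === 'input' && el.type === 'text')) {
        return !el.readOnly && !el.disabled;
      }
      // Otherwise it has to be an editable region (e.g. Gmail's compose area).
      return el.isContentEditable;
    }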

Once something is dropped, the content script decides whether it was a file from the computer or an image dragged from some other page. If it's an image (the simpler case to deal with), it does something that's actually slightly counterintuitive. Using getData('text/uri-list') isn't reliable because, in many cases, the image is wrapped in a link tag and that only gives the link URL instead of the image. So instead, it reads text/html, inserts it into a temporary div, and pulls out the src attribute, which is then sent to the background page. If the thing that was dropped is a file, it first checks whether the browser supports one of the Blob URL creation methods and, if so, uses that to create a URL. If not, the file is read using FileReader as a base64-encoded data URL and sent to the background page. But not immediately, because the script is often running in an unprivileged environment, so it does a postMessage to the topmost parent, which should be privileged. That privileged content script sends the message on to the even more privileged background page, which initiates the actual uploading process.
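
In sketch form, the drop handling described above looks something like this (simplified; sendToBackground stands in for the real messaging plumbing):

    function handleDrop(e) {
      e.preventDefault();
      var dt = e.dataTransfer;

      if (dt.files && dt.files.length) {
        // A real file from the computer: prefer a Blob URL, fall back to FileReader.
        var file = dt.files[0];
        var createURL = window.URL || window.webkitURL;
        if (createURL && createURL.createObjectURL) {
          sendToBackground({ kind: 'file', url: createURL.createObjectURL(file) });
        } else {
          var reader = new FileReader();
          reader.onload = function () {
            sendToBackground({ kind: 'file', url: reader.result }); // data: URL
          };
          reader.readAsDataURL(file);
        }
      } else {
        // An image dragged from another page: text/uri-list may give the link
        // wrapping the image, so parse the HTML fragment and pull out the src.
        var div = document.createElement('div');
        div.innerHTML = dt.getData('text/html');
        var img = div.querySelector('img');
        if (img) sendToBackground({ kind: 'image', url: img.src });
      }
    }

    function sendToBackground(msg) {
      // Hypothetical stand-in for the postMessage / extension messaging chain.
      window.top.postMessage(msg, '*');
    }

    document.addEventListener('dragover', function (e) { e.preventDefault(); }, false);
    document.addEventListener('drop', handleDrop, false);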

Before going into what the background page actually does, there's the settings page. I'm actually pretty proud of how it turned out. With the large number of hosts supported, I needed a way for the user to select hosts for different types of files. I could have used select boxes, but they're not generally great for image-based concepts, since a company's logo is often more recognizable than its name (odd that I'm saying that, seeing as this blog has neither a distinctive name nor even a favicon). So instead, I built a little grid where the user can drag hosts into boxes on the right. It's made with jQuery UI, because I don't particularly like the drag-and-drop APIs provided in HTML5.

The background page handles the uploading. Basically, a bunch of JavaScript files are loaded into it. The hardest part was, by far, OAuth. If anything, this project has only deepened my dislike of OAuth. And since this post is already insanely long, I'll go and rant about the problems of OAuth anyway. There's already a great article on how terrible an idea it is to use OAuth in any application that runs code on the client. But Photobucket, Picasa and Dropbox still don't understand that (or don't care) and only provide OAuth. That's not even the real problem, though; I just hate OAuth because, even though it's a standard, each service provider has little quirks that take hours to debug and end up being extremely frustrating. Maybe the OAuth gods are angry with me or something, but it's incredibly troublesome.

Try out the extension now :) I would really appreciate any and all feedback.


Weppy Updates Opera, Chrome and Firefox support and simpler usage 09 October 2010

With help from @Frenzie and @paul_irish, I've put together the latest not-yet-versioned release of Weppy, my JavaScript WebP to WebM conversion library, or something of a polyfill for a format that isn't yet part of any specification (HTML5 only lists examples for the image src attribute, such as PNG, GIF, JPEG, APNG, PDF, XML, SVG, SMIL, and MNG). The new release brings some awesome new features that really don't change much and shouldn't really be used in the real world, because most browsers still aren't Firefox, Chrome or Opera - that is, a large portion of the browser market consists of browsers like Safari and IE, either suffering from antiquity (IE6! aah!) or just preferring h264 (IE9 + Safari).

The new release supports Opera. I had never bothered debugging Opera; I figured it was another huge issue that would demand a rewrite (as supporting Firefox had, because the order of object keys isn't preserved, which breaks the EBML result, at least for Firefox's parser, which seems somewhat stricter than Chrome's; is that ffmpeg?). And after some premature optimization (stripping "unnecessary" EBML tags), my code didn't work in Chrome, so I had to revert to an earlier revision. All my testing code was based on file drag-and-drop, and Opera doesn't support that. Until I saw this MozillaZine topic, I didn't care, but it turned out to be a lot easier to fix than I feared.

Part of the solution was getting rid of the canvas stage. Admittedly, the canvas stage was pretty useless once the toDataURL() stage was removed before the first public release, but I didn't feel like deleting code, so it stayed there. Also, the global variable that gets introduced was accidentally named "WebM" when it should be "WebP"; because of the uncreative format naming and the similarity, I hadn't noticed. Not sure why, but it seems to be more stable now.

Chrome will probably add WebP support soon, so the library needs to be future-proof and detect whether the browser supports the WebP format natively. To do that, it creates an Image, sets the src to a data URL of a 4x4 WebP image, and listens for the onload and onerror events, checking that the size is correct and that there were no errors loading it. The routine is expected to error out and is totally untested, as there aren't any browsers that support the feature yet for me to try.
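
That detection routine is roughly the following (a sketch; the base64 payload here is a placeholder rather than a real 4x4 WebP image):

    function detectWebP(callback) {
      var img = new Image();
      img.onload = function () {
        // A browser that truly decoded the image reports the right dimensions.
        callback(img.width === 4 && img.height === 4);
      };
      img.onerror = function () {
        callback(false); // decoding failed: fall back to the JavaScript decoder
      };
      // Placeholder payload; the real check embeds an actual tiny WebP file here.
      img.src = 'data:image/webp;base64,AAAA';
    }

    detectWebP(function (supported) {
      console.log('native WebP support:', supported);
    });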

Another change is that, by default, it will automatically load all the same-origin WebP images (from <img> tags; same-origin because of the limitations of XHR) on the DOMContentLoaded event, so the library is practically drop-in now. In any web page, you can pretty much add <script src="http://antimatter15.github.com/weppy/weppy.js"></script> and, on supported browsers, it should automatically load and replace all WebP images, though that's not something I would really recommend.
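
Conceptually, the drop-in behavior amounts to something like the sketch below; convertImage is a hypothetical stand-in for the library's real entry point, which works differently internally.

    // Hypothetical stand-in for Weppy's real conversion entry point.
    function convertImage(img) {
      // In the real library this fetches the .webp bytes with XHR, rewraps them
      // as WebM, and swaps the image out for playable video content.
    }

    document.addEventListener('DOMContentLoaded', function () {
      var imgs = document.getElementsByTagName('img');
      for (var i = 0; i < imgs.length; i++) {
        var img = imgs[i];
        // Only same-origin .webp images can be fetched with XHR.
        if (/\.webp(\?.*)?$/i.test(img.src) &&
            img.src.indexOf(location.protocol + '//' + location.host + '/') === 0) {
          convertImage(img);
        }
      }
    }, false);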

The demo is the same place it always was: http://antimatter15.github.com/weppy/demo.html

There is also this nifty hack that uses <canvas> to add an alpha channel to the WebP image (adapted from the original JPEG one): http://antimatter15.github.com/weppy/alpha/alpha.html

Also, please follow me on twitter.


Gmail Style HTML5 Drag/Drop 01 October 2010

I'm making a (really awesome) Chrome extension that involves dragging and dropping files. MDC, as usual, has great information on the topic. Gmail does it almost perfectly, with the green file drop target, but all other implementations of this feature suffer from two issues, and Gmail's code is far too obfuscated for any mere mortal to interpret (no doubt thanks to Closure). When implementing a file drop target, two fairly important user experience issues come up. I don't know how Gmail handles them; I just used whatever solution worked.

Firstly, you only want to trigger the drop target when a file is being dragged onto the web app. You don't want a file drop target to appear when someone starts dragging text (fairly simple: check that e.dataTransfer.types includes Files). Even trickier (even Gmail doesn't get this right) is that dragging links also triggers the file drop target, because for some odd reason e.dataTransfer.types then includes the file type as well. There's no way to access the file data while it's being dragged (getData always returns undefined). One thing I still have no idea about is inter-browser-window data transfers.
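
The check looks roughly like this (a sketch of the approach, not my extension's exact code):

    document.addEventListener('dragenter', function (e) {
      var types = e.dataTransfer.types;
      // types is a DOMStringList in some browsers and a plain array in others.
      var hasFiles = types && (types.contains ? types.contains('Files')
                                              : Array.prototype.indexOf.call(types, 'Files') !== -1);
      if (hasFiles) {
        showDropTargets(); // hypothetical helper that renders the overlay
      }
    }, false);

    function showDropTargets() {
      // Stub for illustration; the real extension builds the overlay divs here.
    }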

Secondly, there's the question of how to get the dialog to dismiss itself once the dragged item leaves the viewport. It's pretty tricky because of how the events bubble and don't always go through <body>. The solution I eventually arrived at was to start a timer on dragleave and see whether another drag event fires within fifty milliseconds (a fairly arbitrary amount of time; zero works too, but fifty feels safer).
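
In code, that trick is something like this sketch (simplified; hideDropTargets is a made-up helper):

    var leaveTimer = null;

    document.addEventListener('dragover', function (e) {
      e.preventDefault();
      // Another drag event arrived, so the drag is still inside the window.
      if (leaveTimer) {
        clearTimeout(leaveTimer);
        leaveTimer = null;
      }
    }, false);

    document.addEventListener('dragleave', function () {
      // If no dragover follows within ~50ms, the drag has left the viewport.
      leaveTimer = setTimeout(function () {
        hideDropTargets(); // hypothetical helper that removes the overlay
      }, 50);
    }, false);

    function hideDropTargets() {
      // Stub for illustration.
    }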

Hopefully, people will find this useful :)


CSS3 Sideways Google 08 December 2009

I was surprised to find that one of the main referrers to my site recently has been Twitter. Looking into it, it all seems to be for my quickly hacked together CSS3 Sideways Google, which uses the new CSS3 transforms supported by WebKit and Gecko (Firefox, Chrome and Safari) as well as the freaky DirectX filter stuff Microsoft has (the only major browser that isn't supported is Opera, which is lagging a bit).

And in the spirit of CSS3 Sideways Google, I present CSS3 elgooG

It also appears that the awesome rotateme.org website was built entirely off of the original CSS3 Sideways Google and actually got 1500 diggs :)


VectorEditor Updates lines, rotation, more 10 August 2009

Over the last two days I worked a bit on my cross-platform, Raphael-based vector graphics editor. It now supports Firefox, Opera, Chrome, probably Safari and, magically, something called IE. Yes, it works on that nasty terror. Really, the project started with just the idea of being able to support IE. Sure, it has a few neat features (multiple select, mainly), but the fundamental idea is to support IE and to do so in a stable manner. It's actually running quite well in IE, though only the latest version has been tested.

Among the updates is a new delete tool that is far more flexible and powerful. It is now not just a button but an entire tool. So while you can still click on it to delete your current selection, you can also use the tool to click on shapes or drag and delete whole groups (not sure what that thing is called). It even has a nice red tint to signify deleting. There is also event listening, vX support (it only uses events and position), and the ability to select fill, stroke, stroke opacity, fill opacity, and stroke width.

It also integrates well into the Ajax Animator as an almost drop-in replacement. Maybe eventually there will be an option to choose between VectorEditor and OnlyPaths. The only really buggy features there are multiple select and drag, and line editing.

Lines are now handled almost perfectly. Dragging them works perfectly, and they show two little boxes on the ends that can be dragged to edit them. This vastly simplifies the old issues with lines and stick figures. Stick figures inherently satisfy me a lot, because that was the highest level of animation I ever did.

It's probably a bad thing that the developer of an animation application has never done anything more complex than stick figures, and it probably makes it seem strange for me to have even started it. But anyone with more knowledge of animation would not be so naive as to attempt this.

http://jsvectoreditor.googlecode.com/svn/trunk/index.html



Ajax Animator 0.20 Browser Support 05 August 2008

It supports Firefox 1.5 to Firefox 3.0, Opera 9+ (hopefully), and Safari 3+. It was supposed to support IE, but for some reason the compiler makes IE fail.


Bug Fixes 29 July 2008

Okay, so in IE you can now actually draw (wow!), though tweening doesn't work yet :(

I fixed a bug in Firefox 2 where the preview tooltips ended up huge for no reason.


Experimental Comet MMORPG 02 May 2008

Note: This is likely the first AND last release ever. I'm gonna go on working on Ax v0.2 after this is finished (tomorrow).

I think it's finally ready to show people (still proprietary though :P). It is coded using Ajax/long-polling Comet and, as far as I'm concerned, is the first of its kind. It uses ExtJS for its user interface. Its page load is ~2MB in size, and there is a huge amount of static data handled. It follows how most of my apps are made: using as much client-side code as possible. I dunno why, but it just is.

Since it uses Comet technology (specifically, long-polling), the requests made are minimized, and the whole thing is supposedly much more scalable. Requests are only made when necessary, and thanks to Comet, the client only updates when there is something to update.
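
For anyone unfamiliar with long-polling, the client side boils down to something like this sketch (the /poll endpoint and the JSON response shape are made up for illustration):

    function poll() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/poll', true); // hypothetical endpoint; held open by the server
      xhr.onload = function () {
        if (xhr.status === 200 && xhr.responseText) {
          handleUpdate(JSON.parse(xhr.responseText)); // apply whatever the server pushed
        }
        poll(); // immediately reconnect so the server can push the next update
      };
      xhr.onerror = function () {
        setTimeout(poll, 1000); // back off briefly on network errors
      };
      xhr.send();
    }

    function handleUpdate(data) {
      // Stub: the real game would update the world, chat, etc. from `data`.
      console.log('update', data);
    }

    poll();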

Since most of the processing is done on the client side, the server only has to handle the static content.

An interesting, albeit geeky, feature is a sort of command-line functionality. I attempted to build the entire system (somewhat like a Unix system) so that most, if not all, of the graphical functionality is backed by a set of commands. The real-time chatbox (which, again, uses Comet to reduce load on the server) detects whether the input starts with the > character. If it does, it parses the input as a command. It's relatively smart: if you type in a global variable name, it'll give you a JSON dump of the contents; if it's a function, it'll call it; and if it's an expression, it'll eval it.
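
That dispatch logic is roughly the following (a sketch of the behavior described above, not the game's actual code):

    function handleChatInput(text) {
      if (text.charAt(0) !== '>') {
        sendChatMessage(text);          // ordinary chat line; hypothetical helper
        return;
      }
      var command = text.slice(1).trim();
      var value = window[command];      // is it a global variable name?
      if (typeof value === 'function') {
        return value();                 // call functions directly
      } else if (typeof value !== 'undefined') {
        return JSON.stringify(value);   // dump the contents of globals
      }
      return eval(command);             // otherwise treat it as an expression
    }

    function sendChatMessage(text) {
      // Stub for illustration; the real game posts this to the Comet server.
    }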

There is no plotline yet, just random stuff that pops into my head (read: iWorld), etc. There is also no title yet (it's named "Untitled"). Part of the game itself is to build the game, using its built-in pixel sprite graphics editor.

Currently, it is restricted to modern browsers only. Firefox, being my development platform (duh, what kind of JS developer doesn't use Firefox/Firebug? Though I hear the IE8 dev toolbar is good since it's a clone of Firebug...), will obviously work best. IE may or may not work, though platform agnosticism was one of the design goals. Opera and Safari will likely work, but I'm not certain. Most mobile browsers will fail (I tried iPhone Safari support, but it gets weird around ExtJS).

It has some features, such as a relatively nice UI (well… nice compared to the others), session saving, worlds, sprite/NPC authoring, a character IDE, NPC battles, PvP, a store, panning, preliminary quests, items, friends, magic/abilities, a CLI, some crappy code, moderation, administration, etc.

Oh, and the password is “password” in case you wanna make new sprites. The URL is http://cometrpg.110mb.com/