

gravity

Here’s my foray into the Flash-esque HTML5 game arena. It’s a simple game built initially in <canvas> but later rewritten in Raphaël, since the gameplay turned out to be better suited to SVG than canvas. The interface is fairly simple: you click to start the game, and your projectile is launched with a velocity relative to the green blob in the center. Once launched, the projectile is affected by the gravitational field of all the planets, which start off in some fairly pretty near-orbits. While the projectile is in motion, clicking drops a new planet at your cursor, and holding the button down makes the planet grow. The objective is to keep the projectile from accelerating off the screen.

As per Newton’s law of gravitation (the physics underlying Kepler’s laws), getting near a planet produces the “gravitational slingshot” effect. And since the projectile tends to fly toward the centers of planets, where the distance approaches zero, a magical divide-by-zero causes infinite acceleration toward doom.
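For the curious, the core of that effect is just per-frame Newtonian attraction. Here’s a minimal sketch of the idea (the names and structure are made up for illustration, not the game’s actual code):

function step(projectile, planets, grav) {
  for (var i = 0; i < planets.length; i++) {
    var p = planets[i];
    var dx = p.x - projectile.x;
    var dy = p.y - projectile.y;
    var distSq = dx * dx + dy * dy;      // as this approaches 0...
    var dist = Math.sqrt(distSq);
    var accel = grav * p.mass / distSq;  // ...this approaches infinity
    projectile.vx += accel * dx / dist;  // unit vector toward the planet
    projectile.vy += accel * dy / dist;
  }
  projectile.x += projectile.vx;
  projectile.y += projectile.vy;
}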

As with several of my other recent projects, it supports various configuration options via the URL query string. If you don’t know how that works: you append ?opt1=val1&opt2… to the URL. For example: gravity2.html?grav=4, simply gravity2.html?fastest, or a combination like gravity2.html?fastest&grav=4&random. The current options are: fastest, which disables the targeting of 80fps and takes no arguments; target, which sets the target fps (it takes one numerical argument, defaulting to 80, and obviously can’t be combined with fastest; if both are given, fastest takes precedence); grav, which takes one numerical argument (default 4) setting the attraction of the planets (zero isn’t very fun); and random, which makes the planets start off in random places rather than the predefined magical positioning.
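Flag-style options like these take only a few lines to parse. Here’s a hypothetical sketch of one way to do it (not necessarily the game’s actual parsing code):

var opts = {};
location.search.substr(1).split('&').forEach(function (pair) {
  if (!pair) return;
  var kv = pair.split('=');
  opts[kv[0]] = kv.length > 1 ? kv[1] : true;  // bare flags like ?fastest become true
});
var grav = +opts.grav || 4;                                      // attraction strength
var targetFps = opts.fastest ? Infinity : (+opts.target || 80); // fastest takes precedence
var random = !!opts.random;                                      // random planet placement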

Feel free to post highscores in the comments.

Posted in Gravity.



μwave: updates

Over just a few days, things can change fairly quickly. There have been several speed improvements, and a number of new features:

- A Forum-Style blip rendering option, which arranges blips linearly by the time they were last edited, each containing a formatted quote of its parent to establish context.

- Full attachment support, including thumbnails and download links.

- A totally rewritten operations engine using asynchronous XMLHttpRequest, a new callback-based system, and support for batch operations (which means fewer requests and faster responses).

- A wavelet header listing all participants in the entire wave, along with an Add Participant button.

- A specialized, extremely fast gadget viewer that allows blazingly fast rendering of two popular gadgets (more will come). It works by bypassing the entire gadget infrastructure and loading trusted code directly inline with the DOM.

- A “New Wave” button which allows people to create new waves directly from the client.

- An OAuth backend that authenticates with Google, for more secure login transactions.

- A new context menu on blips with features such as Delete Blip, Edit Blip, and Change Title.

A full changelog can be found here.
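Of these, the operations engine is the least visible, so here’s a rough sketch of the batching idea with hypothetical names (not μwave’s actual code): operations queue up with their callbacks and get flushed to the server in a single asynchronous request.

var queue = [], callbacks = [];
function op(method, params, callback) {
  queue.push({method: method, params: params});
  callbacks.push(callback || function () {});
}
function flush() {
  var batch = queue, cbs = callbacks;
  queue = []; callbacks = [];
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/rpc', true);  // '/rpc' is a placeholder endpoint
  xhr.onreadystatechange = function () {
    if (xhr.readyState != 4) return;
    var results = JSON.parse(xhr.responseText);  // one result per operation
    for (var i = 0; i < results.length; i++) cbs[i](results[i]);
  };
  xhr.send(JSON.stringify(batch));
}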

Try it out.

Posted in Google Wave, Microwave.



Steganography in Javascript

Steganography Kitteh

For no real reason, I was reading the Wikipedia article on digital steganography and saw the interesting example image in which a picture of a kitty is extracted from some boring trees. I decided to port the example to <canvas>.
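The extraction itself is tiny: the kitty lives in the two least significant bits of each color channel, so you just mask those off and stretch them back to full brightness. A sketch of the canvas version (the element names are made up, and the image has to be same-origin for getImageData to work):

var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
ctx.drawImage(document.getElementById('trees'), 0, 0);
var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
var px = data.data;  // RGBA bytes
for (var i = 0; i < px.length; i++) {
  if (i % 4 == 3) continue;   // leave alpha alone
  px[i] = (px[i] & 3) * 85;   // 2 LSBs, scaled from 0..3 up to 0..255
}
ctx.putImageData(data, 0, 0); // the kitteh appears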

Posted in Other.



μwave: lightweight mobile wave client

I like bragging when I do something nobody else has done before, and μwave is the first true third-party wave client compatible with Google Wave. It’s free to use at http://micro-wave.appspot.com/ and works great on mobile devices. It supports searching for waves, opening them, and writing replies.

Currently it doesn’t know the read/unread state of waves in the search panel, and it doesn’t know which blips are read or unread, but as of the time of writing this is a limitation of the current version of the Google Wave Data API (introduced just ten days ago at Google I/O). Expect this to be resolved in the near future with upcoming versions of the API and this application.

The source code for the server component is open source and can be found on GitHub (it’s slightly outdated, but the important stuff is there). It’s fairly simple (it’s based on the original example code, so it keeps the same MIT license), but it’s one of the few Python scripts that can authenticate with Google and pass commands to the Data API. It relies on the Python OAuth library.

The blip renderer component is licensed under the MIT license and can be found in the old microwave repo. The only part left is the interface, which is going to be the usual GPLv3.

For a little bit of history, this isn’t exactly a new project. The Google Code project has existed since January 9th, 2010; its purpose was to create a mobile-friendly version of Wave Reader. But it goes even deeper: I can trace it back to the original Static-Bot, dated October 18, 2009. Back then, the Google Wave embed API allowed people to view waves only if they had a Google Wave account and were logged in at the time. This was quite problematic, since Wave was still a limited preview that not many people had, and it probably hampered adoption.

Another separate but eventually convergent thread that led to the microwave project was the “Desktop Wave Reader + GWave Client/Server Protocol” post which I made on October 29th, 2009.

During late October of last year, I reverse-engineered some of the features of the Google Wave client. Up until then, the only published specs were the federation protocols (which dealt with how multiple wave servers could use a common protocol to support users without a central authority) and the gadget and robot APIs. Notably missing was a client/server API: a way for a user of the Google Wave client specifically (which did not yet support federation, and to date, preview still does not) to browse and view the waves in their inbox without switching to an entirely new provider. The first component was the ability to read waves. After that was accomplished, I tried to reverse-engineer a more complex aspect of the protocol: the ability to search waves. I eventually realized that search was part of a larger puzzle, the real-time BrowserChannel wire protocol on which virtually all of Wave was based. I made some progress, but near the end I gave up in frustration. Luckily, someone else became interested in the same thing, and Silicon Dragon basically got search working.

That brings us to early December, when I started a project called Wave Reader, which merged the ideas of Static-Bot with the desktop wave reader and a new functional blip rendering engine. At that time, the Google Wave client was still horrendously slow, sometimes taking several minutes to load large waves.

In January, I began a project to merge Wave Reader and the wire protocol (search). I thought an awesome name would be microwave (or μwave) and started the code repo on January 9th. I worked on it a bit, until it was mostly complete, with search and loading all working, but with one missing component: login. Eventually I got bored, and the project lay abandoned for a few months.

That gets us to basically four days ago, when I started working on a renaissance of the μwave project, based on the recently released Google Wave Data API. The first step was a new blip renderer specifically designed for parsing the new (much cleaner) JSON format that is part of the robots API. Then I built a client around it and wrote a Python backend so it could run on App Engine.
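To give a flavor of why the new format is nicer to parse, here’s a grossly simplified sketch of rendering from it. The field names reflect my understanding of the robots-API JSON (a wavelet with a rootBlipId, plus a map of blips with plain-text content); treat it as an approximation rather than microwave’s actual renderer:

function renderBlip(blip, blips) {
  var div = document.createElement('div');
  div.appendChild(document.createTextNode(blip.content));
  // the real renderer also walks blip.annotations (style ranges) and
  // blip.elements (gadgets, attachments) and splices them into the text
  (blip.childBlipIds || []).forEach(function (id) {
    div.appendChild(renderBlip(blips[id], blips));
  });
  return div;
}
function renderWavelet(wavelet, blips) {
  document.body.appendChild(renderBlip(blips[wavelet.rootBlipId], blips));
}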

The future is always awesome to prophesy about. In the coming weeks or days, Google will probably update the Data API to expose information like read/unread state. So, while http://micro-wave.appspot.com will likely remain free and maintained for the foreseeable future, I do plan on making a paid iPhone/iPad app. The iPhone app may have some extra features like offline/caching support.

Posted in Google Wave, Microwave.



Google Wave Data API supports all Robots V2 Operations

Here’s something which Google has not been documenting (I daresay hiding?). There’s no limit on Wave client developers anymore. Though it isn’t documented at all, all the methods of the active robots API work on the new platform, as long as you prepend “wave.” to the method name.

As some of you may know (or not), I’m working on a mobile wave client. Actually, I was for a while, but I never got around to making login work, so it was never really published. Then Google I/O came around last week, and suddenly my reverse-engineering efforts were sort of obsoleted by an official API (though the official API isn’t quite as awesome as my reverse-engineered code, so I may continue on that a little). I was downright disappointed with the new Data API after finding out that it was still read-only (except for marking a whole wave read/unread). This discovery definitely changes things.

This is taken from ops.py in the official Google Wave Robots API 2.0 (Python version, of course :P)

# Operation Types
WAVELET_APPEND_BLIP = 'wavelet.appendBlip'
WAVELET_SET_TITLE = 'wavelet.setTitle'
WAVELET_ADD_PARTICIPANT = 'wavelet.participant.add'
WAVELET_DATADOC_SET = 'wavelet.datadoc.set'
WAVELET_MODIFY_TAG = 'wavelet.modifyTag'
WAVELET_MODIFY_PARTICIPANT_ROLE = 'wavelet.modifyParticipantRole'
BLIP_CREATE_CHILD = 'blip.createChild'
BLIP_DELETE = 'blip.delete'
DOCUMENT_APPEND_MARKUP = 'document.appendMarkup'
DOCUMENT_INLINE_BLIP_INSERT = 'document.inlineBlip.insert'
DOCUMENT_MODIFY = 'document.modify'
ROBOT_CREATE_WAVELET = 'robot.createWavelet'
ROBOT_FETCH_WAVE = 'robot.fetchWave'
ROBOT_NOTIFY_CAPABILITIES_HASH = 'robot.notifyCapabilitiesHash'

Try running them in the Data API and you get the awesome

501 “notImplemented: The method wavelet.participant.add is not implemented”

Hmm, this looks bad. But then, the officially documented wave.robot.fetchWave seems an awful lot like ROBOT_FETCH_WAVE’s “robot.fetchWave”. Maybe it’s the wave. prefix which makes all the difference. And that’s exactly what it does.
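In other words, something like this (a hedged sketch: the params shown are approximate, and the request still needs to be OAuth-signed like any other Data API call):

var rpc = [{
  id: 'op1',
  method: 'wave.wavelet.participant.add',  // note the wave. prefix
  params: {
    waveId: 'googlewave.com!w+abc123',
    waveletId: 'googlewave.com!conv+root',
    participantId: 'somebody@googlewave.com'
  }
}];
// POST JSON.stringify(rpc) to the Data API's JSON-RPC endpoint;
// without the prefix, you get the 501 above.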

Posted in Google Wave.


Simple Javascript 3D Function Plotter

sin(sqrt(sq(x)+sq(y)))/sqrt(sq(x)+sq(y))

http://antimatter15.com/misc/f(x).html?sin(sqrt(sq(x)+sq(y)))/5

http://antimatter15.com/misc/f(x).html?sin(sqrt(sq(x)+sq(y)))/sqrt(sq(x)+sq(y))

I think function plotters are cool, and since 3D is all the hype nowadays, why not make a 3D function plotter? WolframAlpha does it quite nicely, but it doesn’t allow panning or moving the camera. Just as a disclaimer: I made this because it’s cool, not because I spent lots of time on it. On the contrary, it’s taken from the floor demo of the three.js 3D canvas library, with a three-line change to make it pull a function from the URL.
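The change is roughly this sort of thing (a sketch, not the exact diff; sq is a helper the page defines):

function sq(x) { return x * x; }
var expr = decodeURIComponent(location.search.substr(1)) ||
           'sin(sqrt(sq(x)+sq(y)))/5';
var f = new Function('x', 'y',
  'with (Math) { return ' + expr + '; }');  // exposes sin, sqrt, etc.
// the three.js floor demo then calls f(x, y) for each grid vertex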

Posted in 3D, Function Plotter.



Cross Domain XHR with postMessage

A bit ago, I posted a little flow chart about the possibility of a bookmarklet-driven model for privilege escalation with XHR. The most important part was the postMessage-based emulation of XHR, and for an offline Wave Reader that I’m working on, I needed it, so I created this.

In around 30 lines of code, I implemented a small subset of the XMLHttpRequest API using postMessage. The better pmxdr project does the same thing, but with a different API; I just implemented the normal XMLHttpRequest API, which isn’t the ideal way, but it works. Notably, my code relies on json2.js unless you have native JSON, and on DOM Level 3 window.addEventListener.
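The technique condenses to something like this sketch (not the exact source; that’s in the repo below). The host page embeds an iframe served from the target origin and posts it a request description; the iframe does the real XHR and posts the response back:

// inside the iframe, on the remote origin
window.addEventListener('message', function (e) {
  var req = JSON.parse(e.data);
  var xhr = new XMLHttpRequest();
  xhr.open(req.method, req.url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState != 4) return;
    e.source.postMessage(JSON.stringify({
      id: req.id, status: xhr.status, responseText: xhr.responseText
    }), e.origin);
  };
  xhr.send(req.data || null);
}, false);
// the host page sends, e.g.:
// frame.contentWindow.postMessage(
//   JSON.stringify({id: 1, method: 'GET', url: '/some/path'}), remoteOrigin);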

github.com/antimatter15/pmxhr

Posted in Google Wave.


I can haz WebNotifications

I just have an urge to draw an LOLCat every time I ever encounter something which asks for permission.

The WebNotifications API is one of the few things where I need to request permission.

I don’t really like the API described in the spec, but I guess it suffices. Making a message auto-dismiss is really quite convoluted: I have to use an HTML notification and include <script>setTimeout(function(){window.close()},5000);</script> as part of the content. Not only that, but I can’t use data: URLs (at least in the Chrome implementation, which may be a bug). Unless I’m missing something huge, that is.

So a while ago, I made a little Twitter trends notifier with Jetpack, and why not make one in HTML5 for Chrome?

http://antimatter15.com/misc/html5twitmon.html

Usage:

- Click the kitteh to grant permission for notifications

- Wait a bit and some updates will happen. It’s ideally a “background” or “pinned” tab.

Note that, at the time of writing, it uses the webkitNotifications object, which is likely only supported by WebKit, and as far as I’m aware, the only UA implementing it is Chrome.

Technical details:

It uses localStorage, WebNotifications, native JSON, Array.filter, and JSONP. Since for some reason I can’t get it to work with data: URLs, I used a sort of proxy, http://anti15.welfarehost.com/jshtmlwrite.html#<stuff here>, which contains the code:

document.write(unescape(location.hash.substr(1)));
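So the notification side packs its HTML, auto-dismiss script included, into the fragment of that hosted page, roughly like this (a sketch; the HTML here is a stand-in):

var html = '<b>New trend!</b>' +
  '<script>setTimeout(function(){window.close()},5000);<\/script>';
webkitNotifications.createHTMLNotification(
  'http://anti15.welfarehost.com/jshtmlwrite.html#' + escape(html)).show();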

Posted in Web Notifications.


Ajax Animator iPad Support

Today I went to the magical Apple Store and tried out the iPad for the first time. I have to say it’s quite magical, though it doesn’t fulfill the criterion of Arthur C. Clarke’s Third Law, despite what Jonathan Ive says. Then again, I haven’t tried any large-area multitouch interface before (sadly), though I would expect them to offer a similar if not identical experience. Keynote and Numbers were pretty neat (I suck at typing on the iPad in any orientation, so I don’t like Pages). That’s enough to show that the iPad is not just a content consumption tool, as the iPod and iPhone primarily are, but also a content creation one.

Anyway, in a few minutes I swapped the mousedown, mousemove, and mouseup events for touchstart, touchmove, and touchend respectively in the core of VectorEditor, and added a new MobileSafari detection script (var mobilesafari = /AppleWebKit.*Mobile/.test(navigator.userAgent);). In a quite analogously “magical” way, VectorEditor now works on iPhone/iPod Touch and, theoretically, iPad. Just dragging the VectorEditor files over to the Ajax Animator folder and recompiling should bring iPad support to Ajax Animator with virtually no work.
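The gist of the swap (a sketch rather than the literal VectorEditor diff; node and the handlers stand in for VectorEditor’s own):

var mobilesafari = /AppleWebKit.*Mobile/.test(navigator.userAgent);
var EVT = mobilesafari
  ? {down: 'touchstart', move: 'touchmove', up: 'touchend'}
  : {down: 'mousedown',  move: 'mousemove', up: 'mouseup'};
function bindDrag(node, onDown, onMove, onUp) {
  node.addEventListener(EVT.down, onDown, false);
  node.addEventListener(EVT.move, onMove, false);
  node.addEventListener(EVT.up, onUp, false);
}
// touch handlers read e.touches[0].pageX/pageY instead of e.pageX/pageY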

I haven’t tested it yet. I’m downloading Xcode 3.2.2 right now, so hopefully I can test it soon. Stupid how it’s, what, 2.31 gigabytes?!

And possibly, I could use PhoneGap to hack together an App Store app which does the same thing (and maybe charge for it, which might be a bit cruel as this application is open source and works equivalently online, but I guess that’s okay for those people who don’t read my blog >:) ). Maybe I’d get enough to buy an iPad :P

Anyway, though I’m pretty late to this and my opinion probably doesn’t matter at all, here’s a mini iPad review: it’s really, really cool; feels sort of heavy; really expensive; and hard to type on in any orientation. (Interestingly, the virtual keyboard has those little lines on the F and J keys; that feels useless, since I always thought the point of those was so you could tactilely, or haptically, or tactically, or whatever the right word is, find the home row, but since there’s no physical dimension to an iPad, it just strikes me as weird and wanting of a tactile keyboard.) Otherwise, browsing really, really feels great. The only thing I miss is the MacBook Pro style three-finger forward/backward gestures (@AAPL plz add this before iPad 2.0, and also, get iPhone OS 4.0 to work on my iPhone 2G, or at least @DevTeam plz hack 4.0 for the 2G!).

Oh, and for those lucky enough to have a magical iPad, the URL is http://antimatter15.com/ajaxanimator/ipad/ at least until there’s enough testing to make sure that I didn’t screw up everything with my MobileSafari hacks.

Posted in Ajax Animator, VectorEditor.



Idea: Lego Mindstorms IDE for iPad

A three-second mockup.

I don’t have an iPad, nor is it #1 on my wish list (by which I mostly mean any tablet platform, but since none of the other ones are really recognizable, I’m jumping on the 4-letter Apple product bandwagon). But I am fascinated by touchscreens.

I started programming when I was 7, when I got my first Lego Mindstorms RIS/RCX 2.0 kit (and I loved the 13+ sticker on the box back then :P). So I’ve always had a fondness for the platform; it’s really great for getting kids into robotics and engineering. Kudos to Lego.

Recently, I’ve played around with the current rendition of the Mindstorms platform, the NXT. It’s an evolutionary advancement that maintains the intuitiveness of the original system while catering to those who never really grow out of it.

The interface is a very kid-friendly drag-and-drop block layout. I actually sort of like it, though it’s not something a desktop application could easily be built in. It’s very procedural, and that’s well suited for telling a car to explode or magically arrange red and blue balls into designated corners.

But where drag and drop really shines, the place it was really meant for, is a multitouch tablet. It just makes sense: on a large multitouch surface, coding with simple finger gestures and dragging feels natural. Lego’s own LabVIEW-based interface, called NXT-G, has large icons and is built entirely around dragging and dropping. It’s something that just feels right on a touchscreen.

The gestures need to be tailored to the platform. I propose that two fingers, like on a MacBook, be used to pan around the canvas of the code. Blocks are dragged from a list on the side onto an execution path. Touching and dragging a block already on the canvas does the logical thing: it moves the block. Touching a block on the canvas without dragging makes a pie-menu-type display ooze out from the block, listing a bunch of output “pipes”; another finger can then drag one of them onto another block, which displays another pie menu (this time showing inputs rather than outputs), and letting go creates the connection.

Implementation-wise, one could try porting NBC/NXC (which is written in Pascal and already has makefiles for WinCE/ARM; FreePascal seems able to compile for iPhone/iPod Touch, and the iPad should be a virtually equivalent target). The interface could probably be built in SVG and/or <canvas> and loaded in a UIWebView or through the PhoneGap platform. It would then convert the graphical representation into NXC code, compile it, and use the iPad’s built-in Bluetooth 2.1 + EDR support to send it to the Lego NXT brick and do magic.
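The code-generation step is the conceptually simple part. A purely hypothetical sketch: each block type maps to a line of NXC (which is C-like), and the execution path is walked in order:

var emitters = {
  motorOn:  function (b) { return 'OnFwd(' + b.port + ', ' + b.power + ');'; },
  wait:     function (b) { return 'Wait(' + b.ms + ');'; },
  motorOff: function (b) { return 'Off(' + b.port + ');'; }
};
function toNXC(blocks) {
  var body = blocks.map(function (b) { return '  ' + emitters[b.type](b); });
  return 'task main() {\n' + body.join('\n') + '\n}';
}
// toNXC([{type: 'motorOn', port: 'OUT_A', power: 75},
//        {type: 'wait', ms: 1000},
//        {type: 'motorOff', port: 'OUT_A'}])
// emits a program that runs motor A at 75% power for one second.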

Posted in Uncategorized.
