somewhere to talk about random ideas and projects like everyone else

stuff

#javascript

Methods and Functions 16 December 2015

I watched a recording of Rob Pike’s talk Simplicity is Complicated a few weeks ago, and I was struck by his sentiment that many popular languages seem to be converging by adding popular features from other languages.

In particular, it seems that many popular languages combine aspects of object-oriented and functional programming— support for classes, higher-order functions, and reduction. Sometimes this leads to situations where the distinction between methods and functions is confusing or inconsistent.

In Javascript, you find the length of an array or string using a getter method "blah".length but in Python[^2], it’s done through a function invocation len("blah").

This dichotomy is more evident when operations get chained in sequence to process some stream of data. Here’s an example in Java 8 of the object-oriented approach where methods are chained[^1]:

double average = roster.stream()
    .filter(p -> p.getGender() == Person.Sex.MALE)
    .mapToInt(Person::getAge)
    .average()
    .getAsDouble();

An alternative way to write the code is by thinking about it as function composition— as you can do in Haskell:

(sum (map sqrt (filter (>3) [1,2,3,4,5,6,7])))

An interesting exercise is to pay attention to how your eyes move as you scan through snippets of code to figure out what they do.

To do that, let’s abstract away the slightly different syntax for these different languages and imagine a hypothetical language where x.y.z is syntactic sugar for z(y(x)). That is, when a function name is suffixed after a . behind some expression, it is equivalent to wrapping the entire expression and using it as the first argument.

More concretely, 32.double is equivalent to double(32), and 49.divide(7) is equivalent to divide(49, 7).
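
To make that concrete, here’s a hedged sketch of the same idea emulated in plain Javascript (the pipe helper and its to method are inventions for illustration, since Javascript has no such sugar):

function pipe(x) {
    return {
        // to(fn, ...args) calls fn with the piped value as its first argument
        to: function (fn) {
            var args = [x].concat(Array.prototype.slice.call(arguments, 1));
            return pipe(fn.apply(null, args));
        },
        value: x
    };
}
function double(x) { return x * 2; }
function divide(a, b) { return a / b; }
console.log(pipe(32).to(double).value);    // 64, i.e. double(32)
console.log(pipe(49).to(divide, 7).value); // 7, i.e. divide(49, 7)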

For a more complex example:

range(0, 20).map(x: sqrt(x)).filter(x: x > 3).sort.sum

sum(sort(filter(map(range(0, 20), x: sqrt(x)), x: x > 3)))

With the first code snippet, the process can be read left-to-right, and you can imagine it as the story of a little chunk of data flowing through the system.

With the second code snippet, you instead read the function from the inside-out. To a certain extent, it’s right-to-left, but you have to check the other side to make sure you aren’t forgetting any extra arguments.

Let’s try to visualize this:

sum(sort(filter(map(range(0, 20), x: sqrt(x)), x: x > 3)))
                    ------------
                ----            ------------
         -------                             ----------
    ----                                               -
---                                                      -

You can see here that interpreting the behavior with the functional style[^3] begins in the middle and proceeds right-to-left toward the beginning. But when you reach some new function name, you have to zig-zag to the opposite side to visually inspect whether or not the function has any additional arguments.

range(0, 20).map(x: sqrt(x)).filter(x: x > 3).sort.sum
------------
            ----------------
                            -----------------
                                             -----
                                                  ----

The method chaining approach (read: OOP style) is nice because it fits the conceptual model of data flowing through a sequence of transformations, and it doesn’t disrupt the typical western left-to-right reading order.

To me, this seems much easier both to read and to write (for I am not so blessed as to use Paredit and must carefully tread a vast sea of syntax to manually close my parens), and I’ve always been mildly infuriated trying to learn Haskell[^4] because function composition feels much less natural[^5].

One of the stated reasons why Python uses len is that with a postfix method, you can’t glance at the type of an expression by looking at its head. When you’re scanning through a line and you see that the outermost function is “len”, you can immediately tell that the output is a number. When length is determined in postfix, you can’t determine the final type of an expression until you look at the end of the expression.

Another problem is that this linearized sequential syntax only helps when the structure of an expression is relatively linear. Imagine instead that you have an expression like

sum(range(floor(sqrt(multiply(3, 4))), divide(square(5), 2)))
                     --------------           
                -----              -          ---------
          ------                    -  -------         ----
    ------                           --                    -
----                                                        -

In this case, both multiply(3, 4) and square(5) are of relatively similar depth, which gives the expression the humps that make it look like a snake eating an elephant.

Flattening it with the . syntax then forces range into a little invisible spot in the middle and that makes it a lot harder to interpret the behavior of the program.

3.multiply(4).sqrt.floor.range(5.square.divide(2)).sum

In fact, this dot stuff is beginning to look a lot like the postfix notation in stack-based programming languages like FORTH (read: similarly unreadable).

That actually leads to the nifty realization that it’s kind of like tail call optimization for the human mind.

When reading some code composed of nested function application, you end up having to keep track of how many levels deep within the expression you are. There’s an idea in psychology that human working memory can hold up to 7 ± 2 items, so reading a deeply nested set of expressions is naturally difficult.

But if the expression tree naturally represents a sequence of actions transforming a piece of data— you can reorganize it in such a way that depth of your mental stack doesn’t keep growing and objects can be substituted in place.


Passing objects as the first argument isn’t really all that strange. The way classes are implemented in Python, methods are just functions with an implicit self argument.

class Dog:
    def talk(self, text):
        print "woof!", text, "bark!"

shibe = Dog()
shibe.talk("wow! such speak!")

So with the code above, shibe.talk("wow! such speak!") is really just shorthand for Dog.talk(shibe, "wow! such speak!").

Neither is it particularly strange to access an object’s fields through function calls. This is exactly how you access the fields of record types in Haskell.

data Car = Car {
    company :: String,
    model :: String,
    year :: Int
}
wusTangClan = Car {company = "Ford", model = "Wustang", year = 1967}
model wusTangClan -- "Wustang"

[^3]: Note that while I’m characterizing this as the functional style, many functional languages such as F# have a “pipe-forward” operator “|>” which has a similar effect. Haskell seems to have a convention where the data is passed as the last argument rather than the first argument, which makes implementing this particular strategy a bit clumsy.

[^2]: In the Python FAQ, Guido describes the rationale behind the “len” function as opposed to using a method on strings and lists. Interestingly, more recent revisions of the FAQ make it sound as if it were some mistake rendered unfixable by history.

[^1]: Something analogous to this sort of chained-method style is also used in popular frameworks like d3 and jQuery.

[^4]: Well, there’s also that incident where I tried installing some Hello World example with Cabal and ended up, through a far more infuriating series of events, rendering my Macbook unbootable.

[^5]: In a similar vein, I’d like there to be a “\@” operator in Mathematica so that I can write data \@ transformation rather than the other way around with the existing “/@” operator.



Cooley-Tukey FFT + DCT + IDCT in under 1K of Javascript 30 May 2015

Technically this is a Hough transform and isn't at all related to the FFT*, but it looks a lot cooler than any of the actual FFT/DCT pictures I have, so I might as well stick it here

Almost a year ago, I wrote a simple script for computing perceptual hashes in Javascript (it’s going to be part of a forthcoming Naptha feature which allows it to recognize source images for memes and do perfect inpainting). One step of the process is computing the 2D Discrete Cosine Transform of a shrunken version of the image.

The problem is that that step was kind of ridiculously slow, and it’s not really hard to see why: there’s a series of four nested for loops— because that’s the obvious, naive way of computing the DCT.

In practice, it’s actually not all that bad because you only have to do it once per image, and the dimensions of the source image are a fixed 32x32. But it still felt pretty bad, so when I was looking over the code again earlier this week, I tried doing some research to figure out a faster way to do the DCT.

Surely you’d imagine that there have to be extraordinarily efficient implementations of the DCT out there, given that it’s the very foundation of JPEG. And you’d be right, but they’ve generally unrolled the butterfly into a series of bit-shifts which are quite unlikely to generalize beyond the 8x8 block size.

I’m reasonably confident that FFTW3 has a fast DCT implementation which also is capable of generalization, but it seems buried within quite a lot of code.

MiniFFT

This isn’t a super performant implementation, but it’s not horrendously slow and it doesn’t use recursion. It’s probably pretty trivial to make it quite a bit faster by precomputing sine and cosine tables. This uses the standard Cooley-Tukey radix-2 decimation-in-time algorithm, which means it only works for one dimensional arrays with a length which is a power of two.

It doesn’t check that the length is a power of two; if it isn’t, it’ll just give you the wrong answer. You should definitely check that it’s true beforehand.

function miniFFT(re, im) {
    var N = re.length;
    // Reorder the arrays into bit-reversed index order.
    for (var i = 0; i < N; i++) {
        for(var j = 0, h = i, k = N; k >>= 1; h >>= 1)
            j = (j << 1) | (h & 1);
        if (j > i) {
            re[j] = [re[i], re[i] = re[j]][0];
            im[j] = [im[i], im[i] = im[j]][0];
        }
    }
    // Butterfly passes: merge sub-transforms of width hN into width 2*hN,
    // multiplying the odd halves by the twiddle factor e^(-i*pi*(j-i)/hN).
    for(var hN = 1; hN * 2 <= N; hN *= 2)
        for (var i = 0; i < N; i += hN * 2)
            for (var j = i; j < i + hN; j++) {
                var cos = Math.cos(Math.PI * (j - i) / hN),
                    sin = Math.sin(Math.PI * (j - i) / hN);
                var tre =  re[j+hN] * cos + im[j+hN] * sin,
                    tim = -re[j+hN] * sin + im[j+hN] * cos;
                re[j + hN] = re[j] - tre; im[j + hN] = im[j] - tim;
                re[j] += tre; im[j] += tim;
            }
}

The transformation happens in place, so it’ll modify whatever arrays you pass in.

var re = [1, 2, 3, 4],
    im = [0, 0, 0, 0];
miniFFT(re, im);
console.log(re, im); // [ 10, -2, -2, -2 ] [ 0, 2, 0, -2 ]

Minified, it’s only 359 bytes. Unlike this other contender for the dubious prize of smallest javascript FFT implementation, it doesn’t require a special complex number library three times its size (though this doesn’t fit in a tweet— I’ll be really impressed if someone manages to do that).

I haven’t really golfed the code to the smallest size possible, but I think it’s at a reasonable balance of brevity while still being somewhat comprehensible.

It’s based on an in-place FFT from Princeton’s intro CS class and Project Nayuki’s FFT.

MiniIFFT

Turns out that you can invert the FFT modulo a scaling factor just by swapping the real and imaginary arguments.

function miniIFFT(re, im){
    miniFFT(im, re);
    for(var i = 0, N = re.length; i < N; i++){
        im[i] /= N;
        re[i] /= N;
    }
}

MiniDCT

This is an implementation of a Type-II DCT using MiniFFT. For details about Makhoul’s algorithm for transforming a DCT into an FFT, see this post

function miniDCT(s){
    var N = s.length,
        K = -Math.PI / (2 * N),
        re = new Float64Array(N),
        im = new Float64Array(N);
    // Makhoul's reordering: even-indexed samples go up front,
    // odd-indexed samples go on the back, reversed.
    for(var i = 0, j = N; j > i; i++){
        re[i] = s[i * 2];
        re[--j] = s[i * 2 + 1];
    }
    miniFFT(re, im);
    // Rotate each bin by e^(-i*pi*k/(2N)) and keep twice the real part.
    for(var i = 0; i < N; i++)
        s[i] = 2*re[i]*Math.cos(K*i)-2*im[i]*Math.sin(K*i);
}

Like MiniFFT, it operates in-place (well, not really, but it writes its output to the source array).

var arr = [3, 4, 1, 7];
miniDCT(arr);
console.log(arr); // [ 30, -5.09493566, 7.0710678118, -8.604744653 ]

It produces the same output as scipy’s fftpack, but Matlab’s DCT uses orthogonal normalization and produces a different result. However, it’s pretty simple to do it the Matlab way— just scale everything by 1/sqrt(2 * N) and divide the first element by sqrt(2).

function matlabDCT(arr){
    miniDCT(arr)
    arr[0] /= Math.sqrt(2);   
    for(var i = 0, N = arr.length; i < N; i++) arr[i] /= Math.sqrt(2 * N);
}
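
And to bring it back to the original motivation, here’s a hedged sketch of a 2D DCT for an N x N image done the separable way, by running miniDCT across every row and then down every column (this helper is my own illustration, not part of the minified bunch):

function miniDCT2D(img, N) { // img: flat array of N * N grayscale values
    var line = new Float64Array(N), x, y;
    for (y = 0; y < N; y++) { // transform each row in place
        for (x = 0; x < N; x++) line[x] = img[y * N + x];
        miniDCT(line);
        for (x = 0; x < N; x++) img[y * N + x] = line[x];
    }
    for (x = 0; x < N; x++) { // then each column
        for (y = 0; y < N; y++) line[y] = img[y * N + x];
        miniDCT(line);
        for (y = 0; y < N; y++) img[y * N + x] = line[y];
    }
}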

MiniIDCT

I really considered renaming this series of functions into TinyFFT and friends because of the predicament of MiniIDCT— yeah, exactly— the horror— repeating I’s.

This is an implementation of a Type-III DCT, also known as the IDCT because it happens to be the inverse of a Type II DCT (frequently referred to as “The DCT”).

function miniIDCT(s){
    var N = s.length,
        K = Math.PI / (2 * N),
        im = new Float64Array(N),
        re = new Float64Array(N);
    // Rebuild the complex spectrum: bin k is (s[k] - i*s[N - k])
    // rotated by e^(i*pi*k/(2N)), scaled by 1/(2N) for the inverse.
    re[0] = s[0] / N / 2;
    for(var i = 1; i < N; i++){
        var im2 = Math.sin(i*K), re2 = Math.cos(i*K);
        re[i] = (s[N - i] * im2 + s[i] * re2) / N / 2;
        im[i] = (im2 * s[i] - s[N - i] * re2) / N / 2;
    }
    // Swapping re and im computes the inverse FFT (modulo scaling).
    miniFFT(im, re);
    // Undo Makhoul's reordering: even outputs come from the front half,
    // odd outputs from the back half, reversed.
    for(var i = 0; i < N / 2; i++){
        s[2 * i] = re[i];
        s[2 * i + 1] = re[N - i - 1];
    }
}

This is based on Tom Grydeland’s IDL implementation of the IDCT.

Let’s start with an 8 element array of real numbers (DCT values).

[A, B, C, D, E, F, G, H]

From that it generates the corresponding complex values. Note that the real part is intact, and the imaginary part is the original array with its head chopped off and the rest of it flipped around and attached to a water balloon.

[A, B, C, D, E, F, G, H] - j[0, H, G, F, E, D, C, B]

Then the complex values are rotated by exp(i * pi / (2 * N)) to get spectral components. After an IFFT that gets transformed into:

[a, h, c, f, e, d, g, b]

It took me a while to see what was going on there, but the even indices map to the first half of the results, and the odd ones map to the second half in reverse.

Scrambled      a h c f e d g b
Even Indexed   a   c   e   g
Odd Indexed      h   f   d   b
Odd Reversed     b   d   f   h
Unscrambled    a b c d e f g h

HeapQueue.JS 28 January 2014

heap

I have a habit of writing blog posts in Sublime Text in lieu of legitimate READMEs. This project was simple and short enough that I think I did manage to write something with a passable semblance to a functional README before initiating this quest to craft a blog post.

This is hardly a project that I’d expect to go down in the annals of my blog as particularly interesting or notable. Instead, it’s a rather boring low-level computer-sciencey thingy which would feel perfectly comfortable amidst homework assignments.

In fact, it’s virtually assured that documenting this endeavor– the combination of writing the words which comprise this post and the documentation already written on how to use this rather simple library– will inevitably consume more of my time than actual coding (people who are more familiar with highly intricate professional software tapestries may claim that such is merely the ideal and typical experience of a programmer).

Notice that I’ve managed to spend three paragraphs (well, my paragraphs are on the order of two sentences, so it doesn’t even qualify for the fourth-grade definition for what a paragraph is) talking about the uninteresting character of the project, its relative diminutive size, and the process of writing about the subject, without actually describing what exactly the project is. This, more than perhaps any other project of mine deserves such a diversion, for it is unlikely to yield “much amaze” or even a modest “such wow”.

It is a binary heap priority queue, a teensy bit shy of 50 lines, a mere kilobyte minified. It’s very loosely based off of Adam Hooper’s js-priority-queue but considerably smaller.
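
If you haven’t seen one before: a binary heap stores the queue flat in an array, with the children of index i living at 2i+1 and 2i+2, so insertion and removal are just a matter of sifting elements up or down. Here’s a minimal, hedged sketch of those two operations (illustrative only, with made-up names; this is not the actual HeapQueue.JS source):

function TinyHeap(cmp) {
    this.cmp = cmp || function (a, b) { return a - b; };
    this.data = [];
}
TinyHeap.prototype.push = function (v) {
    var d = this.data, i = d.length, p, t;
    d.push(v);
    // sift up: swap with the parent until the heap property holds
    while (i > 0 && this.cmp(d[i], d[p = (i - 1) >> 1]) < 0) {
        t = d[i]; d[i] = d[p]; d[p] = t;
        i = p;
    }
};
TinyHeap.prototype.pop = function () {
    var d = this.data, top = d[0], last = d.pop(), i = 0, l, r, m, t;
    if (d.length) {
        d[0] = last;
        // sift down: swap with the smaller child until settled
        while (true) {
            l = 2 * i + 1; r = l + 1; m = i;
            if (l < d.length && this.cmp(d[l], d[m]) < 0) m = l;
            if (r < d.length && this.cmp(d[r], d[m]) < 0) m = r;
            if (m === i) break;
            t = d[i]; d[i] = d[m]; d[m] = t;
            i = m;
        }
    }
    return top;
};
var heap = new TinyHeap();
[5, 1, 4, 2].forEach(function (x) { heap.push(x); });
console.log(heap.pop(), heap.pop()); // 1 2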

Now that I’ve gotten the actual informative and useful part of the blog post off my shoulders, I can take you along for the unproductive plunge into the rather lame backstory for the project. I was working on a port of the Telea inpainting algorithm and needed a priority queue, because the little incremental insertion sort that I hacked together in two lines was taking a few orders of magnitude longer than was acceptable.

With my considerable Google-Fu, I searched “javascript heap queue” and found js-priority-queue as well as PriorityQueue.js. They both happened to be implemented in CoffeeScript, which I love, except not really. I like the syntax, but creating a project using it usually involves setting up some compilation pipeline in order to get it to do the things that I want it to do. Overall, it’s a bit too much of a hassle for little projects.

I couldn’t actually find the source code for PriorityQueue.js so I settled for js-priority-queue. It was a bit annoying in that it required require.js to function and included several storage strategies that were totally useless. So I copied and pasted a few files together and stripped it of its AMD dependencies and created a somewhat dieted version.

But as 9th grade World History teaches us, appeasement never works, and this purely pragmatic minification led inexorably to irrational code golfing. At that point, I was overcome by this undeniable urge to cleanse it further, perhaps blinded by not-invented-here syndrome and software anorexia. Minutes later the file had been laid waste and in its place there existed its bare skeletal remains.

Now all I need to do is push it to github and write a blog post…


Ocrad.js Pure Javascript OCR via Emscripten 31 December 2013

doge

As with any minor stepping stone on the road to hell (read: the relentless trajectory of Atwood’s Law), I probably don’t need to justify the existence of yet another “x, but now in Javascript!”, but I might as well try. After all, we all would like to think that there’s some ulterior motive to fulfilling that prophecy.

On tablets or other touchscreen devices, of which there are quite a number nowadays (as this is a New Year’s Eve post, I am obliged to include conjecture about the technological zeitgeist), a library such as Ocrad.js might be used to add handwriting input in a device and operating system agnostic manner. Oftentimes, capturing the strokes and sending them over to a server to process might entail unacceptably high latency. Maybe you’re working on an offline-capable note-taking app, or a browser extension which indexes all the doge memes that you stumble upon while prowling the dark corners of the internet.

If you’ve been following my trail of blog posts recently, you’d probably be able to tell that I’ve been scrambling to finish the program that I prototyped overnight at a hackathon many months ago. The idea of the extension was kind of simple and also kind of magical: a browser extension that allowed users to highlight, copy, and paste text from any image as if it were plain text. Of course, the implementation is a bit difficult and actually relies on the advent of a number of newfangled technologies.

If you try to search for some open source text recognition engine, the first thing that comes up is Tesseract. That isn’t a mistake, because it turns out that the competition is worlds away in terms of accuracy. It’s actually pretty sad that the state of the art hasn’t progressed substantially since the mid-nineties.

A month ago, I tried compiling Tesseract using Emscripten. Perhaps it was a bad thing to try first, but soon I learned that even if it did work out, it probably wouldn’t have been practical anyway. I had figured that all OCR engines had been powered by artificial neural networks, support vector machines, k-nearest-neighbors and their machine learning kin. It turns out that this is hardly the norm except in the realm of the actually-accurate, whose open source provinces live under the protection of Lord Tesseract.

GOCR and Ocrad are essentially the only other open source OCR engines (there’s technically also Cuneiform, but the source code is in a really really big zip file from some website in Russian, and it’s also really slow according to benchmarks). And something I didn’t realize until I had peered into the source code is that they are powered by (presumably) painstakingly written rules for each and every detectable glyph and variation. This kind of blew my mind.

Anyway, I tried to compile GOCR first and was immediately struck by how easy and painless it had been. I was on a roll, and decided to do Ocrad as well. It wasn’t particularly hard- sure it was slightly more involved but still hardly anything.

GOCR and Ocrad both only operate on NetPBM files (supporting other files is done in typical unix fashion by piping the outputs from programs that convert file formats). Nobody really uses NetPBM anymore, so in order to handle a typical use case, I’d need some means of converting from raw pixel values into the format. I Googled around for Javascript implementations of PBM/PGM/PNM, finding nothing. I opened the Wikipedia page on the format and was pleased to find out why: the format is dirt simple.
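
Here’s roughly what that entails: a hedged sketch (the function name and the naive grayscale conversion are my own) of turning canvas ImageData into a binary PGM ("P5") string, the grayscale flavor of NetPBM:

function imageDataToPGM(imageData) {
    var w = imageData.width, h = imageData.height, px = imageData.data;
    // Header: magic number, dimensions, and the maximum gray value.
    var pgm = "P5\n" + w + " " + h + "\n255\n";
    for (var i = 0; i < w * h; i++) {
        // Average R, G, B into a single grayscale byte (alpha is ignored).
        var gray = (px[4 * i] + px[4 * i + 1] + px[4 * i + 2]) / 3;
        pgm += String.fromCharCode(Math.round(gray));
    }
    return pgm;
}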

If you know me in person, you’ll probably know that I’m not a terribly decisive person. Oftentimes, I’ll delay the decision until there isn’t a choice left for me to make. Anyway, serially-indecisive-me strikes again, so I alternated between the development of GOCR.js and Ocrad.js, leading up to a simultaneous release.

But in the back of my mind, I knew that eventually I would have to pick one for building my image highlighting project. I had been leaning toward Ocrad the whole time because it seemed to be a bit faster and more accurate when it came to handwriting.

Anyway, I spent a while building the demo page. It’s pretty simple but I wouldn’t describe it as ugly. There’s a canvas in the center which can be drawn to in arbitrary fonts pulled via the Google Font API. There’s a neat little thing which lets you draw things on the canvas. And to round out the experience, you can run the OCR engine on your own images, by loading the image file onto canvas to support any file the browser does (JPG, GIF, BMP, WebP, PNG, SVG) or by directly feeding the engine in its native NetPBM formats.

What consistently amazes me about Optical Character Recognition isn’t its astonishing quality or lack thereof. Rather, it’s how utterly unpredictable the results can be. Sometimes there’ll be some barely legible block of text that comes through absolutely pristine, and some other time there will be a perfectly clean input which outputs complete garbage. Maybe this is a testament to the sheer difficulty of computer vision or the incredible and under-appreciated abilities of the human visual cortex.

At one point, I was talking to someone and I distinctly remembered (I know, all the best stories start this way) a sense of surprise when the person indicated that he had heard of Tesseract, the open source OCR engine. I had appraised it as somewhat more obscure than it evidently was. Some time later, I confided about the incident with a friend, and he said something along the lines of “OCR is one of those fields that everyone comes across once”.

I guess I’ve kind of held onto that thought for a while now, and it certainly seems to have at least a grain of truth. Text embedded into the physical world is more or less the primary means we have for communication and expression. Technology is about building tools that augment human capacity, and that inevitably entails supplanting some human capability. Data input is a huge bottleneck, and we’re kind of sidestepping the problem with things like QR codes by bringing the digital world into the physical. OCR is just one of those fundamental enabling technologies which ought to be as broad in scope as the set of humans who have interacted with a keyboard.

I can’t help but feel that the rather large set of people who have interacted with the problem of character recognition have surveyed the available tools and reached the same conclusion as your miniature Magic 8 Ball desk ornament: “Try again later”. It doesn’t take long for one to discover an instance of perfectly crisp and legible type which results in line noise of such entropy that it’d give DUAL_EC_DRBG a run for its money. “No, there really isn’t any way for this to be the state of the art.” “Well, I guess if it is, then maybe it’ll improve in a few years- technology improves quickly, right?”

You would think that some analogue of Linus’s Law would hold true: “given enough eyeballs, all bugs are shallow”- especially if you’re dealing with literal eyeballs reading letters. But incidentally, the engine that absolutely everyone uses was developed three decades ago (It’s older than I am!), abandoned for a decade before being acquired and released to the world (by our favorite benevolent overlords, Google).

In fact, what’s absolutely stunning is the sheer universality of Tesseract. Just about everything which claims to have text recognition as a feature is backed by it. At one point, I was hoping that Mathematica had some clever routine using morphology and symbolic new kinds of sciences and evolved automata pattern recognition. Nope! Nestled deep within the gigabytes of code lies the Chuck Testa of textadermies: Tesseract.


Array Addition and Other Fun Javascript Hacks 30 June 2013

fun hacks

It’s times like these that I’m reminded of my favorite childhood TV show (actually it’s kind of a close contest between this and Spongebob), Mythbusters, and that perennial warning: “don’t try this at home”. But this isn’t actually dangerous in any physically life-damaging way, it’s just dangerous in that it’s a very bad idea. The whole thing started with a friend who was irked by a few Javascript oddities, one of them being the inability to add arrays or tuples of numbers.

I actually came across an idea some time ago while reading about cross-site JSON exploits. But the premise for adding arrays is rather simple: since the language only supports subtracting and adding numbers, the javascript engine will try to convert an array to a number by calling its valueOf method. There really isn’t a numeric representation of an array so valueOf usually just returns NaN, which isn’t tremendously useful. However, we can override the valueOf function to return any number we want.
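
For instance, a hedged five-line demonstration of the hook (the numbers returned are arbitrary):

var a = [1, 2, 3];
a.valueOf = function () { return 10; };
var b = [5, 4, 3];
b.valueOf = function () { return 100; };
console.log(a + b); // 110, instead of the string "1,2,35,4,3"
console.log(a - b); // -90, instead of NaN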

We can create a valueOf function which assigns numbers to arrays according to a specific pattern, such that the resultant number can reveal (without much ambiguity) which arrays were involved, and whether they were added or subtracted. There are probably a myriad of ways to construct numbers like that, but one of the simplest (or at least the first one that I could come up with) is to raise some given radix (at least 3) to a unique number that increments every time valueOf is called. You can understand it pretty simply in base 10.

Let’s say our first array is [1, 2, 3] and our second array is [5, 4, 3]. Every time the instance’s valueOf is called, we plop it onto a temporary global array and record the index. In this case, let’s assign the first array index 1, and the second array index 2. If we raise the radix 10 to that index, we can get the respective unique numbers: 10 and 100. Now these numbers can be added or subtracted, leading to 110 or 90 or -90 (or unaltered, leaving 10 and 100). To find out what operations and what numbers are involved in the operation, we can first add 1000, which is 10 raised to one more than the size of the global array, which has the useful effect of making everything positive. Now we’re left with 1110, 1090, 0910, 1010, and 1100. We can ignore the first and last digits, and each digit can be either a 1, a 0 or a 9 (if it’s anything else, this is just a number, not the result of a magical array addition). A 1 tells us the number was added, a 0 tells us the number isn’t used, and a 9 tells us that the number was subtracted.
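
Here’s a hedged sketch of that scheme in base 10 (simplified so that each array registers itself just once; all the names are invented for illustration):

var registry = [];
function tag(arr) {
    arr.valueOf = function () {
        // Register the array on first use; its "digit" is 10^(index + 1).
        if (registry.indexOf(arr) < 0) registry.push(arr);
        return Math.pow(10, registry.indexOf(arr) + 1);
    };
    return arr;
}
function decode(n) {
    // Search every +1/0/-1 coefficient assignment for one matching the sum.
    var best = null;
    (function search(i, total, ops) {
        if (i === registry.length) {
            if (total === n) best = ops.slice();
            return;
        }
        [1, 0, -1].forEach(function (c) {
            ops.push(c);
            search(i + 1, total + c * Math.pow(10, i + 1), ops);
            ops.pop();
        });
    })(0, 0, []);
    return best;
}
var a = tag([1, 2, 3]), b = tag([5, 4, 3]);
console.log(decode(a - b)); // [1, -1]: the first array minus the second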

This leaves a weird problem though, which is how you’re going to convert a number back into that array transparently. And this segues into the more atrocious segment of this hack. If and when Skynet and the other assorted malevolent (uh… I mean, misunderstood) artificial intelligences develop their courts, the fembots and gentledroids of the jury will no doubt consider me guilty of whatever felonious usercrimes may exist. They’ll consider me an equal of Norman Bytes, butchering idiomatic javascript in the shower.

What exactly makes it such a heinous offence, you may find yourself asking. The answer is simple: the unholy matrimony of with and Object.defineProperty (also, keep in mind that it isn’t DOMA’s fault).

The great thing about Object.defineProperty is that it lets you get in the middle of the whole variable assignment process. The problem is that this only works with object properties, not top level variables. But Javascript has a nice (read: wretched) with statement which lets you treat variables as if they were object properties. This still leaves a slight problem because there’s no way to define a catch-all property. And if nothing so far has wrought utter terror upon your soul, this last critical part may very well do exactly that. Since any variable the code refers to must exist there by name, we can use the Function toString method to decompile the source and use a simple regular expression (any symbol which starts with a letter, $, or _, followed by any of those or a digit) to extract candidate variable tokens, and for each one, we can define a getter and setter on a new object which is subsequently passed as the first argument to the function whose first enclosed statement is a with.
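
Here’s a hedged sketch of the building blocks of that last step, greatly simplified: the candidate names are hardcoded here rather than regexed out of the decompiled source.

var scope = {};
['x', 'y'].forEach(function (name) {
    var value;
    Object.defineProperty(scope, name, {
        get: function () { console.log('read', name); return value; },
        set: function (v) { console.log('write', name, '=', v); value = v; }
    });
});
with (scope) {
    x = 42;     // logs: write x = 42
    y = x + 1;  // logs: read x, then write y = 43
}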

The power of intercepting all function calls, variable declarations and retrievals then comes by recursively creating another fake object filled with getters and setters whenever a property is accessed. For method calls to primitive types like strings and numbers, we do the same type of sorcery but directly on their respective prototype properties. Whenever a function is passed a number which happens to match the pattern for an array addition or subtraction, we can passively intercept and substitute its value. Any string which matches a certain pattern of CSS selectors can be then transparently substituted with the NodeList which results from a document.querySelectorAll. And we can change all the variable declarations for a for..in loop such that array values are used instead of keys.

And now, four minutes before the end of this month, I’ve successfully yet again managed to eke out a blog post to fulfill my quota. And I guess I don’t have a Humane Society to prove that no humans were harmed in the making of this blog post, but how bad could it possibly be— it’s only 150 lines.


hqx.js - pixel art scaling in the browser 31 March 2013

Screenshot 2013-03-31 at 5.15.07 PM

Every once in a while some gadget has the misfortune of epitomizing the next first world problem. I guess right now, this is owning a Retina (or equivalent) laptop, tablet (arguably phone, but most web pages are scaled out so it’s not that big of a problem) and being irked at the prevalence of badly scaled graphics. So there’s a new buzzword “Retina Ready” for websites, layouts and designs which support higher resolution graphics for devices which support it, often meaning lots of new files and new css rules. It’s this trend of high-pixel-density devices (with devices like the iPad 3, Retina Macbook Pro, Nexus 10 and Chromebook Pixel - though I for one don’t currently have any of them, just this old glitchy-albeit-functional first generation Chromebook) that is driving people to vector icon fonts.

But the problem of radical increases in terms of resolution isn’t a new one. Old arcade games rarely exceeded 260x315, and the Game Boy Color had a paltry 160x144. While a few people still nostalgically lug around game cabinets and dig out their dust-covered childhood handheld consoles for nostalgic sneezing fits, most of the old games are now played with emulators running on systems several orders of magnitude more sophisticated in every imaginable aspect. So that arcade monitor that once could engross a childhood (and maybe early manhood) now appears nothing more than a two inch square on a twenty inch monitor. But luckily there is a surprisingly good solution to all of this in the form of algorithms designed in particular for scaling pixel art.

The most basic form of image scaling that exists is called nearest-neighbor interpolation, which is extra simple for retina devices because it means simply growing the size of each pixel by a factor of two along each axis. That leads to things which are blocky and, unless you’re part of an 8-bit retro-art project with a chiptune soundtrack, ugly.

The most common forms of image scaling borrow a lot from the math and signal processing fields, with names like bilinear, bicubic, and lanczos: essentially they treat an image as some kind of composition of sinusoidal parts and try to ideally extrapolate and interpolate such that visible artifacts are marginalized. It’s all very mathy, but the result is kind of the opposite of nearest-neighbor because it has the tendency to make things blurry and fuzzy.

The thing is that the latter tries to reach some kind of mathematical ideal, because images taken by your friendly neighborhood DSLR-toting amateur (spider-powers optional) are actually samples of real world points of data— so this mathematical pursuit of purity works out very well. There’s still the factor-of-four information-theoretic gap that needs to be filled in with best-guesstimates, but there isn’t really any way to improve the way a photograph is scaled without using a higher-resolution version of said photograph. But most photographs that are taken already are sixteen-megapixel monsters and they usually still look acceptable when upscaled.

The problem arises with pixel art, little icons or buttons which someone painstakingly drew in Photoshop one lazy summer afternoon in the late 90s. They’re everywhere and each pixel isn’t captured and encoded by a sampling algorithm of some analog natural phenomena— each pixel was lovingly crafted and planted by some meticulous artist. There is no underlying analog signal to interpret, it’s a direct perceptual hookup to the mind of the creator— and that’s why bicubic sampling looks especially bad here.

Video games, before 3d graphics engines and math-aware anti-aliasing concerned with murdering jaggies, in the old civilized age of bit-blitting, were mostly constructed out of pixel art. Each color in that limited palette was placed there for a reason and could be exploited by specialized algorithms to construct higher-quality upscaled versions which remained sharp. These come with the names EPX, Scale2x, AdvMAME2x, Eagle, 2xSaI, Super 2xSaI, hqx, and most recently, Kopf-Lischinski. These algorithms could be applied in real time to emulator windows to acceptably scale a game to new sizes while eschewing jagged corners and blurry edges.

Anyway the cool thing is that you can probably apply these algorithms in lieu of the nearest-neighbor or bilinear scaling algorithms used by browsers on retina platforms to effortlessly upgrade old sites to shiny and smooth. With a few rough heuristics (detect if an image appears to be a sprite by testing for a limited palette, see if the image is small or a perfect square, detect if it has transparent pixels) this could be packed into a simple script include that website makers could easily inject into their pages to automagically upconvert old graphics to new shiny high-resolution ones without having to go through the actual effort of drawing new high resolution graphics and uploading them online. And this could also be packaged as a browser extension so that, once and forever after, this first-world nuisance shall be no more.
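
To make the heuristics concrete, here’s a hedged sketch of the limited-palette test (the thresholds are picked arbitrarily, and none of this is part of hqx.js itself):

function looksLikePixelArt(img) {
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // getImageData only works on same-origin (untainted) images.
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var palette = {}, count = 0;
    for (var i = 0; i < data.length; i += 4) {
        var key = data[i] + ',' + data[i + 1] + ',' + data[i + 2];
        if (!palette[key]) {
            palette[key] = true;
            if (++count > 64) return false; // too many colors: likely a photo
        }
    }
    // a tiny palette on a small image suggests a sprite worth upscaling
    return img.naturalWidth <= 256 && img.naturalHeight <= 256;
}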

Before setting out to port hqx-java to javascript, I actually did some cursory googling to see if it actually had been done before. Midway through writing this post, I found out that it actually had been done before, in a better way, so I won’t even bother linking to my inferior version. But either way the actual goal of this project was the part which was detailed in the last paragraph, that of an embeddable script or browser extension which could heuristically apply pixel-scaling algorithms— something I probably won’t bother trying to do until at least after I get my college laptop (which I anticipate will be a Retina Macbook Pro 15”). Nonetheless, I haven’t written an actual blog post in almost three months and it’s the last day of this month, and I guess it’s better than having you all (though nobody’s probably going to read this now that Google Reader has died) assume that I’ve died. Anyway, now I’m probably going to retroactively publish old blog posts in previous months to feign continuity.


Whammy: A Real Time Javascript WebM Encoder 19 August 2012

This is sort of a conceptual reversal (or not, this might just be making the description needlessly confusing) of one of my older projects, Weppy. What Weppy did was add support for WebP in browsers which didn’t support it by converting it into a single-frame video. This is instead predicated on the assumption that the browser already has support for WebP (at this point, that means it only works on Chrome since it’s the only browser which actually supports WebP), not only decoding WebP but encoding it as well.

The cool thing about WebP which was exploited in Weppy is that it’s actually based on the same codec as WebM, On2’s VP8. That means the actual image data, when the container formats are ignored, is virtually interchangeable. With a catch: it’s intraframe only.

So it’s a video encoder in that it generates .webm files which should play in just about any program or device which supports the WebM format. But interframe compression is actually a fairly important thing which could reduce the file size by an order of magnitude or more.

But, there isn’t too much you can do on the client side in the ways of encoding stuff. And whatever you do, you basically can’t do interframe compression (aside from some really rudimentary delta encoding). More or less, when your only alternative is to maintain an array of DataURL encoded frames or encoding it (rather slowly) as a GIF, a fast but inefficient WebM encoder stops looking too bad.

This was actually Kevin Geng’s idea, and he contributed some code too, but in the end most of the code was just leftovers from Weppy.

Demo

http://antimatter15.github.com/whammy/clock.html

Basic Usage

First, let’s include the JS file. It’s self-contained and basically namespaced, which is pretty good I guess. And it’s not too big: minified it’s only about 4KB, and gzipped it’s under 2KB. That’s like really really tiny.

<script src="whammy.js"></script>

The API isn’t terrible either (at least, that’s what I’d like to hope)

var encoder = new Whammy.Video(15); 

That 15 over there is the frame rate. There’s a way to set the individual duration of each frame manually, but you can look in the code for that.

encoder.add(context or canvas or dataURL); 

Here, you can add a frame. This happens fairly quickly because basically all it’s doing is running .toDataURL() on the canvas (which isn’t exactly a speed-demon either, but it’s acceptable enough most of the time) and plopping the result onto an array (no computation or anything). The actual encoding only happens when you call .compile()

var output = encoder.compile(); 

Here, output is set to a Blob. In order to get a nice URL which you can use to stick in a <video> element, you need to send it over to createObjectURL

var url = (window.webkitURL || window.URL).createObjectURL(output); 

And you’re done. Awesome.

Documentation

Whammy.fromImageArray(image[], fps) this is a simple function that takes a list of DataURL encoded frames and returns a WebM video. Note that the images all have to be encoded with WebP.

new Whammy.Video(optional fps, optional quality) this is the constructor for the main API. quality only applies if you’re sending in contexts or canvas objects, and doesn’t matter if you’re sending in encoded stuff

.add(canvas or context or dataURL, optional duration) if fps isn’t specified in the constructor, you can stick a duration (in milliseconds) here.
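
Putting the pieces together, here’s a hedged end-to-end sketch (it assumes a <canvas id="c"> to scribble on and a <video> element to display the result; neither is part of the library):

var canvas = document.getElementById('c'),
    ctx = canvas.getContext('2d'),
    encoder = new Whammy.Video(15);
for (var i = 0; i < 30; i++) {
    ctx.fillStyle = 'hsl(' + i * 12 + ', 80%, 50%)';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    encoder.add(ctx); // cheap: just a toDataURL behind the scenes
}
var output = encoder.compile(); // the actual encoding happens here
var url = (window.webkitURL || window.URL).createObjectURL(output);
document.querySelector('video').src = url;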

Todo

This pretty much works as well as it possibly could at this point. Maybe one day it should support WebWorkers or something, but unlike the GIF Encoder, it doesn’t actually require much real computation. So doing that probably wouldn’t net any performance benefits, especially since it can stitch together a 120-frame animation in like 20 milliseconds already.

But one of the sad things about it is that now it uses Blobs instead of strings, which is great and all except that blobs are actually slower than strings because it still has to do the DataURL conversion from string to Blob. That’s pretty lame. Firefox supports the canvas toBlob thing, but for some reason Chrome doesn’t, but eventually it probably might, and that might be useful to add.

Also, if someone ever makes a Javascript Vorbis encoder, it would be nice to integrate that in, since this currently only does the video part, but audio’s also a pretty big part.


Distributed Pi Revisited 11 August 2012

So I’ve been interested in distributed computing for some time, since 2007, basically around the time I started doing web development. I’ve always sort of romanticized the notion of distributed computing because of its vast theoretical potential. Projects like BOINC, SETI@Home and Folding@Home have always given me some kind of idealized notion of “computing for good”, inspiring some kind of useful social, scientific or otherwise beneficial change to humanity. But those projects never got the kind of adoption which could truly change the world; together they form networks which are specialized but can, in the end, only eke out relatively minute performance.

Part of the problem lies in their intrinsically forbidding, voluntary nature. It takes too much effort to install, and that amplifies the problems intrinsic to any “democratic” system: why bother? Voter apathy often stems from a feeling that an individual contribution regresses towards nothing, which is statistically certainly true, but never helpful when participation is exactly what these collective entities need.

In a broader sense, I’ve always suspected that part of what is necessary for technological progress is the loss of control. Certainly something which appears true from a human experience, that the vast arrays of neurons bound by a cranium are uninviting and wholly unwilling to expose their inner raw computation power to the emergent conscience within them. No doubt the human brain is good at computing, just not (as it appears) math, but intuition (especially of the physical kind) can only arise when a system internalizes some very complex math. We just can’t access it, because evolution or some other process has determined that it was something that had to be done away with on the climb to higher cognitive powers.

Likewise, I think computers are convergent in that sort of way. There’s a great ladder of abstraction which is growing taller and taller, a tower of babel of sorts, and at the end, perhaps we’ll find a similar goal, maybe not of god per se, but an equally sought target of consciousness or some other high level intellectual faculties contemporaneously denied to computers.

Part of this is losing control, which is inevitable and politically dangerous. Computing terminals are powerful and capable of much more than they’re being used for. In fact, most computers spend most of their time idling; the processors get ever faster not to handle the idle time but to smooth out the few bursts that actually require fast computation.

It’s impractical for operating systems to build into them some kind of distributed computing platform, however ideal that would be. It’s too contentious to ever get adopted, conceptually a short step away from the ability for a mega-corp to pilfer your precious information as well.

But the browser, specifically the web, provides us with an interesting opportunity. Here, we have significantly less fear of personal privacy, since it comes with an expectation of sorts for information sharing. The existence of client side scripting and its prevalence gives an implicit permission to exploit system resources during the site’s tenure.

Now, with these granted permissions, we have the freedom to exploit them in order to create something truly remarkable. It’s not so much voluntary on the part of the user as it is voluntary on the part of the site maintainer, who now has the responsibility of allocating and managing the system resources of his visitors, for their brief but additive virtual encounters. The users lose control, and that’s a good thing.

Old Stuff

When I first wrote that introduction, I mentioned that I had been interested in distributed computing for quite a while, and that’s true. 2007 was, at the time of writing, five years ago. A bit less than a third of my life, which is fairly significant. It doesn’t even quite belong to recent memory, and perhaps has escaped living memory into something quasi-zomboid. Part of the reason why this section is relatively short compared to the other ones is that I really don’t have a very good memory of what that was like, and it’s rather sad that my previous attempts never had long writeups regarding their potential and process (however, there do tend to be more little updates about the process).

Anyway, as it’s been such a long time, I’m trying to dig through old stuff to pick out exactly what I did back then. The first things I can find are somewhat easier and simpler problems, finding palindromes, or at least words which spell different words when reversed (I believe this was based on a program I had written a few years prior in Visual Basic). Another was for cracking hashes in a distributed manner.

Back then, I think those were some of the first explorations into the concept of distributed computing in the browser. It was the days before WebWorkers, and Cross-Origin Resource Sharing was in its infancy. The pool of possible computation was, however, still large, but most of it was forbidden. Take note, however, that distributed and parallel computing have existed for far longer, perhaps even longer than the idea of a computer.

Computers get faster each year, thanks to Moore’s Law, and more and more people get connected to the Internet each year. Perhaps, in addition to Metcalfe’s law with regard to the value of a telecommunications network, there is the ability to harness idle computation power in order to increase the value to its participants linearly as the network grows. There are almost a billion people connected to the Internet, and however insignificant a contribution each of those terminals brings to the network, together they would add up into something truly remarkable.

However, hash cracking and palindrome finding are fairly trivial. They have little real-world practicality and don’t seem in any way representative of the greater problems facing humanity. While it’s a little impractical to aim for some kind of project which has immediate applications to the value of humanity, it’s certainly a target worth approximating. They’re isolated examples which are easy to parallelize and hardly count as real ventures into the field.

Calculating digits of pi is marginally less useless and represents something which is significantly harder and tasks which may be closer to the kinds of computations which are performed in the real world. There have been other projects with similar goals, most notably, PiHex, which acts as a sort of vindication of this as a possible legitimate attempt. At this point, I have no idea if the algorithm works with digits in the order of trillions, and that’s one of the big reasons I’m not actually trying to succeed PiHex.

Revisiting

Okay, so I decided to dig this up again.

Why? Because I really want to play around with server-side JS a little more (just in case getting a node vps has anything to do with merit). The sort of funny thing is that’s exactly what it used to be, back in the ancient past, before NodeJS existed (it was before Google Chrome was released, and V8 wasn’t open source). I used to have an application which would schedule jobs for computation on the client and provide them in a more useful format.

However, I never bothered saving the files outside of the AppJet web IDE and the code was lost when the service was discontinued. I tried porting to Google App Engine, but that version was plagued with strange bugs which ended up printing out the wrong digits after a few thousand were computed.

Right now, revisiting the idea, I’ve added a few things which are somewhat more characteristic of the change in web browsers since my first attempt. The old version tried to mock threading by the liberal application of setTimeout, which meant that most interface interaction wouldn’t be terribly affected. However, it did incur a noticeable slowdown. Now, it uses WebWorkers, which provide numerous interesting possibilities. First and perhaps most important are context isolation and lightweight, asynchronous embedding in multiple domains (origins). Since WebWorkers can work (see what I did there?) across multiple origins and still have access to XMLHttpRequest, and with the advent of Cross-Origin Resource Sharing (CORS), it’s easy to embed this in a page. The low embedding overhead and the true multi-threading abilities bring a lot in the way of making this more than an intriguing concept and into something much more practical.

The code, which is now on the Github repository in a new folder named node, has a very basic interface. Rather than manipulating the widths of divs for progress indicators, it now uses <progress> (it’s actually really cool to see how far the web platform’s state of affairs has changed over the past few years). One cool thing about the progress bar is that it guesses progress by dividing the current prime number by the end point, which has an interesting effect of making the progress bar faster towards the end (this happens because the density of prime numbers asymptotically declines). While I could possibly invert the effect in order to create a more linear progression fairly trivially, behavioral studies indicate that people who stare at progress bars all day feel less irritated (i.e. more satisfied) when the bar speeds up in the beginning and end (then again, the progress indicator isn’t really meant to be shown to people, and if it is, the user experience is hardly something which is being optimized for; and this does nothing in the way of speeding up the beginning, in fact it’s quite the opposite by slowing down the beginning, so perhaps it should first linearize the progress and then map it to some function which has accelerated starts and ends, perhaps some trigonometric function).

Part of what is cool about the project is the underlying algorithm, a port of Fabrice Bellard’s optimized version of the Bailey–Borwein–Plouffe algorithm. What’s unique about the algorithm is that it uses up comparatively little memory, and as such it’s uniquely suited to distributed computing, especially in the browser environment which demands relatively thin clients (however, browser caching means that the download would only necessarily need to be completed once, and embedding the computation in an iframe with certain storage permissions and appCache could allow persistence). It’s a relatively short algorithm and fairly easy to port.
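
For the curious, here’s a hedged sketch of plain BBP hex-digit extraction in Javascript (not the project’s actual port, and not Bellard’s faster variant; just an illustration of why the algorithm distributes so well: any hex digit of pi can be computed independently with almost no memory):

function bbpHexDigit(n) {
    // Modular exponentiation, so 16^(n-k) never leaves double precision.
    function powMod(b, e, m) {
        var r = 1;
        b %= m;
        while (e > 0) {
            if (e & 1) r = (r * b) % m;
            b = (b * b) % m;
            e >>= 1;
        }
        return r;
    }
    // Fractional part of 16^n * sum(1 / (16^k * (8k + j))).
    function series(j) {
        var sum = 0, k;
        for (k = 0; k <= n; k++)
            sum = (sum + powMod(16, n - k, 8 * k + j) / (8 * k + j)) % 1;
        for (k = n + 1; k <= n + 20; k++) // rapidly vanishing tail terms
            sum = (sum + Math.pow(16, n - k) / (8 * k + j)) % 1;
        return sum;
    }
    var x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1;
    return Math.floor(((x % 1) + 1) % 1 * 16);
}
// pi = 3.243F6A88... in hex
console.log([0, 1, 2, 3].map(bbpHexDigit)); // expect [ 2, 4, 3, 15 ]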

However, as it’s a digit extraction algorithm, it’s not very efficient for calculating digits sequentially. That actually is something of a feature which I neglected in the so-called modern incarnation. Older versions had the prime number (sub-job) allocation done on the server, which meant using up a lot of memory in keeping track of the jobs. The current one has a server which is entirely unaware of the actual computational process, which leads to being lighter persistence-wise but comes at the obvious cost of losing granularity in scheduling.

One optimization about this strategy is that a client doesn’t have to wait for a server to respond with another job request in order to continue. Instead, it operates in an almost completely asynchronous manner. On the client’s first request, the server gives it a job which is given by a starting point (a prime number to continue from) and a digit number. From that point, the client begins computing until that digit is complete, sending its progress back to the server once in every specified interval (a couple of seconds).

If the client were to disconnect in the middle of a computation (which can take quite some time, especially in later digits, so it’s more than certain to happen), the server will be able to resume the calculation by sending the request to another client after a certain expiration date has passed. So future refinements to the server (and possibly the client) could make it more efficient with processing digits of less significance by allocating jobs and taking into account certain checkpoints. For instance, all the prime numbers between 3 and some integer N have to be iterated through (where N is approximately 3 times the number of digits from the radix). Rather than maintaining a “single-threaded” system (as it currently does, the parallel processing power comes from sending out multiple digit segments to be processed), it could instead send out simultaneous requests for different segments. The client may have to be modified in order to stop at certain designated checkpoints to prevent clients from overlapping.

This sort of scheme would be more efficient than storing jobs used in previous versions, first of all by retaining the sort of “dumb” server which doesn’t do any real computation that contributes to the process. Instead it only acts as a mediator and persistence layer (as perhaps is the best role to give to any server). Rather than keeping track of every single prime number as a different row in some database, the server would only need to keep track of as many sections as clients exist.

The new version also has some interesting changes. First is the switch to NodeJS, which was hinted at before; since it acts as a persistent server rather than something which is called and disposed like a CGI process, the queuing system is now entirely in-memory. However, as soon as it’s possible, it writes out the completed digits into a file (pi.txt), and if the server is ever interrupted the computations resume from the last saved version of the digits (only the jobs which were processing during the shutdown are lost). However, this would need to be changed if it were to be some more ideal use of the digit extraction method, since in such a situation, the vast majority of jobs would be for a single span of digits (and it would really suck to lose all of those).

So in a sense, this update constitutes a regression, a step in the wrong direction in that it’s using the algorithm wrong. However, in another sense, it paves the way to a setup which has a few more ideal properties, a lighter weight server end and a more efficient use of computation time on the part of the clients. In essence, the shift from fixed-size discrete computational tasks (or jobs) to mere ranges. The obvious benefit to the latter is the reduced memory demand and the ability to operate continuously, i.e. without pausing for acquiring tasks and behaving truly asynchronously (in a very Node.JS style). Perhaps taking it further along this direction (since WebWorkers are allowed to access applicationCache), clients could be given a starting point and just let rip through a few hundred cycles, synchronizing less frequently. Some specialized data structures might be employed to keep track of individual contributions so that the ranges are doled out optimally. Maybe this should instead evolve into some kind of successor to the PiHex project rather than some incredibly slow and inefficient sequential pi calculation platform.

One thing which I didn’t explore was the concept of having a persistent socket to the server. For this set of circumstances, the benefits of maintaining a persistent bidirectional socket weren’t large enough to warrant that kind of development. Right now, it’s communicating through a few small GET requests to a server. Part of the reason GET was used rather than POST was that it was designed with the idea that it could theoretically run in a cross-domain manner, and sending a POST request requires preflighting with CORS headers. Since it’s only ever really sending 30 or so bytes at a time, any additional overhead should be avoided. But this itself is also perhaps an argument for using WebSockets (though certainly not long-polling, since that incurs even more overhead). Quite often the synchronization requests don’t require the server to give anything back, but HTTP-wise, the server’s always going to have to send about a hundred bytes of headers to fulfill the HTTP requirements. Also, whenever the client sends a request, it wastes about a third of a kilobyte with header information like the User-Agent. Network-wise it might in the end be more efficient just to have a low-overhead persistent socket connection.

Having a persistent socket would make synchronization interesting as well. While the changes wouldn’t be terribly drastic, since there’s still the goal of minimizing transmission overhead, a few changes to the scheduling could be conveyed without necessarily waiting for the next checkpoint. And of course, there’s the trade-off which comes with the question of how far apart the checkpoints should be spaced. Right now it’s something fairly low, in the range of 10 seconds, since the duration of a single page visit usually isn’t much more than a few seconds. However, if a client-side persistence layer were added, the checkpoints could be spaced out much wider. Perhaps instead of every few seconds, one could be made every minute, hour, day or even week, where a single computational task allocation could span multiple domains and multiple days, while the user is online and offline. However, having wider-spaced checkpoints does incur a cost in terms of synchronization: it reduces the operator’s sense of intimate awareness of what’s actually happening and increases the chance of overlaps.

Another possible step is some kind of framework which handles all of these problems automatically. Instead of having to manually manage variables and the synchronization of tasks between clients and servers, one could construct a domain-specific language for distributed computing (a superset or subset of javascript which automatically manages the state of variables in loops and such, compiling into some kind of client for generic parallelized algorithms). Maybe it could do something cool and look at which loop is the optimal one to split up, based on the amount of data which needs to be sent across, so that transfer could be minimized.

One of the things I played with while revisiting this project was LLJS, another kind of specialized language which builds upon Javascript. It’s marketed as the “bastard child of Javascript and C”, with manually managed memory. I was hoping that using a typed language might bring some speed improvements (not really, in this situation). Still, LLJS might be a good basis for this auto-magical compiler from synchronous routines into largely parallelized code. Then again, maybe there are in fact limits to parallel computing, and it’s better to search for specific algorithms which have the right properties. And maybe in the end, the problem of porting it to Javascript and managing the client-server communication fades into relative triviality.


Pinball 19 June 2012

Last year, for Computer Science 1, I built a pinball game in Java. Yes, Java, as in sans-script. I wrote it in Eclipse and it was pretty functional. Of course, the biggest part of the game was the collision detection, collision detection which, mind you, I built with only the slightest knowledge of how it’s actually done. So yeah, I made up a scheme. It sucks, but at least it sort of works. I was actually fairly proud of it, since it was basically the only thing which was mildly interesting and non-teacher-directed (that is, it took more than one hundred ninety-two seconds to write after the teacher gave a cue to attempt the task). I thought it was so cool, in fact, that I decided to port it over to both Javascript and Coffeescript, the latter being a language which was uber-hip and awesome (I believe these events transpired approximately at the time of Coffeescript’s big one-oh release).

In essence, the collision detection scheme I devised was pretty simple. The ball experiences gravity in the form of a constant downward acceleration; its state is maintained by a variable (actually an array to mimic a vector) named position, and another named velocity, where the y-coordinate of the latter is incremented by some term every time a frame is rendered. The collision detection system deals with the intersection between lines. First it takes the current position of the ball and envisions the current velocity as a line extending infinitely in that direction. All the other attributes of the environment are also lines, and the algorithm treats them all as unbounded (rather than as segments). It quite trivially computes the intersection between the lines and calculates (with Euclidean distance) how far that intersection is from the ball’s current point. If it detects a collision (the distance is less than the specified radius of the ball), then the ball’s direction is altered based on its current position and the tangent of the line it intersected with.
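
A rough sketch of the scheme, reconstructed from the description above (this is not the actual game code, and it treats walls as infinite lines, just like the original):

var pos = [200, 0], vel = [1, 0], radius = 8, gravity = 0.5;

//intersection of the ray from pos along vel with the infinite line through a and b
function intersect(a, b) {
  var dx = b[0] - a[0], dy = b[1] - a[1];
  var det = vel[0] * dy - vel[1] * dx;
  if (det === 0) return null; //parallel, no intersection
  var t = ((a[0] - pos[0]) * dy - (a[1] - pos[1]) * dx) / det;
  return [pos[0] + vel[0] * t, pos[1] + vel[1] * t];
}

function step(walls) { //walls is an array of [a, b] line endpoints
  vel[1] += gravity; //constant downward acceleration
  for (var i = 0; i < walls.length; i++) {
    var p = intersect(walls[i][0], walls[i][1]);
    if (!p) continue;
    var dist = Math.sqrt(Math.pow(p[0] - pos[0], 2) + Math.pow(p[1] - pos[1], 2));
    if (dist < radius) { //collision: reflect the velocity about the wall's tangent
      var dx = walls[i][1][0] - walls[i][0][0], dy = walls[i][1][1] - walls[i][0][1];
      var len = Math.sqrt(dx * dx + dy * dy), tx = dx / len, ty = dy / len;
      var dot = vel[0] * tx + vel[1] * ty;
      vel = [2 * dot * tx - vel[0], 2 * dot * ty - vel[1]];
    }
  }
  pos[0] += vel[0];
  pos[1] += vel[1];
}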

Fast forward a few months, and it’s July of 2011. Since then, I had basically forgotten about the pinball game, because it really wasn’t cool enough to be worth revisiting. But I was at Virginia’s Summer Residential Governor’s School for Science/Math/Technology (hardly wholly unexpected). It was some dark night and I was in the back row of a white shuttle bus en route to an observatory where artificial light is verboten. Well, it started raining, the trip was cancelled, we were heading back, and I was bored. So I pulled out my iPad, which I had for some reason, and decided to look at what I had in my Dropbox that I could play around with offline. Lo and behold, there’s a pretty crappy Javascript pinball implementation (which notably, never having been finished, is just a ball bouncing between a few lines). I changed the parameters and made weird shapes with the ball hopelessly bouncing around in awkward trajectories, and I thought it was cool.

A friend sitting next to me noticed that the balls were speeding up. He inquired whether or not I was using the Euler method, and I didn’t know what that was. Either way, I was in fact inadvertently using it. But the point was that here he was, making a fuss about the fact that my simulation in two dimensions blatantly ignores the conservation of energy. And as such, I was compelled to find a solution. I studied the code for a bit longer, trying to expose the nasty snippet which might have made it evade the laws of physics, albeit to no avail. In the midst of my struggle, he noticed at some point that since the trajectory paths are simple parabolas, it should be possible to find a closed-form solution to the collision problem (rather than my iterative approach). In the end, I nobly gave up on my epic quest to fix the universe, and worked on some other crap.

A week ago, I finished my first formal (that is, in-school) calculus class, where I learned what Euler’s method was and how I had inadvertently implemented it. But I still had to take the class final, which the teacher rather kindly decided to make extremely easy. And since I also had a huge math portfolio (which I decided to typeset with LaTeX, as I have for nearly every paper this year, for no real reason whatsoever) due that day, he made it take-home, with the only caveat that we weren’t to collaborate on the final in his presence (e.g. after school in his room), a subtle nod of assent to all other forms of aid. Well, at home, I thought that I should probably make use of all the tools I had at my disposal. I thought of that copy of Mathematica which hopefully wasn’t collecting dust on my hard drive platter, because that might lead to a head crash, but which alas was figuratively gathering dust.

I decided to start playing around with the awesomeness that is a computer algebra system, named after a guy whose last name is that otherwise inexplicable W on the periodic table (purely coincidental, unless Stephen Wolfram is actually a British iron manganese tungstate mineral). In what felt like no time (at least it seemed interesting and productive enough that I in no way thought to discern the real time which elapsed), I had completed my final exam in a somewhat minimalistically formatted Mathematica notebook. I had, figuratively, fallen in love.

Soon afterwards, as I was working on some other completely unrelated project (creating an intraframe-only WebM encoder), I needed something somewhat cool to generate a series of frames to be encoded into video. Digging through my (newly reorganized) Dropbox project folder, I found that old incomplete pinball game. And I remembered that story and that brief suggestion, and decided to use my newfound Mathematica prowess (that is, my ability to use Solve[] with some degree of it not puking) to find that closed-form solution to the collision between a parabola and a line. To my dismay, Mathematica returned an ungodly mess of an equation. And it also didn’t work for vertical lines.

But eventually, I did manage to get it to work. And thankfully it now obeys the laws of physics, at least to within about 10×10^-13. I also realized that that demo isn’t actually that cool, but I used the older version anyway for my video encoder demo. So that’s the happy ending: everything lived happily ever after, despite the fact that the algorithm is still totally buggy and breaks when things are either too fast (velocity > 4000px/second) or too slow (velocity < 1px/second). And with corners, it occasionally fails. But I’m long since past the threshold of caring. At least now you can speed it up a lot, because the path prediction and the collision detection are completely separate from the render cycle (which also makes the speed independent of processor speed). It uses requestAnimationFrame and other stuff, and if you want to try it, here.


Offline Wiki Redux 30 December 2011

There’s just something incredibly alluring about the concept of holding the sum of human knowledge with you at all times. While near-ubiquitous connectivity alleviates this to a certain extent, the momentary lapses of networking are incredibly corrosive to an information-dependent mentality. Wikipedia never ceases to amaze me and, while I’ve tried in the past to encapsulate part of its sheer awesomeness, this marks a much more significant attempt.

The differences start even before the data gets to the application. The preprocessing toolchain was entirely rewritten, for a multitude of reasons. First of all, it compresses not the entirety, but rather the most popular subset of the English Wikipedia. Two dumps are distributed at the time of writing: the top 1,000 articles and the top 300,000, requiring approximately 10MB and 1GB, respectively. While ostensibly the mere top 300k articles is far too narrow to delve deep into the long tail, the breadth of that meager 1/25th of articles consistently surprises me in its depth. The advantage is that at 1GB, it’s relatively easy to fit onto any system. The algorithm which strips extraneous content has been made far more sophisticated than the original series of regular expressions. This enables greater compression and less accidentally omitted content.

On the application end, things have switched from a GWT-compiled LZMA SDK to a speedy, pure javascript decoder. This makes page loads significantly speedier and allows greater compression ratios, since individual blocks can be made larger (256KB instead of 100KB). It also now uses WebGL Typed Arrays to further speed things up, such as when sending data to and from the WebWorker thread.

The interface was redesigned with CSS media queries to dynamically transition between different modes in response to different viewing environments. The interface consists of two regions: the article content, and a fixed-position recessed left panel which holds the page title, a search bar, controls and the page outline. The panel collapses down to a toolbar header automatically when screen real estate is limited. It uses an Apple-esque noise texture background.

Downloads happen in little units called chunks (half a megabyte each for the dump file and about four kilobytes for the index), and the local file can be built up out of order. All storage operations first check the virtual file in IndexedDB or Web SQL Database; if a chunk isn’t there and the browser is online, it transparently uses an XMLHttpRequest to fulfill the request and caches it to disk in the respective persistence mechanism. A bitset is used to keep track of which chunks are already downloaded and which still need to be.
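
In pseudocode-ish JS, the chunk logic looks something like this (saveChunk and loadChunk are hypothetical stand-ins for the IndexedDB/WebSQL layer, and I’m pretending each chunk is a byte range of one big file):

var CHUNK_SIZE = 512 * 1024; //half a megabyte for the dump file
var downloaded = []; //the bitset: one flag per chunk

function getChunk(n, callback) {
  if (downloaded[n]) return loadChunk(n, callback); //already cached to disk
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'dump.bin', true);
  //fetch just this chunk's byte range, so chunks can arrive out of order
  xhr.setRequestHeader('Range', 'bytes=' + n * CHUNK_SIZE + '-' + ((n + 1) * CHUNK_SIZE - 1));
  xhr.onload = function () {
    downloaded[n] = true;
    saveChunk(n, xhr.responseText, function () {
      callback(xhr.responseText);
    });
  };
  xhr.send();
}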

http://offline-wiki.googlecode.com/git/app.html


Simple Anti Phishing Mechanism 12 November 2010

I just prototyped the anti-phishing system described in the earlier post. Gareth Hayes noted that the page could have been wrapped in an <iframe> to reuse the identicon. This implementation only shows the identicon if javascript is present and functional and top==window. Hopefully there aren’t many other huge flaws with it. So it would be awesome if you tried it out.
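
The core of the guard is tiny; something like this (the element id is made up):

//only reveal the identicon when scripting works and the page isn't framed
if (top === window) {
  document.getElementById('identicon').style.display = 'block';
}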


Weppy Updates Opera, Chrome and Firefox support and simpler usage 09 October 2010

With help from @Frenzie and @paul_irish comes the latest not-yet-versioned release of Weppy, my Javascript WebP to WebM conversion library - something of a polyfill for a format that is yet to be part of any specification (HTML5 only seems to reference example formats for the image src attribute, such as PNG, GIF, JPEG, APNG, PDF, XML, SVG, SMIL, and MNG). The new release brings some awesome new features that really don’t change much and shouldn’t really be used in the real world, because most browsers in the world still aren’t Firefox, Chrome or Opera - that is, a large portion of the browser market includes browsers like Safari and IE, either suffering from antiquity (IE6! aah!) or just liking h264 (IE9 + Safari).

The new release supports Opera. I had never bothered debugging Opera; I figured it was another huge issue that would demand a rewrite (as supporting Firefox had, because the order of object keys isn’t preserved, which breaks the EBML result - or at least breaks Firefox’s parser, which seems to be somewhat stricter than Chrome’s; is that ffmpeg?). And after a premature optimization (stripping “unnecessary” EBML tags), my code didn’t work in Chrome, so I had to revert to an earlier revision. All my testing code was based on file drag-drop stuff, and Opera doesn’t support that. Until I saw this mozillazine topic, I didn’t care, but it turned out to be a lot easier to fix than I feared.

Part of the solution was getting rid of the canvas stage. Admittedly, the canvas stage was pretty useless once the toDataURL() stage was removed before the first public release, but I didn’t feel like deleting code, so it stayed there. Also, I noticed that the global variable that gets introduced was accidentally named “WebM” when it should be “WebP”; because of the uncreative format naming and similarities, I hadn’t noticed. Not sure, but it seems to be more stable now.

Chrome will probably add WebP soon, and the library needs to be future-proof, detecting whether or not a browser supports the WebP format natively. To do that, it creates an Image, sets the src to a data url of a 4x4 webp image and listens to the onload and onerror events, checking that the size is correct and there were no errors loading it. The routine is expected to error, and it’s totally untested, as there aren’t any browsers that support the feature yet for me to try.
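
In other words, something like this (the actual 4x4 image bytes are elided):

function testWebP(callback) {
  var img = new Image();
  img.onload = function () {
    //a correctly decoded 4x4 image means native WebP support
    callback(img.width === 4 && img.height === 4);
  };
  img.onerror = function () {
    callback(false); //expected on every browser at the moment
  };
  img.src = 'data:image/webp;base64,...'; //base64 of a tiny 4x4 WebP image
}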

Another change is that, by default, it will automatically load all the same-origin (because of the limitations of XHR) webp images (from <img> tags) on the DOMContentLoaded event, so the library is practically drop-in now. In any web page, you can pretty much add <script src="http://antimatter15.github.com/weppy/weppy.js"></script> and, on the supported browsers, it should automatically load and replace all WebP images, though that’s not something I would really recommend.

The demo is the same place it always was: http://antimatter15.github.com/weppy/demo.html

There is also this nifty hack that uses <canvas> to add an alpha channel to the WebP image (adapted from the original JPEG one): http://antimatter15.github.com/weppy/alpha/alpha.html

Also, please follow me on twitter.


3D Sculpting Tool 08 September 2010

So for the past few days, I’ve been playing with this neat app I discovered called Sculptris. It’s this amazing digital sculpting app where your canvas is basically a blob of virtual clay that can be molded into any shape with any level of detail. It’s Windows-only, and I thought it would be neat to have something like it that works in the browser. First I thought of using WebGL, but trying to start chromium with webgl always ends up with a GLXFBConfig error, and on Firefox it’s stuck using libosmesa, which gives absolutely terrible framerates (IDK why, but Fx4 crawls on my computer; Panorama is so slow it lags from simple dragging). Eventually, I ended up basing it off my js1k function plotter, because it’s a fairly simple 3d renderer.

The pictures at the top of the page are of the Death Star and Alderaan, respectively, mostly because planets (and moons that aren’t moons) are mostly spherical, and explosions and light rays are pretty much the only things that this app is suitable for at this stage.

So how does it work? It’s actually really simple, and probably too simple. At ~300 LOC of pretty trivial JS, it’s not much at all (though it is hideous, I must warn you). It is based on my js1k entry, and I considered making this a 1k submission, but I gave up and admitted I couldn’t. It starts off simply enough: do lots of freaky trigonometric loops to generate a set of points that lie on a sphere. Then loop through every 50 milliseconds, apply a 3d transformation, get the points that lie within a radius of the mouse position and reference them in a selected array. Then you segregate the selected array into the foreground and background, find the arithmetic mean of the fg and bg, respectively, and find the difference. Then I convert it to spherical coordinates, discard the magnitude of the vector, and add the result to the position of the foreground points. Then loop through all the points and, wherever there’s a long distance between neighbors, insert a new dot at the midpoint.
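
Condensed into code, one inflate step looks roughly like this (a simplification of the real thing, and it normalizes the direction vector directly rather than round-tripping through spherical coordinates):

function inflate(selected, strength) {
  //average the foreground and background halves of the selection
  var fg = [0, 0, 0], bg = [0, 0, 0], nf = 0, nb = 0;
  selected.forEach(function (p) {
    var sum = p.foreground ? fg : bg;
    sum[0] += p.x; sum[1] += p.y; sum[2] += p.z;
    p.foreground ? nf++ : nb++;
  });
  if (!nf || !nb) return;
  //the difference of the two means gives the push direction
  var d = [fg[0] / nf - bg[0] / nb, fg[1] / nf - bg[1] / nb, fg[2] / nf - bg[2] / nb];
  var len = Math.sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]) || 1;
  //normalize so only the direction matters, not the magnitude
  selected.forEach(function (p) {
    if (!p.foreground) return;
    p.x += strength * d[0] / len;
    p.y += strength * d[1] / len;
    p.z += strength * d[2] / len;
  });
}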

For no apparent reason, I decided not to put many words on it. I was actually reluctant to add “points” and “undo”, but they’re fine print in a faded color that doesn’t help the user at all.

  • Middle Click + Drag to rotate (in 3d)
  • Right Drag to deflate the shape
  • Left Drag to inflate the shape
  • Middle Scroll to zoom in/out
  • The slider bar is for controlling the size of the selection.
  • The checkbox enables/disables Undo/Redo (via Ctrl+Z/Ctrl+Y or Ctrl+Z/Ctrl+Shift+Z).

App: http://antimatter15.github.com/3d-sculpt/3d.html

Mirrored Version: http://antimatter15.github.com/3d-sculpt/mirror.html

(Not as in the other meaning for mirror, but it actually reflects the X axis so that it’s symmetrical, which is a neat feature in sculptris)

Github: http://github.com/antimatter15/3d-sculpt



1k JS 3d Function Plotter 06 August 2010

Interestingly, it does seem that a lot of the demos for the js1k competition are a whole lot more impressive than the ones for the 10k competition, despite the fact that js1k started with no prizes and 10k has a collective $10,000 worth of prizes. Though I do have several entries in both. Anyway, this is the continuation of my old 3d function plotter, but that one doesn’t work anymore, because I’m evil and hotlinked the github repo, and three.js updated in an api-breaking way.

Anyway, after you vote up http://10k.aneventapart.com/Entry/46 and http://10k.aneventapart.com/Entry/18 you should totally try out my 3d function plotter at http://js1k.com/demo/62


DOM Indexer JS Compression 05 August 2010

There are lots of compression systems for JS out there. There are the really smart JS-rewriting, magical rhino-based ones like Closure and YUI. There are the string-based ones: packer base62, huffman, and lz77. But the latter category all rely on a sort of dictionary coder, where the dictionary (or huffman tree) needs to be sent alongside the compressed content.

Unlike arbitrary strings of bits, javascript code often refers to methods on the document object model. If we were to crawl the DOM, we could get a list of DOM properties and use that as a shared dictionary which doesn’t need to be changed, sent or stored. Ever.

Example:

document.getElementById -> $dgEln

document.body.appendChild(s); -> $dbaChp(s);

And since the dictionary never needs to be stored (it’s dynamically computed from the default browser DOM), there is only a constant overhead for the indexer: the dictionary does not change size based on the size of the content.

The first prototype relied on recursively indexing properties 2-3 levels deep from window, using the ES5 Object.getOwnPropertyNames. That feature is only supported in IE9, Chrome 6, probably the latest versions of Safari, and Firefox 4. As such, it severely limits the applicability of the algorithm.

Version two switches to a simple for..in loop. The problem here is that it’s no longer possible to index certain things. Properties like Math, which are marked DontEnum internally, cannot be iterated, and thus cannot be compressed (Math.cos would have been $Mac8). However, it probably makes up for this by recursively indexing substrings of the discovered objects, so document.getElementById would be interpreted as the full document.getElementById instead of separately compressing document and getElementById. It also uses a new hash function which makes the output more readable and the length proportional to the length of the source: length becomes $l6, and document.body.removeChild becomes $dbrChp. However, this also removes the possibility of that nice fallback for things not supported by the browser (if(document.getElementsByClassName){}else{} would have been translated as d3k2.s93k and decompressed back as document.s93k on browsers that don’t support it, so browser detection gracefully degrades without “blah is not defined” errors).
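
A toy version of the for..in dictionary builder, to make the idea concrete (the hash here is invented; the real one differs):

function buildDictionary(root, prefix, depth, dict) {
  for (var key in root) {
    var path = prefix ? prefix + '.' + key : key;
    //derive a deterministic short name, e.g. document.getElementById -> $dgn
    dict[path] = '$' + path.split('.').map(function (part) {
      return part.charAt(0);
    }).join('') + path.length.toString(36);
    try {
      if (depth > 0 && root[key] && typeof root[key] === 'object') {
        buildDictionary(root[key], path, depth - 1, dict); //recurse a few levels
      }
    } catch (e) {} //some host objects throw on access
  }
  return dict;
}

//both the compressor and the browser compute the exact same table
var dict = buildDictionary(window, '', 2, {});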

Overall, what is the compression ratio? Not that great. If you do lots and lots of DOM access, then you might get a decent ratio, but the code is slow and the ratios are relatively insignificant.

Minify: http://antimatter15.com/misc/js-minify/minify.html

Maxify (decoder): http://antimatter15.com/misc/js-minify/maxify.html


JavaScript <canvas> to (Animated) GIF 23 July 2010

This is the GIF which was generated from the canvas.

This is the raw canvas element saved as a non-animated PNG

I’ve tried this before, but it didn’t work. <canvas> can’t do toDataURL('image/gif'), and the primitive GLIF library couldn’t do much, so I never had the opportunity to test the gif-merging code that I had. But I’m at it again, this time porting it from the AS3GIF library, an awesomely comprehensive bitmap-to-binary gif encoder that even supports LZW compression (and the patent has luckily expired. Yay!). AS3Gif is supposed to “play and encode animated GIFs”, but since web pages can usually natively play GIFs fine, this is only a port of the GIFEncoder portions of the library. And it works really well. The rest of this post is copied from the Github readme. Interesting how the w2_embed/anonybot embed post was a blog post turned into a readme; this is a readme turned into a blogpost. I’ll start with a link to the Github repo anyway:

http://github.com/antimatter15/jsgif

Basic Usage

Since it pretty much is GIFEncoder, you could consult the as3gif how-to page

But there are some differences, so I’ll cover it here anyway.

You first need to include the JS files. It’s probably best if you include them in this order, but it shouldn’t matter too much.

<script type="text/javascript" src="LZWEncoder.js"></script>
<script type="text/javascript" src="NeuQuant.js"></script> 
<script type="text/javascript" src="GIFEncoder.js"></script>

If you want to render the gif through an inline <img> tag or try to save to disk or send to server or anything that requires conversion into a non-binary string form, you should probably include b64.js too.

<script type="text/javascript" src="b64.js"></script>

Simple enough, right? Now to convert stuff to GIF, you need to have a working <canvas> or at least some imageData-esque array.

<canvas id="bitmap"></canvas> 
<script> 
  var canvas = document.getElementById('bitmap'); 
  var context = canvas.getContext('2d'); 
  context.fillStyle = 'rgb(255,255,255)'; 
  context.fillRect(0,0,canvas.width, canvas.height); //GIF can't do transparent so do white 
  context.fillStyle = "rgb(200,0,0)"; 
  context.fillRect (10, 10, 75, 50); //draw a little red box

Now we need to init the GIFEncoder.

var encoder = new GIFEncoder();

If you are making an animated gif, you need to add the following

encoder.setRepeat(0); //0 -> loop forever //1+ -> loop n times then stop 
encoder.setDelay(500); //go to next frame every n milliseconds

Now, you need to tell the magical thing that you’re gonna start inserting frames (even if it’s only one).

encoder.start();

And for the part that took the longest to port: adding a real frame.

encoder.addFrame(context);

In the GIFEncoder version, it accepts a Bitmap. Well, that doesn’t exist in Javascript (natively, anyway), so instead I use what I feel is a decent analogue: the canvas context. However, if you’re in a situation where you don’t have a real <canvas> element, that’s okay. You can set the second parameter to true and pass an imageData.data-esque array as your first argument. So in other words, you can do encoder.addFrame(fake_imageData, true) as an alternative. However, you must do an encoder.setSize(width, height); before you do any of the addFrames if you pass an imageData.data-like array. If you pass a canvas context, then that’s all okay, because it will automagically do a setSize with the canvas width/height stuff.

Now the last part is to finalize the animation and get it for display.

encoder.finish(); 
var binary_gif = encoder.stream().getData() //notice this is different from the as3gif package! 
var data_url = 'data:image/gif;base64,'+encode64(binary_gif); 

Docs

Each of the files exposes a single global (see, at least it’s considerate!). But since there’s three files, that means that there’s three globals. But two of them are more of supporting libraries that I don’t totally understand or care about enough to document. So I’m just gonna document GIFEncoder.

new GIFEncoder() This is the super parent function. You really don’t need the new keyword, because it’s not really even using any special inheritance pattern; it’s a closure that does some var blah = exports.blah = function blah(){ for no good reason. Anyway, it returns an object with a bunch of methods that this section will be devoted to documenting. Note that I’ve never tested more than half of these, so good luck.

Boolean start() This writes the GIF Header and returns false if it fails.

Boolean addFrame(CanvasRenderingContext2D context) This is the magical magic behind everything. This adds a frame.

Boolean addFrame(CanvasPixelArray image, true) This is the magical magic behind everything. This adds a frame. This time you need to pass true as the second argument, and then magic strikes and it loads your canvas pixel array (which can be a real array; I don’t care, and I think the program has learned from my constant apathy to also not care). But note that if you do, you must first manually call setSize, which is happily defined just below this one.

void setSize(width, height) Sets the canvas size. It’s supposed to be private, but I’m exposing it anyway. Gets called automagically as the size of the first frame if you don’t do that crappy hacky imageData.data hack.

void setDelay(int milliseconds) the number of milliseconds to wait on each frame

void setDispose(int code) Sets the GIF frame disposal code for the last added frame and any subsequent frames. Default is 0 if no transparent color has been set, otherwise 2. I have no clue what this means so I just copypasted it from the actionscript docs.

void setFrameRate(Number fps) Sets frame rate in frames per second. Equivalent to setDelay(1000/fps). I think that’s stupid.

void setQuality(int quality) Sets quality of color quantization (conversion of images to the maximum 256 colors allowed by the GIF specification). Lower values (minimum = 1) produce better colors, but slow processing significantly. 10 is the default, and produces good color mapping at reasonable speeds. Values greater than 20 do not yield significant improvements in speed. BLAH BLAH BLAH. Whatever

void setRepeat(int iter) Sets the number of times the set of GIF frames should be played. Default is 1; 0 means play indefinitely. Must be invoked before the first image is added.

void setTransparent(Number color) Sets the transparent color for the last added frame and any subsequent frames. Since all colors are subject to modification in the quantization process, the color in the final palette for each frame closest to the given color becomes the transparent color for that frame. May be set to null to indicate no transparent color.

ByteArray finish() Adds final trailer to the GIF stream, if you don’t call the finish method the GIF stream will not be valid.

String stream() Yay, the only function that returns a non-void/boolean. It’s the magical stream function, which should have been a getter (which JS does support), but I didn’t feel like making it a getter, because getters are so weird and inconsistent. Like, sure, there’s the nice pretty get thing, but I think IE9/8 doesn’t implement it because it’s non-standard or something, and replaced it with something hideously ugly. So anyway, it’s a function. It returns a byteArray with three writeByte functions that you wouldn’t care about and a getData() function which returns a binary string with the GIF. There’s also a .bin attribute which contains an array with the binary stuff that I don’t care about.

WebWorkers

The process isn’t really the fastest thing ever, so you should use WebWorkers for piecing together animations more than a few frames long. You can find the rest of the WebWorkers section on the actual readme, because the rest is just a huge block of code with comments.

http://github.com/antimatter15/jsgif


gravity 10 June 2010

Here’s my foray into the flash-esque html5 game arena. It’s a simple game, built initially in <canvas> but later scrapped for Raphael, because I guess it’s something more suitable for svg than canvas. The interface is fairly simple: you click to start the game, and your projectile is sent off at a velocity relative to the green blob in the center. Once it’s launched, the projectile is affected by the gravitational field of all the planets in some fairly pretty near-orbits. Once the projectile is in motion, clicking drops a new planet where your cursor is, and holding down makes the planet grow. The objective is to keep the projectile from accelerating off the screen.

As per Kepler’s laws, getting near a planet produces the “gravitational slingshot” effect. And since the projectile tends to fly toward the centers of planets, a magical divide-by-zero causes infinite acceleration toward doom.
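
The acceleration step is just inverse-square attraction. A sketch of it, with a softening term added (the epsilon is my addition; the game itself happily divides by zero):

function accelerate(proj, planets, grav) {
  planets.forEach(function (p) {
    var dx = p.x - proj.x, dy = p.y - proj.y;
    var r2 = dx * dx + dy * dy + 1e-6; //epsilon tames the divide-by-zero doom
    var a = grav * p.mass / r2; //inverse-square attraction
    var r = Math.sqrt(r2);
    proj.vx += a * dx / r; //accumulate along the unit vector toward the planet
    proj.vy += a * dy / r;
  });
}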

As with several other of my recent projects, it supports various configuration options via the url query string. If you don’t know how that works: basically, you append ?opt1=val1&opt2… to the url. Example: gravity2.html?grav=4, simply gravity2.html?fastest, or a combo: gravity2.html?fastest&grav=4&random. The current options are: fastest, which disables the targeting of 80fps (accepts no args); target, the target fps, which accepts one numerical argument, default 80 (it obviously cannot be used with fastest; in that case, fastest takes precedence); grav, which accepts one numerical argument, default 4, and sets the attraction of the planets (zero isn’t very fun); and random, which makes the planets start off in random places rather than the predefined magical positioning.

Feel free to post highscores in the comments.


μwave updates 03 June 2010

Microwave June 2 Search

Over a few days, things can change fairly quickly. Since the last update:

  • There have been several speed improvements.
  • A new Forum-Style blip rendering option arranges blips linearly by the time edited, with each containing a formatted quote of its parent to establish context.
  • Attachments are now fully supported, including thumbnails and download links.
  • The operations engine was totally rewritten; it uses asynchronous XMLHttpRequest and a new callback-based system, and supports batch operations (which means fewer requests and faster responses).
  • A wavelet header containing a list of all participants in the entire wave has been added, as well as an Add Participant button.
  • A specialized, extremely fast gadget viewer was added, which allows for blazingly fast rendering of two popular gadgets (and more will come); it works by bypassing the entire gadget infrastructure and loading trusted code directly inline with the DOM.
  • There is a “New Wave” button which allows people to create new waves directly from the client.
  • The OAuth backend was authenticated with Google, for more secure login transactions.
  • Blips have a new context menu which allows for features such as Delete Blip, Edit Blip and Change Title.

A full changelog can be found here.

Try it out.


2.3KB JS Library with ajax, animation, dom manipulation, json encoding 07 March 2010

vX was an old project which aimed to do lots of nifty things in the least code possible (3KB without gzip!), but developer friendliness wasn’t among its goals. So now I’ve made a new library. This one is 5KB before gzip and supports most of what vX does, but with DOM manipulation as well, in a jquery-like manner, and animations with queuing and easing. The new library (which doesn’t yet have a name) aims to be really lightweight (which really doesn’t matter much) while being more developer friendly.

For example, I can select all elements with the class "test" using .cls("test"), and make them purple by doing .cls("test").css("color","purple"), similar to jQuery. I can do .id("yay").load("lib.js") to load something using ajax. Creating elements is pretty easy too: say I want to load a random script, I can do .create("script").attr("src","url").appendTo(.tag("body")), and I can add events to something using .tag("button").index(0).on("click",function(e){alert("yay")}).
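
The trick that makes this kind of API work is just returning this from every method. A miniature version (the internals here are my guess, not the library’s actual code):

function wrap(elements) {
  return {
    css: function (prop, value) {
      elements.forEach(function (el) { el.style[prop] = value; });
      return this; //returning this is what makes the chaining work
    },
    on: function (event, handler) {
      elements.forEach(function (el) { el.addEventListener(event, handler, false); });
      return this;
    }
  };
}

function cls(name) {
  return wrap(Array.prototype.slice.call(document.getElementsByClassName(name)));
}

//e.g. cls("test").css("color", "purple")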

Here’s a nice little demo of what it can do http://jsbin.com/owuge/31

I intended to post this back on March 7th, but I was working on a rewrite of the library, which would be a tiny bit larger but implement CSS selectors and be easier to maintain.


Wave Reader 4.6 - Insanely Fast Edition 14 January 2010

Loading a 500-blip wave in Google’s GWT client takes 3:34 to get to a usable state (where the scroll bar works) on a 3GHz Core 2 Duo (whose extra core admittedly won’t do much). It also uses 972 MB of RAM.

Loading the same wave in my Wave Reader takes 678 milliseconds: a 315x speed-up. Also, my client is totally unoptimized, pure 30KB of javascript. On top of that, anyone can view waves without a google account, individual blips can be linked to, and it supports rendering almost everything Wave can: gadgets, inline replies, nesting, font color/size, italics, bold, everything you could probably expect. Interestingly, when you add an attachment to a Wave and then delete the parent blip, the attachment is still stored in the current wave state, and this client can display and link to it without using Playback. There is an option to generate plain simple HTML for turning a Wave into a standalone page or website. Private waves can be exposed read-only as a website simply by adding the gwavereader@googlewave.com username.

Using it is simple: take a Wave URL like https://wave.google.com/wave/#restored:wave:googlewave.com!w%252Br5lewFqCA, and put the wave ID (googlewave.com!w%252Br5lewFqCA) after the Wave Reader URL: http://antimatter15.com/misc/wave/read.html?googlewave.com!w%252Br5lewFqCA And magically you have a super awesome URL to link to.

You can learn some tricks on how to use it to do some more awesome things http://antimatter15.com/misc/wave/read.html?googlewave.com!w%252BrnG0vaFXA such as the before mentioned HTML generation.

Samples (some random waves):

  • http://antimatter15.com/misc/wave/read.html?googlewave.com!w%252Br5lewFqCA (New for 4.6: inline reply support)
  • http://antimatter15.com/misc/wave/read.html?Ze3l0mj0A
  • http://antimatter15.com/misc/wave/read.html?oPg9HfEXE&beta
  • http://antimatter15.com/misc/wave/read.html?Mu9eK7j2H
  • http://antimatter15.com/misc/wave/read.html?AhbL5fooD&beta
  • http://antimatter15.com/misc/wave/read.html?googlewave.com!w+UDMZOGpSG


Pure JavaScript HTML5 <canvas> to (Animated) GIF Conversion 03 January 2010

Based on as3gif, ported by antimatter15.

This is the raw canvas element saved as a non-animated PNG
This is the GIF which was generated from the canvas.

AS3GIF lets you play and encode animated GIFs with ActionScript 3.

Since web pages can usually natively play GIFs fine, it’s only a port of the GIFEncoder portions of the library.

Basic Usage

Since it pretty much is GIFEncoder, you could consult the as3gif how-to page

But there are some differences so I’ll cover it here anyway.

You first need to include the JS files. It’s probably best if you include it in this order, but it shouldn’t matter too much.

<script type="text/javascript" src="LZWEncoder.js"></script>
<script type="text/javascript" src="NeuQuant.js"></script>
<script type="text/javascript" src="GIFEncoder.js"></script>

If you want to render the gif through an inline <img> tag or try to save to disk or send to server or anything that requires conversion into a non-binary string form, you should probably include b64.js too.

<script type="text/javascript" src="b64.js"></script>

Simple enough right? Now to convert stuff to GIF, you need to have a working <canvas> or at least some imageData-esque array.

<canvas id="bitmap"></canvas>
<script>
  var canvas = document.getElementById('bitmap');
  var context = canvas.getContext('2d');
  context.fillStyle = 'rgb(255,255,255)';
  context.fillRect(0,0,canvas.width, canvas.height); //GIF can't do transparent so do white

  context.fillStyle = "rgb(200,0,0)";  
  context.fillRect (10, 10, 75, 50);   //draw a little red box

Now we need to init the GIFEncoder.

  var encoder = new GIFEncoder();

If you are making an animated gif, you need to add the following

  encoder.setRepeat(0); //0  -> loop forever
                        //1+ -> loop n times then stop
  encoder.setDelay(500); //go to next frame every n milliseconds

Now, you need to tell the magical thing that you’re gonna start inserting frames (even if it’s only one).

  encoder.start();

And for the part that took the longest to port: adding a real frame.

  encoder.addFrame(context);

In the GIFEncoder version, it accepts a Bitmap. Well, that doesn’t exist in Javascript (natively, anyway), so instead I use what I feel is a decent analogue: the canvas context. However, if you’re in a situation where you don’t have a real <canvas> element, that’s okay. You can set the second parameter to true and pass an imageData.data-esque array as your first argument. So in other words, you can do encoder.addFrame(fake_imageData, true) as an alternative. However, you must do an encoder.setSize(width, height); before you do any of the addFrames if you pass an imageData.data-like array. If you pass a canvas context, then that’s all okay, because it will automagically do a setSize with the canvas width/height stuff.

Now the last part is to finalize the animation and get it for display.

  encoder.finish();
  var binary_gif = encoder.stream().getData() //notice this is different from the as3gif package!
  var data_url = 'data:image/gif;base64,'+encode64(binary_gif);

Docs

Each of the files exposes a single global (see, at least it’s considerate!). But since there’s three files, that means that there’s three globals. But two of them are more of supporting libraries that I don’t totally understand or care about enough to document. So I’m just gonna document GIFEncoder.

new GIFEncoder() This is the super parent function. You really don’t need the new keyword, because it’s not really even using any special inheritance pattern; it’s a closure that does some var blah = exports.blah = function blah(){ for no good reason. Anyway, it returns an object with a bunch of methods that this section will be devoted to documenting. Note that I’ve never tested more than half of these, so good luck.

Boolean start() This writes the GIF Header and returns false if it fails.

Boolean addFrame(CanvasRenderingContext2D context) This is the magical magic behind everything. This adds a frame.

Boolean addFrame(CanvasPixelArray image, true) This is the magical magic behind everything. This adds a frame. This time you need to pass true as the second argument, and then magic strikes and it loads your canvas pixel array (which can be a real array; I don’t care, and I think the program has learned from my constant apathy to also not care). But note that if you do, you must first manually call setSize, which is happily defined just below this one.

void setSize(width, height) Sets the canvas size. It’s supposed to be private, but I’m exposing it anyway. Gets called automagically as the size of the first frame if you don’t do that crappy hacky imageData.data hack.

void setDelay(int milliseconds) the number of milliseconds to wait on each frame

void setDispose(int code) Sets the GIF frame disposal code for the last added frame and any subsequent frames. Default is 0 if no transparent color has been set, otherwise 2. I have no clue what this means so I just copypasted it from the actionscript docs.

void setFrameRate(Number fps) Sets frame rate in frames per second. Equivalent to setDelay(1000/fps). I think that’s stupid.

void setQuality(int quality) Sets quality of color quantization (conversion of images to the maximum 256 colors allowed by the GIF specification). Lower values (minimum = 1) produce better colors, but slow processing significantly. 10 is the default, and produces good color mapping at reasonable speeds. Values greater than 20 do not yield significant improvements in speed. BLAH BLAH BLAH. Whatever

void setRepeat(int iter) Sets the number of times the set of GIF frames should be played. Default is 1; 0 means play indefinitely. Must be invoked before the first image is added.

void setTransparent(Number color) Sets the transparent color for the last added frame and any subsequent frames. Since all colors are subject to modification in the quantization process, the color in the final palette for each frame closest to the given color becomes the transparent color for that frame. May be set to null to indicate no transparent color.

ByteArray finish() Adds final trailer to the GIF stream, if you don’t call the finish method the GIF stream will not be valid.

String stream() Yay, the only function that returns a non-void/boolean. It’s the magical stream function, which should have been a getter (which JS does support), but I didn’t feel like making it a getter, because getters are so weird and inconsistent. Like, sure, there’s the nice pretty get thing, but I think IE9/8 doesn’t implement it because it’s non-standard or something, and replaced it with something hideously ugly. So anyway, it’s a function. It returns a byteArray with three writeByte functions that you wouldn’t care about and a getData() function which returns a binary string with the GIF. There’s also a .bin attribute which contains an array with the binary stuff that I don’t care about.

WebWorkers

The process isn’t really the fastest thing ever, so you should use WebWorkers for piecing together animations more than a few frames long.

I haven’t actually tried it yet, but here’s some incomplete mock-JS which should be able to do stuff once you add the boring parts like serializing and deserializing the content (actually, I have most of the serializing done, but you have to deserialize that, and that’s really the boring part).

var frame_index,
    frame_length,
    height,
    width,
    imageData; //get these from the onmessage payload

var encoder = new GIFEncoder(); //create a new GIFEncoder for every new job
if(frame_index == 0){
  encoder.start();
}else{
  encoder.setProperties(true, true); //started, firstFrame
}
encoder.setSize(width, height); //setSize takes (width, height)
encoder.addFrame(imageData, true);
if(frame_length == frame_index){
  encoder.finish();
}
postMessage(frame_index + encoder.stream().getData()) //on the page, search for the GIF89a to find the frame_index


//on the handler side:
var animation_parts = new Array(frame_length);

var worker = new Worker('blahblahblah.js'); //the standard API is Worker, not WebWorker
worker.onmessage = function(e){
  //handle stuff, like parsing the frame_index out of e.data
  animation_parts[frame_index] = frame_data;
  //check when everything else is done and then do animation_parts.join('') and have fun
};
var imdata = context.getImageData(0,0,canvas.width,canvas.height);
var len = canvas.width * canvas.height * 4;
var imarray = [];
for(var i = 0; i < len; i++){
  imarray.push(imdata.data[i]); //read from the .data array, not the ImageData object itself
}

worker.postMessage(frame_index + ';' + frame_length + ';' + canvas.height + ';' + canvas.width + ';' + imarray.join(','))


Wave Unread Navigator 13 December 2009

I made an extension which shows a green indicator (like gmail’s) on the side whenever there are unread blips outside the viewport. It allows you to quickly and effectively use keyboard navigation without blindly checking above and below, or generally scroll without needing to guess the position.

It comes as a Chrome Extension and also as a Greasemonkey Userscript.


ShinyTouch/JS 28 August 2009

Yay for yet another demo that strives to mix and mash almost everything HTML5-related! ShinyTouch in JS dumps the stuff from a <video> tag with ogg-encoded video (well, almost all video from linux is ogg encoded, so it’s just whatever format I got first from cheese). It gets dumped into <canvas>, and getImageData does its magic.

Interestingly, if you don’t use the video and just take data from a raw image, you get upwards of 125fps on V8. Adding the video, it ceases to work on Chromium (maybe a linux thing? this tells me it’s just linux, but you can never be so sure).

//At this point, run away as the algorithm gets messy and hackish

So the thing just searches from right to left, top to bottom, within the quad. When it finds a column of something that fits the rgb range of the finger and is larger than a certain threshold, it checks for a reflection from that point. If it detects a reflection, then yay! It throws the data at the perspective warper (based on a python one, which is based on a C# one, and though it would probably be easier to port from C# to JS, making long chains of derivative work is fun). If there wasn’t a reflection, then it logs that, and if that number is larger than some other threshold, it kills the scanning and goes on with its life. The reflection algorithm just takes the point 5 pixels to the right (assuming that would be a reflection if there was one) and a point 15px above and 5px to the left (nasty stuff), and takes the hue value from their RGB values. It takes the absolute value of the difference of the hue values multiplied by 100 (or 200 in the python version) and compares it with a preset configuration variable.
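
The reflection check boils down to something like this (pixelAt and the hue conversion are reconstructions, not the actual code):

function pixelAt(img, x, y) {
  var i = (y * img.width + x) * 4;
  return [img.data[i], img.data[i + 1], img.data[i + 2]];
}

function rgbToHue(rgb) {
  var r = rgb[0] / 255, g = rgb[1] / 255, b = rgb[2] / 255;
  var max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
  if (d === 0) return 0; //grey: hue is undefined, call it 0
  var h;
  if (max === r) h = (g - b) / d;
  else if (max === g) h = (b - r) / d + 2;
  else h = (r - g) / d + 4;
  return ((h + 6) % 6) / 6; //normalized to 0..1
}

function isReflection(img, x, y, threshold) {
  var candidate = rgbToHue(pixelAt(img, x + 5, y));      //the assumed reflection point
  var reference = rgbToHue(pixelAt(img, x - 5, y - 15)); //the point above and to the left
  return Math.abs(candidate - reference) * 100 < threshold;
}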

So that horrible algorithm was just whatever came to my little totally untrained mind first. But it works semi-decently, at least for me. You can hopefully see how nasty its inner workings are, and maybe that will inspire people to clean it up. It’s quite a bit more readable than the Python version and only 200 lines of JS, so it won’t be too hard to understand.

But HTML5 has no video capture thing for webcams, and my webcam doesn’t work with flash, so I can’t use that canvas<-flash webcam bridge I built, uh, almost 2 years ago. So for now you just get to gaze at my finger moving for like 20 seconds!

http://antimatter15.com/misc/shiny/shinytouch.html


Freemovie/JS Pure Javascript SWF Generator 17 August 2009

I’m making a crude python script that translates PHP to JS rather hideously. It probably will not work on any other codebase; it was a script quickly hacked together for one purpose.

Freemovie is:

FreeMovie is an SWF generator library written in PHP and ported to Ruby. FreeMovie can be used to develop Web and desktop applications.

Speaking of the Ruby port, I couldn’t find it at first (I thought it might be useful somewhere), but I eventually found it in the CVS; it’s really incomplete compared to the PHP version.

The machine-translated code (not entirely autogenerated; I wrote a line or two of it) is not too hideous. The tabbing is slightly off, but it’s at least mostly readable. It was a lot worse 2 hours ago (half of the lines had indentation, others had none, and there were debugging comments everywhere saying useless and distracting things).

The translator is only 107 lines of (hideous code, though the language is beautiful; I guess if you loved JS enough, you could try running the program through skulpt) python (+ 20 or so in another file to change chr() to String.fromCharCode, etc). After that, it uses 6 PHP compatibility functions and 5 from the PHP.js project to cover the features that I’m too lazy to put in the compiler or that are just not features in JS.

http://antimatter15.com/misc/freemovie/js/demo/fm-demo-0.htm

The above demo generates a flash image entirely client-side, though the resulting base64-encoded data is sent to be decoded on the server, since you can’t load a SWF from a data-url. If someone finds out how, it would be cool to tell me.

A big issue, though, is that for some odd reason half the shapes don’t work. The ones in the demo work, but all filled shapes and the circles/arcs do not.


Distributed Computing Take III 31 December 2008

I dunno why, but I’m revisiting this. I was trawling across Wikipedia one day and got to the article about Pi. I had tried distributing Pi a while ago, actually, before I did the hashes, but I never ended up implementing it because it didn’t seem feasible: all the algorithms I encountered (or tried porting) required lots of memory, something very hard to distribute in this scenario. But this time, I found these. Looking through them, and googling in the process, I found http://www.omegacoder.com/?p=91 and ported it over to Javascript. It was relatively slow compared to the SuperPi implementation in Javascript, but it was easily distributed.

One problem, though, is that it gets slower with every iteration (to find the next block of digits). Finding .141592653 will be roughly 20ms faster than finding the next 9 digits (it processes in blocks of 9). Not only does each block take longer, but the computation occupies 100% of the CPU, and it pops up that ever-annoying “This script may make your computer non responsive” window. So I implemented this pattern to keep it from locking up any browser other than Chrome (and possibly WebKit Nightly).
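
The general shape of that pattern: do a small slice of work, then yield back to the browser with setTimeout. This is a reconstruction, with a hypothetical doOneIteration step standing in for the real inner loop:

function computeInChunks(state, done) {
  var start = +new Date();
  while (+new Date() - start < 50) { //work in roughly 50ms slices
    if (!doOneIteration(state)) return done(state); //hypothetical step; false means finished
  }
  setTimeout(function () {
    computeInChunks(state, done); //yield to the UI, then pick up where we left off
  }, 0);
}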

Still, it would take up 100% of the CPU. I ran it overnight and got to digit 17,000.

Eventually, a single iteration would take about half an hour (at the 20,000th digit). With web-based distributed computing, I can’t rely on much more time than what Google Analytics reports to be 00:02:24 (my average time on site). And that’s half an hour on a 3GHz Intel Core 2 Duo (it’s dual core, but the script is single-threaded).

I then split the function into smaller parts: the main function was split up, and the loops were divided across users. Now it can scale easily. It uses virtually no visible CPU and fits well into that 2-minute timeframe.

Try it out here, but don’t stay too long, because I only set there to be 500 “jobs”.


Controlling A Native App from a Javascript Desktop 29 August 2008

Well, I worked on a sort of VNC-like solution for controlling native desktop applications from a remote PC. It’s an interesting concept, but I settled for something less than easy to use and less than really feasible. It’s a proof of concept, and it’s likely I won’t work on it again (like ForkSwif).

It is an application (module) on a hacked Ext 2.0 Desktop example that uses Ajax (polling) to query a local PHP proxy, which queries a remote desktop. The remote desktop is running some software (powered by .NET, sadly…) that captures the window’s contents and does a diff to see if there are modifications and where (it only sends updates for changed parts of the screen, sorta like VNC). It base64-encodes the image and sends it over HTTP to the proxy, which sends it to the javascript client.
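
On the javascript side, the screen updates come from a simple polling loop against the PHP proxy. Roughly like this (applyUpdates is a made-up handler; the proxy name and port come from the sample config below):

setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'gdat.php?url=' + encodeURIComponent('http://localhost:12345/'), true);
  xhr.onload = function () {
    //the response carries base64-encoded images of just the changed screen regions
    applyUpdates(xhr.responseText);
  };
  xhr.send();
}, 1000); //matches uinterval: 1000 in the sample config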

The client can send events to be replayed on the remote desktop; currently only left mousedown and mouseup (so, basically, only clicking), but adding keyboard support should be easy enough.

I imagine that a more feasible option would be to create a javascript X11 client, taking proxied connections to an X session under SSH. That would handle window-specificness better and be an overall more stable platform, and you could also run multiple applications simultaneously on the desktop.

here’s an early screenshot:

Controlling a native desktop app from a javascript desktop (early build)

Files here:

http://antimatter15.110mb.com/misc/coolstuff.zip

Extract desktop2 to your Apache server, start Windows Calculator (or another app of your choosing). Run Screen.exe, type the app title, press “Get Handle” and check the “Run Server” box. Navigate to the desktop.html (on your PHP-enabled apache), and start the “NV Window” app.

To be able to control the app, you have to set it to your network ip (not localhost) on a VM or a different computer. and configure sample.js at line 191.

proxy: "gdat.php?url=",
updateurl : "[http://localhost:12345](http://localhost:12345/)",
baseurl : "[http://localhost:12345/base](http://localhost:12345/base)",
showimg : false,
uinterval: 1000,
updater: null,
xoffset: -8,
yoffset: -28,

Change localhost to the server’s IP, and tick “Control Desktop”.

Note that you need Firebug.

If I can think of a name for the app, I’ll be sure to create a google code project for it. So if anyone wants the source, think of a name for it.


New File Format 17 November 2007

I’m working on a better save/preview file format, even merging the save and preview formats into one.

It’s just a string separated by “;;”: before the separator is a standard JSON array, and after it is the standard XML compiler stuff. The JSON has info like framerate, height, width, current frame, tweens, keyframes, animation length, comments?, rating?, size?, etc.
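
Loading it back is just a split and a JSON.parse; a sketch, assuming saved holds the combined string:

var parts = saved.split(';;');
var meta = JSON.parse(parts[0]); //framerate, height, width, current frame, etc.
var xml = parts.slice(1).join(';;'); //the standard XML compiler stuff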

The javascript save/load implementation is done so far.