
August Progress Report

It must be infuriating to be in a situation where there’s a clear problem, and all the obvious remedies continually fail to produce any legitimate result. That’s what this blog is like: every month rolls by with some half-baked ideas partially implemented and barely documented. And on the last day of the month, there’s a kind of panicked scramble to fulfill an entirely arbitrary self-imposed quota just to convince myself that I’m doing stuff.

Anyway, it’s pretty rare for me to be doing absolutely nothing, but mustering the effort to actually complete a project to an appreciable extent is pretty hard. This might be indicative of a shift in the type of projects that I try to work on: they’re generally somewhat larger in scope or otherwise more experimental. And while I may have been notorious before for not leaving projects in a well-documented and completed state, these new ideas often languish much earlier in the development process.

There’s always pressure to present things only when they are complete and presentable, because after all timing is key and nothing can be more detrimental to an idea than ill-timed and poor execution. But at the same time, I think the point of this blog is to create a kind of virtual paper trail of an idea and how it evolves, regardless of whether or not it falters in its birth.

With that said, I’m going to create a somewhat brief list of projects and ideas that I’m currently experimenting with or simply have yet to publish a blog post for.

  • One of the earlier entries in the backlog is an HTML5 Scramble With Friends clone, built to be highly performant on mobile with touch events and CSS animations while supporting keyboard-based interaction on desktop. I’ve always intended to build some kind of real-time multiplayer server component so that people could compete against one another. Maybe at some point it’ll be included as a kind of minigame within Protobowl.
  • The largest project is definitely Protobowl, which has just recently passed its one year anniversary of sorts. It’s rather odd that it hasn’t formally been given a post on this blog yet, but c’est la vie. Protobowl is hopefully on the verge of a rather large update which will add several oft-requested features, and maybe by then I’ll have completed something which is publishable as a blog post.
  • Font Interpolation/Typeface Gradients. I actually have no idea how long this has been on my todo list (years, no doubt), but the concept is actually rather nifty. Attributes like object size or color can be smoothly interpolated in the form of something like a gradient. The analogue of this kind of transition applied to text would be the ability to type a word whose first letter is in one font and whose last letter is in another, with all the intermediate letters some kind of hybrid. I never did get very far in successfully implementing it, so it may be a while until this sees the light of day.
  • I’ve always wanted to build a Chrome extension with some amount of OCR or text detection capabilities so that people could select text embedded within an image as if it weren’t just an image. At one point I narrowed the scope of this project so that the OCR part wasn’t even that important (the goal was then just a web-worker-threaded implementation of the stroke width transform algorithm and cleverly drawing some rotated boxes along with mouse movements). I haven’t had much time to work on this so it hasn’t gone far, but I do have a somewhat working prototype somewhere. This one, too, is several years old.
  • In the next few days, I plan on publishing a blog post I’ve been working on which is something of a humorous satire on some of the more controversial issues which have arisen this summer.
  • And there are several projects which have actually gotten blog posts, which weren’t themselves formal announcements so much as a written version of my thinking process behind them. I haven’t actually finished the Pedant, even though the hardware is theoretically in something of a functional state (I remember that I built it with the intent that it could be cool to wear around during MIT’s CPW back in April, but classes start in a few days and it hasn’t progressed at all since). Probably one of the most promising ideas is the kind of improved, vectorized and modern approach to video lectures, but that too needs work.
  • I’m building the website for a local charity organization which was started by a childhood friend, and maybe I’ll publish a blog post if that ever gets deployed or something.

Posted in Meta.


Adaptive Passphrase Length

Longer passwords are more secure. But if you’ve ever been on a website or computer network which mandated that you change your password every 30 days to a string which you haven’t used in the past 180 which could not contain any part of your username or an English language word without being liberally adorned with num3r1cal and sy#b@!ic substitutions and aRbiTRarY capitalization, you should be able to understand the pain of long and arcane passwords.

At time of writing, I’m a high school senior with something on the order of three weeks left in school. I can’t help but feel a tad wistful about the four years I’ve spent roaming these red-tinted halls of learning. Many of the particularly bittersweet memories include PE, a mandatory elective for my freshman and sophomore years. A bizarre policy forbade the possession of backpacks in the locker rooms, leading to the acquisition of the skill of opening rotary combination padlocks, a skill which I just as quickly dispossessed myself of, having absolutely no need for a locker after that.

During that time, I read about a pretty nifty hack involving Master Locks. If you entered the combination, opened the lock, closed it again, and left it on the last digit, you could open it again by rotating clockwise by fifteen and rotating back to the final digit (to fight the possibility of attackers who are aware of the exploit, you could turn the dial clockwise some number less than 15 after locking it, so that you would only have to remember the final digit). It was this trick that I used to open my locker just about every other school day for those two years: a reduction in security, no doubt, but a great convenience. I’ve been thinking over the design of the lock recently, and I’m not sure if it’s better classified as a bug or a feature. Conventional wisdom dictates that this substantial reduction in the key space is a rather severe design flaw. The fact that many newer locks are immune to this trick (a lot of locks will snap the dial forward when closed so that it is physically impossible to land on the final digit) seems to support the notion that this was an undesired consequence of locksmithery rather than some kind of delightful accident.

But the more I think about it, the more this bug seems like a feature. Perhaps the loss of security isn’t nearly as egregious as it appears, and this little shortcut is actually a delightful little accident. Nearly every school day, I’d only have to take four seconds to open my locker because of this handy trick, except on those occasions when some kid before me decided to walk around the lockers to press their heads against the metallic locks to spin them and pretend they can lock-pick, in which case I’d have to enter the whole combination to open sesame. That is, you only need to know one number to open it unless someone earlier guessed wrong, in which case you’d have to enter three numbers.

The next story is about an iPhone.

I playfully grab at a friend’s phone for the second time in a week. A friend takes a selfie using the newly emancipated phone, with a rather contorted grin. I dutifully replace her lock screen photo, which had previously been an aerial shot of her grassy, picturesque college campus, with my friend’s contorted close-up selfie. She learns to be a bit more protective of her phone, quickly issuing a light tap on the power button to reengage the lock screen, a four-headed Cerberus with an exponential-backoff growl. But this doesn’t mean anything to the enterprising background changer, because the unlocking gesture happens all the time and it’s fairly easy to figure out what numbers someone’s typing.

This is a fairly innocent class of authentication breach, but it should be fairly unnerving given the extent to which our lives and personal information are accessible. Anyway, these two stories led me to think about ways to both simplify the process of entering passcodes and passphrases while simultaneously hardening against some general attacks, in particular replay attacks (and to a lesser extent, man in the middle).

The premise is simple enough: As you’re typing a password, letter by letter and stroke by stroke, each character is processed (either locally, or sent to a server). As soon as the authorization engine is comfortable with the amount of data provided, it sends a signal so that no more data is transmitted, and that authorization is complete.

In the same manner as the old Master Lock, on a particularly auspicious occasion one might only need one or two symbols in order to prove their identity. However, if the wrong digit is entered on the first try, the subsequent attempt must then obviously come with increased difficulty. Likewise, it may be advantageous to have a low initial difficulty if the login scenario seems very ordinary, which can be determined by any number of variables, including but not limited to the current IP address, the location as determined by GPS or other means, stylometrics such as the rate and delay between typing individual letters, the number of simultaneous concurrent sessions, and how the last session ended (by timeout or by an active logout). But an important part of the system is that the difficulty has to be non-deterministic, because if we assume the attacker knows the system and can manipulate the variables at play, there is still a last vestige of hope that the attacker’s first guess is wrong (after that the difficulty climbs and the attacker is no longer really a problem).

Note that the server would not specifically reveal that a password is incorrect (until it’s manually submitted, like in existing systems). It would only indicate when the received content is sufficient.
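To make the flow concrete, here’s a toy sketch in JavaScript (the names and the difficulty policy are my own invention, not a description of any real system):

```javascript
// Toy incremental checker: consume one character at a time and signal
// "sufficient" once enough of a correct prefix has arrived. A wrong
// character silently raises the bar (a real system would then fall back
// to requiring the full passphrase plus an explicit submit).
function makeChecker(secret, initialDifficulty) {
  let typed = '';
  let required = initialDifficulty; // in practice: randomized per attempt
  return function feed(ch) {
    typed += ch;
    if (!secret.startsWith(typed)) required = Infinity; // never say why
    return secret.startsWith(typed) && typed.length >= required
      ? 'sufficient'
      : 'keep typing';
  };
}
```

A real deployment would randomize `required` per attempt and fold in the contextual signals described above before choosing it.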

In a keyboard input scenario, for a shell prompt or for generic password entry, it’s probably important to keep the text box open until the user acknowledges that they have been authorized. However, in case the connection has been compromised by a non-interactive eavesdropping man in the middle with the intent to replay the password, the client should stop sending additional letters as soon as the server signals that it’s OK. This gives the server the option of never revealing to the user whether a response is sufficient, essentially forcing the user to enter the entire password and press submit.

Like how some safes or button pads have duress codes that silently call the police when triggered, the system could ignore invalid characters prepended to the actual password, while subsequently blacklisting that given prefix and raising the difficulty.

Of course, this isn’t a complete solution. There needs to be some segmentation of access based on current authorization levels. For instance, checking whatever new mail or Facebook statuses exist probably does not warrant much authentication friction, but combing through old personal documents and emails, installing root software, or changing system settings should probably require higher levels. This scheme can help by keeping a unified password of sorts across all levels, but it isn’t itself a complete solution. Two-factor authentication is also important, in case a single device is compromised or some part of the scheme is untrusted.

I think it’d actually be pretty cool to see something like this implemented, because it might serve as a genuinely useful feature, enabling a continuously variable spectrum between situations where security is a necessity and those where excessive user friction is detrimental.

Posted in Passwords, Security.


Array Addition and Other Fun Javascript Hacks


It’s times like these that I’m reminded of my favorite childhood TV show (actually it’s kind of a close contest between this and Spongebob), Mythbusters, and that perennial warning: “don’t try this at home”. But this isn’t actually dangerous in any physically life-damaging way, it’s just dangerous in that it’s a very bad idea. The whole thing started with a friend who was irked by a few Javascript oddities, one of them being the inability to add arrays or tuples of numbers.

I actually came across the idea some time ago while reading about cross-site JSON exploits. The premise for adding arrays is rather simple: since the language only supports subtracting and adding numbers, the JavaScript engine will try to convert an array to a number by calling its valueOf method. There really isn’t a numeric representation of an array, so valueOf usually just returns NaN, which isn’t tremendously useful. However, we can override the valueOf function to return any number we want.

We can create a valueOf function which assigns numbers to arrays according to a specific pattern, such that the resultant number can reveal (without much ambiguity) which arrays were involved, and whether they were added or subtracted. There are probably a myriad of ways to construct numbers like that, but one of the simplest (or at least the first one that I could come up with) is to raise some given radix (at least 3) to a unique number that increments every time valueOf is called. You can understand it pretty simply in base 10.

Let’s say our first array is [1, 2, 3] and our second array is [5, 4, 3]. Every time the instance’s valueOf is called, we plop it onto a temporary global array and record the index. In this case, let’s assign the first array index 1 and the second array index 2. If we raise the radix 10 to those indices, we get the respective unique numbers: 10 and 100. Now these numbers can be added or subtracted, leading to 110, 90, or -90 (or left unaltered, leaving 10 and 100). To find out what numbers and operations were involved, we can first add 1000, which is 10 raised to one more than the highest index, which has the useful effect of making everything positive. Now we’re left with 1110, 1090, 0910, 1010, and 1100. We can ignore the first and last digits, and each remaining digit can be either a 1, a 0, or a 9 (if it’s anything else, this is just a number, not the result of a magical array addition). A 1 tells us the number was added, a 0 tells us the number isn’t used, and a 9 tells us that the number was subtracted.
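That scheme, condensed into code (simplified so that each array is assigned its index when wrapped, rather than on each valueOf call; the `magic` and `decode` names are mine):

```javascript
// Registry of arrays that have been given magic valueOf methods.
const registry = [];

function magic(arr) {
  registry.push(arr);
  const index = registry.length;           // 1-based slot in the registry
  arr.valueOf = () => Math.pow(10, index); // 10, 100, 1000, ...
  return arr;
}

// Decode a number back into the signed combination of arrays it encodes.
function decode(n) {
  const shifted = n + Math.pow(10, registry.length + 1); // force positive
  const digits = String(shifted).split('').reverse();
  const parts = [];
  for (let i = 1; i <= registry.length; i++) {
    const d = Number(digits[i]);
    if (d === 1) parts.push({ arr: registry[i - 1], sign: 1 });
    else if (d === 9) parts.push({ arr: registry[i - 1], sign: -1 });
    else if (d !== 0) return null;         // just an ordinary number
  }
  return parts;
}

const a = magic([1, 2, 3]);
const b = magic([5, 4, 3]);
const parts = decode(a + b);               // a + b evaluates to 110
const sum = a.map((_, i) =>
  parts.reduce((t, p) => t + p.sign * p.arr[i], 0)); // [6, 6, 6]
```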

This leaves a weird problem though, which is how you’re going to convert a number back into that array transparently. And this segues into the more atrocious segment of this hack. If and when Skynet and the other assorted malevolent (uh… I mean, misunderstood) artificial intelligences develop their courts, the fembots and gentledroids of the jury will no doubt consider me guilty of whatever felonious usercrimes may exist. They’ll consider me an equal of Norman Bytes, butchering idiomatic javascript in the shower.

What exactly makes it such a heinous offence, you may find yourself asking. The answer is simple: the unholy matrimony of with and Object.defineProperty (also, keep in mind that it isn’t DOMA’s fault).

The great thing about Object.defineProperty is that it lets you get in the middle of the whole variable assignment process. The problem is that this only works with object properties, not top-level variables. But Javascript has a nice (read: wretched) with statement which lets you treat variables as if they were object properties. This still leaves a slight problem, because there’s no way to define a catch-all property. And if nothing so far has wrought utter terror upon your soul, this last critical part may very well do exactly that. Since any variable referenced in the function must appear there by name, we can use the Function toString method to decompile the source and use a simple regular expression (any symbol which starts with A-z, $, or _, followed by any of those or a number) to extract candidate variable tokens, and for each one, we can define a getter and setter on a new object which is subsequently passed as the first argument to the function whose first enclosed statement is a with.
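Stripped of the `with` statement (which strict mode forbids, so the scope object is accessed explicitly here), the token-extraction and accessor-definition half of the trick looks roughly like this; the `instrument` name is mine:

```javascript
// Decompile a function, guess at its identifiers, and build a scope
// object whose getters and setters intercept every access to them.
function instrument(fn, onSet) {
  const tokens = new Set(fn.toString().match(/[A-Za-z$_][\w$]*/g) || []);
  const values = {};
  const scope = {};
  for (const name of tokens) {
    Object.defineProperty(scope, name, {
      get() { return values[name]; },
      set(v) { onSet(name, v); values[name] = v; }, // interception point
    });
  }
  return { scope, run: () => fn(scope) };
}

// In the real hack the function body would be `with (scope) { ... }`,
// so bare identifiers would hit these accessors; here we go through
// the scope parameter explicitly.
const log = [];
const { run } = instrument(function (s) {
  s.x = 5;
  s.y = s.x + 2;
  return s.y;
}, (name, v) => log.push(name + '=' + v));
const result = run(); // 7, with log ['x=5', 'y=7']
```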

The power to intercept all function calls, variable declarations and retrievals then comes from recursively creating another fake object filled with getters and setters whenever a property is accessed. For method calls on primitive types like strings and numbers, we do the same type of sorcery, but directly on their respective prototype properties. Whenever a function is passed a number which happens to match the pattern for an array addition or subtraction, we can passively intercept and substitute its value. Any string which matches a certain pattern of CSS selectors can be transparently substituted with the NodeList which results from document.querySelectorAll. And we can change the variable declarations of for loops such that array values are used instead of keys.

And now, four minutes before the end of this month, I’ve successfully yet again managed to eke out a blog post to fulfill my quota. And I guess I don’t have a Humane Society to prove that no humans were harmed in the making of this blog post, but how bad could it possibly be? It’s only 150 lines.

Posted in Fun.


Automatically Scanning on Lid Close with Arduino


Before the 29th of January, I had a rather irritatingly Rube Goldbergian scheme for using my CanoScan LiDE 80. It didn’t have native Linux support, so the only way I could scan documents was through a Windows XP (technically MicroXP) installation inside a VMware virtual machine, extant for the sole purpose of indulging my blue-moon scanning requirements. But after that fateful day, I had a new scanner which, in its matte black glory, could reasonably produce bitmap reproductions of pieces of paper on its surface, notably sans the convoluted meta-computing.

Ubuntu ships with a nice Simple Scan app, which, as the name might suggest, allows you to scan things rather simply. But for some reason I thought that getting a scanner was a good reason to start scanning every one of the numerous pieces of paper I get from school. Naturally, a problem arises from the fact that I have to open up an app (usually with the absurdly slow Unity launcher) and then use my mouse to click buttons, all of which happens on top of the already undue burden of opening the scanner lid to put papers inside.

Obviously this system would be untenable for a mass scanning addiction. Maybe a reasonable person would at that point take a deep introspective look at the end goal of such an endeavor: the 40-odd gigabytes a year needed to archive some 8,000 school papers (assuming the files are stored at a fairly reasonable 200dpi as JPEGs averaging around 5MB apiece), most of which are illegible chicken scratch anyway, plus the ten minutes or so per day at minimum to shuffle papers around. A reasonable person would weigh the cost against the relative benefit of such a scheme and decide that this really isn’t worth it.

But of course, I couldn’t be so overtly dismissive if I had that same reasonable sense. Obviously this setback was not a serendipitous invitation from providence to investigate the social, ethical and utilitarian merits of the goal (I say this of course with a great deal of sarcasm, but there are many less trivial opportunities that technologists ignore). Instead, ignoring whether or not it was a good idea in the first place, I decided to fix it.

The first step was trying to get the buttons to work. Canon scanners, and presumably all other scanners, have little buttons on the device that you can push to accomplish certain easy tasks. On Windows, it’s pretty cool because pressing one will automatically launch whatever app it is that scans things and start it automatically. There’s a set of four: PDF, Auto, Copy and Email, pretty self-explanatory at that (and if that fails to suffice, they’re coupled with cute monochrome icons).

The problem is that it doesn’t work automagically, at least on Linux. I found a little project called scanbuttond, or something to that effect, and tried patching it in some way that would fix things, but after investing an hour or so, I pretty much gave up.

The thing is that having buttons isn’t even the most efficient system for document scanning. So what physical cues are there to indicate intent to scan? The first and most obvious one is the presence of a document in the bay. It’s a bit difficult to divine when a document is placed on the surface, though it’s probably possible with some fancy arrangement of lasers or light sensors exploiting FTIR, or precisely measuring the weight of the entire scanner, or monitoring the butterflies in proximity for FFT’d wing flaps. Plus, chances are that immediately after a document’s been placed, you probably want some time to adjust the paper’s alignment and get your fingers off the paper (before the intense searing beam of panchromatic LED light reduces your fingertips to burnt crisps).

For flat documents, most of the time you close the lid after making adjustments to ensure proper alignment, so that the foam white lid insert will press down on the document (flattening out the wrinkles endemic to a haphazard backpack-paper-shoving technique). The only problem with this scheme is if you’re scanning some thicker document (a book, whatever that is) or don’t feel like lowering the lid (now, why would you want to do that?).

So with a piece of folded-up aluminum foil and foam and some judicious application of tape (in a setup eerily similar to that door alarm I built from a kit from the Discovery Channel store when I was 7), I crafted a little switch mechanism that activates whenever the lid is closed. The two prongs of the foam switch were attached to a voltage divider on an Arduino Leonardo board so that it could measure when the lid closed or opened. A tiny piece of code ran on the Arduino which would do exponential smoothing on the voltage between the pins and send a signal over serial.
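The original smoothing code was about twenty lines of Arduino C; the gist of it, sketched here in JavaScript with guessed constants:

```javascript
// Exponentially smooth the (normalized) analog reading and emit an
// event on the open/close transition. The real version wrote a byte to
// serial instead of returning an event string; ALPHA and THRESHOLD are
// guesses, not the originals.
const ALPHA = 0.1;      // smoothing factor
const THRESHOLD = 0.5;  // normalized "lid closed" cutoff
let smoothed = 0;
let wasClosed = false;

function sample(voltage) {
  smoothed = ALPHA * voltage + (1 - ALPHA) * smoothed;
  const closed = smoothed > THRESHOLD;
  let event = null;
  if (closed && !wasClosed) event = 'lid-closed';
  if (!closed && wasClosed) event = 'lid-opened';
  wasClosed = closed;
  return event;
}
```

The smoothing keeps a single noisy spike from registering as a lid event; only a sustained change in voltage crosses the threshold.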

On the computer side of things, there’s a little Python script which listens on the serial device for whenever the lid has closed. It uses the pySANE bindings to scan the document and then, using PIL, compares it to a blank scan in order to figure out whether what we scanned was actually empty. If it isn’t empty, it saves it to disk as a file named after the current date.

Twenty lines of Arduino C and a hundred of Python later, I have a scanner which makes it exceedingly easy to digitize my paper trail. Too bad I haven’t used it in four months. I reinstalled Ubuntu at some point and never quite bothered setting it up again (even though it would only entail like two lines to attach it to the startup applications list). Even when it was working, it took some amount of work and offered almost no immediate benefit: it didn’t integrate any form of OCR or fancy visualization which could potentially make this interesting from a scientific perspective. But school is winding down (literally, graduation is a smidgen more than a week away), so I’m not exactly going to have much more paper to digitize. I guess this marks the end of a failed experiment, the pre and post mortem, interesting in that it succeeded at just about everything I set out to do, but I never quite got around to using it.

Maybe you’ll find it more useful than I did.

Posted in Arduino + Scanner.


Recoloring Planck Data


The rationale behind this is actually pretty contrived, but one of my friends had an imminent birthday, and I had no idea what kind of present to get her. Incidentally, she had been working on some project and sent me a copy to look over, a request that I honored by perpetually promising to get to it eventually. Sure, it was interesting enough, but several months elapsed and I was beginning to face the fact that I would in all likelihood never actually get to it (kind of like my bottomless Instapaper queue from three years ago). But that resounding guilt instilled the notion that she somehow liked astrophysics (the paper was something on Perlmutter’s Nobel). So in the absence of any other good ideas, I decided to get her a giant printout of the classic WMAP CMBR.

Soon after finding a poster for sale on Zazzle entitled the “Face of God” (a particularly poetic pantheistic epithet), I found out that only a week earlier the European Space Agency had published the results of their Planck probe: a substantially higher-quality rendition of the cosmic microwave background. So the solution would be simple. I’d just take that new, clearer image and upload it to that poster printer under some clever title like “Face of God: Dove Real Beauty™”, as if NASA’s WMAP were some kind of odd Gaussian girl trope.



But the ESA’s Planck coloring is, for some unfathomable reason, particularly ugly. Sure, it has a kind of crude appeal reminiscent of some yellowed 14th-century cartographic map with its tan speckled shades of color, but in general, it’s just kind of ugly. Maybe it’s five hundred million years of evolution that makes me particularly predisposed to the blue-green aesthetic of leafy flora and the azure sky. Also, for the sake of recognizability, the WMAP data has made its fame with that particular coloring, so it’s kind of unreasonable to expect someone to recognize it after the color scheme has been changed.

The task of recoloring it was actually pretty simple: I just had to locate a legend for each of the respective renderings, a solid gradient which spans from the cool side to the warm side (the actual range of the data is only ±2mK, so there isn’t in this case a massive difference between cool and warm). After crawling through a handful of scientific publications, it’s easy enough to find one and take a screenshot.


The difference in width is actually a kind of useless distinction, an artifact of the resolution of the paper or image I extracted the gradient from. It’s kind of interesting because I don’t really have any idea what the mathematical basis of these gradients is. The WMAP one looks like a simple rainbow, so it may just be the colors arranged in progressively increasing wavelength, while the ESA coloring appears to be some kind of linear interpolation between red, white and blue (if the nationalities were inverted, one might be tempted to say murrica).

But once the gradient is established, it becomes the trivial task of mapping the colors of one image onto another, something that I kind of hackishly accomplished with a Python script using PIL (it took a minute or so to process the 8 million pixels, but that’s not really too bad). And then, because the ultimate purpose of my project wasn’t so much to attain scientific accuracy as to feign it with a better aesthetic, I went into GIMP and superimposed a translucent copy of the WMAP data so the image isn’t quite so speckled and the larger continental blobs are more apparent.
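The color mapping itself amounts to a nearest-color lookup in one legend followed by a read at the same fractional position in the other. A toy sketch (the gradients here are tiny hand-made arrays; the real ones were screenshots of published color bars, and the real script used PIL):

```javascript
// Map a pixel through two gradient legends: find the nearest color in
// the source legend, then take the color at the same fractional
// position in the target legend.
function remap(pixel, srcGrad, dstGrad) {
  let best = 0, bestDist = Infinity;
  srcGrad.forEach(([r, g, b], i) => {
    const d = (pixel[0] - r) ** 2 + (pixel[1] - g) ** 2 + (pixel[2] - b) ** 2;
    if (d < bestDist) { bestDist = d; best = i; }
  });
  const t = best / (srcGrad.length - 1); // fraction along the legend
  return dstGrad[Math.round(t * (dstGrad.length - 1))];
}
```

Applied per pixel over the whole 8-million-pixel map, this is exactly the kind of embarrassingly parallel loop that takes a minute in an interpreted language.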

Here is the poster if you want it. And the resulting 6.1MB jpeg.

Posted in Planck/WMAP.


hqx.js – pixel art scaling in the browser


Every once in a while some gadget has the misfortune of epitomizing the next first-world problem. I guess right now, this is owning a Retina (or equivalent) laptop or tablet (arguably phone, but most web pages are scaled out so it’s not that big of a problem) and being irked at the prevalence of badly scaled graphics. So there’s a new buzzword, “Retina Ready”, for websites, layouts and designs which support higher-resolution graphics on devices which support them, often meaning lots of new files and new CSS rules. It’s this trend of high-pixel-density devices (like the iPad 3, Retina Macbook Pro, Nexus 10 and Chromebook Pixel, though I for one don’t currently have any of them, just this old glitchy-albeit-functional first generation Chromebook) that is driving people to vector icon fonts.

But the problem of radical increases in resolution isn’t a new one. Old arcade games rarely exceeded 260×315, and the Game Boy Color had a paltry 160×144. While a few people still nostalgically lug around game cabinets and dig out their dust-covered childhood handheld consoles for nostalgic sneezing fits, most of the old games are now played with emulators running on systems several orders of magnitude more sophisticated in every imaginable aspect. So that arcade monitor that once could engross a childhood (and maybe early manhood) now appears as nothing more than a two-inch square on a twenty-inch monitor. But luckily there is a surprisingly good solution to all of this, in the form of algorithms designed specifically for scaling pixel art.

The most basic form of image scaling that exists is called nearest-neighbor interpolation, which is extra simple for retina devices because it means simply growing the size of each pixel by a factor of two along each axis. That leads to things which are blocky and, unless you’re going for an 8-bit retro aesthetic with a chiptune soundtrack, ugly.
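For reference, 2x nearest-neighbor scaling over a flat RGBA buffer (the representation canvas ImageData uses) is only a few lines:

```javascript
// 2x nearest-neighbor upscale: every destination pixel copies its
// source pixel at half the coordinates.
function scale2xNearest(src, w, h) {
  const dst = new Uint8ClampedArray(w * 2 * h * 2 * 4);
  for (let y = 0; y < h * 2; y++) {
    for (let x = 0; x < w * 2; x++) {
      const si = ((y >> 1) * w + (x >> 1)) * 4; // source pixel offset
      const di = (y * w * 2 + x) * 4;           // destination offset
      for (let c = 0; c < 4; c++) dst[di + c] = src[si + c];
    }
  }
  return dst;
}
```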

The most common forms of image scaling borrow a lot from the math and signal processing fields, with names like bilinear, bicubic, and Lanczos. Essentially, they treat an image as some kind of composition of sinusoidal parts and try to ideally extrapolate and interpolate such that visible artifacts are marginalized. It’s all very mathy, but the result is kind of the opposite of nearest-neighbor, because it has the tendency to make things blurry and fuzzy.

The thing is that the latter tries to reach some kind of mathematical ideal, because images taken by your friendly neighborhood DSLR-toting amateur (spider-powers optional) are actually samples of real world points of data– so this mathematical pursuit of purity works out very well. There’s still the factor-of-four information-theoretic gap that needs to be filled in with best-guesstimates, but there isn’t really any way to improve the way a photograph is scaled without using a higher-resolution version of said photograph. But most photographs that are taken already are sixteen-megapixel monsters and they usually still look acceptable when upscaled.

The problem arises with pixel art: little icons or buttons which someone painstakingly drew in Photoshop one lazy summer afternoon in the late 90s. They’re everywhere, and each pixel wasn’t captured and encoded by a sampling algorithm of some analog natural phenomenon; each pixel was lovingly crafted and planted by some meticulous artist. There is no underlying analog signal to interpret, it’s a direct perceptual hookup to the mind of the creator, and that’s why bicubic sampling looks especially bad here.

Video games, before 3D graphics engines and math-aware anti-aliasing concerned with murdering jaggies, in the old civilized age of bit-blitting, were mostly constructed out of pixel art. Each color in that limited palette was placed there for a reason and could be exploited by specialized algorithms to construct higher-quality upscaled versions which remained sharp. These come with names like EPX, Scale2x, AdvMAME2x, Eagle, 2xSaI, Super 2xSaI, hqx, and most recently, Kopf-Lischinski. These algorithms can be applied in real time to emulator windows to acceptably scale a game to new sizes while eschewing jagged corners and blurry edges.

Anyway the cool thing is that you can probably apply these algorithms in lieu of the nearest-neighbor or bilinear scaling algorithms used by browsers on retina platforms to effortlessly upgrade old sites to shiny and smooth. With a few rough heuristics (detect if an image appears to be a sprite by testing for a limited palette, see if the image is small or a perfect square, detect if it has transparent pixels) this could be packed into a simple script include that website makers could easily inject into their pages to automagically upconvert old graphics to new shiny high-resolution ones without having to go through the actual effort of drawing new high resolution graphics and uploading them online. And this could also be packaged as a browser extension so that, once and forever after, this first-world nuisance shall be no more.
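Such a sprite-detecting heuristic might look something like the following sketch; every threshold here is an invented guess, not a tuned value:

```python
def looks_like_pixel_art(pixels, max_palette=32, max_dim=128):
    """Rough heuristic from the ideas above: treat an image as a
    sprite if it's small and uses a limited color palette.
    `pixels` is a list of rows of (r, g, b, a) tuples.
    The thresholds are arbitrary placeholders."""
    h, w = len(pixels), len(pixels[0])
    # Photographs tend to be large; sprites tend to be tiny.
    if h > max_dim or w > max_dim:
        return False
    # Count distinct colors; photos have thousands, sprites dozens.
    palette = {px for row in pixels for px in row}
    if len(palette) > max_palette:
        return False
    # Fully transparent pixels are another sprite giveaway.
    has_transparency = any(px[3] == 0 for row in pixels for px in row)
    return has_transparency or len(palette) <= max_palette // 2
```

In a browser the same test would run over canvas `getImageData` output, but the palette-counting logic is the whole trick.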

Before setting out to port hqx-java to JavaScript, I actually did some cursory googling to see if it had been done before. Midway through writing this post, I found out that it had, and in a better way, so I won’t even bother linking to my inferior version. But either way, the actual goal of this project was the part detailed in the last paragraph: an embeddable script or browser extension which could heuristically apply pixel-scaling algorithms– something I probably won’t bother trying to do until at least after I get my college laptop (which I anticipate will be a Retina Macbook Pro 15″). Nonetheless, I haven’t written an actual blog post in almost three months and it’s the last day of this month, and I guess it’s better than having you all (though nobody’s probably going to read this now that Google Reader has died) assume that I’ve died. Anyway, now I’m probably going to retroactively publish old blog posts in previous months to fraud continuity.

Posted in Design, Multimedia Codecs.


Swipe Gesture 2

I’ve actually had this post fermenting in my blog’s drafts folder since at least September, but I never actually got around to finishing it. And now that Google’s enabling the new swipe-to-go-back by default on the dev channel versions of Chrome OS, it finally feels like the right time to post. As in, I feel like posting things right when they’re soon to be absolutely useless (which, you might recall, was the case with my Google Wave client, which started selling on the App Store the day Google announced that Wave would be phased out and eventually shut down).

So, a few months ago, I finished my update to the Swipe Gesture extension, which featured a rather significant redesign. It supported multiple gesture directions and a fancy animated element to indicate the direction of the gesture.

Posted in Chrome Extensions, Swipe Gesture.



On December 15th, 2012, at 12:15pm, I was accepted Early Action to the Massachusetts Institute of Technology, so I’ll be there starting this fall :)

This blog isn’t principally here to boost my ego, so I’ll leave it at that, because I’m writing this post some six months after-the-fact with a fake retroactive timestamp in order to fraud my monthly post-count goal.

Posted in Education.


Facebook Chat Bot, Turing Completeness

Okay, so I really want to maintain at least one blog post per month, and here I am, right before New Year’s with no apocalypse in sight to rapture me away from having to write a blog post. I don’t have anything particularly interesting ready to share, so here’s something fun that only took a few hours.

The rather long Bayeux tapestry of an image I have crammed to the right marks the culmination of a series of rather odd tangents. It also serves as a reminder for me to abandon hope, because the shape of the conversation forces whatever post I plan on writing to be verbose enough as to fill all that vertical space so that the actual text here isn’t dwarfed by the image, which would be aesthetically jarring.

It all started the day before yesterday (I’m pretty sure it wasn’t actually then, but this is my abridged timeline and I’m entitled to indulge in whatever revisionist temptations I have as I’m writing this at a rather late hour, because 2013 is creeping closer at an uncomfortably fast rate and I still have some finite quantity of homework I haven’t attempted to start over the course of the past week), when my friend Robert realized that a rather significant portion of my chat responses include “wat”, “walp”, “lol”, “yeah”, “:P”, “cool”. From that, he logically sought to create a naive Bayes predictor of me, because why have a person nod in assent when you can have a robot which can, to some degree of accuracy, automatically give that nod without bothering a physical human being.
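Robert's actual classifier lived in Mathematica and I don't have it, but the gist of a naive Bayes canned-response picker fits in a few lines. This is a loose Python sketch with made-up training data, not his code: score each stock reply by its log-prior plus Laplace-smoothed log-likelihoods of the incoming words.

```python
import math
from collections import Counter, defaultdict

class CannedResponder:
    """Naive Bayes over (incoming message -> my stock reply).
    A sketch of the idea, not the real trained model."""
    def __init__(self):
        self.reply_counts = Counter()          # how often I said each reply
        self.word_counts = defaultdict(Counter)  # words that provoked it

    def train(self, message, reply):
        self.reply_counts[reply] += 1
        self.word_counts[reply].update(message.lower().split())

    def predict(self, message):
        words = message.lower().split()
        def score(reply):
            total = sum(self.word_counts[reply].values())
            # Unnormalized log-prior; constant offsets don't change the argmax.
            s = math.log(self.reply_counts[reply])
            for w in words:
                # Add-one smoothing so unseen words don't zero out a reply.
                s += math.log((self.word_counts[reply][w] + 1) / (total + 1))
            return s
        return max(self.reply_counts, key=score)
```

Trained on a 30,000-message corpus of mostly "lol" and "yeah", the argmax would, unsurprisingly, mostly be "lol" and "yeah".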

This isn’t quite the same thing as a traditional chat bot, because traditional chat bots don’t pretend to be actual people, largely because the state-of-the-art of artificial intelligence is quite a ways off from creating something which can suitably pass a Turing test, and even further away from being able to learn the entire knowledge of a person and provide an intelligent response to every imaginable stimulus. A closer approximation would be that this is a sort of semi-automated chat-macro system, whereby my generally useless responses of agreement are outsourced to a rudimentary script, fully capable of replicating my own rather unhelpful self. And from that it’s able to try to behave human by replicating a very very narrow subset of activity, deferring to a real human transparently.

It’s somewhat like how GMail has a priority inbox feature which uses magical algorithms to separate the wheat from the chaff, diverting your attention away from the less subjectively important emails toward the few which actually matter. Except instead of giving the user ultimate discretion over the emails, this one would simply reply with “lol”.

It’s pretty easy to see any realistic application of this as rare. But I could totally be wrong, and I could be seeing this from totally the wrong angle. It’s entirely plausible that some subtle variant of this idea is absolutely brilliant, and will serve as the future of networked communication for decades to come. Maybe as email goes the way of the Dodo, Instant Messaging will become the semi-permanent high-persistence low-immediacy ironic-initialism that takes email’s place, and the rest of the world becomes burn-on-reading SnapChats. Maybe SnapChat gets combined with lifelogging, and every minute of your life gets divided into four-second snippets and sent to one of your four thousand middle-school friends, so they can bask in the new hyper-intimate super-network, fueled by non-discretionary artificial intelligence filters.

That was Robert’s end, anyway. He went on combing through our accumulated ~30,000 messages with his handy Mathematica, trying to identify pertinent features which might plausibly replace me. On my side of things, I started with the userscript which would run in my browser to intercept the messages he sent, process them, and send out a response, if a suitable automated one existed.

This actually turned out to be slightly more difficult than I had anticipated, for a single reason, and it was and has been the same reason that has bothered me for quite some time: firing events. I really should probably learn how to use Selenium or something to automate things in a nice and clean way.

And that’s completely ignoring the five-hundred-pound trivial solution in the room (which isn’t to use a 500lb gorilla as a metaphor): Facebook supports XMPP, which is a nice open protocol with a billion implementations, and it’s really not that hard to automate. I mean, I’ve even done it before. I have a Raspberry Pi after all, and I can use it to run useless services like this, because I’ve decided that that’s exactly what a Raspberry Pi is for. But I didn’t, and I have no idea why. C’est la vie, I guess.

The secret ended up being to Google a bunch of phrases related to keyboard-event-emitting with initializeKeyboardEvent or something of that sort, and to steal some code from StackOverflow. It took a few tries, but eventually something that worked emerged, and it was cool.

For posterity, this was the code that worked:

But that wasn’t the end of it. No, now there was a piece of code which could convincingly send a message, but it also had to read messages too. That wasn’t hard, actually: I just created a MutationObserver object and named it after a Fringe character. But then, presumably because Robert’s code was still training on that 30k-message corpus, or something inexplicable, I built a crappy rudimentary stand-in with four or so simple regular-expression-based rules.

One of them happened to be that if someone were to send me the message “lol”, I would inexplicably respond with “lol” as well. This wasn’t initially a problem, because the idea was to create a robotic version of me and only me. But Robert found it cool too, so he ran the same script. The problem was that this meant that if either one of us were to introduce the word “lol”, it would spiral nigh perpetually ad nauseam.
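Those rules were roughly of this shape. The patterns below are invented stand-ins for the real four (which I no longer remember), and the loop at the end is the lol-echo failure mode in miniature:

```python
import re

# Stand-ins for the "four or so" regex rules; the patterns are invented.
RULES = [
    (re.compile(r'^lol$'), 'lol'),          # the fatal echo rule
    (re.compile(r'\?$'), 'wat'),            # any question gets a "wat"
    (re.compile(r'\b(ok|okay)\b'), 'cool'),
]

def auto_reply(message):
    """Return a canned reply, or None to defer to the real human."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return None

# Two copies of the echo rule talking to each other never stop:
msg = 'lol'
for _ in range(4):
    msg = auto_reply(msg)  # each bot dutifully replies "lol" to "lol"
```

The fix is obvious in hindsight (don't reply to your own reply), but by then the loop had become the interesting part.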

Somehow this sparked the idea to implement simple esoteric languages on it. The first one was a simple substitution rule: 0 -> 1, 1 -> 10. And over a few iterations it grew and took up huge amounts of space.
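That substitution rule is a couple of lines of Python; the lengths of successive words trace out the Fibonacci numbers, which is why the messages ballooned so quickly:

```python
def substitute(word):
    """One pass of the substitution rule 0 -> 1, 1 -> 10."""
    return ''.join('1' if c == '0' else '10' for c in word)

# Iterate a few generations starting from '0'.
word = '0'
history = [word]
for _ in range(5):
    word = substitute(word)
    history.append(word)
# history: '0', '1', '10', '101', '10110', '10110101'
# lengths: 1, 1, 2, 3, 5, 8 -- the Fibonacci numbers.
```

This is the classic Fibonacci word substitution system, so the exponential chat-window blowup was mathematically guaranteed.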

Next up was implementing a tag system. I implemented the 2-tag Collatz sequence one. The cool thing is that it’s actually insanely easy to implement: you just take a sequence of symbols, look up the first symbol in a fixed production mapping, append its image to the end, and strip the first two symbols.
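Presumably this was De Mol's well-known construction, with productions a → bc, b → a, c → aaa and two symbols deleted per step; starting from a run of n a's, the word collapses back to a run of a's exactly at each value of the shortcut Collatz sequence. A sketch:

```python
# De Mol's 2-tag system for the (shortcut) Collatz function.
PRODUCTIONS = {'a': 'bc', 'b': 'a', 'c': 'aaa'}

def tag_step(word):
    """Append the production of the first symbol, then strip two."""
    return (word + PRODUCTIONS[word[0]])[2:]

def collatz_trace(n, max_steps=1000):
    """Run from a^n and record n plus every later all-a word length."""
    word, seen = 'a' * n, [n]
    for _ in range(max_steps):
        if len(word) < 2:
            break  # tag systems halt when too short to delete from
        word = tag_step(word)
        if word and set(word) == {'a'}:
            seen.append(len(word))
            if len(word) == 1:
                break
    return seen
```

Starting from "aaa": aaa → abc → cbc → caaa → aaaaa, i.e. 3 odd goes to (3·3+1)/2 = 5, with the b's and c's doing the bookkeeping in between.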

It was interesting that each little computation of this tiny esolang of a chat conversation involved a round trip of hundreds of miles. But each tag was still letters: ‘a’, ‘b’, and ‘c’. And it was getting late, and I thought that maybe it would be nice to hark back to the inside joke/theory of sorts that sparked it all. So I decided that our tags could, instead of solitary letters, comprise word symbols like “lol”, “d:” and “wat”, which perhaps is a subconscious reference to an old webcomic that I had posted a while ago about a steganographic system built on internet vernacular.

I updated my script, sent it to him, he ran it, and initiated it all by saying “lol lol lol”. And from then on, the line lengths grew and shrank like mountainous terrain, eventually converging on the final state: a solitary “lol”. And for some inexplicable reason, some weird quirk of fate, or some deeper underlying truth to the universe, Robert always got the last “lol”. Whenever I tried it with a random initiating code, it would always terminate in some odd number of iterations, leaving me with the lowly penultimate laugh.

It sort of broke after that, and we didn’t fix it.

Nonetheless, there’s something incredibly cool about how an idea for something questionably Turing-test-worthy can evolve into something Turing-complete.

And that’s a basic summary of how uneventful my winter break has been, incidentally written with exactly 1337 words.

Posted in Facebook Bot.


A New Approach to Video Lectures

At the time of writing, a video is being processed by my `` script, which is only eight lines of code thanks to the beautifully terse nature of Python and SimpleCV. And since it’s clearly not operating at the breakneck speed of one frame per second, I now have plenty of time to write this README, meaning that I’m writing this README. But since I haven’t actually put a description of this project out in writing before, I think it’s important to start off with an introduction.

It’s been over a year since I first wrote code for this project; it really dates back to late April 2011. Certainly it wouldn’t have been possible to write the processor in eight painless lines of Python back then, when SimpleCV was considerably more in its infancy. I’m pretty sure that puts the pre-production stage of this project in about the range of a usual Hollywood movie production. However, that’s really quite unusual for me, because I don’t often wait this long to get started on projects. Or at least, I usually publish something in a somewhat workable state before abandoning it for a year.

However, the fact is that this project has been dormant for more than an entire year. Not necessarily because I lost interest in it, but because it always seemed like a problem harder than I had been comfortable tackling at any given moment. There’s a sort of paradox that afflicts me, and probably other students (documented by that awesome Calvin and Hobbes comic), where at some point you find a problem hard enough that it gets perpetually delayed until, of course, the deadline comes up and you end up rushing to finish it in some manner that only bears a vague semblance to the original intent.

The basic premise is somewhat simple: videos aren’t usually the answer. That’s not to say video isn’t awesome, because it certainly is. YouTube, Vimeo and others provide an absolutely brilliant service, but those platforms are used for things that they aren’t particularly well suited for. Video hosting services have to be so absurdly general because there is this need to encompass every single use case in a content-neutral manner.

One particular example is music, which often gets thrown on YouTube in the absence of somewhere else to stick it. A video hosting site is pretty inadequate for it, in part because it tries to optimize the wrong kinds of interactions. Having a big player window is useless, and having an auto-hiding progress slider and mediocre playback, playlist and looping interfaces are signs that an interface is being used for the wrong kind of content. Contrast this to a service like SoundCloud, which is entirely devoted to interacting with music.

The purpose of this project is somewhat similar: it’s to experiment with creating an interface for video lectures that goes beyond, in terms of interactivity and usability (perhaps even accessibility), what a simple video can do.

So yeah, that’s the concept that I came up with a year ago. I’m pretty sure it sounds like a pretty nice premise, but really at this point the old adage of “execution is everything” starts to come into play. How exactly is this going to be better than video?

One thing that’s constantly annoyed me about anything video-related is the little progress slider tracker thing. Even for a short video, I always end up on the wrong spot. YouTube has the little coverflow-esque window which gives little snapshots to help, and Apple has their drag-down for precision adjustment, but in the end the experience is far from optimal. This is especially unsuitable because, more so in lectures than perhaps in any other type of content, you really want to be able to step back and go over some little thing again. Having to risk cognitive derailment just to go over something you don’t quite get can’t possibly be good. (Actually, for long videos in general, it would be a good idea to snap the slider to the nearest camera/scene change, which wouldn’t be hard to find with basic computer vision, since that’s in general where I want to go.) But for this specific application, the canvas itself makes perhaps the greatest navigational tool of all. The format is just a perpetually amended canvas with rare redactions, and the most sensible way to navigate is by literally clicking on the region that needs explanation.
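The scene-change snapping idea in that parenthetical is easy to prototype with plain frame differencing; this is a crude sketch in bare Python (a real version would use numpy or SimpleCV, and the threshold here is an arbitrary guess):

```python
def scene_changes(frames, threshold=30.0):
    """Flag frame indices where the mean absolute pixel difference
    from the previous frame spikes past a threshold -- a crude cut
    detector for snapping a seek bar to scene boundaries.
    `frames` is a list of equally-sized 2D grayscale frames."""
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        # Sum of absolute per-pixel differences between frames.
        total = sum(abs(cur[y][x] - prev[y][x])
                    for y in range(len(cur))
                    for x in range(len(cur[0])))
        if total / (len(cur) * len(cur[0])) > threshold:
            cuts.append(i)
    return cuts
```

Real cut detectors compare color histograms rather than raw pixels to survive camera pans, but for a mostly static lecture canvas even this naive version would mark the big transitions.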

But having a linear representation of time is useful for pacing, and to keep track of things when there isn’t always a clear relationship between the position of the pen and time. A more useful system would have to be something more than just a solid gradient bar crawling across the bottom edge of the screen, because it would also convey where in the content the current step belongs. This is something analogous to the way YouTube shows a strip of snapshots when thumbing through the slider bar, but in a video-lecture setting we have the ability to automatically and intelligently populate the strip with specific and useful information.

From this foundation we can imagine looking at the entire lecture in its final state, except with the handwriting grayed out. The user can simply circle or brush over the regions which might seem less trivial, and the interface could automatically stitch together a customized lecture at just the right pacing, playing back the work correlated with audio annotations. On top of that, the user can interact with the lecture by contributing his or her own annotations or commentary, so that learning isn’t confined to the course syllabus.

Now, this project, or at least its goals, evolved from an idea to vectorize Khan Academy. None of these features truly requires a vector input source; in fact, many of the ideas would be more useful implemented with raster processing and filters, by virtue of having some possibility of broader application. I think it may actually be easier to do it with the raster method, but I think, if this is possible at all, it’d be cooler to do it using a vector medium. Even if having a vector source were a prerequisite, it’d probably be easier to patch up a little scratchpad-esque app to record mouse coordinates and re-create lectures than to fiddle with SimpleCV in order to form some semblance of a faithful reproduction of the source.

I’ve had quite a bit to do in the past few months, and that’s been reflected in the kind of work I’m doing. I guess there’s a sort of prioritization of projects which is going on now, and this is one of those which has perennially sat on the top of the list, unperturbed. I’ve been busy, and that’s led to this wretched mentality to avoid anything that would take large amounts of time, and I’ve been squandering my time on small and largely trivial problems (pun not intended).

At this point, the processing is almost done, I’d say about 90%, so I don’t have much time to say anything else. I really want this to work out, but of course, it might not. Whatever happens, it’s going to be something.

Posted in Vectorized.
