Work expands so as to fill the time available for its completion.
— Parkinson’s Law
For the past four or so months, I've been working on just one major project. It's rather depressing to think that I built a reasonably impressive initial prototype over the course of about a dozen sleep-deprived hours, and that all I've accomplished since then is minor polish. Technically, there have been at least two rewrites, completely new capabilities, and substantially improved technology, but none of that really matters when it comes to describing the project. A project is what it is, whether at 80% completion, at 95%, or at 99%.
Second semester at school has started, which means that Pass/No Record isn't a thing anymore, and everyone else is adjusting their behavior appropriately. The problem is, I haven't changed. It turns out that an entire semester of racing toward the bottom is unsustainable when the floor is suddenly adjusted to above your current position. But the more important point is that it leaves less time, and in particular, less contiguous time, to work on anything.
Last month, I was working on a port of the Telea inpainting algorithm. Inpainting refers to any kind of image processing operation which attempts to intelligently fill in certain regions of an image with other content. Perhaps the most famous implementation is Photoshop's Content-Aware Fill feature, which uses a texture-synthesis, patch-based system that can copy over patterns and textures, filling in vast contiguous regions of missing content. The problem is that patch-based inpainting is almost always quite slow, in spite of its high quality results. There are simpler algorithms based on the Fast Marching Method, like Telea, or on fluid dynamics, like Navier-Stokes, which use diffusion and the like to propagate the solid colors bordering an inpainting mask inwards. I'll write an actual blog post about this once I package it up and build a nifty demo for it.
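To make the diffusion idea concrete, here's a toy sketch in JavaScript. It is not Telea's actual FMM ordering (which marches inward from the mask boundary in order of distance); it just repeatedly averages neighbors over a grayscale buffer until the hole smooths over.

```js
// A toy sketch of diffusion-based filling, not the actual Telea/FMM method:
// repeatedly replace each unknown pixel with the average of its neighbors.
// Assumes the mask doesn't touch the image border.
function inpaint(pixels, mask, width, height, iterations) {
  // pixels: grayscale Float32Array; mask[i] is truthy where content is missing
  for (var iter = 0; iter < (iterations || 200); iter++) {
    for (var y = 1; y < height - 1; y++) {
      for (var x = 1; x < width - 1; x++) {
        var i = y * width + x;
        if (!mask[i]) continue;
        // known border colors diffuse inward over successive sweeps
        pixels[i] = (pixels[i - 1] + pixels[i + 1] +
                     pixels[i - width] + pixels[i + width]) / 4;
      }
    }
  }
  return pixels;
}
```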
Last year, Ben Kamens of Khan Academy posted about reverse engineering the flyout menu on Amazon, which suffered from the rather odd behavior of acting exactly how it should: you totally should be able to navigate to a submenu without worrying about accidentally mousing over something else. I looked around for a context menu library which could actually do this, but for some reason I couldn't find one, so I decided to make my own.
I have a habit of writing blog posts in Sublime Text in lieu of legitimate READMEs. This project was simple and short enough that I think I did manage to write something with a passable semblance of a functional README before initiating this quest to craft a blog post.
This is hardly a project that I'd expect to go down in the annals of my blog as particularly interesting or notable. Instead, it's a rather boring low-level computer-sciencey thingy which would feel perfectly comfortable amidst homework assignments.
In fact, it’s virtually assured that documenting this endeavor– the combination of writing the words which comprise this post and the documentation already written on how to use this rather simple library– will inevitably consume more of my time than actual coding (people who are more familiar with highly intricate professional software tapestries may claim that such is merely the ideal and typical experience of a programmer).
Notice that I’ve managed to spend three paragraphs (well, my paragraphs are on the order of two sentences, so it doesn’t even qualify for the fourth-grade definition for what a paragraph is) talking about the uninteresting character of the project, its relative diminutive size, and the process of writing about the subject, without actually describing what exactly the project is. This, more than perhaps any other project of mine deserves such a diversion, for it is unlikely to yield “much amaze” or even a modest “such wow”.
It is a binary heap priority queue, a teensy bit shy of 50 lines, a mere kilobyte minified. It's very loosely based on Adam Hooper's js-priority-queue, but considerably smaller.
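For flavor, here's a from-scratch sketch of the same idea (not the actual library source): a binary min-heap with a user-supplied comparator and O(log n) push and pop.

```js
// a sketch of a comparator-driven binary min-heap, in the spirit of the
// library described above (not its actual source)
function BinaryHeap(compare) {
  this.data = [];
  this.compare = compare || function (a, b) { return a - b; };
}
BinaryHeap.prototype.push = function (value) {
  var data = this.data, compare = this.compare, i = data.push(value) - 1;
  while (i > 0) { // bubble up while smaller than the parent
    var parent = (i - 1) >> 1;
    if (compare(data[i], data[parent]) >= 0) break;
    var tmp = data[i]; data[i] = data[parent]; data[parent] = tmp;
    i = parent;
  }
};
BinaryHeap.prototype.pop = function () {
  var data = this.data, compare = this.compare;
  var top = data[0], last = data.pop();
  if (data.length === 0) return top;
  data[0] = last; // move the last leaf to the root, then sift it down
  var i = 0;
  for (;;) {
    var left = 2 * i + 1, right = left + 1, smallest = i;
    if (left < data.length && compare(data[left], data[smallest]) < 0) smallest = left;
    if (right < data.length && compare(data[right], data[smallest]) < 0) smallest = right;
    if (smallest === i) break;
    var tmp = data[i]; data[i] = data[smallest]; data[smallest] = tmp;
    i = smallest;
  }
  return top;
};

var q = new BinaryHeap();
q.push(5); q.push(1); q.push(3);
q.pop(); // 1
```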
Now that I’ve gotten the actual informative and useful part of the blog post off my shoulders, I can take you along for the unproductive plunge into the rather lame backstory for the project. I was working on a port of the Teleainpainting algorithm and needed a priority queue, because the little incremental insertion sort that I hacked together in two lines was taking a few orders of magnitude longer than was acceptable.
I couldn’t actually find the source code for PriorityQueue.js so I settled for js-priority-queue. It was a bit annoying in that it required require.js to function and included several storage strategies that were totally useless. So I copied and pasted a few files together and stripped it of its AMD dependencies and created a somewhat dieted version.
But as 9th grade World History teaches us, appeasement never works, and this purely pragmatic minification led inexorably to irrational code golfing. At that point, I was overcome by an undeniable urge to cleanse it further, perhaps blinded by not-invented-here syndrome and software anorexia. Minutes later, the file had been laid waste, and in its place existed its bare skeletal remains.
Now all I need to do is push it to github and write a blog post…
On tablets or other touchscreen devices, of which there are quite a number nowadays (as this is the New Year's Eve post, I am obliged to include conjecture about the technological zeitgeist), a library such as Ocrad.js might be used to add handwriting input in a device- and operating-system-agnostic manner. Oftentimes, capturing the strokes and sending them over to a server to process might entail unacceptably high latency. Maybe you're working on an offline-capable note-taking app, or a browser extension which indexes all the doge memes that you stumble upon while prowling the dark corners of the internet.
If you’ve been following my trail of blog posts recently, you’d probably be able to tell that I’ve been scrambling to finish the program that I prototyped many months ago overnight at a Hackathon. The idea of the extension was kind of simple and also kind of magical: a browser extension that allowed users to highlight, copy, and paste text from any image as if it were plain text. Of course the implementation is a bit difficult and actually relies on the advent of a number of newfangled technologies.
If you try to search for some open source text recognition engine, the first thing that comes up is Tesseract. That isn’t a mistake, because it turns out that the competition is worlds away in terms of accuracy. It’s actually pretty sad that the state of the art hasn’t progressed substantially since the mid-nineties.
A month ago, I tried compiling Tesseract using Emscripten. Perhaps it was a bad thing to try first, but soon I learned that even if it did work out, it probably wouldn't have been practical anyway. I had figured that all OCR engines were powered by artificial neural networks, support vector machines, k-nearest-neighbors, and their machine learning kin. It turns out that this is hardly the norm except in the realm of the actually-accurate, whose open source provinces live under the protection of Lord Tesseract.
GOCR and Ocrad are essentially the only other open source OCR engines (there's technically also Cuneiform, but the source code is in a really, really big zip file on some Russian website, and it's also really slow according to benchmarks). And something I didn't realize until I had peered into the source code is that they are powered by (presumably) painstakingly written rules for each and every detectable glyph and variation. This kind of blew my mind.
Anyway, I tried to compile GOCR first and was immediately struck by how easy and painless it was. I was on a roll, and decided to do Ocrad as well. It wasn't particularly hard: sure, it was slightly more involved, but still hardly anything.
If you know me in person, you’ll probably know that I’m not a terribly decisive person. Oftentimes, I’ll delay the decision until there isn’t a choice left for me to make. Anyway, serially-indecisive-me strikes again, so I alternated between the development of GOCR.js and Ocrad.js, leading up to a simultaneous release.
But in the back of my mind, I knew that eventually I would have to pick one for building my image highlighting project. I had been leaning toward Ocrad the whole time because it seemed to be a bit faster and more accurate when it came to handwriting.
Anyway, I spent a while building the demo page. It's pretty simple, but I wouldn't describe it as ugly. There's a canvas in the center which can be drawn on in arbitrary fonts pulled via the Google Font API, and a neat little thing which lets you draw on the canvas freehand. And to round out the experience, you can run the OCR engine on your own images, either by loading the image file onto a canvas (supporting any format the browser does: JPEG, GIF, BMP, WebP, PNG, SVG) or by feeding the engine directly with its native NetPBM formats.
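For reference, feeding the engine looks something like this (OCRAD is the global that Ocrad.js exports, and it accepts a canvas directly; the image-to-canvas dance is what makes any browser-decodable format work):

```js
var img = new Image();
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  canvas.getContext('2d').drawImage(img, 0, 0);
  var text = OCRAD(canvas); // synchronously returns the recognized string
  console.log(text);
};
img.src = 'scanned-page.png'; // anything the browser can decode
```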
What consistently amazes me about Optical Character Recognition isn't its astonishing quality or lack thereof. Rather, it's how utterly unpredictable the results can be. Sometimes there'll be some barely legible block of text that comes through absolutely pristine, and other times there will be a perfectly clean input which outputs complete garbage. Maybe this is a testament to the sheer difficulty of computer vision, or to the incredible and under-appreciated abilities of the human visual cortex.
At one point, I was talking to someone and I distinctly remember (I know, all the best stories start this way) a sense of surprise when the person indicated that he had heard of Tesseract, the open source OCR engine. I had appraised it as somewhat more obscure than it evidently was. Some time later, I recounted the incident to a friend, and he said something along the lines of "OCR is one of those fields that everyone comes across once".
I guess I’ve kind of held onto that thought for a while now, and it certainly seems to have at least a grain of truth. Text embedded into the physical world is more or less our primary means we have for communication and expression. Technology is about building tools that augment human capacity and inevitably entails supplanting some human capability. Data input is a huge bottleneck, and while we’re kind of sidestepping the problem with things like QR codes by bringing the digital world into the physical. OCR is just one of those fundamental enabling technologies which ought to be as broad in scope as the set of humans who have interacted with a keyboard.
I can’t help but feel that the rather large set of people who have interacted with the problem character recognition have surveyed the available tools and reached the same conclusion as your miniature Magic 8 Ball desk ornament: “Try again later”. It doesn’t take long for one to discover an instance of perfectly crisp and legible type which results in line noise of such entropy that it’d give DUAL_EC_DRBG a run for its money. “No, there really isn’t any way for this to be the state of the art.” “Well, I guess if it is, then maybe it’ll improve in a few years- technology improves quickly, right?”
You would think that some analogue of Linus's Law would hold true: "given enough eyeballs, all bugs are shallow", especially if you're dealing with literal eyeballs reading letters. But incidentally, the engine that absolutely everyone uses was developed three decades ago (it's older than I am!) and abandoned for a decade before being acquired and released to the world (by our favorite benevolent overlords, Google).
In fact, what’s absolutely stunning is the sheer universality of Tesseract. Just about everything which claims to have text recognition as a feature is backed by it. At one point, I was hoping that Mathematica had some clever routine using morphology and symbolic new kinds of sciences and evolved automata pattern recognition. Nope! Nestled deep within the gigabytes of code lies the Chuck Testa of textadermies: Tesseract.
If the frequency of these progress reports says anything, it's that I'm terrible at sticking to a schedule and even worse at estimating progress. I was hoping that I'd be able to use the Thanksgiving break to be productive and get ahead on my various projects, in particular getting the text-detection and OCR extension to a state complete enough to publish.
For the past few days I've been playing with writing a minimal cryptocurrency which incorporates the central innovation of Bitcoin: the hashcash-based transaction chain. It's a simple enough concept that I could manage to implement the core functionality in about a hundred lines of CoffeeScript.
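The proof-of-work kernel is small enough to sketch here. This is an illustrative JavaScript reimplementation of the hashcash idea, not the actual hundred lines of CoffeeScript (the names and difficulty scheme are mine): grind nonces until the block's SHA-256 digest starts with a given number of zero hex digits.

```js
var crypto = require('crypto');

function mine(prevHash, payload, difficulty) {
  var target = new Array(difficulty + 1).join('0'); // e.g. '0000'
  for (var nonce = 0; ; nonce++) {
    var hash = crypto.createHash('sha256')
      .update(prevHash + payload + nonce)
      .digest('hex');
    if (hash.slice(0, difficulty) === target) {
      return { prevHash: prevHash, payload: payload, nonce: nonce, hash: hash };
    }
  }
}

// each block commits to its predecessor's hash, forming the chain:
// tampering with any transaction means redoing all the work after it
var genesis = mine('0', 'genesis', 4);
var block1 = mine(genesis.hash, 'alice pays bob 5', 4);
```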
I’m also working on a cute little alarm clock which incorporates a pressure sensor underneath my bed in lieu of a snooze button. The goal is to use it as a framework for novel approaches to the tried and true problem of getting undergrads to wake up at ungodly hours (anything before noon). It’s powered by an Arduino and I2C ChronoDot and a large 7-segment display.
I’ve laid a little ethernet cable from the side of my bed, along the ceiling and to the door which contains a bunch of little EL wire inverter boxes (I’m smothering the transformers with Sugru in order to hopefully dampen the onset of my acoustically-induced insanity).
It’s Halloween, and I still haven’t posted a monthly blog post and I don’t quite feel like retroactively posting something next month. I’m understandably quite starved for free time with my attempts to reconcile sleep with college with social interaction- and from the looks of it, I probably won’t be able to publish the blog post that I’ve been working on for the better half of this month before it the month ends.
For the past five days, I've been using an actual laptop: a late-2013 MacBook Pro Retina 15″ (yes, I made it through the better part of two months of college without a laptop more sophisticated than a 2009-era Chromebook). Aside from the obligatory setup process, acclimation to the new operating system, and a mild bout of screen-size anorexia (which, with proper counseling, I've more or less recovered from; the 13″ is actually somewhat small, but I still can't quite shake the feeling that 15″ is a smidgen too big), the process has been quite painless.
Getting a laptop slightly more capable than the Series 5 Chromebook (not the beefier Celeron model, the original Atom) was a long overdue change. I participated in my first hackathon (incidentally also the first time I'd really written code since the start of the school year) at the beginning of the month. By the end of that 32-hour stretch, I did yearn for a functional trackpad, a larger screen, and a more performant setup. But my shining knight in Unibody Aluminium armor would not come until three weeks later. I don't think the productivity gains would have affected things too much, though: even with this dinky setup, the prototype scored the second place trophy.
The exact subject of the project was actually discussed briefly in the last progress update, on that long list of projects which I'd yet to start. That night, I had actually taken the initiative to do a proper port of a MATLAB implementation of the Stroke Width Transform. I hooked it up as a content script which would listen for mouse events over image elements, search for textual regions, and draw semitransparent blue boxes where appropriate, and I connected it to a Node backend which would run Tesseract to recognize characters. By the end of it all, I had enough for a pretty impressive demo.
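Conceptually, the content-script half looked something like the following sketch (hedged heavily: the /detect endpoint, its response shape, and the port are invented for illustration; the real prototype's wiring surely differed):

```js
// watch for the mouse entering images, ask a hypothetical backend for
// detected text regions, and overlay translucent blue boxes
document.addEventListener('mouseover', function (e) {
  if (e.target.tagName !== 'IMG' || e.target.dataset.swtDone) return;
  e.target.dataset.swtDone = '1';
  fetch('http://localhost:8080/detect?src=' + encodeURIComponent(e.target.src))
    .then(function (res) { return res.json(); })
    .then(function (regions) { // assume [{x, y, w, h}, ...] in image coordinates
      var rect = e.target.getBoundingClientRect();
      regions.forEach(function (r) {
        var box = document.createElement('div');
        box.style.cssText = 'position:absolute;background:rgba(0,0,255,0.3);' +
          'left:' + (rect.left + scrollX + r.x) + 'px;' +
          'top:' + (rect.top + scrollY + r.y) + 'px;' +
          'width:' + r.w + 'px;height:' + r.h + 'px;';
        document.body.appendChild(box);
      });
    });
});
```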
Intermittently for more or less the entire month, I’ve been trying to improve the project- replacing some of the more hacky bits with more reliable implementations. I’ve read about some more algorithms, and experimented with different approaches to improve the method. I’m trying to add more stages to the text detection algorithm, such as the ability to segment an image into different lines, and improve the process of splitting lines into characters beyond mere connected components. But the process is rather tedious and with my limited free time, the project remains quite far away from public availability.
For the past four years, I’ve been using a Casio W-201 watch. It’s plastic and cheap and light, and there isn’t really much more I would ask for. I presume Douglas Adams would be rather disappointed by my fondness of digital watches, but I’m frankly just not smart enough to read an analog dial. I haven’t migrated to those fancy-schmancy smartwatches, and I don’t currently have any intention to part with my precious ticky thingy.
What’s ironic is that the one thing that annoys me about this watch is that sometimes it does, in fact, “tick”. It beeps, with the shrill piezoelectric sound that plagues anything with a four cent PCB. Whenever I switch its mode from military time to regular, or from clockface to stopwatch, I’m greeted by that hundred millisecond reminder that I have ears. And out of some odd sense of courtesy, or deep desire to be unnoticed, this really bothers me.
I may have been a tad disingenuous in saying that I've used the same watch for the past four years. Sometime near the beginning of that span, I managed to lose my watch and bought an exact replica. Well, it wasn't quite an exact replica. The watch had the same model number, but the band that bore the Casio emblem seemed a different hue. Most irritatingly, though, this replica made noises when the old one didn't. At first I thought this was an edgy new feature that made things feel more responsive and tactile, but eventually I realized it was actually pretty annoying.
I distinctly recall the birth of a conspiracy: my subconscious irritation was percolating through neural pathways, assembling a notorious scheme to administer open-heart surgery on this diminutive timepiece and euthanize the noise, once and for all, under the noxious smother of a lead-tin alloy. But this fugitive fermentation of my mind took a backseat when I found my formerly missing watch limply lying on the edge of a basement table.
About a month ago, I had found myself in similar shoes. I had come onto campus with the sudden realization that my precious chronometer had vanished. I scoured the depths of the two pockets in my rather minimal backpack (I decided to pack light for senior year after lugging a monstrosity of a backpack for all of 11th grade), yet no plastic timekeeper was to be found.
Reluctantly, I located my contingency clock, and uneasily slid it onto my left wrist. But soon enough I had felt comfortable in the metaphorical transition of life which the watch represented. It’d be a new phase of my life, running at a different pace, in a new setting- I shouldn’t be encumbered by the gadgets of old.
They say all relationships suffer from a kind of honeymoon phase, where everything in the universe seems kept in perfect harmony. I celebrated my watch's inaugural re-synchronization, meticulously ensuring that the deviation from NIST's official US time was no more than a quarter-second. But as all quartz oscillators perceptibly drift over time, my relationship with my lightless sundial began to sour.
The edgy beeping noise perceptibly evolved from neat to nuisance, and the long-lost scheme of silence started to surface. The final straw came when I acquired a nifty petite set of mini-screwdrivers, ostensibly for eyeglass repair, courtesy of State Farm at the Career Fair. A particularly intractable case of post-midnight procrastination inevitably struck, leaving a desk strewn with digital watch-guts in its wake.
As I fumbled with how to trigger the precise reset code needed to get the watch to emerge from its disassembly-induced coma, I kept trying different permutations, fitting some little spring into every nook and cranny I could find within the interior of the watch. As I haphazardly reassembled the watch, I noticed that the noise was gone. Well, a rather significant caveat was that the screen would go black and the device would reset on every light flick of my wrist. From that, I realized that the spring must have been the connection between ground and the piezoelectric buzzer: silencing the noise would be as simple as taking it out of the watch, rather than shoving it on top of the reset contact.
And as I began to appreciate the magnitude, or lack thereof, of insight gleaned from that epiphany, I came to realize the more protracted history of my first watch. When I had previously changed the battery on the original watch, I must have accidentally knocked out the most minuscule of springs, and simply overwritten the memories of the beep beforehand.
After all this time, the only thing that I had really wanted was to break something in the same way I had so many years prior. And I guess I might as well note as an epilogue that I found my original watch shortly thereafter while rifling through some cabinets. But I’ve embraced this new watch and all it symbolizes, perhaps ironically by physically handicapping it into the old one.
Recently, the American public has been forcefully made aware of the existence of various programs by the NSA- including massive infrastructure for intercepting all domestically routed communications to better protect us from imminent foreign threats. With legions of patriotic analysts, the NSA methodically ranks communications on the basis of their “foreignness” factor to determine candidacy for prolonged retention. Although it was developed with the best interests of the American people at heart, this program unwittingly ensnares communications of purely domestic nature on the order of tens of thousands of incidents per day. These innocent mistakes are putting the agency at a great risk because the 4th Amendment of the Constitution expressly prohibits such affronts to American privacy. Making determinations of foreignness is hard, but to prevent further inconvenience to the American way of life, we should take these leaks as an opportunity for us on the civilian front to aid the NSA by voluntarily indicating citizenship on all our networked communications.
Here, we define the syntax and semantics of X-No-Wiretap, an HTTP header-based mechanism for indicating and proving citizenship to well-intentioned man-in-the-middle parties. It is inspired by the enormously successful RFC 3514 IPv4 Security Flag and the HTTP DNT header.
The HTTP header "X-No-Wiretap" takes the value of the current user's full name, under penalty of perjury. The full name must be immediately followed by identity verification in the form of a standard U.S. Social Security Number, formatted with a hyphen "-" after the third and fifth digits.
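For instance, a compliant request might look like the following sketch (the name and SSN are the long-retired specimen from the famous Woolworth wallet-card incident, not anyone's live credentials):

```js
// an illustrative compliant request: full name followed by formatted SSN
fetch('http://example.com/', {
  headers: { 'X-No-Wiretap': 'Hilda Schrader Whitcher 078-05-1120' }
});
```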
Future revisions of the protocol may introduce additional forms of verification. While the presence of an SSN should be able to lower the foreignness coefficient of the vast majority of domestic communications to well below 51%, initial research seems to indicate that the combination of full name and SSN is able to reduce an associated message's foreignness factor by over 76.8% for 99.997% of Americans. However, there is a chance that certain instances may additionally require Passport, Driver's License, Address, Birthdate, Mother's Maiden Name, and Childhood Best Friend's Name to further lower the foreignness factor. This capability will be addressed in future versions of the protocol.
What about SSL/TLS?
Of course adding encryption makes it substantially more difficult for the NSA to interpret the content of what a user is sending, and increases the chance that they may unwittingly collect and retain your communications. In order to address these concerns, this proposal necessarily deprecates all the SSL/TLS ciphers in favor of Double CAESAR’13, a thoroughly studied and well-known military-grade solution which offers excellent modes for graceful redegradation.
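For implementers, a sketch of Double CAESAR'13: composing two military-grade shifts of 13 yields a shift of 26, demonstrating the graceful redegradation to plaintext promised above.

```js
function caesar13(s) {
  return s.replace(/[a-zA-Z]/g, function (c) {
    var base = c <= 'Z' ? 65 : 97; // uppercase vs lowercase alphabet
    return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
  });
}
function doubleCaesar13(s) { return caesar13(caesar13(s)); }
doubleCaesar13('attack at dawn'); // 'attack at dawn'
```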
Isn’t it dangerous to send your social security number in plaintext along with every request?
Conventional security warns of the possibility of man-in-the-middle attacks, but these new intelligence revelations require entirely new types of cryptographic thinking. Here, the trusted entity is not the server acting at one end, it’s not even the user issuing the requests- but rather, it’s the bureaucracy sitting in the middle politely intercepting all traffic for benevolent analysis- protecting your way of life.
One may be tempted to characterize this as a sacrifice of privacy in order to optimize security, but this position is simply naive. Every new progressive initiative of the government advances both fronts- both security and liberty, never at the expense of either. If you take a holistic long term perspective on the impact on a global scale with a vast array of (classified) information sources, there is very little question that you too would arrive at the same conclusions on the genuine merits of this surveillance system.
In this case, the removal of encryption ensures that the government is able to parse the content of messages to identify terrorists. At the same time, the inclusion of citizenship identification information should give citizens peace of mind, knowing that their messages will not be stored indefinitely in an NSA datacenter.
What about Identity Theft?
What if you set up a server to transparently capture the browser headers? Any malicious entity could then collect all the social security numbers and real identities of everyone who happened to stumble onto their websites and use the information to sign up for credit cards, create hazardous investments, threaten or blackmail loved ones, and masquerade as a citizen while doing terrorist activities!
There isn’t any real evidence that such sweeping surveillance will even substantially reduce the chances of events that are intrinsically outliers anyway. On the other hand, identity theft is a real world issue which affects millions of Americans on a daily basis- and these changes will only make our real problems worse.
— Short-Sighted Critic
Our government has to come to terms with the fact that the flow of information has radically shifted in the past few decades: all the previous paradigms of privacy, security, and adversaries have been obsoleted. Understandably, it needs to create infrastructure to tackle this next generation of attacks. This could mean highly orchestrated attacks being planned online, and the government is justified in trying to exercise every available option to avert the next cyber-9/11. Our adversaries may have no limits to their capabilities, so waiting for definitive evidence on the efficacy of counter-intelligence approaches just gives them an opportunity to plan their next attack.
When what’s at stake is the American way of life, it’s easy to put aside things that don’t really matter.
If the terrorists do find a way to cheat the foreignness heuristic, that’s not a problem, because this proposal is backwards compatible with the existing catch-all NSA policy. They can always, in the end, ignore the X-No-Wiretap header, but we wouldn’t know so it’d be okay.
When can I use this?
It’s expected that this proposal will breeze through the standardization process- because we as Americans can always get together and do that which must be done in these times which try men’s souls. Browsers should implement the feature as soon as possible, so that people can make use of the increased sense of security and privacy it affords.
If you’re truly eager to try it out, you can contribute to the prototype chrome extension which supports the header injection (the reversal of HTTPS Everywhere, a feature called HTTPS Nowhere hasn’t been implemented yet, but we’re accepting pull requests!). Since this extension is still experimental, inserting your personal identifiable information must be done by editing the source code, but you should expect a more user friendly interface in the next revision. Since it isn’t thoroughly tested, there may be a chance that it fails to leak the user’s personally identifiable information with every networked request, but rest assured this will be fixed as soon as the bugs are made aware to us.
We should all rally behind this proposal for a simple technical solution which will go a long way toward simultaneously enhancing both privacy and security, while preserving the only thing which matters: our American way of life.
It must be infuriating to be in a situation where there's a clear problem, and all the obvious remedies continually fail to produce any legitimate result. That's what this blog is like: every month rolls by with some half-baked ideas partially implemented and barely documented. And on the last day of the month, there's a kind of panicked scramble to fulfill an entirely arbitrary self-enforced quota, just to convince myself that I'm doing stuff.
Anyway, it’s pretty rare for me to be doing absolutely nothing, but mustering the effort to actually complete a project to an appreciable extent is pretty hard. This might be in some way indicative of a kind of shift in the type of projects that I try to work on- they’re generally somewhat larger in scope or otherwise more experimental. And while I may have been notorious before for not leaving projects at a well documented and completed state, these new ideas often languish much earlier in the development process.
There’s always pressure to present things only when they are complete and presentable, because after all timing is key and nothing can be more detrimental to an idea than ill-timed and poor execution. But at the same time, I think the point of this blog is to create a kind of virtual paper trail of an idea and how it evolves, regardless of whether or not it falters in its birth.
With that said, I’m going to create a somewhat brief list of projects and ideas that I’m currently experimenting with or simply have yet to publish a blog post for.
One of the earlier entries of the backlog is an HTML5 Scramble With Friends clone, highly performant on mobile with touch events and CSS animations while supporting keyboard-based interaction on desktop. I've always intended to build some kind of multiplayer real-time server component so that people could compete in real time. Maybe at some point it'll be included as a kind of minigame within Protobowl.
The largest project is definitely Protobowl, which has just recently passed its one year anniversary of sorts. It’s rather odd that it hasn’t formally been given a post on this blog yet, but c’est la vie. Protobowl is hopefully on the verge of a rather large update which will add several oft-requested features, and maybe by then I’ll have completed something which is publishable as a blog post.
Font Interpolation/Typeface Gradients. I actually have no idea how long this has been on my todo list (years, no doubt), but the concept is actually rather nifty. With attributes like object size or color, things can be smoothly interpolated in the form of something like a gradient. The analogue for this kind of transition when applied to text would be the ability to type a word whose first letter is in one font, and the last letter being another font, with all the intermediate letters some kind of hybrid. I never did get quite far in successfully implementing it, so it may be a while until this sees the light of day.
I’ve always wanted to build a chrome extension with some amount of OCR or text detection capabilities so that people could select text which was embedded within an image as if it weren’t just an image. At one point I narrowed down the scope of this project so that the OCR part wasn’t even that important (the goal was then just some web worker threaded implementation of the stroke width transform algorithm and cleverly drawing some rotated boxes along with mouse movements). I haven’t had too much time to work on this so it hasn’t gone too far, but I do have a somewhat working prototype somewhere. This one too is several years old.
In the next few days, I plan on publishing a blog post I’ve been working on which is something of a humorous satire on some of the more controversial issues which have arisen this summer.
And there are several projects which have actually gotten blog posts, which weren't themselves formal announcements so much as written versions of my thinking process behind them. I haven't actually finished the Pedant, in spite of the fact that the hardware is theoretically in something of a functional state (I remember that I built it with the intent that it could be cool to wear around during MIT's CPW back in April, but classes start in a few days and it hasn't progressed at all since). Probably one of the most promising ideas is the kind of improved, vectorized, and modern approach to video lectures, but that too needs work.
I’m building the website for a local charity organization which was started by a childhood friend, and maybe I’ll publish a blog post if that ever gets deployed or something.
Longer passwords are more secure. But if you’ve ever been on a website or computer network which mandated that you change your password every 30 days to a string which you haven’t used in the past 180 which could not contain any part of your username or an English language word without being liberally adorned with num3r1cal and sy#b@!ic substitutions and aRbiTRarY capitalization, you should be able to understand the pain of long and arcane passwords.
At the time of writing, I'm a high school senior with something on the order of three weeks left in school. I can't help but feel a tad wistful about the four years I've spent roaming these red-tinted halls of learning. Many of the particularly bittersweet memories involve PE, a mandatory elective for my freshman and sophomore years. A bizarre policy forbade the possession of backpacks in the locker rooms, leading to my acquisition of the skill of opening rotary combination padlocks, a skill which I just as quickly dispossessed myself of, having absolutely no need for a locker after that.
During that time, I read about a pretty nifty hack involving Master Locks. If you entered the combination, opened the lock, closed it again, and left it on the last digit, you could open it again by rotating clockwise by fifteen and then rotating back to the final digit (to fight the possibility of attackers who were aware of the exploit, you could turn the dial clockwise some amount less than 15 after locking it, so that you would only have to remember the final digit). It was this trick that I used to open my locker just about every other school day for those two years: a reduction in security, no doubt, but a great convenience. I've been thinking over the design of the lock recently, and I'm not sure whether it's better classified as a bug or a feature. Conventional wisdom dictates that this substantial reduction in the key space is a rather severe design flaw. The fact that many newer locks are immune to this trick (a lot of locks will snap the dial forward when closed so that it is physically impossible to land on the final digit) seems to support the notion that this was an undesired consequence of locksmithery rather than some kind of delightful accident.
But the more I think about it, the more this bug seems like a feature. Perhaps the loss of security isn't nearly as egregious as it appears, and this little shortcut is actually a delightful little accident. Nearly every school day, I'd only have to take four seconds to open my locker thanks to this handy trick, except on those occasions when some kid before me decided to walk around the lockers pressing his head against the metallic locks, spinning them and pretending he could pick locks, in which case I'd have to enter the whole combination to open sesame. That is, you only need to know one number to open it, unless someone earlier guessed wrong, in which case you'd have to enter three numbers.
The next story is about an iPhone.
I playfully grab at a friend's phone for the second time in a week. Another friend takes a selfie using the newly emancipated phone, with a rather contorted grin. I dutifully replace her lock screen photo, which had previously been an aerial shot of her grassy, picturesque college campus, with the contorted close-up selfie. She learns to be a bit more protective of her phone, quickly issuing a light tap on the power button to re-engage the lock screen, a four-headed Cerberus with an exponential backoff growl. But this doesn't mean anything to the enterprising background changer, because the unlocking gesture happens all the time, and it's fairly easy to figure out what numbers someone's typing.
This is a fairly innocent class of authentication breach, but it should be fairly unnerving given the extent to which our lives and personal information are accessible. Anyway, these two stories led me to think about ways to both simplify the process of entering passcodes and passphrases while simultaneously hardening against some general attacks, in particular replay attacks (and to a lesser extent, man in the middle).
The premise is simple enough: As you’re typing a password, letter by letter and stroke by stroke, each character is processed (either locally, or sent to a server). As soon as the authorization engine is comfortable with the amount of data provided, it sends a signal so that no more data is transmitted, and that authorization is complete.
In the same manner as the old Master Lock, maybe on a particularly auspicious occasion, one might need only one or two symbols to prove one's identity. However, if the wrong digit is entered on the first try, the subsequent attempt must obviously come with increased difficulty. Likewise, it may be advantageous to have a low initial difficulty if the login scenario seems very ordinary, which can be determined by any number of variables, including but not limited to the current IP address, the location as determined by GPS or other means, stylometrics such as the rate of typing and the delays between individual letters, the number of simultaneous concurrent sessions, and how the last session ended (by timeout or by an active logout). But an important part of the system is that the difficulty has to be non-deterministic: if we assume the attacker knows the system and can manipulate the variables at play, there is still a last vestige of hope that the attacker's first guess is wrong (after that, the difficulty climbs and the attacker is no longer really a problem).
Note that the server would not specifically reveal that a password is incorrect (until it’s manually submitted, like in existing systems). It would only indicate when the received content is sufficient.
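Here's a toy sketch of the server side under those constraints: the sufficiency threshold is drawn randomly per attempt and scaled by a contextual difficulty estimate, a correct prefix of adequate length elicits the "sufficient" signal, and a wrong character is never acknowledged. All names and the difficulty formula are invented for illustration.

```js
function Session(password, difficulty) { // difficulty in [0, 1], from context
  this.password = password;
  this.typed = '';
  // non-deterministic threshold: even an attacker who controls every
  // contextual variable can't predict how many characters will suffice
  this.required = Math.max(1, Math.ceil(
    password.length * difficulty * (0.5 + Math.random())));
}
Session.prototype.addChar = function (ch) {
  this.typed += ch;
  var prefixOk = this.password.slice(0, this.typed.length) === this.typed;
  if (prefixOk &&
      this.typed.length >= Math.min(this.required, this.password.length)) {
    return 'sufficient'; // tell the client to stop transmitting
  }
  return 'keep-typing'; // never admit whether the prefix was wrong
};

var s = new Session('correct horse', 0.2); // an ordinary-looking login
s.addChar('c'); // possibly 'sufficient' after just a few characters
```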
In a keyboard input scenario, for a shell prompt or for generic password entry, it's probably important to keep the text box open until the user acknowledges that he has been authorized. However, in case the connection has been compromised by a non-interactive eavesdropping man in the middle with the intent to replay the password, the client should stop sending additional letters as soon as the server signals that it's OK. This gives the server the option of never revealing to the user whether a response is sufficient, essentially forcing the user to enter the entire password and press submit.
Like how some safes or button pads have duress codes that silently call the police when triggered, the system could ignore invalid characters prepended to the actual password, while subsequently blacklisting that given prefix and raising the subsequent difficulty.
Of course this isn’t a complete solution. There needs to be some segmentation of access based on current authorization levels. For instance, checking whatever new mail or facebook statuses exist probably does not warrant much authentication friction, but combing through old personal documents and emails, installing root software or changing system settings, should probably require higher levels. This can probably help in that by keeping a unified password of sorts for all levels, but isn’t itself a complete solution. Two factor authentication is also important, in case a single device is compromised or some part of the scheme is untrusted.
I think it’d actually be pretty cool to see something like this implemented, because I think it might serve as a genuinely useful feature that enables a continuously variable spectrum that varies between when security is a situational necessity and when excessive user friction is detrimental.
We can create a valueOf function which assigns numbers to arrays according to a specific pattern, such that the resultant number can reveal (without much ambiguity) which arrays were involved, and whether they were added or subtracted. There are probably a myriad of ways to construct numbers like that, but one of the simplest (or at least the first one that I could come up with) is to raise some given radix (at least 3) to a unique number that increments every time valueOf is called. You can understand it pretty simply in base 10.
Let's say our first array is [1, 2, 3] and our second array is [5, 4, 3]. Every time an instance's valueOf is called, we plop it onto a temporary global array and record the index. In this case, let's assign the first array index 1 and the second array index 2. If we raise the radix 10 to those indices, we get the respective unique numbers: 10 and 100. These numbers can now be added or subtracted, leading to 110, 90, or -90 (or left unaltered, leaving 10 and 100). To find out which operations and which numbers were involved, we can first add 1000 (10 raised to one more than the size of the global array), which has the useful effect of making everything positive. Now we're left with 1110, 1090, 0910, 1010, and 1100. We can ignore the first and last digits, and each remaining digit can be either a 1, a 0, or a 9 (if it's anything else, this is just a number, not the result of a magical array addition). A 1 tells us the number was added, a 0 tells us the number isn't used, and a 9 tells us that the number was subtracted (with a little care for the borrow that a subtraction propagates into the next digit up, as in 1090 above).
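Concretely, here's the scheme in base 10 with the borrow bookkeeping made explicit (a real implementation would use a smaller radix and clear the registry between statements; the elementwise semantics of the decoded result are just an assumption for this demo):

```js
var registry = []; // temporary global list, reset before each expression

function Magic(arr) { this.array = arr; }
Magic.prototype.valueOf = function () {
  registry.push(this.array);
  return Math.pow(10, registry.length); // first array -> 10, second -> 100
};

function decode(n) {
  var shifted = n + Math.pow(10, registry.length + 1); // make it positive
  var sum = null, carry = 0;
  for (var i = 1; i <= registry.length; i++) {
    var digit = Math.floor(shifted / Math.pow(10, i)) % 10 + carry;
    carry = 0;
    var sign = 0;
    if (digit === 1) sign = 1;                      // added
    else if (digit === 9) { sign = -1; carry = 1; } // subtracted (borrows)
    else if (digit === 10) carry = 1;               // unused, borrow passes up
    else if (digit !== 0) return null;              // just an ordinary number
    if (!sign) continue;
    var arr = registry[i - 1];
    if (!sum) sum = arr.map(function () { return 0; });
    for (var j = 0; j < arr.length; j++) sum[j] += sign * arr[j];
  }
  return sum; // elementwise result (assumed semantics for this demo)
}

registry = [];
var a = new Magic([1, 2, 3]), b = new Magic([5, 4, 3]);
decode(a - b); // valueOf fires twice: 10 - 100 = -90 -> [-4, -2, 0]
```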
What exactly makes it such a heinous offence, you may find yourself asking. The answer is simple: the unholy matrimony of with and Object.defineProperty (also, keep in mind that it isn’t DOMA’s fault).
The power of intercepting all function calls, variable declarations and retrievals then comes by recursively creating another fake object filled with getters and setters whenever a property is accessed. For method calls to primitive types like strings and numbers, we do the same type of sorcery but directly on their respective prototype properties. Whenever a function is passed a number which happens to match the pattern for an array addition or subtraction, we can passively intercept and substitute its value. Any string which matches a certain pattern of CSS selectors can be then transparently substituted with the NodeList which results from a document.querySelectorAll. And we can change all the variable declarations for a for..in loop such that array values are used instead of keys.
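The primitive underneath all of this is small enough to show. A minimal non-strict-mode sketch of how with() lets a defined getter intercept what looks like a bare variable read:

```js
var scope = {};
Object.defineProperty(scope, 'magic', {
  get: function () {
    console.log('intercepted a read');
    return document.querySelectorAll('div'); // substitute whatever we like
  }
});
with (scope) {
  magic; // logs 'intercepted a read' and evaluates to the NodeList
}
```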
And now, four minutes before the end of this month, I've yet again successfully managed to eke out a blog post to fulfill my quota. And I guess I don't have a Humane Society to prove that no humans were harmed in the making of this blog post, but how bad could it possibly be? It's only 150 lines.