TwitMon: Twitter Trending Topics Notifier in Jetpack

I don't know how this is relevant other than it's the Twitter error message. But it's soo cuute that it's absolutely irresistible for any post relating to Twitter.

http://antimatter15.com/misc/jetpack/twitter/twitmon.html

Since all news nowadays comes from Twitter, I promised myself I would make something to keep up on new tweets, since I'm no longer an active user or follower. And BTW, don't follow me (until I say so or something).

So it's a Jetpack add-on that queries Twitter using its JSON API every 3.14159 minutes. It uses the Jetpack notification API to display new items whenever they come up.
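
For the curious, here's roughly the shape such a Jetpack feature takes. This is just a sketch, not the actual TwitMon source: it assumes the early Jetpack prototype's jetpack.notifications.show() and its bundled jQuery, and the trends URL below is a stand-in for whichever Twitter JSON endpoint is used.

// Sketch only, not the real TwitMon code. Assumes the Jetpack prototype
// environment (jetpack.notifications, setInterval, jQuery's $.getJSON).
var seen = {};
function checkTrends() {
  // Poll Twitter's trends JSON (placeholder URL) and notify on anything new.
  $.getJSON("http://search.twitter.com/trends.json", function (data) {
    (data.trends || []).forEach(function (trend) {
      if (!seen[trend.name]) {
        seen[trend.name] = true;
        jetpack.notifications.show({ title: "New trending topic", body: trend.name });
      }
    });
  });
}
setInterval(checkTrends, 3.14159 * 60 * 1000); // every 3.14159 minutes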

Posted in Web Notifications.



ShinyTouch ideas

Is this the end goal of ShinyTouch?

One potential I see for ShinyTouch is the ability for it to be embedded in a Flash application which can be embedded into a web page. Then there could be a web 2.0-style JS API for awesome canvas-tag-based creations. Or it could just be used to interact with another Flash application or game.
The reason this is more likely to be used that way is that setup is so easy it could actually convince people to try it. With other systems you really have to convince people to be dedicated enough to set up the hardware, whatever it is. At that point, the software is the easy part, and that audience is more than glad to go through the hassle of downloading, running, configuring, and maybe even compiling.
But ShinyTouch is aimed at a different, larger, and overall lazier audience (myself included in this group), so it's really important to lower the entry barrier to the lowest possible level. I think being able to just move a webcam a little bit, go to a website, and follow simple directions to use your own touchscreen is a potentially very attractive concept. It could even spawn more interest in the touchscreen and natural user interface communities.
This is really what I want the project to end up like. It seems quite practical to me. How do you feel about this?
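
Purely to illustrate the idea, here's the kind of page-level JS API I'm imagining. Nothing here exists; every name is hypothetical, and the heavy lifting (webcam capture and touch detection) would live in the embedded Flash shim, which just forwards coordinates to the page.

// Entirely hypothetical sketch; none of these names exist yet.
// The embedded Flash piece would do the camera work and emit "touch" events.
var ctx = document.getElementById("sketchpad").getContext("2d");
ShinyTouch.init({ calibration: "auto" });
ShinyTouch.on("touch", function (e) {
  // e.x / e.y would be page-space coordinates of the detected touch
  ctx.fillRect(e.x - 2, e.y - 2, 4, 4); // draw a dot where the finger landed
});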

(note that this is my second post entirely from my iPhone)

Posted in ShinyTouch, Touchscreens.



Google and Microsoft

Microsoft and Google have fundamentally different business models. Google does advertising and search, with around 98% of its revenue coming from advertising. Microsoft owns a monopoly on the operating system business. Especially with the newly announced Google Chrome OS, it really raises the question of their positions. Can Google really take Microsoft down? What kind of financial prowess, consumer brand loyalty, or user lock-in does it really need to take on Microsoft?

Google is on much shakier territory. I could leave Google just by typing "bing.com" or "yahoo.com" into the URL bar. Simple as that. Totally intuitive, something that (hopefully) nobody needs to call tech support to walk them through. Typing 8 letters into the URL bar and pressing enter is all it takes to destroy the Google empire.

However, what about Microsoft? They own a monopoly on the operating system market. How easy is it to install another operating system? Well, you need about 1-4 GB of data for a modern OS, you need to either burn it or buy it from a store (I figure there are probably tons of tech support calls at this point), and then likely reconfigure the BIOS, go through menus, fill out several forms, and select the specific partition to install the OS to. That alone is unfathomable for the great majority of users.

Google and Microsoft are very different, right down to their core business models. The Windows monopoly isn't going anywhere in the near future. Google could be gone tomorrow.

Posted in Uncategorized.



New Site!

Yay! A picture of the blog and this article. Maybe I should try making it include the picture, and it can be an infinite loop of manual recursion-y awesomeness!

You may notice that the site is now using my actual domain! And that it's also like twice as fast and 20 times more reliable! Also, the URLs are now insanely awesome rather than ?p=314159 or something random like that.
That's because I have a new host and this is a new site with WordPress 2.8.1 and stuff.
One thing I've momentarily forgotten is ads (don't you love them?), so they'll be back soon.
I also have a new theme, based on the SimpleX and Carrington Blog themes.
I went back a hundred or so posts and added categories and tags to them for ease of navigation. I created new pages for Wikify, vXJS, ShinyTouch, and MirrorTouch, and I have a few new posts.
So I'd like to see your feedback on the new site.

Posted in Meta.



Redirect referred users to new site’s respective page

So, as you may know, I have moved to a new host and need to move everything over, so I hacked together this awesome little script to redirect anyone visiting the old site to the corresponding page on the new site, as long as they are being referred from a third party.

It’s pretty short and goes at the top of index.php. Probably won’t be useful to anyone but whatever :)
// Only redirect visitors coming from a third-party referrer
// (i.e. one that doesn't contain "antimatter15").
if (isset($_SERVER['HTTP_REFERER']) && strpos($_SERVER['HTTP_REFERER'], "antimatter15") === FALSE) {
    // 301-redirect, passing the original query string (e.g. ?p=123) along
    // so the request still points at the same post.
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: ?" . $_SERVER['QUERY_STRING']);
    die();
}

Posted in Meta.



Dreamhost

Dreamhost is pretty good. Maybe my expectations are low?

So now I have a cool new web host: Dreamhost. I'll be using it for at least the next year. So far it's really great. It has everything I really wanted, which isn't much (aside from SSH access).
Sure, it's a massive overseller with quite sky-high pricing for the purpose of bailing out really insanely cheap promotions, but I haven't yet faced any problems with it. And may I mention that I'm largely a member because of one of their insanely cheap promotions? Don't get me wrong, I did do tons of research beforehand and it seemed good to begin with, but the whole July Fourth $10-for-a-whole-year-of-hosting deal is pretty irrefutably awesome. It's decently fast, and it actually supports URLs without a www (which was the reason this blog never used GoDaddy).
So what issues have I faced so far? Well, not many. And since setting up is usually the most troublesome and hard part, it's setting a good precedent in my mind. The issues I've had aren't really hosting-related.
The control panel is actually really good. I don't know why, but I really can't stand using cPanel. I haven't had a very good experience with it. It tries far too hard to make everything as if it's intended to be something like your Netvibes or iGoogle homepage. cPanel is remarkably unhelpful with issues (like the ones I've had with SSH), and it wastes a lot of screen real estate on listing basic server info that rarely changes or is useful in any way. The Dreamhost panel, by comparison, is menu-based and intuitive. No big icons that make you feel stupid after searching for a long time and realizing it's a huge icon in the center of the screen built around vague, outdated desktop metaphors. Just simple menus. Sometimes it's not very good at explaining why it switches PHP to CGI mode when you use the automatic installer. I wouldn't recommend its automated installer, though. While I've had very few problems using it to install the latest version of WordPress, the configuration is a bit lacking. Also, the wp-config.php generated by the automated installer is old and actually missing a few security features, which makes it harder (just add the missing lines in) to install bbPress later on. Besides that, installing Trac+SVN may be only a few clicks, but getting Trac to behave as expected (logging in) requires tedious amounts of command-line-fu. The installation options are seriously quite mediocre, and they use big icons too. There are only about 9 available scripts, with only a few CMSes, a gallery or two, and some eCommerce.
I can’t really do a summary, but it’s pretty good as of now.
Oh, and another cool thing is that this entire post was written on my iPhone. Yes, on a touchscreen device with a virtual keyboard. And that's not bad either. I'm typing quite fast on this device and not making too many mistakes. The reason the first post from my iPhone came so late is because WordPress for iPhone didn't work on my old web host, but it works on my new one.

Posted in Meta.



vX JS Library

vX is the world's smallest JavaScript library. It's modular, powerful, unlikely to interfere with the operation of other libraries, open source (MIT license), and cross-browser. It's designed with size first and foremost and everything else secondary. The cross-browser GET/POST AJAX function with callbacks is only 200 bytes. The closest thing is over twice the size. This extreme density is present in every function of the library.

Currently the whole framework, including Ajax, Events, URL Encoding, Animation (including Fading), Namespacing, JSON Serialization, JSON Parsing, Document onReady, HTML entity encode/decode, Array Index, Get Elements By Class Name, Object Extending, Templating, Queueing, Class Manipulation, and more, is under 3KB total uncompressed.

All functions are aliased to full reader-friendly names as well as very concise abbreviations. For example, Ajax can be accessed by _.ajax or _.X.
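
For illustration only: the _.ajax / _.X alias is real per the above, but this post doesn't document the argument order, so treat the call shape below as a guess rather than the actual vX signature.

// Hypothetical usage; the exact vX parameter order here is a guess.
_.ajax("/api/items", function (response) {
  // GET: the callback receives the response text
  document.getElementById("out").innerHTML = response;
});
_.X("/api/items", function (response) { /* same call via the short alias */ }, "key=value"); // with data, presumably a POST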

Posted in vX JS.



Project Wikify

Imagine applying the collaborative wiki content model to all static content on the internet, allowing the community to exchange ideas, update content, fix errors, parody, and improve the internet as a whole.

Project Wikify is a bookmarklet that enables full-page contentEditable and provides a way for multiple users to interact on the page. It is a platform for open, collaborative parody online and greatly lowers the barrier to critique.
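
Not the Wikify bookmarklet itself, but the basic browser trick it builds on is a one-liner you can paste into the location bar (contentEditable and designMode are standard DOM features):

javascript:document.body.contentEditable='true';document.designMode='on';void 0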

Posted in Project Wikify.


ShinyTouch Progress Update + Fresnel’s Equations

So I was looking through Wikipedia to find out if there were some magical equations to govern how it should mix the color of the background screen contents with the reflection, to make the application work better. I think Fresnel's equations fit that description. They basically give the reflectivity of a substance from information about the substance, the surrounding medium (air), and the angle of incidence.

Well, this image really is quite intimidating (maybe I'm not elitist enough). I won't even pretend to understand it, but it looks like Fresnel's equations with different values of n1 and n2 (some ratio for different temperatures). And is the plot on the right the same total internal reflection as in FTIR?

It's quite interesting, partly because the shininess (and thus the ratios used to combine the background color with the finger color for comparison) depends on the angle of the webcam to the finger, which depends on the distance (yay for trigonometry?). So the value used isn't the universal 50-50 ratio the algorithm currently uses; it depends on the variables in Fresnel's equations and the distance of the finger.
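
For reference, here is the standard textbook form of the Fresnel reflectance (nothing ShinyTouch-specific), where \theta_i is the angle of incidence and \theta_t the refraction angle from Snell's law:

R_s = \left|\frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}\right|^2,
\qquad
R_p = \left|\frac{n_1\cos\theta_t - n_2\cos\theta_i}{n_1\cos\theta_t + n_2\cos\theta_i}\right|^2,
\qquad
n_1\sin\theta_i = n_2\sin\theta_t

Both R_s and R_p climb toward 1 as the angle of incidence approaches grazing, which is why the screen looks shinier from a more extreme camera angle.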

I forgot what this was supposed to describe

Anyway, time for a graphic that doesn’t really explain anything because I lost my train of thought while trying to understand how to use Inkscape!

So here's something more descriptive. There are two hands (at least it's not 3, and why they're just lines with no fingers isn't my fault) positioned at different locations: one (hand 1) is close to the camera while the second (hand 2) is quite far away. Because of magic and trigonometry, the angle of the hand is greater when it's further away. This plugs into Fresnel's equations, which means the surface is shinier where hand 2 is touching and less shiny for hand 1. So the algorithm has to adjust for the variation (and if this works, then it might not need the complex region-specific range values).

Notice how angle 2 for hand 2 is much larger than angle 1, because hand 2 is farther from the camera than hand 1.
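
To spell out the "magic and trigonometry" (this assumes the camera sits a small height h off the plane of the screen, which is my reading of the setup): a touch a horizontal distance d away along the screen is seen at an angle of incidence

\theta_i = \arctan\!\left(\frac{d}{h}\right)

which approaches 90 degrees (grazing) as d grows. Fresnel reflectance rises steeply near grazing incidence, so the surface really is shinier where the farther hand (hand 2) touches.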

So here it's pretty ideal to have the angle be pretty extreme, right? Those graphs sure seem to imply that extreme angles are a good thing. But no, because quite interestingly, the more extreme the angle, the less accurate the measurements along the x axis become. In the image below, you can see that cam b is farther from the monitor (and thus has a greater angle from the monitor) and can discern depth far more accurately than cam a. The field of view for cam a is squished down to a very thin angle, whereas cam b's viewing area is far larger. Imagine if there were a cam c mounted directly in front of the monitor: it would suffer no compression of the x axis like a or b, and would instead have the full possible depth.

The more extreme the angle, the lower the usable x-axis resolution. So while touches get easier to detect (shinier = easier to detect), the precision of the position you can resolve declines proportionally.

So for the math portion: interestingly, the plot of the decline in the % of the total possible width looks equivalent to 1 - sin() (I think, but if I'm wrong then it could be cos(), and I suck at math anyway).

Since it's a given that angle C is *always* 90 degrees, you can think of precision as 2a (or at least that's what I think; I'm probably wrong, and if you're measuring A from the center of the monitor, then it's no longer 2a but just a). So you can figure that the % of precision decreases as 2*sin(A).

So if you graph 1 - sin(n), you get a curve that starts at 100% when the camera is positioned 0 degrees from an imaginary line perpendicular to the center of the surface, and approaches 0% as the angle reaches 90 degrees.
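
As a quick sanity check on that guess (my assumption: the angle \theta is measured from the perpendicular, as described above), a screen of width w viewed at angle \theta from its normal projects to

w_{\text{apparent}} = w\cos\theta, \qquad \text{usable fraction} = \frac{w_{\text{apparent}}}{w} = \cos\theta

which also starts at 100% at 0 degrees and falls to 0% at 90 degrees. 1 - sin(\theta) shares those endpoints, but cos(\theta) is the actual foreshortening factor, so it's probably the curve worth approximating.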

So interestingly, when you plot it, basically what happens is a trade-off between the angle the camera is at, the precision (% of the ideal maximum horizontal resolution), and the accuracy (shininess of the reflection). I had the same theory a few days ago, even before I discovered Fresnel's equations, though mine was more linear. I thought there was just a point at which the shininess values dropped. I thought the reason the monitor was shinier from the side is that it was beyond the intended viewing angle, so since there is less light in that direction, the innate shininess is more potent.

So what does this mean for the project? Well, it confirms my initial thought that this is far too complicated for me to do alone, and that makes me quite sad (partly because of the post titled Fail that was published in January). It's really far too complicated for me. Right now the algorithm I use is very approximate (and noticeably so). The formula improperly adjusts for perspective, so if you try to draw a straight line across the monitor, you end up with a curved section of a sinusoidal wave.

Trying to draw a straight line across the screen ends up looking curved because it uses a linear approximate distortion adjustment algorithm. Note that the spaces between the bars are because of the limited horizontal resolution, partly due to the angle, mostly due to hacks for how slow Python is.

So it's far more complicated than I could have imagined at first, and I already imagined it as far too complicated for me to venture into alone. But I'm trying even in this sub-ideal situation. The rest of the algorithm will, for now, also remain a set of linear approximations. I'm going to experiment with making more linear approximations of the plot of Fresnel's equations. And hopefully it'll work this time.

Posted in ShinyTouch.



ShinyTouch: Zero Setup Single Touch Surface Retrofitting Technology

So MirrorTouch is really nice: it's quite accurate, very fast, quite cheap, and it's my idea :)

But while trying to hook the script up to my webcam and looking at the live webcam feed from it pointing at my monitor (aside from the awesome infinite-mirror effect!), I discovered an effect that's quite painfully obvious but was dismissed earlier: reflection.

So a few months ago, I just sat in the dark with a few flashlights and a 6-inch square block of acrylic, and explored the multitouch technologies with them. Shining the flashlight through the side, I could replicate the FTIR (Frustrated Total Internal Reflection) effect used in almost all multitouch systems. Looking from underneath, with a sheet of paper on top and shining the flashlight up, I could experiment with Rear DI (Diffused Illumination). Shining it from the side but above the surface, I could see the principle of LLP (Laser Light Plane; actually here it's more accurately like LED-LP). MirrorTouch came from looking at it with one end tilted toward a mirror.

If you look at a mirror not directly on but at an angle, however slight, you can notice that the reflection (or shadow, or virtual image, whatever you want to call it) only comes into "contact" with the real image (the finger) when the finger is in physical contact with the reflective medium. From the diagram below, you can see the essence of the effect. When there is a touch, the reflection is to the immediate right (in this camera positioning) of the finger. If the reflection is not to the immediate right, then it is not a touch.

From the perspective of the camera

It's a very, very simple concept, but I disregarded it because real monitors aren't that shiny. But when I hooked the webcam up to the monitor, it turns out they are. I have a matte display, and it's actually really shiny from a moderately extreme angle.

So I hacked the MirrorTouch code quite a bit and I have something new: ShinyTouch (for lack of a better name). ShinyTouch takes the dream of MirrorTouch one step further by reducing setup time to practically nothing. Other than a basic unmodified webcam, it takes absolutely nothing. No mirrors, no powered light sources, no lasers, speakers, batteries, Bluetooth, Wiimotes, microphones, acrylic, switches, silicon, colored tape, vellum, paper, tape, glue, soldering, LEDs, light bulbs, bandpass filters, none of that. Just mount your camera wherever looks nice and run the software.

And for those who don’t really pay attention, this is more than finger tracking. A simple method of detecting the position of your fingers with no knowledge of the depth is not at all easy to use. The Wiimote method and the colored-tape methods are basically this.

The sheer simplicity of the hardware component is what really makes the design attractive. However, there is a cost. It's not multitouch-capable (actually it is, but the occlusion it suffers from rules out any commonly used multitouch gestures). It's slower than MirrorTouch. It doesn't work very well in super-bright environments, and it needs calibration.

Calibration is, at the current stage of development, excruciatingly complicated. However, it could be made quite simple in comparison. The current process involves painfully extracting color values by hand in an image editor of your choice. Then it needs to run, and you need to tweak the color diff ranges. Before that, you need to do a 4-click monitor calibration (which could theoretically be eliminated). It could be reduced by making the camera detect a certain color pattern from the monitor to find the corners and totally remove the 4-point click calibration. After that, the screen could ask you to click a certain box on the screen, which would be captured pre-touch and post-touch and diffed to get a finger RGB range. From that point, the user would be asked to follow a point as it moves around the monitor to gather a color reflection diff range.

The current algorithm is quite awesome. It searches the grid pixel by pixel, scanning horizontally from right to left (not left to right). Once it finds a row of 3 pixel matches for the finger color, it stops parsing, records the point, and passes it over to the reflection analysis routine. There are/were 3 ways to search for the reflection. The first one I made is a simple diff between the reflection and its surroundings. It finds the difference between the color of the point immediately to the right of the finger and the point to its top-right. The idea is that if there is no reflection, the colors should roughly match, and if they don't, you can roughly determine that it is a touch.

This was later superseded by something that calculates the average of the color of the pixel to the top-right of the finger and the color of the finger itself. The average should theoretically approximate the color of the reflection, so it diffs that averaged color against the color immediately to the right (the hypothetical reflection).

There was another algorithm, really simple, for when it's very, very bright (near a window or something) and the reflection is totally overshadowed (pardon the pun, it wasn't really intended) by the finger's shadow. So instead of looking for a reflection, it looks for a shadow, which the algorithm thinks of as just a dark patch (color below a certain threshold). That one is obviously the simplest, and not really reliable either.
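
To put the three checks in one place, here is a rough restatement in JavaScript operating on a canvas ImageData frame. The real ShinyTouch is Python, and every color, tolerance, and threshold below is made up for illustration.

// Rough sketch of the three reflection checks described above; not the
// actual ShinyTouch code (which is Python). Colors/thresholds are made up.
function channelDiff(d, i, j) {
  // Sum of per-channel differences between the pixels at byte offsets i and j.
  return Math.abs(d[i] - d[j]) + Math.abs(d[i + 1] - d[j + 1]) + Math.abs(d[i + 2] - d[j + 2]);
}
function findTouch(frame, fingerColor, fingerTol, reflectTol, shadowLevel) {
  var d = frame.data, w = frame.width, h = frame.height;
  function px(x, y) { return 4 * (y * w + x); } // byte offset of pixel (x, y)
  for (var y = 1; y < h; y++) {
    var run = 0;                                // consecutive finger-colored pixels
    for (var x = w - 2; x >= 0; x--) {          // scan right to left
      var i = px(x, y);
      var isFinger = Math.abs(d[i] - fingerColor[0]) +
                     Math.abs(d[i + 1] - fingerColor[1]) +
                     Math.abs(d[i + 2] - fingerColor[2]) < fingerTol;
      run = isFinger ? run + 1 : 0;
      if (run < 3) continue;                    // need a row of 3 matches
      var right = px(x + run, y);               // hypothetical reflection spot
      var topRight = px(x + run, y - 1);        // background just above it
      // 1. Simple diff: a reflection should differ from the background above.
      var diffCheck = channelDiff(d, right, topRight) > reflectTol;
      // 2. Average: the reflection should match the finger/background average.
      var avgCheck = Math.abs(d[right]     - (d[i]     + d[topRight])     / 2) +
                     Math.abs(d[right + 1] - (d[i + 1] + d[topRight + 1]) / 2) +
                     Math.abs(d[right + 2] - (d[i + 2] + d[topRight + 2]) / 2) < reflectTol;
      // 3. Shadow: in very bright rooms, look for a dark patch instead.
      var shadowCheck = d[right] + d[right + 1] + d[right + 2] < shadowLevel;
      // Stop at the first fingertip found, touch or not, like the original.
      return (diffCheck || avgCheck || shadowCheck) ? { x: x + run - 1, y: y } : null;
    }
  }
  return null;
}

In practice, fingerTol, reflectTol, and shadowLevel would be exactly the per-region ranges discussed below.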

One big issue is that currently the ranges are global, but in practice the ranges need to vary for individual sections of the screen. So the next feature that should be implemented is dividing the context into several sections of the screen, each with its own color ranges. It's a bit more complex than the current system but totally feasible.

So the current program can function as a crude paint program, and some sample images are at the bottom of this post.

Hai!!!!

Yayness!

:)

No.

Posted in ShinyTouch.
