somewhere to talk about random ideas and projects like everyone else


July 2009 Archive

Private Tracker Registration Checker in Python 31 July 2009

Well, I wanted a Demonoid account for no apparent reason. So I went looking for private tracker checkers and found an app called Tracker Checker 2. It’s great and all, but it doesn’t work well on Linux, or at least for me: I run it and it pops up a window for a few milliseconds and then closes. There’s nothing in the tray, but the process looks like it’s still running. So I looked at the trackers.xml file and thought it would be easy to create another one.

So I quickly hacked together a Python script that parses the XML file and checks each tracker. For some reason, Demonoid was reported as open while it was actually closed, so I made a little extension to the format.

I’m probably not actually going to use this, but I’ve made this private tracker registration checker app in Python. It uses a trackers.xml file that is compatible with the Tracker Checker 2 app. It supports a slight extension to the format: it can check that a certain phrase is *not* in a page. It’s multithreaded, uses expat for XML parsing, and uses urllib2 to download the pages.
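Here’s roughly what the core of it looks like - not the actual source, just a sketch from memory, and the trackers.xml attribute names (url, phrase, invert) are my guesses rather than Tracker Checker 2’s real schema:

import urllib2
import threading
import xml.parsers.expat

trackers = []  # list of dicts: {'name', 'url', 'phrase', 'invert'}

def start_element(name, attrs):
    # each <tracker> element carries the page URL and the phrase to test
    if name == 'tracker':
        trackers.append({
            'name': attrs.get('name', ''),
            'url': attrs['url'],
            'phrase': attrs.get('phrase', ''),
            # the extension: invert="true" means "open if phrase is ABSENT"
            'invert': attrs.get('invert', 'false') == 'true',
        })

def check(tracker):
    try:
        page = urllib2.urlopen(tracker['url']).read()
    except urllib2.URLError:
        print tracker['name'], 'unreachable'
        return
    found = tracker['phrase'] in page
    is_open = (not found) if tracker['invert'] else found
    print tracker['name'], 'OPEN' if is_open else 'closed'

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element
parser.Parse(open('trackers.xml').read(), True)

# one worker thread per tracker, as described above
threads = [threading.Thread(target=check, args=(t,)) for t in trackers]
for t in threads: t.start()
for t in threads: t.join()

Run something like this next to a trackers.xml and it prints one OPEN/closed line per tracker, which is about all a cron job needs.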

I think it would be pretty cool to integrate it with XMPP and port it to Google App Engine, and send out alerts to people when trackers are open.

It has no UI, it’s just a little command line app that could be used as a cron job and integrated with XMPP.

Download here


ShinyTouch is 1 month old 28 July 2009

As far back as my notes show, ShinyTouch is now 1 month old.

So today, I added VideoCapture support, so it will now work better on Windows.
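The backend switching probably amounts to something like this - a sketch, not ShinyTouch’s actual code, assuming the VideoCapture Device/getImage API on Windows and pygame.camera elsewhere:

import sys

if sys.platform == 'win32':
    from VideoCapture import Device
    cam = Device(devnum=0)
    grab = cam.getImage      # returns a PIL image
else:
    import pygame.camera
    pygame.camera.init()
    cam = pygame.camera.Camera(pygame.camera.list_cameras()[0])
    cam.start()
    grab = cam.get_image     # returns a pygame Surface

frame = grab()  # downstream code has to cope with both image types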

Auto-calibration has been rewritten, along with a few other small changes.


ShinyTouch Auto-Calibration 25 July 2009

A few days ago I started working on auto-calibration for ShinyTouch. Someone worked on it a bit before and gave me some PyGTK code that did fullscreen correctly, but I ended up getting too confused (especially with embedding the video and images, and threading delays). So now I’ve started from scratch (or rather, continued using pygame), and now it is inherently not fullscreen. The auto-calibration works by setting the contents of the window to one color and taking a snapshot. After that, the color is changed and a snapshot is once again recorded. After gathering a pair, the software compares them pixel by pixel. It takes multiple trials and averages them.
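In rough pygame-flavored Python, the loop would look something like this - the names and the two flash colors are my own, not the actual ShinyTouch code:

import pygame

TRIALS = 4
COLORS = ((255, 0, 0), (0, 0, 255))  # any two contrasting colors work

def snapshot(cam):
    # grab a frame as an integer array so diffs can go negative safely
    return pygame.surfarray.array3d(cam.get_image()).astype(int)

def calibrate(screen, cam):
    total = None
    for _ in range(TRIALS):
        pair = []
        for color in COLORS:
            screen.fill(color)
            pygame.display.flip()
            pygame.time.wait(200)   # let the monitor and camera settle
            pair.append(snapshot(cam))
        diff = abs(pair[0] - pair[1]).sum(axis=2)  # per-pixel change
        total = diff if total is None else total + diff
    # pixels with large averaged values are the ones showing the window
    return total / TRIALS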

Then there is a function that makes cool stuff happen. It goes right to left searching for massive groups of consecutively marked hot-pixel change areas. It searches for a general trapezoidal shape and takes the lengths, heights, and positions (a sketch of the idea follows below). And right now, typing on my iPhone, I wish the spell corrector were as awesome as the Google Wave contextual system.

So in the near future, ShinyTouch will be as simple as launching the app and hitting a calibrate button, and the computer does the rest. Maybe it might ask you to touch the screen in a special magic box, or click on your finger afterwards, but overall: zero setup and almost no calibration.

On another note, ShinyTouch is now almost a month old idea-wise. In my idea book, the references to it date back to June 28, though it may have originated a few days prior. For MirrorTouch, the beginning window is quite a bit broader: anywhere from last year to March. I seem to recall late-January experimentation with mirrors. So now, with ShinyTouch being the more promising (more accessible and radical) of the two, I have stalled development on MirrorTouch. It’s quite annoying how fast time passes. There is so much that I really want to do, but there is just not enough time.
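And the sketch promised above - a guess at the shape of the trapezoid search, purely illustrative and not the actual implementation:

def find_trapezoid(hot, width, height, min_width=10):
    # for each row, record the extents of the changed ("hot") pixels
    rows = []
    for y in range(height):
        xs = [x for x in range(width) if hot[x][y]]
        if len(xs) >= min_width:
            rows.append((y, min(xs), max(xs)))  # row, left edge, right edge
    if not rows:
        return None
    # the first and last qualifying rows give the trapezoid's corners
    (ty, tl, tr), (by, bl, br) = rows[0], rows[-1]
    return {'corners': [(tl, ty), (tr, ty), (br, by), (bl, by)],
            'height': by - ty}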


Ajax Animator + Vector Editor 24 July 2009

So yesterday, I worked a little on making a VectorEditor-based Ajax Animator. It actually took surprisingly little work. The mostly modular and abstracted design of the Ajax Animator means that only a few files needed to be changed, mainly those in the js/drawing directory. Though there were a bunch of references to Ax.canvas.renderer.removeAll() which needed to be changed.

Another cool feature in that version is the ability to have layers show up concurrently, so you can see things while drawing just as they would appear in the export.

However, it’s not ready: it’s very, very buggy, Lines and Paths aren’t tweenable yet, and it’s missing all those nice features of OnlyPaths that VectorEditor inherently lacks.

But the one really nice feature, I think, is the multi-select. You can easily select a group of things which make up some sort of shape, and move them all at once.


How I would design a touchscreen browser 24 July 2009

This is, again, an old idea of mine. I drew it on a sheet of paper maybe a year ago, but I just remembered it.

A common theme with modern browsers is maximizing screen real estate (which I don’t actually care about, because I have 2 huge monitors). But if I were to have a netbook or some otherwise technically constrained device, I would think that screen real estate is important.

My idea is pretty cool. There is only a tab bar on top. As usual, it’s allocated to the tabs, and there is a new-tab button on the side. But here, the new-tab button occupies the entire rest of the tab bar, because space is precious. Sort of like the Mozilla Fennec browser.

Forward and backward navigation is achieved by throwing (not just gentle pushing: throwing. It should be kinetic, and if you don’t throw hard enough, it just shows some text saying the equivalent of “throw harder!”).

At least in the way I browse, I don’t enter URLs often unless I’m on about:blank. So there is no URL bar. To find what URL you’re on, or to enter a new one, simply double tap on the current tab. It expands and fills the tab bar with a text box and the other tabs are condensed to icons.

Swiping down shows a drop-down for a tab with options to do things like bookmark or view source.

Throwing a tab down (which is a more violent swipe) removes the tab. Something partly inspired by the Mac OS X dock.

The new tab button could also be a menu, swiping down to reveal a menu of bookmarks to select from.

And the new tab page could be almost like a desktop, with widgets, gadgets, and whatever (Google Wave? If only I got my dev invite :’( ). Well, in my idea, the top portion of the new tab page could be the URL bar and the rest could be whatever other browsers are doing, plus maybe some widgets/gadgets, Dashboard or Plasma style.



New ShinyTouch Algorithms 21 July 2009

The algorithms are nearing completion. They now function most of the time and are fairly situation-agnostic and mostly zero-config.

A cool new thing it does now is use the HSV color space rather than RGB, comparing the hue values. It turns out that this creates the most noticeable difference between colors, and it can easily tell different colors and shapes apart.
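The hue trick is simple enough to show in a few lines (the sample colors here are made up):

import colorsys

def hue(rgb):
    r, g, b = [c / 255.0 for c in rgb]
    return colorsys.rgb_to_hsv(r, g, b)[0]  # 0.0 - 1.0

def hue_distance(c1, c2):
    d = abs(hue(c1) - hue(c2))
    return min(d, 1.0 - d)  # hue wraps around, so take the shorter arc

# a finger and the background can differ sharply in hue even when their
# RGB channel values are numerically close:
print hue_distance((200, 150, 130), (130, 150, 200))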

There is also a new shape-detection system, because all the previous ones only checked colors. This one works by taking a sample of the finger color and the background color and comparing the closeness of surrounding pixels to each. It ends when it finds a pixel closer to the background than to the finger. This one is truly zero-config.
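A sketch of that closeness test, with placeholder names of my own:

def dist(c1, c2):
    # squared distance between two (r, g, b) colors
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def finger_extent(pixels, x, y, finger_color, bg_color):
    # walk left from a pixel known to be on the finger; stop at the first
    # pixel that is closer to the background sample than the finger sample
    while x > 0:
        c = pixels[x][y]
        if dist(c, bg_color) < dist(c, finger_color):
            break  # closer to background: the finger's edge
        x -= 1
    return x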

Another one checks the similarity in hue between the alleged reflection color and the known background color. If it’s not similar enough, it is recognized as a touch. It does use some configuration, but it shouldn’t vary enough from situation to situation to require serious configuration.

There is also a new failure: generating a supposed ideal sum ratio to detect things (which is how the newly-old one worked, backwards). Though it spawned a new version that uses HSV instead, and that one works pretty well.

Also, almost all the functions now create bar-graph visualizations. Very futuristic and augmented-reality style.


The interesting order of digital communication paradigms 15 July 2009

So about two months ago I realized something quite interesting: digital communication is creating new paradigm shifts (if I may call them that without all the singularity theorists attacking me) in the order of the evolution of human communications - backwards.

A historic technological achievement, yet a recent human one

What? How is progress backwards? Well, think about it: just about the first type of digital communication was through text (computing may have existed prior to that using buttons and switches, which could arguably count, but I’m going to say those weren’t real communication methods but rather just computing methods). Text is written language, and historically, written language is quite a recent invention. I may be thinking of telegrams, or maybe I could just start with computer communication, but the point remains.

What’s next? Well, after telegrams, people invented the telephone, the logical successor. After the book was invented, the radio. Even before text was totally phased out from computers, sound was there (this part is an assumption, because you might have realized my age: I was never alive before the era of GUIs and Windows 95). But if you think about it, human written language was preceded by spoken language. You can see evidence even today, with many developing countries having illiterate people; that usually means they can’t write but can speak.

Preceding human speech are gestures and behaviors. Like recognizing that a tiger is chasing you, and running, and having others interpret the message as: “hmm… I should run too…”. These gestures, albeit historically primitive, were not captured in digital communications technology until the development of video, which happened after the development of telephones. It is now the focus of things like YouTube and Skype: quite recent advancements in technology that are just now being implemented. Gestures aren’t just being developed in the form of video but also in the cool natural user interfaces (again, natural not just because they feel natural to the user, but because they are primitive data formats with less technology, ironically implemented with technology). Multi-touch, 3D tracking, and gesture recognition are big in the news today: the Wii, Natal, Jeff Han, Surface, FTIR, DI, LaserTouch, LLP, PS3 Eye (LOL, my iPhone just autocorrected that as “pee”), and I couldn’t leave out a plug for my ideas, ShinyTouch and MirrorTouch.

Yay for a random picture of the technology singularity?

This isn’t some magical scheme to prove some sort of divine creationism. No, this is a quite logical example of how human evolution interacts with technology, a world governed by Moore’s law (or something else like Kurzweil’s law, since Gordon Moore might not want to be associated with everything suffering from exponential growth). Technology is built around the limitations of the age. One of the original and ongoing issues is bandwidth. Text uses only 8 bits per character. Sound requires several hundred kilobits per second. Video requires an exponential leap: something like 32 bits × 640 × 480 pixels × 30 frames per second. You can quite easily understand how 32 × 640 × 480 × 30 is big: it works out to 294,912,000 bits per second, and that’s quite a bit bigger than audio.

So it’s just that humans logically evolve more efficient and dense formats of communication, while digital technology just reduces bottlenecks and enables the more primitive yet more data-intensive communication systems to be implemented.

Now for what’s interesting: the future. Now that we know the pattern of communication progresses backwards, what predates gestures? Well, I think it’s obvious, but it has never really been within reach of direct exploitation. It’s never even itself been used as a communication substrate. And extrapolating the rest of the above-noted correlations, it fits as something that requires unprecedentedly large bandwidth and computing. It’s more natural than anything else because it is innate rather than learned. It’s the thing that lies below all of that: direct thought. Nothing comes more naturally to a human than thinking. We have evolved in recent (on the evolutionary timescale) years to have a massive ballooning of skull size, hopefully to make way for the grey matter that goes in it. Thinking is something people do, and it’s universal. Neurons are not French or German, American or British, Chinese or African, northern or southern, accented or racist, wise or dumb, experienced (they may be old, but they don’t gain experience with age) or a n00b. They are just simple circuits that process and store data, passing it along in a giant, organic neural network. We are all born with them and they are always roughly alike. It is the ultimate in natural and innate thinking.

OM NOM NOM on ur brainz!

And there is evidence that it is currently being seriously considered. MRI scanning resolution has greatly increased in recent years, and EEG machines are now being developed further and commercialized by companies like OCZ with their Neural Impulse Actuator, or Emotiv with their EPOC product. So it is likely the next and, as far as I can tell, the final communication paradigm. This is now the third blog post from my iPhone, but this time I did some editing on my computer.


TwitMon Twitter Trending Topics Notifier in Jetpack 15 July 2009

http://antimatter15.com/misc/jetpack/twitter/twitmon.html

Since all news nowadays comes from Twitter, I promised myself I would make something to keep up on the new tweets, since I’m no longer an active user or follower. And BTW, don’t follow me (until I say so or something).

So it’s a Jetpack add-on that queries Twitter using its JSON API every 3.14159 minutes, and uses the Jetpack notification API to display new items whenever they come up.
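The add-on itself is Jetpack JavaScript, but the polling logic is simple enough to sketch in Python; the endpoint and JSON shape here are from memory of Twitter’s 2009 search API, so treat them as assumptions:

import json
import time
import urllib2

INTERVAL = 3.14159 * 60  # the pi-minutes polling period
seen = set()

while True:
    data = json.load(urllib2.urlopen('http://search.twitter.com/trends.json'))
    for trend in data.get('trends', []):
        if trend['name'] not in seen:
            seen.add(trend['name'])
            # the real add-on calls the Jetpack notification API here
            print 'New trending topic:', trend['name']
    time.sleep(INTERVAL)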


New Site! 14 July 2009

You may notice that the site is now using my actual domain! And that it’s also like twice as fast and 20 times more reliable! Also, the URLs are now insanely awesome rather than ?p=314159 or something random like that. That’s because I have a new host, and this is a new site with WordPress 2.8.1 and stuff. One thing that’s temporarily missing is ads (don’t you love them?), so they’ll be back soon. I also have a new theme, based on the SimpleX and Carrington Blog themes. I went back a hundred or so posts and added categories and tags to them for ease of navigation. I created new pages for Wikify, vXJS, ShinyTouch and MirrorTouch, and I have a few new posts. So I’d like to see your feedback on the new site.


Redirect referred users to new site's respective page 14 July 2009

So as you may know, I have moved to a new host, and I need to move everything over. So I made this awesome little script to redirect all users visiting the old site to the respective page on the new site, as long as they are being referred from a third party.

It’s pretty short and goes at the top of index.php. It probably won’t be useful to anyone, but whatever :)

if(isset($_SERVER['HTTP_REFERER']) && strpos($_SERVER['HTTP_REFERER'], "antimatter15") === FALSE){
  // only redirect visitors who were referred by a third-party site
  header("HTTP/1.1 301 Moved Permanently");
  // forward the query string (e.g. ?p=123) to the same page on the new domain
  header("Location: http://antimatter15.com/?" . $_SERVER['QUERY_STRING']);
  die();
}

ShinyTouch ideas 13 July 2009

One potential I see for ShinyTouch is the ability for it to be embedded in a Flash application, which could in turn be embedded into a web page. Then there could be a web 2.0-style JS API for awesome canvas-tag-based creations. Or it could just be used to interact with another Flash application or game.

The reason this is more likely to be used that way is that setup is so easy it could actually convince people to do it. With other systems, you really have to convince people to be dedicated enough to set up the hardware, whatever it is; at that point, the software is the easy part, and the audience is more than glad to go through the hassle of downloading, running, configuring, and maybe even compiling. But ShinyTouch is aiming at a different, larger, and overall lazier (myself included in this group) audience. This means that it is really important to lower the entry barrier to the lowest possible level.

I think being able to just move the webcam a little bit, go to a website, and follow simple directions to use your own touchscreen is a very potentially attractive concept. It could even spawn more interest in the touchscreen and natural user interface communities. This is really what I want the project to end up like. It seems quite practical to me. How do you feel about this?

(note that this is my second post entirely from my iPhone)


Dreamhost 13 July 2009

Dreamhost is pretty good. Maybe my expectations are low?

So now I have a cool new web host: Dreamhost. I’ll be using it for at least the next year. So far it’s really great. It has everything I really wanted, which isn’t much (aside from SSH access). Sure, it’s a massive overseller with quite sky-high pricing for the purpose of bailing out really insanely cheap promotions, but I haven’t yet faced any problems with it. And may I mention that I’m largely a member because of one of their insanely cheap promotions? Don’t get me wrong, I did do tons of research prior and it seemed good to begin with, but the whole July Fourth $10-for-a-whole-year-of-hosting deal is pretty irrefutably awesome. It’s also decently fast, and it actually supports URLs without a WWW (which was the reason this blog never used GoDaddy).

So what issues have I faced so far? Well, not many. And since setting up is usually the most troublesome and hard part, it’s setting a good precedent in my mind. The issues I’ve had aren’t really hosting-related. The control panel is actually really good. I don’t know why, but I really can’t stand using cPanel; I haven’t had a very good experience with it. They try far too hard to make everything as if it’s intended to be something like your Netvibes or iGoogle homepage. And cPanel is remarkably unhelpful with issues (that I’ve had with SSH), and it wastes a lot of screen real estate on listing basic server info that rarely changes or is useful in any way. The Dreamhost panel, by comparison, is menu-based and intuitive. No big icons that make you feel stupid after looking for a long time and realizing it’s a huge icon in the center of the screen involving outdated and vague old desktop metaphors. Just simple menus. Sometimes it’s not very good at explaining why it switches PHP to CGI mode when you use the automatic installer, though.

I wouldn’t recommend its automated installer, by the way. While I’ve had very few problems with using it to install the latest version of WordPress, the configuration is a bit lacking. Also, the generated wp-config.php from the automated installer is old and actually missing a few security features, which makes it harder (just add the missing lines in) to install bbPress later on. Besides that, installing Trac+SVN may be only a few clicks, but getting Trac to behave as expected (logging in) requires tedious amounts of command-line-fu. The installation options are seriously quite mediocre, and they use big icons too. There are only like 9 available scripts, with only a few CMSes, a gallery or two, and some eCommerce.

I can’t really do a summary, but it’s pretty good as of now. Oh, and another cool thing is that this entire post was written on my iPhone. Yes, on a touchscreen device with a virtual keyboard. And that’s not bad either: I’m typing quite fast on this device and not making too many mistakes. The reason the first post from my iPhone came so late is that WordPress for iPhone didn’t work on my old web host, but it works on my new one.


Google and Microsoft 13 July 2009

Microsoft and Google have fundamentally different business models. Google does advertising and search, with around 98% of its revenue coming from advertising. Microsoft owns a monopoly on the operating system business. Especially with the newly announced Google Chrome OS, it really brings their positions into question. Can Google really take Microsoft down? What kind of financial prowess, consumer brand loyalty, or user lock-in does it really take to take on Microsoft?

Google is on much shakier territory. I could leave Google just by typing “bing.com” or “yahoo.com” into the URL bar. Simple as that. Totally intuitive, something that (hopefully) nobody needs tech support to walk them through. Typing 8 letters into the URL bar and pressing enter is all it takes to destroy the Google empire.

However, what about Microsoft? They own a monopoly on the operating system market. How easy is it to install another operating system? Well, you need about 1-4GB of data for a modern OS, you need to either burn it or buy it from a store (I figure there are probably tons of tech support calls at this point), and then likely reconfigure the BIOS, go through menus, fill out several forms, and select the specific partition to install the OS to. This is already unfathomable for a great majority of the user base.

Google is very different from Microsoft, right down to their core business models. The Windows monopoly isn’t going anywhere in the near future. Google could be gone tomorrow.


vX JS Library 13 July 2009

vX is the world’s smallest Javascript library. It’s modular, powerful, unlikely to interfere with operations of other libraries, open source (MIT license), and cross-browser. It’s designed with size first and foremost and everything else secondary. The cross-browser GET/POST AJAX function with callbacks is only 200 bytes. The closest thing is over twice the size. This extreme density is present in every function of the library.

Currently, the whole framework - including Ajax, Events, URL Encoding, Animation (including Fading), Namespacing, JSON Serialization, JSON Parsing, Document onReady, HTML entity encode/decode, Array Index, Get Elements By Class Name, Object Extending, Templating, Queueing, Class Manipulation and more - is under 3KB total, uncompressed.

All functions are aliased to full reader-friendly names as well as very concise abbreviations. For example, Ajax can be accessed with .ajax or .X.


Ajax Animator History 12 July 2009

The Ajax Animator project started in early 2007, when I was in 6th grade. It was spawned by my interest in Flash in 2005 (because I liked expression through stickfigures and animation, and it was one of the few ways to make applications or media for the Sony PSP) and my reluctance to pirate the Flash software after the trials expired. This interest brought me to the liveswifers forum, which was engaged in a piece of vaporware (both then and now) called OpenSwif. The idea for the Ajax Animator started when I was talking to a friend about a software program he used called Koolmoves. After making a forum post titled “Web 2.0 Flash IDE”, the project really started.

Development started from RichDraw. It actually started out as RichDraw with a different layout; it was for a very long time built around the HTML/CSS/JS included in the RichDraw demo. I never really modified RichDraw while using it, just built around it. I added a “timeline” (not functional at that time) which was just a dynamically generated table counting from 0 to 100. I added more stuff, looking for random cool scripts that made windows, dialogs, and color pickers.

Eventually, I found DHTML Goodies (mostly for its color picker widget), and then used its DHTML Suite to rewrite the entire application. After it was rewritten, it still was totally dysfunctional. I added support for manual frame-by-frame animation and then Flash export, thanks to FreeMovie. Around this time, I made a Google Code project for it and began using SVN.

After looking through the DHTML Suite page, I found a link to another library called ExtJS. I ported from the DHTML Suite to ExtJS 1.0, and then versioned it 1.0. I added some pretty neat features, like tweening, sharing, and more.

Later, when ExtJS 2.0 came out, I began developing the next version of the Ajax Animator. Realizing how incomplete the project was, the versioning scale was changed, and it was now developing toward 0.20. It was a full rewrite from scratch. During development, Ext 2.1 came out, so development migrated to that version. This version polished things up a lot, with newer development paradigms and a new vector drawing editor called OnlyPaths, contributed by josep_ssv. It had a new cross-platform JSON-based serialized graphics format and supported export to many different formats. One feature that was never ported to 0.2 was support for user accounts and server-side storage.


ShinyTouch Progress Update + Fresnel's Equations 12 July 2009

So I was looking through Wikipedia to find out whether there were some magical equations governing how the application should mix the color of the background screen contents with the reflection, to make it work better. I think Fresnel’s equations fit that description. They basically give the reflectiveness of a substance from information about the substance, the surrounding substance (air), and the angle of incidence.

Well, this image really is quite intimidating. I won’t even pretend to understand it, but it looks like Fresnel’s equations with different values of n1 and n2 (some ratio of materials, I think). And is the plot on the right the same total internal reflection as in FTIR?
A really intimidating image from none other than Wikipedia

It’s quite interesting, partly because the shininess (and thus the ratios used to combine the background color with the finger color for comparison) depends on the angle of the webcam to the finger, which depends on the distance (yay for trigonometry?). So the value used isn’t the universal 50-50 ratio the algorithm currently uses; it’s dependent on the variables of Fresnel’s equations and the distance of the finger.
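For reference, these are Fresnel’s equations for the reflectance of s- and p-polarized light (standard optics, nothing ShinyTouch-specific), with Snell’s law relating the angle of incidence to the angle of transmission:

R_s = \left|\frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}\right|^2,
\qquad
R_p = \left|\frac{n_1\cos\theta_t - n_2\cos\theta_i}{n_1\cos\theta_t + n_2\cos\theta_i}\right|^2,
\qquad
n_1\sin\theta_i = n_2\sin\theta_t

As the angle of incidence approaches grazing, both terms head toward 1, which is exactly the “shinier from the side” behavior this post revolves around.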

I forgot what this was supposed to describe

Anyway, time for a graphic that doesn’t really explain anything, because I lost my train of thought while trying to understand how to use Inkscape!

So here’s something more descriptive. There are two hands (at least it’s not 3, and why they’re just lines with no fingers isn’t my fault) positioned at different locations: one (hand 1) is close to the camera, while the second (hand 2) is quite far away. Because of magic and trigonometry, the angle to the hand is greater when it’s farther away. This plugs into Fresnel’s equations, which means the surface is shinier where hand 2 is touching and less shiny for hand 1. So the algorithm has to adjust for the variation (and if this works, it might not need the complex region-specific range values).

Notice how angle 2 for hand 2 is much larger than angle 1.
Yay For Trig!
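To make that concrete, here’s a sketch of what the adjustment might look like. The Fresnel math is standard, but the refractive index of the screen and the way the angle is estimated from distance are placeholders of mine, not measured values:

import math

def reflectance(theta_i, n1=1.0, n2=1.5):      # air -> screen surface (assumed index)
    # Snell's law gives the transmission angle
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)
    rs = ((n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) /
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))) ** 2
    rp = ((n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) /
          (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))) ** 2
    return (rs + rp) / 2                       # unpolarized light

def expected_reflection(finger, background, distance, cam_height):
    theta = math.atan2(distance, cam_height)   # farther finger -> bigger angle
    r = reflectance(theta)
    # shinier surface means the reflection looks more like the finger,
    # replacing the fixed 50-50 blend currently in the algorithm
    return tuple(r * f + (1 - r) * b for f, b in zip(finger, background))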

So here it’s pretty ideal to have the angle be pretty extreme, right? Those graphs sure seem to imply that extreme angles are a good thing. But no, because quite interestingly, the more extreme the angle is, the less accurate the measurements along the x axis become. So in the image below, you can see that cam b is farther from the monitor (and thus has a greater angle from the monitor) and can discern depth far more accurately than cam a. The field of view for cam a is squished down to that very thin angle, whereas cam b’s viewing area is far larger. Imagine if there were a cam c mounted directly in front of the monitor: it would suffer from none of the x-axis compression of a or b, and instead have the full possible depth.

The more extreme the angle is, the lower the resolution of the usable x axis becomes. So while you get better accuracy (shinier = easier to detect), the precision you can resolve declines proportionally.
Extreme angles have lower precision

So for the math portion: interestingly, the plot of the decline in percent of the total possible width is equivalent to 1 - sin(θ) (I think, but if I’m wrong then it could be cos(θ), and I suck at math anyway).

More Trig!

So if you graph out 1 - sin(θ), you get a curve that starts at 100% when the camera is positioned 0 degrees from an imaginary line perpendicular to the center of the surface, and approaches 0% as the angle reaches 90 degrees. (The standard foreshortening factor, cos(θ), has exactly those endpoints too, so that is probably the exact curve.)
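Concretely, the visible width of a screen of physical width W, viewed from an angle θ off its normal, is just its projection:

w(\theta) = W\cos\theta, \qquad w(0^\circ) = W, \qquad w(90^\circ) = 0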

So interestingly, when you plot it, what happens is basically a trade-off between the angle the camera is at, the precision (percent of the ideal maximum horizontal resolution), and the accuracy (shininess of the reflection). I had the same theory a few days ago, even before I discovered Fresnel’s equations, though mine was more linear: I thought there was just a point at which the shininess values dropped. I thought the reason the monitor is shinier from the side was that it was beyond the intended viewing angle, so since there is less light in that direction, the innate shininess is more potent.

So what does this mean for the project? Well, it confirms my initial thoughts that this is far too complicated for me to do alone, and that makes me quite sad (partly because of the post titled Fail that was published in January). It’s really far too complicated for me. Right now the algorithm I use is very approximate (and noticeably so). The formula improperly adjusts for perspective, so if you try to draw a straight line across the monitor, you end up with a curved section of a sinusoidal wave.

Trying to draw a straight line across the screen ends up looking curved because it uses a linear approximate distortion adjustment algorithm. Note that the spaces between the bars are because of the limited horizontal resolution, partly due to the angle, mostly due to hacks for how slow Python is.
Issues with algorithm

So it’s far more complicated than I could have imagined at first, and I had already imagined it as far too complicated to venture into alone. But I’m trying, even in this sub-ideal situation. The rest of the algorithm will, for now, also remain a linear approximation. I’m going to experiment with making more linear approximations of the plot of Fresnel’s equations, and hopefully it’ll work this time.


ShinyTouch Zero Setup Single Touch Surface Retrofitting Technology 11 July 2009

So MirrorTouch is really nice: it’s quite accurate, very fast, quite cheap, and it’s my idea :)

But while trying to hook the script up to my webcam and looking at the live feed pointing at my monitor (aside from the awesome infinite-mirror effect!), I discovered an effect that’s quite painfully obvious but was dismissed earlier: reflection.

So a few months ago, I just sat in the dark with a few flashlights and a 6-inch square block of acrylic, and explored multitouch technologies with them. Shining the flashlight through the side, I could replicate the FTIR (Frustrated Total Internal Reflection) effect used in almost all multitouch systems. Looking from underneath, with a sheet of paper on top and the flashlight shining up, I could experiment with Rear DI (Diffused Illumination). Shining it from the side but above the surface, I could see the principle of LLP (Laser Light Plane; actually here it’s more accurately like LED-LP). MirrorTouch came from looking at it with one end tilted toward a mirror.

If you look at a mirror not directly on, but at an angle, however slight, you can notice that the reflection (or shadow, or virtual image, whatever you want to call it) only comes in “contact” with the real image (the finger) when the finger is in physical contact with the reflective medium. From the diagram below, you can see the essence of the effect. When there is a touch, the reflection is to the immediate right (in this camera positioning) of the finger. If the reflection is not to the immediate right, then it is not a touch.

From the perspective of the camera
ShinyTouch Diagram

It’s a very, very simple concept, but I disregarded it because real monitors aren’t that shiny. But when I hooked the webcam up to the monitor, it turned out mine is. I have a matte display, and it’s actually really shiny when viewed from a moderately extreme angle.

So I hacked the MirrorTouch code quite a bit, and now I have something new: ShinyTouch (for lack of a better name). ShinyTouch takes the dream of MirrorTouch one step further by reducing setup time to practically nothing. Other than a basic unmodified webcam, it takes absolutely nothing. No mirrors, no powered light sources, no lasers, speakers, batteries, bluetooth, wiimotes, microphones, acrylic, switches, silicon, colored tape, vellum, paper, tape, glue, soldering, LEDs, light bulbs, bandpass filters, none of that. Just mount your camera wherever looks nice and run the software.

And for those who don’t really pay attention: this is more than finger tracking. A simple method of detecting the position of your fingers with no knowledge of depth is not at all easy to use. The Wiimote method and the colored-tape methods are basically that.

The sheer simplicity of the hardware component is what really makes the design attractive. However, there is a cost. It’s not multitouch-capable (actually it is, but the occlusion it suffers from denies the ability to use most common multitouch gestures). It’s slower than MirrorTouch. It doesn’t work very well in super bright environments, and it needs calibration.

Calibration, at the current stage of development, is excruciatingly complicated, though it could be simplified substantially. The current process involves painful manual color-value extraction in an image editor of your choice. Then it needs to run, and you need to fix the color diff ranges. Before that, you need to do a 4-click monitor calibration (which could theoretically be eliminated). That could be reduced by making the camera detect a certain color pattern on the monitor to find the corners, totally removing the 4-point clicking calibration. After that, the screen could ask you to click a certain box on the screen, which would be captured pre-touch and post-touch and diff’d to get a finger RGB range. From that point, the user would be asked to follow a point as it moves around the monitor to gather a color-reflection diff range.

The current algorithm is quite awesome. It searches the grid pixel by pixel, scanning horizontally from right to left (not left to right). Once it finds a row of 3 pixel matches for the finger color, it stops parsing, records the point, and passes it over to the reflection analysis routine. There are (or were) 3 ways to search for the reflection. The first one I made is a simple diff between the reflection and the surroundings: it finds the difference between the color of the point immediately to the right of the finger and the point to its top-right. The idea is that if there is no reflection, the colors should roughly match, and if they don’t, you can roughly determine that it is a touch.

This was later superseded by something that calculates the average of the color of the pixel on the top-right of the finger and the color of the finger itself. The average should theoretically equate to the color of the reflection, so it diffs the averaged color with the color immediately to the right (the hypothetical reflection) and compares them.
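Stitched together, the scan plus those two reflection checks look something like this - a sketch with made-up thresholds and helper names, not the actual ShinyTouch source:

def matches(c1, c2, tol=40):
    # true if two (r, g, b) colors are within tol on every channel
    return all(abs(a - b) <= tol for a, b in zip(c1, c2))

def average(c1, c2):
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

def find_touch(pixels, width, height, finger):
    # scan each row right-to-left for 3 consecutive finger-colored pixels
    for y in range(1, height):
        for x in range(width - 4, -1, -1):
            if all(matches(pixels[x + i][y], finger) for i in range(3)):
                right = pixels[x + 3][y]          # hypothetical reflection
                top_right = pixels[x + 3][y - 1]  # sample of the surroundings
                # method 1: no reflection means right ~ top_right (no touch)
                # method 2: a real reflection should look like the average
                # of the finger color and the surroundings
                if not matches(right, top_right) and \
                   matches(right, average(finger, top_right)):
                    return (x + 2, y)             # touch at the finger's edge
                return None                       # finger seen, but hovering
    return None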

There was another algorithm, really simple, for when it’s very, very bright (near a window or something) and the reflection is totally overshadowed (pardon the pun, it wasn’t really intended) by the finger’s shadow. So instead of looking for a reflection, it looks for a shadow, which the algorithm thinks of as just a dark patch (color below a certain threshold). That one is obviously the simplest, and not really reliable either.

One big issue is that currently the ranges are global, but in practice they need to vary across individual sections of the screen. So the next feature to implement is dividing the screen into several sections, each with its own color ranges. It’s a bit more complex than the current system, but totally feasible.

So the current program can function as a crude paint program, and some sample images are at the bottom of this post.

Hai!!!!

Yayness!

:)

No.