this title probably isn't very original
As far as my notes show, ShinyTouch is now one month old.
So today I added VideoCapture support, which means it should now work better on Windows.
Auto calibration has been rewritten, along with a few other small changes.
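For anyone curious, VideoCapture is the Python wrapper around DirectShow that makes webcam grabbing work on Windows. A minimal sketch of using it (not ShinyTouch's actual code):

```python
from VideoCapture import Device  # Windows-only wrapper around DirectShow

cam = Device()                   # open the first capture device
frame = cam.getImage()           # returns a PIL Image, handy for pixel math
frame.save('snapshot.jpg')
```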
Posted in ShinyTouch, Touchscreens.
– July 28, 2009
So I had this idea to make a lossy text compression system. For those who don't know the difference: lossless compression reproduces the original exactly, while lossy compression throws away some information to save space. Well, the main rationale is Wikipedia. I have a jailbroken iPhone, and one of my favorite uses for it is Wiki2Touch (which is now defunct, with the development blog gone and the Google Code project deleted). Wikipedia grows by close to a gigabyte a year. Extrapolating that growth, my iPhone 2G with 8GB of space will soon run out of room for Wikipedia. Right now the dump is almost 6GB (bzip2 compression). Soon it will approach 8GB minus (200MB (root partition) + 100MB (music) + 0MB (video) + 1GB (apps)).
After googling the concept, it doesn't seem very original. Even the idea of compressing Wikipedia doesn't seem original! But the current world-record-breaking systems have some limitations: they are very memory-intensive (won't work on an iPhone!) and also very slow (the Hitchhiker's Guide shouldn't be slow!).
So here are some somewhat inspired ideas for this lossy Wikipedia encoding. Since Wikipedia text is fairly normal prose, capitalization is largely unnecessary and can be restored automatically later on. The encoder can look words up in a dictionary and store their indexes (or even indexes of short phrases). Words that are not in the dictionary can be searched in a larger dictionary/thesaurus and replaced with an appropriate synonym. After that, the data could be compressed with bzip2 or 7zip.
Decoding each section with bzip2 or something is quite fast, and so is looking up words by dictionary index (which can be made to use very little memory; I did exactly that in a spell checker a few days ago).
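To make the scheme concrete, here's a minimal sketch of the word-index idea. The tiny word list and synonym table are made up for illustration; a real version would use a full dictionary and thesaurus:

```python
import bz2

# Toy dictionary and synonym table, purely illustrative.
words = ['the', 'rate', 'of', 'growth', 'is', 'close', 'to', 'a', 'gigabyte', 'year']
index = dict((w, i) for i, w in enumerate(words))
synonyms = {'speed': 'rate', 'near': 'close'}

def encode(text):
    out = []
    for word in text.lower().split():      # capitalization is thrown away (lossy)
        word = synonyms.get(word, word)    # swap unknown words for known synonyms
        if word in index:
            out.append(str(index[word]))   # store the dictionary index, not the word
    return bz2.compress(' '.join(out).encode())  # generic compression on top

def decode(blob):
    indexes = bz2.decompress(blob).decode().split()
    return ' '.join(words[int(i)] for i in indexes)

blob = encode('The rate of growth is near to a gigabyte a year')
print(decode(blob))  # capitalization (and the odd word) is lost, meaning survives
```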
Posted in Uncategorized.
– July 26, 2009
A few days ago I started working on auto calibration for ShinyTouch. Someone worked on it a bit before and gave me some PyGTK code that did fullscreen correctly, but I ended up getting too confused (especially with embedding the video and images and the threading delays). So now I've started from scratch (or rather, continued with pygame), and it is inherently not fullscreen. The auto calibration works by setting the contents of the window to one color and taking a snapshot. After that, the color is changed and another snapshot is recorded. After gathering a pair, the software compares them pixel by pixel. It runs multiple trials and takes the average.
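Here's roughly what that loop might look like in pygame. grab_camera_frame() is a stand-in for whatever capture backend is in use, and the averaging over multiple trials is left out for brevity:

```python
import pygame

def capture_change_map(screen, color_a, color_b, threshold=30):
    shots = []
    for color in (color_a, color_b):
        screen.fill(color)                 # flash the window with one color
        pygame.display.flip()
        pygame.time.wait(200)              # give the camera time to catch up
        shots.append(grab_camera_frame())  # hypothetical: a PIL image of the scene
    a, b = shots
    w, h = a.size
    # Mark pixels where the screen's color change showed up on camera.
    return [[abs(sum(a.getpixel((x, y))[:3]) - sum(b.getpixel((x, y))[:3])) > threshold
             for x in range(w)] for y in range(h)]
```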
Then there is a function that makes cool stuff happen. It goes right to left searching for large groups of consecutively marked hot pixels (areas of change). It searches for a general trapezoidal shape and takes the lengths, heights, and positions. (And right now, typing this on my iPhone, I wish the spell corrector were as awesome as the Google Wave contextual system.)
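And a guess at what that run-finding step might look like, scanning each row right to left for the longest run of changed pixels; the per-row runs together trace out the trapezoid:

```python
def longest_runs(hot):
    """For each row of the change map, find the longest run of hot pixels."""
    runs = []
    for y, row in enumerate(hot):
        best, current, left = 0, 0, None
        for x in range(len(row) - 1, -1, -1):  # scan right to left
            current = current + 1 if row[x] else 0
            if current > best:
                best, left = current, x        # x is the run's left edge so far
        if best:
            runs.append((y, left, best))       # row, left edge, run length
    return runs
```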
So in the near future, using ShinyTouch will be as simple as launching the app and hitting a calibrate button, and the computer does the rest. It might ask you to touch the screen in a special magic box, or to click on your finger afterwards, but overall: zero setup and almost no calibration.
On another note, ShinyTouch is now almost a month old idea-wise. In my idea book, the references to it date back to June 28, though it may have originated a few days prior. For MirrorTouch, the window for its beginning is quite a bit broader: anywhere from last year to March. I seem to recall experimenting with mirrors in late January.
So now, with ShinyTouch being the more promising of the two (more accessible and more radical), I have stalled development on MirrorTouch. It's quite annoying how fast time passes. There is so much that I really want to do, but there is just not enough time.
Posted in ShinyTouch, Touchscreens.
– July 25, 2009
So yesterday I worked a little on making a VectorEditor-based Ajax Animator. It actually took surprisingly little work: the mostly modular and abstracted design of the Ajax Animator means that only a few files need to be changed, mainly those in the js/drawing directory. Though there were a bunch of references to Ax.canvas.renderer.removeAll() that needed to be changed.
Another cool feature in that version is the ability to have layers show up concurrently, so while drawing you can see things just as they would appear in the export.
However, it's not ready: it's very, very buggy, lines and paths aren't tweenable yet, and it's missing all those nice features of OnlyPaths that VectorEditor inherently lacks.
But one really nice feature, I think, is multi-select. You can easily select a group of things that together make up some shape and move them all at once.
Posted in Ajax Animator.
– July 24, 2009
This is, again, an old idea of mine. I drew it on a sheet of paper maybe a year ago, but I just remembered it.
A common theme with modern browsers is maximizing screen real estate (which I don't actually care about, because I have two huge monitors). But if I had a netbook or some other technically constrained device, I imagine screen real estate would be important.
My idea is pretty cool: there is only a tab bar on top. As usual, it's allocated to the tabs, with a new-tab button on the side. But here, the new-tab button occupies the entire rest of the tab bar, because space is precious. Sort of like the Mozilla Fennec browser.
Forward and backward navigation is achieved by throwing tabs (not gentle pushing, throwing; it should be kinetic, and if you don't throw hard enough, it just shows some text saying the equivalent of "throw harder!").
At least in the way I browse, I don't enter URLs often unless I'm on about:blank, so there is no URL bar. To find out what URL you're on, or to enter a new one, simply double-tap the current tab. It expands to fill the tab bar with a text box, and the other tabs are condensed to icons.
Swiping down shows a drop-down for a tab with options to do things like bookmark or view source.
Throwing a tab down (a more violent swipe) removes it, something partly inspired by the Mac OS X Dock.
The new-tab button could also act as a menu: swiping down on it reveals a list of bookmarks to select from.
And the new-tab page could be almost like a desktop, with widgets, gadgets, and whatever (Google Wave? If only I got my dev invite :'( ). Well, in my idea, the top portion of the new-tab page could be the URL bar, and the rest could be whatever other browsers are doing, plus maybe some widgets/gadgets, Dashboard or Plasma style.
Posted in Design, Touchscreens.
– July 24, 2009
http://jsvectoreditor.googlecode.com/
There's the project page! If you want to try it out, the link is below. It's only been tested on Firefox 3.0 so far, so please comment if there are any issues.
Posted in VectorEditor.
– July 22, 2009
I've decided to experiment with improving the original VectorEditor again. Work on it initially stopped because of the limitations of Raphael, but since Raphael has matured a lot, I decided to try making it again.
Part of the reason is that I want to make a Google Wave Ajax Animator Mini: the Ajax Animator with a minimal UI, replacing the user management and history features with those inherent to Google Wave. I think better browser support is quite important for that to happen, so it's a good idea to rethink the VectorEditor project.
It's also inspired a bit by the svg-edit project, which doesn't seem to actively pursue support for IE and has built itself around SVG (as OnlyPaths did).
It's mostly going to be for the mini Ajax Animator.
Posted in Ajax Animator.
– July 22, 2009
The algorithms are nearing completion. They now function most of the time and are fairly situation-agnostic and mostly zero-config.
A cool new thing it does now is use the HSV color space rather than RGB and compare hue values. It turns out this produces the most noticeable difference between colors, and it can easily tell different colors and shapes apart.
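In Python this is just the standard colorsys module. A sketch of what comparing hues might look like (hue is circular, so take the shorter arc); not ShinyTouch's exact code:

```python
import colorsys

def hue_distance(rgb1, rgb2):
    # colorsys wants channels in 0..1 and returns hue in 0..1
    h1 = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb1])[0]
    h2 = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb2])[0]
    d = abs(h1 - h2)
    return min(d, 1.0 - d)  # hue wraps around the color wheel

print(hue_distance((255, 0, 0), (250, 20, 10)))  # tiny: both are reds
print(hue_distance((255, 0, 0), (0, 0, 255)))    # large: red vs blue
```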
There is also a new shape-detection system, because all the previous ones only checked colors. This one works by taking a sample of the finger color and the background color and comparing how close each surrounding pixel is to each of them. It stops when it finds a pixel closer to the background than to the finger. This one is truly zero-config.
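A sketch of that edge walk, where color_at() is a hypothetical pixel lookup and closeness is just squared RGB distance:

```python
def color_dist(c1, c2):
    return sum((a - b) ** 2 for a, b in zip(c1, c2))  # squared RGB distance

def find_edge(x, y, finger, bg, step=1, limit=200):
    """Walk right from (x, y) until a pixel looks more like bg than finger."""
    for i in range(limit):
        c = color_at(x + i * step, y)   # hypothetical pixel lookup
        if color_dist(c, bg) < color_dist(c, finger):
            return x + i * step         # first pixel closer to the background
    return None
```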
Another check compares the hue of the alleged reflection color with the known background color; if they're not similar enough, it's recognized as a touch. It does use some configuration, but the values shouldn't vary enough from situation to situation to require serious tweaking.
There is also a new failure: generating a supposedly ideal sum ratio to detect things (which is how the newly old one worked, backwards). It did, however, spawn a new version that uses HSV instead, and that one works pretty well.
Also, almost all the functions now create bar-graph visualizations. Very futuristic and augmented-reality style.
Posted in ShinyTouch, Touchscreens.
– July 21, 2009
So the key innovation in ShinyTouch is using reflections to determine touches. (Sure, there's some slight innovation in scanning the pixels a certain way, but that's nothing in comparison to the rest.) Right now, it checks the colors of only three strategically chosen pixels and combines them using ratios estimated from Fresnel's equations. I must be doing something wrong, because it's not reliable at all.
So what is reliable and practical? I think something that recognizes the general characteristics of the reflection shape-wise (like looking like the actual image) could work. I don't know.
So I have a little notebook with tons of random sketches of ideas, equations, and tiny pseudo-code algorithms to theoretically improve the detection rate. One is basically analyzing more than the single pixel it does now, by "measuring" the width of the reflection and matching it against the image (in all cases they should be equivalent). The problem is that the width is hard to measure, so I have an algorithm that I think may resolve it by comparing the value of the pixel in question against the known reflection point and the known non-reflection.
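One way to read that sketch: a boundary pixel's value is a blend of the reflection and the background, so its position between the two known values gives a sub-pixel estimate of how much the reflection covers it. The per-channel values below are made up for illustration:

```python
def fractional_coverage(pixel, reflection, background):
    # How far is this pixel's value between pure background and pure reflection?
    if reflection == background:
        return 0.0
    frac = (pixel - background) / float(reflection - background)
    return max(0.0, min(1.0, frac))  # clamp to [0, 1]

# A pixel at 140 between background 100 and reflection 180 is about half covered.
print(fractional_coverage(140, 180, 100))  # 0.5
```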
But it’s nowhere near complete, and ideas are welcome.
Posted in ShinyTouch, Touchscreens.
– July 18, 2009
So about two months ago I realized something quite interesting: digital communication is creating new paradigm shifts (if I may call them that without all the singularity theorists attacking me) in the order of the evolution of human communication, backwards.
What? How is progress backwards? Well, think about it: just about the first type of digital communication was text. (Computing may have existed before that, using buttons and switches, which could arguably count, but I'm going to say those weren't real communication methods, just computing methods.) Text is written language, and historically, written language is quite a recent invention. I may be thinking of telegrams, or maybe I could just start with computer communication, but the point remains.
What's next? Well, after telegrams, people invented the telephone, the logical successor. After the book came the radio. Even before text was phased out of computers, sound was there (this part is an assumption; as you might have figured out from my age, I was never alive before the era of GUIs and Windows 95). But if you think about it, human written language was preceded by spoken language. You can see evidence even today in many developing countries with illiterate populations: that usually means people who can't write but can speak.
Preceding human speech are gestures and behaviors, like recognizing that a tiger is chasing you, running, and having others interpret the message as "hmm, I should run too...". These gestures, albeit historically primitive, were not captured in digital communication technology until the development of video, which happened after the development of the telephone. Video is now the focus of things like YouTube and Skype: quite recent advancements in technology that are just now being implemented. Gestures aren't just being developed in the form of video but also in the cool natural user interfaces (again, natural not just because they feel natural to the user, but because they are primitive data formats requiring less technology, ironically implemented with more technology). Multi-touch, 3D tracking, and gesture recognition are big in the news today: the Wii, Natal, Jeff Han, Surface, FTIR, DI, LaserTouch, LLP, the PS3 Eye (LOL, my iPhone just autocorrected that as "pee"), and I couldn't leave out a plug for my own ideas, ShinyTouch and MirrorTouch.
This isn't some magical scheme to prove some sort of divine creationism. No, this is a quite logical example of how human evolution interacts with technology, a world governed by Moore's law (or something else, like Kurzweil's law, since Gordon Moore might not want to be associated with everything suffering from exponential growth). Technology is built around the limitations of its age. One of the original and ongoing issues is bandwidth. Text uses only 8 bits per character. Sound requires several hundred kilobits per second. Video requires an exponential leap: something like 32 bits times 640 times 480 pixels times 30 frames per second. You can quite easily understand how 32 × 640 × 480 × 30 is big; it works out to 294,912,000 bits per second, which is quite a bit bigger than audio. So humans logically evolve ever more efficient and dense formats of communication, while digital technology just reduces bottlenecks and enables the more primitive yet more data-intensive communication systems to be implemented.
Now for the interesting part: the future. Now that we know the pattern of communication progresses backwards, what predates gestures? Well, I think it's obvious, but it has never really been within reach of direct exploitation. It's never even been used as a communication substrate itself. And extrapolating the correlations noted above, it fits: it's something that requires unprecedentedly large bandwidth and computing power. It's more natural than anything else because it is innate rather than learned. It's the thing that lies beneath all the rest: direct thought.
Nothing comes more naturally to a human than thinking. We have evolved, in recent years on the evolutionary timescale, a massive ballooning of skull size, presumably to make way for the grey matter that goes in it. Thinking is something people do, and it's universal. Neurons are not French or German, American or British, Chinese or African, northern or southern, accented or racist, wise or dumb, experienced (they may be old, but they don't gain experience with age) or n00bs. They are just simple circuits that process and store data, passing it along in a giant, organic neural network. We are all born with them, and they are always roughly alike. It is the ultimate in natural and innate thinking.
And there is evidence that it is currently being seriously considered. MRI scanning resolution has greatly increased in recent years, and EEG machines are being developed further and commercialized by companies like OCZ, with their Neural Impulse Actuator, and Emotiv, with their EPOC product. So it is likely the next, and as far as I can tell the final, communication paradigm.
This is now the third blog post from my iPhone, but this time I did some editing on my computer.
Posted in Uncategorized.
– July 15, 2009