

I’m A WordPress Idiot

So in ScribeFire, I pressed “Save as Page” rather than “Save as Post” :(

Posted in Meta.



I’m A Linux Idiot

So I needed to install phpMyAdmin, and having had those epiphanies about how simple it is to install crap on Debian/Ubuntu, I typed in sudo apt-get install phpmyadmin and it worked fine. It mapped out all the dependencies, installed them all, and then popped up a nice user-friendly config window where you select which web server to install it for.

I had Apache2, so I just hit enter. Opened up my browser and went to /phpmyadmin. Hmm, 404? Tried /phpMyAdmin, same thing.

So I googled it, and there were all these success stories. I went and tried sudo dpkg-reconfigure phpmyadmin, and that same window popped up. This time I tabbed over to the OK button and pressed enter. Checked again, still broken?

So I found this guide, and it turns out you have to press space to actually select the server -_-

Posted in Meta.



Calculate Pi!

Press this button and calculate pi!

Please note that this is a distributed effort and there is no simple way to get the final value as of yet.

Posted in Javascript Distributed Computing.



Action Limit Exceeded


What happened?

You have performed the requested action too many times in a 24-hour time period. Or, you have performed the requested action too many times since the creation of your account.

We place limits on the number of actions that can be performed by each user in order to reduce the potential for abuse. We feel that we have set these limits high enough that legitimate use will very rarely reach them. Without these limits, a few abusive users could degrade the quality of this site for everyone.

Your options:

  • Wait 24 hours and then try this action again.
  • Ask another member of your project to perform the action for you.
  • Contact us for further assistance.


Scary.

Posted in Ajax Animator.



Distributed Computing Take III

I dunno why, but I’m revisiting this. I was trawling across Wikipedia one day, and I got to the article about Pi. I actually tried distributing Pi a while ago, before I did the hashes. But I never ended up implementing it because it didn’t seem feasible, as all the algorithms I encountered (or tried porting) required lots of memory, something very hard to distribute for this scenario. But this time, I found these. Looking through them, and googling in the process, I found http://www.omegacoder.com/?p=91, and ported it over to Javascript. It was relatively slow compared to the SuperPi implementation in Javascript, but it was easily distributed.

One problem, though, is that it gets slower every iteration (to find the next block of digits). Finding .141592653 will be roughly 20ms faster than finding the next 9 digits (it processes in blocks of 9). Not only would it take longer, but it occupied 100% of the CPU, and it would pop up that ever-annoying “This script may make your computer non-responsive” window. So I implemented this pattern to make it not lock up any browser other than Chrome (and possibly WebKit Nightly).
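The pattern in question boils down to yielding back to the browser between small slices of work. A minimal sketch of that kind of chunking, with setTimeout (the function names here are mine, not the original code’s):

```javascript
// Do a small slice of work, then yield to the event loop so the
// "unresponsive script" dialog never fires and the UI stays usable.
function computeInChunks(step, isDone, onDone) {
  function tick() {
    var start = Date.now();
    // Work for at most ~50ms before yielding.
    while (!isDone() && Date.now() - start < 50) {
      step();
    }
    if (isDone()) {
      onDone();
    } else {
      setTimeout(tick, 0); // give the browser a chance to repaint
    }
  }
  tick();
}
```

The tradeoff is throughput: you trade a bit of raw speed (and that 100% CPU) for a page that never appears frozen.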

Still, it would take up 100% of the CPU. I ran it overnight and got to digit 17,000.

Eventually, it would take about half an hour for a single iteration (at the 20,000th digit). With web-based distributed computing, I can’t rely on much more time than what Google Analytics reports to be 00:02:24 (my Average Time on Site). And that’s half an hour on a 3GHz Intel Core 2 Duo (it’s dual core, but the script is single-threaded).

I then split the function into smaller parts: the main function was split up, and the loops were divided across users. Now it can scale easily, uses virtually no visible CPU, and fits well into that 2-minute timeframe.
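The splitting idea can be sketched roughly like this (this is my own illustration, not the actual port — I use the Leibniz series here just because its terms are independent, so a contiguous slice of the loop can be handed to each visitor as a “job”):

```javascript
// Divide a long loop into "jobs": each visitor gets one contiguous
// slice of the iteration range and returns a partial result.
function makeJobs(totalIterations, jobCount) {
  var jobs = [];
  var size = Math.ceil(totalIterations / jobCount);
  for (var i = 0; i < totalIterations; i += size) {
    jobs.push({ start: i, end: Math.min(i + size, totalIterations) });
  }
  return jobs;
}

// A client runs only its slice -- here, a partial sum of the Leibniz
// series for pi/4. The server sums the partials and multiplies by 4.
function runJob(job) {
  var partial = 0;
  for (var k = job.start; k < job.end; k++) {
    partial += (k % 2 === 0 ? 1 : -1) / (2 * k + 1);
  }
  return partial;
}
```

Because each slice finishes in well under a visit’s worth of time, no single user ever sees the half-hour iterations.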

Try it out here, but don’t stay too long, because I only set up 500 “jobs”.

Posted in Javascript Distributed Computing.



3999 Spam

Who’s the lucky spammer who’ll post the 4000th spam?

Posted in Meta.



Wikify Format 2.0

2.0 isn’t an actual version number, but I’ve added the new one.

It’s basically

Parent ID (or _body) > Element tag name : Parent Index > format type = patch/innerhtml data

or

_body>div:0>span:1>d=hello!

The formats are p, d, and o: Patch, HTML, and Legacy, respectively. Patch uses diff-match-patch, unidiff-style data. HTML is plain innerHTML data, and Legacy is an intermediate format of sorts, which is easily converted to from the old formats, but still follows the general pattern of location>type=data. The only difference between Legacy and HTML is that Legacy uses a different location scheme.
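To make the format concrete, here’s a hypothetical parser for a record in the shape described above (the field names are my own guesses at reasonable labels, not Wikify’s actual internals):

```javascript
// Parse "_body>div:0>span:1>d=hello!" into its pieces:
// root, a path of tag:index steps, a format type, and the payload.
function parseWikifyRecord(record) {
  var eq = record.indexOf('=');
  var left = record.substring(0, eq).split('>');
  var data = record.substring(eq + 1); // patch / innerHTML / legacy data
  var type = left.pop();               // "p", "d" or "o"
  var root = left.shift();             // parent ID, or "_body"
  var path = left.map(function (seg) {
    var parts = seg.split(':');
    return { tag: parts[0], index: parseInt(parts[1], 10) };
  });
  return { root: root, path: path, type: type, data: data };
}
```

Note this assumes the location part never contains an “=”; only the data after the first “=” may.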

The pluses of this new system are that it’s more accurate, and your edits are more resistant to page changes. The data is more human-readable, the system is more reliable, and it stores less data on the server. The cons are that there is a 20kb overhead in the Wikify core, and saving may take somewhat more time.

Posted in Other, Project Wikify.



Crashing IE

Well, I was trying to iron out an IE bug for Project Wikify. Interestingly, that bug I encountered crashes every IE since 5.5 (not sure if it crashes 5.0 yet, browsershots are still loading).

http://wikify.googlecode.com/svn/trunk/v2/crashie.htm

Really, this is just *another* IE issue….

Posted in Project Wikify.



EtherPad

I just got accepted into the beta for it, and it’s insanely great! There are some disconnect issues, but the latency is awesome, etc. If only the syntax highlighting were better.

And yes, I’m killing their servers by contributing to their viral growth :P

As a sidenote, I’m actually hyperlinking my links! (freaky, I know), and my posts are getting less and less mature over the months/years.

Posted in Google Wave.



Wikify Diff Engine

So I built a pretty crappy tree-diff system for Wikify. It completely ignores the creation or deletion of nodes, but it works most of the time. It’s tree-based, so the data is fine-grained to the level of however small the nodes are made to be. But many pages with huge paragraphs or such have huge nodes, and editing a single word would mean saving a huge amount of data.

So Wikify will now use both the tree-based diff (which is great for HTML/XML docs, as they are trees) and divide the changes into text, doing a text diff for that. Right now, the only thing suitable is the google-diff-match-patch project, which is absolutely amazing, except for how huge it is. But I figure it’s okay, because I’m already including the (relative) bloat of jQuery… (especially compared to vX)
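The two-level idea can be sketched roughly like this (not Wikify’s actual code — the real text level would call google-diff-match-patch; here I just record the old and new strings to keep the sketch self-contained):

```javascript
// Walk two parallel trees; where a text leaf differs, record a
// fine-grained text change instead of replacing the whole node.
// Like the original engine, this ignores node creation/deletion.
function diffTrees(a, b, path, changes) {
  path = path || [];
  changes = changes || [];
  if (typeof a === 'string' || typeof b === 'string') {
    if (a !== b) {
      changes.push({ path: path, kind: 'text', from: a, to: b });
    }
    return changes;
  }
  var n = Math.min(a.children.length, b.children.length);
  for (var i = 0; i < n; i++) {
    diffTrees(a.children[i], b.children[i], path.concat(i), changes);
  }
  return changes;
}
```

The payoff is that a one-word edit inside a giant paragraph produces a tiny text patch at a path, rather than a copy of the whole node.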

Posted in Other, Project Wikify.
