Ivor O’Connor

February 22, 2010

Google’s Lack Of Testing Bites Me Again.

Filed under: Uncategorized — ioconnor @ 2:44 am

I am using Google Docs spreadsheets. There is a function called “ImportRange” that allows values from other spreadsheet files to be pulled into the current one. This feature allows for something along the lines of SQL normalization. However, like Google’s support for Gears, it appears to be severely broken. Why does Google release such bad code?
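
For reference, an ImportRange formula looks something like the line below; the spreadsheet key and range are made-up placeholders, with the first argument naming the source spreadsheet and the second the range to pull in:

    =ImportRange("abc1234567890", "Sheet1!A1:B10")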

Here’s a link to their problem: http://www.google.com/support/forum/p/Google+Docs/thread?tid=2760c85e9be1ff85&hl=en

Maybe Google needs to put a state-of-the-art testing program in place. I bet they could, and it would probably be insanely great. Then they would no longer be just another company with buggy software. I can dream.

UPDATE 2010.03.15: It now appears to be working, though updates do not always happen, and when they do there is a lag. When an update doesn’t happen, the trick is to add more information until the software realizes another update should take place.

February 21, 2010

Dangerous Truths

Filed under: Uncategorized — ioconnor @ 10:52 pm

The following quote caught my eye today:

Charles Austin Beard
“You need only reflect that one of the best ways to get yourself a reputation as a dangerous citizen these days is to go about repeating the very phrases which our founding fathers used in the struggle for independence.”

February 7, 2010

Poor Man’s Web Link Checker

Filed under: Uncategorized — ioconnor @ 7:50 pm

There are link checkers for websites that work reasonably well, such as http://validator.w3.org/checklink. However, they do not work on pages that are dynamically created with lots of JavaScript. For instance, if your table of contents is generated with JavaScript, the official http://validator.w3.org/checklink link checker will miss every link in the TOC!

Getting around this on a *nix machine is fairly easy, though. You only need two lines: one checks the external links and the other checks the internal links. The lines below assume you are in the root directory of the website and that all internal links point to local files there:

  1. External links:
    curl -s -S $(grep -ir href= *.* | sed 's/.*href="//' | sed 's/\".*//' | sort -u | grep http | grep -v ^#) > /tmp/blahbla
  2. Internal links:
    for x in $(grep -ir href= *.* | sed 's/.*href="//' | sed 's/\".*//' | sort -u | grep -v http | grep -v ^# ) ; do if [ ! -f "$x" ]; then echo "File \"$x\" does not exist"; fi; done;

The first command finds the external links, fetches each of them into a temporary file, and, thanks to curl’s -s and -S options, prints an error to the console for any link it fails to fetch.

The second command finds all the links to internal files and verifies the files exist on the hard drive.
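
For readability, here is the same idea expanded into a small commented script. It is only a sketch of the two one-liners above, with the same assumptions (run from the site’s root directory, double-quoted href attributes); the added -f flag is not in the originals, but it makes curl treat HTTP errors such as 404 as failures too:

    #!/bin/bash
    # Collect every href="..." value from the files in the current tree.
    links=$(grep -ir 'href=' *.* | sed 's/.*href="//' | sed 's/".*//' | sort -u)

    # External links: fetch anything starting with http.
    # -s hides the progress bar, -S still prints transfer errors,
    # -f makes HTTP error responses (e.g. 404) count as failures.
    for url in $(echo "$links" | grep '^http'); do
        curl -s -S -f -o /dev/null "$url" || echo "Broken external link: $url"
    done

    # Internal links: everything else (skipping pure #anchors) should be
    # a file relative to the site root.
    for path in $(echo "$links" | grep -v '^http' | grep -v '^#'); do
        [ -f "$path" ] || echo "File \"$path\" does not exist"
    done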

No need for expensive tools that may not even work on your website!
