Michael Davies reckons I was whinging when I gave people a method to filter Twitter and other annoying crap out of the Planet Linux Australia feed. WTF? I gave a method for people to avoid reading what they don't want to. Whinging is complaining without offering a solution -- you'll note I didn't demand the annoying crap be removed from the Planet.

Think before you post. The world is listening. Sage words indeed.

What this really brings up is what the purpose of a Planet is. In the past the shadowy PLA cabal have removed stuff silently. So what's the policy? Must it be geek only? So does that mean no mountain biking, home schooling, trouble with kids, politics? Or is it just that whoever it is that does the censoring didn't like my politics?

Feed minus Twitter crap

There have been some complaints on Planet Linux Australia about various microblogging annoyances. I've been toying with Yahoo Pipes recently, and one of its features is regex filtering of feeds. That's very cool, and now I've got a task for it.

So if you want to read Planet Linux Australia without the microblogging pollution, try using this RSS feed. Currently the only posts I'm filtering out are the ones titled Michael Still: Blatherings for.*. Are there others in the same vein?
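For anyone who'd rather do the filtering themselves, here's a minimal sketch of the same idea, applied to already-parsed feed items rather than raw RSS. The item shape and helper name are my own assumptions, not how the pipe is actually built:

```javascript
// Hypothetical sketch of title-based feed filtering, like the Yahoo Pipe does.
// The pattern below is the one mentioned in the post.
const BLOCKED_TITLE_PATTERNS = [
  /^Michael Still: Blatherings for/,
];

function filterFeed(items, patterns = BLOCKED_TITLE_PATTERNS) {
  // Keep only items whose titles match none of the blocked patterns.
  return items.filter(item => !patterns.some(re => re.test(item.title)));
}

// Example: one microblogging digest and one real post.
const items = [
  { title: "Michael Still: Blatherings for 2008-06-12" },
  { title: "Someone Else: A real blog post" },
];
console.log(filterFeed(items).map(i => i.title));
// → [ 'Someone Else: A real blog post' ]
```

Adding another annoyance to the filter is just a matter of pushing another regex onto the list.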

Info about the pipe is here.

Update: Matt Mottrell adds "Roger Barnes - Photo of the day" and "Stuart Smith - Twitter Updates for". I've added them to my pipe.

Wow, there's still web agencies who think Flash is cool

Farm Digital

Wow, I can't believe there are still web agencies that do everything in Flash. This technique is so effective that the above is all Google can see of their site. That's right: the title.

Supposedly Adobe is working with the search engines to index the internals of Flash crap. I can see this ending in tears. Either they'll expose internals that were never intended to be visible (as if the kind of people who do Flash know anything about security), or the designer will have to explicitly list keywords, which will end up being about as useful as meta keywords.

Sniff browser history

Niall Kennedy has a really clever JavaScript hack to sniff a user's browser history. It's a pretty cool hack, but also a little scary since any web page you load can find out what pages you've been to.

It works like this: you put some links somewhere inside the DOM, and by inspecting the style applied by the CSS :visited pseudo-class, you can find out whether the user's history contains the URL you're linking to. Of course, the link doesn't actually have to be visible to the user.
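Here's a sketch of the core logic as I understand it. The function names are made up, and the style lookup is injected so the idea is visible without a browser; in a real page you'd read getComputedStyle on hidden links styled with an a:visited rule:

```javascript
// Hypothetical sketch of :visited history sniffing. A URL counts as visited
// if the link rendered for it picks up the a:visited colour.
function sniffHistory(urls, getLinkColor, visitedColor = "rgb(128, 0, 128)") {
  return urls.filter(url => getLinkColor(url) === visitedColor);
}

// In a browser, getLinkColor would be roughly:
//   url => {
//     const a = document.createElement("a");   // hidden link, never shown
//     a.href = url;
//     document.body.appendChild(a);
//     const color = getComputedStyle(a).color; // a:visited rule applies or not
//     a.remove();
//     return color;
//   }
// with a stylesheet rule like: a:visited { color: rgb(128, 0, 128); }

// Stubbed example: pretend the browser history contains one of the two URLs.
const fakeHistory = new Set(["https://example.com/login"]);
const visited = sniffHistory(
  ["https://example.com/login", "https://example.com/other"],
  url => (fakeHistory.has(url) ? "rgb(128, 0, 128)" : "rgb(0, 0, 238)")
);
console.log(visited); // → [ 'https://example.com/login' ]
```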

His nice white-hat example presents only the appropriate "add to your <social app>" links to readers. I can think of some less nice examples. Say you're selling stuff and want to know whether a visitor is a customer of your competitor: check whether they've visited the logged-in area of the competitor's site, then present a specific comparison between the competitor's products and your own. Or you could react to stuff being written about you in the press, even if the person read the article a few days ago. Cool!

I'm looking at personalisation at the moment as a way to mediate the inevitable conflict over home page real estate between divisions of the company. The more we can work out about the people hitting our site, the more accurate we can be. This gives the evil genius in me some ideas! Mwahahahaha!!!

Big brother IS watching

I've been toying with an amazing new web analytics tool called ClickTale. It tracks everything a user does as they interact with your site, so as well as the usual heat maps and the like, you can actually observe a user's mouse moving around, clicks, typing, the lot! Amazing, but somewhat scary. Certainly could give some insights into how your users interact with your site, particularly complex elements like forms.

Here's a video of me testing it out to demonstrate quite how detailed it is. Wow!

Or view it on YouTube.

Google Developer Day

I went to Google Developer Day today and learnt a lot. There's some very clever people doing some very clever stuff.

In the morning I went to the iGoogle gadgets session, which covered some very interesting stuff. We're thinking about using this kind of thing for usage indicators and the like, and it's remarkably easy to knock up a cool little gadget. Next was the YouTube session, where I mostly read my email, as I'm not that fussed.

OpenSocial is very interesting. The open approach definitely appeals -- I was using Friendster, Orkut and the like long before anyone discovered Facebook, and got over the idea pretty quickly. I'm quite interested in the possibilities of some social networking apps, done right. Last.fm is brilliant, and gives me information about music I really care about. LinkedIn is very good for professional contacts, and does that one task well. So an open API that lets you integrate all your social networking stuff really appeals. But judging from the sessions I went to, the OpenSocial stuff Google has is alpha quality at best right now. I'm sure it'll improve quite quickly though.

Mapplets was very interesting. It's much like the Maps API but designed to let the user put your data in as a layer amongst many, which has many possibilities. The presenter seemed very nervous talking in front of the crowd, but he actually did a really good job of explaining it, so hopefully he'll get over the nerves and become an excellent presenter.

One of the items in the morning keynote was GWT, a toolkit that lets developers write Java code that compiles into optimised JavaScript. The Eclipse integration the presenter demonstrated was amazing. It almost makes me want to learn the acronym soup that is Java. But not quite enough.

It was a great day, with excellent catering, as you'd expect from Google. It's pretty amazing the stuff they turn on for developers, but then I guess they get the opportunity to poach the best and brightest. My only complaint would be the ditchwater coffee, which left a bad taste in my mouth all day! Ugh. It was like the worst business hotel or American diner percolator shite. Maybe I'm just picky?

Firebug site down?

The other day I moaned about Firebug crashing my browser. It seems the reason (weird, but probably down to boneheaded network configuration) is that the Firebug domain's nameservers are down. Fortunately, SMS had the source on his laptop and built an xpi for me. Sweet!

In other news, I can overhear our departmental admin person berating someone in the IT department. I requested a work laptop with at least two gigs of RAM and a "large" hard drive, as I run lots of virtual machines for browser testing. I got a gig of RAM and a 12 gig hard drive. Do they even make 12 gig hard drives any more? Mobile phones come with more than that these days!

Weird Firefox crash

This is just bizarre. I've received a new laptop at work, so I'm installing all my stuff. Firebug and YSlow have become absolutely critical to my day-to-day work, so they were at the top of my list of extensions to install. But even searching for "firebug" locks up Firefox. Going directly to the site locks up Firefox. WTF?

Is, perhaps, the annoying virus scanner that PCs here come with doing something weird? This happens whether I use the corporate proxy or bypass it.

Anyone else seen this behaviour?

Come on BBC, that's just lame!

Documentary archive has moved

I got this message in my feed reader this morning. It seems the BBC have moved the RSS feed for the "Documentary Archive" podcasts. The lame thing is that with a single line in their web server config they could have made the move completely seamless. Worse yet, when you follow that link there's nothing on the page called "Documentary Archive". Duh!
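The single line I have in mind is a permanent redirect. Assuming they're running Apache, it would look something like this (the paths here are my guesses for illustration, not the real feed URLs):

```apache
# Hypothetical old and new paths -- the real URLs aren't in the post.
Redirect permanent /radio4/documentaryarchive/rss.xml http://downloads.bbc.co.uk/podcasts/docarchive/rss.xml
```

Any feed reader that honours HTTP 301s would then follow subscribers over to the new feed without anyone noticing.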