Su Tech Ennui: 2007

Friday, December 7, 2007

Get on the train!

I'm beta-testing a new Google service at the moment which isn't all that well known yet: Grand Central. It's a web-based voicemail system, and like many of the VoIP services on the net, it lets you select a phone number from almost anywhere in the US - you're not limited to your local calling area.

When I picked my number, I started with a few words that might make good phone numbers when spelled out, and worked out the area code that corresponds to the first three letters - there aren't that many area codes out there that make sense as words, so I was lucky when I found that "TOA" mapped to a valid one. Then they gave me a choice of a couple of dozen numbers, so I narrowed that list down to the ones whose exchange code began with '5', so that my number would be dialable as "TOAL nnnnnn". From there it was trivial to feed the handful of proffered numbers into a "phone to word" lookup program and find which of them made a sensible word! Bingo! - a personalised phone number for free! :-)
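
Just to make that last step concrete, here's a minimal sketch in C of the sort of "phone to word" lookup I mean. It isn't the program I actually used - the file name phoneword.c and the word-list path are just placeholders - but it shows the idea: convert each dictionary word to its keypad digits and print the words that match a digit string you give it.

/* phoneword.c - minimal sketch of a "phone to word" lookup.
 * Reads a word list (one word per line) and prints every word whose
 * keypad spelling matches the digit string given on the command line.
 */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Standard keypad mapping: 2=ABC 3=DEF 4=GHI 5=JKL 6=MNO 7=PQRS 8=TUV 9=WXYZ */
static char letter_to_digit(char c)
{
    static const char *keys[] = { "ABC", "DEF", "GHI", "JKL",
                                  "MNO", "PQRS", "TUV", "WXYZ" };
    int i;
    c = toupper((unsigned char)c);
    for (i = 0; i < 8; i++)
        if (strchr(keys[i], c))
            return (char)('2' + i);
    return 0;   /* no keypad digit for this character (punctuation etc.) */
}

int main(int argc, char **argv)
{
    FILE *wordlist;
    char word[64], digits[64];

    if (argc != 3) {
        fprintf(stderr, "usage: %s digit-string word-list\n", argv[0]);
        return 1;
    }
    wordlist = fopen(argv[2], "r");
    if (!wordlist) { perror(argv[2]); return 1; }

    while (fscanf(wordlist, "%63s", word) == 1) {
        size_t i, len = strlen(word);
        if (len != strlen(argv[1]))
            continue;                  /* only exact-length matches */
        for (i = 0; i < len; i++) {
            digits[i] = letter_to_digit(word[i]);
            if (digits[i] == 0)
                break;                 /* word contains an unmappable character */
        }
        if (i == len) {
            digits[len] = '\0';
            if (strcmp(digits, argv[1]) == 0)
                printf("%s spells %s\n", argv[1], word);
        }
    }
    fclose(wordlist);
    return 0;
}

Feed it the last six or seven digits of each candidate number and any word list you have lying around (/usr/share/dict/words is the usual suspect on a unix box) and it does the job.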

The GC service sports some pretty nice features; the only downside is that it looks like it'll be free only during the beta test - that wasn't obvious until *after* I'd signed up. So there's a serious risk I might get hooked on this stuff (or have too many people using my new number to be willing to cancel it after the beta) and so end up paying for yet another phone line.

Still, it is nice. Really nice. Sign up at their web site - I put my name on the waiting list around 11pm one evening and had an invitation to join in my mailbox by midnight. Once you're in, it's like the early days of GMail where you get 10 invitations to pass on to friends, and they in turn get 10 more, ad infinitum.

G

Tuesday, December 4, 2007

Why Linux isn't ready for the desktop

My Linux at work (OpenSuSE 10.3) stopped working for no apparent reason the other day - well, not entirely, but it wouldn't start X-Windows and gave me the totally content-free error message "The display server has been shut down about 6 times in the last 90 seconds. It is likely that something bad is going on."

This was a major problem because I was running another server on that system under vmware, and when I tried to start up the virtual machine from the command line instead of the usual GUI, that failed too. (I suspect it had never worked; I just hadn't tried it before now.)

Fortunately I have some experience of unix and after much net research I finally found the solution:

touch /var/cache/fontconfig/stamp    # update the timestamp on fontconfig's stamp file
/usr/bin/fc-cache -f                 # force a rebuild of the font caches

but can you imagine what would happen if a typical home user hit a problem like that? It would be "Honey, where's the WinXP install disk?"...

As much as I love unix, it just isn't ready for prime time on the desktop. Which is why I do all my own work from the command-line still...

G

Monday, November 19, 2007

like a phoenix, risen from the ashes...

This post is dedicated to anyone Googling for "overheating problems for HP Pavilion dv4000"...

For a couple of weeks now, my HP portable has been getting very hot underneath at the back, and when it does, it unceremoniously powers itself down. I searched for other people with the same problem and found many.

Well, I'm delighted to report that the problem is trivial to fix, and cheaply too!

The idiots at HP designed the portable so that the intake to the cooling fan is on the underside of the machine, so if you put it down on a bed, or on a particularly plush carpet, it gets blocked. The fan also gets blocked if - like me - you've had the machine for over a year and have never once blown out the dust bunnies!

So that's the fix... just grab a can of compressed air (or your home vacuum) and blow through the inlet and outlet cooling vents. The outlet is at the rear left as you face the keyboard, and the inlet is on the underside about three inches in from the back.

Now my machine is nice and cool again, and I saved myself the cost of a new portable :-) (Although, truth be told, I did actually buy myself one of the Walmart pre-Black Friday specials for about $350 last week, thinking I was going to be needing a backup soon...)

By the way the Walmart machine is quite nice, though there are odd things about it that you have to do a fair bit of net.homework to master. Also it's not a good choice if you want a Linux portable. If you do, put vmware on it and get your linux fix that way for now.

Circuit City was selling a similar machine this morning for $270. At this rate we'll be hitting One Laptop Per Child prices with regular portables soon :-)


G

Sunday, November 4, 2007

Quick hack #17 in a series of 42: inlining LaTeX "\newcommand" macros

A poster on the TeXhax mailing list asked how he could pre-process his LaTeX file to remove his own macros (created using \newcommand) because his typesetter for some reason does not allow user-defined macros. It turns out that TeX/LaTeX doesn't have the ability to output the de-macroed source, so I hacked up a macro processor in C that's compatible with LaTeX's syntax and leaves anything it doesn't recognise untouched.

I was quite surprised how hard it was to get right. It took me about 3 hours to write rather than the 30 minutes I expected, and here's why: in previous code where I've needed to do macro expansion, I've done it as a filter on reading the source stream. As a result, if a macro definition included a call to another macro, it was trivially expanded on the fly before being fed into the new definition. That behaviour falls out in the wash with no explicit effort, and for the sort of things I've needed it for in the past (such as an alias mechanism for a command-line shell, or a cpp-like source-code preprocessor) it's actually what you want, because aliases chain correctly when redefined (e.g. alias ls='ls -l' - you don't need to worry about whether ls is already an alias or not).

However for TeX's mechanism, the generated text is then reprocessed for recursive macro expansion. This is useful in TeX because it lets you use a macro to generate other macros, but it does complicate the business of writing the macro expansion code.

The solution I used was one I had first come across in 1977 in an implementation of the POP2 language called "Wonderpop". Wonderpop treated its input stream as a list like any other list. The last element of the list was a function call that fetched data from the input file (or console); if that was all that was left in the list, then it behaved just like normal I/O, but if you joined some other elements onto the front of the list, the next time you read from it you'd get those elements before you'd get the remaining data in the source file. In C terms it was something like ungetc(), but far more powerful, because the list could be manipulated. As well as pushing text back onto the head of the input stream, you could pull out the Nth item from the input stream; if that item had not been read yet, the next N items would be fetched and converted to list cells, and the object at the end of the list would be the lazy-evaluation function which would read the rest of the stream only when needed.

In fact, I believe you could append items to the end of these lists (though I hadn't ever done that myself) - in which case, once the function to fetch new data had exhausted the real file, it would return items from the appended list elements.

Wonderpop made great use of this modifiable input stream facility, by allowing the user to modify PROGLIST, which was a stream containing the source of your program itself. By using syntax-directed macros, you could add new keywords to the language which were implemented by manipulating your source code as a list. For example, if the base language only supported LOOP/IF ... BREAK/ENDIF, you could build your own implementation of WHILE or FOR on top of it. This made Wonderpop into a user-extensible language for very little added complexity in the compiler. Very cool.

So a simple version of the POP2 model was what I used for this TeX-like preprocessor: it has a large buffer in front of the actual stream object, to which you can push data back and have it re-read before you get to the following data from stdin. And of course, to avoid multiple buffer copies (shunting the data up after a read if you don't want the array to grow indefinitely), it uses a cyclic buffer. One advantage of a cyclic buffer that I hadn't thought of before I started implementing the code is that you can push items back onto either end of the buffer, so they are read either instantly or after the current expansion is finished. That turned out to be both useful and necessary.
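
To make the cyclic-buffer idea a bit more concrete, here's a minimal sketch of the same technique. It is not the code from newcommand.c - the names push_front, push_back and next_char are mine - just a ring buffer in front of stdin, with one call that pushes text back to be re-read immediately and another that queues text behind everything already pushed back but still ahead of the rest of the input.

/* pushback.c - sketch of a cyclic pushback buffer in front of stdin. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PB_SIZE 65536            /* must comfortably exceed any one expansion */

static char pb[PB_SIZE];         /* the ring itself            */
static int  pb_head = 0;         /* next character to be read  */
static int  pb_count = 0;        /* characters currently held  */

/* Push text so it is read *immediately*, before anything already pushed back.
 * This is what you want when a macro body must be rescanned for more macros. */
static void push_front(const char *s)
{
    int len = (int)strlen(s), i;
    if (pb_count + len > PB_SIZE) { fprintf(stderr, "pushback overflow\n"); exit(1); }
    for (i = len - 1; i >= 0; i--) {
        pb_head = (pb_head + PB_SIZE - 1) % PB_SIZE;
        pb[pb_head] = s[i];
        pb_count++;
    }
}

/* Push text so it is read only *after* everything already pushed back
 * (but still before the rest of stdin). */
static void push_back(const char *s)
{
    int len = (int)strlen(s), i;
    if (pb_count + len > PB_SIZE) { fprintf(stderr, "pushback overflow\n"); exit(1); }
    for (i = 0; i < len; i++) {
        pb[(pb_head + pb_count) % PB_SIZE] = s[i];
        pb_count++;
    }
}

/* Read the next character: pushed-back text first, then the real stream. */
static int next_char(void)
{
    if (pb_count > 0) {
        int c = (unsigned char)pb[pb_head];
        pb_head = (pb_head + 1) % PB_SIZE;
        pb_count--;
        return c;
    }
    return getchar();
}

int main(void)
{
    int c;
    push_back("world\n");        /* read second */
    push_front("hello, ");       /* read first  */
    while ((c = next_char()) != EOF)
        putchar(c);
    return 0;
}

In a macro expander, push_front() is what you'd call with the expanded body of a macro, so that the body itself gets rescanned for further macro calls before the rest of the input.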

Being a quick hack so that the guy could get his document printed, I haven't made it ultra robust yet, but with the exception of a couple of deliberate shortcuts that I already know about, I think the code turned out pretty nicely. Here it is: http://www.gtoal.com/src/newcommand/newcommand.c.html

G

Saturday, November 3, 2007

Lesser artists borrow; great artists steal.


Remind you of anyone?

I was reading this post by Adam Finley claiming that a 1948 Startling Comics cover was the inspiration for Bender in the wonderful Futurama cartoon, and it brought to mind a children's toy my wife keeps in her jewelry box... she picked this up in a packet of Fritos somewhere around 1968 give or take a year. About the same time that Matt Groening would be in his formative years, given that he's about the same age as us. I think it's much more likely that the youthful Groening had a penchant for potato chips than a collection of 1940's comics!

The robot's carrying a spanner, fer gossake! How much more Bender-like can you get? Don't you think it's a stronger resemblance than the more he-man-ly robot from Startling #49..?

I can easily imagine that the image stuck in Groening's mind without him explicitly remembering that he'd seen it elsewhere.

It's a cool toy. Fifty bucks and it's yours, Matt, or I'll trade you for a signed Bender drawing ;-)

G

Saturday, October 27, 2007

Myer and Sutherland's Great Wheel of Reincarnation


Back in the early 70's, Myer & Sutherland (the latter of Evans & Sutherland high-end graphics fame) suggested that trends in graphics workstations (and by extension, computing in general) followed the 'great wheel of reincarnation' principle; an example being the see-saw between diskless workstations and local storage, which in my lifetime has gone back and forth I think about three times. (We're currently in the 'migrating away from local storage' phase, with both data files and applications moving to some generic cloud on the net.)

I was reminded of this when, about a week ago, I was given access to a compute cluster through my job - a farm of about 6000 CPUs (4 per chip and 1 chip per box), of which I can use up to 512 CPUs at a time, dedicated to me alone, for up to 48 hours. Generally the wait to run a job is less than a day. Pretty damned good fun, I can tell you.

Anyway, back to our story... I won't bore you with the details, but the essence of it is that this cluster is being run extremely similarly to the way that mainframes were run in the 60's: batch queues, no interactive work, and no multitasking.

For those of you who've grown up with Moore's Law (probably better called Moore's Postulate, but who am I to argue), we're starting to wonder if it will bottom out soon, as there has to be a limit to the feature size and speed you can get on a chip as we finally approach atomic levels. We've improved performance until now by increasing CPU speed, and in the last few years by trading huge RAMs for CPU power in our algorithms, and by trading ever-expanding disk storage for CPU (e.g. Rainbow Tables)... but to continue doubling space x time performance every 18 months into the future, I'm convinced (and have thought this for some years now) that the next big step will have to be more CPU cores. I don't mean just 4 or 8 on a chip, I mean desktops where the owners are bragging 'Yeah, I've got 16K in mine' and they're referring to CPUs, not memory.

Which brings us back to my original subject: supercomputing using multiple desktop computers as a replacement for the mainframes of yore. If the Texas supercomputer cluster is anything to judge the state of the art by, there needs to be a major wake-up call in terms of operating systems, multitasking, multi-user use and interactive response if the desktop systems of ten years from now are to be built using what is basically the architecture of today's supercomputer clusters.

In other words, we need to parallel the progress from 1960's mainframes through 1970's multiuser systems, ending up with 1980's desktops which were as powerful as the mainframes of the 60's and as usable as the OS's of the 70's.

It's time for Myer and Sutherland's Great Wheel of Reincarnation to go round again.

Monday, October 15, 2007

Mechanized Reasoning

I recently came into possession of a copy of an old paper (1951) that described a relay-based custom computer which enumerated arbitrary boolean expressions. It's pretty damned interesting and well worth a read:
http://history.dcs.ed.ac.uk/archive/docs/mechanized_reasoning_screenres.pdf

I think it would be a fun project to write a graphical emulator for this (or even better, build a real one from relays which you can get cheaply and easily at Radio Shack).
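
Before anyone reaches for the relays, it's worth noting that the core of such an emulator is nothing more than enumerating a truth table. Here's a minimal C sketch with one expression hard-coded - the expression is just an arbitrary example, not one taken from the paper:

/* truthtable.c - enumerate a boolean expression over all input combinations,
 * which is essentially what the 1951 relay machine did (minus the relays). */
#include <stdio.h>

#define NVARS 3

/* Example expression: (a AND b) OR (NOT c) */
static int expr(int a, int b, int c)
{
    return (a && b) || !c;
}

int main(void)
{
    int bits;
    printf(" a b c | result\n");
    printf("-------+-------\n");
    for (bits = 0; bits < (1 << NVARS); bits++) {
        int a = (bits >> 2) & 1;
        int b = (bits >> 1) & 1;
        int c = bits & 1;
        printf(" %d %d %d |   %d\n", a, b, c, expr(a, b, c));
    }
    return 0;
}

The graphical (or relay) version is then "just" a matter of wiring the evaluation up visibly rather than doing it in one line of C.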

Wednesday, October 10, 2007

Stupid Company Tricks

Back in the Usenet days, I was a subscriber and occasional poster to a group called the "RISKS Digest", where we would highlight the stupid stuff that organisations and businesses did which actually made things worse for their customers. Twenty years later I'm sad to see that corporate common sense has not improved in any way, and people are still doing Really Dumb Things:
  • Take our cell phone company for example. ("Please!", as Rodney Dangerfield might add)... I forgot the account name associated with my phone so I went to their online page where you can ask for a reminder to be mailed to you. So far, so good. So I enter the phone number, and what do I see... "Your account information has been mailed to gtoal@gtoal.com". Yup, they don't just mail it out, they tell you to whom it has been mailed. So anyone wanting to find out who owns a phone number with this company can just submit an account reminder request and immediately see what email account is associated with the phone number, from which it is usually trivial to work out the person associated with it.
  • Here's a big computer company that heavily touts the security of their products... they have a web system for users which is where you download their software as well as being a chat forum. If you go to their 'lost password' page and request a reminder, they don't just email you your password - nope, that would be a security risk because they'd need to keep your password unencrypted, so they helpfully change your password and send you the new one. No, you read that correctly - they don't send you a link where you can change your password, they go ahead and change it immediately. Never heard of 'denial of service' attacks, guys? Anyone can lock you out of the service just by requesting a forgotten password for your email address (no other ID required), and you stay locked out until you receive that email and log back in to change the password again.
  • Here's the worst forgotten password story of all. I forgot the password to my online bank account. Because I'd never entered any initial 'security questions' on the web site, I couldn't get an email reminder and had to call in. Again, so far so good. Unfortunately they asked me the same 'security questions' that the web site would have asked - which I had never entered, so I couldn't give them answers. God knows what answers they expected to hear. So they used their fallback procedure - asking me questions about things they knew the answers to, like where did I stay when I lived in Ohio, or which of these three Ohio businesses did I ever work for. Just one problem: I've never been to Ohio. I'm reasonably sure that an illegal immigrant migrant farm worker got my SSN here in the valley and used it while working in the fields up there. Unfortunately this info has got into my credit file with the big three companies (Experian etc). Here's the rub - my bank trusts the data from the credit companies implicitly and would not believe me that all the info they were using to ID me was wrong. I finally convinced them I was me by telling them at which bank branch I opened my account. Well duh - there's only one for this bank in the town where I live, and anyone can find out where I live pretty damned easily (especially if they've already got a copy of my erroneous credit report, which apparently is all that is needed to spoof someone's ID at this idiotic bank).

It amazes me that these are huge companies with large staffs and presumably they hire information security professionals. Just what sorts of idiots are running the security in these companies? I despair at times.

Wednesday, September 19, 2007

This is about me.

There is a generation of computer programmers who at the time of writing are aged around 45 to 65 years, who learned computing during a wonderful period when everything was new and one person could learn all there was to know about a computer system from the bottom up.

I consider myself one of that generation. I wasn't an exemplary student - I graduated with a lower second, and I could name half a dozen of my contemporaries who I considered good programmers rather than average ones like myself. But now I look back and realise that the merely average graduate of 1980 was a programming god compared to most of today's graduates.

So this is about me. This isn't one of those tactfully-written third-party expositions on the state of education that you'll find published in an Educause journal. This is one guy's personal experience and my view of what is wrong with Computer Science education today.

Typically, a student of my generation would be able to write an assembler by the end of our first year of study; a compiler by second year; boot a stand-alone executable with a run-time library that is accessed through kernel calls by the end of third year; and write a usable multi-tasking operating system by graduation at the end of fourth year. During this process we would have learned about hardware from the transistor level up and be intimately familiar with at least one architecture at the machine instruction level. We would not only have expertise in one programming language, but would have a working familiarity with half a dozen others (and as many machine architectures), and would have designed and implemented a language of our own at least once!

By graduation we would also have a good grasp of data structures and algorithmic complexity, and understand when and where it is appropriate to optimise code and when it is not worth the effort; we would have worked in group projects to build systems of significant size, and we would have a sound grasp of software engineering principles.

Computer education in recent years appears to have lost that focus on learning the fundamentals. Systems are so complex that it is impossible to know what is going on inside. How many graduates could tell you exactly the steps that a Windows or Linux system goes through to load a partially linked executable file and start it running? In the 70's the answer would have been 'all of them', whether the system was a stand-alone mini or the University's mainframe with a home-grown O/S on it. Today the answer would be 'damned few'.

Nowadays we don't teach students how to build a raster display from scratch and program their own rasterops library - we give them a nice regular virtual image store and a copy of the SDL manual if they're lucky. A compiler class will show students how to tweak GCC to add some new trick, but it won't teach them to write a compiler from first principles - and bootstrap it using an assembler and a macro processor. I don't believe compiler writing has significantly improved since the 1970's, when it was a high art and a really good compiler could be written by one person in a year, and a classroom-level one in a couple of weeks. Now it takes a team to write a compiler, and it is so complex that the writers trip over each other's feet.

Programmers today have much knowledge but little understanding. They use large packages and build complex systems, but they don't create the underlying architectures or have a gut-level understanding of how everything works. We are turning out IT consumers, not developers. Quiche eaters, not Real Programmers.

Let me give an example from someone I know. I don't mean to cause offence because he's a friend, but like so many of his peers he goes home after work and switches off the Computer Science part of his mind. I'd be amazed if he read tech blogs like this or ever stumbled across this description.

I once asked my friend what he programmed for fun after he went home from work. After being blown away to learn that he didn't program - at all - I asked what his best project was at University, where he'd graduated with a BSc in Computer Science. It turns out that his crowning achievement, the pinnacle of his student years, and his big final-year project, was ... drum roll ... to write a reverse polish calculator. In fact he said he was one of only two in his class to succeed at the project.

I want you to think about that for a minute. A reverse-polish calculator.

So, no recursive-descent expression parser. No handling of operator precedence. No compiled code. No writing of the low-level floating point library. In fact I doubt very much if he even understood how floating point is represented. And he was proud of it!

These are the people who are graduating with a Computer Science degree nowadays. These are the people who are being hired to work in Computer Centers without ever being asked to write a line of code in their interviews.

Now, before I'm lambasted by Caltech and MIT and Stanford and UCLA graduates, I know full well there are still good Universities turning out competent students, but I also know that there are far more mediocre Universities turning out sub-standard students, ill-equipped to survive in the real world, and so damaged by their Higher Education that they have no hope of recovering.

The good ones are few and far between.

I would like to suggest that it is time to re-examine computer science education, and take a long-term look back at what worked and what didn't, from the early days of computing to the present, and then prepare a set of curriculum materials that will give current students that same thrill of building it all themselves from the bottom up, while giving them a solid grounding in Computer Science that will see them through a lifetime. Fads and whims come and go. Languages come and go. Fundamentals always remain.

Specifically, I suggest not only creating teaching materials, but also a software environment based on emulation which will recreate the joy of learning about a machine and developing all the software for it from scratch. This will be a consistent program which can be followed from beginner level through to graduation, but modularised so that any step (such as the compiler-writing module) can be taken for stand-alone use by any university or interested learner without requiring the student to have completed the previous modules (as long as they have an equivalent level of experience).

This material, including the source code to all the development environments for the practicals, which would be the bulk of the learning experience, would be made available via the Web so that it could be adopted by Universities, and also picked up directly by programmers who want to learn but aren't being given the opportunity because their classes are all about Java and Ruby on Rails.

Examples of the sort of practicals I envision include (not necessarily in chronological order):


  • Learn text editing with an old-school programmable text editor - not one that gives you RSI if you need to make the same sort of change to 100 different lines.
  • Learn to program - the basics
  • Algorithms, and algorithmic complexity from practical examples. Include some fun stuff like decrypting a 1:1 cypher - i.e. problem-solving exercises and not just programming exercises.
  • Machine architecture and instruction sets
  • Write a disassembler
  • Write an assembler
  • Write a trivial compiler
  • Learn what parsing is for, and design and implement a parser from first principles before reading any textbooks on how it is normally done. Then rewrite your parser twice, using two different methods of parsing
  • Write a macro processor or source to source language translator
  • Study more complex data structures and algorithms (crossover from theory classes) such as DAGs and graph minimisation (useful in advanced compiler writing)
  • Write a usable compiler
  • Design, implement, *and document* your own programming language (reusing your compiler project)
  • Write a binary to source decompiler
  • Write a stand-alone operating system (boot loader, basic I/O library)
  • Write a multi-tasking operating system, incl scheduler, for some affordable real hardware such as a Gameboy Advance
  • Write a text editor compatible with the one they started using when they first started programming
  • Write a screen editor and add a programmable feature set
  • Study graphics hardware architecture
  • Write a low-level graphics package (rasterops etc)
  • Write a higher-level graphics package (2D, 3D line-based, matrix transforms)
  • Write 3D rendering/shading code
  • Write a Scrabble game (concentrate on the engine, not the graphics). No lookahead needed for this one, it's mostly about data structures and algorithms for text manipulation.
  • Write an adversarial game (such as checkers or poker) which requires lookahead and pruning with algorithms like Minimax, NegaScout etc.
  • Write a video game (old school, pacman style, raw hardware) running on the O/S you wrote earlier, that relies on real-time cooperating processes
  • Write a video game (new school, multiplayer, rendered) that runs on the graphics library you wrote earlier.
  • A cooperative project where an interface has to be defined and one student or group depends on another to work to the interface. (For example, one group writes a first pass of a compiler that generates intermediate code; the other group writes a code generator for that intermediate code to a target architecture. Working examples to be supplied in case one group fails - but working example to be written deliberately poorly so it is not worth stealing! And as an example for the students to improve on.)
  • Implement a database from scratch. Write an application-specific search engine to exercise database lookups
  • Add persistence as a data attribute to a compiler
  • Implement a free-text search engine
  • Write a complete protocol handler (eg X25 Level 2)
  • Write a terminal emulator
  • Add multi-user terminals to multi-tasking operating system
  • Write a file server client
  • Write a file server

I've personally done most of the above as an undergrad or for amusement later. I've produced teaching materials that have been used successfully by self-motivated students via the Internet - the Static Binary Translation HOWTO - and, with some other experienced programmers of a certain age, helped motivated youngsters write their first compiler on the Yahoo group "Compilers 101". I strongly believe that Computer Science is fun and that the subject practically teaches itself if the exercises are sufficiently interesting. I also believe that if someone struggles with programming, it's obvious by the end of their first year, and they should go and find some other field that they're good at, and not burden the world with another unwanted mediocre programmer.

The list of practical projects given as an example above is more than I can expect any Computer Science course to take on nowadays, and certainly more than I can turn out myself. I would judge that preparing the curriculum materials and test beds for a set of exercises such as the list above would take approximately three people and two years. Perhaps there are some retired lecturers/professors from the 70's who still feel they have a calling to pass on their knowledge and would want to work on this. The materials created by a project like this could be used in 'pick and mix' fashion, not just in classes but by self-motivated learners teaching themselves on the internet. It should have a long life, though, because the fundamentals are essentially timeless: this teaching material would not become outdated due to new application libraries, new operating systems or new hardware.

The specific examples above are taken from my personal experience and my education at a good University that was one of the first to teach Computer Science - by the people who were at the leading edge of research in the field. I returned to my Alma Mater earlier this year when I organised a conference on the subject of Computer History and coupled it with a reunion of people who had learned their craft in the 60's and 70's. The sentiments I've described above about the decline of Computer Science education were shared by many of the participants, among them successful leaders of industry, and academics from a broad range of institutions.

It may be that the politics of education nowadays - more interested in greater student numbers, to bring in tuition income, than in better student quality - will work against a movement that wants a revival in the teaching of fundamental computer science. Perhaps the complexity theorists have won, and good engineering will be forever relegated to a back-seat role, but I live in hope that it is not too late to get us back on the right track.

Monday, September 17, 2007

Transferring mail from one gmail account to another using pop3

A year or so back when I set up my Google Apps For Your Domain (GAFYD) email account, I already had the best part of a year's mail in an old regular Gmail account and no way to transfer from one to the other.

GMail did allow you to import from external POP3 servers but apparently you couldn't pull mail from another gmail account. I haven't needed to do that since, but if it still behaves like that, here's the fix!

I discovered that if you did a DNS lookup of the Google POP3 server (pop.gmail.com) and entered the raw IP rather than the domain, it worked just fine :-) Whether it was an internal/external DNS view problem, or they actually did it deliberately by blocking the domain, I couldn't say.
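
If you want to do that lookup yourself, nslookup or dig on pop.gmail.com does the job. For the terminally curious, the same thing in C is only a few lines - a generic resolver sketch, nothing Gmail-specific about it:

/* resolve.c - print the IPv4 addresses for a hostname, e.g. pop.gmail.com. */
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    struct addrinfo hints, *res, *p;
    char ip[INET_ADDRSTRLEN];
    int err;

    if (argc != 2) {
        fprintf(stderr, "usage: %s hostname\n", argv[0]);
        return 1;
    }

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;            /* IPv4 only, to match the "raw IP" trick */
    hints.ai_socktype = SOCK_STREAM;

    err = getaddrinfo(argv[1], NULL, &hints, &res);
    if (err != 0) {
        fprintf(stderr, "%s: %s\n", argv[1], gai_strerror(err));
        return 1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        struct sockaddr_in *sa = (struct sockaddr_in *)p->ai_addr;
        if (inet_ntop(AF_INET, &sa->sin_addr, ip, sizeof ip))
            printf("%s\n", ip);
    }
    freeaddrinfo(res);
    return 0;
}

Whatever it prints is what goes into the POP3 server box instead of the hostname.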

I do have some vague recollection that the pop3 server wasn't accessible from the outside world. I guess GAFYD was able to access it because it was also on the inside...

Let's see if I can find my notes on how to do this... ah, here we are...

Username is 'whatever' if your email is 'whatever@gmail.com'. The POP3 server is 66.249.83.109 (i.e. the IP of pop.gmail.com), the port is 995, and "Always use SSL" is checked.

I wouldn't be at all surprised if this workaround still works...

Cool Google-enabled search of all your own bookmarks

The iGoogle widget for searching a list of URLs can be used to create a search engine that searches all the sites you've bookmarked in your browser. If like me you have a somewhat fungible memory, and can remember that you've seen something relevant somewhere on the net but can't remember quite exactly where, then this is the hack for you.

Before you start, export your favorites from your browser. In IE this creates a file bookmark.htm - so find somewhere on the net that you can upload this file, you'll need it later as a URL.

Now, create a new search engine at Google's Coop page. It'll ask you for a list of URLs to start it off. Enter one URL; it doesn't really matter which, as you'll be deleting it later. (The form doesn't let you start with an empty list.)

Once the search engine is created, go in to the management page for it and find the Sites tab where you can add new URLs to be searched. Click the "Add sites" button, and enter the URL of your bookmarks page. Here's the important part: select the check box for "Dynamically extract links from this page and add them to my search engine" and the sub-item for "Include all partial sites this page links to". After it's added, remove your original URL.

You also have the choice of searching *only* these sites, or (as I do) making them the premier results of your search while allowing all the other normal Google search results to follow your preferred ones.

I bookmark stuff pretty liberally and I have found this to be amazingly useful since I first started using it.

Google Apps For Your Domain meets iGoogle at last!

As an early adopter of GAFYD I was rather disappointed with it, because it pretty much duplicated the functionality of iGoogle, but did so with slightly different mechanisms and a separate, independent login - so I had *two* separate accounts with identical usernames at Google, and it was hit or miss which Google service used which authentication mechanism.

The most annoying thing was that iGoogle widgets were documented and easy(ish) to write, but GAFYD portlets were restricted to those supplied, with no documentation as to how to write your own.

Well, today I tried a crude hack, and it paid off. I don't know if this was always the case, or if they changed something recently, but it turns out that iGoogle portlets *do* work in GAFYD. All you have to do is ... select your widget and install it at iGoogle (which is easy, there's a simple button to do it at the iGoogle widgets site), then go to the widget on your iGoogle homepage and start as if you're going to share it. Don't share it by email, but ask for the URL to cut & paste. Then edit the URL to remove the front part, which is the iGoogle installer, and keep the end part, which is the true home of the widget. Remember to change it to http: instead of http%3A ...
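
If you find yourself doing that URL surgery more than once, the only fiddly part is undoing the percent-encoding (http%3A%2F%2F and friends). Purely as an illustration - this is a throwaway sketch, not part of the hack itself - a decoder in C looks like this:

/* urldecode.c - decode %XX escapes in a URL given on the command line. */
#include <stdio.h>
#include <ctype.h>

static int hexval(int c)
{
    if (isdigit(c)) return c - '0';
    if (isxdigit(c)) return tolower(c) - 'a' + 10;
    return -1;
}

int main(int argc, char **argv)
{
    const char *s;
    if (argc != 2) {
        fprintf(stderr, "usage: %s encoded-url\n", argv[0]);
        return 1;
    }
    for (s = argv[1]; *s; s++) {
        if (s[0] == '%' && isxdigit((unsigned char)s[1]) && isxdigit((unsigned char)s[2])) {
            putchar(hexval((unsigned char)s[1]) * 16 + hexval((unsigned char)s[2]));
            s += 2;
        } else {
            putchar(*s);
        }
    }
    putchar('\n');
    return 0;
}

For example, running it on a made-up string like 'http%3A%2F%2Fexample.com%2Fwidget.xml' prints http://example.com/widget.xml.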

Now go to 'add stuff' on your GAFYD home page. Find the small link at the top that allows you to add a portlet by URL, and paste in the URL you extracted from iGoogle.

Bingo! You now have your iGoogle widget running on your GAFYD homepage, and if you need to, you can go and remove the first copy from your iGoogle page.

Now if only they would add the tabs :-/

When Good iPods go Bad.

My wife's iPod recently started acting up and wouldn't sync at all. And for some time before, it had been refusing to disconnect after a sync. I tried to debug it by installing iTunes on my MCE, which did recognise the iPod, and detected it as being in a semi-reinitialised state, but it failed every time it tried to finish the reinitialisation.

Turned out the problem was that the iPod drive has to be mapped for iTunes to do its magic... well, since the last time it was plugged in, the drive letter that had been assigned to it was now assigned to a network drive. Unfortunately Windows is so stupid that it really wanted that letter for the iPod, and so when iTunes tried to access the iPod as a drive, it was hitting the network share instead.

By going into the Manage option of My Computer and reassigning the preferred drive letter in the Disk Management console, I made the iPod visible again, and iTunes was able to properly reinitialise it. Once it was put back on my wife's PC, she was able to use it again just fine.

Which was a relief, because it was an 80Gb unit but out of warranty, and if it had been a hard drive error as originally thought, it would have been an expensive replacement.