
Archive for the ‘opinion’ Category


Companies, why don't you want my feedback?

I'm one of those people who think everything can always be a little bit better. Apparently companies aren't interested in hearing about customer experience, though, since it's nearly always impossible to find a working, decent customer feedback point on any commercial website.

How sad is it that the only way to properly get into contact with a company is via Twitter (which is, of course, limited to 140 characters, making it basically impossible to tell them about your issues)? How sad is it that some companies actually artificially limit the number of characters you can enter in a feedback form on their own website? Hello! Interwebtube bytes are free! There's no reason to limit feedback to a thousand characters, guys. What's that? Your time is too valuable to read through entire essays from frustrated consumers? Oh, that's just fine! I'll take my business somewhere else, thank you!

If any Quality Assurance Managers are reading this, I'll make it real easy for you:

  • An easy-to-find, CHEAP/FREE phone number on your site. One I can call for questions, feedback, etc. DO NOT try to sell me shit when I call you with a question or complaint. Just… don't. I will take my business somewhere else.
  • An easy-to-find question/feedback email address on your website.
  • If you absolutely must have a form, make sure it doesn't ask for my phone number, doesn't limit the type of question I can ask (try including an "other reason" option?) and doesn't make me jump through hoops to validate my information. I don't want you to have my address, phone number, email address, or anything else. You don't ask that information from customers who call you with a question, do you? Then allow – don't force – me to fill it out on your forms. I just want to let you know that there's a problem with your website! Today I had to fill out an online form that required my land-line phone number! "Hello?! 1999 called using their land-line! They want their ancient technology back!" Who still has a land-line, seriously?!

Companies, seriously… why do you make it so exceptionally hard for me to provide you with feedback? I'm trying to help! I want to let you know about broken restitution forms on your website; I want to let you know why I went to the competition, so you can improve your products. I really do! So stop with the bullshit online questionnaires that pop up when I least expect or want them – a "Please participate in our questionnaire!" is not what I came to your site for!

Stop wasting money on crappy Quality Assurance Managers. If your website doesn't have email contact information, someone in your company needs to be fired.


The All-Web paradigm is a long way away

Google, with their Google Chrome OS, are betting on our computing experience moving to the Cloud in the future. Some people agree with that prediction. As Hacker News user Wavephorm puts it:

The "All-Web" paradigm is coming, folks. And it really doesn't matter how much you love your iPhone, or your Android, or Windows phone. Native apps are toast, in the long run. Your data is moving to the cloud — your pictures, your music, your movies, and every document you write. It's all going up there, and local hard drives will be history within 3 years. And what that means is ALL software is heading there too. Native apps running locally on your computer are going to be thing of the past, and it simply blows my mind that even people here on HackerNews completely fail to understand this fact.

Although I believe many things will be moving to the cloud in the (near) future, I also believe there are still major barriers to be overcome before we can move our entire computing into the cloud. An 'All-web' paradigm, where there are NO local apps – where there is NO local persistent storage – is a long, long way off, if not entirely impossible.

The Cloud lacks interoperability

One major thing currently missing from the Cloud is interoperability between Web applications. As mentioned on Hacker News: "local hard drives will be history". I believe we are greatly underestimating the level of interoperability local storage offers. Try to name a single native application that can't load and save files from and to your hard drive. Local storage ties all applications together and allows them to work with each other's data. I can just as easily open a JPEG in a picture viewer as in a photo editing package, or set it as my background, etcetera.

If the All-web paradigm is to succeed, Web apps will need a way to talk to each other, or at the very least talk to some unified storage in the Cloud, without the user needing to download and re-upload files each time. Right now, if I want to edit a photo stored in Picasa in a decent image editor, I have to download it from Picasa, upload it to an online image editor, download it from there and upload it again to Picasa (and remove the old photo). I have a pretty decent internet connection, but most of that time is spent waiting some 80 seconds for a 3.5 MB picture to download, upload, download again, and so on.
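The arithmetic behind that round trip is easy to sketch. The connection speeds below are assumptions for illustration (the post only mentions "a pretty decent internet connection"), not measured figures:

```python
# Time wasted moving one photo between Picasa and an online editor.
# Assumed speeds: 8 Mbit/s down, 1 Mbit/s up (typical asymmetric home line).
SIZE_MB = 3.5
DOWN_MBIT_S = 8.0
UP_MBIT_S = 1.0

def transfer_seconds(size_mb, mbit_per_s):
    """Seconds to move size_mb megabytes at the given line speed."""
    return size_mb * 8 / mbit_per_s

# Picasa -> disk -> editor -> disk -> Picasa: two downloads, two uploads.
total = 2 * transfer_seconds(SIZE_MB, DOWN_MBIT_S) + \
        2 * transfer_seconds(SIZE_MB, UP_MBIT_S)
print(f"{total:.0f} seconds spent just moving the file around")
```

Even with those assumed speeds, the dead time is around a minute per edit, and the slow upstream dominates it.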

Perhaps cloud storage providers will start publishing APIs so that other web apps can access your files directly, but given that the Web has historically been about being as incompatible as possible with everything else, I believe this will be a very large, if not insurmountable, problem.

User control will be gone

When Google launched the new version of its Gmail interface, many people were annoyed. Many people are annoyed with Facebook's Timeline interface. Many of my friends still run ancient versions of WinAmp to play their music, simply because it's the best music player out there. With the All-web paradigm, the choice of which programs you use, and which versions, will be gone. The big men in the Cloud will determine what your interface looks like. There will be no running older versions of programs. Unless web applications find some way to unify storage (as I mentioned earlier), there will be no way to migrate to another application. At the very least it will be painful.

Cloud storage is expensive

I'm sure we all enjoy our cheap local storage. If I need to temporarily store a few hundred gigabytes of data, I don't even have to think about where or how to store it. My home computer has installs of twelve different Operating Systems through VirtualBox, taking up about 100 GB. My collection of rare and local artists' music is around 15 GB. Backups of my entire computing history take up about 150 GB. Where in the cloud am I going to store all of that? Dropbox? It doesn't even list a price for that much storage! Extrapolating from the prices they do list, replicating my local storage in the Cloud would cost me about $200. A month.
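The back-of-the-envelope version of that calculation. The per-GB price is an assumption extrapolated from consumer plans of the time, not a quoted Dropbox figure:

```python
# Replicating my local data in the Cloud, roughly.
data_gb = {
    "VirtualBox OS installs": 100,
    "rare/local artists' music": 15,
    "backups": 150,
}
PRICE_PER_GB_MONTH = 0.75  # assumed USD per GB per month, extrapolated

total_gb = sum(data_gb.values())
monthly_cost = total_gb * PRICE_PER_GB_MONTH
print(f"{total_gb} GB -> ${monthly_cost:.2f} per month")
```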

Internet connections are not up to par

We may think our internet connections are fast, and compared to a few years ago they are, but they're not fast enough by a long shot to do our daily computing in the Cloud. First of all, upstream bandwidth is generally much more limited than downstream bandwidth. If the All-Web paradigm is going to work, that has to change. But home internet connections aren't really the problem, I think. The real problem is mobile networks. The All-web paradigm requires being online all the time, everywhere. Lately there's been a trend (at least in my country) of reducing mobile internet subscriptions from unlimited data plans to very limited ones. A 500 MB limit per month is not uncommon now. The telcos' reasoning is that they need to recuperate the costs of operating the network. Some still offer "unlimited" data plans where, after exceeding your monthly quota, you're throttled back to 64 kbit/s. That's enough to check my email (barely), but it surely isn't enough to do anyone's day-to-day computing from the Cloud.

And that's the situation here, in one of the most well-connected countries in the world. Think of the number of countries that aren't so fortunate. If nothing else, those countries will keep local computing alive.
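A quick sketch of what those mobile limits mean in practice. The daily-usage numbers are made-up illustrative assumptions, not measurements:

```python
# What a 64 kbit/s throttle and a 500 MB monthly cap actually buy you.
THROTTLE_KBIT_S = 64
throttled_mb_per_day = THROTTLE_KBIT_S / 8 * 86_400 / 1024  # KB/s * s/day -> MB
print(f"throttled best case: {throttled_mb_per_day:.0f} MB/day")

# Assumed light all-cloud usage: 10 photos synced, 4 streamed songs,
# plus ~30 MB of documents and app traffic per day.
daily_mb = 10 * 3.5 + 4 * 5 + 30
CAP_MB = 500
print(f"500 MB cap gone in {CAP_MB / daily_mb:.1f} days")
```

Even modest all-cloud habits burn through such a cap in under a week, which is the point: these plans assume local computing.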


Everything becomes a subscription

Most web apps require a monthly subscription to do anything meaningful with them. It could be just me, but I'd much rather pay a single price up front, after which I can use my purchase for as long as I like. With the All-web paradigm, I'd have to pay monthly fees to Google (Documents/storage), Dropbox, Netflix, some music streaming service, a VPS for development, and a lot more.

At current prices, the monthly costs would be unacceptable to me. It's a lot cheaper to get a simple $400 desktop computer, which can take care of all those needs. Say I use it for 4 years. That comes down to about $8.33 a month. The cheapest Dropbox account is more expensive than that.
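The amortization math behind that comparison:

```python
# A one-off $400 desktop spread over four years of use.
desktop_price_usd = 400
months_of_use = 4 * 12

per_month = desktop_price_usd / months_of_use
print(f"${per_month:.2f} per month")
```

At the time, the cheapest paid Dropbox tier alone cost more than that per month, before adding a single other subscription.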

But the high price isn't really the problem. The problem is continuous payments. Say I lose my job, and I have to cut costs. With local computing, I could say "well, this PC is old, and should be replaced, but since I'm low on money, I'll keep using it for another year". Cancelling my subscription to some/all my services means I lose some/all my data. Remember, we're talking about an All-web environment here. No local storage large enough to store my data. The risks are simply too big.


Privacy will be gone

There's no such thing as privacy in the Cloud. Your personal information and data will be mined, abused and sold, and you have no control over it. The more data that is stored, the larger the temptation for companies and criminals to monetize it. Right now, most people don't care too much about privacy, but we still have a choice about what we put in the cloud and what we keep to ourselves. That picture of your girlfriend in lingerie won't be ending up on Facebook any time soon, right? In an All-web environment, you'll have no choice. Want to store or edit a picture? It has to move to the cloud. Even those most unconcerned with privacy won't accept that.

The best we can hope for is that web companies will treat our data confidentially. Hope. We have no control. The argument that companies who abuse our data will soon lose all their users is beside the point: by that time, your data will already have been abused. It takes only a single incident for people to start distrusting the All-web paradigm. In fact, I think that has already happened.


Conclusion

In the future, many local applications will move to the cloud. In fact, many already have. Music and movie streaming, word processing, image editing, storage: more and more of it will move to the Cloud. The All-web paradigm, though, will never fly. It would be a huge step back in terms of convenience, cost, privacy and capabilities. Local computing is here to stay. It may become more and more of a niche market, but it won't disappear.

Read less

A programmer once built a vast database containing all the literature, facts, figures, and data in the world. Then he built an advanced querying system that linked that knowledge together, allowing him to wander through the database at will. Satisfied and pleased, he sat down before his computer to enjoy the fruits of his labor.

After three minutes, the programmer had a headache. After three hours, the programmer felt ill. After three days, the programmer destroyed his database. When asked why, he replied: “That system put the world at my fingertips. I could go anywhere, see anything. Because I was no longer limited by external conditions, I had no excuse for not knowing everything there is to know. I could neither sleep nor eat. All I could do was wander through the database. Now I can rest.”

— Geoffrey James, Computer Parables: Enlightenment in the Information Age

I was a major content consumer on the Internet. My Google Reader held over 120 feeds, producing more than 1,000 new items every couple of hours. I religiously read Hacker News, Reddit and a variety of other high-volume sources of content. I have directories full of theoretical science papers, articles on a wide range of topics and many, many tech books. I scoured the web for interesting articles to save to my tablet for later reading. I was interested in everything: programming, Computer Science, Biology, Theoretical Particle Physics, Psychology, rage-comics, and everything else. I could get lost for hours on Wikipedia, jumping from article to article, somehow, without noticing it, ending up at articles titled "Gross–Pitaevskii equation" or "Grand Duchy of Moscow", when all I needed to know was what the abbreviation "SCPD" stood for. (Which, by the way, Wikipedia doesn't have an article for; it means "Service Control Point Definition".)

I want to make it clear I wasn't suffering from Information Overload by any definition. I was learning things. I knew things about technology which I hadn't even ever used myself. I can tell you some of the ins and outs of iPhone development. I don't even own an iPhone. I can talk about Distributed Computing, Transactional Memory and why it is and isn't a good idea, without having written more than a simple producer/consumer routine. I'm even vehemently against writing to shared memory in any situation! I can tell you shit about node.js and certain NoSQL databases without even ever having installed – much less dived into – them. Hell, I don't even like Javascript!

The thing is: even though I was learning about stuff, it was superficial knowledge without context – not the kind of foundational information that lets you draw the conclusions you're reading about for yourself, without the help of some article. I didn't pause to think about the conclusions drawn in an article, or to let the information sink in. I read article after article. I wasn't putting the acquired knowledge into practice. The Learning Pyramid may have been discredited, but I'm convinced that we learn more from doing than from reading about something.

So what makes reading so attractive that we'd rather read about things than actually do them? And I know for a fact that I'm not alone in having this problem. I think – and this might be entirely personal – it comes down to a couple of reasons.

One is that it's much easier to read about something than to actually figure things out yourself. I want to experiment with sharding in NoSQL databases? I have to set up virtual machines, set up the software, write scripts to generate testing data, think about how to perform some experiments, and actually run them. Naturally I'd want to collect some data from those experiments; maybe reach a couple of conclusions even. That's a lot of work. It's much easier to just read about it. It's infinitely easier to stumble upon and read an article on "How to Really Get Things Done Using GettingThingsDone2.0 and Reverse Todo Lists" than it is to actually get something done.

The second reason, at least for me, is that it gives me the feeling that I'm learning more about things. In the time it takes me to set up all the stuff above, I could have read who-knows-how-many articles. And it's true in a sense. The information isn't useless per se. I'm learning more shallow knowledge about a lot of different things, versus in-depth knowledge about a few things. It gives me all kinds of cool ideas, things to do, stuff to try out. But I never get around to those things, because I'm always busy reading about something else!

So I have taken drastic measures.

I have removed close to 95% of my feeds from Google Reader. I've blocked access to Reddit and Hacker News so I'm not tempted to read the comments there. I check an aggregator for Hacker News, Reddit's /r/programming and some other stuff at most once a day. Anything interesting I see, I send to my tablet (at most two articles a day), which I only read on the train (where I don't have anything better to do anyway). I avoid Wikipedia like the plague.

I distinctly remember being without an Internet connection for about a month almost four years ago. It was the most productive time of my life since the Internet came around. I want to return to the times when the Internet was a resource for solving problems and doing research, not an interactive TV shoveling useless information into my head.

Now if you'll excuse me, I have an algorithm to write and a website to finish.

Security Questions considered harmful

Many online services allow, or even worse, require, the so-called "Security Question": a question/answer pair you can enter in case you ever forget your password or can't access your account for some reason. In my opinion, security questions are an incredibly bad idea from a security perspective.

The usual security questions are things like "What was your mother's maiden name?", "What's your pet's name?", etcetera. People don't realize that supplying a truthful answer to these kinds of questions introduces an enormous weakness into their account's security. These are all questions whose answers can be found relatively easily by googling a person or applying a little social engineering. "Hey, I'm John, and I think I might be related to you on your mother's side. What's her maiden name?"
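A rough way to see how weak these answers are is to compare the sizes of their guess spaces. The counts below are order-of-magnitude assumptions, not measured statistics:

```python
import math

# Assumed guess-space sizes (order-of-magnitude estimates only).
guess_space = {
    "mother's maiden name": 150_000,    # roughly the surnames in common use
    "pet's name": 1_000,                # popular pet names
    "random 8-char password": 62 ** 8,  # upper/lower/digits
}

bits = {label: math.log2(n) for label, n in guess_space.items()}
for label, b in bits.items():
    print(f"{label}: ~{b:.0f} bits")
```

Even with generous estimates, a truthful security answer carries a fraction of the entropy of a mediocre password, and googling or social engineering shrinks it further still.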

The worst part is that every site has basically the same questions from which you can choose. This means that people either have to pick the same question and answer every time, or pick a different one for each account. The first will make them vulnerable to repeated attacks on all their online profiles once an attacker has found the answer. The second will make it very hard for people to remember that they must never let anybody know about their favorite pet's name "Buddy". A lose/lose scenario at best.

As is often the case with security protocols, they must be followed to the letter to be safe. One flaw in the procedure, and the security collapses. Security questions could be a good idea, provided that:

  • The user makes up his own question. No predefined questions should be supplied, and most importantly, different sites shouldn't all use the same questions.
  • The user should never be told what their security question was. If they need to reset their password, they should supply both the security question and the answer. This makes it much harder for a potential attacker to gain access.
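One way a site could implement that second point is to store only a salted hash of the question and answer together, so it can verify a reset attempt without ever being able to display the question back. This is a hypothetical sketch, not any real site's scheme:

```python
import hashlib
import hmac
import os

def normalize(text):
    """Lowercase and collapse whitespace so trivial typos don't lock users out."""
    return " ".join(text.lower().split())

def make_reset_record(question, answer):
    """Return (salt, digest); the plaintext question is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256",
        (normalize(question) + "\0" + normalize(answer)).encode("utf-8"),
        salt,
        100_000,
    )
    return salt, digest

def verify_reset(question, answer, salt, digest):
    """Check a reset attempt: the user must supply BOTH question and answer."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256",
        (normalize(question) + "\0" + normalize(answer)).encode("utf-8"),
        salt,
        100_000,
    )
    return hmac.compare_digest(candidate, digest)
```

An attacker who knows the victim's pet is called "Buddy" still gets nowhere without also knowing which question was chosen. The trade-off, of course, is that the pair becomes about as hard to remember as a password.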

Of course, taking the above into consideration, security questions become just as hard to remember as a password, which makes them kind of pointless. Pointless or insecure: take your pick.

I'm ditching Chrome because of the http:// stripping.

New development builds, and apparently the Beta build of Chrome for the Mac, strip the 'http://' part from the URL input field. Since I run Chromium for Linux, which uses nightly builds of Chrome, I am already affected by this retarded decision.

For this reason I will no longer be using Chrome, nor will I recommend Chrome to anybody anymore. In fact, I will actively recommend using any browser other than Chrome, including Internet Explorer 6.

I could explain why such a 'trivial' change upsets me so much that I'd stop using an otherwise… promising… product, but life is too short to argue with stupid people, so I'll just leave it at that.

Game review: Midnightclub Los Angeles (SUCKS)

I'm not normally into the game reviewing thing, but I'll make an exception for Midnightclub Los Angeles because it is, without a single doubt, the worst 'racing' (and I use the word 'racing' loosely here) game I have ever wasted money on.

The story: None. But that's okay. It's a racing game, it doesn't require a story.

The rest of the game, though, sucks so hard that it created a small black hole in my PlayStation 3, which then proceeded to break the game disc into 5 pieces. Oh, wait, no, that was me. This game is absolutely terrible. Know that old arcade racing game called 'Outrun'? It would have been better if they'd slapped some new graphics onto that and released it otherwise exactly as it was.

Midnightclub Los Angeles truly sucks from every angle you look at it. There's no manual gear change, the cars handle like shit, and you keep having to spend your hard-earned money on repairing your car every single time someone drives into it – but it never tells you how much the repair will cost. It's an arcade racer, for crying out loud! Cars should be indestructible (and have rockets on them, preferably). There are no prices listed for the upgrades, nor are the effects they'll have on the car made clear. The customization is a wreck, the menus are a disaster, and the mobile phone that constantly pops up must have been inspired by the one in GTA IV. It makes you want to pull out your hair and rip out your eyes.

When it comes to racing, Midnightclub Los Angeles is nothing more than an extreme exercise in frustration. During the daytime races, it's almost always near impossible to see where you're going. But since the game is called Midnightclub Los Angeles, it's usually night, so it's always impossible to see where you're going. The roads are packed with traffic, which can be fun, but not when there's shitloads of traffic in every single race. Now, I've played 'traffic' racers before, but there were always at least some races without it. Given that traffic in this game just suddenly spawns in the middle of the road, there's no way to avoid it. They probably put this in because the actual driving – and by driving I mean cornering – in Midnightclub is so immensely boring that during the fifth race I literally fell asleep, and when I woke up, I had still won. True story. Okay, okay, not a true story… it turns out I hit thirty-nine lampposts and six thousand and one other cars on the road, but that's still the same as every other race.

Did I mention this game is utterly frustrating yet? But wait, there's more! You see, it doesn't actually matter at all how well you do in this game. Why? Because it has that nice catch-up mechanic built in. You know, the kind where it doesn't matter if you give your opponents a head start of twenty-five minutes: you'll still be able to catch up with them (and they with you, if the roles are reversed) before the race ends. A true challenge. Fortunately, the huge, yellow, view-obstructing smoke plume makes the game lots more interesting. You never know what you'll find behind it! (A traffic jam, probably, or a house.) Another thing that makes the game so much more "fun" – by which I mean frustrating, a totally worthless piece of shit that needs to be sent to hell to suffer an eternity just like its creators – is that, while you have no idea where you're going and keep bumping into all kinds of shit that shouldn't have just spawned there, your opponents, being artificially intelligent with pre-knowledge of the entire track, every little shortcut, the traffic and everything else, never have this problem.

I could go on and on about this game, but what it comes down to is that it's simply not a racing game. It's not a racing simulator (which I certainly wasn't expecting), but it's not an arcade racing game either. This game doesn't have a single racing element in it. The best way to sum it up is: a "look at the map while avoiding cars and steer left/right depending on where the next blob on the radar tells you to go" game. There's no way to learn the track, since you always have to play multiple races right after each other. There's no way to see where you need to go, because the game only shows you one checkpoint of the actual racing track in advance (and sometimes these checkpoints are about 30 meters apart, in a corner). The game is simply no fun at all. All it is, is… you've guessed it…


If I awarded a score, stars, or funny little cars between zero and five, I'd give this game a -15 gazillion and a half. But since I don't do that, and hence cannot get the satisfaction of drilling this game six feet under, I'll just say that I literally (and when I say 'literally', I do mean literally, not figuratively) spat on this game, then proceeded to break the disc in half (in five, actually) and tossed it in the trash. I felt somewhat sorry for the other items in there, having to share their nice garbage bin with such a piece of true trash, but what can you do if you don't have an incinerator at home.

The most annoying thing about email…

When someone sends you an email, and not five minutes later proceeds to call you up or visit you to ask if you've already read their e-mail and what your response is gonna be. Then makes you explain your entire response and says "send me an email about that, will you".

Gift certificates

I don't understand gift certificates. I mean, the idea is quite good: a piece of paper that represents a certain value, which you can then trade for goods of some kind. Much better than dragging all that gold around all the time. So in that regard, gift certificates are an awesome idea. Except that we already have this thingy that's made of paper (most of the time) and represents a certain value. It's called "money".

The best thing about money is that you can spend it on anything, anywhere, whereas most gift certificates are only valid in certain stores. The only reason gift certificates make sense is if the giver wants you to spend the money in a certain store. I guess that's the real reason gift certificates exist: vendor lock-in. Another brilliant way of controlling how and where we spend our money. Capitalism, yay.

I prefer money.

Ubuntu sucks!

I used to be real pleased with Ubuntu, because it got a couple of things right that Debian didn't. But I've upgraded my Ubuntu install three times now, and every single time, everything broke.

One of those times, everything even remotely related to sound broke. This was because the geniuses at Ubuntu decided to include the shitty PulseAudio sound architecture way before it was ready. (Yeah, I know, not really PulseAudio's fault, but I'm just trying to get the PulseAudio crowd pissed at the Ubuntu crowd in the hopes that they'll gun them down.)

The last time I upgraded, from Hardy to Intrepid, all my sound stuff broke again, Flash didn't work anymore, my wireless broke (I've kind of fixed it now, but NetworkManager keeps dropping the wireless connection when I push too much data through it, and it can't come back up without a reboot – piece of shit), my hotkeys and all my keybindings broke, my sessions weren't saved anymore, my sound applet refuses to address the right mixer channel, my ~/.xinitrc is being ignored, and I can't rm -rf / anymore (my favorite pastime in Linux :-( ).

Each time I upgraded Ubuntu, I found myself doing a clean install a couple of days later because too many things had broken. And even after that, many of the broken things were still busted.

I've had it with Ubuntu. It's a piece of shit. I'm going back to Debian Stable.

Performance optimization: The first thing to do

In the last couple of years, I've done a lot of performance optimization. I've optimized raw C, Python and PHP code. I've optimized databases: tweaked settings, memory usage, caches, SQL code, the query analyzer, hardware and indexes. I've optimized templates for disk I/O, compilation and rendering. I've optimized various caches and all kinds of other stuff, like VMWare configurations.

Having done all that optimization, I've noticed a couple of things. One of them is how people in different roles look at optimization:

Programmers always want to optimize the code. The code isn't optimal, and a lot of speed can be gained from optimizing the code. This usually boils down to optimizing algorithms to be more efficient with either CPU cycles or memory. In contrast to the programmer, the system administrator always wants to tweak the configuration of either the Operating System, the application itself, or some piece of middle-ware in between. Meanwhile, managers always want to optimize the hardware. "Developer time is more expensive than hardware", they quip, so they decide to throw money at faster and more hardware, instead of letting the developer optimize the code.

What none of them realise is that they're all wrong. None of these approaches is any good. When it comes to optimization, all of the people above are stuck in their own little world. Managers love the notion that programmers "don't understand cost versus benefit", and a saying such as "Developer time is more expensive than hardware" has a really nice ring to it. Managers have a high-level view of the application's ecosystem, so they search for the solution in the cheapest component: hardware. Programmers, on the other hand, know the system from a very low-level point of view, and they naturally love the fact that managers don't understand technology. They are intimately familiar with the code of their application, or the database running behind it, so they know a lot of its weak spots and assume the optimization is best performed there. Systems administrators have limited options either way, so they stick to what they can influence: configuration.

An excellent example of this is a recent post on the JoelOnSoftware blog. I'll recap the main points I'd like to illustrate here:

One of the FogBugz developers complained that compiling was pretty slow (about 30 seconds). […] He asked if it would be OK if someone spent a few weeks looking for ways to parallelize and speed it up, since we all have multiple CPU cores and plenty of memory. […] I thought it might be a good idea to just try throwing money at the problem first, before we spent a lot of (expensive and scarce) developer time. […] so I thought I'd experiment with replacing some of the hard drives around here with solid state, flash hard drives to see if that helped.

Suddenly everything was faster. Booting, launching apps… even Outlook is ready to use in about 1 second. This was a really great upgrade. But… compile time. Hmm. That wasn't much better. I got it down from 30 seconds to … 30 seconds. Our compiler is single threaded, and, I guess, a lot more CPU-bound than IO bound.

This is an excellent example of how a manager would try to solve optimization problems. At the start of the quote we see the typical way a developer would tackle the problem: parallelize and speed up. In other words: low-level optimizations.

Now, it turns out Joel was wrong: solid state disks didn't help, since their problem wasn't with disk I/O at all. But that doesn't mean the developer was right either! I like to see it as a kind of Schrödinger's Cat situation: both are wrong, until one is proven right. Why is that? Because they have no idea what the problem is! All they're doing is guessing at solutions in the hope that one of them happens to fix it. We can see this quite clearly: after having dismissed disk I/O as the problem, they assume it must be because "our compiler is single threaded, and, I guess, a lot more CPU-bound than IO bound". Again, they jump to conclusions without knowing what the problem is. So now they've not only spent a lot of money on solid state disks without fixing the problem, they're about to spend weeks of developer time without knowing whether that will fix it either.

So, here is my point:

The most important thing about optimization is analysis.

You can't fix a problem by simply trying different solutions to see if they work. In order to fix a problem, you have to understand the problem first.

So, please, if you're a developer, don't assume saving a couple of CPU cycles here or there will solve the problem. And if you're a manager, don't assume some new hardware will solve the problem. Do some analysis first. Finding out if disk I/O, memory, CPU cycles or single threading is the problem is really not that hard if you spend a little time thinking about it and benchmarking various things. And in the end, you'll have a much better overview of the situation and the problem, and you'll be able to come up with specific solutions which will actually work.
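A minimal sketch of what that analysis step can look like in Python: profile first, and let the numbers point at the bottleneck before anyone buys hardware or rewrites code. The workload functions below are stand-ins for illustration, not code from any real project:

```python
import cProfile
import io
import pstats

def render_templates():
    # Stand-in for a genuinely expensive, CPU-bound step.
    return sum(i * i for i in range(200_000))

def load_config():
    # Stand-in for a cheap step people often "optimize" anyway.
    return [i for i in range(1_000)]

def request_handler():
    for _ in range(50):
        render_templates()
        load_config()

# Measure the real workload instead of guessing.
profiler = cProfile.Profile()
profiler.enable()
request_handler()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
report = out.getvalue()
print(report)
```

The report makes it obvious which function dominates the cumulative time, so the optimization effort (code, configuration or hardware) can be aimed at the actual problem instead of at a guess.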

And that's how you save money.