It was late morning when I got that sinking feeling. What I thought was an easy job was instead turning into a nightmare. I thought to myself, why oh why did people invent so many pointless variations?

The task at hand was adding 16-bit texture support to Warp3D Nova. This needed special handling because AmigaOS is a big-endian system but AMD's Southern Islands GPUs are little-endian. Big-endian systems store numbers with the Most Significant Byte (MSB) first, whereas little-endian systems store them the other way round: Least Significant Byte (LSB) first. Anyway, I thought that I simply needed to swap the bytes round, and then I'd be done. How wrong I was.
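If byte order really had been the only issue, the fix would have been a one-line byte swap. Here's a minimal sketch of what I expected the whole job to amount to (the helper name is made up, not actual Warp3D Nova code):

```c
#include <stdint.h>

/* Swap the two bytes of a 16-bit texel so data written by a big-endian
 * CPU ends up in the little-endian order that the GPU reads.
 * (Hypothetical helper; the real driver code differs.) */
static inline uint16_t swapBytes16(uint16_t value)
{
    return (uint16_t)((value >> 8) | (value << 8));
}
```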

A simple test program generated textures that should look like this (below):

[Image: W3DNTextureTest reference output]

Yes, that's a really basic image, but it's enough to confirm whether texture data is in the right format or not. Testing the RGB565 format gave me a confusing result:

[Image: W3DNTextureTest RGB565, incorrect output]

I knew this couldn't be the result of endianness issues. Had I got the colour channels the wrong way round? A quick search confirmed that the test data was correct. Weird.
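For reference, the test data was packed the conventional way, with red in the top five bits. Roughly like this (a sketch with a made-up name, not the actual test program):

```c
#include <stdint.h>

/* Conventional RGB565 packing: red in bits 15-11, green in bits 10-5,
 * blue in bits 4-0. Pure red is 0xF800, pure green 0x07E0, pure blue 0x001F. */
static inline uint16_t packRGB565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```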

Then things got even weirder. The RGBA5551 test resulted in:

[Image: W3DNTextureTest RGBA5551, incorrect output]

Whaaat? Experimenting with different values only made things more confusing. It looked like the GPU was reading the data in ARGB1555 order. That, however, was an illusion. Testing the ABGR1555 format only confused me more.

I eventually realized that Southern Islands GPUs always number colour channels starting from the least significant byte or bit. So, for 16-bit formats the channels end up in right-to-left order rather than left-to-right. Hence, red and blue were swapped in RGB565, and the black areas in the RGBA5551 test appeared because the red channel was being treated as the alpha (transparency) channel.
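Laid out bit by bit, this is my reading of the mismatch (an illustration, not driver code):

```c
#include <stdint.h>

/* The layout I wrote, reading "RGB565" left-to-right from the most
 * significant bit:       bits 15-11 = R, 10-5 = G,  4-0 = B
 * How Southern Islands reads it, numbering channels from the least
 * significant bit:       bits  4-0 = R, 10-5 = G, 15-11 = B
 *
 * So pure red and pure blue trade places: */
static const uint16_t pure_red_as_written  = 0xF800; /* GPU displays this as blue */
static const uint16_t pure_blue_as_written = 0x001F; /* GPU displays this as red  */

/* RGBA5551 fares worse: I put alpha in bit 0, but the GPU expects red in
 * bits 4-0 and alpha in bit 15, which is exactly where the top bit of my
 * red channel sat. Low-red colours therefore came out transparent (black). */
```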

With that figured out, I quickly implemented the necessary channel remapping code. Still, the whole task took several times longer than expected.
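For illustration, here's a minimal sketch of the kind of remapping involved for RGBA5551 (made-up helper name; the actual driver may well do this via the GPU's channel-swizzle settings rather than touching the pixel data):

```c
#include <stdint.h>

/* Hypothetical example: repack an RGBA5551 texel written in left-to-right
 * channel order into the order the GPU expects, with red in the low bits
 * and alpha in the top bit. */
static uint16_t remapRGBA5551(uint16_t texel)
{
    uint16_t r = (texel >> 11) & 0x1F;
    uint16_t g = (texel >>  6) & 0x1F;
    uint16_t b = (texel >>  1) & 0x1F;
    uint16_t a =  texel        & 0x01;

    return (uint16_t)((a << 15) | (b << 10) | (g << 5) | r);
}
```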

Too Many Unnecessary Choices

My gripe with the whole thing is that we really don't need so many pixel formats. The RGB565 and RGBA5551 formats provide different features, so there's a valid reason for having both. However, RGBA5551 and ABGR1555 deliver exactly the same performance and the same number of colours; we only need one of them. Likewise, with 32-bit pixel formats there are RGBA8888, ABGR8888, ARGB8888, and BGRA8888. The only difference is how the values are encoded into a 32-bit number.
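To make that concrete, here's one opaque colour (R=255, G=128, B=0, A=255) packed into those four layouts, reading each format name left-to-right from the most significant byte (that convention itself varies between APIs, which is part of the problem):

```c
#include <stdint.h>

/* The same colour, four different bit shuffles. No extra information,
 * no extra colours, just more variations to implement and test. */
enum {
    ORANGE_RGBA8888 = 0xFF8000FFu,  /* R, G, B, A */
    ORANGE_ABGR8888 = 0xFF0080FFu,  /* A, B, G, R */
    ORANGE_ARGB8888 = 0xFFFF8000u,  /* A, R, G, B */
    ORANGE_BGRA8888 = 0x0080FFFFu   /* B, G, R, A */
};
```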

This multitude of variations doesn't add anything useful. Alas, poor souls like me have to implement them in drivers and test them to make sure they work, even though most people will stick to just a few formats.

The big-endian versus little-endian split is another pointless difference that has caused a lot of unnecessary work and frustration. Using a little-endian GPU with a big-endian CPU inflates the workload. In fact, IBM started migrating PowerPC Linux from big-endian mode to little-endian mode precisely to avoid the endianness issue (x86's dominance means that little-endian is better supported).

1 Switch Gives 2 Options; 64 Switches Gives 18,446,744,073,709,551,616

1 switch gives 2 options. 2 switches give 4, 3 give 8, 4 give 16, 5 give 32, and so on. Each new switch doubles the number of possibilities, so the complexity grows exponentially.
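If you want to see the doubling for yourself, a few lines of C are enough:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* n switches give 2^n combinations; 64 switches give 2^64, which is
     * 18,446,744,073,709,551,616 (one more than a 64-bit value can hold). */
    uint64_t combos = 1;
    for (int n = 1; n <= 8; n++) {
        combos *= 2;
        printf("%d switches: %llu combinations\n", n, (unsigned long long)combos);
    }
    return 0;
}
```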

What I'm trying to illustrate is that you should try to keep things as simple as possible. Don't add unnecessary options and variations to your code. Ask yourself if that new switch/option/whatever you're planning really is worth it. Does it add enough new functionality to justify its existence? Not just new functionality, new useful functionality. If it is genuinely useful then great. If not, then drop it. Simplicity is worth making an effort for.