A very lively discussion indeed; sorry I didn't join it earlier. Here is my opinion: I support DevIL (as discussed in
the other thread), but I don't hold a strong opinion there because I'm not well versed in the various imaging libraries and their pluses and minuses. Note that SDL does have an imaging interface (in Gentoo, it's a separate SDL-image package), but personally, I've seen some ugly bugs in it (mostly thread-safety issues) -- I guess that's not a big deal if you're only loading from one thread.
However, I don't really care what you use. If you use an image format that doesn't support an alpha layer, it's possible to use a second image to define the alpha layer. If you use an image format that doesn't support animation, you can use a series of image files for the animation frames -- that's what hailstone was saying in this post:
I know of games using sprite sheets for animated images but there might be performance issues using it on a model and it would increase the size and complexity.
Memory footprint would increase, but complexity wouldn't increase much (well, beyond the cost of supporting animated textures at all), because if we do support animated textures, we would have to store each frame uncompressed in memory anyway. So this becomes a matter of how we load each of those frames.
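To make the "store each frame, then pick one at render time" idea concrete, here is a minimal sketch of frame selection. The function name and the idea that frame duration comes from the XML are my assumptions, not anything agreed on in this thread.

```python
# Hypothetical sketch: selecting which uncompressed frame of an
# animated texture to bind for the current render pass. Assumes
# frames are stored in a list and the per-frame duration (ms) is
# read from the XML spec discussed below.

def current_frame(elapsed_ms: int, frame_count: int, frame_duration_ms: int) -> int:
    """Return the index of the animation frame for this moment in time."""
    return (elapsed_ms // frame_duration_ms) % frame_count

# 4 frames at 100 ms each: 250 ms into the loop, we are on frame 2
print(current_frame(250, 4, 100))  # 2
```

The point is that once all frames sit uncompressed in memory, "animation" at render time is just an index computation plus a texture bind.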
No, sprites are soooo outdated. I think when playing glest with its 36 units - and them all animated by sprites - my cpu will get melted... :roll:
Try not to think inside the box. The config will have an option for whether or not to enable animated textures. Don't assume automatically that because we use the word "sprite" it will melt your CPU or look like Commodore 64 sprites (or Atari 2600 sprites, for that matter). Actually, the textures will reside in the memory of the graphics card (as long as it has sufficient room), and swapping textures from frame to frame has almost ZERO impact!! If you lack sufficient graphics memory, then yes, there is an overhead, because more throughput is required between main memory and the graphics card, so being able to disable it would indeed help on slower machines (also, see my comment on LOD support below). Also, remember that the bottleneck of a particular performance problem may not always be where you expect it: it can be CPU, GPU, I/O throughput, low memory & the swapping that accompanies it, etc.
Before doing too much work, know that it doesn't really matter. Take the .dds textures: sure, they are great, and seem to be small. But they only remain compressed in DirectX, because it is DirectX's format! If you try to run it with OpenGL, the dds texture will have to be decoded anyway.
And most importantly, although I'm not sure of it for all formats, a jpeg for example expands to bmp size in video memory! Should do some research on this, but it's likely to be the same with other file formats: they won't remain compressed in video memory. That's what makes dds so special, if it does remain compressed under OpenGL.
Why would dds remain compressed in OpenGL, much less DirectX? Even on the video card, you don't want to store textures compressed! That would mean decompressing the texture EVERY time it is used, for EVERY frame. If a particular texture is used on 40 models or polygons and you're rendering 80 frames per second (or trying to), you end up having to decompress it 3200 times per second! No, you don't store these things compressed in memory (either main memory or video memory). Maybe my common sense doesn't serve me well (as I don't know DirectX) and maybe DirectX can store these compressed in main memory, but the idea seems dubious to me.
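The "jpeg expands to bmp size" point above is easy to sanity-check with arithmetic: once decoded, an image's in-memory footprint depends only on its dimensions and channel count, not on how small the file on disk was. A quick sketch (assuming the usual 1 byte per channel):

```python
# Back-of-the-envelope check: the decoded footprint of a texture is
# width x height x channels, regardless of the compressed file size.

def decoded_size_bytes(width: int, height: int, channels: int = 4) -> int:
    """Uncompressed in-memory footprint of a decoded image."""
    return width * height * channels

# A 1024x1024 RGBA texture occupies 4 MiB decoded, whether it came
# from a 100 KB jpeg or a 4 MB bmp.
print(decoded_size_bytes(1024, 1024) // (1024 * 1024))  # 4
```

So the choice of file format affects download and disk size, not the memory the game uses at runtime.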
So, back to separating out animation frames or the alpha or team-color layers into separate image files: all of this will have to be defined in an XML file somewhere, and we'll have to come to agreement on a sensible spec for it, but that shouldn't be too hard. Also note that with animated gifs or mngs, we would have to specify the animation speed anyway (I guess the default would be to use the animation speed specified in the image?). Finally, we may even want to get fancy and support morphing between images in an animation sequence to keep input files smaller and the net result prettier (much like we do with g3d models).
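Just to have something concrete to argue about, here is what such an XML fragment *might* look like. Every tag and attribute name here is made up by me; this is a strawman for the spec discussion, not a proposal we've agreed on.

```xml
<!-- Purely illustrative; none of these tag or attribute names are settled. -->
<texture name="fire">
  <frames duration="100">              <!-- ms per frame; overrides any speed stored in the image -->
    <frame image="fire_0.png"/>
    <frame image="fire_1.png"/>
    <frame image="fire_2.png"/>
  </frames>
  <alpha image="fire_alpha.png"/>      <!-- separate image defining the alpha layer -->
  <teamcolor image="fire_team.png"/>   <!-- separate image defining the team-color mask -->
</texture>
```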
Also, the g3d format doesn't support animated skins, so that's another issue: we would have to bump the format version or (perhaps an uglier solution) override it with info passed via the XML tag for the model. For that matter, we would have to do the same for using a separate image file for the alpha and team color layers.
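Since the "separate image file for the alpha layer" idea comes up twice above, here is a minimal sketch of what the loader would actually do: interleave the color image's RGB bytes with the second image's per-pixel alpha bytes into one RGBA buffer. The function name is hypothetical; a real loader would get the raw bytes from whatever imaging library we settle on.

```python
# Hypothetical sketch: merging a color image with a separate
# grayscale alpha image into a single RGBA pixel buffer.

def merge_alpha(rgb: bytes, alpha: bytes) -> bytes:
    """Interleave RGB triples with per-pixel alpha bytes into RGBA."""
    if len(rgb) != 3 * len(alpha):
        raise ValueError("color and alpha images must have the same pixel count")
    out = bytearray()
    for i, a in enumerate(alpha):
        out += rgb[3 * i:3 * i + 3]  # copy R, G, B from the color image
        out.append(a)                # take A from the second image
    return bytes(out)

# Two red pixels: the first opaque, the second fully transparent.
rgba = merge_alpha(bytes([255, 0, 0, 255, 0, 0]), bytes([255, 0]))
print(rgba)  # b'\xff\x00\x00\xff\xff\x00\x00\x00'
```

The same interleaving trick would work for a team-color mask, just written into its own channel or a second buffer.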
As far as DDS, BLP and other proprietary formats go, as long as we can read them from Linux & Mac w/o licensing issues, I don't care. If not, the answer is absolutely no.
I have no problem with animated gifs, color limitations and all. I also have no problem with jpegs, although I despise super-lossy compression for textures used in a 3d game. As somebody mentioned before, we're going to decode them into a bitmap in memory anyway. I just *ask* that if you use jpegs, please keep quality very high (i.e., compression low)!
Back to memory consumption issues, I'm not 100% sure of exactly how this works with the OpenGL portion of the image. Ideally, the textures are fed to the OpenGL library when the game starts up, but they do stay in main memory (uncompressed). I'm not certain, but they may be getting fed to the OpenGL interface at every frame. But if they are fed when the game starts, and your graphics card has sufficient memory, then even though the OpenGL library retains a pointer to where the uncompressed image data is in main memory, it may not attempt to access it for the remainder of game play. If the data is not accessed for a while, the kernel (of whatever OS you are running) can decide to write the memory pages that hold them to swap, leaving them as "cached swap pages" and then if it runs low on memory, drop them entirely to use those memory pages for something else. So even though all of these images remain in memory uncompressed, under ideal circumstances, they may not end up taking up actual memory (i.e., residing physically in memory).
Finally, please keep in mind a very important planned feature of 0.3: LOD support (
https://bugs.codemonger.org/show_bug.cgi?id=20). When this feature is in place, each image will be stored multiple times in memory: once at full size, once at half, once at quarter, once at eighth, etc. When you view a model or polygon that has a texture mapped to it, the size of the texture used will depend upon the distance -- this keeps the amount of GPU work spent on texture mapping down to a minimum. The same will be true for models: each model will be stored multiple times in memory, scaled down to lower polygon counts. Ideally, this will be transparent to the player, because by the time their camera is far enough away from a model, the change in quality will be unnoticeable (in theory!)
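For anyone worried about the memory cost of that LOD scheme: the halved copies add surprisingly little. A quick sketch (my own arithmetic, assuming halving both dimensions at each level) shows the whole chain costs only about a third more than the full-size texture alone:

```python
# Rough memory cost of the planned LOD chain: each texture stored at
# full, half, quarter, eighth... size, halving both dimensions per level.

def lod_chain_bytes(width: int, height: int, channels: int = 4) -> int:
    """Total bytes for a texture plus all of its halved LOD levels."""
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * channels
        width //= 2
        height //= 2
    return total

base = 1024 * 1024 * 4  # full-size 1024x1024 RGBA texture: 4 MiB
ratio = lod_chain_bytes(1024, 1024) / base
print(round(ratio, 3))  # ~1.333: about one-third overhead for the whole chain
```

Since each level has a quarter of the pixels of the one above it, the series converges toward 4/3 of the base size, so LOD is cheap in memory even before the GPU savings.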