If I interpret the code correctly, G3D models are rendered in the following way:

- Each model is a number of meshes. (correct)
- Each mesh has a number of frames.
- Each frame describes the coordinates of each vertex. Each frame must contain the same number of vertices: if a wagon has a wheel that turns, the wheel might have 10 frames, each describing the wheel at a different point of rotation. (correct)
- The positions of the vertices for any given rotation are computed by the model-making tools and not by Glest. (incorrect, see below)
- When loaded in the Glest engine, each frame is stored in a vertex array.
- Each screen update, the appropriate frame is selected for each mesh, and the appropriate vertex array is drawn.

Each frame, the unit's 'progress' (a float) is used to select two frames, based on the anim-speed of the skill, and the vertex array sent to the video card is interpolated between these two frames (see InterpolationData). Depending on the movement involved, very few frames can be used and still be smoothly animated in Glest (by linear interpolation; circular movements require more frames to look correct).
My immediate thought is this:
On account of the above, VBOs for units are not going to work; all those frames of vertex data are far too much to be putting in video memory.
VBOs could well be worthwhile for tileset objects, which aren't animated and account for a fair proportion of the typical rendering time. Terrain and water rendering desperately need to be converted to VBOs, using GLSL for the actual rendering; this will be happening in the near future. This has been semi-discussed recently on the dev mailing list, sign up!
[img]http://img20.imageshack.us/img20/5610/glest1.jpg[/img]

It's not possible to express a character animation (e.g. walking) with matrix transformations. Too many parts of the mesh move differently.
Mesh4 8 228 432 y y y y y y y IMMUTABLE
Mesh4 8 393 1200 x x x x x x x mutable
Mesh4 8 9 24 y y y y y y y IMMUTABLE
Mesh4 8 86 264 y y y y y x y mutable
Mesh4 8 29 108 y x x x y x x mutable
Mesh4 8 67 216 y y y y y y y IMMUTABLE
Mesh4 8 116 168 x x x x x x x mutable
Mesh4 8 116 168 x x x x x x x mutable
I've diffed each mesh for each frame; in the table above, all the y's are frames that are exact duplicates.

Interesting. Searching for simple transformations is probably a bit too complex, especially later in the file format. Immutable meshes could be interesting.
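A duplicate-frame scan of the kind described above could look like this (a sketch only; the names are made up, and the real diffing was done on the G3D files themselves):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Two frames count as duplicates when every vertex matches within a small
// tolerance (exported floats may differ in the last bits even when the
// mesh didn't actually move).
bool framesEqual(const std::vector<Vec3>& a, const std::vector<Vec3>& b,
                 float eps = 1e-6f) {
    if (a.size() != b.size()) return false;
    for (size_t i = 0; i < a.size(); ++i) {
        if (std::fabs(a[i].x - b[i].x) > eps ||
            std::fabs(a[i].y - b[i].y) > eps ||
            std::fabs(a[i].z - b[i].z) > eps) return false;
    }
    return true;
}

// A mesh is "immutable" when every frame duplicates frame 0, so a single
// copy of the vertex data (and a single VBO) would do.
bool isImmutable(const std::vector<std::vector<Vec3> >& frames) {
    for (size_t f = 1; f < frames.size(); ++f) {
        if (!framesEqual(frames[0], frames[f])) return false;
    }
    return true;
}
```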
We'd need to add a version 5 of the G3D format to support any of this, of course.

That's the main obstacle here. There are also other possible optimizations, like reorganizing the vertex data to use triangle strips, which as far as I know is the fastest draw primitive. The half-edge data structure is a good way to get this done. I'm not sure if this helps much, though, as our models are quite low poly.
If that gets done, then we could potentially get a big boost when it comes to tilesets. As I recall, most of the models from the default tilesets are G3D version 2, and trees eat a lot of performance.
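For a sense of why strips help: a strip of n triangles needs n + 2 indices versus 3n for a plain triangle list (the bigger win is vertex cache reuse, but the index-data saving alone is roughly 3x). A purely illustrative helper:

```cpp
// Index counts for drawing n triangles as a plain list vs. one strip.
// A strip shares two vertices with the previous triangle, so after the
// first triangle each additional one costs a single index.
int listIndexCount(int triangles)  { return 3 * triangles; }
int stripIndexCount(int triangles) { return triangles + 2; }
```

For a 1000-triangle mesh that's 3000 indices as a list versus 1002 as a single strip, though low-poly models see proportionally less benefit, as noted above.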
[img]http://img151.imageshack.us/img151/3180/glest1.png[/img]
[img]http://img219.imageshack.us/img219/6836/glest4.png[/img]
[img]http://img30.imageshack.us/img30/797/glest2.png[/img]
[img]http://img593.imageshack.us/img593/6013/glest3.png[/img]

Quote: On account of the above, VBOs for units is not going to work, all those frames of vertex data is far too much to be putting in video memory.
I may not be totally understanding this, but if it increases performance without any cons, and you're capable of programming this (you seem to be a good programmer based on what I've seen), why don't you download the SVN and implement it yourself? Then the GAE team could take a look and possibly merge it into Glest (and, on that topic, perhaps the changing texture coords that you pointed out in a different topic).

+1 :thumbup:
So it can go from 80MB down to 50MB. However, on reflection, 80MB isn't so scary for geometry?
What's keeping us out of VBO territory today other than writing the code?
Quote: So it can go from 80MB down to 50MB. However, on reflection, 80MB isn't so scary for geometry?
Not so scary, but most models in mods are probably bigger, and any VRAM saved on vertex data could be used for textures, so it would certainly be worthwhile.
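To put rough numbers on that (all figures hypothetical; the 80MB was for one particular set of models): per-frame vertex data costs vertices × (position + normal) × 4 bytes per frame, while texture coordinates, if I read the G3D format right, are stored once per mesh rather than per frame. So memory grows linearly with frame count, and immutable meshes collapse to a single frame's worth.

```cpp
#include <cstddef>

// Rough bytes of vertex data for one mesh. Positions and normals are
// 3 floats each, stored per vertex per frame; texture coords (2 floats)
// are counted once per vertex. Sketch arithmetic, not the real loader.
std::size_t meshBytes(std::size_t vertices, std::size_t frames) {
    const std::size_t posNormal = 6 * sizeof(float); // 24 bytes/vertex/frame
    const std::size_t texCoord  = 2 * sizeof(float); // 8 bytes/vertex, once
    return vertices * (frames * posNormal + texCoord);
}
```

For example, a hypothetical 1000-vertex mesh with 10 frames would be about 248KB; drop the duplicate frames of an immutable mesh and it falls to about 32KB.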
Yggdrasil is looking into using VBOs for static (single-frame) meshes; if you can reduce the load for animated meshes, please do!
[url=https://github.com/williame/GlestToolsy]https://github.com/williame/GlestToolsy[/url] includes a renderer, and this renderer stores all the frames, indices, normals and such in VBOs.

For normal maps, I had in mind a separate texture that would be read and interpreted as the normal map. Can the engine do it?
Still experimental, but yes. Start the program with '-test tr2'.

Would that have anything to do with the "Render Surface" stats? That's the main difference I'm seeing.
You bet. I'm getting 5ms with, 168ms without. :thumbup:

:o Wow.
Yeah, but if you're saying 2.0+, then you can go wild with VBOs and shaders and things?

But nobody could possibly be using a default Windows driver, could they? Wouldn't graphics drivers usually have 2.0+?
As I pointed out, the MS Windows software driver is strictly OpenGL 1.1.
I say that the Glest forks should stop pandering to people with crappy integrated graphics cards. Anyone who has a dedicated graphics card should meet the standard (unless it is really old).

Yeah, I've got integrated graphics and I can play Warcraft 3 without any trouble, and that game is way ahead of us in terms of graphics even though it's a few years old. I'm sure we can afford to push the envelope at least that far.
The devs should push on with improving Glest's graphical capabilities instead of maintaining ridiculous support for machines that were never intended to play games in the first place!
It's all well to say you want fancier effects, but Warcraft has a fairly comparable engine and the difference is artwork.
The way I see it, it's not integrated cards that's the problem, but simply very outdated ones. However, I agree, we shouldn't be wasting our time supporting them. We can't just hold the game back so that those with 10+ year old computers too stubborn to get a new one can still play. They are an extreme minority anyway. After all, you can get a half decent computer for $400 at your nearest Walmart (I didn't say good computer, just half decent).

^this
Quote: All well to say you want fancier effects but warcraft has a fairly comparable engine and the difference is artwork.

I don't know, but this seems like more than just artistic quality:
[img]http://classic.battle.net/war3/images/human/units/animations/archmage.gif[/img]
[img]http://classic.battle.net/war3/images/orc/units/animations/spiritwalker.gif[/img]
I'm not pushing too enthusiastically for graphical improvements, as graphics aren't that high a priority for me, but I don't think we should shy away from them for the sake of poor hardware either.
From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml:

Quote: Non-power-of-two textures are supported if the GL version is 2.0 or greater, or if the implementation exports the GL_ARB_texture_non_power_of_two extension.

It could be worth a look, although from what I remember Unreal Engine requires power-of-two textures and they seem to be doing OK.
What about the glowing staff and the particles that follow the censer? This is stuff you wouldn't be able to do in an XML file, to the best of my knowledge, because it is directly attached to vertices/polygons/whatever. It would have to be part of the model data itself, and both things (particles and glow) are easy enough to do in Blender. My point was that, even at roughly equivalent poly count and texture dimensions, it's stuff like that that makes their units look better.

For the left, my guess is that since it's rendered to 2D, it was done as a post-process effect in an image manipulation program. For the right, it is easy enough to specify in XML as long as it uses a skeleton structure. The problem with Glest is that it doesn't use a skeleton structure. It could still be possible to describe a path for the particle effect, but it would be trickier to sync up with the animation.
Quote: For the left my guess is that since it's rendered to 2D it was done as a post process effect in an image manipulation program.

Actually, it appears that way in the game as well, and the game doesn't use any pre-rendered sprites.
Quote: For the right it is easy enough to specify in xml as long as it uses a skeleton structure. The problem with Glest is that it doesn't use a skeleton structure. Although it could still be possible to describe a path for the particle effect it would be trickier to sync up with the animation.

Does it seem plausible to use Blender particle data in some future model format in GAE? I think that would be far simpler and more intuitive for everyone involved.
Quote: Actually, it appears that way in the game as well, and the game doesn't use any pre-rendered sprites.

Whoops. I thought the left one was from an earlier version of Warcraft that is 2D.
Quote: Does it seem plausible to use Blender particle data in some future model format in GAE? I think that would be far simpler and more intuitive for everyone involved.

I think it might be possible to add named nodes/vertices that could be referred to in XML, like an effect point. Then you move those as part of the animation. Although, like Omega said, it's more likely we'll adopt a skeleton format; I don't plan to do that in the near future, so I'm not sure what's going to happen.
Personally I'm in favour of the latter, but I imagine modders may disagree on this?

That's cool with me.
Quote: However, the export script can't do anything about it; texture coordinates range from 0.0 to 1.0, remember ;-)

It can. The trick is not to use the texture coordinates :P http://www.blender.org/documentation/246PythonDoc/Image.Image-class.html
def isPowerOfTwo(n):
    return n > 0 and (n & (n - 1)) == 0

if material.getTextures()[0]:
    image = material.getTextures()[0].tex.getImage()
    if image:
        width, height = image.getSize()
        if not (isPowerOfTwo(width) and isPowerOfTwo(height)):
            print "Texture isn't power of two, can't export."
            exit()
Cool, this would be worthwhile then...
It should also be checked in the engine, in case someone uses an export script without the check. Unless we change the model version; if we were to do that, we should also export tangents, the normal map filename and the specular map filename.

And also because the texture can be changed at any time after the export!
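An engine-side version of that check could be as simple as the standard bit trick (a sketch; the function names are hypothetical, not Glest's actual texture loader):

```cpp
#include <cstddef>

// True when n is a power of two: such numbers have exactly one bit set,
// so n & (n - 1) clears that bit and leaves zero.
bool isPowerOfTwo(unsigned n) {
    return n != 0 && (n & (n - 1)) == 0;
}

// Reject texture sizes the fixed-function / GL 1.x path can't handle.
// A caller would fall back to rescaling, or refuse to load the model.
bool textureSizeOk(unsigned width, unsigned height) {
    return isPowerOfTwo(width) && isPowerOfTwo(height);
}
```

Checking at load time catches exactly the case mentioned above: a texture swapped in after export, which no export-script check can see.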
I see no reason to bother with non-power-of-two textures. Keep things the way they are.

Right now, non-power-of-2 textures are supported, and they shouldn't be, is the general consensus.
It would be nice to allow the G3D exporter to export bump/specular maps for models, but I can't really see any advantage over Blender's native export of bump maps (tangent-space maps in Blender) and simply adding "normal" to the filename.
Quote: It would be nice to allow the G3D exporter to export bump/specular maps for models, but I can't really see any advantage over Blender's native export of bump maps (tangent space maps in Blender) and simply adding "normal" to the filename.
The filename trick only works for G3D v3 if I read the code correctly.
for each tile:
    if tile not visible:
        continue
    if texture != prev_texture:
        glBindTexture(.., texture)
    glBegin(GL_TRIANGLE_STRIP)
    ... draw quad
    glEnd()
    prev_texture = texture
glBindTexture(..., all_tiles_texture)
for block in map_blocks:
    if block not visible:
        continue
    ... draw vbo
Any status or stats on the migration to VBOs and shaders?
...
You might divide it up so you have a set of blocks for each tile or batch of tile textures, or you might do as I found fastest for my globes: put all the vertices in a single VBO and use range-elements drawing. But the fundamental thing is to move from drawing each tile individually to drawing big blocks of map in a single call. The GPU is very good at skipping those fringes around the meshes that fall outside the screen area. You can go further by putting these blocks of tile vertices into a single VBO and using glMultiDrawElements to draw all those blocks.
Doubtless you're all way ahead of me on this line of thinking, and I'm just stating the obvious.
// build index array
m_indexArray.clear();
int tileCount = 0;
SceneCuller::iterator it = culler.tile_begin();
for ( ; it != culler.tile_end(); ++it) {
    Vec2i pos = *it;
    if (!mapBounds.isInside(pos)) {
        continue;
    }
    int ndx = 4 * pos.y * (m_size.w - 1) + 4 * pos.x;
    m_indexArray.push_back(ndx + 0);
    m_indexArray.push_back(ndx + 1);
    m_indexArray.push_back(ndx + 2);
    m_indexArray.push_back(ndx + 3);
    ++tileCount;
}
// zap
glDrawElements(GL_QUADS, tileCount * 4, GL_UNSIGNED_INT, &m_indexArray[0]);