hurray for titi! I spent 20 minutes typing a response and then I learned some things from your question.
So first off, hailstone has it pretty close. Actually, frameUpdateBegin() and frameUpdateEnd() (for the network interface classes) are called before and after the "world" is updated. The only reason there are two calls is that frameUpdateBegin() marks the beginning of the frame (tells the ClientInterface or ServerInterface what the new frame is) and does any pre-world-update calculations, and then frameUpdateEnd() tells the network layer to send any data queued up while updating the world. I want it sent at the end of updating the world so that it will be smaller (fewer packets means fewer headers and less other overhead, plus if we're compressing the data, you get better compression ratios when compressing larger amounts of data).
But the short answer to titi's question, as it stands now, is that I don't care about key frames anymore and I just send commands to everybody as soon as they are issued. More precisely, C1 would send the new command to both S and C2, S would know not to relay the command to C2, and both would process the command as soon as they received it (no matter which frame it was). However, I have a better idea now.
Before I go into that, the new class NetworkMessageStatus is used to transmit status changes between clients, servers and peers, and is also the base class for many other messages (intro, ready, launch, command list, etc.). It's one of those bit-mangling classes, so let me just post its current code (which is sure to change further). Note that the term "peer" is generic and can refer to any remote relationship (server to client, client to server or client to client).
class NetworkMessageStatus : public NetworkMessage {
    enum DataMasks {
        DATA_MASK_SOURCE           = 0x0000000fu,
        DATA_MASK_STATE            = 0x000000f0u,
        DATA_MASK_PARAM_CHANGE     = 0x00000700u,
        DATA_MASK_GAME_PARAM       = 0x00001800u,
        DATA_MASK_GAME_SPEED       = 0x0000e000u,
        DATA_MASK_HAS_FRAME        = 0x00010000u,
        DATA_MASK_HAS_TARGET_FRAME = 0x00020000u,
        DATA_MASK_FRAME_IS_16_BITS = 0x00040000u,
        DATA_MASK_IS_RESUME        = 0x00080000u // was missing from this enum even though
                                                 // is/setResumeSaved() use it; bit 19 assumed
    };

    uint8 connections;  /**< bitmask of peers to whom a connection is established */
    uint32 data;        /**< contains various data packed into 32 bits */
    uint32 frame;       /**< (optional) the current frame at the time this message was generated */
    uint32 targetFrame; /**< (optional) the frame that actions specified in this packet are intended for */

public:
    NetworkMessageStatus(NetworkDataBuffer &buf, NetworkMessageType type = NMT_STATUS);
    NetworkMessageStatus(const Host &host, NetworkMessageType type = NMT_STATUS,
            bool includeFrame = true, GameSpeed speed = GAME_SPEED_NORMAL, uint32 targetFrame = 0);
    virtual ~NetworkMessageStatus();

    uint8 getConnections() const        {return connections;}
    bool isConnected(size_t i) const    {assert(i < GameConstants::maxPlayers); return connections & (1 << i);}
    uint32 getData() const              {return data;}
    uint8 getSource() const             {return static_cast<uint8>      (data & DATA_MASK_SOURCE);}
    State getState() const              {return static_cast<State>     ((data & DATA_MASK_STATE) >> 4);}
    ParamChange getParamChange() const  {return static_cast<ParamChange>((data & DATA_MASK_PARAM_CHANGE) >> 8);}
    GameParam getGameParam() const      {return static_cast<GameParam> ((data & DATA_MASK_GAME_PARAM) >> 11);}
    GameSpeed getGameSpeed() const      {return static_cast<GameSpeed> ((data & DATA_MASK_GAME_SPEED) >> 13);}
    bool isResumeSaved() const          {return static_cast<bool>      (data & DATA_MASK_IS_RESUME);}
    bool hasFrame() const               {return static_cast<bool>      (data & DATA_MASK_HAS_FRAME);}
    bool hasTargetFrame() const         {return static_cast<bool>      (data & DATA_MASK_HAS_TARGET_FRAME);}
    uint32 getFrame() const             {return frame;}
    uint32 getTargetFrame() const       {return targetFrame;}

    virtual size_t getNetSize() const;
    virtual size_t getMaxNetSize() const;
    virtual void read(NetworkDataBuffer &buf);
    virtual void write(NetworkDataBuffer &buf) const;

protected:
    void init(const Host &host);

    void setConnection(size_t i, bool value) {
        assert(i < GameConstants::maxPlayers);
        uint8 mask = 1 << i;
        // note: this previously modified data, but the connection bits live in connections
        connections = value ? connections | mask : connections & ~mask;
    }

    void setConnection(size_t i) {
        assert(i < GameConstants::maxPlayers);
        connections = connections | 1 << i;
    }

    void setConnections(bool *values) {
        connections = 0;
        for(size_t i = 0; i < GameConstants::maxPlayers; ++i) {
            if(values[i]) {
                connections = connections | 1 << i;
            }
        }
    }

    void setSource(uint8 value) {
        assert(value < GameConstants::maxPlayers);
        data = (data & ~DATA_MASK_SOURCE) | value;
    }

    void setState(State value) {
        assert(value < STATE_COUNT);
        data = (data & ~DATA_MASK_STATE) | (value << 4);
    }

    void setParamChange(ParamChange value) {
        assert(value < PARAM_CHANGE_COUNT);
        data = (data & ~DATA_MASK_PARAM_CHANGE) | (value << 8);
    }

    void setGameParam(GameParam value) {
        assert(value < GAME_PARAM_COUNT);
        data = (data & ~DATA_MASK_GAME_PARAM) | (value << 11);
    }

    void setGameSpeed(GameSpeed value) {
        assert(value < GAME_SPEED_COUNT);
        data = (data & ~DATA_MASK_GAME_SPEED) | (value << 13);
    }

    void setResumeSaved(bool value) {
        data = value ? data | DATA_MASK_IS_RESUME : data & ~DATA_MASK_IS_RESUME;
    }

    void setFrame(uint32 frame) {
        data = data | DATA_MASK_HAS_FRAME;
        this->frame = frame;
    }

    void setTargetFrame(uint32 targetFrame) {
        data = data | DATA_MASK_HAS_TARGET_FRAME;
        this->targetFrame = targetFrame;
    }
};
So this class conveys a lot of status information and packs it down into a maximum of 13 bytes. Using other techniques, I can get this thing down to 5 bytes max, but this is good enough for now. The connections byte is a bitmask of each peer that the originator of this message is connected to. The data section uses 4 bits to specify the originator. This isn't usually necessary because that information can be learned from the socket the message comes in on itself, but the message may be relayed from the server, so this way a client can learn the status of one of its peers that it may not be able to communicate with directly. But most importantly, there are a frame and a targetFrame.
The targetFrame was originally intended to coordinate pause and speed change requests so that they could all happen at the same time on every participant in the game. It occurred to me that a game could be kept better in sync if commands were not executed immediately on the local machine. Instead, they could be given a target frame in the future and queued up locally and on the peers, so that the command will (ideally) be executed at the same time on every machine. By having this behavior on the server as well, it should cancel out the "home field advantage" (i.e., being the server won't confer any advantage).
This does not go for auto-commands, by the way; these are always executed locally and are not transmitted at all, because it's presumed that (as long as the data is the same on each machine) each machine can figure them out on its own. Perhaps it would be helpful to transmit them anyway, however, so the server can verify that an auto-command being executed on a client is accurate, and correct the client if needed. As an example, if a client thinks its unit is close enough to see an enemy unit and attack it, but on the server they are one cell apart (not an unlikely condition), the server can send correcting information to update both the unit that attempted to execute the auto-command and the unit it thought it saw. It may look funny, because the unit would start to attack but then warp back, but that hopefully shouldn't take long (maybe 400 milliseconds).
As a side note on the NetworkMessageStatus class, it can still be trimmed down a lot. I'm not using the DATA_MASK_FRAME_IS_16_BITS field yet, and I can also cram part of the frame bits into the unused portion of the data field and only use extra bytes for the rest of the frame. Also, I can have one message that sends the full frame as a 32-bit number but is only sent when the top 24 bits change; the rest of the time, it can transmit only the lower 8 bits of the frame number. Perhaps even better, I can use part of data to specify how many bytes of the frame and target frame I'm sending, and the upper portion will be recycled from the previous update. I won't mess with that until all of these other issues are resolved, but there's a lot that can be done in very few bits.
Also, one thing that is changing is the way clients wait for a laggy server. At present, if a client hasn't received an update from the server for a key frame, the client stops rendering, updating sound, etc.; it essentially suspends the main thread until the update is received or a max wait timeout has expired. Now, it will behave as though the game is paused instead: rendering continues, the mouse cursor continues to move around, sounds don't stop, you can still issue commands, etc. You may have never experienced this unless you play a game where the server is very slow (like when you compile a debug build and you're debugging it with millions of sanity checks running per second), but it's a pain to deal with. I have some other ideas for ways to address this that I'll worry about once this networking rework is done.