*editor's note: these posts were not posted on the original development days. They're posted whenever I have the time to write them up. Day 3 was Monday, 28/4/14. I don't plan on adding dates to the older or newer posts, since the posts are still in order and are close enough for the purpose of documenting what's going on, as they are generally written up 1-3 days after the day they cover.
Server design
We finally got around to writing out how the server code would work. By this point, we had talked through how most of the more complicated, problematic systems in the server code would function: for example, the position interpolation that makes up for connection latency when a player shoots, and how to set up the data structures so that when an event happens near a player, that player receives an update.
We designed a system to make sending updates to players more efficient. Each player has a queue of "recent events" that need to be sent to them, and there will be a queue of players that need to be updated.
Each map chunk will contain a list of players who are in that chunk.
When an event occurs (remote player movement, remote player shooting, etc.), that event is added to the queue of every player in the surrounding chunks, and each of those players is added to the "to be updated" queue if they aren't already on it.
The server will continuously take a player from the "to be updated" queue and send everything on that player's event queue in a single packet.
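To make that concrete, here's a minimal sketch of how those queues might fit together, in Java. All the class and method names (UpdateDispatcher, Packet, and so on) are my own illustration rather than actual code from our server, and the thread-safety between the event side and the send side is glossed over:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.ConcurrentLinkedQueue;

interface Event {} // placeholder for a game event (movement, shot, ...)

class Packet {
    void add(Event e) { /* append the encoded event to the packet body */ }
    void sendTo(Player p) { /* write the packet to the player's socket */ }
}

class Player {
    final Queue<Event> recentEvents = new ArrayDeque<>(); // events pending for this player
    boolean queuedForUpdate = false; // already on the "to be updated" queue?
}

class Chunk {
    final Set<Player> players = new HashSet<>(); // players currently in this chunk
}

class UpdateDispatcher {
    // Players that have at least one pending event to be sent.
    private final Queue<Player> toBeUpdated = new ConcurrentLinkedQueue<>();

    // Called when an event happens; fan it out to everyone in the surrounding chunks.
    void onEvent(Event event, Iterable<Chunk> surroundingChunks) {
        for (Chunk chunk : surroundingChunks) {
            for (Player p : chunk.players) {
                p.recentEvents.add(event);
                if (!p.queuedForUpdate) {
                    p.queuedForUpdate = true;
                    toBeUpdated.add(p);
                }
            }
        }
    }

    // The send loop: drain one player's event queue into a single packet.
    void sendNext() {
        Player p = toBeUpdated.poll();
        if (p == null) return; // nothing pending
        p.queuedForUpdate = false;
        Packet packet = new Packet();
        Event e;
        while ((e = p.recentEvents.poll()) != null) {
            packet.add(e); // batch all pending updates into one packet
        }
        packet.sendTo(p);
    }
}
```

The queuedForUpdate flag is what stops a player from appearing on the "to be updated" queue twice when several events hit them before their next send.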
This gives a good trade-off between bandwidth efficiency (for small packets, the TCP/IP headers are going to be the largest part of the packet, so the more updates you can package into one packet, the more efficient it is) and latency (waiting for events to queue up increases latency).
If the server isn't under load, the latency will be low, because players will be dequeued and packets sent before multiple updates have accumulated. If the server is under load, multiple updates will be combined into each packet as more events happen between sends, and the connection will become more efficient at the cost of increased latency.
There will be a minimum delay between movements (turn, move forward), probably something around 50ms. This means we can store a fixed-length queue of each player's last positions and, using the shooter's latency, calculate whether a shot fired by the shooter should have hit the player.
This might get more complicated when we add client-side interpolation of movement (for when a player moves more than once before the receiver gets an update), but the basic idea should work. The other option is to have clients perform the interpolation on their side and decide whether a shot hits, but that is potentially easy to exploit.
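As a rough sketch of what the rewind check could look like: with a 50ms minimum move interval and a 500ms latency cap (the upper end of the range discussed below), 500 / 50 = 10 stored positions per player is enough. The names and layout here are made up for illustration, not final code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: a fixed-length history of timestamped positions,
// rewound by the shooter's latency when a shot comes in.
class PositionHistory {
    static final int MOVE_INTERVAL_MS = 50;  // minimum delay between moves
    static final int MAX_LATENCY_MS   = 500; // highest latency we'd support
    static final int CAPACITY = MAX_LATENCY_MS / MOVE_INTERVAL_MS; // = 10 entries

    static class TimedPosition {
        final long timeMs;
        final int x, y;
        TimedPosition(long timeMs, int x, int y) {
            this.timeMs = timeMs; this.x = x; this.y = y;
        }
    }

    private final Deque<TimedPosition> history = new ArrayDeque<>(); // newest first

    void record(long nowMs, int x, int y) {
        history.addFirst(new TimedPosition(nowMs, x, y));
        if (history.size() > CAPACITY) {
            history.removeLast(); // drop positions older than we'll ever need
        }
    }

    // Where was this player at the moment the shooter fired, from the
    // shooter's point of view?
    TimedPosition positionAt(long nowMs, int shooterLatencyMs) {
        long target = nowMs - shooterLatencyMs;
        for (TimedPosition p : history) {
            if (p.timeMs <= target) return p; // first entry at or before the target time
        }
        return history.peekLast(); // latency beyond our history: best effort
    }
}
```

The server would then test the shot against the position returned by positionAt rather than the player's current position.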
The downside of doing everything on the server like this, apart from it potentially being more complicated, is that two players can shoot and kill each other because of latency. On the other hand, that's a fair way of processing things, because it means a player with less latency doesn't get an advantage over a player with more.
The other downside is that we have to pick a maximum latency to support. I was thinking of a value between 300-500ms. Realistically, there may well be players with more latency than that; the question is whether we want to support higher latencies, since players with very high latency would mess up gameplay. Imagine you kill someone and then, over half a second later, you're told that they also killed you. It just wouldn't feel right. I guess we'll need to experiment with it and find the point where player latency starts making the game suck.
Another potentially strange side effect of this is that, depending on how we implement things, it might be possible for multiple people to kill the same person under certain conditions.
In the end, we removed the Serializer class and decided that each event would have a function to write itself to the socket's output, which simplifies the server code (decided after making the diagram below).
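In outline, the change looks something like this. It's a sketch of the pattern rather than our actual classes; MoveEvent, the type tag, and the field layout are all made up for illustration:

```java
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the idea: no separate Serializer; each event knows how to
// write itself onto the socket's output stream.
interface Event {
    void writeTo(DataOutputStream out) throws IOException;
}

class MoveEvent implements Event {
    private final int playerId, x, y;

    MoveEvent(int playerId, int x, int y) {
        this.playerId = playerId;
        this.x = x;
        this.y = y;
    }

    @Override
    public void writeTo(DataOutputStream out) throws IOException {
        out.writeByte(1);       // hypothetical event-type tag
        out.writeInt(playerId);
        out.writeShort(x);      // assumes coordinates fit in 16 bits
        out.writeShort(y);
    }
}
```

This keeps the wire format next to the event it belongs to, instead of in one big Serializer that has to know about every event type.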
We also decided it would be a good idea to have a ban system set up before release that would cover account bans and IP bans.
A lot of the more complex functionality of the server didn't show well on the diagram, so there was a lot more writing out how things would be implemented for the server than there was for the client. At this time, we haven't shared the text document with the design notes, but the major things were the interpolation and the event queues.
Git setup
We set up Git repositories for the client and server. We got everyone set up so that they could push and pull from the repository and figured all of that out. Kate tried to get a more elegant solution working using an Eclipse plugin, but it turned out to be too problematic to set up, so we just used the standard Git GUI.
We got everyone familiar with working with the code, and everyone pushed an update to the repo.
We're breaking for a few days as Kate is busy. I'm writing up the server template code while that's happening so that we can get programming when we next get together.
Next steps:
(finish templating the server code)
Figuring out multithreaded socket IO, and whether any of our classes will require blocking so things don't break.
Writing enough of the app so that we can start testing code.
Writing enough of the server so that we can start testing code.
Getting some basic IO going between the server and client (guess I'm going to have to hire that VPS pretty soon)