Came across an interesting piece of technology today that looks like it could solve our current network latency predicament. I mean, how do we really get millions of players into the same universe and still keep it playable?
Hashgraph uses a gossip protocol together with a virtual voting protocol to create a P2P network event manager.
Here is a link to the overview, with videos and whatnot: https://hashgraph.com/
Here is a link to the deep dive, timestamped to where they reference how World of Warcraft might use this tech to track network objects and events with little to no latency: https://youtu.be/fes2ToZRI3E?t=29m33s
In case the timestamp doesn’t work, fast-forward to 29:33 for the reference.
I’d like to get a response from a dev on this one since it looks like this would be necessary for the game.
EDIT: Looks like this is free, open-source tech too… nice not having to pay for a piece of architecture :slightly_smiling_face:
EDIT: Tech expo talk where the VP of development mentions a client who already built an MMO using Hashgraph: https://www.youtube.com/watch?time_continue=87&v=QUnqff3PYdA
We are not currently looking at Hashgraph, blockchain, or any other distributed-ledger or consensus algorithms. The biggest reason is that these algorithms all take several rounds of communication before a computer in the network knows whether an event achieved consensus. All this passing of messages back and forth over the internet adds several round-trip times’ worth of latency, which makes these algorithms unsuitable for real-time games. Consensus algorithms also suffer from scalability problems: the more computers in your network, the more communication they have to do to reach agreement. That could be a real problem if we tried to make a domestic internet connection handle network traffic to and from tens of thousands of other computers.
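To make the latency point concrete, here is a minimal back-of-the-envelope sketch. The round-trip time and the number of consensus rounds are assumed illustrative numbers, not measurements of any particular protocol:

```python
# Rough latency sketch (hypothetical numbers): why multi-round consensus
# hurts real-time games compared with asking a single authoritative server.

RTT_MS = 60.0  # assumed average round-trip time between two machines

def consensus_latency(rounds: int, rtt_ms: float = RTT_MS) -> float:
    """Each round of a consensus protocol costs at least one round trip
    before a node knows whether an event was agreed on."""
    return rounds * rtt_ms

def client_server_latency(rtt_ms: float = RTT_MS) -> float:
    """With a trusted server, the client just waits for the server's
    authoritative reply: a single round trip."""
    return rtt_ms

print(consensus_latency(3))     # three rounds of agreement: 180.0 ms
print(client_server_latency())  # one authoritative answer: 60.0 ms
```

Even with only three rounds, the consensus path costs three times the latency of a single authoritative reply, and real protocols may also add per-peer message fan-out on top of that.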
The only time I think a consensus algorithm is necessary is when you can’t trust the other computers in your network. Faced with that situation, the best you can hope for is to have everyone vote for their version of events and go with whatever the majority decides. However, if we can trust the other computers then we can be a lot more efficient. That’s exactly why we pick the client/server model over peer-to-peer. By centralising authority over everything that happens in the game onto a server that we control, your client can trust whatever that server tells it. As a result, your client doesn’t need to talk directly to any other computers, which greatly reduces the network traffic scalability problem for clients. It also means that the latency between two clients is at worst the longer of their round-trip times to the server. Even better, the latency from server to client is just the one-way latency (roughly half a round-trip time).
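The client-to-client latency claim above can be checked with a little arithmetic. This sketch uses made-up round-trip times; the only assumption is that one-way latency is roughly half the round trip:

```python
# An update from client A reaches client B via the server in half of A's
# round trip (A -> server) plus half of B's (server -> B), which can never
# exceed the larger of the two full round-trip times.

def a_to_b_latency_ms(rtt_a: float, rtt_b: float) -> float:
    """One-way latency from client A to client B through the server."""
    return rtt_a / 2 + rtt_b / 2

rtt_a, rtt_b = 40.0, 100.0              # assumed toy RTTs to the server
latency = a_to_b_latency_ms(rtt_a, rtt_b)
print(latency)                           # 70.0 ms
print(latency <= max(rtt_a, rtt_b))      # True: bounded by the slower RTT
print(rtt_b / 2)                         # 50.0 ms: server-to-client one-way
```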
The network scalability problem is still present with client/server, but it is focused on the server, since the server has to communicate with every individual client. As the number of clients per server grows, there’s going to be a point where it can’t grow any further. Optimizations will allow us to push that number much higher than it is now, but sooner or later we’ll hit a limit we can’t reasonably hope to improve on. To get around this limitation we’ll rely on the fact that not only can clients trust our servers, but servers can also trust each other. That means we can connect them in their own peer-to-peer network without the need for relatively expensive consensus algorithms. Trust means that, because we believe all the servers to be “good”, there can be no conspiracy of “bad” servers. This is useful because it frees servers from having to police what every other server is up to. Each server in the network will have a part of the game that it has complete authority over, and the other servers take its word as law. As a result, this trusted network will scale much more efficiently than an untrusted one. In fact, two servers in the same network don’t even have to be connected at all unless their clients are interested in each other. That greatly reduces the number of connections needed, and in turn the problems with network scalability.
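The trusted server-to-server model described above could be sketched roughly like this. All class and method names here are hypothetical illustrations, not the actual architecture:

```python
# Hypothetical sketch of trusted authority partitioning: each server has
# complete authority over its own region of the game, and two servers only
# connect when their clients are interested in each other. No voting, no
# consensus rounds -- a peer's answer is simply trusted.

class RegionServer:
    def __init__(self, region: str):
        self.region = region
        self.state: dict[str, dict] = {}            # objects this server owns
        self.peers: dict[str, "RegionServer"] = {}  # links made only on demand

    def authoritative_update(self, obj_id: str, data: dict) -> None:
        # This server's word is law for objects in its region.
        self.state[obj_id] = data

    def connect_peer(self, other: "RegionServer") -> None:
        # Create a link only when clients in the two regions need each
        # other; the network is sparse rather than a full mesh.
        self.peers[other.region] = other
        other.peers[self.region] = self

    def read_remote(self, region: str, obj_id: str) -> dict:
        # Trust the owning server's answer outright -- no verification round.
        return self.peers[region].state[obj_id]

north = RegionServer("north")
south = RegionServer("south")
north.authoritative_update("npc_1", {"hp": 100})
north.connect_peer(south)
print(south.read_remote("north", "npc_1"))  # {'hp': 100}
```

The design choice worth noting: because trust replaces verification, a cross-server read costs one message to one peer, instead of a round of agreement across many.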
TL;DR – using a hybrid of client/server and P2P, and carefully choosing whom to trust, will allow greater scalability with less bandwidth and lower latencies than could be achieved with current consensus algorithms.