Hi Guys
This feature, and a punters-only forum, have been on our radar for quite some time now. I apologise for what might seem like slow progress from our development department.
The Non-Geek version:
ESA has knocked down all of our old server infrastructure (like knocking down a house) and completely rebuilt everything with some very advanced "up to the second" backup technology. Backing up data every hour is easy, super easy, but backing up data every single second (without killing the server CPU) is very, very difficult. We've cracked it, though, and ESA's website kept running when we lost an entire server for 5 hours during the recent Amsterdam blackout.
www.euronews.com/2017/01/17/amsterdam-blackout-leaves-364000-without-power
The Geek Version:
We have been very busy behind the scenes completely rebuilding a 12-year-old server infrastructure. I'm happy to report that ESA now runs on the very latest cluster technology.
Instead of running a Master-Slave database setup, which is very easy, we run a Master-Master-Master setup with a quorum of 3 powerful database servers to ensure we never run into a split-brain scenario. Split-brain is effectively unrecoverable, so it has to be avoided at all costs. All of this takes a LOT of work.
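For the curious, here's a minimal sketch of why an odd-sized cluster avoids split-brain. This is illustrative only (the names `QUORUM` and `has_quorum` are made up, not ESA's actual code): with 3 nodes, a write is only accepted by a side of a network partition that can see a majority, and two sides can never both hold a majority.

```python
# Hypothetical sketch of majority-quorum logic in a 3-node cluster.
NODES = 3
QUORUM = NODES // 2 + 1  # majority: 2 of 3

def has_quorum(reachable_nodes: int) -> bool:
    """A partition may keep accepting writes only if it sees a majority."""
    return reachable_nodes >= QUORUM

# If the network splits 2-vs-1, only the 2-node side has quorum:
side_a, side_b = 2, 1
assert has_quorum(side_a)      # this side keeps serving writes
assert not has_quorum(side_b)  # this side refuses writes -> no split-brain
```

Since at most one partition can contain 2 of the 3 servers, the minority side stops taking writes instead of diverging, which is exactly the scenario a Master-Master pair (2 nodes, no majority possible) can't resolve.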
This forum post you're reading right now (and all the others) comes directly from those 3 powerful database servers. It required us to update large parts of ESA's code to ensure it correctly handles failover scenarios.
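"Handling failover" in application code usually boils down to something like the following sketch. Everything here is assumed for illustration (the host names, the `connect` stub, and `connect_with_failover` are hypothetical, not ESA's real code): if one database node is unreachable, the client quietly moves on to the next.

```python
# Minimal client-side failover sketch: try each node until one answers.
DB_NODES = ["db1.example", "db2.example", "db3.example"]  # hypothetical hosts

def connect(node: str) -> str:
    """Stand-in for a real driver's connect(); here db2 plays 'down'."""
    if node == "db2.example":
        raise ConnectionError(f"{node} unreachable")
    return f"connection to {node}"

def connect_with_failover(nodes):
    last_err = None
    for node in nodes:
        try:
            return connect(node)  # first healthy node wins
        except ConnectionError as err:
            last_err = err        # node is down, try the next one
    raise last_err                # every node failed

conn = connect_with_failover(DB_NODES)
```

The fiddly part in a real codebase is that every query path has to tolerate this, not just the initial connection, which is why it meant touching large parts of the code.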
The next step for us is to build a completely redundant photo server. This again isn't an easy task. NFS shares don't support automatic failover (I'm shocked), and while technologies like GlusterFS do support auto-failover, the documentation is very clear on this: you're not allowed to cross zones (i.e. datacenters, cities). We want to cross datacenters because we've learned one thing in the last 14 years: datacenters DO go down, no matter how many backup systems they have in place.
In summary: guys, we will develop all these extra things, but right now we are focusing on cleaning out old technology and building a rock-solid, up-to-the-second backed-up server infrastructure that never goes down.
We're very close to finishing everything and should start development on all the features you've requested in the near future.
Regards
James