Suddenly cannot connect to the database on any of my sites

Hi, I got this error a few moments ago:

OperationalError: (1105, '(proxy) all backends are down')

on one of our sites. While trying to find the source of the problem, I found out that none of my sites are working, so I assume the problem is not on my end. Is anyone else having this problem?

I'm getting the same error when I try to do db migrations in the console.

I can't connect to a MySQL database remotely. I got the same '1105 (proxy) all backends are down' at 3:05 am and have not been able to connect since.

Same here, since the downtime earlier this morning.

For remote connections to MySQL, I get

"OperationalError: (2013, "Lost connection to MySQL server at 'waiting for initial communication packet', system error: 0")"

For connections initiated on PA bash console, I get "OperationalError: (1105, '(proxy) all backends are down')"

Same here: "All backends are down".

Same here; I cannot connect via an SSH tunnel (MySQL Workbench).


Any chance of some guidance here please?

Still not working.

All now working for me.

Hi guys, sorry about this. Things should be back to normal now.

When you access the database via mysql.server, it actually goes via a mysql-proxy service that runs on each of the servers. We had an AWS outage around 3 AM this morning, which Giles and I were woken up by and were hands-on to deal with. When things came back, we bounced all the web servers out of caution; when they rebooted, the mysql-proxy service was restarted on each of them and reconnected to the database. What we failed to realise was that the mysql-proxy services on the console and task servers did not automatically reconnect.

We've been through all of them this morning, so they should be working again. In short, this should only have affected console and task server processes this morning; your web apps should have been fine. In addition, MySQL consoles you launch via the databases tab connect directly to the database and bypass the proxy, so they would have been fine too.

Anyway, apologies again, we'll make a note to update our processes. Let us know if anything you've experienced doesn't seem to fit with this account.

No worries, thanks so much Harry + Giles for dealing with it at 3 AM, and for your usual thorough post-game analysis. Much appreciated.

Yes, I think that fits the facts. I had a load of programs running with remote sshtunnel connections to mysql.server that were stopped, as were some PA console applications.


Thank you very much for solving this so fast at 3 AM.

We do have an alternative way of reaching the database that should help reduce this kind of thing in the future. If you use the username-based hostname as the MySQL server address instead of mysql.server, then you'll bypass the proxy and go straight to the appropriate database for your account. This would have meant that you'd have had the initial database outage from 3:15am to about 3:45am, but once that was fixed everything would have worked again, rather than you having to wait until 11am when we realised that the console and task server proxies were broken. So, still not ideal, but quite a lot better :-)
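For anyone updating their code, the client-side change is just the host parameter. A minimal sketch in Python (the direct hostname below is a placeholder, not a real address; use the one shown for your own account, and the `MySQLdb.connect` call in the comment assumes the mysqlclient library):

```python
# Sketch of pointing a client at the direct, per-account hostname
# instead of the mysql.server proxy. DIRECT_HOST is a PLACEHOLDER:
# substitute the actual address shown for your account.
PROXY_HOST = "mysql.server"                      # old route, via mysql-proxy
DIRECT_HOST = "yourusername.mysql.example-host"  # placeholder per-account address

def db_host(use_proxy=False):
    """Return the MySQL host to connect to (direct by default)."""
    return PROXY_HOST if use_proxy else DIRECT_HOST

# Then connect as usual, e.g. with MySQLdb (mysqlclient):
#   conn = MySQLdb.connect(host=db_host(), user="yourusername",
#                          passwd="...", db="yourusername$dbname")
```

Code that still wants the old behaviour can pass `use_proxy=True` while migrating.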

Quite a few people have been beta testing the new server addressing scheme, and it seems to be working as well as the proxy system under normal circumstances, so we're planning to make it the official system sometime early next year.

After our system update today, the username-based hostname is now the official way to connect to your MySQL database. The old mysql.server system still works, but is now deprecated.