Deployments impacted by this issue have all been restored to service. If you are experiencing any abnormal behavior, please open a support ticket so that we can resolve your problem immediately.
A complete post-mortem will be sent to all impacted account owners.
Feb 17, 18:44 UTC
We have resolved all alerts produced by our monitoring system, so to our knowledge all affected deployments are now healthy. If you are still seeing issues, please contact email@example.com.
Feb 17, 13:15 UTC
At this time all impacted databases are online.
We are closely monitoring all database deployments and their underlying hosts for any signs of regression.
All accounts impacted by today's event will receive post-mortem details within the next 24-48 hours.
Feb 17, 06:07 UTC
All hosts are online at this point; however, some deployments are still experiencing related issues. We continue to work through individual deployment alarms.
Feb 17, 04:14 UTC
At this point most of the affected hosts have recovered, although three remain in a degraded state.
The Compose team is inspecting individual deployments and triaging any issues discovered on the recovered hosts.
Feb 17, 01:05 UTC
We are continuing to see progress: more hosts are coming online as the recovery operations complete. We still do not have an ETA, as recovery time varies from host to host. Over 60% of the unresponsive hosts are back up at this point.
Data-storing members have not been affected in most cases; the issue is limited to hosts containing utility capsules: MongoDB config servers, Redis sentinels, and etcd for PostgreSQL and standalone deployments. In some cases this has caused availability issues.
New deployments on AWS East are not affected.
Apologies to those impacted; we are working hard to resolve this as soon as possible.
Feb 16, 22:38 UTC
Hosts are still recovering at this time. We do not have a timeline for when all hosts will be recovered, but deployments will continue to come back online as our team works diligently to restore normal operations. The hosts that are online are still under some load, so even if your deployment is online you may notice performance issues.
Feb 16, 21:46 UTC
Correction: Both MongoDB and PostgreSQL deployments are affected by this issue. This is contrary to our previous update that only MongoDB was affected.
Feb 16, 20:19 UTC
Hosts are still recovering. We do not have a timeline for when all hosts will be recovered, but deployments should come back online as the hosts recover.
Feb 16, 19:32 UTC
Hosts that were affected by this outage have begun recovering. Deployments should begin resuming normal operations as these hosts recover.
Feb 16, 19:08 UTC
The scope of the deployments affected by this issue should be limited to MongoDB deployments only.
Feb 16, 19:02 UTC
Compose engineers are still in communication with AWS engineers, and working to resolve the issue.
Feb 16, 18:26 UTC
Compose is working with AWS support and our full engineering team to address an issue that is causing several hosts in our AWS US East fleet to become unresponsive or suffer degraded performance.
Feb 16, 17:29 UTC
Compose is currently experiencing an infrastructure outage within the AWS US-East-1 region. This is not an issue with Amazon itself, but rather an issue within Compose's platform. This is currently a priority 1 at Compose, and all hands are on deck to resolve it.
We have identified the root cause and are working with AWS to fix as soon as possible.
Feb 16, 16:34 UTC
We have identified the cause of this issue as widespread degradation of disk I/O across numerous hosts. With AWS assistance, we have been able to resolve the issue and continue to restore service to databases as quickly as possible.
Feb 16, 16:33 UTC
We are investigating reports of database access trouble, primarily in the AWS US-East region. We are working as quickly as possible to identify and resolve the problem. Further updates will be posted here as they become available.
Feb 16, 15:57 UTC