Active Incident

US-East: Degraded Performance
US-West: Degraded Performance
EU-Central: Operational
Wasabi Console: Operational

Incident Status: Degraded Performance
Components: US-West
Locations: US-West



October 13, 2019 14:28 UTC
[Monitoring] The us-west-1 region continues to operate at the current transfer rates. No further changes are scheduled through the weekend.

October 12, 2019 21:35 UTC
[Investigating] We continue to monitor the system with restricted ingest rates, which we expect to maintain through the night.

October 12, 2019 14:32 UTC
[Investigating] We are currently seeing degraded performance on some files in us-west-1. We are investigating the cause at this time. Please continue to subscribe to the status page for updates.

Incident Status: Degraded Performance
Components: US-East
Locations: US-East



October 11, 2019 20:10 UTC
[Identified] The us-east-1 region continues to operate at a steady rate. No further changes are scheduled through the weekend.

October 10, 2019 15:39 UTC
[Identified] Our us-east-1 region is operating at a steady rate but not yet at our maximum expected throughput capacity. We are working to resolve some remaining connection errors reported by customers. If you are experiencing connection errors, including 'max connection exceeded' messages (regardless of the region your bucket is located in), please contact support@wasabi.com. We are closely monitoring the system and will provide another update within the next 24 hours (or sooner if system conditions change).

October 9, 2019 22:01 UTC
[Identified] The us-east-1 system has been operating at a steadier rate throughout the day. We will continue to keep the rates low to ensure consistency overnight. If you experience connection errors, including 'max connection exceeded' messages (regardless of the region your bucket is located in), please contact support@wasabi.com as we work to eliminate any other scenarios that produce that error. We are preparing a few additional changes to be made during the day on Thursday 10 October 2019.

October 9, 2019 14:22 UTC
[Identified] We are maintaining reduced (but more consistent) ingress and egress rates in the us-east-1 region and have reduced the number of connection errors. If you still experience connection errors, including 'max connection exceeded' messages (regardless of the region your bucket is located in), please contact support@wasabi.com as we work to eliminate any other scenarios that produce that error. We continue to update the system in an effort to increase data throughput rates. Our next status page update is expected on Wed 9 Oct 2019 at approximately 2200 UTC.

October 9, 2019 01:36 UTC
[Identified] We are receiving reports that indicate the error message, "Maximum number of server active requests exceeded" persists. We will continue to work with Engineering to address this.

October 9, 2019 00:06 UTC
[Identified] We have increased the number of connections available for data in an effort to provide increased ingress and egress rates in the us-east-1 region. If you still experience connection errors or timeouts, including 'max connection exceeded' messages, please contact support@wasabi.com. We are continuing to work on a permanent solution to the problem. Our next status page update is expected on Wed 9 Oct 2019 at approximately 1300 UTC.
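Until an incident like this is resolved, client-side retry with exponential backoff is the usual way to absorb transient connection-limit responses. The sketch below is a generic helper, not Wasabi-published guidance; the error strings and retry limits are assumptions drawn from the messages quoted in these updates.

```python
import random
import time

# Error markers treated as retryable. These strings match the messages
# reported during this incident and are assumptions, not an official list.
RETRYABLE_MARKERS = (
    "max connection exceeded",
    "Maximum number of server active requests exceeded",
)

def with_retries(op, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Run op(); retry with jittered exponential backoff on retryable errors."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception as exc:
            msg = str(exc).lower()
            retryable = any(m.lower() in msg for m in RETRYABLE_MARKERS)
            if not retryable or attempt == attempts - 1:
                raise  # non-transient, or out of attempts: surface the error
            # back off: 0.5s, 1s, 2s, ... plus up to 100 ms of jitter
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Any S3 upload or download call can be wrapped this way, e.g. `with_retries(lambda: client.put_object(...))`; the `sleep` parameter is injectable so the backoff can be stubbed out in tests.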

October 8, 2019 20:04 UTC
[Identified] Reposting the following update from 17:45 UTC: The first set of updates and modifications to the system has been made and access to the Wasabi Console should be more consistent. We continue to work on the throughput issues and expect an update with regards to that by approximately 00:00 UTC tonight.

October 8, 2019 17:45 UTC
[Identified] The first set of updates and modifications to the system has been made and access to the Wasabi Console should be more consistent. We continue to work on the throughput issues and expect an update with regards to that by approximately 00:00 UTC tonight.

October 8, 2019 15:00 UTC
[Identified] We expect to make a change to the system to alleviate some of the contention at 1PM ET today (6PM UTC). The first impact should be to allow consistent Wasabi Console access. Further work on data transfer continues as well.

October 7, 2019 23:45 UTC
[Investigating] We anticipate no changes to the system overnight tonight and will continue in our current state. We expect that no changes will be made until 5PM UTC on Tuesday. We will update this timeline on Tuesday at approximately 1PM UTC.

October 7, 2019 19:49 UTC
[Investigating] Engineering continues to work on the system to provide more consistent access and speed. We anticipate a configuration change to help with this, which should also help with the console connection issue. We will update on the status of that change at approximately 0100 UTC.

October 7, 2019 16:13 UTC
[Investigating] We are experiencing performance problems with our US-East-1 data center, which is currently functioning at a fraction of its normal speed and connection capacity. Our other data centers in the US and Europe are operating normally. Our engineering team has been working around the clock to find the root cause of the problem, and we'll be posting frequent status updates at status.wasabi.com until the problem is resolved. Currently we have stabilized the S3 throughput rates; Wasabi Console access may be intermittently impacted. We will target our next update for 6PM UTC.

October 7, 2019 13:39 UTC
[Investigating] We are continuing to work on the issues impacting us-east-1 and will be posting a more specific update at noon ET.

October 7, 2019 00:29 UTC
[Investigating] We continue to monitor the system with decreased ingest rates. We expect an update during the day on Monday.

October 5, 2019 19:52 UTC
[Investigating] We performed a software upgrade to US-EAST-1 to resolve stability issues. We have kept the ingest rates down to monitor the system. We will return the system to full service soon.
Network errors on US-West-1

Incident Status: Degraded Performance
Components: US-West
Locations: US-West



September 30, 2019 22:50 UTC
[Identified] Ingest rates have been raised back to normal levels. We expect to run this way throughout the week. A decision will be made prior to the weekend on whether to reduce the ingress rate. Egress rates remain unchanged.

September 27, 2019 20:55 UTC
[Identified] As a precaution and to ensure our ongoing system availability, we are going to lower the overall ingest rate slightly for the weekend. This will begin at approximately 00:00 UTC. We continue to monitor the system throughout the weekend as we work to resolve the issues on a permanent basis.

September 26, 2019 20:17 UTC
[Identified] Just a note that we have been running with increased ingest rates for the afternoon. We will post an update here if we determine that an adjustment down is warranted. If there are no changes, we'll update this page again on Friday by 1700 UTC.

September 26, 2019 13:30 UTC
[Identified] Overnight we did see ingest rates down a bit from yesterday evening, but engineering is working on changes to the system that we expect will bring rates back up. Service continues to run with normal download rates at this time. We expect to have those changes in place by our next status update at midnight UTC.

September 25, 2019 20:15 UTC
[Identified] The system continues to run at the reduced ingest rate and has been holding at our current levels since yesterday. Ingress is still operating at less than normal levels, but egress rates should be unaffected. We will update again on Thursday at 12:00 UTC unless there are more changes to announce.

September 24, 2019 21:16 UTC
[Identified] The ingest rate has been increased on the US-WEST-1 platform. We have added another storage pod to the region but have not yet put it into service. We continue to monitor the platform through the evening and test the new pod, and will increase the ingest rate if testing continues without incident. We will post an update if this increase is put into effect; otherwise, we'll update the status on Wednesday 25 September at approximately 12:00 UTC.

September 24, 2019 17:55 UTC
[Identified] The system continues to run at the reduced ingest rate. We have added another pod to the system and are testing it before making it addressable. We completed the previously scheduled maintenance window for an unrelated issue at 5PM UTC. We intend to increase the ingest rate over the next few hours and to post another update on the status at approximately 20:00 UTC.

September 24, 2019 10:07 UTC
[Identified] The us-west-1 system ran overnight at our reduced ingest rate and we were able to re-introduce the storage pod to the system. We will continue to monitor the system this morning before determining whether to raise the ingest level.

September 24, 2019 01:00 UTC
[Identified] System performance has been steady and we expect to reintroduce the new storage pod to the system overnight. It will not be added into service at that time. We will continue to monitor write performance overnight; read performance is functioning normally. Unless there is an urgent update, we expect to post a status update at approximately 10:00 UTC on Tuesday 24 September 2019.

September 23, 2019 20:27 UTC
[Identified] The issue that we were seeing in the US-West region is related to one of the storage pods that was added to the location late last week. That pod has been removed from service at this time, and we are working to resolve the problem on that pod before it is returned to service. The system continues to operate with the prior set of pods, which were impacted when the new pods were added, and we continue to work through issues on those existing storage pods in order to return to the previous level of service as soon as possible. The ingest rate of the full system remains lowered to allow the restoration of the remaining storage pods; as we are able, we will begin to raise this limit. This limitation applies to ingest only: there is no limit on read requests, and reads are working as normal. We will provide our next status update at approximately 0100 UTC.

September 22, 2019 22:12 UTC
[Investigating] System issues on US-West resulting in service disruption.

September 22, 2019 14:13 UTC
[Identified] We have begun restoration of service to the us-west-1 system and are operating with limited throughput at this time.

September 22, 2019 12:59 UTC
[Identified] Due to an issue in the storage subsystem, we have stopped S3 traffic to/from the us-west-1 region for a period. We expect to be able to restore service at approximately 14:00 UTC. We will continue to keep this status page updated.

September 22, 2019 12:06 UTC
[Investigating] We continue to investigate system errors on US-West-1.

September 22, 2019 11:20 UTC
[Investigating] We are currently seeing degraded performance and 5xx errors on some files on US-West.
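For clients hitting intermittent 5xx responses like those described above, a small helper can decide whether a failed request is worth retrying. This is a generic sketch based on standard HTTP semantics, not a Wasabi-published policy:

```python
def is_retryable_status(status: int) -> bool:
    """5xx responses and 429 throttling are transient and worth retrying;
    other 4xx responses indicate a client-side problem and are not."""
    return status == 429 or 500 <= status <= 599
```

A client loop would retry (with backoff) when `is_retryable_status` returns True for codes like 500 or 503, and fail fast on codes like 403 or 404, where retrying cannot help.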
