Tell HN: Azure outage
Azure is down for us, we can't even access the Azure portal. Are others experiencing this? Our services are located in Canada/Central and US-East 2.
It still surprises me how much essential services like public transport are completely reliant on cloud providers, and don't seem to have backups in place.
Here in The Netherlands, almost all trains were first delayed significantly, and then cancelled for a few hours because of this, which had real impact because today is also the day we got to vote for the next parliament (I know some people who can't get home in time before the polls close, and they left for work before they opened).
Is voting there a one day only event? If not, I feel the solution to that particular problem is quite clear. There’s a million things that could go wrong causing you to miss something when you try to do it in a narrow time range (today after work before polls close)
If it’s a multi day event, it’s probably that way for a reason. Partially the same as the solution to above.
Washington State having full vote-by-mail (there is technically a layer of in-person voting as a fallback for those who need it for accessibility reasons or who missed the registration deadline) has spoiled me rotten, I couldn't imagine having to go back to synchronous on-site voting on a single day like I did in Illinois. Awful. Being able to fill my ballot at my leisure, at home, where I can have all the research material open, and drive it to a ballot drop box whenever is convenient in a 2-3 week window before 20:00 on election night, is a game-changer for democracy. Of course this also means that people who stand to benefit from disenfranchising voters and making it more difficult to vote absolutely hate our system and continually attack it for one reason or another.
As a Dutchman, I have to go vote in person on a specific day. But to be honest: I really don't mind doing so. If you live in a town or city, there'll usually be multiple voting locations you can choose from within 10 minutes walking distance. I've never experienced waiting times more than a couple of minutes. Opening times are pretty good, from 7:30 til 21:00. The people there are friendly. What's not to like? (Except for some of the candidates maybe, but that's a whole different story. :-))
So, if you have a minor emergency, like a kidney stone, and you're hospitalized for the day - you just miss your chance to vote in that election?
If so, I see a lot to dislike. The point I was making is you can't anticipate what might come up. Just because it's worked thus far doesn't mean it's designed for resilience. There are a lot of ways you could miss out in that type of situation. It seems silly to make sure everything else is redundant and fault tolerant in the name of democracy when the democratic process itself isn't doing the same.
If hospitalized on that specific day: Sign the back of the voting card and give your ID to a family member, they can cast your vote
How is that an acceptable response? Honestly. You’re in the hospital, in pain, likely having a minor surgery, and having someone cast your vote for you is going to be on your mind too? Do you have your voting card in your pocket just in case this were to play out?
That's just ridiculous in my opinion. Makes me wonder how many well-intentioned would-be voters end up missing out each election cause shit happens and voting is pretty optional.
In the US, hours-long lines are routine. Not everywhere, but poorer places tend to have fewer voting machines and longer lines.
We've been closing a lot of polling places recently:
https://abcnews.go.com/US/protecting-vote-1-5-election-day-p...
You have early voting, some choose not to trust the early voting system.
We have early voting, nobody has to wait, they choose to wait
Voting machines slow down voting from what I understand
In europe, voting typically happens in one day, where everyone physically goes to their designated voting place and puts papers in a transparent box. You can stay there and wait for the count at the end of the day if you want to. Tom Scott has a very good video about why we don't want electronic/mail voting: https://www.youtube.com/watch?v=w3_0x6oaDmI
Electronic voting and mail voting are very different things though.
UK is a one day affair with voting booths typically open like 6 am to 10 pm
Many countries in Europe have advance voting.
Off the top of my head, I can't think of an EU country that does not have some form of advance voting.
Here in Latvia the "election day" is usually (always?) on weekend, but the polling stations are open for some (and different!) part of every weekday leading up. Something like couple hours on monday morning, couple hours on tuesday evening, couple around midday wednesday, etc. In my opinion, it's a great system. You have to have a pretty convoluted schedule for at least one window not to line up for you.
Germany has mail-in voting, not sure if that counts as advanced voting though
Ireland doesn't have it.
If India can have voters vote and tally all the votes in one day, then so can everyone else. It’s the best way to avoid fraud and people going with whoever is ahead. I am sympathetic with emergency protocols for deadly pandemics, but for all else, in-person on a given day.
Voting in India is staggered over multiple phases over multiple days/weeks. Only the vote count happens on a single day at the end.
Voting days should be a national holiday.
In Australia there are so many places to vote, it is almost popping out to get milk level of convenience. (At least in urbia and suburbia.) Just detour your dog walk slightly. Always at the weekend.
Here in Belgium voting is usually done during the weekend, although it shouldn't matter because voting is a civic duty (unless you have a good reason, you have to go vote or you'll be fined), so those who work during the weekend have a valid reason to come in late or leave early.
In the US, where I assume a lot of the griping comes from, election day is not a national holiday, nor is it on a weekend (in fact, by law it is defined as "the Tuesday next after the first Monday in November"), and even though it is acknowledged as an important civic duty, only about half of the states have laws on the books that require employers provide time off to vote. There are no federal laws to that effect, so it's left entirely to states to decide.
In Germany it is always a Sunday.
The Flemish bus company (de Lijn) uses Azure and I couldn't activate my ticket when I came home after training a couple of hours ago. I should probably start using physical tickets again, because at least those work properly. It's just stupid that there's so much stuff being moved to digital only (often even only being accessible through an Android or iOS app, despite the parent companies of those two being utterly atrocious) when the physical alternatives are more reliable.
Wow, that's crazy! National transport infrastructure being so fragile. What a great age we live in.
> wouldn't put China or Russia above this
Organizations who had their own datacenters were chided for being resistant to modernizing, and now they modernized to use someone else's shared computers and they stopped working.
I really do feel the only viable future for clouds is hybrid or agnostic clouds.
Update 16:57 UTC:
Azure Portal Access Issues
Starting at approximately 16:00 UTC, we began experiencing Azure Front Door issues resulting in a loss of availability of some services. In addition, customers may experience issues accessing the Azure Portal. Customers can attempt to use programmatic methods (PowerShell, CLI, etc.) to access/utilize resources if they are unable to access the portal directly. We have failed the portal away from Azure Front Door (AFD) to attempt to mitigate the portal access issues and are continuing to assess the situation.
We are actively assessing failover options of internal services from our AFD infrastructure. Our investigation into the contributing factors and additional recovery workstreams continues. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:57 UTC on 29 October 2025
---
Update: 16:35 UTC:
Azure Portal Access Issues
Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025
---
Azure Portal Access Issues
We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly.
This message was last updated at 16:18 UTC on 29 October 2025
---
Message from the Azure Status Page: https://azure.status.microsoft/en-gb/status
Azure Network Availability Issues
Starting at approximately 16:00 UTC, we began experiencing Azure Front Door issues resulting in a loss of availability of some services. We suspect an inadvertent configuration change as the trigger event for this issue. We are taking two concurrent actions where we are blocking all changes to the AFD services and at the same time rolling back to our last known good state.
We have failed the portal away from Azure Front Door (AFD) to mitigate the portal access issues. Customers should be able to access the Azure management portal directly.
We do not have an ETA for when the rollback will be completed, but we will update this communication within 30 minutes or when we have an update.
This message was last updated at 17:17 UTC on 29 October 2025
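For anyone locked out of the portal, the "programmatic methods (PowerShell, CLI, etc.)" suggestion above also works from the Python SDK. A minimal sketch, assuming the azure-identity and azure-mgmt-resource packages are installed, the ARM/auth endpoints are reachable for you, and the subscription id placeholder is filled in:

    from azure.identity import DefaultAzureCredential    # tries env vars, managed identity, az CLI creds, etc.
    from azure.mgmt.resource import ResourceManagementClient

    subscription_id = "<your-subscription-id>"            # placeholder
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

    # Listing resource groups is a cheap way to confirm the management plane
    # is reachable even while the portal itself is not.
    for rg in client.resource_groups.list():
        print(rg.name, rg.location)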
"We have initiated the deployment of our 'last known good' configuration. This is expected to be fully deployed in about 30 minutes from which point customers will start to see initial signs of recovery. Once this is completed, the next stage is to start to recover nodes while we route traffic through these healthy nodes."
"This message was last updated at 18:11 UTC on 29 October 2025"
At this stage, we anticipate full mitigation within the next four hours as we continue to recover nodes. This means we expect recovery to happen by 23:20 UTC on 29 October 2025. We will provide another update on our progress within two hours, or sooner if warranted.
This message was last updated at 19:57 UTC on 29 October 2025
AFD is down quite often regionally in Europe for our services. In 50%+ of the cases they just don't report it anywhere, even if it's for 2h+.
Spam those Azure tickets. If you have a CSAM, build them a nice powerpoint telling the story of all your AFD issues (that's what they are there for).
> In 50%+ of the cases they just don't report it anywhere, even if it's for 2h+.
I assume you mean publicly. Are you getting the service health alerts?
CSAM apparently also means Customer Success Account Manager for those who might have gotten startled by this message like me.
Alternative für Deutschland was strange enough, when I saw CSAM I was really wondering what thread I had stumbled into
Thank you, not going to google that shit.
"Apply to become a CSAM mentor"
Some really unfortunate acronyms flying around the Microsoft ecosystem . . .
Quite so. The acronym collision rate is high.
In general, plain language works so much better than throwing bowls of alphabet soup around.
That's a funny criticism to make on a tech forum.
But, for future reference:
site:microsoft.com csam
In many cases: no service health alerts, no status page updates, and no confirmations from the support team in tickets. Still, we can confirm these issues from different customers across Europe. Mostly the issues are region dependent.
I got a service health alert an hour after it started, saying the portal was having issues. Pretty useless and misleading.
That should go into the presentation you provide your CSAM with as well.
Storytelling is how issues get addressed. Help the CSAM tell the story to the higher ups.
> CSAM
Child Sex-Abuse Material?!? Well, a nice case of acronym collision.
They should rename to Success Customer Account Manager.
Most companies just call 'em CSMs
Supervisor Customer Account Manager: a remote kind of job, paid occasionally with gift cards
...performed by cheap, open weight LLM.
Definitely the most baffling acronym collision I have seen with Microsoft. At one point I counted 4 different products abbreviated VSTS.
They must really depend on their government contracts with this administration…
Oh dear. Will make for an awkward thing to have on your resume.
"CSAM Ninja"
This is the single most frustrating thing about these incidents: you're hamstrung in what you can do or how you can react until Microsoft officially acknowledges a problem. That took nearly 90 minutes both today and when it happened on 9th October.
So true. Instead of getting fast feedback, we waste time searching for our own issues first.
Same experience. We've recently migrated fully away from AFD due to how unreliable it is.
I'll be interested in the incident writeup since DNS is mentioned. It will be interesting in a way if it is similar to what happened at AWS.
It's pretty unlikely. AWS published a public RCA: https://aws.amazon.com/message/101925/. A race condition in a DNS 'record allocator' caused all DNS records for DDB to be wiped out.
I'm simplifying a bit, but I don't think it's likely that Azure has a similar race condition wiping out DNS records on _one_ system that then propagates to all others. The similarity might just end at "it was DNS".
That RCA was fun. A distributed system with members that don't know about each other, don't bother with leader elections, and basically all stomp all over each other updating the records. It "worked fine" until one of the members had slightly increased latency and everything cascade-failed down from there. I'm sure there was missing (internal) context but it did not sound like a well-architected system at all.
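A toy sketch of that failure shape (not the actual AWS system, just the "independent full-plan writers, last write wins, no fencing" pattern): a lagging allocator applies a stale, near-empty plan and clobbers the newer records.

    import threading, time

    zone = {}                      # the shared "DNS zone"
    zone_lock = threading.Lock()   # protects the dict, not the protocol

    def allocator(name, lag, plan):
        # each allocator computes a full plan, then blindly replaces the zone with it
        time.sleep(lag)
        with zone_lock:
            zone.clear()
            zone.update(plan)
            print(f"{name} applied its plan: {zone}")

    fresh = threading.Thread(target=allocator, args=("fresh", 0.1, {"ddb.example": "10.0.0.1"}))
    stale = threading.Thread(target=allocator, args=("stale", 0.5, {}))  # older, emptier plan
    fresh.start(); stale.start(); fresh.join(); stale.join()
    print("final zone:", zone)     # the laggard wins and the records are gone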
Needs STONITH
https://isitdns.com/
It's always DNS
It is a coin flip, heads DNS, tails BGP
THIS is the real deal. Some say it's always DNS, but many times it's some routing fuckup with BGP. The two most cursed three-letter-acronym technologies out there.
When a service goes down it's DNS; when an entire nation or group of nations vanishes, it's BGP.
DNS has both naming and cache invalidation, so no surprise it’s among the hardest things to get right. ;)
Yea, it's not just the portal. microsoft.com is down too.
Seems all Microsoft-related domains are impacted in some way.
• https://www.xbox.com/en-US also doesn't fully paint. Header comes up, but not the rest of the page.
• https://www.minecraft.net/en-us is extremely slow, but eventually came up.
Downdetector says aws and gcp are down too. Might be in for a fun day.
From what I can tell, Downdetector just tracks traffic to their pages without actually checking if the site is down.
The other day during the AWS outage they "reported" OVH down too.
Yea, I saw that, but I'm not sure how accurate that is. A few large apps/companies I know to be 100% on AWS in us-east-1 are cranking along just fine.
Not sure if this is true. I just logged into the console with no glitch.
AWS was having performance issues and I believe it's resolved.
Yeah, I am guessing it's just a placeholder till they get more info. I thought I saw somewhere that internally within Microsoft it's seen as a "Sev 1" with "all hands on deck" - Annoyingly I can't remember where I saw it, so if someone spots it before I do, please credit that person :D
Edit: Typo!
It's a Sev 0 actually (as one would expect - this isn't a big secret). I was on the engineering bridge call earlier for a bit. The Azure service I work on was minimally impacted (our customer facing dashboard could not load, but APIs and data layer were not impacted) but we found a workaround.
It was here https://news.ycombinator.com/item?id=45749054 but that comment has been deleted.
It sure must be embarrassing for the website of the second richest company in the world to be down.
Yes, and it seems that at least for some, login.microsoftonline.com is down too, which is part of the Entra login / SSO flow.
Whilst the status message acknowledges the issue with Front Door (AFD), it seems as though the rest of the actions are about how to get Portal/internal services working without relying on AFD. For those of us using Front Door, does that mean we're in for a long haul?
Please migrate off of Front Door. It has historically been a failure mode ever since it came out. Anything else is better at this point.
Didn't the underlying vendor they used for Azure Front Door go bankrupt? It's probably on life support.
We saw issues before 16:00 UTC - approx 15:38
They briefly had a statement about using Traffic Manager to work with your AFD to work around this issue, with a link to learn.microsoft.com/...traffic-manager, and the link didn't work. Due to the same issue affecting everyone right now.
They quickly updated the message to REMOVE the link. Comical at this point.
The statement is still there on the status page though.
They re-added it once the site was accessible.
Yet another reason to move away from Front Door.
We already had to do it for large files served from Blob Storage since they would cap out at 2MB/s when not in cache of the nearest PoP. If you’ve ever experienced slow Windows Store or Xbox downloads it’s probably the same problem.
I had a support ticket open for months about this and in the end the agent said “this is to be expected and we don’t plan on doing anything about it”.
We’ve moved to Cloudflare and not only is the performance great, but it costs less.
Only thing I need to move off Front Door is a static website for our docs served from Blob Storage, this incident will make us do it sooner rather than later.
Front Door is not good.
We are considering the same, but because our website uses an apex domain we would need to move all DNS resolution to cloudfront, right? Does it have as nice a "rule set builder" as Azure?
Unless you pay for CloudFlare's Enterprise plan, you're required to have them host your DNS zone; you can use a different registrar as long as you just point your NS records to Cloudflare.
Be aware that if you’re using Azure as your registrar, it’s (probably still) impossible to change your NS records to point to CloudFlare’s DNS server, at least it was for me about 6 months ago.
This also makes it impossible to transfer your domain to them, as CloudFlare's domain transfer flow requires you to set your NS records to point to them before their interface shows a transfer option.
In our case we had to transfer to a different registrar, we used Namecheap.
However, transferring a domain from Azure was also a nightmare. Their UI doesn’t have any kind of transfer option, I eventually found an obscure document (not on their Learn website) which had an az command which would let you get a transfer code which I could give to Namecheap.
Then I had to wait over a week for the transfer timeout to occur because there is no way on Azure side that I could find to accept the transfer immediately.
I found CloudFlare’s way of building rules quite easy to use, different from Front Door but I’m not doing anything more complex than some redirects and reverse proxying.
I will say that Cloudflare’s UI is super fast, with Front Door I always found it painfully slow when trying to do any kind of configuration.
Cloudflare also doesn’t have the problem that Front Door has where it requires a manual process every 6 months or so to renew the APEX certificate.
Thanks :). We don't use Azure as our registrar. It seems I'll have to plan for this then. We also had another issue: AFD has a hard 500ms TLS handshake timeout (doesn't matter how much you put in the origin timeout settings), which means if our server was slow for some reason we would get a 504 origin timeout.
CloudFlare != CloudFront
I meant Cloudflare
DNS. Ofc.
Sounds like they need to move their portal to a region with more capacity for the desired instance type. /s
I noticed that Starbucks mobile ordering was down and thought “welp, I guess I’ll order a bagel and coffee on Grubhub”, then GrubHub was down. My next stop was HN to find the common denominator, and y’all did not disappoint.
Good thing HN is hosted on a couple servers in a basement. Much more reliable than cloud, it seems!
Just don't use genetically identical hardware:
https://news.ycombinator.com/item?id=32031639
https://news.ycombinator.com/item?id=32032235
Edit: wow, I can't believe we hadn't put https://news.ycombinator.com/item?id=32031243 in https://news.ycombinator.com/highlights. Fixed now.
I’ve seen this up close twice and I’m surprised it’s only twice. Between March and September one year, 6 people on one team had to get new hard drives in their thinkpads and rebuild their systems. All from the same PO but doled out over the course of a project rampup. That was the first project where the onboarding docs were really really good, since we got a lot of practice in a short period of time.
Long before that, the first RAID array anyone set up for my (team's) usage arrived from Sun with 2 dead drives out of 10. They RMA'd us 2 more drives and one of those was also DOA. That was a couple years after Sun stopped burning in hardware for cost savings, which maybe wasn't that much of a savings all things considered.
I got burnt by this bug on freakin' Christmas Eve 2020 ( https://forum.hddguru.com/viewtopic.php?f=10&t=40766 ). There was some data loss and a lot of lessons learned.
I love that "Ask HN: What'd you do while HN was down?" was a thing
It was on AWS at least (for a while) in 2022.
https://news.ycombinator.com/item?id=32030400
Yeah looks like they're back on M5.
dang saying it's temporary: https://news.ycombinator.com/item?id=32031136
And that IP says it's with M5 again. Always has been.
Starbucks mobile was down during the AWS outage too...
They are multi-cloud --- vulnerable to all outages!
you wouldn't believe some of the crap enterprise bigco mgmt put in place for disaster recovery.
they think that they are 'eliminating a single point of failure', but in reality, they end up adding multiple, complicated points of mostly failure.
Go multi-cloud they said...
Gonna build my application to be multicloud so that it requires multiple cloud platforms to be online at the same time. The RAID 0 of cloud computing.
I noticed it when my Netatmo rigamajig stopped notifying me of bad indoor air quality. Lovely. Why does it need to go through the cloud if the data is right there in the home network…
Wow I just left a Starbucks drivethru line because it was just not moving. I guess it was because of this.
You'd think that Starbucks execs would be held accountable for the fragile system they have put in place.
But they won't be.
Why? Starbucks is not providing a critical service. Spending less money and resources and just accepting the risk that occasionally you won't be able to sell coffee for a few hours is a completely valid decision from both management and engineering pov.
Ha, maybe rethink the I AM NOTHING BUT A HUGE CLOUD CONSUMER thing on some fundamental levels? Like food?
My inner Nelson-from-the-Simpsons wishes I was on your team today, able to flaunt my flask of tea and homemade packed sandwiches. I would tease you by saying 'ha ha!' as your efforts to order coffee with IP packets failed.
I always go everywhere adequately prepared for beverages and food. Thanks to your comment, I have a new reason to do so. Take out coffees are actually far from guaranteed. Payment systems could go down, my bank account could be hacked or maybe the coffee shop could be randomly closed. Heck, I might even have an accident crossing the road. Anything could happen. Hence, my humble flask might not have the top beverage in it but at least it works.
We all design systems with redundancy, backups and whatnot, but few of us apply this thinking to our food and drink. Maybe get a kettle for the office and a backup kettle, in case the first one fails?
You know you can talk to your barista and ask for a bagel, right? If you're lucky they still take cash... if you still _have_ cash. :)
For some reason an Azure outage does not faze me in the same way that an AWS outage does.
I have never had much confidence in Azure as a cloud provider. The vertical integration of all the things for a Microsoft shop was initially very compelling. I was ready to fight that battle. But, this fantasy was quickly ruined by poor execution on Microsoft's part. They were able to convince me to move back to AWS by simply making it difficult to provision compute resources. Their quota system & availability issues are a nightmare to deal with compared to EC2.
At this point I'd rather use GCP over Azure and I have zero seconds of experience with it. The number of things Microsoft gets right in 2025 can be counted on one hand. The things they do get right are quite good, but everything else tends to be extremely awful.
Many years back was the first time I used Azure, evaluating it for a client.
I remember I at one point had expanded enough menus that it covered the entirety of the screen.
Never before have I felt so lost in a cloud product.
The "Blades" experience [0] where instead of navigating between pages it just kept opening things to the side and expanding horizontally?
Yeah, that had some fun ideas but was way more confusing than it needed to be. But also that was quite a few years back now. The Portal ditched that experience relatively quickly. Just long enough to leave a lot of awful first impressions, but not long enough for it to be much more than a distant memory at this point, several redesigns later.
[0] The name "Blades" for that came from the early years of the Xbox 360, maybe not the best UX to emulate for a complex control panel/portal.
Azure to me has always suffered from a belief that “UI innovations can solve UX complexity if you just try hard enough.”
Like, AWS, and GCP to a lesser extent, has a principled approach where simple click-ops goals are simple. You can access the richer metadata/IAM object model at any time, but the wizards you see are dumb enough to make easy things easy.
With Azure, those blades allow tremendously complex “you need to build an X Container and a Container Bucket to be able to add an X” flows to coexist on the same page. While this exposes the true complexity, and looks cool/works well for power users, it is exceedingly unintuitive. Inline documentation doesn’t solve this problem.
I sometimes wonder if this is by design: like QuickBooks, there’s an entire economy of consultants who need to be Certified and thus will promote your product for their own benefit! Making the interface friendly to them and daunting to mere mortals is a feature, not a bug.
But in Azure’s case it’s hard to tell how much this is intentional.
Not sure what to imagine with this given I didn't use Azure at the time. Is this like the Windows XP style task menu?
Think Niri [0], but worse, embedded in a web browser tab, and without keyboard navigation.
Here's a somewhat ancient Stack Overflow screenshot I found: https://i.sstatic.net/yCseI.png
(I think that's from near the transition because it has full "windowing" controls of minimize/maximize/close buttons. I recall a period with only close buttons.)
All that blue space you could keep filling with more "blades" as you clicked on things until the entire page started scrolling horizontally to switch between "blades". Almost everything you could click opened in a new blade rather than in place in the existing blade. (Like having "Open in New Window" as your browser default.)
It was trying to merge the needs of a configurable Dashboard and a "multi-window experience". You could save collections of blades (a bit like Niri workspaces) as named Dashboards. Overall it was somewhere between overkill and underthought.
(Also someone reminded me that many "blades" still somewhat exist in the modern Portal, because, of course, Microsoft backwards compatibility. Some of the pages are just "maximized Blades" and you can accidentally unmaximize them and start horizontally scrolling into new blades.)
[0] https://github.com/YaLTeR/niri
azure likes to open new sections on the same tab / page as opposed to reloading or opening a new page / tab (overlays? modals? I'm lost on graphic terms)
depending on the resource you're accessing, you can get 5+ sections each with their own ui/ux on the same page/tab and it can be confusing to understand where you're at in your resources
if you're having trouble visualizing it, imagine an url where each new level is a different application with its own ui/ux and purpose all on the same webpage
Imagine OG Xbox menus, or the PS3/PSP menus.
AWS' UI is similarly messy, to this day. They regularly remove useful data from the UI, or change stuff like the default sort order of database snapshots from last created to initial instance created date.
I never understood why a clear and consistent UI and improved UX aren't more of a priority for the big three cloud providers. Even though you talk mostly via platform SDKs, I would consider a better UI, especially initially, a good way to win new customers and get them to pick your platform over others.
I guess with their bottom line they don't need it (or cynically, you don't want to learn and invest in another cloud if you did it once).
It’s more than just the UI itself (which is horrible), it’s the whole thing that is very hostile to new users even if they’re experienced. It’s such an incoherent mess. The UI, the product names, the entire product line itself, with seemingly overlapping or competing products… and now it’s AI this and AI that. If you don’t know exactly what you’re looking for, good luck finding it. It’s like they’re deliberately trying to make things as confusing as possible.
For some reason this applies to all AWS, GCP and Azure. Seems like the result of dozens of acquisitions.
I still find it much easier to just self host than learn cloud and I’ve tried a few times but it just seems overly complex for the sake of complexity. It seems they tie in all their services to jack up charges, eg. I came for S3 but now I’m paying for 5 other things just to get it working.
Any time something is that unintuitive to get started, I automatically assume that if I encounter a problem that I’ll be unable to solve it. That thought alone leads me to bounce every time.
100% agree. I've been working in the industry for almost 20 years, I'm a full stack developer and I manage my servers. I've tried signing up for AWS and I noped out.
AWS Is a complete mess. Everything is obscured behind other products, and they're all named in the most confusing way possible.
Count your blessings. You could have to use Azure SSO through Oracle Cloud..... ; ;
I found it intuitive but admittedly it felt a lot like their Xbox UI which I used a lot during my formative years
Amazon: here's two buttons, some check boxes and a random popup.
MSFT : Hold my beer...
The problem is that in some industries, Microsoft is the only option. Many of these regulated industries are just now transitioning from the data center to the cloud, and they've barely managed to get approval for that with all of the Microsoft history in their organization. AWS or GCloud are complete non-starters.
I moved a 100% MS shop to AWS circa 2015. We ran our DCs on EC2 instances just as if they were on prem. At some point we installed the AAD connector and bridged some stuff to Azure for office/mail/etc., but it was all effectively in AWS. We were selling software to banks so we had a lot of due diligence to suffer. AWS Artifact did much of the heavy lifting for us. We started with Amazon's compliance documentation and provided our own feedback on top where needed.
I feel like compliance is the entire point of using these cloud providers. You get a huge head start. Maintaining something like PCI-DSS when you own the real estate is a much bigger headache than if it's hosted in a provider who is already compliant up through the physical/hardware/networking layers. Getting application-layer checkboxes ticked off is trivial compared to "oops we forgot to hire an armed security team". I just took a look and there are currently 316 certifications and attestations listed under my account.
https://aws.amazon.com/artifact/faq/
I've found that lift and shifting to EC2 is also generally cheaper than the equivalent VMs on Azure.
Microsoft really wants you to use their PaaS offerings, and so things on Azure are priced accordingly. For a Microsoft shop just wanting to lift-and-shift, Azure isn't the best choice unless the org has that "nobody ever got fired for buying Microsoft" attitude.
> At this point I'd rather use GCP over Azure and I have zero seconds of experience with it.
TBH, GCP is very good! More people should use it.
I know for some people the prospect of losing their Google Cloud access due to an automated terms of service violation on some completely unrelated service is worrisome.
https://cloud.google.com/resource-manager/docs/project-suspe...
I'd hope you can create a Google Cloud account under a completely different email address, but I do as little business with Google as I can get away with, so I have no idea.
I haven't used much of GCP, but I have had a good experience with Cloud Run and really haven't found a comparable offering from the other clouds.
Isn’t ECS Fargate pretty much the same thing?
Azure outages don’t faze anyone because nobody notices when it happens.
What Amazon, Azure, and Google are showing with their platform crashes amid layoffs, while they support governments that are oppressing citizens and ignoring the law, is that they do not care about anything other than the bottom line.
They think they have the market captured, but I think what their dwindling quality and ethics are really going to drive is adoption of self-hosting and distributed computing frameworks. Nerds are the ones who drove adoption of these platforms, and we can eventually end it if we put in the work.
Seriously, with container technology and a bit more work/adoption on distributed compute systems and file storage (IPFS, Filecoin), there is a future where we don't have to use Big Brother's compute platform. Fuck these guys.
These were my thoughts exactly. I may have my tinfoil hat on, but with outages this close together between the largest cloud providers amid social unrest, I wonder whether the government / tech companies are implementing some update that adds additional spyware / blackout functionality.
I really hope this pushes the internet back to how it used to be, self hosted, privacy, anonymity. I truly hope that's where we're headed, but the masses seem to just want to stay comfortable as long as their show is on TV
> they do not care about anything other than the bottom line.
if all companies focused on fixing each and every social issue that exists in the world, how would they make any money?
Preach
The only reason you'd notice MS was down was if Github was down....
GitHub doesn't use Azure yet, but they just published their migration path to Azure a few days ago.
I would link to that article, but that one does seem down ;)
https://www.githubstatus.com/incidents/4jxdz4m769gy
> They're stating they're working with the Azure teams, so I suspect this is related.
At least some bits of it do. I was writing something to pull logs the other day and the redirect was to an Azure bucket. It also returned a 401 with the valid temporary authed redirect in the header. I was a bit worried I'd found a massive security hole, but after some testing it appears it just returned the wrong status code.
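A rough sketch of coping with that behaviour, assuming the pre-signed URL really does arrive in the Location header of the 401 (logs_url and token are placeholders, not a real GitHub endpoint):

    import urllib.request, urllib.error

    def fetch_logs(logs_url, token):
        req = urllib.request.Request(logs_url, headers={"Authorization": "Bearer " + token})
        try:
            return urllib.request.urlopen(req).read()
        except urllib.error.HTTPError as err:
            redirect = err.headers.get("Location")
            if err.code == 401 and redirect:
                # wrong status code, but the pre-signed redirect is still usable
                return urllib.request.urlopen(redirect).read()
            raise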
What did you do when AWS was down last week?
I read "Microsoft shop" as "Microsoft slop". Fitting. But at least they open source wash themselves so much they're practically a charity right?
Even if the cloud providers have much better reliability than most on-prem infra, the failure correlation they induce negates much of the benefit.
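A back-of-the-envelope illustration of that correlation point (the availability numbers are made up for the example):

    # Five dependencies. On-prem: each 99.9% available, failing independently.
    # Single cloud: each 99.95% available, but they all fail together.
    n = 5
    p_onprem, p_cloud = 0.001, 0.0005

    p_any_down_onprem = 1 - (1 - p_onprem) ** n   # ~0.5%: something is often broken
    p_any_down_cloud = p_cloud                    # ~0.05%: rarer...

    p_all_down_onprem = p_onprem ** n             # ~1e-15: essentially never
    p_all_down_cloud = p_cloud                    # ~0.05%: ...but when it breaks, everything breaks at once

    print(p_any_down_onprem, p_any_down_cloud, p_all_down_onprem, p_all_down_cloud)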
Currently standing in a half-closed supermarket because the tills are down and they can't take payments.
Just to add - this particular supermarket wasn't fully down; it took ages for them to press "sub total" and then pick the payment method. I suspect it was slow waiting for a request to time out, perhaps.
I remember the last mechanical cash registers in my country in the 90s, and when they got replaced by early electronic ones with blue vacuum fluorescent tubes. Then everything got smaller and smaller. Now I'm pestered to "add the item to the cart" by software.
Last week I couldn't pay for flowers for grandma's grave because the smartphone-sized card terminal refused to work - it was stuck in a charging-booting loop, so I had to get cash. Though my partner thinks she actually wanted to get cash without a receipt for herself, to avoid taxes.
IIRC, the grocery chain I worked for used to have an offline mode to move customers out the door. But it meant that when the system came back online, if the customer's card was denied, the customer got free groceries.
> IIRC, the grocery chain I worked for used to have an offline mode to move customers out the door.
Chick-fil-a has this.
One of the tech people there was on HN a few years ago describing their system. Credit card approval slows down the line, so the cards are automatically "approved" at the terminal, and the transaction is added to a queue.
The loss from fraudulent transactions turns out to be less than the loss from customers choosing another restaurant because of the speed of the lines.
Yea, good old store and forward. We implemented it in our PoS system. Now, we do non-PCI integrations so we aren't in PCI scope, but depending on the processor, it can come with some limitations. Like, you can do store and forward, but only up to X number of transactions. I think for one integration it's 500-ish store wide (it uses a local gateway that stores and forwards to the processor's gateway). The other integration we have, it's 250, but store and forward on device, per device.
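Roughly the shape of store-and-forward, as a sketch rather than anything processor-specific (the caps mirror the limits mentioned above; the gateway call is a stand-in):

    from collections import deque

    MAX_QUEUED = 250          # per-device cap, as described above (varies by processor)
    MAX_OFFLINE_AMOUNT = 50.0 # hypothetical "small charges only while offline" rule

    pending = deque()

    def gateway_charge(card_token, amount):
        # stand-in for the real processor/gateway call
        return "APPROVED"

    def charge(card_token, amount, processor_online):
        if processor_online:
            return gateway_charge(card_token, amount)        # normal path
        if amount > MAX_OFFLINE_AMOUNT or len(pending) >= MAX_QUEUED:
            return "DECLINED_OFFLINE"                        # too much risk to accept blind
        pending.append((card_token, amount))                 # accept now, settle later
        return "APPROVED_OFFLINE"

    def settle():
        # run when connectivity returns; declined charges are the store's loss
        while pending:
            card_token, amount = pending.popleft()
            if gateway_charge(card_token, amount) != "APPROVED":
                print(f"writing off {amount:.2f} on {card_token}")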
What I gather from this is to always try a dead card first just in case the store is in offline mode
I remember that banks will try to honor the transactions, even if the customer's balance/credit limit is exhausted. It doesn't apply only to some gift cards.
There's a Family Dollar by my house that is down at least 2 full days per month because of bad inet connectivity. I live close enough that with a small tower on my roof I can get line of sight to theirs. I've thought about offering them a backup link off my home inet if they give me 50% of sales whenever it's in use. It would be a pretty good deal for them, better some sales when their inet is down vs none.
50% of sales? what do you think the gross margin is on average for each item sold?
It's Family Dollar, margin has to be almost nothing and sales per day is probably < $1k. That's why I said 50% of sales and not profit.
I go there daily because it's a nice 30min round trip walk and I wfh. I go up there to get a diet coke or something else just to get out of the house. It amazes me when I see a handwritten sign on the door "closed, system is down". I've gotten to know the cashiers so I asked, and it's because the internet connection goes down all the time. That store has to be one of the most poorly run things I've ever seen, yet it stays in business somehow.
I think the point people are trying and failing to make is that asking for half of sales means half of revenue, not half of net, and that you're out of your goddamned mind if you think a store with razor thin margins would sell at a massive loss rather than just close due to connectivity problems.
Your responses imply that you think people are questioning whether you would lose money on the deal while we are instead saying you’ll get laughed out of the store, or possibly asked never to come back.
They're all run on a shoestring:
1: I doubt they're "with it" enough to put together a backup arrangement for internet.
2: Their internet problems are probably due to a cheapo router, loose wire, etc.
3: The employees probably like the break.
2-3%, a bit higher on perishables. Though I'd just ask for lump sum payments in cash since it likely has to not go through corporate (as in, avoid the corporation).
In that case the other 50%.
It's retail. The margin is 30-50% for sure.
EDIT: their last quarterly gross margin was 36%. They lost $3.7bn in 24Q4 -- the Christmas quarter. Sold to PE in Q1.
All my limited knowledge about retail is that losing money in Q4 means you’re dead. Are they fundamentally different than retail?
You'd think any SeriousBusiness would have a backup way to take customers' money. This is the one thing you always want to be able to do: accept payment. If they made it so they can't do that, they deserve the hit to their revenue. People should just walk out of the store with the goods if they're not being charged.
Why doesn't someone in the store at least have one of those manual kachunk-kachunk carbon copy card readers in the back that they can resuscitate for a few days until the technology is turned back on? Did they throw them all away?
I think a lot of payment terminals have an option to record transactions offline and upload them later, but apparently it's not enabled by default - probably because it increases your risk that someone pays with a bad card.
If they used standalone merchant terminals, then those typically use the local LAN, which can roll over to cellular or POTS in the event of a network outage. The store can process a card transaction with the merchant terminal and then reconcile with the end of day chit. This article from 2008 describes their PoS: https://www.retailtouchpoints.com/topics/store-operations/ca...
The kachunk-kachunk credit card machines need raised digits on the cards, and I don't think most banks have been issuing those for years at this point. Mine have been smooth for at least 10 years.
> kachunk-kachunk credit card machines
How aptly descriptive.
It's hit or miss. My (brand new) bank card and chase credit card are raised. But my other credit cards are flat.
It's Family Dollar. They don't care about customer satisfaction, and reliability is a cost they won't pay.
The stores are in the hood or middle of nowhere. The customers don’t have many options.
Then they would need to get the little booklets of invalid numbers to keep by the register to check (yes, I am old).
Many businesses don't lose revenue from short outages, it just gets shifted.
Pretty sure it'd be a lot better deal for them to have no sales than to pay out 50% of sales on stuff with single digit margins.
Mind-boggling that any retailer would not have the capability to at least run the checkout stations offline.
You can, but it's all about risk mitigation. Most processors have some form of store and forward (and it can have limitations like only X number of transactions). Some even have controls to limit the amount you can store-and-forward (for instance, only charges under $50). But ultimately, it's still risk mitigation. You can store-and-forward, but you're trusting that the card/account has the funds. If it doesn't, you lose, and there ain't shit you can do about it. If you can't tolerate any risk, you don't turn on store and forward systems, and then you can't process cards offline.
It's not that we are not capable. It's: is the business willing to assume the risk?
I knew an old guy in the '00s who specialized in COBOL/Fortran, working on till software. Guess he retired and they couldn't maintain it.
Anyone remember Bob's number?? Bob?! Oh the humanity! We're all gonna be canned!
> Currently standing in a half-closed supermarket because the tills are down and they can't take payments
There's a fairly large supermarket near me that has both kinds of outages.
Occasionally it can't take cards because the (fiber? cable?) internet is down, so it's cash only.
Occasionally it can't take cash because the safe has its own cellular connection, and the cell tower is down.
I was at Frank's Pizza in downtown Houston a few weeks ago and they were giving slices of pizza away because the POS terminal died, and nobody knew enough math to take cash. I tried to give them a $10 and told them to keep the change, but "keep the change" is an unknown phrase these days. They simply couldn't wrap their brains around it. But hey, free pizza!
I’ve been migrating our services off of Azure slowly for the past couple of years. The last internet facing things remaining are a static assets bucket and an analytics VM running Matomo. Working with Front Door has been an abysmal experience, and today was the push I needed to finally migrate our assets to Cloudflare.
I feel pretty justified in my previous decisions to move away from Azure. Using it feels like building on quicksand…
All the clouds have had major outages this year.
At this point I don't believe that any one of them is any better or more reliable than the others.
Google Cloud Run or Cloudflare Workers it is.
Personally I am thinking more and more about Hetzner. Yes, I know it's not an apples-to-apples comparison, but it's honestly so good.
Someone had created a video where they showed the underlying hardware etc. I am wondering if there is something like https://vpspricetracker.com/ but with geek benchmarks as well.
This video was affiliated with ScalaHosting, but still, I don't think there was too much bias from them, and at around 3:37 they showed a graph comparing prices: https://www.youtube.com/watch?v=9dvuBH2Pc1g
Now it shows how Contabo has better hardware, but I am pretty sure there might be some other issues, and honestly I feel a sense of trust with Hetzner that I am not sure about with others.
So: either Hetzner, or self-hosting stuff personally, or just having a very cheap VPS and moving to Hetzner if need be (but Hetzner already is pretty cheap), or I might use some free services that I know are good as well.
One of the recent (4 months ago) Cloudflare outages (I think it was even Workers) was caused by Google Cloud being down and Cloudflare hosting an essential service there.
It was Workers KV (an optional storage add-on to Workers), and we fixed it, it no longer depends on GCP:
https://blog.cloudflare.com/rearchitecting-workers-kv-for-re...
Hm, it seemed that they hosted a critical service for Cloudflare KV on Google itself, but I wonder about the update.
Personally I just trust Cloudflare more than Google, given how their focus is on security, whereas Google feels googly...
I have heard some good things about Google Cloud Run, and Google's interface feels the best out of AWS, Azure, and GCloud, but I still would just prefer Cloudflare/Hetzner, I think.
Another question: has there ever been a list of all major cloud outages? I am interested in how many times Google Cloud and the other cloud providers went majorly down, y'know? Is there a website/git project that tracks this?
Ouch, and login.microsoftonline.com too - i.e. SSO using MS accounts. We'd just rolled that out across most (all?) of our internal systems...
And microsoft.com too - that's gotta hurt
SSO and 365 are working fine for us, but admin portals for Azure/365 are down. Our workloads in Azure don't seem to be impacted.
It is interesting to see the differential across different tenants in different geographies:
- on a US tenant I am unable to access login.microsoftonline.com and the login flow stalls on any SSO authentication attempt.
- on a European tenant, probably germany-west, I am able to login and access the Azure portal.
Guess you have NASSO now (Not A Single Sign On)
It's Safe and Secure!
I am still stunned people choose to do this, considering major Office 365 outages are basically a weekly thing now.
We are very dependent on Azure and Microsoft Authentication and Microsoft 365 and haven’t had weekly or even monthly issues. I can think of maybe three issues this year.
So that's why I can't check in for my Alaska Airlines flight... https://news.microsoft.com/source/features/digital-transform...
"BREAKING: Alaska Airlines' website, app impacted amid Microsoft Azure outage"
https://www.youtube.com/watch?v=YJVkLP57yvM
Pretty much every single Microsoft domain I've tried to access loads for a looooong time before giving me some bare html. I wonder if someone can explain why that's happening.
I was wondering the same thing
I am unable to load this article...presumably for related reasons
Can't download VSCode :D
Error: visual-studio-code: Download failed on Cask 'visual-studio-code' with message: Download failed: https://update.code.visualstudio.com/1.105.1/darwin-arm64/st...
Also can't do anything right now with the repos we have in Azure DevOps, how lovely...
get vscodium then
microsoft.com and some subdomains (answers.microsoft.com) have no A and AAAA records. They screwed up big time.
https://archive.is/Q4izZ
That specific subdomain has issues with propagation: https://dnschecker.org/#A/answers.microsoft.com (only four resolvers return records)
The root zone and www. do not: https://dnschecker.org/#A/microsoft.com (all resolvers return records)
And querying https://www.microsoft.com/ results in HTTP 200 on the root document, but the page elements return errors (a 504 on the .css/.js documents, a 404 on some fonts, Name Not Resolved on scripts.clarity.ms, Connection Timed Out on wcpstatic.microsoft.com and mem.gfx.ms). That many different kinds of errors is actually kind of impressive.
I'm gonna say this was a networking/routing issue. The CDN stayed up, but everything else non-CDN became unroutable, and different requests traveled through different paths/services, but each eventually hit the bad network path, and that's what created all the different responses. Could also have been a bad deploy or a service stopped running and there's different things trying to access that service in different ways, leading to the weird responses... but that wouldn't explain the failed DNS propagation.
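A quick way to reproduce that mix of failures yourself (the hostnames are just the ones mentioned above; output will obviously differ once things recover):

    import socket, urllib.request

    hosts = ["www.microsoft.com", "answers.microsoft.com",
             "wcpstatic.microsoft.com", "scripts.clarity.ms"]

    for host in hosts:
        try:
            addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(host, 443)})
        except socket.gaierror as err:
            print(f"{host}: DNS lookup failed ({err})")
            continue
        try:
            code = urllib.request.urlopen(f"https://{host}/", timeout=10).getcode()
            print(f"{host}: {addrs} -> HTTP {code}")
        except Exception as err:     # HTTP errors, timeouts, TLS failures, ...
            print(f"{host}: resolves to {addrs} but the request failed ({err})")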
wow, right after AWS suffered a similar thing.
I wonder if this is microsoft "learning" to "prevent" such an issue and instead triggered it...
"One often meets his destiny on the path he takes to avoid it" -- Master Oogway
We’re 100% on Azure but so far there’s no impact for us.
Luckily, we moved off Azure Front Door about a year ago. We’d had three major incidents tied to Front Door and stopped treating it as a reliable CDN.
They weren’t global outages, more like issues triggered by new deployments. In one case, our homepage suddenly showed a huge Microsoft banner about a “post-quantum encryption algorithm” or something along those lines.
Kinda wild that a company that big can be so shaky on a CDN, which should be rock solid.
We battled https://learn.microsoft.com/en-us/answers/questions/1331370/... for over a year, and finally decided to move off since there was no resolution. Unfortunately our API servers were still behind AFD, so they were affected by today's stuff...
Outages are one thing, but having your content polluted seems like a more serious problem? Unless you subscribed to microsoft banners somehow.
And it was HUGE, the microsoft logo was like 50% of the screen.
Same for me. It's very confusing that the status page [1] is green.
[1]: https://azure.status.microsoft/en-us/status
That status page is never red. Absolutely useless.
> There are currently no active events. Use Azure Service Health to view other issues that may be impacting your services.
Links to a page on Azure Portal which is down...
It's red right now.
Only for the Azure Portal, despite Front Door also being down but showing as green on the status page.
Heh, now it says Front Door and "Network Infrastructure" are down. That second one seems bad.
They added a message at the same time as your comment:
"We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly."
Yeah, this just took down the prod site for one of our clients since we host the front-end out of their CDN. Just got wrapped up panic-hosting it somewhere else for the past hour; it very quickly reminds you about the pain of cookies...
... and DNS caching, and browser file cache, and sessions...
Moving a website quickly is never fun.
The paradox of cloud provider crashes is that if the provider goes down and takes the whole world with it, it's actually good advertisement. Because, that means so many things rely on it, it's critically important, and has so many big customers. That might be why Amazon stock went up after AWS crash.
If Azure goes down and nobody feels it, does Azure really matter?
People feel it, but usually not general consumers like they do when AWS goes down.
If Azure goes down, it's mostly affecting internal stuff at big old enterprises. Jane in accounting might notice, but the customers don't. Contrast with AWS which runs most of the world's SaaS products.
People not being able to do their jobs internally for a day tends not to make headlines like "100 popular internet services down for everyone" does.
2026: the year of your own metal in a rack
2027: the year of migrating from your own metal to a managed provider
2028: the year of migrating from a managed provider to the cloud
2029: the year of migrating from the cloud to your own metal in a rack
People keep thinking the solution to their problems is to do something new (that they don't fully understand).
TIL it's called Nirvana Fallacy
I've been doing it since 1998 in my bedroom with a dual T1 (and on to real DCs later). While I've had some outages for sure it makes me feel better I am not that divergent in uptime in the long run vs big clouds.
Are you still on a dual T1? that's gotta be expensive
(and on to real DCs later) would imply their bare metal is now located in a data center.
really should stop skimming the comment when i find a part to comment on <facepalm>
I'd predict the year of linux desktop instead.
Pretty much all Azure services seem to be down. Their status page says it's only the portal since 16:00. It would be nice if these mega-companies could update their status page when they take down a large fraction of the Internet and thousands of services that use them.
FWIW, all of our databases, VMs, AKS clusters, services, jobs etc - are all working fine. Which services are down for you, maybe we can build a list?
Front Door is down for us (as Azure's Twitter account confirms)
All of our Azure workloads are up, but we don't use Azure Front Door. That seems to be the only impacted product, apart from the management portal.
We're using Application Gateway for ingress; that seems to be affected.
Same playbook for AWS. When they admitted that Dynamo was inaccessible, they failed to provide context that their internal services are heavily dependent on Dynamo
It's only after the fact they are transparent about the impact
Instead of cyber security awareness month, we should rename it to cloud availability awareness month.
So much of Belgium runs on Azure… it's honestly baffling how many services are down, there's no resilience built into (even large) companies anymore.
The outage impacted GitSocial minor version bump release: https://marketplace.visualstudio.com/items?itemName=GitSocia...
There's no way to tell, and after about 30 minutes, the release process on VS Code Marketplace failed with a cryptic message: "Repository signing for extension file failed.". And there's no way to restart/resume it.
The Internet is supposed to be decentralized. The big three seem to have all the power now (Amazon, Microsoft, and Google) plus Cloudflare/Oracle.
How did we get here? Is it because of scale? Going to market in minutes by using someone else's computers instead of building out your own, like co-location or dedicated servers, like back in the day.
It still is very decentralized. We are discussing this via the internet right now.
Yeah, but MyChart is down.
I need to drop AWS and start passing data through encrypted HN posts.
When AWS was down we were talking about it here, now Azure is down and we're still talking about it here. Where does HN actually live?
big, if true
A lot of money and years of marketing the cloud as the responsible business decision led us here. Now that the cloud providers have vendor lock-in, few will leave, and customers will continue to wildly overpay for cloud services.
Ahh, but you forget what it used to be like. Sites used to go down all the time.
Now, they go down a lot less frequently, but when they do, it's more widespread.
It's the Heisenberg cloud principle.
Not sure how the current situation is better. Being stranded with no way whatsoever to access most/all of your services sounds way more terrifying than regular issues limited to a couple of services at a time
> no way whatsoever to access most/all of your services
I work on a product hosted on Azure. That's not the case. Except for front door, everything else is running fine. (Front door is a reverse proxy for static web sites.)
The product itself (an iot stormwater management system) is running, but our customers just can't access the website. If they need to do something, they can go out to the sites or call us and we can "rub two sticks together" and bypass the website. (We could also bypass front door if someone twisted our arms.)
Most customers only look at the website a few times a year.
---
That being said, our biggest point of failure is a completely different iot vendor who you probably won't hear about on Hacker News when they, or their data networks, have downtime.
That's the whole point: big players like AWS and MS can go down, but here we are, still talking on the internet.
Decentralisation is winning it seems.
Not everyone has moved over, but I'm sure there have been thoughts or plans to.
From today [0].
> Big Tech lobbying is riding the EU’s deregulation wave by spending more, hiring more, and pushing more, according to a new report by NGO’s Corporate Europe Observatory and LobbyControl on Wednesday (29 October).
> Based on data from the EU’s transparency register, the NGOs found that tech companies spend the most on lobbying of any sector, spending €151m a year on lobbying — a 33 percent increase from €113m in 2023.
Gee whizz, I really do wonder how they end up having all the power!
[0] https://news.ycombinator.com/item?id=45744973
> How did we get here?
I think the answer lies in the surrounding ecosystem.
If you have a company it's easier to scale your team if you use AWS (or any other established ecosystem). It's way easier to hire 10 engineers that are competent with AWS tools than it is to hire 10 engineers that are competent with the IBM tools.
And from the individual's perspective it also makes sense to bet on larger platforms. If you want to increase your odds of getting a new job, learning the AWS tools gives you a better ROI than learning the IBM tools.
Meredith Whittaker (of Signal) addressed your question the other day: https://mastodon.world/@Mer__edith/115445701583902092
Efficiency (aka cost) <---> Resiliency/redundancy
Pick your point on the scale
Consolidation is the inevitable outcome of free unregulated markets.
In our highly interconnected world, decentralization paradoxically requires a central authority to enforce decentralization by restricting M&A, cartels, etc.
Is there a theorem that models this behavior? Capital feels like a mass that attracts more mass the larger it becomes, like gravity.
A natural monopoly is a monopoly in an industry in which high infrastructure costs and other barriers to entry relative to the size of the market give the largest supplier in an industry, often the first supplier in a market, an overwhelming advantage over potential competitors. Specifically, an industry is a natural monopoly if a single firm can supply the entire market at a lower long-run average cost than if multiple firms were to operate within it. In that case, it is very probable that a company (monopoly) or a minimal number of companies (oligopoly) will form, providing all or most of the relevant products and/or services.
https://en.wikipedia.org/wiki/Natural_monopoly
> How did we get here?
Stonks
Deglobalization in geopolitics should be followed by deglobalization in cloud providers as well. Viva la local vendors.
Sorry - my bad. I literally just connected an old XP VM to the internet to activate it.
Surely more vibecoding will fix this problem. Time to fire more staff
Seeing users having issues with the "Modern Outlook", specifically empty accounts. Switching back to the "Legacy Outlook" which functions largely without the help of the cloud fixes the issue. How ironic.
For us, it looks like most services are still working (eastus and eastus2). Our AKS cluster is still running and taking requests. Failures seem limited to management portal.
I still can't log into Azure Gov Cloud with
https://microsoft.com/deviceloginus
Seems like they migrated the non-Gov login but not the Gov one. C'mon Microsoft, I've got a deadline in a few days.
With all the recent outages considered, it is time to move off the cloud.
High availability is touted as a reason for their high prices, but I swear I read about major cloud outages far more than I experience any outages at Hetzner.
I think the biggest feature of the big cloud vendors is that when they are down, not only you but your customers and your competitors usually have issues at the same time, so everybody just shrugs and has a lazy/off day together. Even on-call teams really just have to wait and stay on standby, because there is very little they can do. Doing a failover can be slower than waiting for the recovery, may not help at all if the outage spans several regions, or may bring additional risks.
And more importantly, nobody loses any reputation except AWS/Azure/Google.
It's like back in school when there was a snow day!
Ostensible reason.
The real reason is that outages are not your fault. It's the new version of "nobody ever got fired for buying IBM" - later it became MS, and now it's any big cloud provider.
For one, it's statistics: Hetzner simply runs far fewer major services than the hyperscalers. The services the hyperscalers host also tend to be bigger, with larger customer bases, so their downtime is systemically critical. Therefore it's louder.
On the merits though, I agree, haven’t had any serious issues with Hetzner.
Same with DigitalOcean. I run one box and it hasn't gone down for like 2 years.
DO has been shockingly reliable for me. I shut down a neglected box with almost 900 days of uptime the other day. In that time, AWS has randomly dropped many of my boxes with no warning, requiring a manual stop/start to recover them... But everybody keeps telling me that DO isn't "as reliable" as the big three.
To be fair, in the AWS/Azure outages, I don't think any individual (already created) boxes went down, either. In AWS' case you couldn't start up new EC2 instances, and presumably same for Azure (unless you bypass the management portal, I guess). And obviously services like DynamoDB and Front Door, respectively, went down. Hetzner/DO don't offer those, right? Or at least they're not very popular.
Same here, I run a few droplets for personal projects and have never had any issues with them.
It's just the admin portal.
Nope, more than the portal. For instance, I just searched for "Azure Front Door" because I hadn't heard of it before (I now know it's a CDN), and neither the product page itself [1] nor the technical docs [2] are coming up for me.
[1] https://azure.microsoft.com/en-us/products/frontdoor
[2] https://learn.microsoft.com/en-us/azure/frontdoor/front-door...
Plenty of sites are down and/or login not available. It's just really a mess.
It's absolutely not only the admin portal.
It's CDN and FrontDoor at least.
Interesting, everything else is working just fine for us. Offices across the US.
Do you use Front Door? Our VMs that don't are working fine, but our app services that do aren't.
We use Front Door (as does microsoft.com) and our website was down. I was able to change the DNS records to point directly to our server and will leave it like that for a few hours until everything is green.
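If anyone wants to sanity-check that kind of bypass, here's a rough sketch in Python (assuming the dnspython package; www.example.com and 203.0.113.10 are hypothetical stand-ins for the real hostname and origin IP, not details from the poster's setup):

    # Hypothetical hostname and origin address; substitute your own values.
    import dns.resolver  # pip install dnspython

    HOSTNAME = "www.example.com"
    ORIGIN_IP = "203.0.113.10"

    def points_at_origin() -> bool:
        """Return True once the public A record resolves to the origin IP."""
        answer = dns.resolver.resolve(HOSTNAME, "A", lifetime=5)
        addresses = {rr.address for rr in answer}
        print(f"{HOSTNAME} -> {sorted(addresses)}")
        return ORIGIN_IP in addresses

    if points_at_origin():
        print("DNS change has propagated; traffic now bypasses Front Door.")
    else:
        print("Still resolving to the old endpoints; wait for the TTL to expire.")

Keep in mind the origin has to be able to serve the hostname directly (TLS certificate, host header, no Front Door-only firewall rules), otherwise the bypass just moves the failure.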
The bank I work at is reporting all Power Apps applications are down.
It looks like it is just the 365 admin panels for us. Admittedly, we don't currently host any other services on Azure though.
Seems to be down in Norway.
Even the national digital id service is down.
> Even the national digital id service is down.
Can't help but smirk as my country is ramming through "Digital ID" right now
Looks to be affecting our pipelines that rely on Playwright as they download images from Azure e.g. https://playwright.azureedge.net/builds/chromium/1124/chromi... which aren't currently resolving.
Some exec at Microsoft told the Azure guys to ape everything Amazon does and they took it literally.
Or, the NSA needed to upgrade their access at both.
Do Microsoft still say "If the government has a broader voluntary national security program to gather customer data, we don't participate in it" today (which PRISM proved very false), or are they at least acknowledging they're participating in whatever NSA has deployed today?
PRISM wasn't voluntary. Also there are 3 levels here:
1. Mandatory
2. "Voluntary"
3. Voluntary
And I suspect that very little of what the NSA does falls into category 3. As Sen Chuck Schumer put it "you take on the intelligence community, they have six ways from Sunday at getting back at you"
“Voluntold”
I was gonna say that obv AWS hacked em to even things up.
This is funny but also possibly true because: business/MBA types see these outages as a way to prove how critical some services are, leading to investors deciding to load up on the vendor's stock.
I may or may not have been known to temporarily take a database down in the past to make a point to management about how unreliable some old software is.
The sad thing is - $MSFT isn't even down by 1%. And IIRC, $AMZN actually went up during their previous outage.
So if we look at these companies' bottom lines, all those big wigs are actually doing something right. Sales and lobbying capacity is way more effective than reliability or good engineering (at least in the short term).
AMZN went up almost 4 percent between the day of the outage and the day after. Crazy market.
Because it shows how much lock-in they have.
You know nobody is migrating off of AWS or Azure because of these.
Look how important we are, is what these failures show
What do you mean? That IT isn't important for Microsoft and Amazon?
That's certainly not the right conclusion.
I think he was implying that those companies think they are so important that it doesn't matter that they're down; they won't lose any customers over it because they are too big and important.
So we can look forward to "accidental" cloud outages just to show their importance?
I guess the GCP is next.
"They'll learn their lesson and be rock solid after this! I better invest now!"
well, at this point, 90% of the market cap of FAANGS plus Microsoft is... OMG AI LLM hype
I looked into this before, and the stocks of these large corps simply don't move when outages happen. Maybe intra-day, I don't have that data, but in general there is no effect.
This is impacting the Azure CDN at azureedge.net. DNS A records for azureedge.net tenants are taking 2-6 seconds and often return nothing.
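For anyone who wants to reproduce that measurement, a minimal sketch (assuming dnspython; playwright.azureedge.net is just an example tenant hostname taken from another comment in this thread):

    import time

    import dns.exception
    import dns.resolver  # pip install dnspython

    def time_a_lookups(hostname: str, attempts: int = 5) -> None:
        """Time repeated A lookups to observe the 2-6 second delays described above."""
        resolver = dns.resolver.Resolver()
        resolver.lifetime = 10  # allow slow answers instead of giving up early
        for i in range(attempts):
            start = time.monotonic()
            try:
                answer = resolver.resolve(hostname, "A")
                result = ", ".join(rr.address for rr in answer)
            except dns.exception.DNSException as exc:
                result = type(exc).__name__
            print(f"attempt {i + 1}: {time.monotonic() - start:.2f}s -> {result}")

    time_a_lookups("playwright.azureedge.net")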
It's always DNS, unless it's not DNS.
Updated 16:35 UTC
Azure Portal Access Issues
Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025
----
Azure Portal Access Issues
We are investigating an issue with the Azure Portal where customers may be experiencing issues accessing the portal. More information will be provided shortly.
This message was last updated at 16:18 UTC on 29 October 2025
-- From the Azure status page
Microsoft have started putting customer status pages up on windows.net, so it must be really really bad!
For example when I try to log into our payroll provider Brightpay, it sends me here:
https://bpuk1prod1environment.blob.core.windows.net/host-pro...
https://azure.status.microsoft/en-us/status says everything's fine! Any place I can read more about this outage?
You're looking at it. I couldn't find any discussion elsewhere yet...
official status pages are useless most of the time.
I work for a cloud provider which is serious about transparency. Our customers know they are going to get the straight story from our status page.
When you find an honest vendor, cherish them. They are rare, and they work hard to earn and keep your confidence.
Now there's a notice about "Azure Portal Access Issues". No word about Front Door being down.
They suggest using Traffic Manager to route around the failing Front Door CDN, but DNS is failing too, which makes the suggestion another failure.
Yeah, they're suggesting using the CLI, but then my Front Door deployment failed. Welp.
"Front Door" has to be the worst product name for a CDN I've ever heard of. I used to work for a CDN too.
I wonder if many Germans are eager to sign up for AFD.
But seriously I thought it would be the console, not a CDN.
Front Door (tm), with Back Door access for the FBI included free with your subscription! ;)
Part of this outage involves Outlook hanging and then blaming random add-ins. Pretty terrible practice by Microsoft to blame random vendors for their own outage.
Microsoft posted an update on X: https://x.com/AzureSupport/status/1983569891379835372?ref_sr...
"We’re investigating an issue impacting Azure Front Door services. Customers may experience intermittent request failures or latency. Updates will be provided shortly."
Always fun when you can't trust the main status page but have to go to some opinionated social media website to see the actual problem.
https://www.cbc.ca/news/investigates/tesla-grok-mom-9.695693...
This mom’s son was asking Tesla’s Grok AI chatbot about soccer. It told him to send nude pics, she says
xAI, the company that developed Grok, responds to CBC: 'Legacy Media Lies'
At least MSFT is consistent: https://www.microsoft.com/en-us/ is down as well
Likely behind Azure Front Door.
Much of Xbox is behind that too.
Based on the delay in resolving the issue, it appears MS attempted to rehire some of the DevOps engineers whom AI had previously replaced.
They probably hired the ones AWS laid off, causing the AWS outage.
Institutional knowledge matters. Just has to be the right institution is all.
UK, and other regions too; our APAC installation in Australia is affected.
They admit in their update blurb that Azure Front Door is having issues, but still report Azure Front Door as having no issues on their status page.
And it's very clear from these updates that they're more focused on the portal than the product: their updates haven't even mentioned fixing Front Door yet, just moving off of it, as if it were some third-party service that's down.
> as having no issues on their status page
Unsubstantiated idea: the support contract likely defines a window between each reporting step, and the status page is the last one, the one in the legal documents, giving them several more hours before the clauses trigger.
Github Codespaces (for the 5 people that use them) are also still down.
Two hours after the initial outage, they have finally updated the Front Door status on their status page.
And there goes https://www.microsoft.com/
Reasons to not use hyperscalers, exhibit 654
There's a lot of outages this month!
I was working when I saw the portal page showing only resource groups and lots of items missing. I thought it was a weird browser cache issue.
The actual stuff I was working on (App Insights, Function App) that was still open was operational.
Portal and Azure CDN are down here in the SF Bay Area. Tenant azureedge.net DNS A queries are taking 2-6 seconds and most often return nothing. I got a couple successful A response in the last 10 minutes.
Edit: As of 9:19 AM Pacific time, I'm now getting successful A responses but they can take several seconds. The web server at that address is not responding.
i guess folks in azure wanted to show some solidarity with aws brethren
(couldn't resist adding it. i acknowledge this comment adds no value to the discussion)
Azure goes down all the time. On Friday we had an entire regional service down all day. Two weeks ago same thing different region. You only hear about it when it's something everyone uses like the portal, because in general nobody uses Azure unless they're held hostage.
Yeah, I'm regretting my decision to buy an Xbox now. Every once in a while, everything goes down.
The VS Code website is down: https://code.visualstudio.com/
And so is Microsoft: http://www.microsoft.com/
https://www.microsoft.com works for me (with the www subdomain).
We saw all incoming traffic to our app drop to zero at about 15:45. I wonder how long this one will take to fix.
Same exact time for us as well.
On our end, our VMs are still working, so our gitlab instance is still up. Our services using Azure App Services are available through their provided url. However, Front Door is failing to resolve any domains that it was responsible for.
We're on Office 365 and so far it's still responding. At least Outlook and Teams is.
They don't run on Azure!
They definitely do run on Azure. Probably not 100%, but at least some footprint of those services do.
Are you absolutely sure?
They don't, however authentication for those services relies on Entra ID which seems to be affected.
I'd say DNS/Front Door (or some carrier interconnect) is the thing affected, since I can auth just fine in a few places. (I'm at MS, but not looped into anything operational these days, so I'm checking my personal subscription).
I bet it’s DNS.
“ Starting at approximately 16:00 UTC, we began experiencing DNS issues resulting in availability degradation of some services. Customers may experience issues accessing the Azure Portal. We have taken action that is expected to address the portal access issues here shortly. We are actively investigating the underlying issue and additional mitigation actions. More information will be provided within 60 minutes or sooner.
This message was last updated at 16:35 UTC on 29 October 2025”
https://login.microsoftonline.com/ is down, so that's fun
Thank you. I was wondering what was going on at a company whose web app I need to access. I just checked with BuiltWith and it seems they are on Azure.
Service Status: https://status.cloud.microsoft/ and https://azure.status.microsoft/en-us/status
Status page (first link) is down for me. Second one works
oh the irony, the status link being down too
status page being affected by the same issue is so lame
This raises a question from a noob like me... Where should they host the status page? Surely it shouldn't be on the same infra that it's supposed to be monitoring. Am I correct in thinking that?
Looks like the status page is overloaded...
I could not access MS Clarity the entire day.
The Azure portal still insists the issue is just with the Console.
We had to bypass Front Door.
It's the DNS: get.helm.sh is unreachable (https://dnschecker.org/#A/get.helm.sh).
Why are Azure App Services still working?
Not seeing it. I have VMs in US East and Netherlands and they're up.
I tried to look some things up on their support pages before 1600Z, and it timed-out. The Dutch railways are also affected (they're an MS shop, IIRC).
All of my employers things are hosted on Azure and running just fine and didn't go down at all. Portal access has been fixed.
Doesn't seem to be too bad of an outage unless you were relying on Azure Front Door.
SSO is down, Azure Portal Down and more, seems like a major outage. Already a lot of services seem to be affected: banks, airlines, consumer apps, etc.
The portal is up for me and their status page confirms they did a failover for it. Definitely not disputing that its reach is wide, but a lot of smaller setups probably aren't using Front Door.
Both work for me in the Netherlands
They suggest using Traffic Manager to route around failing CDNs, but DNS is not working either, making the suggestion another fail.
The iron law of uptime: "The mandatory single point of failure in every possible system is configuration."
Looks like MyGet is impacted too. Seems like they use Azure:
>What is required to be able to use MyGet? ... MyGet runs its operations from the Microsoft Azure in the West Europe region, near Amsterdam, the Netherlands.
It is much more than Azure. One of my kids needs a key for their laptop and can't reach that either. Great excuse though: "Azure ate my homework". What a ridiculous world we are building. Fuck MS and their account requirements for Windows.
Guess when/who has the next outage!
Can't connect to Claude
Cant access certain banking websites in the UK, I am assuming it because of this.
https://www.natwest.com/
This probably explains why paying for street parking in Cologne by phone/web didn't work (eternal spinner) then
I absolutely love the utility aspect of LLMs, but part of me is curious whether moving faster by using AI is going to make these sorts of failures more and more frequent.
If true then what "utility" is there?
More visibility for the general person to see how brittle software is?
Does (should, could) DownDetector also say which customer-facing services are down when some piece of infrastructure isn't working? Or is that the info that the malefactors are seeking?
Friend of mine at MSFT says it's a Sev-0 outage and they can't even get to the ticket tracking system.
All of our sites went down. This is my company’s busiest time of year. Hooray.
I am having a bunch of issues. It looks like their sites and azure are both affected.
I also got weird notification in VS2022 that my license key was upgraded to Enterprise, but we did not purchase anything.
Might be a failsafe: if you can't get a license status, and you're aware that MS is down, just default to the highest tier.
Free upgrade
Yesterday Amazon, today Microsoft. Are Google's cloud services going down tomorrow?
This is because Azure just copies everything AWS does. Google is a bit more innovative, they will have something else unexpected happen.
throwback to when they deleted a customer's entire account! https://arstechnica.com/gadgets/2024/05/google-cloud-acciden...
Maybe they are and no one realized yet.. :P
That said, I don't hear about GCP outages all that often. I do think AWS might be leading in outages, but that's a gut feeling, I didn't look up numbers.
They had a pretty massive one earlier this year. https://status.cloud.google.com/incidents/ow5i3PPK96RduMcb1S...
This isn't GCP's fault, but the outage ended up taking down Cloudflare too, so in total impact I think that takes the cake.
fairly certain they had a significant multi region outage within the past few years. I'll try to find some details to link.
Few customers....few voices to complain as well.
as a victim of xbox, azure is down 'bout as often as its up
here's hoping its Oracle's cloud instead....
And if they don't, we'll know who the culprit is.
Who?
FYI: https://status.cloud.microsoft/
Which itself is^H^H was down. Wow.
FYI: https://status.cloud.microsoft/
503 Service Unavailable
Wasn't the saying "It's always DNS" floating around somewhere?
It'd be interesting to understand the cause here. Pretty big impact on the services we use.
Could be DNS, I'm seeing SERVFAIL trying to resolve what look to be MS servers when I'm hitting (just one example) mygoodtogo.com (trying to pay a road toll bill, and failing).
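If it helps, a quick way to tell SERVFAIL-style failures apart from ordinary "no such name" answers (again assuming dnspython; mygoodtogo.com is just the hostname from the comment above):

    import dns.exception
    import dns.resolver  # pip install dnspython

    def classify_lookup(hostname: str) -> str:
        """Distinguish resolver failures (SERVFAIL-ish) from 'name does not exist'."""
        try:
            answer = dns.resolver.resolve(hostname, "A", lifetime=5)
            return "OK: " + ", ".join(rr.address for rr in answer)
        except dns.resolver.NXDOMAIN:
            return "NXDOMAIN (the name really does not exist)"
        except dns.resolver.NoNameservers:
            return "all nameservers failed to answer (covers SERVFAIL responses)"
        except dns.exception.Timeout:
            return "timed out (no response at all)"

    print(classify_lookup("mygoodtogo.com"))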
Always in these large provider outages you see people who have forgotten the old ways.
Unable to access the portal and any hit to SSO for other corporate accesses is also broken. Seems like there's something wrong in their Identity services.
Quite close to the recent AWS outage. Let me take a look at whether it's a major one similar to AWS's.
Any guess on what's causing it?
In hindsight, I guess the foresight of some organizations to go multi-cloud was correct after all.
We're multi-cloud and it really saved a few workloads last week with the AWS issue.
It's not easy though.
It's always freakin DNS...
cost cutting attempts
Trusting AI without sufficient review and oversight of changes to production.
Yeah, these things never happened when humans were trusted without sufficient review and oversight of changes to production.
Do you have any insight or do you just dislike AI? Incidents like this happened long before AI generated code
I don't think it's meant to be serious. It's a comment on Microsoft laying off their staff and stuffing their Azure and Dotnet teams with AI product managers.
GitHub runners (specifically the "larger" runner types) are all down for us. These are known to be hosted on Azure.
The learning modules on https://learn.microsoft.com/ also seem to have a lot of issues properly loading.
I'm mid-deployment, but thankfully it seems to be running ok so far. Just the portal is not working so my visibility is not good.
Sounds like Schrödinger's Deploy.
Is it Cosmos DB? If so the symmetry with AWS/Dynamo would be very eerie.
Oh, well, I'm sure Azure will be given the same pass that AWS got here recently when they had their 12-hour outage...
I didn't realize AWS got a pass?
Have repeated outages lost them customers? has it lost them any money in any way?
That is a pass.
Apologies, but this just reads like a low effort critique of big things.
To be clear, they should get criticism. They should be held liable for any damage they cause.
But the fact that they remain the biggest cloud offering out there isn't something you'd expect to change from a few outages that, by most evidence, potential replacements have as well. What's more, a lot of the outages potential replacements have are often more global in nature.
Have people left GitHub due to the multiple post-acquisition outages? That is a pass if you don't judge it the same way.
Well, they have successfully locked their customers captive thanks to huge egress fees.
LinkedIn has been acting funny for an hour or so, and some pages in the learn.microsoft.com domain have been failing for me too...
I remember the saying "It's always DNS". I'm old.
Kind of mindboggling it's still sometimes DNS maybe.
That saying is just as alive today as it ever was.
https://isitdns.com/
>Last week AWS, now this.
This is not the first or second time this has happened; multiple hyperscalers have failed one by one.
Pretty interesting how Datadog's uptime tracker (https://updog.ai/) says all the sites are fully available.
If that's true, then it's a sign that Azure's control / data plane separation is doing its job! At least for now.
Our Azure hosted dotnet App Service is working fine, but our docs site served via Front Door went down. Can’t access anything through the Portal.
Maybe they need a downtime tracker. ;)
Our Azure DevOps site is still functioning and our Azure hosted databases are accessible. Everything else is cooked.
Azure portal currently mostly not working (UK)... Downdetector reporting various Microsoft linked services are out (Minecraft, Microsoft 365, Xbox...)
It’s not DNS
There is no way it’s DNS
It was DNS
MS website seems to be up but really slow. Think xbox might still be down, Bing works for some reason tho!?
Hello fellow boomers!
I noticed that winget is also down eg.
It seems Azure FrontDoor is affected, because our private VM works fine in different regions.
vscode.dev appears to be down. I think this will be my excuse to find an alternative -- I never really liked vscode.dev anyway.
(Coder is currently at the top of the experiment list. Any other suggestions?)
Appears to be an issue in Front Door. Our back end stuff is fine but FD is bouncing everything.
Yeah, I have non prod environments that don't use FD that are functioning. Routing through FD does not work. And a different app, nonprod doesn't use FD (and is working) but loads assets from the CDN (which is not working).
FD and CDN are global resources and are experiencing issues. Probably some other global resources as well.
Hate to say it, but DNS is looking like it's still the undisputed champ.
downdetector reports coincident cloudflare outage. is microsoft using cloudflare for management plane, or is there common infra? data center problem somewhere, maybe fiber backbone? BGP?
I just tried to check the Xbox services status page and it never even loaded.
Majority of actual Xbox services are working fine, xbox.com itself is busted.
Many (all?) LinkedIn profiles are also down for me. Luckily the frontpage still works. ;-)
Go cloud!
Luckily?
GitHub also seems to be having trouble for me
Vibe coded internet keeps getting better
Quick find someone who can actually read documentation and code!
You just paste the outage error codes back to the LLM and pray it's still working and can fix whatever went wrong!
When all the people forget to code for themselves, every LLM will code itself out of existence with that one last bug. One, after another.
Yudkowsky's feared superintelligence holding Azure hostage.
Yikes, http://schemas.xmlsoap.org/soap/encoding/ is running on Azure and it's down. So any SOAP/WSDL APIs are dead in the water.
A service we rely on that isn't even running on Azure is inaccessible due to this issue. For an asset that probably never changes. Wild for that to be the SPOF. 160k+ results on GitHub: https://github.com/search?q=http%3A%2F%2Fschemas.xmlsoap.org...
Shouldn't regions be completely independent?
downdetector reports coincident cloudflare outage. is microsoft using cloudflare for management plane, or is there common infra? data center problem somewhere, maybe fiber backbone? BGP?
nope, dont see any cf issues.
Took out the archive.ph and .is sites too?
Github Actions and Codespaces degraded.
My bet is on a bad config change.
They already announced that.
Yep, down from here too (in Israel).
Services too, not just the portal.
Can confirm
An important quality of the cloud is that it is always available.
Except that it is not!
Interesting times...
yep having trouble logging into https://entra.microsoft.com/ as well
https://www.reddit.com/r/cscareerquestions/comments/1ojbebq/...
Intune, Azure, Entra down in Switzerland
Can't get to microsoft.com even.
Anyone have betting odds on when Google will go down next? Are we looking at all 3 providers having outages in the span of 3 weeks?
microsoft.com is back -
edit: it worked once, then died again. So I guess - some resolvers, or FD servers may be working!
Looks like AWS is also impacted?
Yeah the graph for that one looks exactly the same shape. I wonder if they were depending on some azure component somehow, or maybe there were things hosted on both and the azure failure made enough things failover to AWS that AWS couldn't cope? If that was the case I'd expect to see something similar with GCP too though.
Edit: nope looks like there's actually a spike on GCP as well
It's possibly more likely that people mis-attribute the cause of an outage to the wrong providers when they use downdetector.
Definitely also a strong possibility. I wish I had paid more attention during the AWS one earlier to see what other things looked like on there at the time.
Down here too (region West Europe)
they recently had an incident with front door reachability, wonder if it's back.
QNBQ-5W8
I know how to fix this but this community is too close minded and argumentative egocentric sensitive pedantic threatened angry etc to bother discussing it
Aww man you got me curious for a sec there.
Please sort it out, I'll be out of a job tomorrow.
Reports of Azure and AWS down on the same day? Infrastructure terrorism?
> Infrastructure terrorism?
Unless that's a euphemism for "vibe coding", no.
> We have confirmed that an inadvertent configuration change as the trigger event for this issue.
Save the speculation for Reddit. HN is better than that.
We're quickly learning who's relying on a single cloud provider.
Multi cloud is really hard to get right at scale, and honestly not worth the effort for the majority of companies and use-case.
Like AWS or GCP? https://downdetector.com/status/aws-amazon-web-services/ - https://downdetector.com/status/google-cloud/
When you look at the scale of the reports, you find they are much lower than Azure's. Seeing a bunch of 24-hour sparkline-type graphs next to each other can make it look like they are equally impacted, but AWS has 500 reports and Azure has 20,000. The scale is hidden by the choice of graph.
In other words, people reporting outages at AWS are probably having trouble with Microsoft-run DNS services or caching proxies. It's not that the issues aren't there; it's that the internet is full of intermingled complexity. That amount of organic false positives alone can make it look like an unrelated major service is impacted.
Down in Sweden Central as well (all our production systems are down)
Looking forward to the post mortem.
This brings to mind this -> https://thenewstack.io/github-will-prioritize-migrating-to-a...
Compare the comments and news coverage on this to the AWS outage... pretty telling.
On the line with MSFT: they said four hours is what they're thinking. The workaround they're suggesting is to use Traffic Manager.
Luckily, no one uses azure and it's fully expected from azure to go down all the time! Keep it up!
Yup, see it as well.
It's DNS
Earnings report today. A coincidence?
I can at least login to Azure. But several MS sites are down.
The Azure API is still working though.
As of now Azure Status page still shows no incident. It must be manually updated, someone has to actively decide to acknowledge an issue, and they're just... not. It undermines confidence in that status page.
I have never noticed that page being updated in a timely manner.
It shows that some people have issues accessing the portal.
This cannot be a coincidence
So that's why all of our municipality's digital services are down ... utter chaos at the political meeting I attended just now.
Looks like MS completed a failover and things are recovering slowly.
I noticed issues on Azure so I went to the status page. It said everything was fine even though the Azure Portal was down. It took more than 10 minutes for that status page to update.
How can one of the richest companies in the world not offer a better service?
>How can one of the richest companies in the world not offer a better service?
Better service costs money.
My best guess at the moment is something global like the CDN is having problems affecting things everywhere. I'm able to use a legacy application we have that goes directly to resources in uswest3, but I'm not able to use our more modern application which uses APIM/CDN networks at all.
auth services are down
still down
things seem to be coming back up now
What a time to be alive!
Just another day with microsoft. Honestly pretty tiring as something is always generally broken.
Portal is now accessible, bypassing FDN
From Azure status page: "Customers can consider implementing failover strategies with Azure Traffic Manager, to fail over from Azure Front Door to your origins".
What terrible advice.
WTF happened with US East????
now aws down again?
Meanwhile the layoffs continue https://www.entrepreneur.com/business-news/microsoft-ceo-exp...
Layoffs will continue until uptime improves!
Now that is actually funny!
> [Satya Nadella] said that the company’s future opportunity was to bring AI to all eight billion people on the planet.
But what if I don't want AI brought to me?
Sounds like someone has a case of the 'Mondays'...
The mondAIs
You'll have to find another planet.
Although judging by the available transports it will likely be colonized by nazis.
Real life Pluribus https://en.wikipedia.org/wiki/Pluribus_(TV_series)
Like most technology initiative these tech CEOs dream up: You're going to get it and swallow it, whether you want it or not.
I especially like how Nadella speaks of layoffs as some kind of uncontrollable natural disaster, like a hurricane, caused by no-one in particular. A kind of "God works in mysterious ways".
I've read the whole memo and it's actually worse than those excerpts. Nadella doesn't even claim these were low performers: OK, so Microsoft is thriving, these were friends and people "we've learned from", but they must go because... uh... "progress isn't linear". Well, thanks Nadella! That explains so much!
According to downtector.com - both AWS and GCP are down as well. Interesting
Don't visit this address.