Marrakech – Cloudflare has resolved the major outage that disrupted internet services worldwide on Tuesday, affecting millions of users, including in Morocco, who rely on the company’s network infrastructure.
The San Francisco-based company confirmed that a database configuration change triggered the widespread disruption, which began at 11:20 a.m. UTC on November 18 and lasted several hours. Cloudflare CTO Dane Knecht clarified on social media that the incident was “not an attack.”
Knecht explained: “A latent bug in a service underpinning our bot mitigation capability started to crash after a routine configuration change we made.” He apologized for the impact, stating the company “failed our customers and the broader Internet.”
The outage affected major platforms, including ChatGPT, League of Legends, X (formerly Twitter), Shopify, Dropbox, and Coinbase. The incident also hit major websites in Morocco, impairing access to several national platforms, including news outlets such as Medi1TV, 2M, and SNRT, along with a long list of other widely used services.
Cloudflare CEO Matthew Prince detailed the technical cause in a comprehensive report. The company was updating database permissions in its ClickHouse system when a query began generating duplicate entries in a “feature file” used by the Bot Management system.
The file doubled in size, exceeding a hard-coded software limit and crashing the consuming service. The failure cascaded to the core proxy system that processes customer traffic, returning HTTP 5xx error codes to users attempting to access websites.
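The failure mode described above can be sketched in a few lines: a consumer preallocates room for a fixed number of features, so a file with duplicated rows blows past the limit and raises an error instead of degrading gracefully. All names and limits below are illustrative assumptions, not Cloudflare's actual code.

```python
MAX_FEATURES = 200  # hypothetical hard limit baked into the consuming service

def load_feature_file(entries):
    """Load bot-management features, failing hard if the file is oversized."""
    if len(entries) > MAX_FEATURES:
        # An unhandled error like this is what cascaded into HTTP 5xx responses.
        raise RuntimeError(
            f"feature file has {len(entries)} entries, limit is {MAX_FEATURES}"
        )
    return {name: weight for name, weight in entries}

good = [(f"feat_{i}", 0.5) for i in range(150)]
bad = good + good  # duplicate database rows double the file past the limit

loaded = load_feature_file(good)  # normal file loads fine
try:
    load_feature_file(bad)
    crashed = False
except RuntimeError:
    crashed = True
```

The point is that the limit check treats an oversized file as a fatal error rather than, say, truncating it or falling back to the last known-good file.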
The problem was complicated by the fact that the file was automatically regenerated every five minutes. Some regeneration cycles produced correct files, causing brief periods of recovery that initially led engineers to suspect a cyberattack.
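That flapping behavior can be illustrated with a toy model (again an assumption-laden sketch, not Cloudflare's code): if only some regeneration runs produce the oversized file, the system oscillates between failing and recovering on each five-minute cycle.

```python
MAX_FEATURES = 200  # hypothetical hard limit in the consuming service

def regenerate(produces_duplicates):
    """Rebuild the feature file; a bad run emits every row twice."""
    rows = [(f"feat_{i}", 0.5) for i in range(150)]
    return rows * 2 if produces_duplicates else rows

def healthy(rows):
    """The consumer only survives files within its size limit."""
    return len(rows) <= MAX_FEATURES

# Simulate four five-minute cycles, alternating good and bad outputs.
states = [healthy(regenerate(bad_run)) for bad_run in (False, True, False, True)]
# `states` flips between True (recovered) and False (crashing),
# which is the kind of intermittent pattern that resembles an attack.
```
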
Prince noted that their status page coincidentally went down during the incident, further suggesting to the team that they might be under attack. Internal communications revealed concerns about the potential continuation of recent high-volume DDoS attacks.
Core traffic began flowing normally again at 2:30 p.m. UTC after engineers stopped the faulty file generation and replaced it with an earlier working version. All systems returned to normal operation by 5:06 p.m. UTC.
‘A massive digital gridlock’
The incident impacted multiple Cloudflare services. The company’s Bot Management system failed completely, while Workers KV experienced elevated error rates. Access authentication failed for most users until 1:05 p.m. UTC when rollback procedures began.
Cloudflare’s dashboard became largely inaccessible due to Turnstile authentication failures. Email security processing continued, but with reduced spam detection accuracy due to lost access to reputation sources.
Customers using Cloudflare’s newer FL2 proxy engine saw HTTP 5xx errors, while those on the older FL system received incorrect bot scores of zero. This caused false positive blocking for customers with bot-blocking rules.
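A minimal sketch of the older-engine behavior described above: with the feature data unavailable, every request receives a bot score of zero ("definitely automated"), so any customer rule that blocks low scores starts rejecting legitimate users. The threshold and function names here are assumptions for illustration only.

```python
BLOCK_THRESHOLD = 30  # hypothetical customer rule: block anything scoring below 30

def score_request(features):
    """Return a bot score; a missing feature file degrades to zero."""
    if features is None:   # feature file failed to load
        return 0           # failure mode: every visitor looks like a bot
    return 99              # stand-in for a real machine-learned score

def allowed(features):
    return score_request(features) >= BLOCK_THRESHOLD

normal_ok = allowed({"user_agent": "Mozilla/5.0"})  # passes under normal operation
outage_ok = allowed(None)                           # blocked: a false positive
```

Failing to a score of zero is effectively "fail closed" for bot blocking, which is why real traffic was rejected rather than waved through.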
The company has begun implementing safeguards to prevent similar incidents. These include hardening configuration file ingestion, enabling global kill switches for features, and preventing error reports from overwhelming system resources.
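The "global kill switch" safeguard mentioned above follows a common pattern: a fleet-wide flag that bypasses a risky feature entirely instead of letting it crash the request path. This is a generic sketch of that pattern under assumed names, not Cloudflare's implementation.

```python
# Hypothetical fleet-wide flag store; in practice this would be a
# replicated configuration service, not an in-process dict.
kill_switches = {"bot_management": False}

def run_bot_management(request):
    # Simulate the outage condition: the feature crashes on every request.
    raise RuntimeError("oversized feature file")

def handle_request(request):
    if kill_switches["bot_management"]:
        return "pass-through"        # feature disabled everywhere at once
    try:
        return run_bot_management(request)
    except RuntimeError:
        return "error 5xx"

before = handle_request({})          # errors while the feature is enabled
kill_switches["bot_management"] = True
after = handle_request({})           # traffic flows once the switch is thrown
```

The trade-off is deliberate: with the switch on, requests lose bot scoring, but they are served rather than failed.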
Cybersecurity expert Mike Chapple from the University of Notre Dame explained to the Associated Press that Cloudflare serves as a “content delivery network” for approximately 20% of websites worldwide. When the service fails, it creates “massive digital gridlock” because the company sits between users and websites.
This marks Cloudflare’s worst outage since 2019, according to Prince. Coming just weeks after a major Amazon Web Services (AWS) outage in October, the incident underscores how vulnerable internet infrastructure becomes when a single major service provider suffers a technical failure.
The company has promised a detailed technical breakdown and is implementing measures to prevent recurrence. Work continues on reviewing failure modes across all core proxy modules to strengthen system resilience.
