The Four Tiers of Decentralization
The recent Facebook outage made many web2 natives realize how fragile the foundations of their core internet services are.
Most web2 services are centralized by default, but Facebook’s operations proved centralized to such a degree that when one part of the system broke down, the engineers were locked out of everything – even the debugging tools.
In an incident report posted by a member of the Facebook engineering team, a particularly telling passage reads:
…as our engineers worked to figure out what was happening and why, they faced two large obstacles: first, it was not possible to access our data centers through our normal means because their networks were down, and second, the total loss of DNS broke many of the internal tools we’d normally use to investigate and resolve outages like this.
The internet hasn’t always been this way. In the early days, it was a lot more decentralized. With no AWS, Azure, or Google Cloud Platform, websites mostly ran from individual servers. This meant that one outage couldn’t take down millions of services in one swoop, and that a rule change from one provider couldn’t enforce mass censorship.
Today, Google, Microsoft, and Amazon host a combined 60% of the internet. Amazon alone hosts 33%, including some of the most popular and relied-upon web2 services – Netflix, Spotify, Twitch, Facebook, Slack, and Reddit – not to mention vital services like internet banks and government portals.
Why has it turned out this way? Developers are taught to do things the centralized way: use this company’s API, deploy to AWS. Centralized services are money makers, and they have the most marketing clout.
Developers can choose whether to contribute towards the continued monopolization of our internet, or to seek options that prevent downtime, resist censorship and capture the original spirit of the web. In this article, we take a look at some popular web2 and web3 architectures and how they achieve – or fail to achieve – decentralization.
Centralized back end, centralized front end
When an app’s back end and front end are both centralized, it has the most points of failure of any architecture on this list. A single server runs both the back end and the front end, and even if that server stays alive, the whole app can be brought down if any API it relies on goes down.
An example: Stripe. PHP hosted on AWS, hooking into the Google authentication API for login.
Centralized server storing data on Arweave
Arweave can act much like any other database: it can even be queried with GraphQL or ardb, which make it simple for applications to quickly load what they need from the full history of blockweave transactions. If a centralized application server that stores its data on Arweave goes down, the application data can still be queried from Arweave as if the server’s API were still up.
An example: ArDrive. Main backend data stored on Arweave, some services + front end delivered with Google Cloud and Fastly.
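As a sketch of the kind of query involved, here is how a front end might ask an Arweave gateway’s GraphQL endpoint for recent transactions carrying a given tag. The gateway URL and the App-Name tag value are illustrative assumptions, not ArDrive’s actual schema usage.

```javascript
// Build a GraphQL query for the latest transactions carrying a given
// App-Name tag. The tag value is an illustrative assumption.
function buildTagQuery(appName, limit = 5) {
  return `{
    transactions(
      first: ${limit},
      tags: [{ name: "App-Name", values: ["${appName}"] }]
    ) {
      edges { node { id owner { address } block { height } } }
    }
  }`;
}

// POST the query to a gateway's /graphql endpoint and unwrap the nodes.
async function queryGateway(appName, gateway = "https://arweave.net") {
  const res = await fetch(`${gateway}/graphql`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildTagQuery(appName) }),
  });
  const { data } = await res.json();
  return data.transactions.edges.map((edge) => edge.node);
}
```

Because the data lives on the blockweave rather than on the application server, this query works against any gateway, regardless of whether the app’s own server is up.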
Decentralized back end, centralized front end
Less can go wrong when the back end of an app is served from the blockchain, even though the front end remains a point of failure. Whether it’s vulnerable simply because it’s hosted on a centralized server, or because the entity hosting it can respond to takedown requests or otherwise censor it, it’s still a ‘throat to choke’, so to speak. This setup is very common with DeFi apps like Uniswap.
An example: Uniswap. Made headlines earlier this year when Uniswap Labs altered the front end to hide around 100 tokens. A dead front end in this context is easier to save than a dead back end, and since Uniswap and other web3 apps keep their back ends in smart contracts, permaweb archivists were able to clone Uniswap’s previous UI and host it permanently on Arweave.
Decentralized backend, decentralized front end
An app with a smart contract backend and front end stored on Arweave is one of the most robust and censorship-resistant architectures possible. The Arweave network consists of hundreds of incentivized nodes, has never gone down, and reliably hosts a plethora of front ends ranging from popular DeFi apps to our very own permacast and permablog.
Here ArGo gets a special shoutout for providing a user-friendly way to continuously deploy a front end from a GitHub repository and attach both HNS and DNS domain names.
An example: permablog. The entire back end is run by a SmartWeave contract. The front end is accessible both via its arweave.net txid and a pretty DNS name (permablog.net), and it also has an HNS domain for extra resilience.
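To make the pattern concrete, here is a minimal sketch of a SmartWeave-style contract: a plain `handle(state, action)` function that mutates state for writes and returns a result for reads. The state shape and action names are illustrative assumptions, not permablog’s actual contract, and a real SmartWeave contract would throw `ContractError` rather than `Error`.

```javascript
// Minimal SmartWeave-style contract sketch for a blog-like app.
// State shape and action names are illustrative assumptions.
function handle(state, action) {
  const { input, caller } = action;

  if (input.function === "post") {
    // Write call: append a new post. `caller` is the wallet address
    // that invoked the contract, so authorship is cryptographically tied
    // to a key rather than to an account on someone's server.
    state.posts.push({ author: caller, title: input.title, body: input.body });
    return { state };
  }

  if (input.function === "list") {
    // Read-only call: return the current posts without mutating state.
    return { result: state.posts };
  }

  // A real SmartWeave contract would throw ContractError here.
  throw new Error(`Unknown function: ${input.function}`);
}
```

Because every node evaluating the contract replays the same interactions, the back end has no single machine to take down or censor.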
- Handshake domains. Handshake (HNS) is “a decentralized, permissionless naming protocol where every peer is validating and in charge of managing the root DNS naming zone”. While it self-describes as an experiment, the results are already visible and verifiable.
- Multiple / dedicated gateways. Gateways are the last in-progress piece of Arweave’s scaling story. A single gateway could be (accidentally) DDoS’d by a popular project – we covered that here – but this is avoidable by using either:
- Meson Network. Meson provides the power of over 40,000 nodes as part of an Arweave-backed CDN and global cache layer.
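The fallback pattern the list above describes can be sketched in a few lines: try a transaction fetch against several gateways in priority order, so one DDoS’d or offline gateway doesn’t take the front end down. The gateway hostnames here are illustrative assumptions.

```javascript
// Gateway hostnames are illustrative assumptions.
const GATEWAYS = ["https://arweave.net", "https://arweave.dev"];

// Build the candidate URLs for a transaction id, in priority order.
function gatewayUrls(txid, gateways = GATEWAYS) {
  return gateways.map((g) => `${g}/${txid}`);
}

// Try each gateway in turn until one responds successfully.
async function fetchWithFallback(txid, gateways = GATEWAYS) {
  for (const url of gatewayUrls(txid, gateways)) {
    try {
      const res = await fetch(url);
      if (res.ok) return res;
    } catch {
      // Network error: fall through to the next gateway.
    }
  }
  throw new Error(`All gateways failed for ${txid}`);
}
```

Since every gateway serves the same content-addressed data, falling back costs nothing in correctness, only a retry.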