So, voting on Core’s implementation of Segwit is now enabled, and all 3 of the miners that support Core have already cast their vote (2 pools and 1 cloud-mining MLM), totalling about 23% of the network. Adoption seems to have stalled (as of 4 Dec 16) while the rest of the network remains undecided. A perfect time, then, for a breakdown of Segwit: the good, the bad, and the ugly.
Segwit, the [un?]controversial softfork
Segwit has been called a ‘much needed upgrade’ to the network by Core proponents; it is a somewhat jury-rigged way of expanding the effective block size (to roughly 1.7MB).
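To see where the ~1.7MB figure comes from, here is a toy calculation (my own sketch, not from the post). Under Segwit a block is limited by “weight”: non-witness bytes count as 4 weight units (WU), witness bytes as 1 WU, capped at 4,000,000 WU. The effective size then depends on what fraction of transaction bytes are witness data; the witness fractions below are illustrative assumptions.

```python
WEIGHT_LIMIT = 4_000_000  # Segwit block weight cap in weight units (WU)

def effective_block_size(witness_fraction: float) -> float:
    """Effective block size in bytes when `witness_fraction` of all
    transaction bytes are witness data.

    Solves: size * (4 * (1 - w) + 1 * w) == WEIGHT_LIMIT
    """
    return WEIGHT_LIMIT / (4 - 3 * witness_fraction)

# With no witness data the cap is the old 1 MB; at an assumed ~55%
# witness bytes you recover the ~1.7 MB figure quoted above.
print(effective_block_size(0.0))    # 1,000,000 bytes
print(effective_block_size(0.55))   # ~1.7 MB
```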
Let’s first cut through all the marketing jazz and spin that people supporting Blockstream want to put on it and evaluate it on its technical merits alone, addressing first its pros, then its cons.
This post is the culmination of about a year’s worth of thoughts and research that I have been informally gathering. It started with a simple question last year, when I first read a piece by Meni Rosenfeld, written in the middle of the Bitcoin XT heyday, describing what would be so bad about having 2 persistent forks.
Forks are not scary, they are upgrades!
The post described the general understanding of forks at the time, and it was in this context that I wrote my original piece, which took a very pro-Core stance on the dangers of hard forks. I was wrong about some of my assumptions when I wrote it, and I have corrected them over the course of the year, but nevertheless that original piece earned me many Twitter RTs and ‘follows’ from Core devs and supporters at the time (most of whom, funnily enough, have now banned me).
I’ll just say it. Small blockers are elitists who want to censor out Bitcoin users who cannot afford to transact on the main chain. I’ve lost count of how many times I’ve heard the old argument that scaling on-chain damages decentralization, which in turn may damage Bitcoin’s censorship resistance.
Free as in Free speech and Free beer!
It is important to realize the hypocrisy in this line of reasoning. It is subtle, so I bet most of the proponents don’t even know that they are guilty of it.
Simply put, the fee market is a form of censorship. If you cannot pay for a bulletproof car in Mexico City, then you and your family are at risk. If you cannot afford to install a home alarm system, then you have been prevented, indirectly, from keeping your property safe from burglars. If you cannot afford insurance, then you are at risk from a fire, an accident, and so on. Similarly, if you cannot afford to pay for the privilege of transacting when you wish on the Bitcoin network, then you must be relegated to 2nd-layer networks like Lightning to make your payments, which will have centralized payment hubs to service you and collect fees from you. How is this any different from the current banking system that we have now? Isn’t this sort of debt servitude one of the exact problems Bitcoin was created to solve in the first place? Why then should Bitcoin treat those of means differently from those without? Shouldn’t the underserved be equal in the eyes of Bitcoin?
Early this year, when the debate arose on how to manage the meta-consensus issue of hard fork management, I wrote an article about emergent consensus. This basically outlined the idea behind Bitcoin Unlimited‘s proposal of letting the network decide when it is collectively ready to move the block limit higher, and by what amount. At the time, I wrote that the issue was a lack of good UX tools which would be able to track network participants’ votes (whether from mining nodes or regular full nodes) and show them in real time. After all, emergent consensus can only work if there is a sufficient feedback loop, so that the collective group decision-making process can be facilitated and overestimates and underestimates corrected. This is much like how a liquid market of bids and asks has facilitated price discovery in every financial market since the beginning of human commerce. It is only by the repeated and constant dogmatization of the block size limit as a ‘sacrosanct’ part of the protocol that the proponents of a smaller, block-restricted Bitcoin have been able to convince everyone that the limit cannot be changed, lest the network be subject to catastrophic attacks or instability.
We have all heard about the big problem of mining centralization in Bitcoin: the deep-seated fear that somehow, if left unchecked, the miners will collude to defraud the network and sabotage the whole system, all to satiate their own lust for profit.
This is often used as a reason to employ [centrally planned policy here] or to change the protocol to incentivize some other (more acceptable) form of behaviour. Of all the ‘decentralization myths’, this one is the toughest to dispel; not because it is any more true than the others, but because people have an in-built selection bias: they often believe that a system not serving them directly must mean that the system is broken, instead of realizing that the way they are interacting with the system may be at fault. Mining has always been a very liquid market in Bitcoin, and it has gone through several phases or generations; as each era came to an end, there were very loud voices in the industry that wailed and warned that the new change would mark the end of the network and everything would break. Detractors said the same thing when mining moved from single CPUs to GPUs and experienced a 1000x increase in efficiency, then again when mining moved to FPGAs, and finally to custom ASICs. The industry has seen hashrates go from MH/s, to GH/s, to TH/s. That is a million-fold increase in just 7 years. Every time, the complainers were the ones that had some entrenched interest in the current model and stood to lose money or competitiveness. Maybe they had just bought 10 new Intel Xeon servers to mine Bitcoin when some genius had the idea of moving mining to GPUs. Or maybe they had just bought $200,000 of GPUs when the first ASICs were released, and were caught holding the bag. Needless to say, you can always identify the people who stand to lose something from a change by how loudly they complain about it. (Hint: take note of which miners complain about mining centralization the most.)
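As a quick sanity check on the “million times in just 7 years” figure above: MH/s to TH/s is a factor of 10^6, and the implied compound annual growth is striking.

```python
# MH/s -> TH/s is a factor of 10^6. Over 7 years, the implied
# compound annual growth rate of network hashpower efficiency:
growth = 1_000_000 ** (1 / 7)
print(f"{growth:.1f}x per year")  # ~7.2x per year, sustained for 7 years
```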
The Lightning network has been heralded as the way to scale Bitcoin into the future. But as it becomes apparent that two very separate camps, with differing opinions on how to scale Bitcoin, are starting to draw lines in the sand, it’s worth taking a pragmatic look at this technology, since it seems to be shaping up that once adopted, it will be very difficult to back out¹
First off, I want to say that Lightning as a concept is pretty interesting. I think that it will have many uses in the world of Bitcoin. Yes, I have read the white paper (both long and short versions) and I believe I have a pretty good understanding of how it works. A disclaimer: as most of the development is happening behind closed doors at BitFury, it’s hard to comment on any new, yet-unreleased progress, such as developments on the routing algorithm.
Let’s examine the pros and cons of the Lightning overlay network.
- Unlimited txn/s
- Secure from double spends
- Requires Bitcoin to use
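Before weighing those points, here is a minimal sketch of the payment-channel idea that Lightning builds on. This is my own simplification: real channels use commitment transactions, revocation keys, and HTLCs for routing, none of which is modelled here. The point is only that two parties can transact off-chain indefinitely and settle the net result on-chain.

```python
# Minimal sketch of a bidirectional payment channel (amounts in satoshis).

class PaymentChannel:
    def __init__(self, deposit_a: int, deposit_b: int):
        # A funding transaction locks both deposits on-chain.
        self.balance_a = deposit_a
        self.balance_b = deposit_b

    def pay_a_to_b(self, amount: int) -> None:
        """Off-chain state update: no on-chain transaction required."""
        if amount > self.balance_a:
            raise ValueError("insufficient channel balance")
        self.balance_a -= amount
        self.balance_b += amount

    def close(self) -> tuple[int, int]:
        """Settle on-chain: one transaction pays out the final balances."""
        return self.balance_a, self.balance_b

# Thousands of payments, but only two on-chain transactions (open + close):
ch = PaymentChannel(100_000, 50_000)
for _ in range(1000):
    ch.pay_a_to_b(10)
print(ch.close())  # (90000, 60000)
```

This is where the “unlimited txn/s” claim comes from, and also where the “requires Bitcoin to use” point bites: the channel’s security rests entirely on the ability to settle back to the main chain.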
Ethereum made crypto-history this week by being the first PoW blockchain to execute a hard fork. They claimed it was done after getting unanimous consensus from the community through stake voting, which many have criticised as nothing more than a farce, as less than 13% of the total coin population bothered to turn up to vote, and some sources say it was possibly even less than 2%. Nevertheless, the hard fork was devised and coded, hastily tested, and released, and when the fateful day arrived on which it was pre-programmed to activate, July 21st 2016, the network indeed split into two. Quietly, smoothly, without much fanfare.
A week before, a group of developers and supporters who opposed the hard fork on ethical principles formed a movement called Ethereum Classic and pledged to reject the new fork, which would see the seizure and confiscation of the ETH that the DAO attacker had acquired during his raid. This movement also saw the defection of about 5% of the mining power on the Ethereum network.
What happened after the fork block made history. Contrary to what the ETH developers said, the chains did not re-merge, and the minority chain persisted. At first its block rate was a fraction of the majority chain’s, but now, after 2 days, the block rate has stabilized and the minority chain is mining blocks at about the same rate as before the fork. In addition, its mining difficulty is only 1% of the majority chain’s, which adds an economic incentive for miners to mine on the minority chain in order to earn more rewards. This second chain represents the split-fork scenario that many Bitcoin Core devs have been warning the community would cause chaos and destroy both systems. Only, it didn’t. At least not yet.
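A back-of-envelope illustration of that incentive (my own numbers, and it deliberately ignores that the two chains’ coins trade at very different prices): the expected number of blocks a miner finds per hash is proportional to 1/difficulty, so at 1% of the majority chain’s difficulty, the same hashpower earns ~100x more block rewards, denominated in the minority chain’s coin.

```python
# Coins mined per unit of hashpower scale inversely with difficulty
# (assuming equal block rewards on both chains).

def relative_coin_yield(my_difficulty: float, other_difficulty: float) -> float:
    """Coins earned per hash on `my` chain relative to the other chain."""
    return other_difficulty / my_difficulty

# Minority chain at 1% of the majority chain's difficulty:
print(relative_coin_yield(my_difficulty=0.01, other_difficulty=1.0))  # 100x
```

Whether that is actually profitable depends on the minority coin’s market price, which is exactly the feedback loop that determines whether the chain survives.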
Power to the People! Power to the Users!
The Decentralization Parody
Every so often in crypto, another data point emerges in the wild that supports or disproves a previous theory or fundamental school of thought. The recent fiasco with Ethereum and its crown-jewel proof-of-concept project, The DAO, was such a data point, and it made me want to revisit some past debates about decentralization and its misconceptions. Ethereum was supposed to be decentralized (some argue more so than Bitcoin, by measures of node operation cost), yet the community could seriously consider supporting a hard fork that breaks the coin fungibility of its system in the name of ‘justice’ and making victims whole. That flies in the face of everything a good monetary system should be.
Slides for the seminar that I gave at BlockchainHub on March 11th
Presentation slides for the Bloomberg talk I gave on Feb 24th