Much of the current Bitcoin splitting debate has revolved around the notion that a hard fork split of the network is dangerous. So dangerous, in fact, that Core developers have consistently argued that the community should trust in their (exclusive) counsel to ensure that we don’t engage in anything that may be unsafe for ourselves. Trust them; they know what is good for us. When libertarians and skeptics around the world hear that, they are immediately put on alert.
Most recently, an exchange between ex-Bitcoin lead maintainer Gavin Andresen and Core contributor Matt Corallo was especially interesting. Beyond the run-of-the-mill talking past each other, where Matt seemed to ignore points that Gavin clearly addressed (regarding the O(n²) sighash issue, solved by capping transaction sizes at 1MB), the core theme (pun intended) Matt repeated was that hard forks have no community support (by his own judgement), as supposedly shown by the fact that nobody gives much attention to the HF proposals in his (exclusive, Core-dev-curated) proposal list. No surprise there; standard echo-chamber reality-distortion-field stuff. What was interesting was that he once again mentioned the need, nay, the necessity of ‘replay protection’ in ANY hard fork proposal. This is a very important point in the Core dev platform, as it serves a dual purpose: one which on the surface is ostensibly for the public good, the other perhaps much more shadowy. Let’s examine what replay protection is, and why we really don’t need it.
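To see why the sighash issue is quadratic, and why a transaction-size cap bounds it: under legacy SIGHASH_ALL signing, each input's signature hash covers roughly the whole transaction, so total bytes hashed grows like n_inputs × tx_size. A minimal sketch of that cost model (the byte sizes are illustrative assumptions, not Bitcoin's exact serialization):

```python
# Simplified model of legacy (pre-Segwit) signature-hashing cost.
# Each input signed under SIGHASH_ALL hashes (roughly) the whole
# transaction, so work scales as n_inputs * tx_size -- quadratic
# when inputs dominate the transaction size.
# INPUT_SIZE / OUTPUT_SIZE are assumed, typical-looking byte counts.

INPUT_SIZE = 148   # assumed bytes per input
OUTPUT_SIZE = 34   # assumed bytes per output

def bytes_hashed(n_inputs: int, n_outputs: int = 2) -> int:
    """Total bytes fed to the hash function when signing every input."""
    tx_size = n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE + 10
    return n_inputs * tx_size  # one near-full-tx hash per input

# Doubling the input count roughly quadruples the hashing work:
small = bytes_hashed(1_000)
large = bytes_hashed(2_000)
print(large / small)  # close to 4
```

Capping transactions at 1MB turns this from an unbounded attack into a bounded worst case: a transaction can never be large enough for the n² term to run away.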
So, voting on Core’s implementation of Segwit is now enabled, and all three of the miners that support Core have already cast their vote (two pools and one cloud-mining MLM), totalling about 23% of the network. Adoption seems to have stalled (as of 4 Dec 16) as the rest of the undecided vote remains undecided. A perfect time for a breakdown of Segwit: the good, the bad, and the ugly.
Segwit, the [un?]controversial softfork
Segwit has been called a ‘much needed upgrade’ to the network by Core proponents; it expands the effective block size in a somewhat jury-rigged way (to roughly 1.7MB).
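The ~1.7MB figure falls out of Segwit’s weight accounting: under BIP 141, block weight = 3 × base size + total size, capped at 4,000,000 weight units, so non-witness bytes cost 4 WU and witness bytes only 1 WU. A minimal sketch of the arithmetic (the 55% witness share below is an assumed typical value, chosen to match the commonly quoted figure):

```python
# Sketch of Segwit's effective block size under the BIP 141 weight limit.
# weight = 3*base + total, max 4,000,000 WU; if a fraction w of the
# block's bytes are witness data, then base = (1-w)*total and
# weight = (4 - 3*w) * total_bytes.

MAX_WEIGHT = 4_000_000

def effective_block_bytes(witness_fraction: float) -> float:
    """Max total block bytes if `witness_fraction` of bytes are witness data."""
    return MAX_WEIGHT / (4 - 3 * witness_fraction)

print(effective_block_bytes(0.0))   # all-legacy transactions: 1,000,000 bytes
print(effective_block_bytes(0.55))  # assumed typical witness share: ~1.7 MB
```

A block of purely legacy transactions still tops out at the old 1MB, which is why the “1.7MB” number depends entirely on how much of the block is witness data.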
Let’s first cut through all the marketing jazz and spin that supporters of Blockstream put on it and evaluate it on its technical merits alone, addressing first its pros, then its cons.
This post is the culmination of about a year’s worth of thoughts and research that I have been informally gathering. It started with a simple question posed last year, when I first read a piece by core developer Meni Rosenfeld, written in the middle of the Bitcoin XT heyday, describing what would be so bad about having two persistent forks.
Forks are not scary, they are upgrades!
The post described the general understanding of forks at the time, and it was in this context that I wrote my original piece, which took a very much pro-Core stance on the dangers of hard forks. I was wrong about some of my assumptions when I wrote it, which I have corrected over the course of the year, but nevertheless that original piece earned me many Twitter RTs and ‘follows’ from Core devs and supporters at the time (who, funnily enough, have now mostly all banned me).