Much of the current Bitcoin splitting debate has revolved around the notion that a hard fork split of the network is dangerous. So dangerous, in fact, that Core developers have consistently argued that the community should trust their (exclusive) counsel to ensure we don’t engage in anything that may be unsafe for ourselves. Trust them; they know what is good for us. When libertarians and skeptics around the world hear that, they are immediately put on alert.
Most recently, an exchange between ex-Bitcoin lead maintainer Gavin Andresen and Core contributor Matt Corallo was especially interesting. Besides the run-of-the-mill talking past each other, where Matt seems to ignore points that Gavin clearly addressed (the O(n²) sighash issue, solved by capping transaction sizes at 1 MB), the core theme (pun intended) repeated again by Matt was that hard forks have no community support (by his own judgement), which is supposedly shown by the fact that nobody seems to be giving much attention to the HF proposals on his (exclusively Core-dev-curated) proposal list. No surprise here; standard echo-chamber reality-distortion-field stuff. What was interesting was that he once again mentioned the need, nay, the necessity of ‘replay protection’ in ANY hard fork proposal. This is a very important point in the Core dev platform, as it serves a dual purpose: one which on the surface is ostensibly for the public good, while the other may be much more shadowy. Let’s examine what replay protection is, and why we really don’t need it.
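The quadratic sighash issue mentioned above can be illustrated with a toy model (my own simplification, not consensus code): under the legacy SIGHASH_ALL scheme, each input’s signature hash covers roughly the whole transaction, so the total bytes hashed grow with the square of transaction size — which is why capping transactions at 1 MB bounds the worst case.

```python
# Illustrative model of the legacy "n^2 sighash" problem (not consensus code).
# Each input's signature hash covers (roughly) the entire transaction, so
# total hashing work scales as n_inputs * tx_size, i.e. quadratically.

def legacy_sighash_bytes(n_inputs: int, bytes_per_input: int = 180) -> int:
    """Approximate total bytes hashed to verify all input signatures."""
    tx_size = n_inputs * bytes_per_input   # crude size model: inputs dominate
    return n_inputs * tx_size              # each input re-hashes ~the whole tx

# Doubling the transaction size roughly quadruples the hashing work:
small = legacy_sighash_bytes(1_000)
large = legacy_sighash_bytes(2_000)
print(large / small)  # -> 4.0
```

The per-input byte count is an assumed round figure for illustration; the quadratic growth is the point, and it is what a transaction-size cap keeps bounded.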
The question of how miners will be paid in the long run, after the mining subsidy disappears, is a much-debated topic in Bitcoin. For those who don’t know, the block reward is set to halve every 4 years until it finally reaches zero sometime around the year 2140. How the Bitcoin mining ecosystem will remain profitable (and thus healthy) after that is up in the air. Miners are important because they provide security to the Bitcoin network: they convert real-world energy into network security that guards against attacks from malicious actors. Therefore, the more decentralized and diverse the mining ecosystem, the better for Bitcoin.
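The halving schedule above can be sketched in a few lines (a simplified model of the subsidy rule, using the reference client’s integer-halving behaviour):

```python
# Sketch of Bitcoin's subsidy schedule: the block reward starts at 50 BTC
# and halves every 210,000 blocks (~4 years) until it rounds down to zero
# (the last satoshi of subsidy is mined around the year 2140).

HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50 * 100_000_000  # in satoshis

def block_subsidy(height: int) -> int:
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:                  # hard stop, as in the reference client
        return 0
    return INITIAL_SUBSIDY >> halvings  # integer halving per era

total = sum(block_subsidy(h) for h in range(64 * HALVING_INTERVAL))
print(total / 100_000_000)  # just under 21 million BTC
```

The geometric series of halvings is why total issuance converges on the famous 21 million coin cap.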
So what will happen when mining rewards disappear? Well, some miners feel that transaction fees should rise to fill the shortfall. As Ang Li puts it in a recent article at Bitcoin.com:
The incentives that Satoshi Nakamoto designed in the Bitcoin whitepaper are not enough to sustain mining for long, Li feels, adding that as the block reward halves every four years, miners’ income will continue to decline. According to him, keeping the block size where it is now will not provide enough incentive and therefore has to be reconsidered. Li also believes that only a larger aggregate transaction fee will maintain the balance. “By increasing block size, and transaction numbers, the fees will gradually replace the block reward, providing enough incentive for the miners to defend the bitcoin hashrate. This is the fundamental way to achieve healthy development of the whole ecosystem.”
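As a back-of-the-envelope check on Li’s argument (my own arithmetic, not from the article): at a fixed low per-transaction fee, how many transactions per block would fees alone need in order to match a given subsidy?

```python
# Toy arithmetic for the "fees replace the subsidy" claim.
# Working in satoshis keeps the division exact.

def txs_needed(subsidy_sats: int, avg_fee_sats: int) -> int:
    """Transactions per block for fee income alone to match the subsidy."""
    return subsidy_sats // avg_fee_sats

# A 12.5 BTC subsidy at an average fee of 0.0001 BTC (10,000 sats):
print(txs_needed(1_250_000_000, 10_000))  # -> 125000 transactions per block
```

At roughly 250 bytes per transaction, 125,000 transactions is on the order of 30 MB per block — which gives a sense of the on-chain throughput the big-block argument implies if fees are to stay low.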
This is the fourth part of a multi-part series on the myths of decentralization. You can read the previous installments here:
Part 1 – Decentralization Redefined
Part 2 – Decentralization Myth
Part 3 – Decentralization comes with People
I’ve written quite a lot about the misconceptions and deliberate misdirection that some proponents in the Bitcoin community choose to spread around in order to shape the public perception of what makes Bitcoin valuable, and as a result change the fundamental value proposition of Bitcoin. As you all should know by now, “Value does not exist outside the consciousness of Man” – Carl Menger. So changing people’s consciousness by way of affecting their ideas, affects the value of Bitcoin. Thus it is important that we re-evaluate our notions of why Bitcoin is valuable every so often with a huge dose of skepticism.
In today’s article, I’d like to review the fundamental security model of Bitcoin as intended by its mysterious creator, Satoshi Nakamoto (at least in my interpretation of it), why that model is the best we can possibly hope for, and why any further attempt at adding extra layers of ‘security’ on top of it just ends up making Bitcoin less secure by making it more centralized.
One of the heated debates that has raged over the years in the Bitcoin space is whether a developer team led by a benevolent dictator is the appropriate model for a network worth more than 15 billion dollars in market capitalization. Many have cited examples of how Satoshi, and then Gavin himself, were benevolent dictators, and how some well-known projects, such as Linux, have been successfully managed under the watchful eye of a wise and benevolent (though sometimes abrasive) dictator. It is also true that most civilizations evolve from dictatorships, starting with tribal chiefs, to feudal warrior kings, to aristocratic monarchs, to emperors. The transition to democracy is not always a smooth one, and is marred by slippages into oligarchy and totalitarian fascism, and by misguided experiments in socialism. It is important, then, to keep in mind that while most organized groups start as dictatorships, they eventually evolve into a system that is more inclusive of the common people’s will.
Oh, Glorious Leader, shepherd for the weak, show us the way!
Firstly, let’s get the obvious out of the way. Dictatorships are vastly more efficient than republics or democracies. This is because there are few bounds on the leader’s power, and his followers will carry out his instructions in the most expedient fashion. Contrast this with a democracy, where leaders are continually second-guessed by their opposition and by political rivals all vying for their own chance to run the show. In a dictatorship, the only way a change of regime is possible is through open and widespread revolution. This is why despotic Chinese emperors of old made it illegal to congregate in groups of 3 or more, restricted what could be discussed in public, and on occasion simply committed mass murders of academics and scholars for fear that they might spread seeds of dissent and dissatisfaction among the peasants with their pesky logic, philosophy, and ideals of morality.
So, voting on Core’s implementation of Segwit is now enabled, and all 3 of the miners that support Core have already cast their votes (2 pools and 1 cloud-mining MLM), totalling about 23% of the network. Adoption seems to have stalled (as of 4 Dec 2016) as the rest of the undecided remain undecided. A perfect time for a breakdown of Segwit: the good, the bad, and the ugly.
Segwit, the [un?]controversial softfork
Segwit has been called a ‘much needed upgrade’ to the network by Core proponents. It is a somewhat jury-rigged way of expanding the effective block size (to roughly 1.7 MB).
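Where does the ~1.7 MB figure come from? A minimal sketch of segwit’s block-weight rule, assuming an illustrative witness share (the actual effective size depends on the transaction mix):

```python
# Sketch of segwit's block-weight accounting. Blocks are capped at
# 4,000,000 weight units, where
#     weight = 4 * base_size + witness_size
# i.e. witness bytes are discounted to 1/4 the cost of non-witness bytes.

MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_bytes: int, witness_bytes: int) -> int:
    """Weight of a block with the given non-witness and witness byte counts."""
    return 4 * base_bytes + witness_bytes

# Assume (illustratively) that witness data makes up 55% of total bytes in
# a typical post-segwit transaction mix. For a block of total size T:
#     weight = 4 * (1 - s) * T + s * T = (4 - 3s) * T
witness_share = 0.55
effective_size = MAX_BLOCK_WEIGHT / (4 - 3 * witness_share)
print(round(effective_size / 1_000_000, 2), "MB")  # ~1.7 MB under this assumption
```

A legacy block with no witness data still hits the weight cap at 1 MB of base bytes, which is why segwit is a capacity increase only to the extent that transactions actually carry discounted witness data.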
Let’s first cut through all the marketing jazz and spin that supporters of Blockstream want to put on it, and evaluate it on its technical merits alone, addressing first its pros, then its cons.
This post is the culmination of about a year’s worth of thoughts and research that I have been informally gathering. It started with a simple question last year, when I first read a piece written in the middle of the Bitcoin XT heyday by core developer Meni Rosenfeld, describing what would be so bad about having 2 persistent forks.
Forks are not scary, they are upgrades!
The post described the general understanding of forks at the time, and it was in this context that I wrote my original piece, which took a very pro-Core stance on the dangers of hard forks. I was wrong on some of my assumptions when I wrote it, which I have corrected over the course of the year, but nevertheless that original piece earned me many Twitter RTs and ‘follows’ from core devs and supporters at the time (who, funnily enough, have now mostly all banned me).
I’ll just say it. Small blockers are elitists who want to censor out Bitcoin users who cannot afford to transact on mainchain. I’ve lost count of how many times I’ve heard the old argument that scaling onchain damages decentralization, which in turn may damage the censorship resistance of Bitcoin.
Free as in Free speech and Free beer!
It is important to realize the hypocrisy in this line of reasoning. It is subtle, so I would bet most of its proponents don’t even know that they are guilty of it.
Simply put, the fee market is a form of censorship. If you cannot pay for a bulletproof car in Mexico City, then you and your family are at risk. If you cannot afford to install a home alarm system, then you have been prevented, indirectly, from keeping your property safe from burglars. If you cannot afford insurance, then you are at risk from a fire, or an accident, etc. Similarly, if you cannot afford to pay for the privilege of transacting when you wish on the Bitcoin network, then you are relegated to 2nd-layer networks like Lightning for your payments — networks which will have centralized payment hubs to service you and collect fees from you. How is this any different from the banking system we have now? Isn’t this form of debt slavery one of the exact problems Bitcoin was created to solve in the first place? Why then should Bitcoin treat those of means differently from those without? Shouldn’t all users be equal in the eyes of Bitcoin?
Early this year, when the debate over how to manage the meta-consensus issue of hard forks arose, I wrote an article about emergent consensus. It outlined the idea behind Bitcoin Unlimited‘s proposal of letting the network decide when it is collectively ready to move the block limit higher, and by what amount. At the time, I wrote that the problem was a lack of good UX tools that could track network participants’ votes (whether from mining nodes or regular full nodes) and show them in real time. After all, emergent consensus can only work if there is a sufficient feedback loop, so that the collective decision-making process can be facilitated and overestimates and underestimates corrected. This is much like how a liquid market of bids and asks has facilitated price discovery in every financial market since the beginning of human commerce. It is only by repeated and constant dogmatization of the block size limit as a ‘sacrosanct’ part of the protocol that the proponents of a smaller, restricted Bitcoin have been able to convince everyone that the limit cannot be changed, lest the network be subject to catastrophic attacks or instability.
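The mechanism can be sketched as follows (a simplified model of Bitcoin Unlimited’s EB/AD parameters, not its actual code): each node sets an Excessive Block size it will tolerate, plus an Acceptance Depth after which it yields to the chain the rest of the network has built.

```python
# Simplified sketch of "emergent consensus": a node with an Excessive Block
# size (EB) initially ignores blocks larger than EB, but accepts them once
# the chain built on top of them reaches its Acceptance Depth (AD).

def accepts_block(block_size: int, depth_built_on_it: int,
                  eb: int = 1_000_000, ad: int = 4) -> bool:
    if block_size <= eb:
        return True                   # within this node's limit: accept
    return depth_built_on_it >= ad    # excessive, but buried AD-deep: yield

# A 2 MB block is first rejected by a node with a 1 MB EB...
print(accepts_block(2_000_000, depth_built_on_it=0))  # False
# ...but accepted once the network has built 4 blocks on top of it.
print(accepts_block(2_000_000, depth_built_on_it=4))  # True
```

The AD back-off is the feedback loop described above: a miner who overshoots the network’s tolerance sees their block orphaned, while a limit the majority has moved past stops binding.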
We have all heard about the big problem of mining centralization in Bitcoin. The deep set fears that somehow, if left unchecked, the miners will collude to defraud the network, and sabotage the whole system, all in order to satiate their own lust for profit.
This is often used as a reason to employ [centrally planned policy here], or to change the protocol to incentivize some other (more acceptable) form of behaviour. Of all the ‘decentralization myths’ this one is the toughest to dispel; not because it is any more true than the others, but because people have an inbuilt selection bias: they often believe that a system not serving them directly must mean the system is broken, instead of realizing that the way they are interacting with it may be at fault. Mining has always been a very liquid market in Bitcoin, and has gone through several phases, or generations, and as each era came to an end there were very loud voices in the industry wailing and warning that this new change would mark the end of the network and everything would break. Detractors said the same thing when mining moved from single CPUs to GPUs and experienced a 1000x increase in efficiency, then again when mining moved to FPGAs, and finally to custom ASICs. The industry has seen hashrates go from MH/s, to GH/s, to TH/s. That is a million-fold increase in just 7 years. Every time, the complainers were the ones with an entrenched interest in the current model who stood to lose money or competitiveness. Maybe they had just bought 10 new Intel Xeon servers to mine Bitcoin when some genius had the idea to move mining to GPUs. Or maybe they had just bought $200,000 of GPUs when the first ASICs were released, and were caught holding the bag. Needless to say, you can always identify the people who stand to lose something from a change by how loudly they complain about it. (Hint: take note of which miners complain about mining centralization the most.)
Lightning network has been heralded as the way to scale Bitcoin into the future. But as it becomes apparent that two very separate camps with differing opinions on how to scale Bitcoin are drawing lines in the sand, it’s worth taking a pragmatic look at this technology, as it seems to be shaping up that once adopted, it will be very difficult to back out.¹
First off, I want to say that Lightning as a concept is pretty interesting. I think it will have many uses in the world of Bitcoin. Yes, I have read the white paper (both the long and short versions), and I believe I have a pretty good understanding of how it works. A disclaimer: as most of the development is happening behind closed doors at BitFury, it’s hard to comment on any new, as-yet-unreleased progress, such as developments on the routing algorithm.
Let’s examine the pros and cons of the Lightning overlay network.
- Unlimited txn/s
- Secure from double spends
- Requires Bitcoin to use