Ripple and Lightning Networks: The Nuts and Bolts

Last time, I spoke about how the bread-and-butter use case of Ethereum was soon going to be challenged by Bitcoin Cash, when the missing op-codes are re-enabled in the upcoming May 16th upgrade.  Today, I will discuss the other use case that BCH is going to challenge, payments, and the biggest challengers in that space.  Yes, many expert bitcoiners will recognize that Bitcoin Cash is aiming to be CASH first and foremost, but that doesn't preclude its use as a payment rail as a secondary use case.

Firstly, it will be good to explore exactly what the difference is between a payment network and cash.  Most simply, cash is an asset.  It is something that cannot be taken away from you without force, because you alone own it, and you alone decide when to keep it or when to give it away.  A payment network is simply a system by which transfers of ownership rights can be tracked, requested, processed, and settled.  SWIFT is the most popular payment system among first-world banks today, and among consumers VISA is a payment network that allows you to pay for things with credit extended to you by a credit issuer.  It is important to note that while Bitcoin can be thought of as a payment network (given that all the assets are publicly visible and moved on the blockchain), it is first and foremost an asset ownership ledger.  And Bitcoin (the coin) is an asset, not just a utility token.**

The biggest players in the digital payment network space these days are the Lightning Network (built on top of the Bitcoin asset blockchain) and Ripple.  Both have a 'usage token' that you have to buy in order to use them, BTC and XRP respectively, but the two networks take very different approaches to solving the same problems.  And, I will argue, these problems don't exist on the Bitcoin Cash blockchain.

Lightning in a Bottle

First off, the Lightning Network.  I first spoke about it back in Oct 2016, when the developers first said that they were ready to deploy the network real soon.  The problems I saw were not so much the technical issues (there were so many that it wasn't worth getting too deep into the details at that early stage) but more the economic problems, IF the network were to be adopted at scale.  Those issues have not changed, and we will reiterate them later in this post.  Let's first discuss the matter of trying to "fix something that isn't broken".  The best way to think about LN technically is through an analogy in the physical world.  In the world of motorcycles, there are many types of experimental steering mechanisms.

This is what Lightning developers think they are building:

LN is betting that its technology is sorely needed and will revolutionize the industry

For those not into bikes, the way a motorcycle steers is very different from how a car does.  Basically, you have to move the handlebars in the direction opposite to the one you want to turn.  This is called counter-steering.  In addition, the front wheel, due to its role in steering, has traditionally been mounted using a fork system.  This means that the wheel is held between 2 shock absorbers, which join the main frame of the bike at the headstock, the pivot point of the steering system.  This wheel mounting configuration, while the most common and simple, combines the steering system with the braking and suspension systems.  This has some disadvantages, however, which I won't get into here, as this is a financial technology blog and not a motorcycle one (though I'm happy to go into them for the riders out there).  Very much like with the bike, the LN developers and many of those who support the legacy version of Bitcoin (the one hobbled with 1MB maximum blocks and segwit) see these self-imposed issues as needing to be fixed, and LN seems to be their solution.  The bizarre bike in the following figure is what they see the LN network becoming:

LN devs think they are going to ‘fix’ Bitcoin with revolutionary technology

Keep in mind: this is IF the dream of LN can be realized without any complications or fundamental bugs.  In the best-case scenario, we will get something like the Bimota Tesi, a beautifully engineered, overly complicated, and expensive motorcycle that looks bizarre and exotic, sure to turn heads, but is very rarely seen on the road as a performance bike because the complicated steering mechanism removes all the 'feel' and control.  Needless to say, for all its purported benefits, no MotoGP rider has ever taken a Tesi to the track in a serious race.  That pretty much says it all.  And that was the best case.  In reality, the LN technology at present looks more like the following bike experiment:

LN: the current state of the technology

An over-engineered experiment that suffers from having to solve engineering problems it introduced itself through its overly complex fundamental design.  The world doesn't really need a hub-steering, 2-wheel-drive, diesel-powered motorcycle.  Well, not presently anyhow.  The other critical point is the matter of ownership.  Recall that I mentioned that Bitcoins are assets, and that having total and complete control of your assets (or having the option of complete control) is part of the rights of the owner of an asset.  In LN, you are forced to put your asset (Bitcoins) into what is effectively a bank account.  This is a weird jointly owned bank account that you have opened with another peer, which LN folks call a payment channel.  Your Bitcoins locked in such a channel can only go to or from the peer that you opened the channel with, so in order to fix that problem the LN is developing a massive IOU balancing/routing system to make sure that if you need to pay somebody, or get paid, it can be done through one of your existing payment channels if possible.  Even if this were all to work, the fact of the matter is that your Bitcoins are still locked in channels, and even though you can technically withdraw them all if you wanted to, in practice that would require many transactions for any decent-sized wallet, which would be costly.  In addition, new attack vectors are introduced, where a thief can try to steal your locked coins in the hope that you aren't watching the channel.  Don't take my word for it (though I DID warn about all these things back in 2016, but you probably don't remember); read it on the LN dev blog yourself!  If you manage to get through that post without being totally confused, then at least you will have an impression of all the extra complications that using LN coins will involve.  I would rather not peg my Bitcoins into the LN network if I could just use bitcoins directly.
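To make the 'jointly owned account' point concrete, here is a minimal sketch of what a payment channel boils down to, in plain Python (this is not real LN code; the PaymentChannel class and its method names are my own illustrative invention): two parties lock funds together, shuffle a pair of balances back and forth off-chain, and only the final split ever settles on-chain.

```python
# Minimal conceptual sketch of a payment channel (NOT the real LN protocol).
# Class, method names, and amounts are illustrative only.

class PaymentChannel:
    def __init__(self, alice_deposit, bob_deposit):
        # Funding: both parties lock coins into a shared 2-of-2 output on-chain.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.open = True

    def pay(self, sender, receiver, amount):
        # Off-chain update: both parties co-sign a new balance split.
        # No on-chain transaction happens, but funds never leave the channel either.
        if not self.open or self.balances[sender] < amount:
            raise ValueError("channel closed or insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self):
        # Settlement: the latest co-signed split is broadcast on-chain.
        # If a cheater broadcasts an old state instead, it must be contested
        # within a timeout window, which is why channels need watching.
        self.open = False
        return dict(self.balances)


channel = PaymentChannel(alice_deposit=500_000, bob_deposit=500_000)   # amounts in satoshis
channel.pay("alice", "bob", 100_000)   # instant and cheap, but only within this channel
print(channel.close())                 # {'alice': 400000, 'bob': 600000}
```

Note that every coin stays locked in the channel until the close step settles on-chain, which is exactly the ownership trade-off described above.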

In practice, LN is exactly like the existing banking system.  Yes, you put your money in the bank and it is 'technically' still yours, and yes, you CAN take your money out when you wish, and 99.9% of the time that is fine.  But just like in a banking system, in the other 0.1% of cases the ownership of your property can be violated, as those who had money in Cypriot banks found out in 2013, when the banks basically bailed themselves in with their own customers' money.  LN seems very similar to this system.

 

Ripple: The network that tried to bootstrap itself with its own tokens

With Ripple, the situation is slightly different.  They were originally an IOU-passing network, which wanted everyone to issue their own IOU tokens on the network, and to manage the transfer of these IOUs on a standard platform.  That standard platform would be a shared ledger with a common consensus protocol, which is what most people call the Ripple network today.  The token XRP was created from nothing and was originally intended as a spam prevention measure, as sending more and more transactions to a node would cost more and more XRP.  The intent was also that every transaction would destroy a small amount of XRP, so that increased use of the network would slowly appreciate the value of XRP, due to the fixed number of them in existence.  It may surprise most people to learn that Ripple actually existed before Bitcoin was released, though it should be mentioned that the Ripple network today looks very different from the original one.  Ripple didn't like the concept of doing work to earn the inflation of the coin, and thus it had a distribution problem.  Where PoW in Bitcoin made the distribution of the coins very simple (you spend time and energy to mine, you earn coins, whoever you are), Ripple created all 100,000,000,000 XRP and granted them to themselves, a non-profit foundation, and early investors, and then struggled to figure out how to get them distributed down the pyramid to regular people.  The way XRP was distributed harkens back to how money in a central banking system is distributed: it is printed by the central bank, then sold to large investment banks, which pass it on to regional banks, and finally to regular people through loans.

Their approach brought with it some self-induced problems that they then had to solve.  How do you prevent some of the large early holders of XRP from hoarding their distribution?  How much should be kept aside to pay for development?  How much XRP should be burned on each transaction?  And most importantly, if all the tools and applications turn out to be useful, what is to prevent a rival network from just forking the code and running its own version of XRP with a NEW distribution of funds?  These are big issues with the Ripple business model, which requires that they sell the technology to large banks first, banks which ostensibly want a payment system that costs less than the existing SWIFT payment system.  To this end, Ripple is developing trading interfaces and applications to make exchanging IOUs (which may represent fiat currencies or other tokens) to and from XRP easier.  The strategy that they are now pursuing is to position XRP as a bridge currency for FX speculators to hold in order to reduce the volatility of making markets in low-liquidity currencies such as Venezuelan bolivars or Israeli shekels.

The issue with this strategy is that many times in economic history people have attempted to create a stable bridge currency for the banks of the world, and have failed.  The original attempt, proposed by Maynard Keynes, was the bancor, and it was a failure (no relation to the recent ICO called Bancor, which was also a colossal failure).  The second, more recent attempt of the last couple of decades is the SDR proposed by the IMF, which is based on a basket of major world currencies, and is also largely a failure.  Something about having to use a complex instrument controlled by a 3rd party to hedge your own foreign currency risks didn't appeal to the central banks of the world.  After all, every central bank faces its own unique set of problems, which necessitates a different hedging strategy.  For instance, a country in South America may do most of its trade with the US, which means it would want to hedge its balance-of-trade risk with a heavier weight put on USD instead of, say, EUR.  At the end of the day, hedging FX with a specific, non-negotiable central bank instrument was less straightforward than just holding the foreign currencies yourself (and less useful).  That is the problem with the "XRP as a bridge currency" strategy.  Why hold XRP when you can just hold USD, JPY, or EUR yourself?  Which means XRP becomes nothing more than a necessary evil, something you must hold in order to use the payment network, there to put a cost on DDoS-attacking the network with many transactions.  But if the network is going to be a walled-garden, curated network used only by registered banks, then the risk of attack is minimal.  That leaves XRP's only true value as a speculative instrument that is loosely and informally tied to the usefulness of the Ripple network itself and the applications that Ripple Labs is creating for it.  Astute readers will note that this is also the same base value proposition as Bitcoin (that its value is related to the usefulness of the blockchain itself), but the difference is that Ripple created all the XRP out of thin air themselves, and they themselves decided how to distribute the tokens.  Who knows if they played favourites.  Also, creating the tokens ex nihilo means that they may very likely be treated as a security by the SEC.  Only securities are created in this way (well, currency is as well, but the central banks have the exclusive license to create that), so there may be complications for those that raised funds by granting XRP to investors.  Furthermore, if XRP were to be considered a security, that would severely limit its trade in the US and other first-world countries, further limiting its use as a bridge currency for FX liquidity providers.

Lastly, the problem with XRP comes back to my opening statement, where I posited that Bitcoin is an asset, which means you own it and nobody else does.  This is achieved in Bitcoin through both asymmetric key cryptography (you own the knowledge of your private keys) and the decentralization of the mining ecosystem itself.  Mining is its own business, and the business of the miners isn't to run a payment system.  The unique fusion of a commodity extraction business, concerned mostly with cheaper, greener power generation, and the processing of a financial payment system is what gives Bitcoin its power, and it is Satoshi's true innovation.  If you build a token market on top of BCH, trying to steal people's assets or to freeze their BCH would be very costly and not guaranteed to work.  In comparison, imagine Ripple at its hypothetical apex, when all central banks use it for their inter-bank transfers, and you are a liquidity provider who has been stockpiling XRP in order to profit from the imbalance of flows between the North Korean won and the US dollar.  You could be denied access to your XRP if all the central banks of the world just refused to listen to the nodes that broadcast your transactions.  While Ripple uses similar cryptographic security to enforce proper signing of transactions, unlike Bitcoin, a Ripple network that banks use will only trust transactions from each other.  You will not be able to submit a transaction that the banking consortium disapproves of***.  And the banking consortium would not allow a trusted validator into their cartel without ensuring that it was going to play by their rules.  So the difference between Ripple's decentralization strategy and Bitcoin's is that Ripple has a trusted server list (called the UNL), and transactions received from servers on that list are assumed 'blessed' (provided they are valid).  Transactions received from outside the list are not treated the same.  Basically, in Ripple anyone can connect to the network and send transactions, but unless your transaction is acknowledged somewhere down the line by the main validators, it won't make it into their ledger.  And membership in the main validator pool is similar to cartel formation: "I'll put you on my trusted list if you put me on yours."  Every bank using Ripple will certainly guard its UNL and ensure that only other licensed banks are on it.  Contrast this to Bitcoin, where nobody can stop anyone else from participating in mining; that is the true key to decentralization.
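As a rough illustration of how a UNL gates what gets into a ledger, here is a small Python sketch (this is not the actual Ripple consensus algorithm; the Validator class, the peer names, and the 80% quorum figure are all hypothetical): a validator only counts endorsements from peers on its own trusted list, so a transaction relayed only by outside nodes never reaches that validator's ledger, no matter how validly it is signed.

```python
# Illustrative sketch of UNL-style trust gating (not the real Ripple consensus protocol).
# Names and the quorum threshold are hypothetical.

class Validator:
    def __init__(self, name, unl):
        self.name = name
        self.unl = set(unl)   # the only peers whose endorsements this validator counts

    def accepts(self, tx_id, endorsements):
        # A transaction only counts as agreed if enough of *my* UNL endorsed it;
        # the transaction contents themselves are irrelevant to this sketch.
        trusted = [peer for peer in endorsements if peer in self.unl]
        return len(trusted) >= 0.8 * len(self.unl)   # assumed 80% quorum


bank_a = Validator("BankA", unl=["BankB", "BankC", "BankD", "BankE"])

# A transaction endorsed by the banking cartel gets in...
print(bank_a.accepts("tx1", ["BankB", "BankC", "BankD", "BankE"]))   # True

# ...while one endorsed only by outside validators never makes their ledger.
print(bank_a.accepts("tx2", ["IndieNode1", "IndieNode2"]))           # False
```

Contrast this with proof-of-work, where there is no list to be on: any miner who finds a valid block can include whatever valid transactions it likes.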

It is going to be an exciting next few months, as more and more features of ‘speciality Altcoins’ stand to be usurped by BCH as the true power of the original Bitcoin design starts to come to fruition.

For the industry to grow and expand in order to reach the maximum number of humans on the planet, thereby ensuring its survival through future regulatory and political pressures, it needs to be simple, elegant, and functional.  So despite the above experiments in technology being very cool for bike geeks like myself, I would much rather just have the original and best:

BCH: The original and best. Simple, elegant, beautiful, efficient

 

/EOL

** This stems from the fact that Bitcoins require energy to be burned on creation, which means they are worth at least the cost of mining them when they are granted.  Utility tokens, on the other hand, like all tokens, have no intrinsic value, and are created ex nihilo.

*** Well, you could submit it, but it will never make it into their ledger.  It may make it into yours, if you reduced your UNL to include only validators that were not part of the banking cartel, but in that case you have effectively forked off into a parallel ledger.

Move over Ethereum: New functionality for Bitcoin Cash makes it a Smart Contract Contender

Smart contracts.  The idea was dubbed Blockchain 2.0 (Blockchain 1.0 was cash).  It held all the promise of a new world, a new digital frontier.  It was to herald an age of broker-less deals, robot escrows, AI oracles, and driverless automobiles acting as their own corporations, self-reliant actors in the new digital economy.  An economy which did not discriminate between true-born humans and machine-code-born automata.

That was the dream.  That was the promise.  That was what everyone spoke about for the last 4 years.  Except that it never happened.  Oh, there were many attempts.  Some achieved a modicum of success, some less so, and others ended in full-blown multi-million-dollar fraud or theft.  (Yes, I'm talking about most of the projects in the Ethereum space, especially, but not exclusively, the DAO.)

Let's talk about Ethereum for a bit, as it is the blockchain with the most activity in the Blockchain 2.0 space.  Arguably it drew away most of the Bitcoin developers after its launch in 2015, as the blockchain built for smart contracts and other programmable money uses.  But at least half of its success is due to the fact that Bitcoin, around that same time, suffered some pretty big, crippling, self-imposed limitations that would all but exclude it from being a contender for the mantle of programmable money.  In fact, Vitalik Buterin, the founder and spiritual leader of the Ethereum movement, was originally a bitcoiner, and he only created Ethereum because the Bitcoin Core developers at the time deliberately went out of their way to disable many of the functionalities that would have allowed smart contracts to be written on Bitcoin itself.  So Vitalik did exactly what any good decentralist does when faced with oppression by the established regime: he left and did his own thing.  He went and started designing Ethereum.  This was 2013.

However, because he had to build it from scratch, or perhaps because Vitalik didn't have the same insights as Satoshi did, he approached the design of Ethereum in a pretty naïve fashion.  He wanted a Turing-complete language so that it would be easy for developers to write smart contracts.  But a Turing-complete language would mean that infinite loops would be possible, which would be a bad thing on a globally decentralized blockchain.  So he resolved that with an economic cost, applied at the protocol level to each computational step: you pay per operation, and programs run amok will run out of 'gas' and thus stop executing.  But this introduced a whole new category of complications: how much would each operation cost relative to others?  Relative to the total computational capacity of the whole network?  How would this scale as time went on?  He then went on to 'solve' this new problem in a way which added even more complexity, and thus, yes, opened up a new class of problems.  He decided that the protocol should just change the rates every so often, by edict from the outside world.  Miners should be able to decide what the gas prices should be and magically come to consensus on them, heeding the advice of the senior ETH core developers: the 'central bank' approach.  Economically speaking, Ethereum was already becoming much more complex than Bitcoin, and writing and testing smart contracts can sometimes get costly, as your bugs burn away your ETH while you make mistakes.
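To illustrate the gas idea, here is a toy gas-metered interpreter in Python (this is not the real EVM; the opcode names and per-operation costs are invented for the example): every operation deducts from a prepaid budget, and execution halts the moment the budget runs out, so even an accidental infinite loop can only burn what was paid up front.

```python
# Toy gas-metered interpreter (illustrative only; not the real EVM or its gas schedule).

GAS_COST = {"ADD": 3, "MUL": 5, "STORE": 20}   # hypothetical per-operation costs

class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    gas = gas_limit
    stack = []
    for op, *args in program:
        gas -= GAS_COST[op]
        if gas < 0:
            # Execution stops here; the gas already consumed is still paid for.
            raise OutOfGas(f"halted at {op}: gas exhausted")
        if op == "ADD":
            stack.append(args[0] + args[1])
        elif op == "MUL":
            stack.append(args[0] * args[1])
        elif op == "STORE":
            pass   # pretend to write to contract storage (the expensive part)
    return stack, gas

program = [("ADD", 1, 2), ("MUL", 3, 4), ("STORE",)] * 10   # 280 gas worth of work
try:
    run(program, gas_limit=100)
except OutOfGas as err:
    print(err)   # a buggy or runaway program simply runs out of gas and stops
```

The hard part, as noted above, is not the metering itself but deciding what the numbers in that cost table should be, and who gets to change them.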

Scaling issues

On top of these issues, Ethereum has some serious scaling hurdles.  You may have heard of how one wildly successful application on ETH called "Crypto Kitties" nearly melted down the entire network several times in the past due to transaction flooding.  How?  It's a very addictive digital card collecting and trading application, where 'digital breeders' can make their own unique kitten mutations and sell them for ETH.  Once many people start using the application at the same time, the network floods with transactions and the whole blockchain slows to a crawl.  But why?  Because the designers of Ethereum took another naïve approach to the problem of STATE and STORAGE.  Basically, if you are going to run programs on the blockchain, then the code for the programs and their interim state (the memory a program has as it moves from instruction to instruction) is all stored on the blockchain nodes themselves.  Which is to say, EVERY ETHEREUM SERVER is storing EVERY PROGRAM'S STATE.  That's a lot of wasted storage, especially for people who really don't care for digital kitten mutation as a pastime.  And what is worse, every Ethereum server is also doing all the calculations for the Crypto Kitties decentralized application, even if you are not using it.  Basically, when Vitalik says Ethereum is a "World Computer", he means it is a very, very inefficient computer, because every computer in the world is executing the same code, and storing the same data, as everyone else, at the same time.  Yep.  Talk about the naïve approach.  It is pretty much the MAXIMALLY naïve design for decentralized multiparty computation.  _Have everyone do every computation_!  No wonder they have such a doozy of a time trying to scale Ethereum past the point where one popular application can wreak havoc on the network.

Well now, why do I bring up all these criticisms of ETH?  I'm not trying to throw cold water on their party.  In fact, I have great respect for Vitalik and the many smart contract developers that I have met and know, as they are truly breaking new ground in the space, and it is on the shoulders of their hard work that we will carve out the path to the digital frontier of the future.  However, I do want to bring up Ethereum's fundamental design flaws because it will soon have a worthy competitor.  No, it's not another complicated smart contract blockchain, hatched out of the desire to make its founders rich (there are _many_ in this category).  It is, in fact, the sleeping giant, the original: BITCOIN.  But how, you ask?  How is it possible that it can now serve as a solid foundation for smart contracts when it couldn't before?  Did Vitalik miss something?  No, he didn't, because the Bitcoin that he left is still stuck exactly as he left it back in 2014.  We are, of course, talking about Bitcoin Cash, the offspring of legacy Bitcoin that decided that hard forks were an upgrade mechanism, and that it would be OK to grow the network and add new, or re-enable old, features on it.

It is exactly the latter that will usher in the new age of smart contract development.  On May 16th 2018, BCH will hard fork as part of its scheduled six-month upgrade cycle, and one of the most exciting changes in the upgrade is the re-enabling of some of the old OP_CODES, which were disabled by core developers out of fear that they might be insecure or open up attack vectors on the network, back when the codebase was immature and the network very small.  For the computer scientists reading this, the interesting instructions are OP_CAT and OP_XOR (concatenate and bitwise XOR).  I won't go into why these are very important, but if you are interested then you can read about how Bitcoin is effectively a Turing machine.  This means that arbitrary calculations can be done on Bitcoin, using a method that separates the DATA and CODE from the proof of execution.  For the technically inclined, the analogy would be that Bitcoin blockchain transactions effectively become a micro-instruction table, a set of CPU registers, and a program stack pointer.  All the data, code, and storage live elsewhere.  This makes the Bitcoin model much simpler than the Ethereum model (store and compute everything on the blockchain nodes).  It's such an elegant solution that one wonders if it was always meant to be this way, designed by the original Satoshi, but somewhere along the way it just got derailed.  And why not?  Everything else about the Bitcoin design is fairly simple and straightforward.  Coming up with it required several leaps of intuition, but when you read it the solution is surprisingly obvious.  (One could reflect on how this "difficult to come up with, but simple to verify" property is the signature paradigm of the whole Proof-of-Work and hashing model itself.  Indeed, it seems Bitcoin is self-referential, or at least self-consistent.)  Recall that the original whitepaper was only 9 pages long.
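To give a feel for what these two instructions do, here is a toy stack evaluator in Python (Bitcoin Script is a stack language, but this is only an illustration of the two operations, not a real Script interpreter with its size limits and validation rules):

```python
# Toy illustration of OP_CAT and OP_XOR on a stack of byte strings.
# Not a real Bitcoin Script interpreter.

def op_cat(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)   # concatenate the top two items

def op_xor(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(bytes(x ^ y for x, y in zip(a, b)))   # bitwise XOR (equal lengths assumed)

stack = [b"\xde\xad", b"\xbe\xef"]
op_cat(stack)
print(stack[-1].hex())   # 'deadbeef'

stack = [b"\xf0\xf0", b"\x0f\x0f"]
op_xor(stack)
print(stack[-1].hex())   # 'ffff'
```

Simple as they look, primitives like these are the kind of building blocks that let more elaborate constructions be assembled out of plain Script.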

So where does this leave Ethereum post May 2018?  It is anyone's guess.  Ethereum still has a head start of several years on Bitcoin Cash.  It has several custom languages that developers can use for writing smart contracts.  Bitcoin still only has its original SCRIPT, a language akin to programming on an HP calculator (it is similar to FORTH).  But now that the missing OP_CODES are being brought back, higher-level languages can be built that compile down to low-level Bitcoin SCRIPT.  I foresee a rich ecosystem of smart contracts and developer languages being built on top of Bitcoin in the years to come.  And of course, when I say 'Bitcoin' I mean Bitcoin Cash, the only Bitcoin that can be upgraded on-chain.

 

/EOL

In my next article I will talk about Ripple and its potential future, given its current strategy as an interbank currency payment system.

 

 

The WEB just ate my computer!

Those of you who are old enough will remember that once upon a time computers were not connected to the internet by default.  When things like Token Ring LANs and Novell NetWare were tools that only companies could afford, when email didn't exist and you wanted to write somebody a memo, you fired up WordPerfect, wrote it up, printed it out on your dot matrix printer, tore off the perforated edges, and handed it to your secretary.

Remember back before Steve Jobs (God bless his soul), when computers were not connected by default?  Those were the days when applications were written for the computer they ran on, and software portability was an arcane and complex art.

Remember how back then IBM had the foresight to realize that portability needed to be addressed, and thus had big dreams about Java being the interface layer that would make the dream of "write once, run everywhere" come true?

They even developed 'net terminals' that ran only a JVM, so that they could run any Java app.  Many hardware manufacturers jumped on this thin-client bandwagon, as they saw a chance to sell a new hardware platform that could compete with the dominance of Intel, but they were held back by the limitations of the JVM, the lack of applications, and network bandwidth.  The idea was to put all the applications on servers, and then download them over the LAN to your Net Station to run.  Apps and data were all to be stored back on the company's servers.  Clients were dead.  Servers were to run everything.

Good ol’ Big Blue with another technology too early for its time

 

Whatever happened to that?

 

Simply put, IBM, the research firm it had become, had pulled another whimsical technological gizmo out of its hat, one that was way too early for its time.  The world had barely had time to get used to the advent of the World Wide Web, and there was IBM already trying to remove local storage from the computer.  It was bound to fail.  This was done at a time when the world had not yet become accustomed to software subscription models, nor to SaaS- or IaaS-based cloud computing.  The world was still based around monolithic native applications and the segmentation of software by hardware and operating system camps; open source software was still relatively new, and the best software was still proprietary.

If we look back now, the dream of write once, run everywhere has been realized.  Not by Java, or IBM, but by HTML and Javascript.  The open internet came by and ate their lunch.  HTML5, JS, PHP, Ruby, Python, and front-end frameworks filled the gap and made GUIs simple to write.  Javascript is now vastly more popular than Java.  Why?  Was it a failure of object-oriented programming?  A failure of compiled languages to offer good free GUI toolkits?  Was it a lack of supporting services such as cloud storage and cloud computing that made storing data remotely so unwieldy?  Whatever the reason, it was a brief glimpse of the potential that would start the advent of cloud-based services.  The big difference being that it would not be controlled by established technology companies like IBM or Novell or Oracle, but by internet companies.  Nowadays, machines need local storage less and less, with services like Google Drive, Dropbox, and Microsoft OneDrive.  The conversion over to 'thin client' Net Stations is complete, but in a decentralized way, thanks to tech like Linux and FOSS, and companies like Amazon, Dropbox, Google, Mozilla and Microsoft.

Call me old-fashioned, but I'm still a bit reluctant to have a local computer that doesn't have any local storage; some things you just need to keep locally, such as secure applications or data that you want to stay encrypted under your exclusive control.  But more and more, I'm finding that the data produced through the course of normal daily work, office documents, PDFs, contracts, code, memos, notes, emails and the like, need not be local.  These seem to be best stored in the cloud, so that multiple computers at home and abroad, and in a pinch my mobile or tablet, can access them.  More and more I'm finding that my music collection is also in this category.  Even family pictures are now stored in the cloud.  How much of our data do we actually control and own?

Did you also notice how much more time you spend in your browser vs. standalone applications in recent years?  Even Office apps are usable on the web with features that match those of standalone apps.  I think we can safely say the age of buying software in a box is over, and everything now is totally connected to the internet.  Whether we like it or not, all our data is belong to the internet.

This means data privacy is going to be more and more of a heated topic in the years to come.

The internet is now the TV/Radio/Video Collection/Photo Album/Bookshelf for the generations to come.  I welcome our new robot masters.

/EOL

 

 

Time to start a new Chapter…

… in the Book of Bitcoin.  There comes a time in every story when the characters develop, evolve, die, or are reborn.  This is one of those times in the grand adventure which we all started off on 8 years ago.  The Bitcoin Cash fork, having successfully executed, is now free from the oppressive roadmap driven by Blockstream and Core, which would have seen most Bitcoin transactions turned into mere channel opening/closing tasks for untested 2nd-layer networks better suited to micropayments.

So in that vein, I would like to show everyone that micropayment payment channel applications don't need to wait for the Lightning Network!  Yours.org is a homegrown, built-by-Bitcoiners, for-Bitcoiners paywall content platform that started off writing its own version of payment channels because LN wasn't (still isn't!) ready, then moved to Litecoin because Bitcoin transactions got too expensive, and now, thanks to the low fees on Bitcoin Cash, has moved to BCC.  As such, I would like to support them by moving my blog onto Yours.org as a sort of trial run.  Therefore, this month's blog will be published there.  Yes, you will have to fund a BCC/BCH wallet in order to read the whole article.  Yes, it doesn't seem to support embedding media such as pictures inline yet.  But I hope that Ryan X. and team will be able to slowly improve their platform to the level of Medium or WordPress sometime, and it isn't terribly hard to get your hands on $5 worth of BCC, and funding the Yours.org wallet is dead simple.  Send BCC and it is instantly credited (no need to wait for confirmations).

So without further ado, I leave you to enjoy reading about (and using!) Bitcoin Cash:

Bitcoin is Dead, Long Live Bitcoin! (cash)

If we are going to grow this community, we are going to have to start supporting its economy and eating our own dogfood, so to speak.

 

 

Fork Wars Episode I – The Phantom Futures

If you haven't been living under a rock for the last couple of weeks, then you know that the whole block size debate is boiling to a close.  Segwit2x arose as a compromise solution, led by ex-Core developer Jeff Garzik, brokered in an agreement in New York after the Consensus 2017 conference, with over 90% of the miners and ecosystem in agreement.  Since then BIP91 has locked in, which is an effective lowering of the much-exalted soft fork consensus threshold of 95%, a move which half of the inner circle of Core devs felt was deficient.  Regardless of how this was seen on the surface as a 'lowering of the standards', it was done anyway, and conveniently so, as segwit was not looking like it would ever pass the 95% bar anyhow (ahem, "I TOLD YOU SO" to all the neckbeards out there, and u/jonny1000!).  Now that segwit2x/segwit is going to be 'forced' by way of 90% of the miners starting to reject non-segwit-signaling blocks, segwit's threshold of 95% of the last 1000 blocks will be met sometime in mid-August.  (Yes, you read that correctly: BIP91 was an 80% majority agreement to come to a 95% agreement by forcing the other 20% to agree with you or be orphaned.  By force!)

This has set the stage for the drama to follow.  For one, there is already a growing group of big blockers who have mobilized to fork off from the current Segwit2x/Segwit Bitcoin (let's call it SegwitCoin) and who have identified themselves as Bitcoin Cash.  They are a fork of Bitcoin Core 0.14.x with the Segwit and RBF components disabled, and an 8MB hard fork coded to engage at Aug 1, 12:20 UTC.  This guarantees that there will be a 'big block' Bitcoin regardless of what happens with SegwitCoin and the expected in-fighting between the new 'stewards' of the main chain (Jeff Garzik and his btc1 team) and the old guard which has been deposed (Bitcoin Core, Blockstream). Continue reading

Yoga Splits, Banana Splits, why not Bitcoin Splits?

The world is filled with great splits.  Sometimes a split is just the best way of getting the best of both worlds.  It lets bygones be bygones and leaves freedom of choice to the market, which is in the ideal position to determine the best way forward.  But in the Bitcoin space, talk of a split is tantamount to talking about white privilege, racism, or dog meat as a delicacy.  Make no mistake, this is a carefully manicured and cultivated reaction, the culmination of 4 years of careful opinion "shaping" by interested parties, which I have written about several times in the past, but this is not a post to rehash those arguments.

Splits are Tasty. Why not in Bitcoin?


This is an attempt to examine the practical realities of a split in Bitcoin, WITHOUT any of the ethical/emotional/political/ideological baggage that so many have deliberately or inadvertently attached to the debate.

Continue reading

Keep the Change! — Replay protection is a Red Herring

Much of the current Bitcoin splitting debate has revolved around the notion that a hard fork split of the network is dangerous.  So dangerous, in fact, that Core developers have constantly stuck to the argument that the community should trust in their (exclusive) counsel in order to ensure that we don't engage in anything that may be unsafe for ourselves.  Trust them, they know what is good for us.  When libertarians and skeptics around the world hear that, they are immediately put on alert.

Most recently, an exchange between ex-Bitcoin lead maintainer Gavin Andresen and Core contributor Matt Corallo was especially interesting.  Besides the run-of-the-mill talking past each other, where Matt seems to ignore points that Gavin clearly addressed (regarding the n² sighash issue, solved by capping transaction sizes at 1MB), the core theme (pun intended) repeated again by Matt was that hard forks have no community support (by his own judgement), which is clearly shown by the fact that nobody seems to be giving much attention to the HF proposals in his (exclusively Core-dev-curated) proposal list.  No big surprise here, the standard echo-chamber reality-distortion-field stuff.  What was interesting was that he once again mentioned the need, nay, the necessity, of 'replay protection' in ANY hard fork proposal.  This is a very important point in the Core dev platform, as it serves a dual purpose: one which on the surface is ostensibly for the public good, while the other may be much more shadowy.  Let's examine what replay protection is, and why we really don't need it.

Continue reading

How will Bitcoin Miners be paid in the future?

The question of how miners will be paid in the long run, after mining subsidy rewards disappear, is a much-debated topic in Bitcoin.  For those who don't know, mining rewards are set to halve every 4 years until they finally reach zero sometime in the year 2140.  How the Bitcoin mining ecosystem will remain profitable (and thus healthy) is up in the air.  Miners are important because they convert real-world energy into network security, guarding the Bitcoin network against attacks from malicious actors.  Therefore, the more decentralized and diverse the mining ecosystem is, the better for Bitcoin.
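As a rough sketch of that schedule (assuming the well-known parameters of a 50 BTC starting subsidy and a halving every 210,000 blocks, which at roughly 10-minute blocks works out to about every four years), the block reward decays geometrically until integer rounding takes it to zero around 2140:

```python
# Rough sketch of the block subsidy halving schedule (approximation for illustration:
# 50 BTC initial subsidy, halving every 210,000 blocks, ~4 years per halving).

SATOSHIS_PER_BTC = 100_000_000

def subsidy_btc(halvings):
    # Integer satoshi arithmetic, so the right-shift eventually rounds the subsidy to zero.
    return (50 * SATOSHIS_PER_BTC >> halvings) / SATOSHIS_PER_BTC

for h in range(0, 36, 4):
    approx_year = 2009 + 4 * h   # each halving is roughly four years apart
    print(f"~{approx_year}: {subsidy_btc(h):.8f} BTC per block")

# After the 33rd halving (around the year 2140) the subsidy rounds down to zero satoshis,
# leaving transaction fees as the miners' only income.
```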

So what will happen when mining rewards disappear?  Well, some miners feel that transaction fees should rise to fill the shortfall.  As Ang Li puts it, in an excerpt from a recent article at Bitcoin.com:

The incentives that Satoshi Nakamoto designed in the Bitcoin whitepaper are not enough to sustain mining for long, Li feels, adding that as the block reward halves every four years, miners' income will continue to decline.  According to him, keeping the block size where it is now will not provide enough incentive and therefore has to be reconsidered.  Li also believes that only a larger mining transaction fee will maintain the balance.  "By increasing block size, and transaction numbers, the fees will gradually replace the block reward, providing enough incentive for the miners to defend the bitcoin hashrate. This is the fundamental way to achieve healthy development of the whole ecosystem."

Continue reading

Ouija Board Consensus – Decentralization Myths: Part 4

This is the fourth part of a multi-part series on the myths of decentralization. You can read the previous installments here:

Part 1 – Decentralization Redefined
Part 2 – Decentralization Myth
Part 3 – Decentralization comes with People

I've written quite a lot about the misconceptions and deliberate misdirection that some proponents in the Bitcoin community choose to spread around in order to shape the public perception of what makes Bitcoin valuable, and as a result change the fundamental value proposition of Bitcoin.  As you all should know by now, "Value does not exist outside the consciousness of Man" – Carl Menger.  So changing people's consciousness, by way of affecting their ideas, affects the value of Bitcoin.  Thus it is important that we re-evaluate our notions of why Bitcoin is valuable every so often, with a huge dose of skepticism.

In today's article, I'd like to review what the fundamental security model of Bitcoin is, as intended by its mysterious creator, Satoshi Nakamoto (at least in my interpretation of it), why that model is the best we can possibly hope for, and why any further attempt at adding extra layers of 'security' on top of this model just ends up making it less secure by making it more centralized.

Continue reading

Coming to Consensus: Governance is just as important as Blocksize

One of the heated debates that has raged over the years in the Bitcoin space is whether the idea of a developer team led by a benevolent dictator is the appropriate model to employ for a network worth more than 15 billion dollars in market capitalization.  Many have cited examples of how Satoshi, and then Gavin, were themselves benevolent dictators, and also how some well-known projects, such as Linux, have been successfully managed under the watchful eye of a wise and benevolent (though sometimes abrasive) dictator.  It is also true that most civilizations evolve from dictatorships, starting with tribal chiefs, then feudal warrior kings, aristocratic monarchs, and emperors.  The transition to a democracy is not always a smooth one, and is marred by slippages into oligarchy, totalitarian fascism, and misguided experiments in socialism.  It is important, then, to keep in mind that while most organized groups start as dictatorships, they eventually evolve into a system that is more inclusive of the common people's will.

Oh, Glorious Leader, shepherd for the weak, show us the way!

Firstly, let's get the obvious out of the way.  Dictatorships are vastly more efficient than a republic or a democracy.  This is due to the fact that there are few bounds on the leader's power, and his followers will carry out his instructions in the most expedient fashion.  Contrast this to a democracy, where leaders are continually second-guessed by their opposition and by political opponents who are all vying for their own chance to run the show.  In a dictatorship, the only way a change of regime is possible is through open and widespread revolution.  This is why despotic Chinese emperors of old made it illegal to congregate in groups of three or more, restricted what could be discussed in public, and on occasion simply committed mass murder of academics and scholars for fear that they might spread seeds of dissent and dissatisfaction among the peasants with their pesky logic, philosophy, and ideals of morality. Continue reading