Team Slow and Steady

I’ve been finding myself dissatisfied with how the various Bitcoin tribes of the moment self-identify: the maximalists, the plebs, the ossifiers, the multicoiners, the laser eyes, the scalers, the inscriptors, the filterers, the covenants crew, etc — they all feel like they don’t quite match how I think about Bitcoin. So, per xkcd 927, I figure that means it’s time to make up my own tribal identity; hence “Team Slow and Steady”. Our logo, naturally, is a giant tortoise.

There are a few obvious ways to think about “slow and steady” in the context of Bitcoin:

There are a variety of reasons to like those attitudes in the context of Bitcoin: as something trying to be a superior store of value, predictability and stability are very desirable; as a consensus network, avoiding breakages and incompatibilities becomes extremely important; as a global network, avoiding the cognitive load on users imposed by frequent changes is helpful; as an open source, peer to peer network, taking the time for everyone to understand and accept changes before expecting them to be adopted seems wise.

Likewise, there are drawbacks to this approach: it makes it hard to be “first to market” for new features, it means you need to exercise a lot of patience if you want to be involved, and maybe it means you don’t get the fun of being invited to all the cool parties designed to hype everyone up about the latest fads. That seems like a fine tradeoff to me.

Maybe it’s worth comparing cases where the “slow and steady” philosophy comes to different conclusions than other Bitcoin tribes.

Probably the most direct comparison can be made with the “ossification” camp. After all, what could be more slow and steady than not moving at all?

“Ossify” means “to turn to bone”, or more metaphorically, to become inflexible and unable to be changed. I’m fairly sure its etymology here is by analogy to “protocol ossification” in the Internet, where routers and ISPs that violate the end-to-end principle have caused an inability to effectively deploy new protocols, because not only do the endpoints need to change, but all the routers and ISPs in-between do too. Bitcoin does suffer from that class of problem with regard to addresses: since many people use custodians instead of interacting with the blockchain via their own software, address upgrades (such as the upgrade to segwit p2wpkh and p2wsh bech32 addresses, or taproot p2tr bech32m addresses) become difficult as they require third parties to update their software.

Ethereum and altcoin advocates seem to have been big proponents of Bitcoin’s ossification, often treating it as a given that Bitcoin had already ossified (eg “Bitcoin-style ossification”, March 2020, “Bitcoin’s ossification development philosophy”, June 2021), which might be compared to a lack of contemporaneous uptake amongst Bitcoiners (eg “Ossification is stupid and suicide”, April 2021), though perhaps that has changed, with a Bitcoin Magazine article declaring it inevitable in May 2022.

Ossification advocacy generally involves a motte-and-bailey approach, sometimes arguing that it simply means change is slow but not impossible, but often implying that in reality any change is either infeasible or impossible. For anyone who wants to make the former claim, I think “slow and steady” is less misleading. As far as the latter claim goes, claiming that changes to Bitcoin will be impossible or infeasible simply seems wrong to me: we’ve seen that it’s possible historically, we know that changes will be needed in the future, and we have no way of committing future Bitcoiners to not making changes if they decide they’re desirable.

The “laser ray to $100k” meme started in Feb 2021 when the BTC price was about $50k, with the idea that hitting $100k was next. Samson Mow’s take on the same day seems representative: “It took just 52 days for #Bitcoin to go from $25k to $50k. $100k this year is conservative.” As it turned out, that wasn’t conservative, and while Bitcoin’s price rose another 40% or so, that was only after it had crashed to $31k, and it then proceeded to crash again, to an eventual low of around $16k. While it has since rebounded and hit new highs, $100k remains out of reach, at least in USD terms. That hasn’t stopped people from eagerly predicting imminent new highs: to pick on Samson some more:

  • “Soon we’ll never see #Bitcoin below 100k.” – 2021-02-18
  • “Around 6 weeks ago was the last time we saw #Bitcoin in the $30k range. We’ll likely never see those levels again. It’s same for the $50k range. There’ll be a point during the journey to $100k where we leave and never return. Stack here while you can.” – 2021-03-23
  • “Yep, #Bitcoin is going to $100k or higher this year for certain.” – 2021-04-30
  • “#Bitcoin is still going to $100k this year. Keep calm and HODL on.” – 2021-05-13
  • “$100k #Bitcoin is still in play. Easily.” – 2021-05-31
  • “66% of the way to 100k. Just 34% to go. #Bitcoin” – 2021-11-09
  • “#Bitcoin $100k by June.” – 2022-02-05
  • “$100k by end of June.” – 2022-04-15
  • “Updating my prediction to be $100k by end of 2022.” – 2022-06-16
  • “$100k is more likely than $10k. #Bitcoin” – 2023-04-12
  • ““We might go to $0.03M” is the new “we might go to $0.01M.” #Bitcoin” – 2024-01-20
  • “Now $0.1M before the halving. #Bitcoin” – 2024-03-06
  • “You know we’re still going to $0.1M before the halving right?” – 2024-03-20
  • “$0.1M before halving.” – 2024-04-15

Now, if you don’t mind being wrong on the internet, there’s not necessarily any problem to solve here. But at least to me it seems better to remember that Bitcoin’s price appreciation is fundamentally pretty slow: the market isn’t efficient enough to just recognise Hal’s insight from 2009 and immediately set a price floor of $10M per coin. Instead, prices are very unpredictable in the short term, and NGU is something that only happens slowly and only looks remotely steady when you zoom out to look at the very long term.

Despite many differences, the covenant crew, the filterers and the scalers all seem to have a common attitude to their preferred features: that they should have been implemented yesterday, and since that didn’t happen, it had better be everyone’s number one priority to get it done now. For the scalers, that’s exemplified by being unwilling to wait to see how the capacity increases from the segregated witness proposal worked, instead insisting that an additional doubling be done (roughly) concurrently. For the covenant crew, the easy example of rushing things is Jeremy’s aborted UASF client in April 2022 (“Within a week from today, you’ll find software builds for a CTV Bitcoin Client”), or its Dec 2023/Jan 2024 revival. The filterers, meanwhile, aren’t proposing consensus changes, but are getting pretty impatient about their issue. (Meanwhile the PR that would implement their feature waited around two months to be adjusted to not change defaults, and still doesn’t have tests ensuring it does what it intends.) An impatient attitude just seems like a bad fit for Bitcoin to me: this is a project that’s trying to outlive everyone that’s currently working on it — even counting the life-extension tribe; spending extra time and effort now seems a much better approach than trying to rush through a change, then getting upset when that doesn’t work the way you’d hoped. (There are plenty of other folks with pet projects who get similarly impatient; sometimes I’m probably even one of them; I’m just picking on these three because they came to mind as significant groups at present, and I’m sympathetic to their projects.)

One thing “slow and steady” doesn’t do is tell you at what point “slow” becomes “too slow”. I think in practice that isn’t really a hard call: the stopping points tend to be pretty basic, like “does your code have tests?”, or “can you demo this doing anything interesting at all?”, rather than hard or esoteric like “can you prove/formally verify this?”.

A Hare was making
fun of the Tortoise one day
for being so slow.

Putting the B in BTC

That’s B for billions (of people). Okay, lame title is lame, whatever.

I wrote previously about why Bitcoin’s worth caring about, but if any of that was right, then it naturally leads to the idea that those benefits should be available to many people, not just a few. But what does that actually look like?

The fundamental challenge with blockchain technology is that every transaction is validated by every user — and as you get more users, you get more transactions, which leads to every (validating) user having to do more work just to keep up, which eventually leads to a crowding out problem: you hit a point where the amount of work you have to do just to keep up with existing usage is more work than prospective new users are willing to do, and adoption stagnates. There are three approaches to resolving that conundrum:

  1. Just make the tech super efficient.
  2. Don’t have everyone validate.
  3. Have most people transact off the blockchain.

Let’s go through those.

Super efficient tech

Technology improves a lot, so maybe we can punt the problem and just have all the necessary software/hardware improvements happen fast enough that adoption never needs to hit a bottleneck: who cares how much work needs to be done if it’s just automatically being dealt with in the background and you never notice it? This is the sort of thing utreexo and zerosync might improve.

I think the fundamental limit these ideas run into is that they can ultimately only reduce the costs related to validating the authorisation of state changes, without being able to compress the state changes themselves — even with zerosync compressing historical state changes, if you want to run protocols like lightning or silent payments, you need to be able to monitor ongoing state changes in a timely fashion, in order to catch attempts to cheat you or just to notice someone making an anonymous donation. Having a billion people transacting directly with each other is already something like 6GB of data a month, even if we assume tech improvements and collapse all the signatures to nothing, and constrain everyone to only participating in a single coinjoin transaction per year. (Maths: 32B input, 32B output, 8B new value; everyone who pays you or who you pay participates in the coinjoin; 1B people * (32+32+8) bytes/tx * 1/12 tx/person/month = 6GB/month).

That’s not completely infeasible by any means, even on mobile or satellite data (for comparison, Bitcoin can currently require 2MB/block * 144 blocks/day * 30 days/month for a total of 8GB/month), but I think it’s still costly enough that many people will decide validating the chain is more effort than it’s worth, particularly if it only gets them one transaction a year on average (so for some, substantially less than that), and choose a different option.
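As a sanity check, here’s that arithmetic as a quick Python sketch (same simplifying assumptions as above — 72 bytes per participant per coinjoin, one coinjoin per person per year, and ~2MB Bitcoin blocks; the constants are just the figures from the text, not anything more rigorous):

```python
# Back-of-the-envelope data volumes, using the assumptions in the text above.

USERS = 1_000_000_000           # a billion people transacting directly
BYTES_PER_TX = 32 + 32 + 8      # one input, one output, 8 bytes of value; signatures assumed away
TX_PER_USER_PER_MONTH = 1 / 12  # one coinjoin per person per year

coinjoin_gb_per_month = USERS * BYTES_PER_TX * TX_PER_USER_PER_MONTH / 1e9
print(f"coinjoin data: {coinjoin_gb_per_month:.1f} GB/month")   # ~6 GB/month

# For comparison, Bitcoin today at roughly 2MB per block, 144 blocks a day:
bitcoin_gb_per_month = 2e6 * 144 * 30 / 1e9
print(f"bitcoin today: {bitcoin_gb_per_month:.1f} GB/month")    # ~8.6 GB/month (the ~8GB figure above)
```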

Don’t validate

This is probably the riskiest option: if few people are validating, then those that do have the potential to collude together to change the rules governing the money. Whether that results in enlightened rule by technocratic elites, or corrupt self-dealing as they optimise the protocol to benefit their own balance sheets, or the whole thing getting shut down by the SEC as an unregistered security, in practice it’s lost the decentralisation that many of Bitcoin’s potential benefits rely on.

On the other hand, it is probably the easiest option: having the node software provide an API, then just letting multiple users access that API is a pretty normal way to build software, after all.

In this context, the advantage of doing things this way is that you can scale up the cost of validation by orders of magnitude: if you only need dozens, hundreds or even thousands of validators, it’s likely fine if each of those validators has to dedicate a server rack or more to keeping their systems operational.

This can allow increasing the volume or complexity (or both) of the blockchain. For example:

  • Bitcoin, pre-segwit: 1MB per 10 minutes
  • Bitcoin, post-segwit: ~2MB per 10 minutes
  • Ethereum: ~6.4MB per 10 minutes (128kB/12s)
  • Chia: ~10MB per 10 minutes (917kB/52s)
  • Liquid: ~20MB per 10 minutes (~2MB/60s)
  • BCH: 32MB per 10 minutes
  • Algorand: 830MB per 10 minutes (5MB/3.7s)
  • Solana: 192GB per 10 minutes (128MB/400ms)

(For comparison, 32MB per 10 minutes was also the value Tadge used in 2015 when considering Lightning adoption for up to 280M users)
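The per-10-minute figures in that list are just unit conversions from a block size and block interval; here’s a small sketch to reproduce them (the size/interval pairs are the rough numbers quoted above, not authoritative protocol limits):

```python
# Convert (bytes per block, seconds per block) into data per 10 minutes,
# reproducing the rough figures in the list above.

chains = {
    "Bitcoin (post-segwit)": (2_000_000, 600),
    "Ethereum":              (128_000, 12),
    "Chia":                  (917_000, 52),
    "Liquid":                (2_000_000, 60),
    "Algorand":              (5_000_000, 3.7),
    "Solana":                (128_000_000, 0.4),
}

for name, (block_bytes, interval_s) in chains.items():
    per_10min_mb = block_bytes * (600 / interval_s) / 1e6
    print(f"{name:22s} {per_10min_mb:12,.1f} MB / 10 minutes")
```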

Doing more than an order of magnitude increase on Bitcoin’s limits in even a moderately decentralised manner seems to already invite significant technical challenges though (eg consider issues faced by Algorand and Solana, even though in practice neither are yet close to their protocol enforced limits).

Saturating a 100Mb/s link would result in about 6GB per 10 minutes (with no redundancy); combining that with the assumed technical improvements from the previous point might allow a billion people to each participate in 12 coinjoin transactions per day.
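Roughly, that’s the link capacity divided by the 72-byte-per-participant coinjoin footprint from earlier; a sketch (the ~80% usable-bandwidth factor is my assumption to account for overhead, and is how 7.5GB of raw capacity comes down to “about 6GB”):

```python
# From a saturated 100Mb/s link to "coinjoin transactions per person per day".

LINK_BITS_PER_SEC = 100e6   # 100Mb/s
USABLE_FRACTION = 0.8       # assume ~20% lost to protocol overhead (my assumption)
BYTES_PER_TX = 72           # per coinjoin participant, as above
USERS = 1_000_000_000

bytes_per_10min = LINK_BITS_PER_SEC / 8 * 600 * USABLE_FRACTION
print(f"{bytes_per_10min / 1e9:.1f} GB per 10 minutes")          # ~6.0 GB

tx_per_day = bytes_per_10min / BYTES_PER_TX * 144                # 144 ten-minute intervals a day
print(f"{tx_per_day / USERS:.0f} coinjoins per person per day")  # ~12
```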

Which is great! So long as you don’t mind having created a CBDC where the “central bankers” are the ones able to run validators on dedicated links, and everyone else is just talking to them over an API. If the central bankers here don’t like you spending or receiving money, they can apply KYC rules on API access and lock you out (or can be instructed to by their respective governments); if they decide to change the rules and inflate the supply or steal your coins, then you have to accept that, because you can’t just keep the old system going as you still can’t afford the equipment to run a validating node, let alone a mining one.

Get off the blockchain

Which leaves the final approach: getting most people’s transactions off the Bitcoin blockchain. After all, it’s easy to validate the chain if (almost) nobody is using it. But the question then becomes how do you make Bitcoin usable without using the blockchain?

The founders of Blockstream already came up with the perfect answer to this back in 2014: sidechains. The promise there was that you could just transfer Bitcoin value to be controlled by another blockchain, with a cryptographic guarantee that so long as you obeyed the rules of that other software, then whoever ended up controlling the value on the other software could unilaterally transfer the value back to the Bitcoin blockchain, without needing anyone else’s permission, or anyone else being able to stop them. That would be ideal, of course, but unfortunately it still hasn’t panned out as hoped, and until it does, it seems that there’s likely to remain a need for some form of trusted custodian when moving control of an asset from one blockchain to another.

Lightning on its own provides a different, limited, answer to that question: it can allow you to move your payments off the blockchain, but it still assumes individual users are operating on the blockchain for settlement — in order to rebalance channels, take profits, or deal with attempted fraud. And while we can keep improving that with things like channel factories and off-chain rebalancing, I’m doubtful that that alone can even get settlement down to the “1 tx/person/year” level mentioned above, where technology improvements would potentially let us get the rest of the way.

Another way to answer is to just say “Bitcoin is for saving, not spending”: if people only use Bitcoin for saving, and not for payments, then maybe it’s feasible for people to only deposit or withdraw once every few years, at a similar rate to buying a home. That likely throws away all the possible benefits of Bitcoin other than “inflation resistance”. And while that would still be worthwhile, I’d rather be a little more ambitious.

Of course, a simple and feasible answer is just “use a custodian” — give someone your bitcoin, get an IOU in return, and use that IOU through some other highly scalable API. After all, in this scenario, you’re already implicitly trusting the custodian to redeem the IOU at par sometime later, so it’s not really losing anything to also trust the operator of the scalable API to not screw you over either.

That can take a lot of forms, eg:

  • funds held via a custodial wallet
  • funds held on an exchange
  • funds held in fedimint or similar
  • funds wrapped in an altcoin token (WBTC on Ethereum, RBTC on Rootstock, L-BTC on Liquid, SoBTC on Solana, etc)

All of those have similar risks, in essence: custodial risks in that there is some privileged entity who may be able to deliberately misappropriate Bitcoin funds that are held in trust and aren’t rightfully theirs, and operational risks, in that you (or your custodian) might lose your funds by pressing the wrong button or by an attacker finding and exploiting a bug in the system (whether that be a web1 backend or a web3 smart contract).

For example RBTC suffered a near-catastrophic operational risk last year, having locked the Bitcoin backing RBTC into a multisig contract that could not be spent from via a standard transaction, resulting in 3 months of downtime for their peg-out service. Or consider the SoBTC bridge, which suffered a “hack” a couple of days after its custodian Alameda collapsed, which, if accurate, was a catastrophic operational error, or, if false, was a catastrophic rug pull. Exchange/wallet hacks and rugpulls likewise have a long and tawdry history.

Whether custodial behaviour is sensible depends first on how much those risks can be reduced. For example, Liquid and fedimint both rely on federations in the hope that it’s less likely a majority of the participants will be willing or able to coordinate a theft. Proof of liabilities and reserves (eg BitMex, cashu, WBTC) is another approach, which can at least allow you as the holder of an IOU to verify that the custodian isn’t already running fractional reserve or operating a ponzi scheme (more IOUs outstanding than BTC available to redeem), even if it doesn’t prevent an eventual rug pull. That probably has some value, since a ponzi scheme is easier to excuse: eg “I’m only stealing a little to make ends meet; I’ll pay it back as soon as things turn around” or “we just don’t have good accounting so may have made a few mistakes, but it was all in good faith”.
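To give a flavour of how the liabilities side of that can work, here’s a minimal Merkle-sum-tree sketch in the general style of such schemes: each leaf commits to a customer’s balance, each internal node commits to the sum of its children, and the custodian publishes the root hash and total so that each customer can check their balance is included and anyone can compare the total against provable on-chain reserves. This is an illustrative toy, not the actual construction used by any of the systems named above:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(customer_id: str, sats: int) -> tuple[bytes, int]:
    # Each leaf commits to a single customer's balance.
    return h(b"leaf:" + customer_id.encode() + sats.to_bytes(8, "big")), sats

def parent(left: tuple[bytes, int], right: tuple[bytes, int]) -> tuple[bytes, int]:
    # Each internal node commits to the sum of its children, so the root's sum
    # is the custodian's total declared liabilities.
    total = left[1] + right[1]
    return h(b"node:" + left[0] + right[0] + total.to_bytes(8, "big")), total

def build(leaves: list[tuple[bytes, int]]) -> tuple[bytes, int]:
    nodes = list(leaves)
    while len(nodes) > 1:
        nxt = [parent(nodes[i], nodes[i + 1]) for i in range(0, len(nodes) - 1, 2)]
        if len(nodes) % 2:
            nxt.append(nodes[-1])  # odd node out is promoted unchanged
        nodes = nxt
    return nodes[0]

balances = {"alice": 50_000, "bob": 1_200_000, "carol": 730_000}  # sats (made up)
root_hash, total_liabilities = build([leaf(c, v) for c, v in sorted(balances.items())])

# The custodian publishes (root_hash, total_liabilities); in a real scheme each customer
# would also get the sibling hashes needed to recompute the root from their own leaf,
# and anyone can compare total_liabilities against provable on-chain reserves.
print(root_hash.hex(), total_liabilities)
```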

If those risks aren’t dealt with, then significant amounts of Bitcoin being held via third-party custodians may prevent Bitcoin from succeeding at many of its goals:

  • inflation resistance may be weakened via supply inflation due to unrecognised fractional reserve holdings
  • theft resistance may be hard to ensure if trusted custodians frequently turn out to be thieves themselves
  • censorship resistance may be weakened if there are only a few custodians
  • self-enforcing contracts may be unaffordable if they are only able to be done on the main chain, and not via custodially held coins (eg, due to being held in wallets or exchanges that don’t support programmability, or on alt chains with incompatible programming models)

Decentralised custodians?

But what if, at least for the sake of argument, all those concerns are resolved somehow? For any of this to make sense, there additionally needs to be a way to move funds between custodians without hitting the primary blockchain, at least in the vast majority of cases — otherwise you lock people into having to have a common system with everyone they want to deal with, and you’re back to figuring out how to scale that system, with all the same constraints that applied to Bitcoin’s blockchain originally.

That, at least, seems like a mostly solvable problem: the lightning network already provides a way to chain Bitcoin payments between users without requiring the payer and payee to interact on chain, and extending that to function across multiple chains with a common underlying asset is likely quite feasible. If that approach is sufficient for solving normal payments, then maybe that leaves people who want to participate in self-enforcing contracts more complicated than an HTLC to find a single chain to run their contract on, but that at least seems likely to be pretty feasible.

What might that world look like? If you assume there are 2000 state updates by custodians per block, and that each custodian has 6 state updates a day on average (a coinjoin or channel factory approach would allow each transaction to involve many custodians, and even with relatively complicated scripts/multisig conditions, 2000 updates per block should be quite feasible), then that would support about 50,000 custodians in total. If a billion Bitcoin users each made use of three custodians, then the average custodian would service about 60,000 customers.
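That’s just arithmetic from the made-up assumptions in the paragraph above; a quick sketch:

```python
# Custodian counts under the made-up assumptions above.

UPDATES_PER_BLOCK = 2000
BLOCKS_PER_DAY = 144
UPDATES_PER_CUSTODIAN_PER_DAY = 6
USERS = 1_000_000_000
CUSTODIANS_PER_USER = 3

custodians = UPDATES_PER_BLOCK * BLOCKS_PER_DAY / UPDATES_PER_CUSTODIAN_PER_DAY
print(f"{custodians:,.0f} custodians supported")                    # 48,000 -- call it 50,000

customers_each = USERS * CUSTODIANS_PER_USER / custodians
print(f"{customers_each:,.0f} customers per custodian on average")  # 62,500 -- call it 60,000
```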

For a user to fully validate both global supply and that their own funds are fully backed, they’d need to validate both the main Bitcoin chain, at a similar cost to today, and (if their custodians make their custodied assets tradable via an altcoin like L-BTC, RBTC or WBTC) may want to verify those chains as well, or otherwise validate their custodians’ claims of liabilities and reserves. But it’s likely that a custodian would need to make many times as many transactions as an end user, so the cost to validate a custodian’s chain that on average might only carry one-third of the transactions of 60,000 users might come to as little as 1% of the effort required to validate the main chain. So the cumulative cost of validating three custodial chains can probably be arranged to be not too much larger than just validating the Bitcoin main chain. A chain like Liquid, with 10 times the transaction capacity (and validation cost) of Bitcoin, could perhaps support 20,000,000 users who only made one transaction a week on average, for example.

We could convert those numbers to monetary units as well. If you assume that this scenario means that those 50,000 custodians are the only entities acting directly on the main chain, and divide all 21M bitcoins amongst them, that means that, on average, custodians hold 420 BTC on behalf of their users. If you assume they each spend 1% of total assets in fees per year, then that’s about 4 BTC/block or 400sat/vb in fees. If you assume they fund those fees by collecting fees from people using their IOUs on other blockchains, then each of those blockchains would only need to collect 8000 sats per block, and charge something like 8 msat/vb if they have similar throughput to Bitcoin (compare to Liquid’s current fees of 100msat/vb despite potentially having ten times Bitcoin’s throughput).
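The fee figures fall out of the same sort of arithmetic (the ~1M vbytes per block is my assumption for a full Bitcoin block):

```python
# Fee arithmetic under the assumptions above.

TOTAL_BTC = 21_000_000
CUSTODIANS = 50_000
FEE_SPEND_PER_YEAR = 0.01        # each custodian spends 1% of assets on fees annually
BLOCKS_PER_YEAR = 144 * 365
VBYTES_PER_BLOCK = 1_000_000     # roughly a full Bitcoin block (my assumption)

print(f"{TOTAL_BTC / CUSTODIANS:.0f} BTC held per custodian on average")   # 420 BTC

fees_btc_per_block = TOTAL_BTC * FEE_SPEND_PER_YEAR / BLOCKS_PER_YEAR
print(f"{fees_btc_per_block:.1f} BTC/block in main chain fees")            # ~4 BTC/block
print(f"{fees_btc_per_block * 1e8 / VBYTES_PER_BLOCK:.0f} sat/vb")         # ~400 sat/vb

# If each custodian recoups its share from its own IOU chain of similar throughput:
sats_per_block_per_chain = fees_btc_per_block * 1e8 / CUSTODIANS
print(f"{sats_per_block_per_chain:.0f} sats/block per custodial chain")    # ~8000 sats
print(f"{sats_per_block_per_chain / VBYTES_PER_BLOCK * 1000:.0f} msat/vb") # ~8 msat/vb
```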

Another way of looking at those figures is that, if we were to evolve to roughly this sort of scenario, then whether you’re a custodian or not, transacting on the main chain would be pretty expensive: even if you’re willing to spend as much as 1% of your balance in transaction fees a year, and only make one transaction a month, you’ll need to have a balance of about 1 BTC for that to work out. Meanwhile alt chains should be expected to be four or five orders of magnitude cheaper (so if 1 BTC were worth a million dollars, you could reasonably hold $100 or $1000 worth of BTC on an alt chain, and spend something like 8c or 80c per on-chain transaction).
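And the “about 1 BTC” figure is just those fee rates applied to a dozen typical-sized transactions a year (the ~200 vbyte transaction size is my assumption):

```python
# Minimum economical balance on the main chain at ~400 sat/vb.

FEE_RATE_SAT_PER_VB = 400
TX_VBYTES = 200          # rough size of a simple transaction (my assumption)
TX_PER_YEAR = 12         # one transaction a month
MAX_FEE_FRACTION = 0.01  # willing to spend 1% of balance per year on fees

fees_per_year_sats = FEE_RATE_SAT_PER_VB * TX_VBYTES * TX_PER_YEAR
print(f"{fees_per_year_sats / MAX_FEE_FRACTION / 1e8:.2f} BTC")   # ~0.96 BTC, call it 1 BTC
```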

Getting there from here

A world where a billion people are regularly transacting using Bitcoin is quite different from today’s world, of course. An incremental way to look at the above might be to think of it along these lines:

  • Each 2MB per 10 minutes worth of transaction data supports perhaps 300,000 users making about 1 tx/day.
  • If per-capita usage is lower, then the number of users supported is correspondingly higher (eg, 9,000,000 users making 1 tx/month).
  • Increasing utility (individual users wanting to make more transactions) and increasing adoption (more users) both pressure that limit.
  • Counting Bitcoin, and RBTC on Rootstock, WBTC on Ethereum and L-BTC on Liquid, the above suggests custodially held Bitcoin can support up to about 5,000,000 users making about 1 tx/day.
  • Scaling up to 1 billion users at 1 tx/week would require about 50 clones of Liquid, but that probably creates substantial validation costs. If you limited your Liquid clone to 2MB/10 minutes (matching Bitcoin), you’d need 500 instead; if you reduced the Liquid clone to 200kB/10 minutes, you’d need 5000; or if you reduced it to 20kB/10 minutes (1% of Bitcoin’s size) as suggested above you’d get to 50,000 Liquid clones; a small sketch reproducing these numbers follows this list. (For comparison, Liquid currently seems to average about 11kB/10 minutes — taking blocks from May 2023 and subtracting 1.7kB/block of coinbase/block signature overhead)
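Here’s that clone arithmetic spelled out; the per-chain capacity is scaled from the “300,000 users at about 1 tx/day per 2MB/10 minutes” figure in the first bullet:

```python
# How many "Liquid clones" of a given size are needed for a billion users at 1 tx/week.

USERS = 1_000_000_000
TX_PER_USER_PER_DAY = 1 / 7
TX_PER_DAY_PER_2MB_CHAIN = 300_000   # "300,000 users making about 1 tx/day" from above

tx_per_day_needed = USERS * TX_PER_USER_PER_DAY

for label, mb_per_10min in [("Liquid-sized (20MB)", 20), ("Bitcoin-sized (2MB)", 2),
                            ("200kB", 0.2), ("20kB", 0.02)]:
    capacity = TX_PER_DAY_PER_2MB_CHAIN * (mb_per_10min / 2)
    print(f"{label:20s} {tx_per_day_needed / capacity:8,.0f} chains")
```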

At present, I don’t think any of those chains are interoperable over lightning, though at least Liquid potentially could be. I think you could do an atomic swap between any of them, however, allowing you to avoid going on the Bitcoin blockchain, at least, though I’m not sure how easy that is in practice with today’s wallets.

Given Ethereum usage isn’t mostly moving WBTC around, and Rootstock and Liquid have very low usage, that’s certainly more in the way of potential capacity than actual adoption. Even Bitcoin seems to have plenty of spare capacity for graffiti, so perhaps it’s within the ballpark to estimate current adoption at perhaps 2,000,000 (on-chain) users making 1 tx/month on average (ie, not counting transactions that are about storing data, moving funds between your own wallets, changing your multisig policy, consolidating utxos, increasing utxo fungibility etc). That compares to about 17,000 public lightning nodes, which would put public lightning node operators at a bit under 1% of on-chain users. Perhaps there are as many as 10 times that doing lightning over private channels or via a hosted/custodial service, so call it 150,000.

So this is very much “Whose Line is it Anyway?” rules — the numbers are made up, and the ratios don’t matter — but if it were close to reality, what would the path to a billion users look like?

  1. Get more adoption of lightning for payments (scale up from perhaps 150,000 users now to 1,500,000 users). Somewhere after a 10x increase, Lightning on the main chain will start causing fee pressure, rather than just responding to it.
  2. Make Liquid much cheaper (drop fee rates from 100msat/vb to 10msat/vb, so that it’s 10x cheaper due to being able to cope with 10x the volume, and an additional 10x cheaper because L-BTC is an IOU). Make it easy and cheap to onboard by selling small amounts of L-BTC over lightning; likewise make it cheap to exit by buying L-BTC over lightning. Perhaps encourage Liquid functionaries to set a soft limit of -blockmaxweight=800000 (20% of the current size, for twice Bitcoin’s throughput instead of ten times), to prevent lower fees from resulting in too much spam.
  3. Support Lightning channels that are settled on Liquid for L-BTC (ie, add support to lightning node software for the minor differences in transaction format, and also change the gossip protocol so that you can advertise public channels that aren’t backed by a utxo on the Bitcoin chain). Add support for submarine swaps and the like to go to or from lightning to L-BTC.
  4. With reduced fee pressure on Liquid, and higher fee pressure on Bitcoin, and software support, that will presumably result in some people moving their channels to be settled on Liquid; if Liquid has 100x or 200x cheaper fees than Bitcoin, that likely means trying Lightning out on Liquid with a small balance will make sense — eg a $200 L-BTC Lightning channel on Liquid with a random stranger, versus a channel worth $20,000 or more in BTC on Bitcoin proper, perhaps made with a counterparty you already have a history with and trust not to disappear on you.
  5. Increased usage of Liquid will likely reveal unacceptable performance issues; perhaps slow validation or slow IBD, perhaps problems with federation members staying in sync and signing off on new blocks, perhaps problems relaying txs at a sufficient rate to fill blocks, etc. Fix those as they arise.
  6. Build other cool things, on Bitcoin, Liquid and other chains: OP_VAULT vaults, alternative payment channels, market makers with other assets, etc. Before too long, either fee pressure on Bitcoin or Liquid will increase to uncomfortable levels, or the fact that the Liquid federation is acting as custodian for a lot of Bitcoin will become annoying.
  7. At that point, clone Liquid, with a new custodial federation, and a specific target market that’s started to adopt Liquid but isn’t 100% satisfied with it. Tweak the clone’s parameters or its scripting language or the federation membership/policies to be super attractive to that market, find some way of being even more trustworthy at keeping the custodied BTC secure and the chain operational, launch, and get a bunch of traffic from that market.
  8. From there, rinse and repeat. Make it easy for users to follow and validate multiple chains, particularly validating the BTC reserves and the on-chain liabilities, and easy to add a new chain to follow or drop an old chain once they no longer have a balance there.

“Liquid” here doesn’t have to mean Liquid; it could be anything that can support cross-chain payments, such as fedimint, or a bank website, or an Ethereum clone, or a spacechain, or anything else that’s interesting enough to attract a self-sustaining userbase and that you can make trustworthy enough that people will accept giving away custody of their BTC. I picked Liquid as the example, as it’s theoretically straightforward to get proof of liabilities (eg L-BTC liabilities) and reserves (in practice, I can’t find a link or an rpc for this part), it can support self-enforcing contracts, and it shares Bitcoin’s utxo model and scripting language, perhaps making for easier compatibility with Bitcoin itself.

Also perhaps worth noting that I’m painting with a pretty broad brush when I call things “custodial” — OP_VAULT, APO and CTV would likely be enough to allow the “efficient local lightning-factory based banks” I thought about a few years ago; while I’d include that in the class of “custodial” solutions since it only operates efficiently with a trusted custodian, the fact that there is an expensive way of preventing the custodian from cheating might mean you’d class it differently.

(Not worth noting because it should be obvious: all these numbers are made up, and it’s lucky if they’re even in the right ballpark. Don’t try relying on any of them, they’re at most suggestive of the general way in which things might unfold, rather than predictive or reliable)

Fin

Anyway, to move back to generalities, I guess the key ideas are really:

  • There’s a bunch of room to grow non-custodial Lightning and on-chain activity on Bitcoin now — for Lightning, something like 10x should be plausible, but likely also 2x growth (or more) for everything else, perhaps more if there are also technology/efficiency improvements deployed. So PTLCs, eltoo, channel factories, OP_VAULT, silent payments, payjoins, etc would all fit in here.
  • But the headroom there isn’t unlimited — expect it to show up as fee pressure and backlogs and less ability to quickly resolve transaction storms. And that will in turn make it hard and expensive for people with small stacks to continue to do self-custody on the main chain. At that point, acquiring new high value users means pricing out existing low value users.
  • Moving those users onto cheaper chains that can only deal in BTC IOUs kind of sucks, but it’s better than the alternatives: either having them use something worse still, or nobody being able to verify that the big players are still following the rules.

Perhaps one way to think of this is as the gentrification of the Bitcoin blockchain, with the plebs and bohemians forced to move to new neighbourhoods and create new and thriving art scenes there? If so, a key difference from real estate gentrification is that, in this analogy, moving out is a way of defending the existing characteristics of the neighbourhood, rather than abandoning it to the whims of corporatism. And, of course, the Bitcoin itself always remains in Bitcoin’s utxo set, even if its day to day activity is recorded elsewhere. But in any event, that’s a tomorrow problem, not a today problem.

Anyway, to conclude, here are some links to a couple of the conversations that provoked me to thinking about this: with @kcalvinalvinn prompted by @gladstein; and this between @fiatjaf and @brian_trollz. Also some potentially amusing related thoughts on scaling that go in a different direction from back when BTC was 100x cheaper.

Zeitgeist

I don’t know about anyone else, but I’ve been finding the zeitgeist in Bitcoin a bit incoherent: every second idea that’s brought up is treated as either essential or an imminent disaster, and yet a few weeks later everyone who was so excited/enraged has moved onto some different idea to be excited/enraged about. Personally, I like to be able to use popular opinion as something of a safety check that I’m not getting too caught up in the weeds on things I think are technically cool, but that don’t actually matter — problem is, that’s not very effective when everyone else is getting caught up in the weeds worrying about things that don’t actually matter.

So I guess that means going to my back up plan, and looking at things from first principles: what’s Bitcoin actually good for, why are we doing any of this, and what will actually help with that? I’ve thought about that previously in terms of monetary theory — money’s for saving (store of value), or money’s for paying for things (medium of exchange), or money’s for writing out values in contracts (unit of account). But while that’s a nice framework for many things, it’s also perhaps a bit abstract if you’re looking to figure out concrete priorities.

Instead, I’m going to suggest there are seven different reasons people care about Bitcoin:

  1. Inflation resistance
  2. Theft resistance
  3. Censorship resistance
  4. Cost reduction
  5. Self-enforcing contracts
  6. Wealth redistribution
  7. Revolution

Let’s talk about what I mean by each of these, and why people might want them.

Inflation resistance

This really has three different scales:

High inflation is widely recognised as bad, both for individuals whose savings are diminished and for society as a whole, as it often results in recessionary shocks. Because Bitcoin has a fixed supply cap and a liquid market, investing in it is a simple way for people to attempt to avoid the bad side effects those policies might have on them, and, if widely adopted, may prevent the bad side effects of those policies in general.

Theft resistance

There are a variety of ways in which money gets stolen — from basic robbery, to more legal methods such as hidden bank fees, locking funds, or civil asset forfeiture. By being inconspicuous even in large amounts, easy to self custody, and easy to spend or transfer when desired, Bitcoin can be an easy way to avoid many ways in which wealth can be stolen or confiscated.

On the other hand, by being easily transferable, Bitcoin can also be more vulnerable to theft by other means, whether that involves threats or physical violence or insufficient technical security of online wallets. There are, I think, plenty of potential improvements to be made here.

Censorship resistance

A different problem some people face is people attempting to prevent them from spending their own money in the ways they’d like, or from receiving money for goods or services they provide. This can result from people simply trying to discourage illegal activities (cf Silk Road) but can also apply to activities the government dislikes, even if they’re apparently legal (eg, Trucker protests) or completely legal (eg, Operation Choke Point, various GoFundMe bans); and there are various attempts to have even more intrusive control over people’s spending (eg, China’s Social Credit System, Australia’s Income Management System for welfare recipients, credit cards with emissions limits on purchases, or CBDCs).

In so far as Bitcoin can allow you to transact with other people without third parties having any idea who is making the transaction, or what the transaction is for, then preventing targeted censorship is probably straightforward. And even if such information can still be recovered by approaches such as chain analysis, even just being able to avoid having to reveal all that information to middlemen, such as banks or payment providers (who, in turn, may be required to collect such information due to KYC/AML requirements) is a win.

If you’re thinking of Bitcoin as “Freedom Money” this is probably the key feature that justifies that phrase: being free to do what you want with your own money, versus someone else being able to tell you “no, we disapprove, we can’t allow you to do that.”

Cost reduction

One of the problems with money is that everyone always seems to want more of it; so even if they’re not stealing your money outright, someone’s always trying to take a bit here and a bit there. Where there’s a monopoly payment platform in place, these fees can be exorbitant (eg, Twitch takes 50% of subscription fees after deducting taxes/expenses, app stores tend to be 15%-30%, as is Uber Eats, OnlyFans takes about 20%, Patreon is 8%-20%). Having alternative ways of sending/receiving money can both be a way of directly avoiding these costs, and a way of convincing existing companies to reduce their fees by the threat of competition. To a lesser extent the same applies to regular banking services as well, with each transaction losing a few percent to credit card fees or currency conversion fees, as well as setup and account keeping fees.

Self-enforcing contracts

A much more esoteric feature that can attract people to Bitcoin (or cryptocurrencies more broadly) is the ability to write self-enforcing contracts (aka “smart contracts”). That is, rather than writing a contract “you do X, I’ll pay you $Y” and then having to go to court in the event of a disagreement, you set up a self-enforcing contract, and whoever’s in the right can just enforce the contract directly without needing any third party involvement (and the uncertainty, expense and delays that can entail).

There are a lot of limitations to that, of course, and it may be that for many things it’s better done outside Bitcoin per se, rather than directly on the Bitcoin blockchain (particularly for contracts that don’t involve exchanging bitcoin). But even when restricted to simple things like lightning’s hash-timelock contracts, this sort of technique can be a powerful enabling technique for other features.

Wealth redistribution

And then of course there’s the simplest reason of all: the opportunity to part fools from their money. This can be pretty benign — if some people don’t recognise Bitcoin’s value immediately, then you can buy it early then sell it to them later at a higher value when they do eventually recognise its value, ie essentially the “number go up” thesis.

But it can also pretty easily devolve to fraud, where you’re taking money upfront and promising to deliver something of value, without ever having the intention of doing anything much more than taking the money, and the only reason the people losing money are fools is that they believed you. Maybe you do this by running a ponzi scheme and just spending your customers’ deposits, or maybe by creating worthless tokens and selling them to hoodwinked investors so insiders can profit, or maybe pretending you’ve come up with a new way of effectively turning lead into gold. Personally, I think even some of the ridiculously overconfident price predictions/targets fall into this category too.

Bitcoin and cryptocurrencies are particularly susceptible to this for a few reasons: there’s easily verified history of people getting really rich really quickly, so when people claim some new scheme will do the same for you, it may be more believable than it would be in other contexts; people who’ve missed out on those gains already might be susceptible to FOMO and jumping in on too-good-to-be-true offers without taking the time to do enough due diligence; Bitcoin and cryptocurrencies are new and complicated, so it can be hard to see all the ways in which a scheme can fail and thus hard to correctly weigh up the risks; and finally it can be hard to hold people responsible for frauds accountable in any meaningful way, perhaps because they managed to be anonymous the entire time, perhaps because they’re simply operating from another country, or perhaps because they deflected blame to the technology itself. Perhaps the strangest thing to my mind is that rather than encouraging people to call out frauds to prevent newcomers from being defrauded, people that do that get persecuted, both legally (McCormack, hodlonaut) and even more surprisingly (to me, anyway) socially (Tuur, Matt).

In most cases, people who have different goals for Bitcoin can work together pretty satisfactorily: if someone cares more about censorship resistance and someone else cares more about inflation resistance, that rarely results in much conflict; self-enforcing contracts can be an enabling tool for the other goals (eg, by allowing better lightning channels to make payments cheaper or vaults to make theft harder), theft and censorship resistance tend to go hand in hand, and everyone wants cheap transactions.

But people who are benefiting more from fraud don’t necessarily fit in as well: both because improvements on any of the others tend to require actual work, which defeats the point of getting money for nothing, and because often the promises they’re making are obsoleted by actual improvements in the other areas.

So, I think it’s probably worth putting in some effort to appreciate just how much influence fraudsters and scammers have in the space. For example, up until its downfall, FTX and SBF were, somehow, widely respected and considered respectable partners in designing a regulatory framework for the cryptocurrency industry. That respect provides a platform to make claims like “Bitcoin has no future as a payments network”, which are then widely repeated because, hey, the guy’s a widely respected billionaire (and the fact that FTX effectively held a naked short position on BTC to the tune of $1.4B was, prior to the company’s collapse, undisclosed).

How to actually do that is left as an exercise for the reader, of course.

(There are two related topics I’m not including as “fraud” per se, that other people might: if you’re just doing an exit scam, eg running a completely legitimate custodial exchange, then one day running off with all your customers’ funds, then I’m considering that theft more than fraud; if you’re just doing money laundering and only lying about the source of your income/expenses to people who you think have no business knowing about it anyway, then, to me, that’s more an issue of censorship resistance)

Revolution

Finally, I don’t think it’s unfair to say that some see Bitcoin as a way to overthrow the unjust established order. That Bitcoin will stop crony capitalism, stop wars by preventing funding them via inflation, and generally prevent the fall of civilisation and usher in a new renaissance.

Personally, I’m doubtful that this line of thinking really holds up: my guess is that the best bitcoin can really do is more along the lines of “keeping honest people honest”:

  • Bitcoin’s inflation resistance might stop a government that’s not trying to steal/destroy the country’s wealth from accidentally starting on that path, but it won’t prevent a despot from simply banning bitcoin and aggressively persecuting anyone they suspect of using it anyway;
  • likewise if you have to search everywhere to find a tiny SD card, that’s less tempting to confiscate than a bag full of cash, but there’s plenty of cases where governments have successfully confiscated Bitcoin despite people applying reasonable levels of protection (and governments tend to have a competitive advantage at $5 wrench attacks anyway);
  • as far as censorship resistance goes, if a government’s willing to just imprison its critics, that’s probably already a greater threat than not being able to run a gofundme account;
  • maybe significant wealth will get redistributed from later adopters to early adopters, but at best that will just put different people with different flaws in positions of power, and won’t do away with greed or corruption.

But, hey, I could be wrong; if hyperbitcoinization does somehow usher in a global idyllic utopia, I won’t be complaining.

Alignment

An obvious question that the above might inspire is why should we have one solution to all those different goals — why wouldn’t it be better to work on different approaches so that you can make fewer tradeoffs and end up with a better result for each individual topic, rather than some one-size-fits-all compromise?

I think one key thing tying them all together is decentralisation. Any centralised solution seems likely to break down:

  • centralised control over inflation gets compromised by governments wanting to spend more than they tax without building up debt
  • centralised custody provides a single point to attack for governments and activists who want to censor transactions or steal funds
  • centralised payment platforms provide opportunities to raise fees to monopoly levels, greatly increasing costs
  • contracts with a central controller (who can choose to delay enforcing the contract terms or to not enforce them at all) provide a way to overrule the letter of the contract, whether that’s via regulatory influence or economic incentives from the losing party to the contract (or because the central controller was the losing party to the contract)

So there is at least some reason to imagine they have enough in common that trying to do everything with a single system isn’t necessarily crazy.

Conclusion

I think that covers most of the reasons, both good and bad, why people are interested in Bitcoin. While I might go into these in more detail later, I think, for me, the summary is:

  • At least for now, inflation resistance is mostly a “don’t screw it up” issue. That doesn’t stop people from constantly trying to screw it up of course.
  • Theft prevention has lots of possible improvements: multisigs, OP_VAULT, lightning watchtowers, supply chain authenticity, etc.
  • Censorship resistance still needs work: improving privacy to make it harder to know what to censor via things like encrypting p2p, dandelion transaction broadcast, PTLCs over lightning, etc; but also further encouraging people to get things off-chain where they can stay secret and uncensorable, rather than doing them on-chain where they’re easy to observe and then attack.
  • Cost reduction is, I guess, something we’re still doing pretty okay at; on chain fees are still low, the lightning network has even lower fees and is continuing to improve, etc, so this seems more an issue of “more, faster” than any particular change in direction being needed.
  • Self-enforcing contracts are a much more open-ended topic, and I don’t think amenable to a bullet-point summary.
  • As far as fraud goes, making it an expected standard that anyone with Bitcoin denominated liabilities regularly publish a standardised cryptographic proof of solvency/reserves/liabilities (and making it easy to audit that either via your own full node, or via simple third party phone apps) would likely make it easier to detect and avoid ponzi schemes, at least. (Going further and having these be treated as securing the listed creditors for bankruptcy proceedings might also be straightforward, and encourage creditors to routinely validate these proofs)

Movie quote

Zeitgeist — I’m Zeitgeist.

Deadpool — Cool. I’d like to say you have the power to put your finger on the… pulse of society?

Zeitgeist — No… No, I spit acidic vomit.

Deadpool 2, 2018

Bitcoin Price

One of the things that’s hard to talk sensibly about is Bitcoin’s price. I’m going to have a go at it anyway.

Personally, I think there are two fundamental aspects driving Bitcoin’s value: adoption, and security. The idea there is that, fundamentally, if you look at each person who’s invested in Bitcoin, take their net worth and multiply it by the percentage of that net worth they’re confident putting in Bitcoin, and add all that up, you’ll get Bitcoin’s market cap. So increased adoption means you have more people to look at, and increased security means each person is willing to put a higher percentage of their net worth into Bitcoin. Both those underlying factors are interesting to me, so I think having some idea of price trends is useful even as technical information. Obviously that doesn’t take competition into account at all — maybe other things look safer than Bitcoin, maybe people change their risk profile and are looking for profit rather than safety, etc: I don’t mean to suggest the above is all there is to it.
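In toy form, the model is just a sum over holders (the numbers below are entirely made up, purely to show the shape of the claim):

```python
# Market cap as the sum over holders of (net worth x fraction they're confident holding as BTC).
# Every number here is invented purely to illustrate the model.

holders = [
    (50_000, 0.02),       # (net worth in USD, fraction allocated to Bitcoin)
    (500_000, 0.05),
    (10_000_000, 0.01),
]

market_cap = sum(net_worth * fraction for net_worth, fraction in holders)
print(f"implied market cap: ${market_cap:,.0f}")

# Adoption grows the list of holders; security/confidence grows the fractions.
```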

But the underlying idea there is that long term sustained increases in Bitcoin’s price have to be supported by sustained long term increases in adoption; and for the price increases to be exponential, the adoption increases probably also have to be exponential: you can’t just get 100 new Bitcoiners each cycle, you have to double the number of Bitcoiners each cycle.

One way to capture that trend is by plotting Bitcoin’s price over time on a logarithmic graph: every time the line on the graph goes up by an inch, that reflects the price going up 10x, and likewise reflects the underlying adoption of Bitcoin going up by some similar factor. The simplest way of doing this and turning it into a prediction is exemplified by the rainbow chart which is calculated by also putting the time since bitcoin began on a logarithmic scale, then picking the best straight line that matches. Again: I don’t mean to suggest that this approach captures all the important things about price movements; it doesn’t have any way to predict how changes in monetary or asset price inflation might affect the price, nor have any way of capturing competition in the monetary/cryptocurrency space, and at best it can only predict a trend, not shocks to the trend.

But without some better way of predicting the future, guessing that trends from the past will continue more or less the same doesn’t seem like a bad idea. But if you want to do that, the rainbow chart is kind-of awkward: most of the space for the chart is outside the rainbow, and thus irrelevant if you already think the price will follow the rainbow; and because the rainbow tends to cross a 10x spread in price, and isn’t a straight line, it can be hard to match up when the rainbow is predicting a particular price level will be hit.

One way of fixing that is baking the rainbow prediction in, and seeing what remains. For the following graph, I’m taking the price prediction from bitcointalk user trolololo dated 2017-01-10; I’m dividing the actual price by the prediction, taking the log of that, then rescaling (ie y=log(price/pred, 10)*4+3), and just using time as the x-axis. The rainbow colours are then simple stripes — the border between green and yellow (at y=3) is the line that the price would follow if it had exactly matched the prediction.
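As a sketch, the rescaling looks like this (where `predicted_price` stands in for the trolololo regression, whose exact coefficients I’m not reproducing here):

```python
import math

def rainbow_y(price: float, predicted_price: float) -> float:
    # Rescale the actual price against the rainbow prediction: y == 3 when the price
    # exactly matches the prediction (the green/yellow border); each factor of 10
    # above or below the prediction moves y by 4.
    return math.log10(price / predicted_price) * 4 + 3

# eg, a price 10x above the prediction lands at y = 7, and 10x below at y = -1:
print(rainbow_y(100_000, 10_000), rainbow_y(1_000, 10_000))
```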

The additional lines marked on the graph are the price points at powers of 10 — $1, $10, $100, etc up to $1,000,000 in the top right corner. You can perhaps see some psychological barriers there — an aversion to crossing the $10, $1000 and $10,000 barriers, with those figures only being crossed after they were in the blue, “cheap”, region of the rainbow prediction.

Again: it’s not reasonable to draw firm conclusions from these sorts of models — the rainbow chart simply models Bitcoin’s past performance, and as companies like to say about stocks, past performance is no guarantee of future results.

But I think it can be reasonable to use it as a baseline, that is with the proviso that things continue growing much as they have been, that there are no enormous shocks, etc, what seems likely to happen? In which case, if crossing the $100,000 barrier is similar to past ones, we probably shouldn’t expect to do it outside of the blue region, which suggests it’s perhaps plausible in 2027, which is another 5 years away; and Bitcoin at a million dollars per coin (hyperbitcoinisation?) is probably still a decade or more away, reachable perhaps sometime in the 2030s. Even $100k being the “fair price” according to the rainbow model appears not to occur until around about 2024.

That is, if you’re saving Bitcoin for your retirement in a decade or three, then buying now and expecting ridiculous gains might be reasonable; but if you’re expecting huge returns over just the next year or two, then that probably implies you’re expecting that Bitcoin grows much faster than it has in its entire history so far, which is a very big call.

I suspect “things continue growing much as they have been” probably already bakes in things like “44 Countries To Meet In El Salvador To Discuss Bitcoin” — we’re already assuming roughly exponential increases in adoption, and going from some companies adding it to their balance sheet to some countries doing the same is simply the scale we’ve already reached. New ETFs or more appropriate accounting standards also seem likely to be effectively baked in to me.

On the other hand, I’m pretty sure a prediction based on a linear regression like that is also effectively assuming that USD monetary policy continues to produce both low inflation in the 1%-2% range, and very low interest rates, since those have both been the case for almost all of Bitcoin’s existence, and changes there (assuming they’re not “transitory”) will probably cause different behaviours in the future compared to the past. Maybe that results in something meaningful; but maybe it just means that we’d want to redo the charts in “2009 USD” in order to exclude CPI inflation, and that might turn out to already be good enough.

You can perhaps get a better fitting prediction by redoing the regression; for me, I like that using one that’s about five years old is still useful, and also that it perhaps suggests you shouldn’t take any conclusions you might draw from it too seriously. I don’t think using a fit based on all the price data to date changes any of the conclusions above substantially.

Liquid and Taproot Activation

Blockstream posted today about Elements 0.21 and activating taproot on the Liquid sidechain. I think that’s worth talking about in a couple of ways.

First is that it’s another consensus update scheduled about six weeks after the previous “dynafed” update. That update failed fairly badly causing almost a full day’s downtime for the network, during which no blocks were able to be generated. There was an additional consensus bug in the development version of elements that prevented it from being able to follow the new chain once block signing resumed, though the most recent released version of elements was able to validate blocks without a problem.

I think there’s probably a few problems that played a part in causing that train-wreck.

First is that the block signing code for liquid is proprietary — it’s not quite clear to me if that’s proprietary to Blockstream, or a shared trade secret between Blockstream and the functionaries that use the code and do the signing — but either way, it’s code that’s not included in elements, and not something that is widely available and able to be tested thoroughly before it’s used in deployment. That’s probably a legitimate tradeoff to make: keeping the signing mechanism secret is security by obscurity, but provided obscurity is not the only protection, it can still be a valuable additional measure; and additionally, selling a secure way of allowing the functionaries to coordinate around signing the sidechain blocks is (to my understanding) what makes this a business for Blockstream. So I think the conclusion there is that if it’s possible to open more of the block signing code, and then better automate testing of it, that’s great; but it may well not be reasonable to do that, and if so, it should be treated as a much more high risk module than it seems like it has been.

A simple way to mitigate that risk is in fork design. One of the principles we apply in Bitcoin soft forks is ensuring that we don’t break any mining hardware when introducing consensus changes: people have made large, real capital investments, and a software change that devalues that investment isn’t a great way of building mutual trust. We had an instance of exactly that occur in taproot signalling, where a modest amount of hashpower simply wasn’t able to signal for activation; and I’d argue that was the fundamental cause of many of the difficulties with segwit — it (unintentionally) reduced the value of significant amounts of capital investment due to being incompatible with covert ASICBoost.

So I think the second factor in giving rise to the dynafed activation issues was not taking enough advantage of that philosophy.

In the context of a hard-fork — which means accepting blocks that would previously have been unacceptable — a simple way to implement the same principle is to make it a pure hard-fork: that is, make sure you accept any block that would have been acceptable under the old rules, so that if it does turn out you have a bug, you can just keep building blocks as if the hard fork never happened. That way, rather than the chain dying until a fix is rolled out, you can keep building blocks, just without using the new features you were hoping to enable. This is complicated by the fact that, as a hard fork, it is not possible to continue running old validation software once a single block using the new features has been accepted; and because liquid has a two-block finality rule, reorgs of more than one block are not acceptable.
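
To make that concrete, here’s a toy sketch of the validation rule a “pure” hard fork implies — the function names are illustrative, not taken from the elements codebase:

```python
def valid_under_old_rules(block: dict) -> bool:
    # Placeholder for the pre-fork consensus checks.
    return block.get("well_formed", False) and not block.get("uses_new_features", False)

def valid_under_new_rules(block: dict) -> bool:
    # Placeholder for the post-fork checks, including the new features.
    return block.get("well_formed", False)

def block_valid(block: dict) -> bool:
    # A "pure" hard fork only ever adds validity: every block that was valid
    # under the old rules stays valid, so if the new-feature code turns out
    # to be buggy, block production can fall back to old-style blocks instead
    # of the chain halting until a fix is written and deployed.
    return valid_under_old_rules(block) or valid_under_new_rules(block)
```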

Without being able to see the block signer code, it’s hard to suggest specifics here, but the fact that a majority of “nine functionaries, running an earlier version of the functionary software, reported errors but did not terminate” suggests to me that it should have been possible to design dynafed in a way that failed more gracefully than it did — perhaps by making it so a single, non-upgraded functionary proposing non-dynafed blocks would be able to have their blocks signed by a majority of functionaries, with no observed downtime; or by making it so a quick downgrade by a majority of functionaries was enough to continue producing valid blocks, rather than an emergency patch having to be written, validated and deployed.

Another standard in the blockchain world is to have a live testnet — somewhere you can deploy code changes and test them out before they start affecting anything worth real money. To the best of my knowledge liquid doesn’t have a testnet anymore. There was originally “Elements Alpha” but that was discontinued at some point (probably because Bitcoin’s testnet isn’t really reliable enough to use with liquid’s peg-in/peg-out feature), and you can spin up your own “liquidv1test” test environment for local use, but that doesn’t test the proprietary block signing code. Testnets certainly aren’t a silver bullet for bugs, and you’d need to put some thought into how to partition block signing for the testnet from liquid itself to prevent potential exploits, while also keeping the block signing code itself appropriately secret. Those seem like solvable problems, though, and worth the effort to detect consensus bugs earlier. So this is perhaps a third approach that could have caught this bug before it caused problems.

That’s not to say any of that is necessarily a crisis for liquid, or something that should necessarily be their single highest priority to fix. Rather, it seems to be about par for the course: Solana had 17 hours downtime a month ago due to increased transaction volume, Infura had a 7 hour outage last year due to an unexpected consensus incompatibility, and Ethereum had a consensus bug that was exploited to cause a chain split affecting just over 50% of nodes and a minority of miners. But on the other hand, if liquid isn’t trying to be substantially more reliable/robust than those alternatives, what’s its advantage over them?

I think Blockstream and the Liquid Federation need to step up their game here… Though, to be fair, I’d say the same about everything else in this space, including Bitcoin.

Anyway, the activation of taproot on liquid will be quite different. It’s a soft-fork that only affects transactions, so it should be possible for it to fall back cleanly if there ends up being a problem, and much of the taproot code has been ported from Bitcoin, where it has been reasonably well tested. On the other hand there are two substantial sets of consensus features that aren’t in Bitcoin that will be in liquid: one is a variety of additional opcodes which should be quite interesting, and the other is changes to signature hashing to support liquid’s pegged L-BTC and confidential assets features. (There are also various wallet features added to liquid that don’t have the same failure modes as consensus changes.)

I think those changes should have had more review — for example, they have an in-repo document explaining the tapscript additions, rather than something like a BIP or EIP proposing the changes, and they’ve been merged relatively quickly. I expect one consequence of that is that tapscript for liquid and tapscript for Bitcoin will diverge — liquid have claimed a bunch of opcodes for these new features, and I expect that will start to conflict with Bitcoin wanting to claim opcodes for its features. With a bit more time spent on actively seeking feedback on the proposal, that could have been avoided pretty easily, but, oh well. There’s always a risk those changes could result in a consensus incompatibility of some sort, though I think it’s low in this case. There’s a much bigger risk it could result in buggy smart contracts and thus loss of funds. Maybe that’s just a “buyer beware” thing, but if using liquid isn’t substantially safer than transacting on random other multicoin blockchains, then again, what’s the point? (Perhaps confidentiality is enough; I’m not sure if any of the other chains that do multiple assets also do confidential transactions on chain.)

The other thing about how they’re deploying taproot compared to how they deployed dynafed is the activation parameters. Dynafed used the default activation parameters of 75% signalling over a 144 block period (so about 2.5 hours, given liquid generates blocks every minute) and had a special rule requiring functionaries to explicitly enable signalling via the -con_dyna_deploy_signal parameter (that parameter also results in an erroneous “unknown new rules activated” warning when it’s not used by non-mining nodes). Activation doesn’t occur until after a locked-in period, so it comes another 2.5 hours after signalling reaches the threshold.

By contrast, liquid’s taproot activation has customised parameters requiring 100% signalling over 10080 blocks (exactly one week) and signalling will occur by default for any functionary that upgrades. A locked-in period is also required, so activation is then delayed for a week after the 100% signalling threshold is reached.
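
Just to put numbers on the difference between the two schedules (recomputed from the parameters above, assuming liquid’s one-minute block interval):

```python
# Rough comparison of the dynafed and taproot activation schedules on liquid.
def signalling_window(blocks: int, threshold: float, mins_per_block: int = 1):
    needed = round(blocks * threshold)      # blocks that must signal
    hours = blocks * mins_per_block / 60    # length of one signalling period
    return needed, hours

print(signalling_window(144, 0.75))    # dynafed: 108 of 144 blocks, ~2.4 hours
print(signalling_window(10080, 1.00))  # taproot: 10080 of 10080 blocks, 168 hours (a week)
```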

That setup means that any functionary that is not upgraded by the time signalling begins (presumably on the 1st of November) will delay activation by a week, and means that if any functionary finds a problem with the taproot activation on liquid, they can unilaterally prevent activation indefinitely. All-in-all probably fine, but a pretty big change from how dynafed was activated.

Rolling for initiative

At the start of the year, I wrote out some thoughts about Bitcoin priorities, probably most simply summed up as:

it’s probably more important to work on things that reinforce [Bitcoin’s] existing foundations, than neat new ideas to change them

In that post, I also wrote:

I’m particularly optimistic about an as yet unannounced approach that DCI has been exploring, which (if I’ve understood correctly) aims to provide long term funding for a moderate sized team of senior devs and researchers to focus on keeping Bitcoin stable and secure […]

It wasn’t something I’d even considered as a possibility at the time, but the world works in mysterious ways, and as it turns out, I’m now joining the Digital Currency Initiative to work on making that approach live up to its promise.

There are, I think, two ways to make systemic improvements in security. One is in code and tooling improvements — reworking the code directly to remove bugs and make it more robust, and building tools like linters, continuous integration and fuzz testers, that will then automatically eliminate potential bugs before they’re written or merged. I expect that will be where we’ll devote most of the effort.

But I think an equally important part of doing security well is having it be an integral part of development, not an add-on — while certainly some people will have more expertise than others, you want everyone thinking about security; in a similar way to wanting everyone to be thinking about performance if you want your system to work efficiently, or wanting everyone to be thinking about user experience if you want a smooth and consistent experience. That is, the other part of making systemic improvements in security is maintaining a culture that deems security a critical priority, and worth thinking about at all levels.

That may mean that I want to walk back my earlier conclusion that “neat new ideas [that] change [Bitcoin’s existing foundations]” are something to deprioritise. Because it certainly seems like people do want exciting new features, and given that, it quickly becomes super important that the people working on those features aren’t a separate group from the people who are deeply security-conscious, if we want to ensure those new features don’t end up compromising Bitcoin’s foundations. The alternative is to continually fight a rearguard action to either debug or prevent adoption of each neat new idea that hasn’t been developed with an appropriately adversarial mindset.

In particular, that may mean that working on things like ANYPREVOUT and TAPLEAFUPDATEVERIFY might have two ways of fitting into the “improve Bitcoin’s security” framework: it makes it easier to use bitcoin securely (ANYPREVOUT hopefully makes lightning safer by enabling eltoo and thus reduces the risks of toxic state; TAPLEAFUPDATEVERIFY may make improvements in cold storage possible, making on-chain funds safer from theft), but developing them in a way that puts security as a core goal (as compared to other priorities, eg “time to market”) might help establish traditions that improve security more broadly too.

(And I don’t mean to criticise the way things are going in Bitcoin core so far — it’s a great project where security does take a front row seat pretty much all the time. The question I’m thinking about is how to make sure things stay that way as we scale up)

Also, just to get it on the record: “security” means, in some sense, “the system works the way it’s intended to”, at least in regard to who can access/control what; but “who is intended to have what level of access/control” is a question you need to answer first. For me, Bitcoin’s fundamentals are that it’s decentralised, and that it’s a store of value that you, personally, can keep secure and choose to transfer if and when you please — which is really just another way of saying that it’s “peer-to-peer electronic cash”.

I don’t think Bitcoin gets anywhere by compromising on decentralisation: better to leave that to competing moneys whether that be Central Bank issued or altcoin tokens on the one hand, and higher layers that build on Bitcoin, like Liquid or exchanges, on the other. If those things succeed, that’s great — but having a money that’s an even playing field for everyone, powerful or not, is a fundamentally different thing that’s worth trying to make work.

There are plenty of details that go into that, and plenty of other things that are also important (for instance, I think you could also argue that many of Bitcoin’s other priorities, such as the fixed supply, or privacy or censorship resistance can only be obtained by having a decentralised system); but I think it’s worth trying to pick the principles you’re going to stand for early, and for Bitcoin, I think the best place to start is decentralisation.

Sturm und drang und taproot activation

The Shipwreck – Joseph Mallord William Turner 1775-1851

Back at the end of 2019, I said on Stephan Livera’s podcast that activation of taproot is “something a lot of people in the community have very strong opinions of; so it’s probably going to be a Twitter flamefest or whatever about it.” It’s turned out both better and worse than I expected — better in that we got decent agreement on an activation method, merged it into core, and so far appear to be getting uptake by miners more rapidly than I was expecting; worse in that the “UASF” side of the debate seems to have gone weirdly off the rails.

(I’ve written in the past about activating soft forks in Bitcoin so I’ll just leave that link there if you want some general context for what the heck I’m talking about and otherwise dive right in)

Speedy Trial

The activation method included in Bitcoin Core 0.21.1 is called “Speedy Trial” and it’s implemented as a variant of BIP 9 (which was used for activating segwit and CSV/relative timelocks) modified in a few ways:

  • rather than having signalling not start for a month after the release of the new version, signalling was scheduled for just a few weeks after the merge of the activation parameters, and ended up starting on the same day the software was actually released
  • rather than having signalling continue for a year, it only continues for a bit over three months (ending sometime after August 11)
  • rather than having activation occur two weeks after lock in occurs, it is delayed until block 709632 (expected around mid November)
  • rather than requiring 95% of blocks to signal to lock in activation, only 90% of blocks are required to signal

The main idea is that we don’t have any reason to expect problems with taproot activation, so let’s just do something quickly: if we’re right, and there are no problems, the quick approach will work fine, and if we’re wrong and there are problems then we can do something else.

Shortening the timeframe (starting and ending signalling sooner than BIP 9’s recommendations) means that we can move on to dealing with problems much more quickly, and delaying activation helps ensure that there’s still time for users to upgrade and ensure miners play fair, despite signalling starting so quickly. Finally, the reduced threshold recognises that the BIP 9 mechanism doesn’t need as high a threshold as the old IsSuperMajority approach, so lowers it somewhat while remaining fairly conservative.
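
In concrete terms, signalling is tallied per 2016-block retarget period, so the two thresholds work out to the following block counts — 1815 of 2016 for Speedy Trial, versus 1916 of 2016 for the older 95% rule:

```python
import math

PERIOD = 2016  # blocks per retarget period, which is also the signalling window
for name, fraction in [("Speedy Trial (90%)", 0.90), ("BIP 9 / segwit (95%)", 0.95)]:
    needed = math.ceil(PERIOD * fraction)
    print(f"{name}: {needed} of {PERIOD} blocks must signal")
```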

The broader rationale behind this approach was documented by Matt in his Modern Soft Fork Activation email early last year (point 4): adoption by the vast majority of hashpower reduces the risk to the network of activation while the rest of the network upgrades — users who don’t upgrade will follow the chain with the new rules because any chain that doesn’t follow the new rules will quickly become much shorter (the 90% figure means transactions in a chain invalid under the new rules only have ~1% chance of getting three confirms before being reorged to a chain that’s valid under the new rules).
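
One simplified way to arrive at that ballpark figure — ignoring the finer details of the race against the enforcing chain — is that, once a rule-violating block has been mined, it only reaches three confirmations if the next two blocks also come from the (at most 10%) non-enforcing hashpower:

```python
# Back-of-envelope for the "~1%" figure, assuming 90% of hashpower enforces
# the new rules and each block is found independently.
p_nonenforcing = 1 - 0.90
p_three_confirms = p_nonenforcing ** 2  # the offending block, plus two more on top
print(f"{p_three_confirms:.1%}")        # -> 1.0%
```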

That strategy fits well with voluntary signalling by miners: if miners upgrade quickly, it’s safe to activate the new rules fairly quickly. If they don’t upgrade quickly, well then we need to wait for users to upgrade to have a UASF-style activation — but if users have all upgraded, it doesn’t much matter what miners do: if they mine blocks invalid under the new rules, they’ll just be ignored, the same as BCH or BSV blocks are ignored by Bitcoin nodes. So “Speedy Trial” just deals with the easy case — let’s see if things will work well, and get it over with if it does. If it doesn’t work well, it’ll all be over quickly, and we can move on to an activation method that doesn’t rely on miners, knowing that it’s needed.

“UASF”

While most people were happy to release Bitcoin Core with the Speedy Trial method, that’s not true of everyone, and a few people are instead encouraging use of an alternative implementation, forked from the Bitcoin Core codebase, and using a different activation methodology, that requires taproot activation by signalling by a particular block height that is expected to arrive around November 2022.

I don’t recommend running that client under any circumstances.

The simplest reason is that it’s poorly maintained: for example, the change to set the activation parameters is PR#9, which was merged without any (public) review comments, about nine hours after it was filed, and about 29 hours before the meeting that was going to make the decision on those parameters. It also has a red-cross “failed CI tests” marker, mainly because various software-as-a-service CI systems have limits on how many jobs they’ll run for free, and in order to run all the CI tests for bitcoin, you either have to pay to get them run quickly, or you have to wait a long time. Because it was forked from Bitcoin Core 0.21.0, none of the backports targeted at Bitcoin Core 0.21.1 have been merged, such as #20901, #21469, or #21640 — the lack of #21469 in particular means the UASF client won’t correctly parse bech32m addresses when attempting to pay to taproot addresses following BIP 350, rendering it incompatible with any taproot wallets following the recommended address format. Are these serious bugs that will lose you money today? No, probably not. But is this the best software to secure your BTC so you won’t lose money tomorrow? Also, no.
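
For context on why the bech32m backport matters: BIP 350 changes only the constant that the address checksum must verify to, but a decoder that doesn’t know about the new constant will reject every v1 (taproot) address. A minimal sketch, following the BIP 173/350 reference code:

```python
def bech32_polymod(values):
    # BIP 173 checksum polymod over 5-bit values.
    gen = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ v
        for i in range(5):
            chk ^= gen[i] if ((top >> i) & 1) else 0
    return chk

def hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def verify_checksum(hrp, data, bech32m):
    # bech32 (BIP 173) checks against 1; bech32m (BIP 350) against 0x2bc830a3.
    const = 0x2bc830a3 if bech32m else 1
    return bech32_polymod(hrp_expand(hrp) + data) == const
```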

Beyond poor quality, it’s also marketed deceptively — eg, announcing it as “the Bitcoin Core 0.21.0 build w/ Taproot added in” and naming it “Bitcoin Core version 0.21.0-based Taproot Client 0.1” rather than making it clear that it’s an entirely separate group of people working on it to the Bitcoin Core developers. Perhaps if you look carefully at their bitcointaproot.cc site, after skipping past the big, bold “Bitcoin Core” heading in the middle of your screen when you first load it up, you might see the “Who maintains the Taproot Client software?” and click through to see “Bitcoin Mechanic, Shinobi and stortz”, though you’ll likely have no way of figuring out who those people are.

“lot=false to lot=true”

It was (in my opinion) always likely we’d end up with one activation method in Bitcoin Core and a different UASF client published by Luke and others — when I did the survey of some devs, two-thirds didn’t want to go straight to a “flag day” style activation, but the remaining one-third did, and the BIP 148 experience had already demonstrated that releasing a forked client was a realistic gambit if things weren’t going your way.

The original pseudonymous author of BIP 8 approved Luke’s patch adding himself as a co-author of that document in June last year, and in the same patch set, Luke introduced the lockinontimeout parameter, which seemed (to me at least) like a way that the same codebase could satisfy both goals. So for the eight or so months following that, I spent a bunch of time (see #950, #1019, #1020, #1021, #1023, #1063) trying to refine that so it would work as smoothly as possible, even if miners or others were deliberately trying to game the system.

The advantage of that approach over where we are today is that when a few opinionated people decide that a UASF is the only reasonable approach it would be much easier to maintain a high quality fork of Bitcoin Core that has that feature — you’re only changing a single “false” to “true” and otherwise just updating documentation and the name; so there’s no particular difficulty in porting over other patches. (In contrast, when you’re using an entirely different mechanism, you have to touch code in lots of different places, and each of those has a chance of conflicting with any other patches you might also want to include)

By mid February it was looking (to me at least) pretty much like that was how things would play out, and we would merge BIP 8 into core with taproot set as lot=false, but with the code ready for a switch to lot=true. There was still work to be done to make lot=true safe in the adversarial conditions where it might have any value, but it seemed plausible that we could make progress on that over the next few months — pulling in some of the fixes that were already done for the BIP 148 code in 2017, and adding improvements on top of that.

But that was about the point any chance of consensus collapsed: the suggestions of “core should just release both clients and let the people choose” and the consensus risks that implies (which is complicated enough it’d take a whole article in itself to cover) concerned Suhas seriously enough to set up a blog, and concerned Matt enough to go back to square one on activation methods. Meanwhile on the other side, Luke declared “LOT=False is dangerous and shouldn’t be used”.

The “UASF (LOT=true) kick off meeting” was then announced as happening a couple of days later (though work on the UASF website had already begun a week earlier, prior to the “LOT=false is dangerous” post), which ended up including some gloating about the confusion, along with promises to make it hard to come to consensus (“<luke-jr> personally I plan to NACK any LOT=False; but I doubt I would need to at this point (devs pushing against LOT=True seem to be off shed-painting other bad ideas now)”).

Perhaps someone with more ranks in the Diplomacy skill could’ve done better and we could have stayed on that track, but, at least for me, those were pretty clear “Dead End” signs.

Speedy Trial was proposed a few days later, providing a new track. It took under six hours to get a first draft PR implementing that proposal up, but then an additional 57 days to actually get it included in a release. Some of that was due to problems with the original draft, some was due to improving test coverage, some was due to the regular release candidate process, and some was certainly due to an unexpected certificate revocation, but a lot of time was effectively wasted: eg, there was a port of Speedy Trial on top of the BIP 8 patches proposed a few days after the initial PR above, effectively doubling the review load and splitting the development effort for most of that time, and then there were the promised NACKs, along with long delays with getting BIP updates merged.

I say “effectively wasted”, but perhaps that’s not fair: exploring alternative ways of writing code helps you understand what’s going on, even if you throw it away (certainly, I learned something from it: notably that height based signalling isn’t compatible with testnet behaviour). Given the sudden failure of the previous lot=false approach, I don’t think there was ever any realistic hope of achieving a better degree of consensus, but of course, I could be wrong, and some things are worth trying even when it seems hopeless. So, obviously, draw your own conclusions on whether the time spent there was worthwhile or not.

Speedy Trial vs UASF

I think there are fundamentally three reasons why some people are still sticking to the UASF approach rather than being happy with the Speedy Trial approach — whether in practice, despite the poor implementation, or more in principle, eg by suggesting that Bitcoin Core should be deploying some sort of UASF backstop now, rather than solely doing Speedy Trial.

I think the simplest of these reasons is something along the lines of “BIP 8 was proposed in 2017, why go to all this hassle instead of just doing it?” (eg adam3us or michaelfolkson) or more assertively something like “We already agreed to do BIP 8, why are you violating community consensus?” (eg luke-jr or MarkFriedenbach) The problem with that is that the BIP 8 we have today is not the BIP 8 that was proposed in 2017 or even the one we had in January this year — over time it’s had the lot=true/false parameter added, had a compulsory signalling phase added, had numerous tweaks to the state behaviour added, and most recently had a lock-in delay added. It’s never been used outside of the regression test environment, and bugs were being found and fixed as recently as February and throughout March. That’s not unexpected — what we had in 2017 was an idea, but as with most ideas, things get more complicated when you try to actually make them a reality. And sometimes it turns out that your original idea wasn’t so great in the first place.

Another reason is something along the lines of “If we think a UASF might be necessary in a few months in the event Speedy Trial doesn’t hit 90%, why not do it now?” There are two answers to that: the first is that the approach we were working towards had collapsed, and it would likely take months to get agreement on an alternative one — by which time Speedy Trial would have finished anyway, whether it succeeds or fails. (That it took two months to even get Speedy Trial out suggests that might even be an underestimate.) So why not get started with the easy part while we rethink the hard part? The second is that we’re likely to learn things from Speedy Trial, and that can inform our decisions on how to deploy the UASF. From my perspective we’ve already learnt some things:

  • miners/pools didn’t start signalling prior to the activation logic reaching the STARTED state — that probably means there’s less “false” signalling than some of us feared/expected
  • pools have been upgrading to signal fairly quickly, adding credence to their prior statements as recorded on taprootactivation.com
  • poolin have reported that some ASIC firmware doesn’t support signalling via version bits — maybe that’s an easy fix, or maybe we should move signalling to a different mechanism; as a result they’ve only enabled signalling for some of their servers, and maybe 1THash has done the same
  • maybe we should be expecting teething problems like this and in the future encourage signalling in advance of it actually mattering

An obvious thing we’ll learn if Speedy Trial fails is that we can’t reach 90% of miners signalling in three months — if we discover that’s for practical reasons (like the issue poolin described), then making signalling mandatory is probably a bad idea if a significant amount of hashrate can’t actually do it — so perhaps lowering the threshold further would be a good idea, or changing the way we signal to something that more mining hardware is compatible with would be worthwhile, or perhaps changing the approach entirely might be a better bet. We’ll also likely learn how enthusiastic businesses and node operators are to upgrade to support taproot — the faster everyone does that, the less time we need to wait before triggering a flag day. But there’s a chicken and egg problem — you can’t pick a flag day without knowing how fast people will upgrade, people can’t upgrade until there’s a client, you can’t release a flag day/UASF client until you pick a date for the flag day, you can’t pick a flag day without knowing how fast people will upgrade… rinse, repeat. So being able to release a client that doesn’t set a flag day lets you break that cycle and get a somewhat informed answer. And beyond all that, perhaps there are unknown unknowns that we’ll find out about.

The third reason for advocating for a UASF now, in my opinion, is just that a bunch of people enjoyed the BIP 148/no2x drama and want to play it out again in much the same way. Looked at from the right viewpoint it was a really straightforward heroic saga: a small band of misfits get together to fight the big bad, against the advice of the establishment, build up a popular movement, and win the battle without a shot being fired. Total feel-good Hollywood blockbuster plot line.

You can see little demonstrations of this sentiment every now and then, eg when the BIP 148 big bad started signalling, adam3us’s response was “don’t thank them too much – last time they were among the ring leaders in tactical veto games. 148 lurking. never forget.” or zndtoshi’s take “Dude I wish that ST would fail so that miners get it that we can enforce via uasf. But I have a feeling they did learn from the segwit saga and will just signal now.”

And I mean, that’s fine — if you’ve got an empowering narrative for why you’re doing what you’re doing, good for you! But it becomes a problem if remembering the past blinds you to the present and your plotline doesn’t actually match reality — just because you won the last battle with a cavalry charge, doesn’t mean it’s necessarily a good idea for this battle, and just because it was fun fighting an enemy in the past doesn’t mean it’s a smart idea to find more enemies now.

Anyway, that’s my collection of thoughts on where we’re at. No particular conclusion from them — I guess I have a whole other set of thoughts on what to do next — but I wanted to get these written down somewhere before moving on.

Fixing UASF

Ambiguous titles are tight.

I’ve always been a little puzzled by the way the segwit/uasf/uahf/segsignal drama played out back in 2017 — there was a lot of drama about the UASF for a while, and then, when push came to shove, suddenly miners switched to being 100% in favour of it, and there were no problems at all. There was even the opportunity for a bit of a last minute “screw you”: BIP91 could have been activated so that it only locked in after BIP148 activated, potentially resulting in a segwit-enforcing chain that wasn’t valid according to BIP148 — not quite “if I can’t have it, nobody can”, but at least a way to get a final punch in prior to losing the fight. But it just didn’t happen that way.

So what if “losing the fight” wasn’t really what happened — what if the fight had been fixed right from the start?

"We are preparing a UAHF to the market. We will have two kinds of Bitcoin if UASF is activated. Big block vs 1MB block. Let us trade." - Jihan Wu Arp 5, 2017

I think that was a day or two before Greg posted about ASICBoost to the bitcoin-dev list — which is interesting since, prior to the ASICBoost factor being revealed, I don’t think the UASF approach had all that much traction. For example, here’s ebliever commenting on reddit in response to the ASICBoost reveal:

I didn’t like the UASF when first proposed because it seemed radical and a bad precedent. But given the crisis if Bitmain can’t be stopped in short order, to save the rest of the mining industry I’d favor the UASF as an emergency measure ASAP.

Why reveal plans for a UAHF to defend against a UASF before the UASF even has significant support?

Well, one reason might be that you wanted to do a UAHF all along. The UAHF became BCH, and between August 2017 and November 2018, BCH had the “advantage” over regular bitcoin in that you could do covert ASICBoost mining on it. (In November 2018 BCH was changed in a way that also prevented covert ASICBoost, and wonder of wonders, a new hard fork of BCH instantly appeared, BSV)

After all, there weren’t many other outcomes on the table that would have allowed covert ASICBoost to continue — the New York Agreement was aiming to do bigger blocks and segwit, which still would have blocked it; the “Bitcoin Unlimited” split had basically failed; and stalling segwit probably wouldn’t work forever.

There’s a history of the BCH fork from Haipo Yang of ViaBTC. I think it’s pretty interesting in its own right, but for the purposes of this post the interesting stuff is in section 2 — with Bitcoin Unlimited failing to achieve a split occurring just prior to that section. In particular, it includes the argument:

Fortunately, small-hashrate fork can be done without others’ support, and it seemed to be the only feasible direction for the big-block supporters.

Even at this time, most of the big block advocates still placed their hope on the SegWit+2MB plan reached by the New York Consensus. I made it clear that this road was not going to work

It also gives the following timeline leading up to the BCH split:

Wu Jihan has got a Plan B. While supporting the New York Consensus, he took the small-hashrate fork as a backup. […] At that time, the core members of the Core team launched the UASF (User Activated Soft Fork) campaign, and planned to force the activation of SegWit on August 1, 2017. So we decided to activate UAHF (User Activated Hard Fork) on the same day.

So at least according to that timeline, the NYA was already written off as a failure and the BCH UAHF was already being worked on prior to UASF being a thing, and picking the same day to do the UAHF was just a matter of convenience, not a desperate attempt to save Bitcoin from a UASF at all. That’s not confirmation that the UAHF was planned from the start in order to save covert ASICBoost — but it is at least in line with the argument that UAHF was the goal all along, rather than a side effect of trying to oppose the UASF.

The thing is, that leaves nobody having been opposed to the UASF: the BCH camp was just using it as a distraction while they split off for their own reasons; the NYA camp were supporting it as the first step of S2X; conservative folks thought it was risky but were happy to see segwit activated; and obviously bip148 supporters were over the moon.

And that’s relevant today to discussions of the “bip8 lot=true” approach, which proposes using the same procedure as bip148 — by some only as a response to delaying tactics, by others as the primary or sole method.

Because despite there being claims that running a UASF client has no risks, that is fundamentally not true. There are at least two pretty serious risks: the first is that you’ll go out of consensus with the network, nobody will mine blocks you consider valid, and you’ll be unable to receive payments until you abandon your UASF client — that alone is likely enough risk for businesses and exchanges to not be willing to run a UASF client; and the second is that people split down the middle on supporting and opposing the UASF and we have an actual chainsplit, resulting in significant work as one or both sides figure out how to avoid being spammed by nodes following the other chain, add replay protection, and protect their preferred system from suffering a difficulty bomb that makes it uneconomic to continue mining blocks.

All of that’s fine if you’re confident any UASF you support will easily win the day — shades of Trump’s “trade wars are good, and easy to win” there — but if you’re relying on the experience with segwit and bip148 as your evidence that UASF’s will easily win, perhaps the above is some cause for doubt. It is for me, at any rate.

(Of course, not being easy to win, doesn’t mean unwinnable or too scary to even risk fighting; but it does mean building up your strength before picking a fight. For bitcoin, at a minimum, that means a lot more work on making p2p robust against the potential of a more-work invalid chain that your peers may consider valid)

Bitcoin in 2021

I wrote a post at the start of last year thinking about my general priorities for Bitcoin and I’m still pretty happy with that approach — certainly “store of value” as a foundation feels like it’s held up!

I think over the past year we’ve seen a lot of people starting to hold a Bitcoin balance, and that we’ll continue to do so — which is a win, but following last year’s logic also means we’ll want to start paying more attention to the later parts of the funnel as well: if we (for instance) double the number of people holding Bitcoin, we also want to double the number of people doing self-custody, and double the number of people transacting over lightning, eg; and ideally we’d want that in addition to whatever growth in self-custody and layer 2 transactions we’d already been aiming for if Bitcoin adoption had remained flat.

That said, I’m not sure I’m in a growth mindset for Bitcoin this year, rather than a consolidation one: consider the BTC price at the start of the past few years: 2016: ~$450, 2017: $1000, 2018: $13,000, 2019: $3700, 2020: $8000, 2021: $30,000. Has there been an 8x increase in security and robustness during 2016, 2017 and 2018 to match the 8x price increase from Jan 2016 to Jan 2019? Yeah, that’s probably fair. Has there been another 8x increase in security and robustness during 2019 and 2020 to match the 8x price increase there? Maybe. What if you’re thinking of a price target of $200,000 or $300,000 sometime soon — doesn’t that require yet another 8x increase in security and robustness? Where’s that going to come from? And those 8x factors are multiplicative: if you want something like $250k by December 2021, that’s not a “three-eights-are-24” times increase in robustness over six years (2016 to 2022), it’s an “eight-cubed-is-512” times increase in robustness! And your mileage may vary, but I don’t really think Bitcoin’s already 500x more robust than it was five years ago.
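
For anyone wanting to check that arithmetic, using the (rounded) price points above:

```python
prices = {2016: 450, 2019: 3_700, 2021: 30_000}
print(prices[2019] / prices[2016])  # ~8.2x from Jan 2016 to Jan 2019
print(prices[2021] / prices[2019])  # ~8.1x from Jan 2019 to Jan 2021
print(250_000 / prices[2016])       # ~556x, ie roughly 8**3 = 512
```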

So as excited as I am about taproot and the possibilities that opens up (PTLCs and eventually eltoo on lightning, scriptless scripts and discreet log contracts, privacy preserving proof of reserves, cheap multisig — the list might not be infinite but at least seems computationally intractable), I’m now even more of the view than I was this time last year that it’s probably more important to work on things that reinforce the existing foundations than on neat new ideas to change them.

There are already a bunch of areas where Bitcoin’s approach to security and robustness has improved technically over the past few years: we’ve got more people doing reviews (eg, via the PR review club, or getting introduced to Bitcoin via the Chaincode Residency etc), we’ve got deeper and more diverse continuous integration testing (thanks both to more integrations being enabled via github, and travis becoming unreliable enough to force looking at other approaches), fuzz testing has improved a lot and become a bit more broadly used, and I think static analysis of the codebase has improved a bit. There have been a bunch of improvements in code standards (eg using safe pointers, locking annotations, spans instead of raw pointers) too, I think it’s fair to say. I haven’t done an analysis here, just going from gut feel and recollection.

With a focus on robustness, to me, the areas to prioritise in the short term are probably:

  1. Modularisation — eg, so that we can better leverage process separation to reduce security impacts, and better use fuzz testing to catch bugs in edge cases. There’s already work to split the gui and wallet into separate processes, though while that’s merged, it’s not part of the standard build yet. Having the p2p-network-facing layer also be a separate process might be another good win. While it’s a tempting goal, I think libconsensus is still a ways off — p2p, mempool management, and validation rules are currently pretty tightly coupled — but there’s steps we can make towards that goal that will be improvements on their own, I think.
  2. The P2P network — This is the obvious way to attack Bitcoin since by its nature everyone has access to it. There are multiple levels to this: passively monitoring the p2p network may allow you to violate users’ privacy expectations, while actively isolating users onto independent networks can break Bitcoin’s fundamental assumptions (you can’t extend the longest chain if you can’t communicate with any of the people who have the longest chain). There are also plenty of potential problems that someone could cause in between those extremes that could, eg, break assumptions that L2 systems like lightning make. Third-party (potentially centralised) alternatives as backups for the p2p network may also be valuable support here — things like Blockstream Satellite, or block relay over ham radio, or headers over DNS: those can mend splits in the p2p network that the p2p layer itself can’t automatically fix. Or efficiency improvements like erlay or block-relay-only can allow a higher degree of connectivity making attacks harder.
  3. CI, static analysis, reproducible builds — Over the past year, travis seems like it’s gone from having the occasional annoying problem to being pretty much unusable for open source projects. CI is an important part of both development and review; having it break makes both quite a lot harder. What we’ve got at this point seems pretty good, but it’s new and not really time-tested yet, so I’d guess a year of smoothing out the rough edges is probably needed. I think there’s other “CI”-ish stuff that could be improved, like more automated IBD testing (eg, I think bitcoinperf is about 3 months out of date). Static analysis serves a similar goal to tests in a different way; and while we’ve already got a lot of low hanging fruit of this nature already integrated into CI via linters and compiler options, I suspect there’s still some useful automation that could happen here. Finally, nailing down the last mile to ensure that people are running the software the devs are testing is always valuable, and I think the nixos work is showing some promise there.
  4. Third-party validation — We’ve had a few third-party monitoring tools arise lately — various sites monitoring feerate and mempool sizes, forkmonitor checking for stale blocks (and double-spends), or, at a stretch, optech’s reviews of wallet behaviour and segwit support. There’s probably a lot of room for more of this.

I’d love to list formal verification at the consensus layer as a priority, but I think there’s too much yak-shaving needed first: it would probably need all the refactoring to get to libconsensus first, then would likely need that separated into its own process, which you could only then start defining a formal spec for, which in turn would give you something you could start doing formal verification against. I suspect we’ll want to be at that point within another cycle or two though.

I’m not concerned about mining — eventually there might be a risk that the subsidy is too small and there’s not enough fee income, but that’s not going to happen while the price doubles faster than the pre-scheduled halvings. There’s certainly centralisation risks, whether in ASIC manufacture, in hardware ownership/control, or at the pool level, but my sense of things is that’s not getting worse, and is not the biggest immediate risk. Maybe I’m wrong; if so, hopefully the people who think so are working on preventing problems from arising, rather than taking advantage of them.

There are other levels of robustness and security beyond just “keep the network working”, if you consider the question of “how to prevent my coins from being lost/stolen?” more broadly. The phishing attacks and potential for physical attacks resulting from the Ledger leak are an easy example of a problem of this sort, but exchange hacks/failures in general, malware swapping addresses so your funds go to an attacker instead of the intended recipient, and lost access to keys are also pretty bad. I think descriptors, miniscript and taproot multisig probably provide for a good path forward to help prevent losing access to keys; and it’s possible that progress on BIP322 (signing messages against a Bitcoin address) may provide a path to avoiding address swapping attacks.
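
As an illustration of the sort of thing I mean, a 2-of-3 setup that several wallets could share is just one descriptor string — the xpubs here are placeholders, not real keys:

```python
# Illustrative 2-of-3 output descriptor; each cooperating wallet imports the
# same string and derives the same multisig addresses (placeholder xpubs).
descriptor = "wsh(sortedmulti(2,xpubAAA.../0/*,xpubBBB.../0/*,xpubCCC.../0/*))"
```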

Technical solutions are, in some sense, all you can hope for if you’re doing self-custody; but where a bank/custodian is involved (good) regulation might be useful too: requirements to keep customer data protected or destroyed, third-party audits to ensure the best-practices procedures you’re claiming to follow are actually being followed, etc. If custodians store funds in taproot addresses, it may be feasible to do privacy preserving (zero-knowledge) proofs of solvency, eg, making it harder for fly-by-night folks to run ponzi schemes or otherwise steal their customers’ funds.

Obviously where possible these sorts of standards should be implemented via audited open source code rather than needing extensive implementation costs by each company. But one other thing to think about is whether regulations of this nature could be set up as industry standards (“we comply with the industry standard, competitor X doesn’t”) rather than necessarily coming from a government regulator — for one, it certainly seems questionable whether government regulators have the background to pick good best practices for cryptocurrency systems to follow. Though perhaps it would be better to have something oriented towards “consumer rights” than “industry” per se, to avoid it just becoming a vector for regulatory capture.

I think there’s been good progress on stabilising Bitcoin development — in 2015 through 2017 we were in a phase where people were seriously thinking of replacing Bitcoin’s developers: devs were opposing a quick blocksize increase, so the obvious solution was to replace them with people who weren’t opposed. If you think of Bitcoin as an experimental, payments-oriented tech startup, that’s perhaps not a bad idea; but if you think of it as a store of value it’s awful: you don’t get a reliable system by replacing experts because they think your plan is wrong-headed, and you don’t get a good store of value without a reliable system. But whatever grudges might show up now and then on twitter, that seems to be pretty thoroughly in the past, and there now seems to be much broader support for funding devs, and much better consensus on what development should happen (though perhaps only because people who disagree have moved to different projects, and new disagreements haven’t yet cropped up).

But while that might be near enough the 64x improvement to support today’s valuation, I think we probably need a lot more to be able to support continued growth in adoption.

Hopefully this is buried enough to not accidentally become a lede, but I’m particularly optimistic about an as yet unannounced approach that DCI has been exploring, which (if I’ve understood correctly) aims to provide long term funding for a moderate sized team of senior devs and researchers to focus on keeping Bitcoin stable and secure — that is auditing code, developing tools to find and prevent bugs, and doing targeted research to help the white hats stay ahead in the security arms race. I’m not sure it will get off the ground or pass the test of time, and if it does, it will probably need to be replicated by other groups to avoid becoming worryingly centralising, but I think it’s a promising approach for supporting the next 8x improvement in security and robustness, and perhaps even some of the one after that.

I’ve also chatted briefly with Jeremy Rubin who has some interesting funding ideas for Judica — the idea being (again, if I haven’t misunderstood) to try to bridge the charitable/patronage model of a lot of funding of open source Bitcoin dev, with the angel funding approach that can generate more funds upfront by having a realistic possibility of ending up with a profitable business and thus a return on the initial funding down the road.

That seems much more blue-sky to me, but I think we’ll need to continue exploring out-there ideas in order to avoid centralisation by development-capture: that is, if we just expand on what we’re doing now, we may end up where only a few companies (or individuals) have their quarterly bottom line directly affected by development funding, and are thus shouldering the majority of the burden while the rest of the economy more-or-less freeloads off them, and then having someone see an opportunity to exploit development control and decide to buy them all out. A mild example of this might be Red Hat’s purchase of CentOS (via an inverse-acquihire, I suppose you could call it), and CentOS’s recent strategy change that reduces its competition with Red Hat’s RHEL product.

(There are also a lot of interesting funding experiments in the DeFi/ethereum space in general, though so far I don’t think they feed back well into the “ongoing funding of robustness and security development work” goal I’m talking about here)

There are probably three “attacks” that I’m worried about at present, all related to the improvements above.

One is that the “modularisation” goal above implies a lot of code being moved around, with the aim of not really changing any behaviour. But because the code that’s being changed is complicated, it’s easy to change behaviour by accident, potentially introducing irritating bugs or even vulnerabilities. And because reviewers aren’t expecting to see behaviour changes, it can be hard to catch these problems: it’s perhaps a similar problem to semi-autonomous vehicles or security screening — most of the time everything is fine so it’s hard to ensure you maintain full attention to deal with the rare times when things aren’t fine. And while we have plenty of automated checks that catch wide classes of error, they’re still far from perfect. To me this seems like a serious avenue for both accidental bugs to slip through, and a risk area for deliberate vulnerabilities to be inserted by attackers willing to put in the time to establish themselves as Bitcoin contributors. But even with those risks, modularisation still seems a worthwhile goal, so the question is how best to minimise the risks. Unfortunately, beyond what we’re already doing, I don’t have good ideas how to do that. I’ve been trying to include “is this change really a benefit?” as a review question to limit churn, but it hasn’t felt very effective so far.

Another potential attack is against code review — it’s an important part of keeping Bitcoin correct and secure, and it’s one that doesn’t really scale that well. It doesn’t scale for a few reasons — a simple one is that a single person can only read so much code a day, but another factor is that any patch can have subtle impacts that only arise because of interactions with other code that’s not changing, and being aware of all the potential subtle interactions in the codebase is very hard, and even if you’re aware of the potential impacts, it can take time to realise what they are. Having more changes is thus one problem, but dividing review amongst more people is also a problem: it lowers the chance that a patch with a subtle bug will be reviewed by someone able to realise that some subtle bug even exists. Similarly, having development proceed quickly and efficiently is not always a win here: it reduces the time available to realise there’s a problem before the change is merged and people move on to thinking about the next thing. Modularisation helps here at least: it substantially reduces the chance of interactions with entirely different parts of the project, though of course not entirely. CI also helps, by automating review of classes of potential issues. I think we already do pretty well here with consensus code: there is a lot of review, and things progress slowly; but I do worry about other areas. For example, I was pretty surprised to see PR#20624 get proposed on a Friday and merged on Monday (during the lead up to Christmas no less); that’s the sort of change that I could easily see introducing subtle bugs that could have serious effects on p2p connectivity, and I don’t think it’s the sort of huge improvement that justifies a merge-first-review-later approach.

The final thing I worry about is the risk that attackers might try subtler ways of “firing the devs” than happened last time. After all, if you can replace all the people who would’ve objected to what you want to do, there’s no need to sneak it in and hope no one notices in review, you can just do it; and even if you don’t get rid of everyone who would object, you at least lower the chances that your patch will get a thorough review by whoever remains. There are a variety of ways you can do that. One is finding ways of making contributing unpleasant enough that your targets just leave on their own: constant arguments about things that don’t really matter, slowing down progress so it feels like you’re just wasting time, and personal attacks in the media (or on social media), for instance. Another is the cancel-culture approach of trying to make them a pariah so no one else will have anything to do with them. Or there’s the potential for court cases (cf Angela Walch’s ideas on fiduciary duties for developers) or more direct attempts at violence.

I don’t think there’s a direct answer to this — even if all of the above fail, you could still get people to leave by offering them bunches of money and something interesting to do instead, for example. Instead, I think the best defense is more cultural: that is, having a large group of contributors, with strong support for common goals (eg decentralisation, robustness, fixed supply, not losing people’s funds, not undoing transactions) that’s also diverse enough that they’re not all vulnerable to the same attacks.

One of the risks of funding most development in much the same way is that it encourages conformity rather than diversity — an obvious rule for getting sponsored is “don’t bite the hand that feeds you” — eg, BitMEX’s Developer Grant Agreement includes “Not undertaking activities that are likely to bring the reputation of … the Grantor into disrepute”. And I don’t mean to criticise that: it’s a natural consequence of what a grant is. But if everyone working on Bitcoin is directly incentivised to follow that rule, what happens when you need a whistleblower to call out bad behaviour?

Of course, perhaps this is already fine, because there are enough devs who’ll happily quit their jobs if needed, or enough devs who have already hit their FU-money threshold and aren’t beholden to anyone?

To me though, I think it’s a bit of a red flag that LukeDashjr hasn’t gotten one of these funding gigs — I know he’s applied for a couple, and he should superficially be trivially qualified: he’s a long time contributor, he’s been influential in calling out problems with BIP16, in making segwit deployment feasible, in avoiding some of the possible disasters that could have resulted from the UASF activation of segwit, and in working out how to activate taproot, and he’s one of the people who’s good at spotting subtle interactions that risk bugs and vulnerabilities of the sort I talked about above. On the other hand, he’s known for having some weird ideas, can be difficult to work with, and maybe his expectations are unrealistic. What’s that add up to? Maybe he’s a test case for this exact attack on Bitcoin. Or maybe he’s just had a run of bad luck. Or maybe he just needs to sell himself better, or adopt a more business-friendly attitude — and I guess that’s the attitude to adopt if you want to solve the problem yourself rather than rely on someone else to help.

But… if we all did that, aren’t we hitting that exact “conformity” problem; and doesn’t that more or less leave everyone vulnerable to the “pariah” attack, exploitable by someone pushing your buttons until you overreact at something that’s otherwise innocuous, then tarring you as the sort of person that’s hard to work with, and repeating that process until that sticks, and no one wants to work with you?

While I certainly (and tautologically) like working with people who I like working with, I’m not sure there’s a need for devs to exclusively work with people they find pleasant, especially if the cost is missing things in review, or risking something of a vulnerable monoculture. On the other hand, I tend to think of patience as a virtue, and thus that people who test my patience are doing me a service in much the same way exams in school do — they show you where you’re at and what you need to work on — so it might also be that I’m overly tolerant of annoying people. And I did also list “making working on Bitcoin unenjoyable” as another potential attack vector. So I don’t know that there’s an easy answer. Maybe promoting Luke’s github sponsors page is the thing to do?

Anyway, conclusion.

Despite my initial thoughts above that taproot might be less of a priority this year in order to focus on robustness rather than growth, I think the “let wallets do more multisig so users’ funds are less likely to be lost” is still a killer feature, so I think that’s still #1 for me. I think trying to help make the p2p and mempool code more resilient, more encapsulated and more testable might be #2, though I’m not sure how to mitigate the code churn risk that creates. I don’t think I’m going to work much on CI/tests/static analysis, but I do think it’s important so will try to do more review to help that stuff move forward.

Otherwise, I’d like to get the anyprevout patches brought up to date and testable. In so far as that enables eltoo, which then allows better reliability of lightning channels, that’s kind-of a fit for the robustness theme (and robustness in general, I think, is what’s holding lightning back, and thus fits in with the “keep lightning growing at the same rate as Bitcoin, or better” goal as well). It’s hard to rate that as highly as robustness improvements at the base Bitcoin layer though, I think.

There are plenty of other neat technical things too; but I think this year might be one of those ones where you have to keep reminding yourself of a few fundamentals to avoid getting swept up in the excitement, so keeping the above as foundations is probably a good idea.

Otherwise, I’m hoping I’ll be able to continue supporting other people’s dev funding efforts — whether blue sky, or just keeping on with what’s working so far. I’m also hoping to do a bit more writing — my resolution last year was meant to be to blog more, and didn’t really work out, so why not double down on it? Probably a good start (aside from this post) would be writing a response to the Productivity Commission Right to Repair issues paper; I imagine there’ll probably be some more crypto related issues papers to respond to over this year too…

If for whatever reason you’re reading this looking for suggestions you might want to do rather than what I’m thinking about, here are some that come to my mind:

  • Money: consider supporting or hiring Luke, or otherwise supporting (or, if it’s in your wheelhouse, doing) Bitcoin dev work, or supporting MIT DCI, or funding/setting up something independent from but equally as good as MIT DCI or Chaincode (in increasing order of how much money we’re talking). If you’re a bank affected by the recent OCC letter on payments, making a serious investment in lightning dev might be smart.
  • Bitcoin code: help improve internal test coverage, static analysis, and/or build reproducibility; set up and maintain external tests; review code and find bugs in PRs before they get merged. Otherwise there’s a million interesting features to work on, so do that.
  • Lightning: get PTLCs working (using taproot on signet, or ecdsa-based), anyprevout/eltoo, improve spam prevention. Otherwise, implement and fine-tune everything already on lightning’s TODO list.
  • Other projects: do more testing on signet in general, test taproot integration on signet (particularly for robustness features like multisig), monitor blockchain and mempool activity for oddities to help detect and prevent potential attacks asap.

(Finally, just in case it’s not already obvious: these are what I think the priorities are today; there’s no implication intended that anything outside of these ideas shouldn’t be worked on.)

Activating Soft Forks in Bitcoin

General background: Bitcoin is a consensus system — it works because there are a set of rules on how Bitcoin transactions work, and everyone agrees on what they are. Changing those rules is called “forking” — when some people want to change the rules in a non-backwards compatible way while others don’t, that results in a new altcoin that follows the changed rules (eg BCH), while Bitcoin’s rules stay the same; and when everyone agrees to change the rules in a backwards compatible way, we have what’s called a soft fork. Most of the interesting development in Bitcoin doesn’t require changes to the consensus rules, but some changes do. In essence, these sorts of changes touch the fundamentals of Bitcoin, and thus warrant extra care and attention.

Specific background: The proposed taproot soft fork is something we’ve been working on for quite a while now, and the underlying code changes got merged into the bitcoin codebase a bit over a week ago, just in time for the 0.21 feature freeze. Those changes allow the new taproot rules to be used for testing purposes on the regtest chain, and also on the new signet chain, but do not change how things work on the real, live, Bitcoin network. The idea there is to allow people to check that the major upgrade to 0.21 works as expected and is safe to widely deploy, and only after that’s done worry about the soft fork. Exactly how to activate the soft fork is something of an open question though — while we’ve done a number of them in the past, the last one ended up a bit of a debacle. Back in July, we started discussing activation methods more seriously, and came up with some ideas.

At the time, I wanted to get a better idea of what people thought of the fundamental constraints, so I tried writing up a survey and sent an email to a bunch of smart dev-type people inviting them to fill it in if they were interested:

We seem to be getting to the point where people are making memes about activation methods [0] [1] [2], but I think we’re also still at the point where pretty smart people still have big differences of opinion over technical issues [3].

I feel like we’ve made some progress on ##taproot-activation, but talking with harding after he did his wiki summary of the state of things, I didn’t feel like that was quite getting at the heart of the differences. So I’ve had a go at writing up a survey for (what I think are) the underlying questions that are driving the differences between the proposals. There’s only 10 real questions there, but I’ve added a whole bunch of text that (hopefully) neutrally explains most of the tradeoffs between the choices, hopefully without introducing too much of my own bias. I’m hoping it covers all the choices people are currently favouring, even if they’re “comically moronic”, and, ideally at least, will give some clue as to the tradeoffs people are considering/ignoring that’s leading them to that preference. Ideally the results might indicate where there’s already widespread agreement, what might be worth talking through more, and what productive ways there might be of dealing with any remaining disagreements…

If there’s some important issues / responses the survey doesn’t cater for, that would be good to know. And, obviously, if you’re happy to fill in the survey, that would be awesome

My thought is, assuming the response isn’t “this is a stupid, counter-productive idea”, to post the url at the next weekly core dev irc meeting for a broader but still cluey audience, and post to bitcoin-dev and ##taproot-activation afterwards, and then do something about collating and publishing the results, which might hopefully help promote intelligent discussion vs meme wars…

I’ve bcc’ed people so they don’t get included in replies if they’re not interested; but fwiw the list is […]. Random collection of people who have participated in recent discussions, might have varying strong opinions on some of the topics, and/or who did bunches of work and who I’d be embarrassed to exclude.

Steve Lee, A. Jonas, and Mike Schmidt helped with drafting and will hopefully help with/do all the work of collating responses; Dave Harding, Russell O’Connor both offered helpful comments that assisted significantly with early drafting. Any remaining stupid counter-productivity is mine of course.

(I’m hoping this survey will help result in a better idea of what to do about activation which will then inform what we actually do. But either way it’s certainly not a “vote by sms now, and whichever answers get the most votes will be your new american idol, uh, taproot activation method” thing, or even a “nope, everyone else voted X, your opinion is unimportant”. Hopefully that didn’t need to be said.)

I sent the survey to about 20 people and got 13 responses (including my own). I figure not identifying who responded or tying responses with people is probably best, since that avoids tying anyone to their opinion from a month or three ago, and thus maybe makes it easier for people to adjust their views to new information and eventually come to an agreement.

If you’re interested in the details around this topic, I think the survey’s worth a read, and I’ve left it open in case anyone wants to fill in their own answers.

The results turned out harder to collate than I expected — mostly because google’s CSV export isn’t that great when you have “choose as many as you like” questions that each have full sentences for the answers, but also because there were fewer obvious patterns than I expected. But anyway.

Results for the first set of questions, about activation via enforcement by a supermajority of hashpower, ended up being:

  • What do you consider a reasonable threshold for activation by hashpower supermajority?
    • Eight people selected 90%-95%, 85%-95%, 90% or 95%
    • Four people selected 60%/70%/75% as the lower bound and 95% as the upper
    • One person selected just 75%
  • If everything goes well, how long will it take miners to upgrade and enable signalling for activation by hashpower supermajority?
    • Six people chose “up to 12 months”
    • Five people chose “up to 3 months”
    • One person chose “up to 6 months”
    • One person didn’t answer
  • How long should it be at minimum between software release and activation actually taking effect?
    • Five people chose “6 retarget periods” (3 months)
    • Four people chose “4 retarget periods” (2 months)
    • Two people chose “2 retarget periods” (1 month)
    • One person didn’t answer
    • One person gave a free form answer: “Unpopular opinion: between 3 and 6 months. Need to give time for users to update too. Otherwise miners can do play dirty (I suppose but I haven’t thought deeply about this). “

For the “flag day activation” section, the answers were:

  • What concerns do you think should be taken into account in choosing a flag day?
    • Eleven people chose “plenty of people will enforce the rules, after the flag day, though maybe not the flag day itself”
    • Eleven people chose “sufficient number of people enforcing the flag day that ignoring it will be economically unviable”
    • Seven people chose “almost every node will enforce the flag day”
    • Five people chose “not introducing precedents that will cause problems”
    • Four people chose “soon enough to keep development momentum”
  • How long away should the flag day be?
    • Seven people found 12 or 18 months acceptable. Of those, six found 12, 18 or 24 months acceptable, and two of them also considered 36 or 48 months acceptable.
    • One person found only 6 or 12 months acceptable.
    • One person found only 36 or 48 months acceptable.
    • Two people only indicated 12 months.
    • One person only indicated 18 months.
    • One person chose “never”.
  • When should we decide on the flag day?
    • Nine people chose answers that depend on uptake (seven wanted to see how users upgrade; six wanted to see how miners behave; five wanted to be sure a flag day is actually needed)
    • Four people chose “before the first activation attempt”, though two of those also wanted to see how users upgrade, and one also selected the “never” option (not the same person that chose “never” for the previous question)
  • How should disagreement on a choice of flag day be resolved?
    • Six people indicated “whatever the BIP authors and core maintainers agree on is fine”
    • Four people indicated “only do a flag day if there’s clear consensus” (no overlap with the previous six)
    • Four people chose “Pick my answer (or a later one)”; one of those also chose “Pick the average answer”
    • There were a bunch of free form answers as well: “Pick a reasonable answer”, “6 months or 1 year only, unless there’s a clear reason more time is required (e.g., fixing timestamp overflow bugs in far future). Anything in between 6 months and 1 year is bikeshedding, anything less than 6 months is too fast, and anything further than 1 year is too far out”, “Pick the Nth percentile wait, where N is pretty high. I’m fine waiting longer, I just want the flag day locked in”, “rough consensus and running code”, “A thought I had during the segwit2x debacle is that I don’t think there is consensus for playing games of chicken with consensus. I think taproot is a good idea, but I don’t think chain splits are, and I think we should take our time to be careful about deploying consensus changes in a way that is not likely to produce a chain split. No one has any reason to think that taproot won’t activate, so let’s not rashly move forward in a way that could provoke a chain split due to errors or oversights.”
  • How will we know there is community support for a flag day by default?
    • Ten people chose “enough time for reasonable objections to be reported, but none have been”
    • Nine people chose “uptake of software supporting hashpower supermajority activation”
    • Eight people chose “we see manual signalling” (everyone chose at least one of these three responses, except for one person who only entered a free form response)
    • Five people chose “uptake of opt-in activation”
    • Four people chose “we see price information”
    • Four people chose “we already do”
    • One person chose “we never will” (along with other options)
    • There were also a few free form additions: “Every softfork is a user-activated softfork”, “when anyone on reddit/reading coindesk would understand that there are no objections, and understand the care that went into design.”, “Know it when we see it, but should only be used if necessary”
  • How should users opt-in to flag day activation?
    • Seven people chose “never opt-in, should be the default for everyone once community support is established”
    • Five people chose “upgrading to a new (optional) version of bitcoin core” — eleven people chose at least one of these two options
    • Four people chose “setting a configuration flag”
    • Four people chose “an alternative forked client”
    • Six people chose “editing the source and recompiling”, however all of those people also chose at least one other option
    • The only free form comment was: “Configs can be set wrong accidentally and is hard to test, a bit harder to run wrong binary for a long time. (speaking against config flag option)”
  • Signalling a flag-day activation?
    • Six people chose “mandatory signalling only when bringing activation forward”
    • Five people chose “always require signalling prior to activation” (nine people chose one of these options)
    • One person chose “never mandate signalling”
    • Two people gave no answer, and one just gave the free form response: “No opinion at this time”
    • Remaining free form comments were: “I think forced signaling flag days are really only interesting for two phase deployments where the first phase doesn’t know about the flag day but hasn’t timed out, and where the flag day is far enough out that disruption from it can be minimized (e.g. miners can get told to at least adjust their versions)”, “We want mandatory signalling to bring to ensure activation on nodes that do not enforce the flag day. The “proposed update to BIP 8 [3]” is a very good solution to this.”

I don’t think the “opinion weighting” answers ended up being very interesting:

  • How informed are your opinions?
    • Six people chose “based on years of experience with multiple activations”
    • Two people chose “in depth study of bitcoin activation”
    • Four people chose “knowledge about other aspects of bitcoin and reading the questions”
    • One person chose “you wanted my opinion, you got it. caveat emptor”.
  • How confident are you about your opinions?
    • Eight people chose “right balance of tradeoffs”
    • Three people chose “not very”
    • One person chose “anything else will be a disaster”

Overall free form comments were:

  • Your choices for how sure I am are pretty rough. I mean, there probably isn’t anyone with more activation experience than me (though several equal) but no one has activated anything in the current network, no one has activated taproot. etc. A sentiment I hoped to be able to express was support for nested activations like harding’s start now and improve later. Forced activation specifics are likely to be complicated and painful to decide and that decision would be greatly simplified by initial robust deployment. … plus I think there are good odds that forced activation will be unnecessary (esp if its clear that we will use it if needed)– so why serialize getting this stuff activated on figuring out forced activation details? Better to do whatever thing has good odds of getting it activated fast assuming miners cooperate then worry about more dramatic things if they don’t. Without the initial attempt everyone is just guessing– guessing on uptake– guessing on miner behaviour– etc. Plus people who want more aggressive and less aggressive approaches differ a lot based just on how pessimistic they are about miners, a question that will be resolved by seeing what miners do. The primary counter argument to this approach is that if we don’t plan for a flagday in advance there is a risk that the moment miners drag their feet at all, the pitchforks will come out and the least reasonable people will immediately move forward with a 30 day flagday or whatever. I think that this can be avoided by the author of the parameters making a clear statement of intent. That if users adopt but miners holdback the intention will be to flag day and we’ll start discussing the details of that in 3 months… or something like that.
  • I can’t answer about how “correct” my opinions are… My feelings about activation methods are strongest when it comes to the narrative around them, and least strong when it comes to the specifics (provided that they are reasonably in line with a good narrative). I think we could take almost any activation method and tell a story about it that is terrible — miner-activation means that miners dictate the rules; user-activation means that developers dictate the rules, etc. In my view the most important thing here is to have as strong a sense as reasonably possible that no one is opposed to the consensus change and that activation is very likely to not cause a chain split (at the time of activation or down the road). How we get there is a matter of debate and discussion, but if we can agree that those two principles are paramount and other issues are secondary, then I think I’d be on board with any number of proposals that are crafted around such a narrative.
  • To summarize my current thinking:
    • deploy bip8(false) in Bitcoin Core
    • If it becomes clear that the miners use their veto to prevent activation let users coordinate on a flag day. In order to opt-in to flag day activation (bip8(true)) users should create their own fork of Bitcoin Core. For this to work properly, it would be ideal to use Anthony Town’s suggested changes to BIP 8. If the users fail to cooperate and the softfork doesn’t activate, then that’s fine too, but maybe the softfork wasn’t useful enough then. We can propose another softfork that hopefully gets more user support (sigagg, etc.).
    • Thanks for making this! Looking forward to see what comes out of it.
  • I think that we’ve micro-managed soft-forking patterns as the biggest threat that bitcoin faces and worthy of all the attention and fuss… There are bigger problems and challenges facing bitcoin centralization that we can work on (e.g., proliferation of storing funds in custodial wallets), but that feel more outside our control, so we focus on something that is in our control. I think any reasonable choice that allows us to ship small soft-forks when recommended by devs and users is the right tradeoff when we’re already suffering from other centralization vectors.
  • Perhaps not a disaster for Taproot, but some deviation could set very harmful precedents. Looking at upgrade history, I worry we ought to set a minimum 1 year after software release before starttime, but the question only asked about a *minimum*.
  • Waiting too long means hats come out of closets and we get something much less organised and safe. Let’s get a flag day set far enough in the future and move on with our lives

Hopefully the length of that summary when there’s only 13 responses serves as a good explanation why I didn’t summarise this earlier, or try getting more responses first…

One thing I’ve been thinking more and more is that the exact activation method isn’t really what’s important. I think that the whole “BIP 9 allows an activation attempt to fail” framing has been somewhat misleading — while it’s technically true, the fact that the activation could succeed at all is more important, and that possibility implies that we have to be absolutely confident that a deployment is definitely worth activating before we risk activation. And if we are absolutely confident it’s worthwhile, then so is everyone else, and we will eventually activate it one way or another, and the details of exactly how it happens are just that: details. And while details matter, it’s much more important to make sure the idea is sound, and to do that first — so I think it’s actually more of a big deal that, in addition to all the review and unit tests and guides and explanations, we’ve now also got taproot activated on signet, which should make third-party development feasible, and in some sense allow an extra layer of testing.

But anyway, as far as activation parameters go, your takeaways from the above might be different, but I think mine are:

  • Activation threshold should stay at 95% (or at most be reduced to 90%)
  • We don’t really know how quickly miners will react; so hope for a quick response within a few months, but plan for it taking up to a year, even if everything goes well
  • Setting the startheight to be about a month or two’s worth of blocks away is probably about right (along with a retarget period each for STARTED and LOCKED_IN, this gets at least two or three months of deployment time before activation is possible)
  • Almost everyone is open to the idea of a flag day in some circumstances
  • If there’s a flag day, we should expect it to be a year or two away (though it might turn out sooner than that, or later)
  • There probably isn’t support for setting a flag day initially (only 4/13 support choosing a day early enough; only (a different) 4/13 think we already know there’s sufficient community support; 9/13 want to see adoption of hashpower-based activation to establish there’s consensus for a flag day)
  • Almost everyone wants to see as many nodes as possible enforce the rules after activation
  • Most people seem to be willing to accept bringing a flag day forward by mandatory signalling
  • There’s not a lot of support for opting-in to flag day activation by setting a configuration option

So I think to me that means:

  • Initial activation parameters should be included in a minor update release (eg, version 0.21.1 or 0.21.2) and be something like:
    • lockinontimeout = false
    • startheight = release height + 3 retarget periods, rounded up (to get a two or three month delay before activation is possible)
    • timeoutheight = startheight + 209,664 blocks (4 years’ worth — in case the 12-24 months estimates turn out to be too low)
    • threshold = 95% (no point changing it)
  • We then see what happens — if miners activate it within the first three or 12 months: great. If they don’t, we’ll have more data, and use that to refine the deployment strategy. For example, if miners have been having legitimate problems deploying the new software, we’ll help them fix that; if not, and there’s plenty of uptake of the new software and other support, we’ll run some numbers on the rate at which users are upgrading, and pick a date for a flag day based on that.
  • When we work out what flag day is best supported by the data (suppose for the sake of example that it’s startheight + 18 months, to be roughly in line with what people expected per the above), then we’d update the deployment parameters to:
    • lockinontimeout = true
    • startheight = unchanged
    • timeoutheight = startheight + 78,624 blocks (18 months’ worth)
    • threshold = unchanged
  • The updated activation parameters should be backported for each major version (eg, if startheight is March 2021 and timeoutheight is in September 2022, that might be 0.21.5, 0.22.3, and 0.23.1, and already included in master by the time 0.24.0 is branched off)

This is more or less the “gently discourage apathy” approach though with a longer initial timeout.

Note that with 13 version bits reasonably available for use (BIP320 reserves the remainder, and miners are actively using them), a four year timeout still allows for a new soft fork every four months on average without having to overlap version bits or come up with a new signalling method; which seems likely to be more than sufficient.
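
As a quick sanity check on those block counts (a back-of-the-envelope sketch only, assuming 2016-block retarget periods of roughly two weeks each):

```python
# Back-of-the-envelope check of the block counts used above; assumes a
# retarget period is exactly 2016 blocks and lasts roughly two weeks.
PERIOD_BLOCKS = 2016
PERIODS_PER_YEAR = 26             # ~52 weeks / 2 weeks per period

four_years = 4 * PERIODS_PER_YEAR * PERIOD_BLOCKS
eighteen_months = 39 * PERIOD_BLOCKS          # 1.5 years = 39 retarget periods

print(four_years)        # 209664 -- the 4-year timeoutheight offset above
print(eighteen_months)   # 78624  -- the 18-month flag day example above

# With 13 version bits usable and a 4-year timeout per deployment,
# non-overlapping signalling still allows a new soft fork roughly every
# 48 / 13 ~= 3.7 months on average.
print(48 / 13)           # ~3.69
```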

Compared to “modern soft fork activation“, I think the main differences are that it plans for an earlier flag day (though only if that’s actually supportable via adoption data), does not include a config parameter for updating to flag day activation but instead requires upgrading to a new minor release (unavoidable given the flag day isn’t decided in advance and manually setting the flag day would be too easy to get wrong, which risks breaking consensus), and requires mandatory signalling if the flag day occurs.

If you want to maximise the number of nodes that will enforce the rules should a flag day occur, but also only choose the flag day after an initial activation attempt is already widely deployed, then you have no choice but to make signalling mandatory when the flag day occurs. I think it’s a good idea to do a little more work to minimise the costs that mandatory signalling might impose on miners, so have proposed some updates to BIP 8 to that effect — one to not require signalling during LOCKED_IN, and one to reduce signalling during MUST_SIGNAL from 100% of blocks down to the threshold figure; I think the latter also is potentially somewhat protective against miner gamesmanship, as noted in the link. That’s still not zero-impact on miners in the way the “modern soft fork activation” approach is, but I think it’s near enough.
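
To make the threshold-signalling idea a bit more concrete, here’s a minimal sketch (illustrative Python with made-up names, not the BIP 8 reference code): during a MUST_SIGNAL period, a block that fails to signal only becomes invalid once so many blocks have failed to signal that the threshold could no longer be met.

```python
# Illustrative sketch of threshold-only signalling during MUST_SIGNAL;
# names and structure are made up for the example, this is not BIP 8 code.
PERIOD = 2016                        # blocks per retarget period
THRESHOLD = 1815                     # ~90% of 2016 (1916 would be ~95%)
MAX_NON_SIGNALLING = PERIOD - THRESHOLD

def must_signal_period_ok(signals):
    """signals: 2016 booleans, True where the block signalled.

    Returns True if at most (PERIOD - THRESHOLD) blocks failed to signal,
    ie if the period still meets the threshold."""
    assert len(signals) == PERIOD
    non_signalling = 0
    for signalled in signals:
        if not signalled:
            non_signalling += 1
            if non_signalling > MAX_NON_SIGNALLING:
                # under this rule the offending block would simply be invalid
                return False
    return True

print(must_signal_period_ok([True] * 1815 + [False] * 201))  # True: exactly 90% signalled
print(must_signal_period_ok([True] * 1613 + [False] * 403))  # False: only ~80% signalled
```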

Apart from that, I think the current BIP 8 spec/code should more or less work for the above already.

A Paradigm Shift

(I would have liked to have come up with a more original post title, but found myself unable to escape this one’s event horizon)

I’ve been at Xapo for a bit over a couple of years now, and it’s been pretty great. Earlier this year, we’d been coming up to performance review time, so, as you do, I’d been thinking about what changes would be cool — raise, promotion, different responsibilities, career growth, whatever — and, largely, coming up blank, particularly given we’d recently taken on Amiti as an additional dev working on bitcoin upstream. I mean, no one’s going to say no to more money for doing the same thing, but usually if you want significant changes you have to make significant changes, and I was feeling pretty comfortable: good things to work on, good colleagues to work with, and not too much bureaucratic nonsense getting in the way. In many ways, my biggest concern was that I was maybe getting complacent. So, naturally, come Good Friday, after responding to a late night ping on slack, I found out I was being fired — and being a remote worker, without even a kiss on the cheek as is traditional for betrayal at that time of year!

Okay, that’s not a precisely accurate take: I got made redundant along with plenty of others as part of a pretty major realignment/restructuring at Xapo. This was pretty unexpected, since the sale of the institutional part of the business to Coinbase had seemed like it had given Xapo a really long runway to avoid having to make painful cuts, though on the other hand I had been concerned enough about the lack of focus (or of a nice brief elevator pitch for what Xapo was) to have been mailing Wences ideas about it last year, so some sort of big realignment was not a total surprise either. It’s summarised in a post on the Xapo blog in May as “relaunching as a digital bank”, which I don’t think is really all that clear; and there’s a later post with a bunch of FAQs which is helpful for the details, but not really the big picture. The difference between “custodial wallet” and “bank” has always seemed pretty minor to me, so Xapo’s always seemed pretty bank-like anyway — although it’s still worth distinguishing between a bank where all the customers’ balances are fully backed, and the more normal ones with fractional reserve, where funds in deposit accounts are mostly backed by other customers’ debts, and are thus at risk of bank runs, which requires deposit insurance backed by central bank money printing and so on.

I think it’s fair to describe Xapo’s new direction as a change of focus from something like “bitcoin’s cool, we’ll help you with it” to something like “protecting your wealth is cool, we’ll help you with it” — but when you do that, bitcoin becomes just one answer, with things like USD or gold or even some equities as other answers, just as they are for Libra. That’s also a focus that matches Wences’ attitude (or life story?) better — protecting you from currency collapses and the like is a mission; playing with cool new technology is a hobby. And while I think it’s a good mission in general, I think it’s particularly timely now with governments/banks/currencies facing pretty serious challenges as a result of response to the covid19 pandemic. It’s also a much tighter focus than Xapo’s had over the time I’ve been with the company — unless you’re a massive conglomerate like Google or Disney, it’s important to be able to say “no — that’s a good idea but it’s not for us, at least not yet” so that you limit the things you’re working on to things that you can do well, so I think that’s also a big improvement for Xapo. And as a result, I can’t even really object to Xapo not retaining a bitcoin core dev spot — in my opinion a focus on wealth preservation for bitcoin mostly means not screwing things up (at least for now) rather than developing new things. Hopefully once Xapo reopens to new customers and those customers are relying on bitcoin as a substantial store of wealth, and the numbers are all going up, it will make sense to have in-house expertise again, but, well, one of the benefits for companies that build on open source platforms is that you can free-ride for a while, and it doesn’t make much sense to begrudge that. I think it’s definitely going to be a challenging time for Xapo to re-establish itself especially with the big personnel changes, but I’m hopeful that it will work out well. I have exercised my stock options for what that’s worth, though I don’t know if that counts as skin in the game or a conflict of interest.

Wences was kind enough to provide a few months’ notice rather than terminating the contract immediately (not something that he was able to do for many of the other Xapo folks who were made redundant around the same time, as I understand it), and even kinder to provide some introductions to people who might fund me in continuing in the same role. It’s certainly a bad negotiating tactic, but the Paradigm guys (they’re a California-based company, so guys still counts as gender neutral, right?) were Wences’ first recommendation, and after getting some surprisingly positive recommendations about them, talking to them, and reading some of their writings, I didn’t really see much need to look elsewhere. Like I said, complacent. (Or, if you prefer, perhaps “lacking even first-world problems” is a better description). Once word filtered through the grapevine a little, I did get an offer from the Chaincode folks to see if I needed some support so that I didn’t have to worry about urgently getting a new job in the midst of a global pandemic, but I figured it’s “better for bitcoin” for a company like Paradigm that hasn’t supported development directly until now to get some experience learning how to do it than to join an existing company that’s already doing pretty much everything right, and it didn’t feel like too much of a risk on my part. So maybe at least there I managed a not-completely-complacent choice? And while there’s no particular change in job description, I’m hoping working with folks like Arjun and Dan might help me actually finish fleshing out and writing up some ideas that aren’t able to be directly turned into code, and I’m hopeful for some cross-pollination from some of the ideas in the DeFi space that they pay attention to, which I’ve mostly been studiously ignoring so far, so I hope there’s a bit of potential for growth there.

Anyway, given I’m doing the same job just with a different company, there wasn’t really any impetus to write this up, but I’ve been using the change as an excuse to get some of the things I’ve been working on over the past little while actually published; hence the ANYPREVOUT update and the activation method draft in particular. Both of those I’d been hoping to publish at or shortly after the coredev meeting in March, but covid19 cancelled that for us, and the times since have been kind of distracting.

In conclusion, the moral of the story: take performance reviews more seriously in future.

COVID19 Thoughts

A month and a bit ago, I wrote up my take on covid19 on facebook. At the time, Australia was at 1300 cases, numbers were doubling twice a week, and I’d been pessimistically assuming two weeks between infection and detection. That led me to estimate that we’d be at 20,000 cases by Easter, and that we’d be close to capacity for our hospital system, but I was pretty confident that the measures we’d put in place by then would be starting to have an effect and we’d avoid having an utter catastrophe. I’d predicted by late April we’d be “arguing about how to get out of the shutdown” and have a gradual reopening plan by May — that looks like it’s come about now, with the PM and state premiers coordinating on how that should work, and the Queensland one, at least, beginning next week.

The other “COVID SAFE checks” also seem good to me: widespread testing, effective tracking and tracing of outbreaks, and having each stage conditional on outbreaks being contained. We’re in a much better state to do those things than we were two months ago. There’s also (as I understand it) been a lot of progress on increasing the capacity of hospitals to respond to outbreaks, so as far as “flattening the curve” goes (letting us go back to living a normal-ish life without exponential growth causing a disaster), I think we’re doing great.

It’s a more cautious reopening than I would have expected though: the four week minimum time between stages is perhaps twice as long as the theoretical minimum, but even that was twice as long as what I’d have expected the minimum politically tolerable time to be. It’s not clear to me how bad the economic damage is — I think we’ll get the first real economic stats next week, but the numbers I’m seeing so far (7% of payroll employees out of work, eg) aren’t as bad as I was expecting, while the forecasts (which are expecting a sluggish recovery) are worse. Maybe that just means we’ll be able to maintain patience in the short term, but should still expect things to be painful while the world tries to recover its supply chains over the next year or two?

The thing that has perhaps most impressed me about Australia’s response, especially compared to the US, has been the lack of politicisation. I don’t think you can have an effective emergency response when the people in charge of that response are pointing fingers at each other, and wasting time with gotcha questions to make each other look stupid. The National Cabinet approach, the willingness of the federal government to bend to some of the states’ concerns (particularly Victoria’s push to close schools prior to Easter), the willingness of states to coordinate under federal leadership and be aligned where possible, and above all mostly managing to work together rather than the usual policy of exaggerating disagreements, has been great. Unlike Soraya Lennie I think that’s a massive achievement by the PM and also the opposition leader. Morrison cancelling his trip to the footy was a good move, and Dan Tehan’s walkback of his criticism of Daniel Andrews was too — but forgiving both those mistakes rather than the usual approach of continually bringing them back up is also important.

Where I got things wrong was that the virus appears to be easier to limit than I’d expected. While I thought we’d be screwed for weeks yet, instead we started turning the corner just five days after my post, which itself was ten days after the government had started issuing bans on large gatherings and requiring overseas travelers to start self-isolating. We’ve also apparently had a much lower percentage of cases end up in the ICU — I think 1.75% of cases ended up in ICU in NSW, versus figures like 5% from China, or 2.6% from Italy? We’re currently at 97 deaths out of 6913 confirmed cases, which is 1.4%, so double the 0.7% reported from non-Wuhan China.

That fatality rate figure still makes it hard for me to find “herd immunity” strategies plausible — you probably need about 60% or more of the population to have been infected to get herd immunity, but 0.7% of 60% of Australia’s population is 103,000 deaths; compared to 3500 deaths per year from the regular flu in Australia, that seems unacceptably many to me — and perhaps you have to double that to match our observed 1.4% fatality rate anyway. And conversely, it makes it seem pretty unlikely that there’s already herd immunity anywhere — if there haven’t been that many unexplained deaths, it’s pretty unlikely that covid19 swept through somewhere prior to this, granting everyone left alive herd immunity.

Nevertheless, that seems to be the strategy Sweden is taking; currently they have over 3000 deaths, so if the 0.7% ratio holds that’s 430,000 cases, fewer if the ratio’s more like Australia’s 1.4%. However they are currently only reporting 24,000 cases — which adds up to a 12.5% fatality rate instead. Things seem to have stabilised for them at about 60-100 deaths per day; so to get from 430k cases to the 6M needed for herd immunity, that’s presumably going to result in a further 39,000 deaths, which at 80 deaths per day will take another 16 months. And Sweden’s reportedly doing some lockdown measures anyway, so even if that number of deaths is acceptable, it’s not clear to me that it’s an argument for “life as normal” rather than “we can deal with this via modest restrictions over quite a long time”. And additionally, I think Sweden has doubled their normal ICU capacity, and may have needed that extra capacity already.

Still, that Sweden’s death rate has stabilised rather than continuing to double also seems to be evidence that the virus does end up limited almost no matter what — though my guess is that this is more because once it becomes obvious to everyone, people start voluntarily limiting their exposure without needing government to mandate it. So perhaps that means the best thing governments can do here is force people to make good choices early, when they have access to good advice that hasn’t percolated through to the rest of the public, then ease off once that advice has spread. Having leaders do the opposite, and spread bad advice early — Florence’s “hug a Chinese” day, New York’s “keep going to restaurants” or Boris Johnson’s “shaking hands with everybody” — might therefore have been spectacularly harmful.

The US numbers don’t make sense to me at present: the CDC reports 1.2 million cases and 73 thousand deaths, but that’s a 6% fatality rate. If the deaths figure is accurate, but the real fatality rate is more like Australia’s 1.4%, that would mean there’s really 5.2 million cases in the country (which is still only 1.6% of the population, miles away from herd immunity); while if the cases figure is accurate, a fatality rate like Australia’s would imply only 17 thousand of the deaths were due to covid19, and 56 thousand were misreported. There’s certainly been reports of deaths being wrongly reported as due to covid19 in the US, but there’s also plenty of indications there hasn’t been enough testing, which would lead to the reported case numbers being way too low.

I don’t really have a further prediction at this point; I think there’ll be people worried the staged reopening is both too slow (people need to get back to work) and too fast (there’ll be actual outbreaks that could perhaps have been prevented if we stayed in lockdown), and maybe the timeline will get tweaked as a result, but there’s already some flexibility built in via the “COVID SAFE Plan” that will presumably allow things to open up further after some sort of government/health review, and the ability to defer stages if there’s an undue risk. As far as the economy goes, I mostly expect we’ll see a quicker than expected recovery: tourism and exporters will find it difficult but scrape by, I think — lack of international competition will probably mean some tourist places end up with a blow-out year; industries relying on immigration such as higher ed and real estate will still be in trouble for a while; but I can’t put a figure on where that will all end up. The budget will be a mess, and worse for the fact that we didn’t get back into surplus between dealing with the last crisis and this one coming along. I expect we’ll be stuck with having to take measures to avoid covid19 until it either mutates into something more like a normal flu, dies out everywhere, or we get a vaccine, which seems likely to be years away.

Bitcoiner Maximalism

I’ve been trying to come up with a good way of thinking about what to prioritise in Bitcoin work for a little while now — there’s so much interesting stuff going around, all of it Good For Bitcoin, that you need some way to figure out which bits are more important or urgent than others. One way to think about it is “what will make the price go up?”, another is “how do we beat all the altcoins?”, but both of those seem a bit limited in scope. Maybe an alternative is to think about it backwards: if Bitcoin gets better, more people will want to be Bitcoiners; so what would it take to make more people Bitcoiners? That sort of question is a pretty common one in sales/marketing, and they tend to use “sales funnels” for analysing it — before becoming a customer, people have to hear about a product, be interested in it, and find it for sale somewhere, and you get some attrition at each step; reducing the attrition at any step (without making it worse at any other) then increases your sales and your numbers go up.

One way of looking at that might be to consider the normal sorts of things Bitcoiners do: they buy some Bitcoin, set up their own wallet to have control over their funds, run a full node, and maybe eventually start giving some input into Bitcoin’s development (whether that be in the form of code, discussion, investment or making bets over twitter). The problem with thinking about things that way is that while there are some clear incentives for the first steps (Bitcoin’s increasing in value so a good investment, or at least better than earning negative rates; self-custody reduces the risk of some company running off with all the coins you thought were yours), there’s a breakdown after that: having a hardware wallet under your mattress is cheap and easy, but running a full node constantly is an ongoing cost and maintenance burden, and what’s the actual direct benefit to you? If you look at the numbers, those steps are something like 8B to 160M (2%) to 4M (2.5%) to 50k (1.25%) to maybe 900 (1.8%), but there are no obvious levers to use to increase either the 2.5% or 1.25% figures, so that approach doesn’t seem that useful.

A different way of looking at it might be to first break out people who regularly transact with their Bitcoin balance, rather than just buying and holding. The idea being that this covers traders who actively manage their Bitcoin investment, merchants who sell products for Bitcoin, people who get paid in Bitcoin, and so on. I’ve got no idea what a valid number for this is — BitPay claims to be “Trusted by thousands of businesses — worldwide” which makes it sound like the number probably isn’t in the millions, so I’ve picked a quarter of a million. Going from “actively transacting” to “self-custody” is a different step than self-custody for “buying-and-holding” — don’t think of it as installing a mobile wallet or buying a hardware wallet, but rather as using software like btcpay or lightning rather than hosted solutions like bitpay or travelbybit. I’ve picked 15k as the number there, based on the number of lightning nodes reported by 1ml.com, and rounded up a bit.

The nice thing about that approach is that the incentives at each stage are a fair bit clearer. You maintain a Bitcoin balance if it works as a store of value and fits into your investment strategy. You go from just holding a Bitcoin balance to actively transacting with it if spending Bitcoin is less of a pain than spending from your bank account — which makes it pretty clear why that step has a 99.85% attrition rate and what to do about it. Likewise, you go from transacting in general to self-custody when you decide that the costs of using a Bitcoin bank outweigh the benefits — risk of loss of funds or censorship, KYC frustrations, privacy concerns versus ease of setup and someone else taking care of ongoing maintenance. Having that option is hopefully a good incentive for businesses (and regulators) to keep those risks, frustrations and concerns relatively rare for everyone that doesn’t self-custody as well. Going from actively using Bitcoin to helping it develop is still a big step, but it’s also a fairly natural one (or so it seems to me). I think those levels also fit fairly well with business models: getting people into Bitcoin in the first place is financial education/advice and exchange services; actively transacting is banking and merchant services; self-custody is hardware wallets, and things like btcpay and lightning nodes; even consensus participation has been monetized by the likes of bitfinex’s chain-split tokens. (A nice thing about this approach is that self-custody for people actively transacting generally implies running a node for technical reasons, and at that point the costs of running a node are a much smaller deal: you’re getting regular benefits from your regular transactions, so the small regular costs of running a full node are much easier to justify.)
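
To make the funnel arithmetic explicit, here’s a small sketch covering both the “normal progression” funnel above and this alternative split (the level numbers are the rough guesses from the text, not measured data):

```python
# Funnel conversion rates using the guesstimates from the text above;
# illustrative numbers only, not measured data.
def conversions(levels):
    """Percentage of people retained at each successive funnel step."""
    return [100 * b / a for a, b in zip(levels, levels[1:])]

# world -> holders -> own wallet -> full node -> consensus participation
original = [8_000_000_000, 160_000_000, 4_000_000, 50_000, 900]
print([f"{c:.2f}%" for c in conversions(original)])
# ['2.00%', '2.50%', '1.25%', '1.80%']

# world -> holders -> actively transacting -> self-custody (eg lightning)
alternative = [8_000_000_000, 160_000_000, 250_000, 15_000]
print([f"{c:.3f}%" for c in conversions(alternative)])
# ['2.000%', '0.156%', '6.000%'] -- ie ~99.85% attrition at the transacting step
```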

One way to view those levels might be as “pre-coiners”, “store-of-value”, “method-of-payment”, “self-sovereign” and “decentralised” — with each level implicitly depending on the previous levels. You can’t pay for things with money that nobody values; there’s no point being in control of money that no one will accept or that’s not worth anything; there’s no point having decentralised money if it can be stolen from you, etc. There’s some circularity too though: there’s no point storing value if you can’t eventually transfer it, and a significant part of the value proposition of Bitcoin for store of value or method of payment is that you can control your own funds and that there isn’t a central group able to inflate the money supply, confiscate funds or block transactions.

What does that mean for priorities? I think there’s a few general principles you can draw from the above:

  • From an industry-growth point-of-view, increasing the percentages for the top two levels and maintaining the percentages for the bottom two seems like a good focus: getting a billion people owning Bitcoin, and hundreds of millions transacting using it, even with “only” 12M (6% of 200M) people running their own full nodes (due to self-hosting their lightning balance), and 750k (6% of 12M) people actively paying attention to how Bitcoin works and evolves seems like it could work out.
  • This approach has “store of value” as a foundation that the other properties of Bitcoin rely on — if that makes sense, it probably means messing with the “store of value” features of Bitcoin is a really risky idea. Instead, it’s probably more important to work on things that reinforce the existing foundations, than neat new ideas to change them.
  • The “having Bitcoin” to “transacting with Bitcoin” step is the one that needs the most work — probably in a million areas: not just all the things on the todo list for lightning, but UX stuff, working with regulators to avoid knee-jerk money-laundering concerns, working with tax agencies to reduce the reporting burden due to Bitcoin valuation changes, deploying point-of-sale systems, and whatever else.
  • If we do manage to get lots more people holding Bitcoin, and/or lots more people transacting with it, then maintaining the percentages of people doing self-custody or contributing in general will be hard, and require a lot of effort.

So for me (with an open source developer’s perspective), I think that adds up to:

  • Number one priority is keeping Bitcoin working technically — trying to avoid bugs, resist potential attacks (both ones we already know about, and those people have yet to come up with), stay backwards compatible, do clean upgrades. Things to work on here include monitoring, tests, code analysis, code reviews, etc. This also means keeping development of bitcoin itself relatively slow, since all these things take time and effort.
  • Number two priority is, I think, lightning: it seems the best approach for payments, both for people who want to do self-custody, and as the underlying payments mechanism for Bitcoin custodians to use when their customers instruct them to make a payment. There’s a lot of work to be done there: routing, reliability, spam/attack-resistance, privacy, wallet integration, etc. Other payments related things (like btcpay) are also probably pretty high impact.
  • After that, I think being prepared for growth is the next thing: finding ways of doing things more efficiently (eg, batching, consolidation), coping dynamically with changes to the system (eg, fee estimation), developing standards to make it easy to interoperate with new entrants to the ecosystem (eg, psbt, miniscript), and having good explanations of how Bitcoin works and why it works that way to newcomers (podcasts, books, academic papers, etc).

And more particularly, I think that means that I want to prioritise stability over new features (so work on analysis and reviews and tests and no rushing the taproot soft-fork), and as far as new features go, I’m more interested in ones that can provide boosts to lightning or payments in general (so taproot and ANYPREVOUT stay high on my list), but growth and interoperability are still important (so I don’t have to ignore cool things like CTV fortunately).

Libra, hot-take

Hot-take on Facebook and friends’ cryptocurrency. Disclaimer: I work at Xapo, and Xapo’s a founding member of the Libra Association; thoughts are my own, and are only based on public information.

So, first, the stated goal is “Libra is a simple global currency and financial infrastructure that empowers billions of people”. That’s pretty similar to Xapo’s mission (“We created Xapo to give everyone the freedom and security to be more and do more with their money” eg). It’s also something that Bitcoin per se isn’t really good at: the famous “7 transactions per second” limit means 220 million transactions per year, which doesn’t seem like it really scales to billions of people, for instance. And likewise Libra’s monetary policy (backed by a basket of “bank deposits and short-term government securities”) isn’t very interesting compared to just holding funds in USD, EUR, AUD or similar; but probably is pretty compelling compared to holding Bolivars, Zimbabwe dollars or Argentinian pesos. That could make it a death-knell for badly managed central banks in just a few years, which could be pretty interesting.
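
A quick check of that arithmetic (nothing Libra-specific, just the multiplication):

```python
# Quick check of the "7 transactions per second" figure quoted above.
tx_per_year = 7 * 60 * 60 * 24 * 365
print(tx_per_year)                  # 220752000 -- a bit over 220 million per year

# Shared among even a single billion users, that's well under one
# on-chain transaction per person per year.
print(tx_per_year / 1_000_000_000)  # ~0.22
```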

It doesn’t sound very censorship resistant — if you want to use it to buy hookers or guns or support political causes unpopular with Silicon Valley, you’re probably out of luck. Likewise if you want to pay for a VPN out of China, or similar. It seems like all of the association members will have access to all the transactions, and there’ll only be at most a few hundred megacorps to lean on to fully deanonymise everyone, so while it’s not a positive for shady central banks, I think it’s totally compatible with fascist police states and oppressing freedom of association/speech/thought. Not sure if it’s better or worse than today with almost everything done via credit card or bank transfers. Certainly much worse than cash (or lightning).

The amazing thing about Bitcoin is that there wasn’t a baked in rule along the lines of “Satoshi gets all the moneys” — instead Satoshi just ran the software in the same way any other early adopter could, and all the early adopters benefited essentially equally. So one thing that’s always interesting to me is to see the ways in which new cryptocurrencies have their rules tilted to favour the founders. In this case it looks like there’s three ways: (1) founders get to run validators which means they get to see all the data, control access to it, and (presumably) be paid in “gas” for the privilege; (2) the backing funds are invested in interest-bearing instruments, and the founders collect the interest, while Libra holders bear the investment risk; (3) the backing funds aren’t accessible to most users, but instead only to “authorized resellers” who will presumably charge a spread; these resellers are authorised by the association, which will presumably charge them a membership fee for the privilege.

The consensus model they use is Byzantine consensus, rather than proof-of-work. So it’s immediately final (in much the same way as the Liquid sidechain is), rather than forcing people to have to worry about reorgs of 6 blocks or 100 blocks or 1000 blocks, etc. But that assumes that more than 2/3rds of players are honest — with 28 initial validators, if you had 10 nodes under your control, and could split the remaining 18 honest nodes into two groups of 9, you could collaborate with one group to create one history, and the other group to create a different history, and induce double spends. Essentially the coin’s security becomes vulnerable to a 34% attack, rather than Bitcoin’s nominal 51% attack vulnerability. There’s nothing particularly wrong with that, it just means you need to be careful not to let more than a third of nodes be vulnerable to attack. Probably not good to suggest “For organizations that would like to run a validator node via a cloud service provider …” on your website though.
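
Just to spell out the quorum arithmetic in that example (a toy sketch using the standard BFT counting argument, nothing here is from the Libra codebase):

```python
# Toy quorum arithmetic for the 28-validator example above; this is just the
# standard BFT counting argument, nothing from the Libra codebase.
import math

n = 28
quorum = math.ceil(2 * n / 3)          # a bit more than 2/3 of validators
print(quorum)                          # 19

malicious = 10
honest = n - malicious                 # 18 honest validators
group_a = honest // 2                  # split the honest validators 9 and 9
group_b = honest - group_a

# Each group of honest validators plus the malicious ones forms a valid quorum,
# and the two quorums overlap only in malicious validators -- so they can sign
# off on two conflicting histories, ie a double spend.
print(group_a + malicious >= quorum)   # True (9 + 10 = 19)
print(group_b + malicious >= quorum)   # True
```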

Unlike proof-of-work, Byzantine consensus doesn’t scale in the number of validators. From their whitepaper: “Our goal was to choose a protocol that would initially support at least 100 validators and would be able to evolve over time to support 500–1,000 validators”. That’s a feature not a bug if you want to make a profit by being part of a small oligopoly, though. I’m a little dubious about how reliable you can realistically make it too — to have a transaction confirm, 2/3rds of the global set of validators have to see it, so losing links between countries means an entire country’s ecommerce systems become unavailable, and if there’s breaks or even just slow-downs between significant subsets of validators, potentially the entire currency becomes unavailable. Bitcoin is small enough that you can route around this via satellite links or SMS or similar, but Libra needs to be able to reliably throw lots of data around.

The whitepaper claims “The association does not set a monetary policy.”, which seems a bit disingenuous to me. They’ll need to decide what will make up the basket that backs each Libra coin, and that’s a monetary policy. They also note they’ll have “The ability to customize the Libra coin contract using Move” which “allows the definition of this scheme without any modifications to the underlying protocol or the software that implements it. Additional functionality can be created, such as requiring multiple signatures to mint currency and creating limited-quantity keys to increase security”. There’s a few interesting cases bound up somewhere in there: what happens when the backing reserve loses value — eg, a country reneges on its bonds, or there’s a huge loss in value in one of the currencies, or one of the banks fails and can’t redeem its deposits? They’ve already covered what happens if the reserve gains value: the founders take it as profit. If that works out okay once it happens by accident, that opens up the option of “going off the fiat standard” and just having the coin be issued in its own right, rather than due to changes in a bank balance somewhere. It seems unlikely to me that the economists and MBAs that’ll be running the foundation eventually will be able to resist that temptation once it arises, and their shareholders may even consider them legally beholden to succumb to it.

The Move language doesn’t seem very interesting; it uses accounts rather than coins, will include a “standard library” for things like sha3 rather than having them as opcodes, and generally seems like an incremental simplification from where Ethereum is. Having a smallish group of validators means that upgrades to the language should be relatively easy to coordinate, so I’d expect it to seem cheap and powerful compared to Bitcoin script or Ethereum.

Like I said, I think the macroeconomic impact on bad central banks is probably pretty positive — it either forces them to match world best practices, or be obsoleted. For central banks that are in the basket, it’s not clear to me what the consequences are: if, say, Australians are holding Libra coins instead of AUD, and the Reserve Bank wants to stimulate the economy by printing money/dropping rates to make everyone feel richer, then it seems like there are two possibilities: if goods remain priced in AUD, despite people holding their spending money in Libra, then prices immediately seem cheaper, and people buy more stuff, and the Reserve Bank is happy; or, what seems more likely, goods become priced in Libra coin as well because that’s what people have in their accounts, and it’s stable and international and cool, and the Reserve Bank loses the ability to counteract recessions. But that assumes Libra is used a lot by people with first-world currencies, rather than the target audience of the unbanked. And it’s not clear that makes sense: it doesn’t pay interest (the founders collect that), it’s vulnerable to foreign currency shocks, and there’s maybe other drawbacks (reliability, privacy concerns, cost/speed, hassles of KYC/AML procedures). You could trivially get around this by having actual stable coins on the Libra platform, ie having an “AUD” coin instead of a Libracoin, but still on the Libra blockchain, with the stable coin backed by a single-currency reserve, rather than a basket reserve.

Good for Bitcoin? I don’t think Libra really competes with Bitcoin — Bitcoin’s a scarce store of value with peer-to-peer validation and permissionless ledger additions; Libra isn’t scarce, its decentralisation is limited to the association members (whose numbers are in turn limited by the technology in use), and it’s got permissions at every layer. It seems like, in a world where Bitcoin is wildly successful, Libra could easily add Bitcoin to its reserve basket, and perhaps that could bridge the gap between the two feature sets: Bitcoin ensures that there’s no hidden inflation where central banks give free money to their cronies, while Libra gives billions of people access to Bitcoin as a store of value. If Libra takes the fight for sounder money to third-world governments, that perhaps just makes it easier for Bitcoin to be the next step after that. If Libra looks like the bigger immediate threat, being both new and having well known people to subpoena, while Bitcoin looks like old news that’s reasonably well understood, maybe that means good things for “permissionless innovation” in the Bitcoin space over the next little while. It will be interesting to see how India and Turkey and similar places react — places where the local currency looks precarious but isn’t already a basket case. If they either don’t try to block Libra, or try but can’t, that’s a really good sign for people being better able to save and control their wealth globally in future, which is definitely good for Bitcoin; if it does get blocked, that’s probably not a good sign for Libra’s mission.

Better than the alternatives? If you consider this as just an industry association trying to enter underserviced markets to make more moneys, does it make sense? “Decentralised consensus” is a useful organising principle to let the association keep each other honest, and in finance you probably want to keep a permanent audit trail anyway, and the “blockchain” they’ve specified doesn’t seem like it’s much more than that. So that point of view seems to work to me. Seems kind of a weird thing for Facebook to be leading, though.

So yeah; kind of interesting, but not for any of the reasons Bitcoin is interesting. Potential positives for adoption in the third-world; but just another payment method for the first-world. Lots of rent-seeking opportunities, but less harmful seeming than that of third-world central banks. The tech seems fine, but isn’t crazy interesting.

Taxes, nine years on

About nine years ago, during the last days of the first Rudd government, the Henry Tax review came out and I did a blog post about it. Their recommendations were:

  • tax free threshold of $25,000
  • marginal rate of 35% between $25,000 and $180,000
  • marginal rate of 45% above $180,000
  • drop the Medicare levy, low income tax offset, etc
  • introduce a standard deduction to simplify tax returns

(Given inflation, those numbers should probably be $30,000 and $220,000 today)

The only one of those recommendations the Rudd/Gillard governments managed to implement was the increase in the tax free threshold from $6,000 to $18,200, accompanied by compensating marginal rate increases from 15% to 19% and 30% to 32.5%.

What we’ve got in the budget now is a step closer to the Henry review’s recommendations:

  • tax free threshold remains at $18,200
  • marginal rate of 19% up to $45,000 (in 2022) instead of $37,000
  • marginal rate of 30% up to $200,000 (in 2024) instead of 32.5% to $120,000 (in 2022) or $90,000 (nowish)
  • marginal rate of 37% dropped (in 2024)
  • top marginal rate of 45% retained
  • low income tax offset is retained and increased (and remains regressive, as the marginal tax rate under $66k is larger than the marginal tax rate over $67k due to the offset phasing out as income increases)
  • temporary low-and-middle income tax offset introduced to stage in the change to the 19% marginal rate
  • medicare levy retained at 2% rather than increased to 2.5%

Most of that’s from last year’s budget, which looks like it passed despite opposition from the ALP, the Greens and independents Tim Storer, Andrew Wilkie and Cathy McGowan. This year’s budget just changes the 19% bracket’s cutoff from $41,000 to $45,000, increases the LITO, and drops the 32.5% bracket to 30%.

That’s still a bit worse than the Henry review’s recommendations from almost a decade ago: the 19% marginal rate and the low-income tax offset should both be dropped, with the tax free threshold raised to compensate for both, and the medicare levy should be rolled into the remaining rates, increasing them to 32% and 47%. But still, it’ll be the first reduction in the number of tax brackets since 1990, which isn’t nothing.

Despite the Henry review having been a Labor initiative, Labor’s plan seems to be to do the opposite, and re-legislate the 37% tax rate back in so that we won’t have to have “a cleaner [..] pay the same tax rate as a CEO”. Shorten’s explicit example of a nurse on $40,000 and a doctor on $200,000 paying the same rate doesn’t actually work: the nurse’s marginal rate drops to 19% even under existing law before the doctor’s marginal rate drops from 45% to 30%. Comparing marginal rates at wildly different incomes is absurd in any case, and the Henry report addressed this concern directly, noting that a large tax free threshold and a flat marginal rate already achieve progressivity: eg, a cleaner on $50,000 pa pays about $6,600 (13.2%) tax while a CEO on $150,000 pays about $36,600 (24.4%) tax, despite both being on the same 30% marginal rate. This doesn’t seem to just be election sloganeering by Shorten, but an ongoing lack of understanding; O’Neill made a similar claim in the parliamentary debate last year, saying: “Let’s be absolutely clear here: stages 2 and 3 of the government’s tax plan will flatten out Australia’s personal income tax system, and that structural change to the personal income tax system is eroding its progressivity.”
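
As a quick sanity check on those figures, here’s the calculation under the post-2024 brackets, ignoring the Medicare levy and the low income offsets (a sketch of my own, not anything from the report itself):

    # Post-2024 schedule, ignoring the Medicare levy and the low income offsets:
    # tax free to $18,200, then 19% to $45,000, 30% to $200,000 and 45% above.
    BRACKETS = [(0, 0.0), (18_200, 0.19), (45_000, 0.30), (200_000, 0.45)]

    def income_tax(income, brackets=BRACKETS):
        """Total tax under a marginal schedule given as (threshold, rate) pairs."""
        padded = brackets + [(float("inf"), None)]
        tax = 0.0
        for (lo, rate), (hi, _) in zip(padded, padded[1:]):
            if income > lo:
                tax += (min(income, hi) - lo) * rate
        return tax

    for income in (50_000, 150_000):
        tax = income_tax(income)
        print(f"${income:,}: ${tax:,.0f} tax, average rate {tax / income:.1%}")

That prints about $6,592 (13.2%) for the cleaner and about $36,592 (24.4%) for the CEO.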

The budget papers have an interesting justification for the changes: they keep the shares of income tax revenue collected from the top 1%, 5%, 10% and 20% of taxpayers roughly stable. I think the numbers indicate that the top 1% of taxpayers and the bottom half of the top 20% of taxpayers currently pay around 16.7% and 16% of the government’s income tax revenue respectively; without the changes that would reverse to 15.6% and 16.1%, while with them it’s 17% and 15.5%, which seems fairer. On the other hand, in both cases the burden on the bottom 80% of taxpayers is slightly increased. Not really sure what the good answers here are — it really depends on how much more the top x% earn compared to the top y%, and that’s easier to look at just by looking at average and marginal rates anyway — but it seems like an interesting thing to think about.

I did a followup post a few years later, shortly before Gillard got ousted for the brief second Rudd government, looking at something like:

  • tax free threshold of $25,000 [$28,000 inflation adjusted]
  • marginal rate of 35% between $25,000 and $80,000 [$90,000]
  • marginal rate of 40% between $80,000 and $180,000 [$200,000]
  • marginal rate of 46.5% above $180,000
  • dropping Medicare levy, low income tax offset, etc

and noting it’d result in pretty similar government revenue based on the reported taxable income distribution. It’s more effort to get the numbers from the ATO and run them than I can be bothered with for now (and would be pretty speculative trying to apply them to the world of 2024), but tax brackets like

  • tax free threshold of $20,000
  • marginal rate of 20% up to $45,000
  • marginal rate of 32% up to $200,000
  • marginal rate of 47% above that
  • drop Medicare levy, low income tax offset, etc

would be very close to the post-2024 plan, if anyone could manage the politics of not special casing the medicare levy or the low-income offset.
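
As a very rough check (it’s my own back-of-the-envelope sketch: it ignores LITO and LMITO, which would narrow the gap at the lower end, and it applies the 2% Medicare levy flat rather than modelling its low income phase-in), those brackets come out within about a thousand dollars a year of the legislated schedule:

    # Compare the legislated post-2024 schedule (plus a flat 2% Medicare levy)
    # with the simplified brackets above. Offsets and the Medicare levy's
    # low income phase-in are ignored, so this is only a rough sketch.
    def schedule_tax(income, brackets):
        """Total tax under a marginal schedule given as (threshold, rate) pairs."""
        padded = brackets + [(float("inf"), None)]
        tax = 0.0
        for (lo, rate), (hi, _) in zip(padded, padded[1:]):
            if income > lo:
                tax += (min(income, hi) - lo) * rate
        return tax

    POST_2024 = [(0, 0.0), (18_200, 0.19), (45_000, 0.30), (200_000, 0.45)]
    PROPOSED  = [(0, 0.0), (20_000, 0.20), (45_000, 0.32), (200_000, 0.47)]

    for income in (30_000, 50_000, 100_000, 200_000, 300_000):
        legislated = schedule_tax(income, POST_2024) + 0.02 * income  # add Medicare levy
        proposed = schedule_tax(income, PROPOSED)
        print(f"${income:>7,}: legislated ${legislated:>8,.0f} vs proposed ${proposed:>8,.0f}")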

In the same post, I also thought about an unconditional $350 per fortnight payment as an optional alternative to the tax free threshold — so you get $350 a fortnight (tax free) direct into your bank account, but pay 35% from the first dollar you earn other than that, all the way to $80k. That seemed like a fairly plausible way to start on a UBI to me: if you’re earning more than $25k per year it doesn’t affect your total tax bill at all, but it’s a quarter of the minimum wage or about half the newstart allowance, so it’s not trivial, and it doesn’t require any additional paperwork or convincing centrelink you’re not a bludger. If you could afford to raise the tax free threshold to $30,000 and just have a 32% rate from there to $200,000 (which would mean everyone earning over $45,000 pays roughly the same tax, while everyone earning less than that pays less tax), you could have a UBI of up to $370/fortnight, without any impact on anyone earning more than $30,000 a year, or any disincentive to work for anyone earning less than that. That still means fitting up to an extra $10,000 per year for all the people who don’t earn more than $30,000 a year into the budget, which isn’t easy. An easy way to start might be to make it so you can only opt in if you’ve filed a tax return for the past three years and are 21 or over, which would exclude a lot of the people who’d otherwise be getting large payouts. Interactions with newstart and the various pensions would also need a bunch of work.
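
The equivalence between a tax free threshold and an opt-in credit is easy to check; here’s a sketch using the $30,000 threshold and flat 32% rate from the paragraph above:

    # Two ways of delivering the same tax break, using the $30,000 threshold and
    # flat 32% rate discussed above: (a) a conventional tax free threshold, or
    # (b) an opt-in credit of 32% of $30,000 (about $370/fortnight) with tax
    # payable from the first dollar earned.
    THRESHOLD, RATE = 30_000, 0.32
    CREDIT_PER_YEAR = RATE * THRESHOLD      # $9,600/year, roughly $370/fortnight

    def net_tax_threshold(income):
        """Conventional: no tax below the threshold, 32% above it."""
        return max(0, income - THRESHOLD) * RATE

    def net_tax_credit(income):
        """Opt-in credit: 32% from the first dollar, minus the credit."""
        return income * RATE - CREDIT_PER_YEAR   # negative means a net payment

    for income in (0, 15_000, 30_000, 45_000, 80_000):
        print(f"${income:>6,}: threshold ${net_tax_threshold(income):>8,.0f}, "
              f"credit ${net_tax_credit(income):>8,.0f}")

Above $30,000 the two columns are identical; below it the credit version turns into a net payment, which is the whole point.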

I wish there was a political party that had a policy like that. But the ALP and Greens seem to be against fewer brackets on the general principle that anything that’s good for the rich is bad for Australia (and the Greens think a good starting point for a UBI is $18,200 per year, or even better $23,000 per year funded by a top tax bracket of 78%, which is just absurd), while the LDP wants a flat 20% tax with a $40,000 tax free threshold and fewer transfer payments rather than more, and everyone else tends to want to give welfare payments only to people who prove they need it, rather than via a universal scheme, again on principle, despite that making it harder for welfare recipients to work. The Libs come the closest, but their vision still barely gets one and a half of the four income tax recommendations from the Henry report implemented, one and a half decades after the report came out. Which is better than nothing, or going in the wrong direction, but it’s hardly very inspiring.