A Flash of Insights on Lightning Network

I invented the Lightning Network.

Well, not really. But to the best of my knowledge, I was the first person to write a post describing something similar to how LN is conceived today.

I’m not saying this to take credit for its advent – it’s not like I really did anything. Others have invented the concept of micropayment channels on which my suggestion relied, and I only threw out some rough ideas; I didn’t present a full-fledged design, let alone wrote any code.

I’m saying this to emphasize the point that for some of us, that has always been the vision. One of the first burning questions I had when I was first introduced to Bitcoin was how the whole thing was supposed to scale. Concepts I’ve learned of since then – like SPV and pruning – helped, but I wasn’t completely satisfied. Ever since I heard about channels and thought about how they could be used in a network, that would become one of the first things I would reference whenever discussions of scalability came up.

The ability to use an on-chain transaction to anchor a channel, so that real bitcoins can be sent over it without bothering the entire network or waiting for confirmations on every payment – in such a way that the channel can always be closed unilaterally to recover the funds as normal bitcoins sitting in an address you control – is an idea so powerful that I can’t imagine how anyone can resist falling in love with it.
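The channel idea described above can be sketched in a few lines of Python. This is an illustration only – the class and its fields are invented for this post, and a real Lightning channel involves Bitcoin scripts, signatures, and penalty transactions, none of which appear here.

```python
# Minimal sketch of a payment channel: funds are locked once on-chain,
# balances shift off-chain, and closing settles the latest state on-chain.

class Channel:
    """Two parties lock funds with a funding transaction, then pay off-chain."""

    def __init__(self, alice_funds, bob_funds):
        # The "anchor": the total locked by the on-chain funding transaction.
        self.balances = {"alice": alice_funds, "bob": bob_funds}
        self.open = True

    def pay(self, sender, receiver, amount):
        # Off-chain update: no miners involved, no wait for confirmations.
        if not self.open or self.balances[sender] < amount:
            raise ValueError("invalid payment")
        self.balances[sender] -= amount
        self.balances[receiver] += amount

    def close(self):
        # Unilateral close: the latest balances become normal on-chain funds.
        self.open = False
        return dict(self.balances)

channel = Channel(alice_funds=100_000, bob_funds=50_000)
channel.pay("alice", "bob", 25_000)   # instant, off-chain
channel.pay("bob", "alice", 5_000)
final = channel.close()               # {'alice': 80000, 'bob': 70000}
```

Only the funding and closing steps touch the blockchain; any number of payments in between cost the network nothing.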

But the reason I am so excited about the development of LN is not that the vision I had is finally being brought to life.


Lunch is being served

I was at a conference yesterday. Estimates say that 1,000 people attended. At 15:00, people left the lecture halls and swarmed the lunch hall.

The rate at which food can be dispensed is finite, so people had to wait a long time in line to reach the coveted buffet. Those who decided to wait half an hour before going to eat (and struggled to find something to do in that time, seeing as everyone else was eating) could get to the food without effort – but may have missed the chance to enjoy the full variety the meal started with.

Likewise, when everyone gets off the clock at the same time and drives home, they encounter traffic – but those who wait until 3 AM to hit the road have the whole road to themselves. This is the way of many aspects of life – if you try to do something at the same time as many others, you will run into overload.

Bitcoin is the conference, and lunch is being served. We are amidst a wave of interest in Bitcoin and its kin. This is not the first wave of its kind, but it is the biggest so far in absolute terms. Anyone who tries to enter the field at the same time as everyone else finds that every system collapses under the load.

It begins with the Bitcoin network itself, where demand for transactions exceeds the available block space, and transaction fees go up.

Exchange services get hundreds of requests daily, and sometimes there are delays or missed messages.

Among the hosted wallets, those that are considered more reliable (such as Coinbase) are overloaded and struggle to handle support requests.

Other hosted wallets, which were scams to begin with, have decided that now is the time to pack their bags and vanish.

Public forums are overloaded, every thread is buried under many others within an hour, and there are more newbies asking questions than veterans who can answer them.

Physical community hubs are overloaded, and people who bother to come in person for advice have to wait in line until an ambassador is free to assist them.

I, personally, am also overloaded with both messages and strategic activity, and one comes at the expense of the other.

When someone encounters a problem, it is hard to tell whether she has unfortunately stumbled upon a scam, or the legitimate service she used is experiencing load, or the Bitcoin network itself is overloaded, or she herself erred in handling her wallet. And when she comes to ask for advice, she has trouble getting answers.

Some people have lost money due to this whole mess.

This is sad and doesn’t further a solution, but that is the situation today. For those who cannot accept it, I truly recommend passing, for the time being, on buying Bitcoin (or Ether or ICO tokens or anything else). They can wait until the dust settles and things clear up, and then one of two things will happen: either the wave of interest will fade, and we go back to earlier demand levels which the systems can handle; or demand continues to grow, and over time the systems will be upgraded to handle it – SegWit will be activated, exchanges will hire more staff, and the community will muster more knowledgeable people who can support the newcomers.

If someone thinks that if he does not buy bitcoins this very moment, he will miss the opportunity to get rich… Well, I can’t really tell him not to buy, and I am of course in favor of bitcoins being distributed among as many people as possible, rather than concentrated in the hands of a few veterans. But he should know what he is getting into, and not complain to anyone but himself.

If you can’t take the heat, stay out of the kitchen.

Between two extremes, but not quite in the middle

If you’re reading this, you probably know that the Bitcoin community is amidst a civil war.

And you might also know that for almost two years, I’ve been advocating the position that, if no agreement or compromise can be reached, the best course of action is a clean split of the network into two incompatible, competing currencies.

However, I also said that a compromise is the better outcome if at all possible. And I also said that for a split to work it must be done properly, and my fear that this will not be the case is growing.

Which is why I think we should give diplomacy another shot and pursue a genuine compromise, and why I urge people from both sides of the fence to be more receptive to it. And yes, compromise does mean giving up things that you hold dear.

I will not go into exact detail about what such a compromise could consist of. But overall, two key components will almost certainly have to be the activation of soft-fork SegWit as soon as possible, together with a hard fork to increase the block size further (perhaps with a built-in growth schedule) without more delay than necessary.

My own side in the debate is no secret – I believe the best technical solution is to activate SegWit immediately, and to figure out later whether we need a hard fork, and which one.

But I support a compromise along the aforementioned general lines, for several reasons which I will explain.

Technical merit

I’ve said before that I didn’t really personally experience the dreaded datageddon that others reported, with slowly confirming transactions and prohibitive fees. Transactions still confirmed quickly and with relatively cheap fees. This made me question the need to rush the scaling solutions.

But time has passed and I’m sad to report that this is no longer the case. Bitcoin has experienced another burst of explosive growth, and so has demand for space in the blockchain. I’ve observed firsthand that getting transactions confirmed within a reasonable time requires fees upwards of a dollar. I don’t care too much about my own costs, but I’m beginning to feel embarrassed to praise the merits of Bitcoin as I always have.

This leads to two conclusions. First, we need to resolve the situation – we can’t remain in it indefinitely. If a compromise is what it takes to move forward, so be it.

Second, if previously I thought that SFSW is good enough for now – now I think that SFSW is probably sorta kinda good enough for now. If growth continues as it has so far, we’ll need a more aggressive blocksize increase sooner rather than later. So despite all the risks and disruptions, an expedited movement towards a hard fork starts to sound like not such a terrible idea.

The other technical issue is that I think we should be more open to the concept of a hard fork. When I got into Bitcoin I didn’t sign up to the idea that a hard fork would occur only whenever a mule foals. There are many much-needed upgrades to the protocol which can only be done by way of a hard fork. If we can’t even change a well-understood parameter, it doesn’t inspire confidence that we’ll be able to handle the bigger changes ahead.

Conservativeness in forks is important, but there is such a thing as too much conservatism, and we might be approaching that point. Which is why, again, expediting the hardfork schedule might not be such a bad idea.

For people, by people

More important than the technical reasons why a compromise is palatable, are the social reasons why we need it.

I don’t see Bitcoin as a piece of art, an engineering wonder that I can put on display and marvel at its technical correctness. It is a tool created by people with the goal of benefiting people. If it fails at this purpose, it should be fixed.

And right now Bitcoin is stuck, and what’s important is to unstick it, not to pat ourselves on the back for how rigorous our technical development methodology is.

Furthermore, Bitcoin is not as robust as some people might think – it is always at the risk of attack by a determined attacker of means. Its security is based on a combination of its own technical defense mechanisms, together with making sure it has as few enemies as possible. Bitcoin has enough enemies from without to worry about. It doesn’t need infighting and the threat of some segments of the Bitcoin community attacking others, which may well be the case if we go for the more militant methods of resolving the conflict. Bitcoin is strongest when all its proponents are allied, and this is what a compromise aspires to achieve.

But the issue goes much deeper than that.

The debate, it seems, becomes more and more divisive every passing day. People who express disagreement are labeled as sellouts or traitors to the Bitcoin cause. Demonization, personal attacks and mudslinging are rampant. People have picked sides. Propaganda has succeeded. It’s sad and doesn’t further a solution.

It is becoming clear that people have firmly tethered their identity to their side of the debate. And this is bad news. As Paul Graham eloquently explains, you can’t have a rational, civil debate when people’s identities are on the line. People adopt some ideas and resist others not on their underlying merit, but according to which side the idea is associated with. This can quickly escalate (and in our case, already has): people become more and more entrenched in their positions, and a person is perceived as more vile merely for expressing a dissenting one.

I miss the times when all Bitcoiners were in the same boat. When we could discuss technical topics based on their technical merits. When you could express an opinion without being painted as belonging to one camp or another, or having your opinion ignored just because you are already perceived as belonging to the wrong camp. When ideas were just ideas, not “the ideas of this side” and “the ideas of that side”.

But despite our sad state of affairs, I hope that we can reach a compromise. That we will each make sacrifices and rally behind the same banner. If we can do that… then I hope it will take us back to those better times. That it will defuse all the tension that has built up over the years, and take the sting out of the debate. That we will be able to trust each other once more and spend our energies not on quarreling, but on moving forward and furthering solutions.

That, I believe, is a vision worth fighting for.

And God said, “Let there be a split!” and there was a split.

A year ago, I wrote How I learned to stop worrying and love the fork, espousing my view that a split of Bitcoin into two networks is possible, and might even be good under the right circumstances and with proper preparations.

Half a year ago, I followed up with I disapprove of Bitcoin splitting, but I’ll defend to the death its right to do it, which elaborated a bit and aimed to refute some misinformation.

I’ve been meaning to write another followup to address some questions that have been raised…

And then Ethereum Classic happened.


I disapprove of Bitcoin splitting, but I’ll defend to the death its right to do it

In a slideshow published by Brian Armstrong, CEO of Coinbase, he promotes the view that Bitcoin is currently undergoing a winner-takes-all election, and that variety in Bitcoin protocols is akin to variety in web browsers.

I find this incorrect, misleading and destructive.

Unlike physical currencies, governed by the laws of nature, and centralized currencies, governed by the whims of their issuers, it’s not at all obvious what ultimately governs a decentralized digital currency such as Bitcoin. There’s the protocol and the code, of course, but those are mutable and thus adhere to a higher authority.


How I learned to stop worrying and love the fork

It’s hot in Israel in August, but not nearly as hot as the global debate surrounding the release of Bitcoin-XT and the contentious hard fork that would ensue if enough people adopt it. It seems that both proponents and opponents of Bitcoin-XT dread the possibility of the network splitting in two, and focus on making sure everyone switches to their side to prevent this from happening. Contrary to this post’s title, I don’t actually like the prospect of a fork; but I do claim that having two networks coexist side-by-side is a real possibility, that it is not the end of the world, and that we should spend more energy on preparing for this contingency.


How many hardware engineers does it take to develop an artificial general intelligence?

None. It’s a software problem.

At a recent wrap-up party for Harry Potter and the Methods of Rationality, which I attended on Pi Day, I overheard a discussion about friendly artificial intelligence. I think several errors were made in that discussion, but unfortunately the poor acoustics at the venue prevented me from offering my two satoshis. But I figured this would make for an interesting blog post, since if one person makes these mistakes, there must be more than one. (Incidentally, what HP:MoR, FAI and Bitcoin all have in common is that I heard about each of them from LessWrong.)

So far, humanity has had great success at building artificial specific intelligences. These are machines that can perform well in specific tasks which were once doable only with human intelligence. We have calculators that operate faster and more accurately than any human. We have chess programs that can easily beat the strongest human players. We even have cars that drive themselves more safely than humans.

What we don’t have is an artificial general intelligence (AGI) – a machine that has our ability to adapt to a very wide range of circumstances and solve practical problems in diverse fields. What will it take to create such a thing?

An argument I’ve heard says that, at our current technology level, we can build a machine with some specific level of intelligence (using, say, a generic state-of-the-art machine learning algorithm, such as a neural network). With hardware advances and Moore’s law, we will be able to build smarter machines, until one day a computer will be as intelligent as a human. Past that point, it was said, computers with better hardware will become even smarter than humans, and gradually widen the gap.

Mankind has always been fascinated by the ability of birds to fly, and dreamed of gaining this ability itself. And people tried to proactively pursue this dream… by building feathered contraptions that resembled bird wings, attaching them to their bodies, and waving their arms vigorously.

That didn’t work.

People didn’t succeed in flying by building up muscle strength and flapping their arms more forcefully. They did it by understanding how flight works – the laws of physics, and aerodynamics in particular – and using this understanding to design a machine that can fly given our requirements and the tools available to us. These machines, of course, have only a superficial resemblance to birds.

Taking an algorithm which is crudely inspired by how brains are supposedly built, running it on increasingly faster hardware, and hoping that eventually general intelligence will emerge, is also not going to work. Instead, we need to understand how intelligence works, and use that to write software that will elicit intelligence from the technical capabilities of our computing hardware. Given the reliability and sheer processing power of modern digital computers, it is likely we will end up with a machine which is more intelligent than a human.

What’s next? The machine won’t wait around for Moore’s law to double its processing power and give it an edge in intelligence. Rather, it will use its superior intelligence to modify its own source code and create a better intelligence than we mere humans could create. The result will be even smarter and create an even better AI, and so on. The whole thing can explode rather quickly into absurd levels of intelligence.

What will this absurdly intelligent machine do? Another argument I’ve heard is that, since we wrote the code, it will only do what we told it.

It is a fundamental fact of theoretical computer science that, given an arbitrary program, there is no general way to tell if running this program will go on forever or stop at some point. Knowing whether a program stops or not is a pretty basic thing, so this already demonstrates the absurdity in thinking that knowing the code means knowing what the code does.
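The classic diagonalization argument behind this fact can be sketched in Python. The function names here are my own; the point is only to show the shape of the contradiction: any claimed halting oracle can be fed a program built to do the opposite of whatever the oracle predicts.

```python
# Suppose (for contradiction) someone handed us a perfect oracle:
#   halts(program) -> True iff running program() eventually stops.

def make_trouble(halts):
    """Build a program that contradicts whatever `halts` says about it."""
    def trouble():
        if halts(trouble):
            while True:   # oracle said "stops" -> run forever
                pass
        # oracle said "runs forever" -> stop immediately
    return trouble

# Whatever the oracle answers about trouble, it is wrong:
#   - if halts(trouble) is True, trouble loops forever;
#   - if halts(trouble) is False, trouble halts at once.
# So no such halts can exist. We can exercise the second branch with a
# stub oracle that answers "never halts":
t = make_trouble(lambda program: False)
t()   # returns immediately, contradicting the stub's answer
```

Since even this yes/no question about code is undecidable in general, knowing a program's source certainly doesn't mean knowing everything it will do.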

But we don’t need to go as far as these abstractions. Chess-playing programs were written by people, and those people have a good idea of the general way the program goes about finding the best moves. What they don’t know is which actual moves the program will play on the board. Indeed, chess programs often make moves no human would ever think of, because no human can do the trillions of calculations that the computer does.
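The gap between "knowing the general method" and "knowing the actual moves" is easy to see in the search that game-playing programs perform. Here is a toy negamax over a hand-made two-ply tree (the tree and its values are invented for illustration; real chess engines search billions of positions with the same basic recursion):

```python
# Negamax: my best score is the best of the negations of the scores my
# opponent can achieve after each of my moves.

def negamax(node):
    """Score for the player to move. Leaves are ints, valued from the
    perspective of whoever is to move at that leaf."""
    if isinstance(node, int):
        return node
    return max(-negamax(child) for child in node)

# A tiny tree: two moves for us, each answered by two opponent replies.
tree = [
    [3, -2],   # after move A: leaves worth 3 or -2 (to us); opponent picks -2
    [1, 4],    # after move B: leaves worth 1 or 4 (to us); opponent picks 1
]
scores = [-negamax(child) for child in tree]   # [-2, 1]
best_move = max(range(len(tree)), key=lambda i: scores[i])   # move B
```

The author of this function knows exactly how it searches, yet on a tree with trillions of nodes has no hope of predicting which move it will pick without running it.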

But chess programs are just a specific intelligence. Once we build a program with general intelligence, we have no idea what specific course of action it will take. At first, we’ll have an idea about what the program does to reach a decision – but once the machine runs modified source code that it has written itself, we don’t even have that anymore.

It is generally assumed the AGI will be an “agent” – it will have a “target function”, a goal it wishes to achieve, and the software will be designed so that it always chooses the actions that best work toward this goal. We can try to construct the goal to be compatible with what we want, but “what we want” is incredibly complex and difficult to code; and the machine only cares about the goal we’ve written, not what we intended to write.
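An agent in this sense can be stated in one line: pick the action that maximizes the target function. The actions and numbers below are invented for illustration, in the spirit of the canonical "paperclip maximizer" thought experiment:

```python
# The agent cares only about the number the target function returns --
# not about what we meant when we wrote it.

def choose_action(actions, target_function):
    """Return the action with the highest score under the coded goal."""
    return max(actions, key=target_function)

# Suppose we *meant* "make paperclips, but nothing drastic",
# yet only coded "maximize paperclip count":
paperclip_count = {
    "run the factory normally": 1_000,
    "convert all available matter into paperclips": 10**9,
}
best = choose_action(paperclip_count, lambda a: paperclip_count[a])
# The agent picks the drastic action, because the target function
# says nothing against it.
```

Nothing in the argmax penalizes the drastic option; everything we failed to encode in the target function simply doesn't exist for the agent.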

When we humans work towards our goals, we see fellow humans as our peers. When an AGI sees a human, it is more likely to see them as a collection of atoms that might be of more use to it in a different configuration. Avoiding the situation of a strong AI trampling humanity in pursuit of a naive target function that was coded into it, is exactly what the challenge of developing a Friendly artificial intelligence (FAI) is all about.

I’ve skipped over many details in this description, of course. But if you’re interested in learning more, you should stop listening to me – as I know nothing about this subject – and head over to https://intelligence.org/ (and if you ever decide to make a donation, they also accept Bitcoin).