
Every time you hear about cryptocurrencies that allow fee-less, immediate, eco-friendly transactions, it's worth wondering why the dominant cryptocurrencies still suffer from those issues.

For example, one reason transaction fees exist is to prevent the network from being DoSed (overwhelmed in terms of processing and storage) by a flood of "free" transactions. How can e.g. Nano handle this problem with no fees? This and other problems are discussed in section "V. Attack Vectors" of their whitepaper [1].

Unfortunately, rather than provide concrete, mathematical solutions, the whitepaper mostly offers "discussions" of why those issues are not really a problem, or of how some feature will prevent them (e.g. PoW attached to each transaction - which is itself a form of fee, is not eco-friendly, and might be prohibitively slow at scale). Those discussions are not researched thoroughly enough to give assurance that Nano would function efficiently and securely if it were ever to get popular.

Blockchains are very inefficient, and we're still figuring out how to make them faster and cheaper without sacrificing too much security and decentralization. But as is often the case - there's no free lunch. And extraordinary claims require extraordinary proof.

[1]. https://content.nano.org/whitepaper/Nano_Whitepaper_en.pdf
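
To make that trade-off concrete, here's a minimal, purely illustrative sketch (not Nano's actual scheme) of hashcash-style per-transaction PoW: the sender must find a nonce so that the transaction hash meets a difficulty target, which rate-limits spam but costs real CPU time and energy on every transaction.

    import hashlib

    DIFFICULTY_BITS = 16  # toy difficulty; real systems tune this

    def attach_pow(tx: bytes) -> int:
        """Find a nonce so that sha256(tx || nonce) has DIFFICULTY_BITS leading zero bits.
        The work done here acts as the 'fee': it rate-limits spam, but burns CPU/energy."""
        target = 1 << (256 - DIFFICULTY_BITS)
        nonce = 0
        while True:
            digest = hashlib.sha256(tx + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify_pow(tx: bytes, nonce: int) -> bool:
        digest = hashlib.sha256(tx + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

    nonce = attach_pow(b"send 10 coins from A to B")
    assert verify_pow(b"send 10 coins from A to B", nonce)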



You're referring to the Terms of the website, not "Ethereum 2.0". It's like agreeing to the Terms of Use of a Bitcoin wallet. It doesn't mean that Bitcoin is a "centralized joke".


It's not my field, but I remember reading that your example doesn't represent entanglement, because by the time the marbles are put into envelopes, one is already red and the other blue.

In quantum entanglement they are both truly, really random until you measure one. And it's not random in the sense that you closed your eyes when putting them into the envelopes. They genuinely don't have a "selected" color. They "snap into one of two colors" when you measure (look at) one. And the "unbelievable" thing is that when you measure one, the other one immediately snaps into the opposite color, no matter how far away it is.


I don’t think this is correct. The two-particle system is prepared in a perfectly known state (e.g. both spins up). There’s nothing random about it. Randomness only occurs at the measuring device, if it is not aligned with the direction of the spin of the incoming particle.


Nope. They don't "have" their own state until one of them is measured. But they do have a correlated state which exists before measurement, which says they have opposite/same states. The individual states arise only after measurement. I'm not a physicist, but wrote a Quantum Simulator.


This reminds me of the bet in the bitcoin community [1]. If bitcoin blocks are produced every 10 minutes on average, and you learn that someone found a block 5 minutes ago, what is the average time you will wait for the next block? It turns out it's 10 minutes, not 5 minutes as you might intuitively think. (It's a memoryless process, so the expected time until the next block is always the same - 10 minutes - no matter how recently blocks were found.)

In other words, when you're waiting for a bitcoin transaction to be confirmed and you go check how long ago the most recent block was produced, in order to estimate how soon the next one will come - you're doing it wrong. Even if the previous block was found 9 minutes ago, your average waiting time for the next block is still 10 minutes.

[1]. https://www.reddit.com/r/btc/comments/7rs8ko/dr_craig_s_wrig...
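
A rough simulation of the memoryless property, assuming block intervals are exponentially distributed with a 10-minute mean (a reasonable model when difficulty and hashrate are constant):

    import random

    MEAN = 10.0        # minutes between blocks
    SAMPLES = 200_000

    # Condition on "the last block was found 5 minutes ago" and measure the
    # remaining wait. Memorylessness says it should still average ~10 minutes.
    waits = []
    while len(waits) < SAMPLES:
        interval = random.expovariate(1 / MEAN)
        if interval > 5:                 # at least 5 minutes have already passed
            waits.append(interval - 5)   # remaining time until the block

    print(sum(waits) / len(waits))       # ~10.0, not 5.0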


And a related counterintuitive fact (again, assuming 10 minutes):

1. If you pick a block randomly (uniformly), its average length is 10 minutes.

2. If you pick a point t0 in time randomly (uniformly), the average length of the block interval you're in is 20 minutes: the average time from t0 to the next block is 10 minutes, and the average time from the previous block to t0 is also 10 minutes (and, needless to say, 10+10=20).
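
Point 2 is the classic inspection paradox (length-biased sampling); a rough simulation under the same exponential-interval assumption:

    import bisect
    import itertools
    import random

    MEAN = 10.0
    intervals = [random.expovariate(1 / MEAN) for _ in range(500_000)]

    # 1. Pick blocks uniformly: the average interval is ~10 minutes.
    print(sum(intervals) / len(intervals))

    # 2. Pick a point in time uniformly: longer intervals cover more of the
    #    timeline, so time-based sampling is length-biased and averages ~20.
    starts = list(itertools.accumulate([0.0] + intervals[:-1]))
    total = sum(intervals)
    samples = []
    for _ in range(100_000):
        t = random.uniform(0.0, total)
        i = bisect.bisect_right(starts, t) - 1
        samples.append(intervals[i])
    print(sum(samples) / len(samples))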


Here's an even simpler example: a flip of a fair coin.

Suppose this is your first flip: one would intuitively think that there is a 50% chance of H and a 50% chance of T.

Suppose you flipped once and got H. For your second flip, one might intuitively think that since H comes up 50% of the time over the long run, and we already have one H, the next flip should have a smaller probability of H to "balance it out". That is wrong. The next flip still has a 50% chance of H.

Suppose further that one has performed N flips, all of them H. One might even think that, because of the way the geometric distribution works, it is very unlikely for the next flip to be H again. That too is wrong. The next flip still has a 50% chance of H.
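
A tiny simulation of that "balance it out" intuition (illustrative only): condition on a run of heads and look at the next flip.

    import random

    N = 3                      # condition on N heads in a row
    runs = 0
    next_heads = 0

    for _ in range(2_000_000):
        flips = [random.random() < 0.5 for _ in range(N + 1)]
        if all(flips[:N]):     # the first N flips were all heads
            runs += 1
            next_heads += flips[N]

    print(next_heads / runs)   # ~0.5, regardless of N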


This is actually wrong. The expected time until the next block is almost never exactly 10 minutes, because hashpower goes on- and offline all the time. Difficulty is adjusted every 2016 blocks based on historical block timing, so that if nothing changed, future blocks would average 10 minutes - but changes always happen, so this is never exact. As such, you do learn something by looking at prior block times.


What you say is not a rebuttal of the parent comment. The parent explicitly assumed an average block time of 10 minutes. The most recent block time doesn't change that.


Parent is talking about bitcoin, where that is false. If they are assuming the average, then they're assuming something false.


the important thing is that it doesn't matter what the parent assumed.

whether the actual time is 10 minutes or 100 years, knowing that somebody else solved one recently doesn't speed up your time to find one


Of course it doesn't speed up your own time, since you have perfect information about your own hashpower. But it does tell you information about the total hashpower that's online, statistically.

I'll give an extreme example to make this clearer. Suppose 10x hashpower just came online an hour ago. It's quite likely that ~60 blocks have been found in the last hour, assuming the difficulty adjustment hasn't happened since. Seeing this, one could deduce that hashpower went up by ~10x and that the expected time until the next block is roughly 1 minute instead of 10.

Now, in most cases hashpower doesn't change that drastically, but it remains true that recent block times give you more than zero information about hashpower, and therefore about the expectation for future block times.
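
A back-of-the-envelope version of that inference (assuming Poisson block arrivals and no difficulty adjustment yet): the maximum-likelihood estimate of the current block rate is just the observed count divided by the observation window.

    # Observed: 60 blocks in the last 60 minutes, on a difficulty that targets
    # one block per 10 minutes. Under a Poisson model the MLE of the block rate
    # is simply count / window.
    observed_blocks = 60
    window_minutes = 60.0
    target_rate = 1 / 10.0                               # blocks per minute at nominal hashpower

    estimated_rate = observed_blocks / window_minutes    # ~1 block per minute
    hashpower_multiplier = estimated_rate / target_rate  # ~10x hashpower online
    expected_wait = 1 / estimated_rate                   # ~1 minute until the next block

    print(hashpower_multiplier, expected_wait)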


This is because running code on the Ethereum VM and storing data is hugely expensive (rightly so, as it's done on every node in the world). Therefore Solidity tries to compile to VM code that uses the fewest and cheapest instructions, and to pack data into as small a storage footprint as possible.


Most of the issues pointed out by int_19h would be handled at compile time during static analysis and wouldn't change the generated bytecode much. I'm talking about strong typing, immutability by default, less error-prone syntax, tail calls, evaluation order, etc.

Even replacing 256bit ints with arbitrary precision "bigints" wouldn't add too much of a cost if it's a native type of the underlying VM (as it should be for such an application IMO). It might even reduce code size by removing overflow tests.


It's about costing the operations reliably.

But I would still expect arithmetic to be overflow-checked by default, as in e.g. VB.NET. This would mean that careless arithmetic on unvalidated inputs could still cause the contract to fail - but at least it would be a controllable failure, with all state changes being reversed, and hence not exploitable.
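
For illustration, a small Python sketch of the semantic difference between checked and wrapping 256-bit addition (not EVM code; just the idea that overflow should fail loudly rather than silently wrap):

    UINT256_MAX = 2**256 - 1

    class Revert(Exception):
        """Stands in for a reverted contract call (all state changes undone)."""

    def wrapping_add(a, b):
        return (a + b) & UINT256_MAX         # silent wraparound: the exploitable behaviour

    def checked_add(a, b):
        result = a + b
        if result > UINT256_MAX:
            raise Revert("integer overflow")  # fail loudly instead of wrapping
        return result

    print(wrapping_add(UINT256_MAX, 1))       # 0 - the classic overflow bug
    try:
        checked_add(UINT256_MAX, 1)
    except Revert as e:
        print("reverted:", e)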


In the rationale document they explain that it was too difficult to determine reasonable gas costs for math on arbitrary-length integers.


"Normal" and "secure" languages don't do this already?


I think that many people miss the most important thing here: it is not important if a currency hard-forks or not - what is important is that it can.

I have no confidence that Bitcoin or any other currency won't fork in the future, because it is possible at any moment. Whether the fork has happened or not changes nothing - it always remains a possibility.

In other words the chance of Ethereum or Bitcoin hard-forking in the future wouldn't change if the hard-fork didn't happen right now.


> I think that many people miss the most important thing here: it is not important if a currency hard-forks or not - what is important is that it can.

Well, it's setting a precedent that will make the case easier in the future. Bitcoin's hard forks have all seemed to be good-faith attempts to fix issues in the protocol. I don't think anyone on the Ethereum team is arguing this is anything other than an extremely arbitrary fork. The fact that 51% of stakeholders agreed to set this precedent should scare anyone away from owning Ethereum in the future.


This is a very one-sided discussion, which makes it seem like it was written by someone who wants Monero's value to rise. It doesn't mention any drawbacks of Monero - like the poor scalability that blocks its wide adoption. There's also a good deal of FUD around Dash and Zcash, which was quickly refuted on reddit [1].

Apart from that, I liked this post and it shed some light on issues I wasn't aware of.

[1] https://www.reddit.com/r/btc/comments/4nai1r/on_fungibility_...


Note that Monero's scalability problem also exists in Zcash - an indefinitely growing list of spent tokens; if scalability is a drawback of Monero, it's a drawback for Zcash too.


An indefinitely growing list of spent transactions is the least of Monero's scaling issues. Monero doesn't scale to large anonymity sets.

Monero uses CryptoNote's ring signature approach, which scales linearly with the number of coins you want to mix with. Want to fully mix 1,000 coins together? You need a 30kb transaction[0]. You can chunk those coins into smaller mixing sets, but then they aren't fully mixed. In anything using this approach, your anonymity set is limited by what you can transmit across the network in any given transaction or small set of transactions. I've never seen an exact proposal for mixing tx size and I'd be very interested to see one, but if it was more than 100 coins per tx I'd be surprised.
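
A back-of-the-envelope version of that size estimate, using the one-group-element-per-ring-member assumption from [0] (real transactions carry more data than this):

    GROUP_ELEMENT_BYTES = 32   # one group element per ring member (lower bound)

    def ring_sig_size(ring_members):
        return ring_members * GROUP_ELEMENT_BYTES

    print(ring_sig_size(1_000))   # 32,000 bytes, roughly the 30kb figure above
    print(ring_sig_size(100))     # 3,200 bytes for a more modest ring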

In ZCash, transactions are constant size and are fully mixed with every other coin in the current anonymity set.

Both approaches do have the indefinitely growing list of spent tokens issue. Which in practice means you need to move coins into a new anonymity set after e.g. 2^32 serial numbers and throw away the old coins and spent serial number list[1]. So there is an inherent limit on the maximal anonymity set you get out of any anonymous ecash scheme. Zerocash hits that limit. Due to its per transaction scaling issues, CryptoNote simply can't.

As a result, in ZCash, your coin is hidden amongst all the coins in the maximal anonymity set. In Cryptonote/Monero, it's hidden amongst a far smaller fraction of that set. In Monero, you are far less anonymous. All things being equal, you want to be more anonymous.

Of course, all other things are not equal. There are merits to both Zerocash and CryptoNote on a technical level, but scalability isn't where CryptoNote shines.

[0] Assume one group element per signature in the ring at 32 bytes per element. The real scheme is likely worse.

[1] There are more sophisticated approaches that can be used.


> In Cryptonote/Monero, it's hidden amongst a far smaller fraction of that set.

Just wanted to point out that it's not that simple - what you are referring to is more like a coinjoin level of anonymity. In Monero / CryptoNote, since one-time keys are used for each transaction, when you receive coins they are in fact hidden among the entire set (which is the same anonymity level as Zerocash). The received coins can then be used as non-signers in "many" ring sigs, and so they could have been spent at any point in the remainder of the blockchain - the anonymity set for when a coin is spent is therefore "all" the ring sigs it is a member of, and since those remain on the blockchain indefinitely, this can grow infinitely large.

Edit: I mean, it's fine to downvote, but at least providing a comment is helpful if you disagree.


That is a good point; I should have clarified I was talking about transaction scalability in general, not _anonymous_ transaction scalability.

In any case, hopefully ZK-SNARKS continue to be optimized sufficiently that there's no question about which approach is better; I know you guys have done tremendous work on achieving that goal. Thank you!


> which scales linearly with the number of coins you want to mix with. Want to fully mix 1,000 coins together? You need a 30kb transaction

Actually, it does not scale linearly; it scales logarithmically in the worst case.

If you create a transaction to send 1543 XMR, it splits the amount into 4 pieces: 1000, 500, 40, and 3. Each of these pieces is put into a ring signature, where the other transactions in the ring are selected from the pool of all other transactions of the same size, going back to the creation of the network. I'm not sure why you think that it scales linearly with the amount of coins sent.

Edit: Unless you mean "to achieve perfect anonymity, you need to mix your coins with every other transaction of the same size, which scales linearly with the total number of transactions performed since the start of the network", in which case, yes, it is linear. But that's serious overkill; there's no reason to have a ring size that large.
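
A sketch of the denomination split described above (digit-by-digit decomposition of the amount; not Monero's actual code, just the idea):

    def split_denominations(amount):
        """Decompose an amount into one piece per non-zero decimal digit,
        e.g. 1543 -> [1000, 500, 40, 3]."""
        pieces = []
        place = 1
        while amount > 0:
            digit = amount % 10
            if digit:
                pieces.append(digit * place)
            amount //= 10
            place *= 10
        return list(reversed(pieces))

    print(split_denominations(1543))  # [1000, 500, 40, 3]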


Yes, I meant perfect anonymity.

If we consider imperfect anonymity, we need to consider more than the size of the anonymity set; we need to consider how likely it is that a given coin in the anonymity set is the actual one we are hiding. This is a Bayesian thing that depends on the attacker's prior knowledge. For many coins it may be vanishingly close to zero, which means they don't really contribute to the anonymity set. Which means you can end up with a large-looking anonymity set that is equivalent to a perfect anonymity set of, say, 5 coins.

How big is the anonymity set for a given CryptoNote transaction? You might think that 1) it clearly is at least the size of all the coins in the tx and 2) actually it's the union of those coins' anonymity sets. But what are the probabilities? I don't know. But consider a few possible issues.

If you sample the coins in the mixing set for your tx uniformly from the whole blockchain, then many of them will be very old, but the actual coin you are spending is likely new. This also applies to the sets you are taking the union of. Couple this with other issues such as long-term intersection attacks, and it gets very hard to say how much anonymity you really have. Especially because we don't know what techniques the companies doing coin tracing have and, more significantly, what third-party data they are correlating with beyond just the blockchain. Perfect anonymity and very large anonymity sets are the best defense we have against this stuff.


Unsurprisingly, there exists research by the Monero Research Lab highlighting temporal association attacks and other possibilities.

https://lab.getmonero.org/pubs/MRL-0001.pdf

https://lab.getmonero.org/pubs/MRL-0004.pdf

As to your last statement: even if the supposition is that the true signer is the most recent output on the blockchain, that is nothing but an unprovable supposition, which means that Monero enables plausible deniability at the very least.

Since transactions are both unlinkable (for any two outgoing transactions it is impossible to prove they were sent to the same person) and untraceable (for each incoming transaction all possible senders are equiprobable), the anonymity set continues to grow, which makes the privacy risk cryptographically negligible.


I don't understand your terminology. What do you mean "mix 1000 coins together", and what is the use-case there?


So how do you advocate for a cryptocurrency that you truly believe in without being called a shill...or a pumper?

Monero/CryptoNote based coins offer better privacy than any other competitor. And we live in an age where that's becoming increasingly important.


Create a cryptocurrency that isn't designed to provide massive increases in wealth for early adopters?


If you're referring to Monero, can you explain how it's "designed" to provide massive increases in wealth for early adopters? Is it the volunteer contributions? Lack of ICO? Perhaps it's the fact that the current market price is lower than when the coin was released two years ago. That surely gives the early adopters an advantage....


There are plenty of early adopters who purchased Monero for several dollars each. And it languished at 30-50 cents for well over a year. And is now only ~$1.

So there's been plenty of time for anyone to be an early adopter....continuing through today and easily for months to come.

Or skip Monero altogether. Aeon, a related project, is available right now for less than a penny. And has been for quite some time.

So, as to your question, Done. You can be an "early adopter" right now.


Then almost by definition such a cryptocurrency is doomed to failure.


Please check aiopg [1]. There's also an interesting list of packages for asyncio [2].

[1] https://aiopg.readthedocs.org/en/stable/

[2] http://asyncio.org/
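
For a quick feel of what aiopg usage looks like, a minimal sketch (the DSN is a placeholder; check the docs linked above for the exact, current API):

    import asyncio
    import aiopg

    dsn = "dbname=test user=postgres"  # placeholder connection string

    async def fetch_one():
        pool = await aiopg.create_pool(dsn)
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")
                row = await cur.fetchone()
                print(row)

    asyncio.get_event_loop().run_until_complete(fetch_one())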


We see people hyping NodeJS all the time, mainly because it can handle millions of concurrently open connections. Now we get a much cleaner (IMHO) way to achieve the same thing in Python (of course Python is slower) - e.g. aiohttp as a websockets server. I would not call it a disaster.
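
For a rough illustration, here is a minimal echo-handler sketch written against a recent aiohttp (3.x); the API in the 0.x versions contemporary with this thread differed slightly:

    import aiohttp
    from aiohttp import web

    async def ws_handler(request):
        ws = web.WebSocketResponse()
        await ws.prepare(request)
        # reads like serial code: just iterate over incoming messages
        async for msg in ws:
            if msg.type == aiohttp.WSMsgType.TEXT:
                await ws.send_str("echo: " + msg.data)
            elif msg.type == aiohttp.WSMsgType.ERROR:
                break
        return ws

    app = web.Application()
    app.router.add_get("/ws", ws_handler)
    web.run_app(app, port=8080)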


It's great that it works for you for this one particular type of application, but the whole world isn't IO bound. Some of us just want regular, plain old concurrency. As this shows, multiprocessing is still the only (unsane) solution right now. I just want to be able to make a thread and do stuff in it in parallel to other threads, like I can with most other languages.

"Go write a C extension" you tell me, "use something besides cPython" he says, "just use multiprocessing" I hear. Sure...but ffs, we've had multicore processors for almost TWO DECADES now.

One of my biggest, and apparently unchanging, problems with Python is the desire to keep the interpreter simple, to the disadvantage of the language. Sure, "implementation details may vary between interpreters", blah blah, but you have to target the implementation that is both the slowest and the most widely installed, which on both counts is definitely CPython.


I think these are two distinct use cases. NodeJS actually does not (AFAIK) handle one of them. Basically, you have IO bound tasks and CPU bound tasks (and a mix of both which is really nasty business). Python has had CPU-bound task concurrency via multiprocessing and it's been OK. My preference would be to get rid of GIL and improve how threading is actually done, but technically you can serve CPU-bound tasks today with Python 2 and 3. This is (AFAIK) not something that Node does out of the box.

The IO bound tasks in Python are a problem and I wish there was a clean solution. Python does not have a global event loop, so there is not an easy place to hook in coroutines, callbacks, etc. So for a while we were stuck with one of the following:

1. Use threading or multiprocessing. This sucks for concurrency beyond about 2-8.

2. Use eventlet, gevent, or another event loop. The problem here is that you have to buy into it whole hog. No component of yours can be blocking, and that's hard to tell.

3. Write your own event loop. I've done this and find it to be the most understandable and easy to debug approach. This sucks because of the amount of effort it takes for something so fundamental (because networking is tricky).

Some people would be happy if Python got better at solving IO-bound-only tasks. I guess that's where this feature comes in. I haven't played with 3.5 yet because I am mostly stuck on 2.7 for reasons. However, looking at it, I feel like there should have been more of a separation between blocking and non-blocking code here. Something along the lines of an async function not being able to call a blocking function.

Re: CPU and IO bound tasks: I know of no great framework for this besides threading (not the kind in Python + GIL, but real threading). I usually just side-step this problem by separating tasks that are both IO and CPU bound into smaller tasks that are only CPU or only IO bound. Thankfully, that's generally pretty easy to do.


asyncio provides an event loop in the standard lib now



And ironically, NodeJS is also single-threaded with its own GIL, which somehow no one ever mentions. It's great for async IO-bound stuff but is really worse than Python for multicore CPU-bound work - at least Python has concurrent.futures, which makes throwing together a parallel map operation a few lines of code.
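
For instance, that parallel map is roughly this (a generic sketch, not tied to any particular workload):

    import concurrent.futures
    import math

    def cpu_heavy(n):
        # stand-in for real CPU-bound work
        return sum(math.sqrt(i) for i in range(n))

    if __name__ == "__main__":
        with concurrent.futures.ProcessPoolExecutor() as pool:
            # maps across worker processes, sidestepping the GIL for CPU-bound work
            results = list(pool.map(cpu_heavy, [10**6] * 8))
        print(results)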


This is like the cripple making fun of the blind man.

When your competition is NodeJS there is nothing really left to say.


I've been using the new async/await syntax to write a beautiful asynchronous websockets server, and I fell in love with it. It handles hundreds of thousands of concurrent connections (on top of aiohttp) and the code is so much cleaner than it would be with e.g. NodeJS and Express and Promises. It reads like serial code.

I think benchmarking asyncio with any type of CPU-bound task misses the point. Previously we were relying on hacks like monkeypatching with gevent, but now we've been given a clean, explicit, and beautiful way to write massively parallel servers in Python.


Anything you can share? Some good examples of networking code with async/await would be incredibly helpful. The documentation covers the primitives but I have a hard time putting it all together.


Unfortunately the code is not open source - I'll try to open parts of it in the future.

But please do check this simple gist I found some time ago that helped me understand how powerful asyncio is:

https://gist.github.com/gdamjan/d7333a4d9069af96fa4d


I'm actually tearing up here. That is... Beautiful.


fyi: ES2016 javascript also supports async/await, and you can use them today w babel


No it doesn't. It just hit stage 3 in TC39 yesterday, meaning browsers should be considering implementing it experimentally in order to solicit feedback from developers. When it progresses beyond that you can start calling it ES2016.


I stand corrected; somehow I was under the impression that they were part of ES7 (ES2016). So does this mean there's still a chance that they might not make it?


It's going to make it in some form, possibly even its current form, but it isn't formally part of the spec yet.


Thanks for writing this!

> [..] It handles hundreds of thousands of concurrent connections [..]

Is it a single process application, or you use a few OS processes behind some load balancer?


Yes, it is a single process application. But to utilize all cores I use Gunicorn to spawn multiple processes.

http://aiohttp.readthedocs.org/en/stable/gunicorn.html

