  • quantum nature of the randomly generated numbers helped specifically with quantum computer simulations, but based on your reply you clearly just meant that you were using it as a multi-purpose RNG that is free of unwanted correlations between the randomly generated bits

    It is used as the source of entropy for the simulator. Quantum mechanics is random, so to actually get the results you have to sample it. In quantum computing, this typically involves running the same program tens of thousands of times, which are called “shots,” and then forming a distribution of the results. The sampling with the simulator uses the QRNG for the source of entropy, so the sampling results are truly random.
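
    If it helps to picture the sampling step, here is a minimal sketch (not the actual simulator code) of drawing shots from a distribution of outcome probabilities, with a hypothetical entropy_u64() standing in for whatever hardware source supplies the raw bits:

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for the hardware entropy source (QRNG card,
       RDRAND, TrueRNG, ...); it just wraps rand() here so the sketch compiles. */
    static uint64_t entropy_u64(void) {
        uint64_t x = 0;
        for (int i = 0; i < 4; i++)
            x = (x << 16) ^ (uint64_t)(rand() & 0xFFFF);
        return x;
    }

    int main(void) {
        /* Example outcome probabilities a 2-qubit simulation might produce. */
        const double probs[4]  = { 0.25, 0.00, 0.25, 0.50 };
        const char  *labels[4] = { "00", "01", "10", "11" };
        long counts[4] = { 0 };
        const long shots = 10000;

        for (long s = 0; s < shots; s++) {
            /* Turn 53 random bits into a uniform double in [0, 1). */
            double u = (double)(entropy_u64() >> 11) / (double)(1ULL << 53);
            double acc = 0.0;
            for (int k = 0; k < 4; k++) {            /* inverse-CDF sampling */
                acc += probs[k];
                if (u < acc || k == 3) { counts[k]++; break; }
            }
        }

        for (int k = 0; k < 4; k++)
            printf("%s: %ld\n", labels[k], counts[k]);
        return 0;
    }
    ```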

    Out of curiosity, have you found that the card works as well as advertised? I ask because it seems to me that any imprecision in the design and/or manufacture of the card could introduce systematic errors in the quantum measurements that would result in correlations in the sampled bits, so I am curious if you have been able to verify that is not something to be concerned about.

    I have tried several hardware random number generators and usually there is no bias, either because the device was specifically designed not to have one or because it applies some level of post-processing to remove it. If there is a bias, it is possible to remove it yourself. There are two methods I tend to use, depending upon the source of the bias.

    To be “random” simply means each bit is statistically independent of every other bit, not necessarily that the outcome is uniform, i.e. a 50% chance of 0 and a 50% chance of 1. It can still be considered truly random with a non-uniform distribution, such as a 52% chance of 0 and a 48% chance of 1, as long as each successive bit is entirely independent of any previous bit, i.e. there is no statistical analysis you could ever perform on the bits to improve your chances of predicting the next one beyond the initial 52%/48% distribution.

    In the case where it is genuinely random (statistical independence) yet non-uniform (which we can call a nondeterministic bias), you can transform it into a uniform distribution using what is known as a von Neumann extractor. This takes advantage of a simple probability rule for statistically independent data whereby Pr(A)Pr(B)=Pr(B)Pr(A). Let’s say A=0 and B=1, then Pr(0)Pr(1)=Pr(1)Pr(0). That means you can read two bits at a time rather than one, throw out all results that are 00 or 11, keep only the results that are 01 or 10, and then map 01 to 0 and 10 to 1. You are then mathematically guaranteed that the resulting distribution of bits is perfectly uniform, with a 50% chance of 0 and a 50% chance of 1.
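
    If it helps, a minimal sketch of that procedure, assuming a hypothetical read_raw_bit() that returns one possibly-biased but independent bit from the hardware source:

    ```c
    /* Hypothetical: returns one possibly-biased but statistically
       independent bit (0 or 1) from the raw hardware source. */
    extern int read_raw_bit(void);

    /* Von Neumann extractor: read bits in pairs, discard 00 and 11,
       map 01 -> 0 and 10 -> 1. The output is uniform as long as the
       input bits are independent, no matter how biased they are. */
    int read_unbiased_bit(void) {
        for (;;) {
            int a = read_raw_bit();
            int b = read_raw_bit();
            if (a != b)
                return a;   /* 01 gives 0, 10 gives 1 */
        }
    }
    ```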

    I have used this method to develop my own hardware random number generator that can pull random numbers from the air, by analyzing tiny fluctuations in electrical noise in your environment using an antenna. The problem is that electromagnetic waves are not always hitting the antenna, so there can be long strings of zeros, and if you set something like this up you will find your random numbers are massively skewed towards zero (like a 95% chance of 0 and a 5% chance of 1). However, since each bit is still truly independent of the next, the von Neumann extractor will still give you a uniform distribution of 50% 0 and 50% 1.

    Although, one thing to keep in mind is that the bigger the skew, the more data you have to throw out. The generator I built that pulls numbers from the air ends up throwing out the vast majority of its data due to the huge bias, so it can be very slow. There are other extractor algorithms that throw out less data, but they can be much more mathematically complicated and require far more resources.
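
    To put a number on it: if the raw bits are independent and each one is 1 with probability p, a pair survives the extractor only when the two bits differ, so the fraction of pairs kept is Pr(01)+Pr(10)=2p(1−p). With the 95%/5% skew above that is 2(0.05)(0.95) ≈ 9.5% of pairs, or roughly one output bit for every 21 raw bits, whereas an unbiased source (p=0.5) keeps half the pairs, one output bit for every 4 raw bits.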

    In the case where it may not be genuinely random because the bias is caused by some imperfection in the design (which we can call a deterministic bias), you can still spread the bias uniformly across all the bits, so that not only is the bias much more difficult to detect, but you still get uniform results. The way to do this is to take your random number and XOR it with a data set that is non-random but uniform, which you can generate from a pseudorandom number generator like C’s rand() function.
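
    As a minimal sketch (read_hw_byte() is a hypothetical stand-in for the hardware source):

    ```c
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical: returns one raw byte from the hardware source,
       possibly carrying a deterministic bias. */
    extern uint8_t read_hw_byte(void);

    /* XOR-whitening: mask the hardware bytes with a pseudorandom stream.
       This spreads a deterministic bias out so it is hard to detect,
       but (as discussed below) it does not actually remove it. */
    uint8_t read_whitened_byte(void) {
        uint8_t mask = (uint8_t)(rand() & 0xFF);  /* swap in a CSPRNG for anything serious */
        return read_hw_byte() ^ mask;
    }
    ```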

    This will not improve the quality of the random numbers. Say the source is biased 52% to 48% and you use this method to de-bias it so the distribution is 50% to 50%: if someone can predict the next value of the rand() function, that prediction brings their ability to guess the next bit right back up to 52%/48%. You can make this harder by using a higher quality pseudorandom number generator, such as one based on AES. NIST even has standards for this kind of post-processing.

    But ultimately this method is only obfuscation: it makes the deterministic bias harder and harder to discover by hiding it away more cleverly, but it does not truly get rid of it. It is impossible to take a random data set with some deterministic bias and truly remove that bias purely through deterministic mathematical transformations; you can only hide it away very cleverly. Only if the bias is nondeterministic can you get rid of it with a mathematical transformation.

    It is impossible to reduce the quality of the random numbers this way. If the entropy source is truly random and truly non-biased, then XORing it with the C rand() function, despite it being a low-quality pseudorandom number generator, is mathematically guaranteed to still output something truly random and non-biased. So there is never harm in doing this.
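
    (The one-line argument: if bit X is uniform and independent of the mask bit Y, then Pr(X XOR Y = 0) = Pr(X=0)Pr(Y=0) + Pr(X=1)Pr(Y=1) = 0.5·Pr(Y=0) + 0.5·Pr(Y=1) = 0.5, no matter how predictable or biased Y is.)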

    However, in my experience, if you find your hardware random number generator is biased (most aren’t), the bias usually isn’t very large. If something is truly random but biased so that there is a 52% chance of 0 and a 48% chance of 1, that isn’t enough of a bias to cause many issues. You could even use it for something like cryptography, and even if someone did figure out the bias, it would not improve their ability to predict keys enough to actually put anything at risk. If you use a cryptographically secure pseudorandom number generator (CSPRNG) in place of something like C’s rand(), they will likely never discover the bias in the first place, as these do a very good job of obfuscating it, likely to the point of being undetectable.


  • I’m not sure what you mean by “turning it into a classical random number.” The only point of the card is to make sure that the sampling results from the simulator are truly random, down to the quantum level, and have no deterministic patterns in them. Indeed, actually using quantum optics for this purpose is a bit overkill, as there are hardware random number generators which are not quantum-based and produce something good enough for all practical purposes, like Intel Secure Key Technology, which is built into most modern x86 CPUs.

    For that reason, my software does allow you to select other hardware random number generators. For example, you can easily get an entire build (including the GPU) that can run simulations of 14 qubits for only a few hundred dollars if you just use the Intel Secure Key Technology option. It also supports a much cheaper USB device called the TrueRNGv3. There is also an option to use a pseudorandom number generator if you’re not that interested in randomness accuracy; in that mode it additionally supports “hidden variables,” which really just act as the seed to the pseudorandom number generator.
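
    For what it’s worth, reading from Intel Secure Key is just the RDRAND instruction; a minimal sketch using the compiler intrinsic (it needs a CPU with RDRAND and, on gcc/clang, compiling with -mrdrnd) looks something like this:

    ```c
    #include <immintrin.h>
    #include <stdio.h>

    /* Pull one 64-bit value from the CPU's hardware RNG (Intel Secure Key).
       _rdrand64_step returns 0 if the hardware could not supply a value
       just then, so retry a few times before giving up. */
    static int rdrand_u64(unsigned long long *out) {
        for (int tries = 0; tries < 10; tries++)
            if (_rdrand64_step(out))
                return 1;
        return 0;
    }

    int main(void) {
        unsigned long long x;
        if (rdrand_u64(&x))
            printf("%llu\n", x);
        else
            fprintf(stderr, "RDRAND unavailable\n");
        return 0;
    }
    ```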

    For most practical purposes, no, you do not need this card, and it’s definitely overkill. The main reason I even bought it was that I was adding support for hardware random number generators to my software and I wanted to support a quantum one, so I needed to buy it to actually test it and make sure it works with the software. But now I use it regularly as the back-end to my simulator just because I think it is neat.



  • Isn’t the quantum communication (if it were possible) supposed to be actually instantaneous, not just “nearly instantaneous”?

    There is no instantaneous information transfer (“nonlocality”) in quantum mechanics. You can prove this with the No-communication Theorem. Quantum theory is a statistical theory, so predictions are made in terms of probabilities, and the No-communication Theorem is a relatively simple proof that no physical interaction with a particle in an entangled pair can alter the probabilities of the other particle it is entangled with.

    (It’s actually a bit broader than this, as it shows that no interaction with a particle in an entangled pair can alter the reduced density matrix of the other particle it is entangled with. The density matrix captures not only the probabilities but also the particle’s ability to exhibit interference effects.)
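
    (For a local unitary U acting only on the other particle, the core of the proof is essentially one line: the reduced density matrix afterwards is Tr_B[(I⊗U) ρ (I⊗U)†] = Tr_B[ρ (I⊗U)†(I⊗U)] = Tr_B[ρ], exactly what it was before, because operators acting only on the traced-out system can be cycled inside the partial trace. The full theorem extends this to arbitrary local operations and measurements.)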

    The speed of light limit is a fundamental property of special relativity, and if quantum theory violated this limit then it would be incompatible with special relativity. Yet, it is compatible with it and the two have been unified under the framework of quantum field theory.

    There are two main confusions as to why people falsely think there is anything nonlocal in quantum theory, stemming from Bell’s theorem and the EPR paradox. I tried to briefly summarize these two in this article here. But to even more briefly summarize…

    People falsely think Bell’s theorem proves there is “nonlocality,” but it only proves there would be nonlocality if you were to replace quantum theory with a hidden variable theory. It is important to stress that quantum theory is not a hidden variable theory, so there is nothing nonlocal about it and Bell’s theorem simply is not applicable.

    The EPR paradox is more of a philosophical argument that equates eigenstates with the ontology of the system, an equation that leads to the appearance of nonlocal action, but only because the assumption is a bad one. Relational quantum mechanics, for example, uses a different assumption about the relationship between the mathematics and the ontology of the system and does not run into this.


  • It does not lend credence to the notion at all; that statement doesn’t even make sense. Quantum computing is in line with the predictions of quantum mechanics. It is not new physics; it is engineering, the implementation of physics we already know to build things, so it does not even make sense to suggest that engineering something is “discovering” something fundamentally new about nature.

    MWI is just a philosophical worldview from people who dislike that quantum theory is random. Outcomes of experiments are nondeterministic. Bell’s theorem proves you cannot simply interpret the nondeterminism as chaos, because any attempt to introduce a deterministic outcome at all would violate other known laws of physics, so you have to just accept it is nondeterministic.

    MWI proponents, who really dislike nondeterminism (for some reason I don’t particularly understand), came up with a “clever” workaround. Rather than interpreting probability distributions as just that, probability distributions, you instead interpret them as physical objects in an infinite-dimensional space. Let’s say I flip two coins, so the possible outcomes are HH, HT, TH, and TT, and you can assign a probability value to each. Rather than interpreting the probability values as the likelihood of events occurring, you interpret the “faceness” property of the coins as a multi-dimensional property that is physically “stretched” across four dimensions, where the amount it is “stretched” in each dimension depends upon those values. For example, if the probabilities are 25% HH, 0% HT, 25% TH, and 50% TT, you interpret it as if the coins’ “faceness” property is physically stretched out across four physical dimensions by 0.25 HH, 0 HT, 0.25 TH, and 0.5 TT.
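
    (In the actual formalism the “stretch” amounts would be probability amplitudes rather than the probabilities themselves, i.e. numbers whose squared magnitudes give the probabilities, so 25% HH, 0% HT, 25% TH, and 50% TT corresponds to amplitudes of 0.5, 0, 0.5, and 1/√2. I’m using the probabilities directly here just to keep the picture simple.)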

    Of course, in real quantum mechanics it gets even more complicated than this, because probability amplitudes are complex-valued, so you have an additional degree of freedom, and this would be an eight-dimensional physical space the “quantum” coins (like electron spin states) would be stretched out in. Additionally, notice how the number of dimensions depends upon the number of possible outcomes, which grows exponentially, as 2^N, with the number of coins under consideration. MWI proponents thus posit that each description like this is actually just a limited description due to a limited perspective. In reality, the dimension of this physical space would be 2^N where N is the number of (two-state) particles in the entire universe, so basically infinite. The whole universe is a single giant infinite-dimensional object propagating through this infinite-dimensional space, something they call the “universal wave function.”

    If you believe this, then it kind of restores determinism. If there is a 50% probability a photon will reflect off of a beam splitter and a 50% probability it will pass through, what MWI argues is that there is in fact a 100% chance it will pass through and be reflected simultaneously, because it is stretched out in proportions of 0.5 in both directions. When the observer goes to observe it, the observer themselves would also get stretched out in those proportions, simultaneously seeing it pass through and seeing it be reflected. Since this outcome is guaranteed, it is deterministic.

    But why do we only perceive a single outcome? MWI proponents chalk it up to how our consciousness interprets the world: it forms models based on a limited perspective, and these perspectives become separated from each other in the universal wave function during a process known as decoherence. This leads to the illusion that only a single perspective can be seen at a time. Even though the human observer is actually stretched out across all possible outcomes, they believe they can only perceive one of them at a time, and which one we settle on is random. I guess it’s kind of like the blue-black/white-gold dress thing: your brain just picks one at random, but the randomness is apparent rather than real.

    This whole story really is not necessary if you are just fine with saying the outcome is random. There is nothing about quantum computers that changes this story. Crazy David has a bad habit of publishing embarrassingly bad papers in favor of MWI. In one paper he defends MWI with a false dichotomy, pitching MWI as if its only competition were Copenhagen, then straw-manning Copenhagen by equating it with an objective collapse model, a characterization no supporter of that interpretation I am aware of would ever agree to.

    In another paper, where he brings up quantum computing, he basically just argues that MWI must be right because it gives a more intuitive understanding of how quantum computing actually provides an advantage: that it delegates subtasks to different branches of the multiverse. It’s bizarre to me how anyone could think something being “intuitive” or not (it’s debatable whether it even is more intuitive) is evidence in favor of it. At best, it is an argument in favor of utility: if you personally find MWI intuitive (I don’t) and it helps you solve problems, then have at it, but pretending this is somehow evidence that there really is a multiverse makes no sense.