And the idea is misguided there as well. Kanji are problematic because people who can't read them can't differentiate their shapes. Any unique silhouette -- even a Latin letter -- is preferable to arrow diagrams, which end up unnecessarily similar to one another.
Absolutely, but that’s not heat. Heat produced from burning wood is an example of the potential energy of chemical bonds being released as kinetic energy plus photons.
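As a rough illustration of the bookkeeping (using textbook average bond enthalpies, so the numbers are approximate):

    # Rough bond-enthalpy accounting for CH4 + 2 O2 -> CO2 + 2 H2O.
    # Average bond energies in kJ/mol; textbook approximations, not exact values.
    broken = 4 * 413 + 2 * 498   # break 4 C-H and 2 O=O bonds: energy in
    formed = 2 * 799 + 4 * 463   # form 2 C=O and 4 O-H bonds: energy out
    print(formed - broken)       # ~802 kJ/mol released as heat and photons

The bonds formed are stronger than the bonds broken, and the difference comes out as kinetic energy and light.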
The Airthings View Plus is a good one. It has high-quality (for consumer hardware) particulate, CO2, VOC, and radon sensors and a good interface. It also tracks temperature, pressure, and humidity for completeness.
Practice like you are learning a musical instrument or chess. Break it down into very specific subskills (like a specific chess endgame or guitar picking technique) and practice those in isolation in addition to your usual practice. Also find a coach if possible.
Each Feynman diagram is shorthand for an integral, so moving beyond them is a question of reformulating the mathematical basis of physics.
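To make that concrete (a schematic textbook example, not tied to any particular theory): a one-loop diagram with two internal lines stands for an integral like

    $$ I(p) = \int \frac{d^4 k}{(2\pi)^4} \, \frac{1}{(k^2 - m^2 + i\epsilon)\left((p-k)^2 - m^2 + i\epsilon\right)} $$

so "moving beyond diagrams" means finding a different way to organize and evaluate such integrals.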
There is some work in that direction (look up 'Amplituhedron'), but overall it's not that surprising that physicists are using computers to do the same math with much more precision than they could before, rather than shaking up the fundamental mathematical tools of their field, which have stood since the 1600s.
He's not wrong about the narrow algorithmic use cases (so far), but he's completely missing the utility for simulation of quantum phenomena (chemistry, microbiology, materials science). That use case alone completely justifies investing into them even if you don't care about advancing science.
For chemistry, you usually care about the quantum mechanics of an ensemble. Simulating a single molecule is less useful, and chemistry is already a very good heuristic system for that. What we would actually care about is something like: we have a 2-dimensional membrane with lipids and cholesterols, and, say, two proteins interacting in this membrane. Now give me a molecule that either enhances or inhibits the interaction, and please constrain your search to molecules that can cross the blood-brain barrier and that we can synthesize in under five "undergrad ochem" steps[0] from a set of feeder molecules with low supply-chain risk (let's not depend on glutamate from China, e.g.). A rough sketch of the screening side of this search is below the footnote.
I'm not sure this is something that QC can easily solve.
[0] Undergrad ochem is actually a pretty good heuristic for which reactions one can perform at industrial scale, though industrial-scale reactions might require catalysts you don't learn about in ochem.
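A crude sketch of just the screening-constraint part, using RDKit. The thresholds are common rules of thumb for blood-brain-barrier permeability, not validated cutoffs, and the synthesis/supply-chain constraints aren't something you can compute this simply:

    from rdkit import Chem
    from rdkit.Chem import Crippen, Descriptors

    def maybe_bbb_permeable(smiles: str) -> bool:
        """Rule-of-thumb filter: small, low polar surface area, moderately lipophilic."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        return (Descriptors.MolWt(mol) < 450          # small enough to cross
                and Descriptors.TPSA(mol) < 90        # low polar surface area
                and 1.0 < Crippen.MolLogP(mol) < 3.0) # lipophilicity window

    print(maybe_bbb_permeable("CC(=O)Nc1ccc(O)cc1"))  # paracetamol, as a smoke test

The hard part -- "enhances or inhibits the protein-protein interaction" -- is exactly the quantum-mechanical ensemble question above; a filter like this only prunes the candidate list before you pay for that calculation.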
> That use case alone completely justifies investing into them
Not at the massive scale of investment required.
Take, for example, GPGPU (general-purpose computing on GPUs). The vast investment needed to get there was funded by the gaming industry; supercomputers as an investment target were tiny compared to gaming and general-purpose computing.
The AI boom was likewise built on the coattails of the gaming industry, and the benefits spilled over into scientific computing as well.
To start, we'll neglect the bosonic degrees of freedom (the displacement of the atoms themselves) and look only at the fermionic DoF (the electrons).
Water has 10 electrons (8 valence electrons once you freeze the oxygen 1s core), which QC can treat exactly without any extra work, spread over a number of orbitals. In general we need 2 qubits per (spatial) orbital.
Most QC demonstrations so far were performed using so-called minimal basis sets, which have a small number of orbitals and thus give inaccurate results. A better approach would be to take a large orbital basis, do a relatively expensive classical Hartree-Fock calculation, and then use the resulting orbitals for the QC. This technique, when done on classical computers, is called MRCI (multireference configuration interaction) and is the gold standard in quantum chemistry.
So, provided we can pay the cost of a large-basis HF calculation (and we can, for fairly large molecules), we can get pretty good results using n-electrons-in-n-orbitals MRCI. A production electronic calculation on a water molecule would then take about 16 qubits.
The more frustrating problem is that the number of electronic interaction terms scales as N^4 in the number of orbitals, so we would very rapidly need extremely deep circuits, which are not feasible without error correction (which itself costs something like 8 physical qubits for every logical qubit). There are proposals to use plane-wave basis sets (N^2 interactions), but then we need many more orbitals and thus many more qubits.
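A back-of-the-envelope sketch of that scaling (2 qubits per spatial orbital, ~N^4 two-electron terms in a Gaussian basis; the example orbital counts are my own rough picks, not measurements):

    def resources(n_orbitals: int) -> tuple[int, int]:
        qubits = 2 * n_orbitals   # one qubit per spin orbital
        terms = n_orbitals ** 4   # two-electron integrals (ij|kl)
        return qubits, terms

    for n in (8, 60, 500):  # water active space / small molecule / drug-sized
        q, t = resources(n)
        print(f"N={n:>4} orbitals: {q:>5} qubits, ~{t:.1e} interaction terms")

The qubit count grows gently; it's the term count, and hence the circuit depth, that blows up.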
We are in practice very far from QC having a significant impact on real-life quantum chemistry. It's not at all clear that we'll ever be able to do QC on a molecule the size of a typical drug, let alone a protein.
Because qubits are just random numbers. They want to make you believe that qubits represent a range of values, which is true in theory, but in reality it's just a plain old single random number. So all QC calculations are basically random + random * random - random = ... random, of course. I think Google's only QC application was a program that generates some random hash. Other than that, QC hasn't done anything else, nor do I believe it can.
Really? How come reality keeps agreeing with me then? Where are these amazing quantum mechanics applications? You really believe in superposition? Really? How long have they been working on QC? 50 years now? It never seems to work or be ready... I wonder why.
I have no idea why you think reality agrees with you. I “believe” in superposition, entanglement, and the like because I’ve tested them myself in a lab with a six-qubit quantum computer I built.
Yeah, the problem is in the name and the marketing. People think it is meant to replace normal computers, but that is totally wrong. The simulation use cases, as well as probing fundamental physics, are far more immediate and exciting.
It hasn't happened yet and may not happen any time soon, but if there's a chance RSA is going to be broken, I'd rather be aware of the possibility 10 years in advance than 10 days after...
What would be an example of an outstanding, reasonably sized (even the quantum computers of the future will have finite resources) quantum simulation problem that is intractable today but would unlock some economic potential if solved?
Drug molecules are much too large for near term QC, let alone the interaction of two of them in a solvent. We're talking thousands of qubits and circuits millions of operations deep.
Quantum chemistry isn't magic; it's been around since the sixties, and we understand pretty well the resources required for calculations. I happen to have a PhD in it too. AFAIK, only two quantum-computer algorithms are currently considered near-term feasible for this problem (VQE and QPE), and both require a number of qubits = 2 x the number of orbitals N and a number of operations between N^2 and N^4.
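Plugging a drug-sized molecule into those counts shows where the "thousands of qubits, circuits millions of operations deep" figure comes from (the orbital count N here is my own rough guess for a decent basis set):

    N = 1500                             # orbitals for a drug-sized molecule (rough guess)
    qubits = 2 * N                       # VQE/QPE: 2 qubits per orbital
    ops_low, ops_high = N ** 2, N ** 4   # operation count between N^2 and N^4
    print(qubits)                        # 3000 -> thousands of qubits
    print(f"{ops_low:.0e} to {ops_high:.0e} operations")  # ~2e+06 to ~5e+12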
It's true that there could be neat quantum computing shortcuts that maintain calculation accuracy and that aren't doable on classical computers... but then we could also imagine that some neat quantum chemistry trick might make classical computers much better too. (We actually have a bunch of these already, but they are approximate: DFT, machine learning, pseudopotentials, etc.)
Thank you for the response. I'll need to read more about VQE and QPE. Still, I'm hopeful that geometric/energy-minimization methods like invariant point attention in AlphaFold and other geometric deep learning findings will translate into quantum chemistry someday.
I think one of the biggest applications for quantum computers which isn't well appreciated is simulation of quantum systems: chemistry and materials science. This is similar to the old analog computers, which were used to simulate differential equations long before those could be fully "virtualized" (simulated deterministically, without noise) using digital algorithms.
There are always alternative ways to describe the same phenomena. For example, QM could be done with only real numbers (you can emulate imaginary numbers with matrix operations). There is no clear-cut notion of what is "required".
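A minimal sketch of the "emulate imaginary numbers with matrices" point: represent a + bi as the 2x2 real matrix [[a, -b], [b, a]], and matrix multiplication reproduces complex multiplication exactly.

    import numpy as np

    def as_real_matrix(z: complex) -> np.ndarray:
        # a + bi -> [[a, -b], [b, a]]
        a, b = z.real, z.imag
        return np.array([[a, -b], [b, a]])

    z1, z2 = 1 + 2j, 3 - 1j
    lhs = as_real_matrix(z1) @ as_real_matrix(z2)
    rhs = as_real_matrix(z1 * z2)
    print(np.allclose(lhs, rhs))  # True: the real encoding multiplies like C

So a formulation of QM over real matrices carries the same content; the complex numbers are a convenience, not obviously a necessity.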