  • Democrats are heartless genocidal freaks, and hardly “spineless”; they just don’t care. It’s a party of billionaires. I have no idea how you can unironically believe this ethos that they’re all a bunch of bleeding hearts who are just too scared, quivering in their boots, to act, but that they all mean well… apparently! No, they just never fight for the values you want them to fight for because their party does not represent those values, and if you’re pretending it does at this point… I have a bridge to sell you.




  • Honestly, the random number generation on quantum computers is practically useless. The speeds will not get anywhere near those of a pseudorandom number generator, and there are very simple PRNGs you can implement that are blazing fast, far faster than anything a quantum computer will spit out, and that produce numbers widely considered in the industry to be cryptographically secure. You can use AES as a PRNG, for example, and most modern CPUs, such as x86 processors, have hardware-level AES implementations (AES-NI). This is why full-drive encryption is practical on modern computers: the CPU can decrypt even a terabyte-scale encrypted file on the fly, about as fast as it takes for the window to pop up after you double-click it.

    While a PRNG does require an entropy pool to seed it, the entropy pool does not need to be large: you can spit out terabytes of cryptographically secure pseudorandom numbers from a fraction of a kilobyte of entropy data. Again, most modern CPUs actually include instructions to grab this entropy, such as Intel’s RDSEED instruction, which lets you pull thermal noise from the CPU. To reduce the risk of someone discovering an exploit in any single source, most modern OSes also mix other sources into the pool, like fluctuations in fan voltage.
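
    To make the idea concrete, here is a minimal sketch (my own illustration, not anything standardized): seed from a small amount of OS entropy, then expand it into an arbitrarily long stream of cryptographically secure bytes with AES in CTR mode. It assumes the third-party Python “cryptography” package; on CPUs with hardware AES support, the underlying library will typically use it automatically.

```python
# Minimal sketch: a tiny entropy seed expanded into a CSPRNG stream via AES-CTR.
# Assumes the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

seed = os.urandom(32)    # 256-bit key drawn from the OS entropy pool
nonce = os.urandom(16)   # initial 128-bit counter block

keystream = Cipher(algorithms.AES(seed), modes.CTR(nonce)).encryptor()

# Encrypting zeros just returns the raw AES-CTR keystream, i.e. a stream of
# cryptographically secure pseudorandom bytes, as long as you like.
random_bytes = keystream.update(b"\x00" * (1024 * 1024))  # 1 MiB of output
print(random_bytes[:16].hex())
```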

    Indeed, Linux used to give you two separate interfaces: /dev/random, which read directly from the entropy pool, and /dev/urandom, which read pseudorandom numbers. If you read from the entropy pool and it ran out, the program would block until more entropy could be collected, which is why some old Linux programs would freeze until you did things like move your mouse around.

    But you don’t see this anymore, because generating enormous amounts of cryptographically secure random numbers is so easy with modern algorithms that modern Linux just collects a little entropy at boot and uses that to generate all pseudorandom numbers afterwards; it got rid of the need to read the pool directly, and /dev/random and /dev/urandom now have the same behavior internally. Any time your PC needs a random number, it just pulls from the pseudorandom number generator that was seeded at boot, and that short window of entropy collection at boot is enough to generate sufficient pseudorandom numbers basically forever. These are the numbers used for any cryptographic application you may choose to run.

    The point of all this is just to say that random number generation is genuinely a solved problem; people don’t appreciate how easy it is to produce practically infinite cryptographically secure pseudorandom numbers. While on paper quantum computers are “more secure” because their random numbers would be truly random, in practice you would literally never notice a difference. If you gave two PhD mathematicians or statisticians the same message, one encrypted using a quantum random number generator and one encrypted using a PRNG like AES or ChaCha20, and asked them to decipher them, they would not be able to decipher either. In fact, I doubt they would even be able to identify which one was encrypted using the quantum random number generator. A string of random numbers looks just as “random” to any randomness test suite whether it came from a QRNG or from a high-quality PRNG (usually called a CSPRNG).

    I do think quantum computers could, at least on paper, be a big deal if the engineering challenges can ever be overcome, but quantum cryptography, such as “the quantum internet,” is largely a scam. All the cryptographic aspects of quantum computers are practically the same as, if not worse than, traditional cryptography, with only theoretical benefits that are technically there on paper but that nobody would ever notice in practice.


  • the study that found the universe is not locally real. Things only happen once they are observed

    This is only true if you operate under a very specific and strict criterion of “realism” known as metaphysical realism. Einstein put forward a criterion of what he thought this philosophy implied for a physical theory, and his criterion is sometimes called scientific realism.

    Metaphysical realism is a very complex philosophy. One of its premises is that there exists an “absolute” reality where all objects are made up of properties that are independent of perspective. Everything we perceive is wholly dependent upon perspective, so metaphysical realism claims that what we perceive is not “true” reality but sort of an illusion created by the brain. “True” reality is then treated as the absolute spacetime filled with particles captured in the mathematics of Newton’s theory.

    The reason it relies on this premise is that, by assigning objects perspective-invariant properties, they can continue to exist even if no other object is interacting with them, or, more specifically, they continue to exist even if “no one is looking at them.” For example, if you fire a cannonball from point A to point B, and you only observe it leaving point A and arriving at point B, Newtonian mechanics allows you to “track” its path between these two points even if you did not observe it.

    The problem is that you cannot do this in quantum mechanics. If you fire a photon from point A to point B, the theory simply disallows you from unambiguously filling in the “gaps” between the two points. People then declare that “realism is dead,” but this is a bit misleading, because this is really only a problem for metaphysical/scientific realism. There are many other kinds of realism in the literature.

    For example, the philosopher Jocelyn Benoist’s contextual realism argues the exact opposite. The mathematical theory is not “true reality” but is instead a description of reality, and a description of reality is not the same as reality. Would a description of the Eiffel Tower substitute for actually seeing it in reality? Of course not; they’re not the same. Contextual realism instead argues that what is real is not the mathematical description but precisely what we perceive. The reason we perceive reality in a way that depends upon perspective is that reality is just relative (or “contextual”). There is no “absolute” reality, only a contextual reality, and that contextual reality is what we perceive directly, as it really is.

    Thus for contextual realism there is no issue with the fact that we cannot “track” things unambiguously, because it has no attachment to treating particles as if they persist as autonomous entities. It is perfectly fine with just treating it as if the particle hops from point A to point B according to some predictable laws, relative to the context the observer occupies. That is just how objective reality works. Observation isn’t important, and indeed neither is measurement, because whatever you observe in the experimental setting is just what reality is like in that context. The only thing that “arises” is your identification of it.


  • Why did physicists start using the words “real” and “realism”? They are philosophical terms, not physical ones, and they lead to a lot of confusion. “Local” has a clear physical meaning; “realism” gets confusing. I have seen some papers that use “realism” in a way that has a clear physical definition, such as one I came across that defined it in terms of a hidden-variable theory. Yet I also saw a paper coauthored by the great Anton Zeilinger that speaks of “local realism” but very explicitly uses “realism” with its philosophical meaning, that there is an objective reality independent of the observer, which to me it is absurd to pretend physics in any way calls into question.

    If you read John Bell’s original paper “On the Einstein Podolsky Rosen Paradox,” he never once uses the term “realism.” The only time I have seen “real” used at all in this early discourse is in the original EPR paper, but there it was merely a “criterion” (meaning a necessary but not sufficient condition) for what would constitute a theory that is a complete description of reality. Einstein, Podolsky, and Rosen in no way presented this as a definition of “reality” or a kind of “realism.”

    Indeed, even using the term “realism” on its own is ambiguous, as there are many kinds of “realism” in the literature. The phrase “local realism” on its own is bound to lead to confusion, and it does, because, as I pointed out, even in the published literature physicists do not always use “realism” consistently. If you are going to talk about “realism,” you need to preface it to be clear what kind of realism you are specifically talking about.

    If the reason physicists started to talk about “realism” is that they are specifically referring to something that includes the EPR criterion, then they should call it “EPR realism” or something like that. Just saying “realism” is so absurdly ambiguous it is almost as if they are intentionally trying to cause confusion. I don’t really blame anyone who gets confused about this, because, like I said, even the peer-reviewed literature does not use the term consistently.

    The phrase “observer-dependence” is also very popular in the published literature. So, while I am not disagreeing with you that “observation” is just an interaction, this is actually a rather uncommon position known as relational quantum mechanics.


  • A lot of people who present quantum mechanics to a lay audience seem to intentionally present it to be as confusing as possible because they like the “mystery” behind it. Yet it is also easy to present it in a trivially simple and boring way that is easy to understand.

    Here is a simple framework of just three rules; if you keep them in mind, then literally everything in quantum mechanics makes sense and follows quite simply.

    1. Quantum mechanics is a probabilistic theory where, unlike classical probability theory, the probabilities of events can be complex-valued. For example, it is meaningful in quantum mechanics for an event to have something like a -70.7i% chance of occurring.
    2. The physical interpretation of complex-valued probabilities is that the further the probability is from zero, the more likely it is. For example, an event with a -70.7i% probability of occurring is more likely than one with a 50% probability of occurring because it is further from zero. (You can convert quantum probabilities to classical just by computing their square magnitudes, which is known as the Born rule.)
    3. If two or more events become statistically correlated with one another (this is known as “entanglement”), the rules of quantum mechanics disallow you from assigning quantum probabilities to the individual systems taken separately. You can only assign the quantum probabilities to the two or more events taken together. (The only way to recover the individual probabilities is to do something called a partial trace to compute the reduced density matrix; see the sketch after this list.)
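
    Here is a small numerical sketch of rules 2 and 3 (my own illustration, using Python and numpy): a two-qubit entangled state, its classical probabilities via the Born rule, and the partial trace showing that no quantum probabilities can be assigned to one qubit alone.

```python
# Minimal numerical sketch of rules 2 and 3 using numpy (illustrative only).
import numpy as np

# Two entangled qubits in the Bell state (|00> + |11>)/sqrt(2), written as a
# vector of four complex "quantum probabilities" (amplitudes) for 00, 01, 10, 11.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Rule 2 (Born rule): classical probabilities are the square magnitudes.
print(np.abs(psi) ** 2)          # [0.5, 0.0, 0.0, 0.5]

# Rule 3: the pair taken together has the amplitudes above, but there is no
# valid amplitude vector for either qubit alone. The best you can do is a
# partial trace, which yields a reduced density matrix for one qubit.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices: a, b, a', b'
rho_A = np.einsum('abcb->ac', rho)                    # trace out qubit B
print(rho_A)   # 0.5 * identity: a plain 50/50 mixture, no interference terms
```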

    If you keep those three principles in mind, then everything in quantum mechanics follows directly, every “paradox” is resolved, there is no confusion about anything.

    For example, why is it that people say quantum mechanics is fundamentally random? Well, because if the universe is deterministic, then all outcomes have either a 0% or 100% probability, and all other probabilities are simply due to ignorance (what is called “epistemic”). Notice how 0% and 100% have no negative or imaginary terms. They thus could not give rise to quantum effects.

    These quantum effects are interference effects. You see, if probabilities are only between 0% and 100% then they can only be cumulative. However, if they can be negative, then the probabilities of events can cancel each other out and you get no outcome at all. This is called destructive interference and is unique to quantum mechanics. Interference effects like this could not be observed in a deterministic universe because, in reality, no event could have a negative chance of occurring (because, again, in a deterministic universe, the only possible probabilities are 0% or 100%).
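
    As a toy illustration of destructive interference (my own sketch, not anything from the comment above): send a qubit through two Hadamard “beam splitter” steps. After one step both outcomes are 50/50; after the second, the amplitudes for the second outcome cancel exactly.

```python
# Tiny sketch of destructive interference (illustrative, using numpy):
# two Hadamard steps act like the two beam splitters of an interferometer.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: a 50/50 "split"
start = np.array([1, 0])                        # definite state |0>

after_one = H @ start       # amplitudes [0.707, 0.707]: 50/50 if measured here
after_two = H @ after_one   # amplitudes [1, 0]: the |1> contributions cancelled

print(np.abs(after_one) ** 2)  # [0.5, 0.5]
print(np.abs(after_two) ** 2)  # [1.0, 0.0]  <- destructive interference on |1>
```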

    If we look at the double-slit experiment, people then ask why the interference pattern seems to go away when you measure which path the photon took. Well, if you keep the rules in mind, it’s simple. There are actually two reasons, and it depends upon perspective.

    If you are the person conducting the experiment, when you measure the photon, it’s impossible to measure half a photon. It’s either there or it’s not, so 0% or 100%. You thus force it into a definite state, and again, those are deterministic probabilities (no negative or imaginary terms), so it loses its ability to interfere with itself.

    Now, let’s say you have an outside observer who doesn’t see your measurement results. For him, it’s still probabilistic since he has no idea which path it took. Yet, the whole point of a measuring device is to become statistically correlated with what you are measuring. So if we go to rule #3, the measuring device should be entangled with the particle, and so we cannot apply the quantum probabilities to the particle itself, but only to both the particle and measuring device taken together.

    Hence, from the outside observer’s perspective, only the particle and measuring device collectively could exhibit quantum interference. Yet only the particle passes through the two slits on its own, without the measuring device. Thus, he too would predict that it would not interfere with itself.
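
    A toy sketch of this (my own illustration, with made-up Gaussian path amplitudes): without which-path information the two path amplitudes add before squaring and you get fringes; with the path marked, the cross terms drop out and only the smooth sum of the two single-slit patterns remains.

```python
# Toy sketch: two-slit screen pattern with and without a which-path marker.
# psi_L and psi_R are made-up Gaussian path amplitudes (arbitrary units).
import numpy as np

x = np.linspace(-6, 6, 1201)
envelope = np.exp(-x**2 / 32)
k = 4.0                                    # arbitrary phase gradient between paths
psi_L = envelope * np.exp(+1j * k * x / 2)
psi_R = envelope * np.exp(-1j * k * x / 2)

# No which-path information: amplitudes add first, then square -> fringes.
p_unmarked = np.abs(psi_L + psi_R) ** 2

# Path marked (particle entangled with a marker in orthogonal states):
# cross terms vanish, leaving only the sum of the two single-slit patterns.
p_marked = np.abs(psi_L) ** 2 + np.abs(psi_R) ** 2

# Crude "fringe visibility" near the center: ~1 without marking, ~0 with it.
mid = slice(500, 701)
def visibility(p):
    return (p[mid].max() - p[mid].min()) / (p[mid].max() + p[mid].min())

print(round(visibility(p_unmarked), 3), round(visibility(p_marked), 3))
```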

    Just keep these three rules in mind and you basically “get” quantum mechanics. All the other fluff you hear is people attempting to make it sound more mystical than it actually is, such as by interpreting the probability distribution as a literal physical entity, or even going more bonkers and calling it a grand multiverse, and then debating over the nature of this entity they entirely made up.

    It’s literally just statistics with some slightly different rules.


  • I am saying that assigning ontological reality to something that is by definition beyond observation (not what we observe and not even possible to observe) is metaphysical. If we explain the experiment using what we observe, then there is no confusion or contradiction, or any ambiguity at all. Indeed, quantum mechanics becomes rather mechanical and boring, and all the supposed mysticism disappears.

    It is quite the opposite: the statistical behavior of the electron is decoupled from the individual electron. The individual electron just behaves randomly in a way that we can only predict statistically and not absolutely. There is no interference pattern at all for a single electron, at least not in the double-slit experiment (the Mach–Zehnder interferometer is arguably a bit more interesting). The interference pattern observed in the double-slit experiment is a weakly emergent behavior of an ensemble of electrons. You need thousands of them to actually see it.


  • What is it then? If you say it’s a wave, well, that wave is in Hilbert space, which is infinite-dimensional, not in spacetime, which is four-dimensional, so what does it mean to say the wave is “going through” the slit if it doesn’t exist in spacetime? Personally, I think all the confusion around QM stems from trying to objectify a probability distribution, which is what people do when they claim it turns into a literal wave.

    To be honest, I think it’s cheating. People are used to physics being continuous, but in quantum mechanics it is discrete. Schrodinger showed that if you take any operator and compute a derivative, you can “fill in the gaps” in between interactions, but this is just purely metaphysical. You never see these “in between” gaps. It’s just a nice little mathematical trick and nothing more. Even Schrodinger later abandoned this idea and admitted that trying to fill in the gaps between interactions just leads to confusion in his book Nature and the Greeks and Science and Humanism.

    What’s even more problematic about this viewpoint is that Schrodinger’s wave equation is a result of a very particular mathematical formalism. It is not actually needed to make correct predictions. Heisenberg had developed what is known as matrix mechanics whereby you evolve the observables themselves rather than the state vector. Every time there is an interaction, you apply a discrete change to the observables. You always get the right statistical predictions and yet you don’t need the wave function at all.

    The wave function is purely a result of a particular mathematical formalism and there is no reason to assign it ontological reality. Even then, if you have ever worked with quantum mechanics, it is quite apparent that the wave function is just a function for picking probability amplitudes from a state vector, and the state vector is merely a list of, well, probability amplitudes. Quantum mechanics is probabilistic so we assign things a list of probabilities. Treating a list of probabilities as if it has ontological existence doesn’t even make any sense, and it baffles me that it is so popular for people to do so.

    This is why Hilbert space is infinite-dimensional. If I have a single qubit, there are two possible outcomes, 0 and 1. If I have two qubits, there are four possible outcomes, 00, 01, 10, and 11. If I have three qubits, there are eight possible outcomes, 000, 001, 010, 011, 100, 101, 110, and 111. If I assigned a probability amplitude to each possible outcome, then the degrees of freedom would grow exponentially as I include more qubits in my system. The number of degrees of freedom is unbounded.

    This is exactly how Hilbert space works. Interpreting it as a physical infinite-dimensional space through which waves really propagate just makes absolutely no sense!
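
    A trivial sketch of that growth (my own illustration): the list of probability amplitudes you have to store doubles with every qubit you add.

```python
# Quick sketch of why the number of amplitudes blows up (illustrative only):
# each extra qubit doubles the length of the list of probability amplitudes.
import numpy as np

for n_qubits in range(1, 6):
    n_outcomes = 2 ** n_qubits
    state = np.zeros(n_outcomes, dtype=complex)   # one amplitude per outcome
    state[0] = 1.0                                 # e.g. all qubits in |0>
    print(n_qubits, "qubit(s) ->", n_outcomes, "amplitudes")
```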


  • It is only continuous because it is random, so prior to making a measurement, you describe it in terms of a probability distribution called the state vector. The bits 0 and 1 are discrete, but if I said it was random and asked you to describe it, you would assign it a probability between 0 and 1, and thus it suddenly becomes continuous. (Although, in quantum mechanics, probability amplitudes are complex-valued.) The continuous nature of it is really something epistemic and not ontological. We only observe qubits as either 0 or 1, with discrete values, never anything in between the two.




  • Why are you isolating a single algorithm? There are tons of them that speed up various aspects of linear algebra, not just that single one, and there have been many improvements to these algorithms since they were first introduced; there are a lot more in the literature than in the popular consciousness.

    The point is not that it will speed up every major calculation, but these are calculations that could be made use of, and there will likely even be more similar algorithms discovered if quantum computers are more commonplace. There is a whole branch of research called quantum machine learning that is centered solely around figuring out how to make use of these algorithms to provide performance benefits for machine learning algorithms.

    If they would offer speed benefits, then why wouldn’t you want the chip that offers those benefits in your phone? Of course, in practical terms we likely will not have this, due to the difficulty and expense of quantum chips and the fact that they currently have to be cooled to near absolute zero. But your argument suggests that even if consumers somehow could have access to technology in their phones that would offer performance benefits to their software, they wouldn’t want it.

    That just makes no sense to me. The issue is not that quantum computers could not offer performance benefits in theory. The issue is more about whether or not the theory can be implemented in practical engineering terms, as well as a cost-to-performance ratio. The engineering would have to be good enough to both bring the price down and make the performance benefits high enough to make it worth it.

    It is the same with GPUs. A GPU can only speed up certain problems, and it would thus be even more inefficient to try and force every calculation through the GPU. You have libraries that only call the GPU when it is needed for certain calculations. This ends up offering major performance benefits and if the price of the GPU is low enough and the performance benefits high enough to match what the consumers want, they will buy it. We also have separate AI chips now as well which are making their way into some phones. While there’s no reason at the current moment to believe we will see quantum technology shrunk small and cheap enough to show up in consumer phones, if hypothetically that was the case, I don’t see why consumers wouldn’t want it.

    I am sure clever software developers would figure out how to make use of them if they were available like that. They likely will not be available like that any time in the near future, if ever, but assuming they are, there would probably be a lot of interesting use cases for them that have not even been thought of yet. They will likely remain something largely used by businesses but in my view it will be mostly because of practical concerns. The benefits of them won’t outweigh the cost anytime soon.


  • Uh… one of the algorithms in your list is literally for speeding up linear algebra. Do you think just because it sounds technical it’s “businessy”? All modern technology is technical; that’s what technology is. It would be like someone saying, “GPUs would be useless to regular people because all they mainly do is speed up matrix multiplication. Who cares about that except for businesses?” Many of the algorithms there offer potential speedups for linear algebra operations, which are the basis of both graphics and AI, and one of the algorithms in that list is even specifically for machine learning. That’s huge for regular consumers… assuming the technology could ever progress enough to come to regular consumers.


  • A person who would state they fully understand quantum mechanics is the last person i would trust to have any understanding of it.

    I find this sentiment can devolve into quantum woo and mysticism. If you think anyone trying to tell you quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics is something that cannot be made sense of, and it then follows that people who speak in a way that does not make sense, and who have no expertise in the subject and do not even claim to make sense, are the more reliable sources.

    It’s really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate on what they have to offer. To me, the joy of a mystery is not to revel in the mystery, but to search for solutions for it, and I will say the academic literature is filled with pretty good accounts of QM these days. It’s been around for a century, a lot of ideas are very developed.

    I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are “looking,” which is simply not the case. You end up with very bizarre and misleading results from this, for example in the part where you land on the quantum moon and have to look at a picture of it so it does not disappear while your vision is obscured by fog. This makes no sense in light of real physics, because the fog is still part of the moon and your ship is still interacting with the fog, so there is no reason it should hop somewhere else.

    Now quantum science isn’t exactly philosophy, ive always been interested in philosophy but its by studying quantum mechanics, inspired by that game that i learned about the mechanic of emerging properties. I think on a video about the dual slit experiment.

    The double-slit experiment is a great example of something often misunderstood as evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe which path the particle takes through the slits, the interference pattern disappears. Yet you can also trivially prove, in a few lines of calculation, that if the particle interacts with even a single other particle as it passes through the two slits, that would also destroy the interference effects.

    You model this by computing what is called a density matrix for the particle going through the two slits together with the particle it interacts with, and then you perform what is called a partial trace, whereby you “trace out” the particle it interacts with, giving you a reduced density matrix of only the particle that passes through the two slits. You find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e. it decoheres and thus loses the ability to interfere with itself.
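
    For what it’s worth, here is a small numerical sketch of that calculation (my own illustration, using numpy): the particle alone has nonzero coherence terms; after it becomes correlated with a single other particle and that particle is traced out, the coherence terms are exactly zero.

```python
# Sketch of the calculation described above (illustrative, using numpy):
# a particle in an equal superposition of the two paths, before and after it
# becomes correlated with a single other particle that is then traced out.
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|L> + |R>)/sqrt(2)

# Alone: the density matrix has off-diagonal "coherence" terms of 0.5,
# which is exactly what allows the interference pattern.
rho_alone = np.outer(plus, plus.conj())
print(rho_alone.round(3))

# Now let a second particle pick up the which-path information:
# |L>|0> + |R>|1>, i.e. the paths become perfectly correlated with it.
joint = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_joint = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)

# Partial trace over the second particle -> reduced density matrix.
rho_reduced = np.einsum('abcb->ac', rho_joint)
print(rho_reduced.round(3))   # off-diagonals are 0: decohered, no interference
```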

    If a single particle interaction can do this, then it is not surprising it interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.

    At that point i did not yet know that emergence was already a known topic in philosophy just quantum science, because i still tried to avoid external influences but it really was the breakthrough I needed and i have gained many new insights from this knowledge since.

    Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad so sometimes external influences can be negative, but the solution to that shouldn’t be to entirely avoid reading anything at all, but to dig through the trash to find the hidden gems.

    My views when it comes to philosophy are pretty fringe, as most academics believe the human brain can transcend reality and I reject that notion; I find most philosophy falls right into place once you do. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don’t entirely refuse to engage with it. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.


  • This is why many philosophers came to criticize metaphysical logic in the 1800s, viewing it as dealing in absolutes when reality does not actually exist in absolutes, and stating that we need some other logical system that can deal with the “fuzziness” of reality more accurately. That was the origin of the notion of dialectical logic from philosophers like Hegel and Engels, which caught on with some popularity in the east but was then mostly forgotten in the west outside of some fringe sections of academia. Even long prior to Bell’s theorem, the physicist Dmitry Blokhintsev, who adhered to this dialectical materialist mode of thought, wrote a whole book on quantum mechanics, in the first part of which he discusses the need to abandon the false illusion of the rigidity and concreteness of reality, and shows how this is an illusion even in the classical sciences, where everything has uncertainty, all predictions eventually break down, and it is never actually possible to fully separate something from its environment. These kinds of views heavily influenced the contemporary physicist Carlo Rovelli as well.


  • And as any modern physicist will tell you: most of reality is indeed invisible to us. Most of the universe is seemingly comprised of an unknown substance, and filled with an unknown energy.

    How can we possibly know this unless it was made through an observation?

    Most of the universe that we can see more directly follows rules that are unintuitive and uses processes we can’t see. Not only can’t we see them, our own physics tells is it is literally impossible to measure all of them consistently.

    That’s a hidden variable theory: presuming that systems really have all these values and we just can’t measure them all consistently due to some sort of practical limitation, while still believing that they’re there. Hidden variable theories aren’t compatible with the known laws of physics. The values of observables that become indefinite simply cease to have existence at all; it is not that they are there but we can’t observe them.

    But subjective consciousness and qualia fit nowhere in our modern model of physics.

    How so? What is “consciousness”? Why do you think objects of qualia are special over any other kind of object?

    I don’t think it’s impossible to explain consciousness.

    You haven’t even established what it is you’re trying to explain or why you think there is some difficulty to explain it.

    We don’t even fully understand what the question is really asking. It sidesteps our current model of physics.

    So, you don’t even know what you’re asking but you’re sure that it’s not compatible with the currently known laws of physics?

    I don’t subscribe to Nagel’s belief that it is impossible to solve, but I do understand how the points he raises are legitimate points that illustrate how consciousness does not fit into our current scientific model of the universe.

    But how?! You are just repeating the claim over and over again when the point of my comment is that the claim itself is not justified. You have not established why there is a “hard problem” at all but just continually repeat that there is.

    If I had to choose anyone I’d say my thoughts on the subject are closest to Roger Penrose’s line of thinking, with a dash of David Chalmers.

    Meaningless.

    I think if anyone doesn’t see why consciousness is “hard” then there are two possibilities: 1) they haven’t understood the question and its scientific ramifications 2) they’re not conscious.

    You literally do not understand the topic at hand, based on your own words. Not only can you not explain why you think there is a “hard problem” at all, but you said yourself you don’t even know what question you’re asking with this problem. Turning around and claiming that everyone who doesn’t agree with you is just some ignoramus who doesn’t understand is comically ridiculous, as is further implying that people who don’t agree with you may not even be conscious.

    Seriously, that’s just f’d up. What the hell is wrong with you? Maybe you are so convinced of this bizarre notion you can’t even explain yourself because you dehumanize everyone who disagrees with you and never take into consideration other ideas.


  • This is accurate, yes. The cat in the box is conscious presumably, in my opinion of cats at least, but still can be “not an observer” from the POV of the scientist observing the experiment from outside the box.

    “Consciousness” is not relevant here at all. You can write down the wave function of a system relative to a rock if you wanted, in a comparable way as writing down the velocity of a train from the “point of view” of a rock. It is coordinate. It has nothing to do with “consciousness.” The cat would perceive a definite state of the system from its reference frame, but the person outside the box would not until they interact with it.

    QM is about quite a lot more than coordinate systems

    Obviously QM is not just coordinate systems. The coordinate nature of quantum mechanics, the relative nature of it, is merely a property of the theory and not the whole theory. But the rest of the theory does not have any relevance to “consciousness.”

    and in my opinion will make it look weird in retrospect once physics expands to a more coherent whole

    The theory is fully coherent and internally consistent. It amazes me how many people choose to deny QM and always want to rush to change it. Your philosophy should be guided by the physical sciences, not the other way around. People see QM going against their basic intuitions and their first thought is it must be incomplete and needs to have additional complexity added to it to make it fit their intuitions, rather than just questioning that maybe their basic intuitions are wrong.

    Your other comment linked to a Wikipedia page which, if you had clicked through your own source, would’ve told you that the scientific consensus on that topic is that what you’re presenting is a misinterpretation.

    A simple search on YouTube could’ve also brought up several videos explaining this to you.

    Edit: Placing my response here as an edit since I don’t care to continue this conversation so I don’t want to notify.

    Yes, that was what I said. Er, well… QM, as I understand it, doesn’t have to do anything with shifting coordinate systems per se (and in fact is still incompatible with relativity). They’re just sort of similar in that they both have to define some point of view and make everything else in the model relative to it. I’m still not sure why you brought coordinate systems into it.

    A point of view is just a colloquial term to refer to a coordinate system. They are not coordinate in the exact same way but they are both coordinate.

    My point was that communication of state to the observer in the system, or not, causes a difference in the outcome. And that from the general intuitions that drive almost all of the rest of physics, that’s weird and sort of should be impossible.

    No, it does not, and you’ve never demonstrated that.

    Sure. How is it when combined with macro-scale intuition about the way natural laws work, or with general relativity?

    We have never observed quantum effects on the scale where gravitational effects would also be observable, so such a theory, if we proposed one, would not be based on empirical evidence.

    This is very, very very much not what I am doing. What did I say that gave you the impression I was adding anything to it?

    You literally said in your own words we need to take additional things into account we currently are not. You’re now just doing a 180 and pretending you did not say what literally anyone can scroll up and see that you said.

    I am not talking about anything about retrocausality here, except maybe accidentally.

    Then you don’t understand the experiment since the only reason it is considered interesting is because if you interpret it in certain ways it seems to imply retrocausality. Literally no one has ever treated it as anything more than that. You are just making up your own wild implications from the experiment.

    I was emphasizing the second paragraph; “wave behavior can be restored by erasing or otherwise making permanently unavailable the ‘which path’ information.”

    The behavior of the system physically changes when it undergoes a physical interaction. How surprising!



  • Kastrup is entirely unconvincing because he pretends the only two schools of philosophy in the whole universe are his specific idealism and metaphysical realism, the latter of which he falsely calls “materialism.” He thus never feels the need to address anything beyond a critique of a single layman’s understanding of materialism, one which is more popular in western countries than eastern countries, while ignoring the actual wealth of philosophical literature.

    Anyone who actually reads books on philosophy will inevitably find Kastrup incredibly unconvincing, as he, by focusing primarily on a single school, never justifies many of his premises. He begins from the very start talking about “conscious experience” and whatnot when, if you’re not a metaphysical realist, that is precisely what you are supposed to be arguing for in the first place. Unless you’re already a dualist or metaphysical realist, if you belong to pretty much any other philosophical school, like contextual realism, dialectical materialism, empiriomonism, etc., you probably already view reality as inherently observable, and thus perception is just reality from a particular point of view. It then becomes invalid to add qualifiers to it like “conscious experience” or “subjective experience,” as reality itself cannot have qualifiers.

    I mean, the whole notion of “subjective experience” goes back to Nagel, who was a metaphysical realist through and through and wrote a whole paper defending that notion, “What Is It Like to Be a Bat?”, and this is what Kastrup assumes his audience already agrees with from the get-go. He never addresses any of the criticisms of metaphysical realism; he pretends they don’t exist, presents himself as its unique, sole critic, and constantly calls metaphysical realism “materialism” as if they were the same philosophy. He then builds all of his arguments off of this premise.