• 0 Posts
  • 79 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • That’s false. You can not only feel heat from, but literally set things on fire with, a completely monochromatic green laser whose wavelength sits exactly in the middle of the visible spectrum. No infrared, no ultraviolet, lots of heat transfer. You could do it with an ultraviolet laser too, if you were careful enough and could get around ultraviolet’s tendency to destroy molecular bonds outright before they even have a chance to burn chemically. It’s not just lasers, either: any light source deposits energy, in the form of heat, on anything the light touches. Light carries energy, some of that energy gets absorbed by anything the light interacts with, and that’s true whether it’s infrared, ultraviolet, somewhere in between, or all of the above.

    Infrared has a special relationship with heat, yes, because of the distribution of blackbody radiation, but “No” is absolutely the wrong answer here. The right answer is “Yes, but… it’s complicated”.
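    For a sense of scale, here’s a back-of-envelope sketch (my numbers, not the comment’s): every absorbed photon becomes heat regardless of wavelength, and even a modest 1 W green beam delivers an enormous number of photons per second.

```python
# Photon energy E = h*c/lambda for a 532 nm green laser.
# Constants are standard physics values, nothing from the comment.
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 532e-9  # a common green laser line, metres

photon_energy = h * c / wavelength        # joules per photon, ~3.7e-19 J
photons_per_second = 1.0 / photon_energy  # photon rate of a 1 W beam

print(photon_energy, photons_per_second)
```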



  • Oh absolutely. Smart TVs are completely under the control of the technology and media companies, with very little hope of freeing them, except that you can still plug a computer into one to bypass all the “smart” features and use it as a dumb screen driven by a smart computer instead. But every year they seem to put a few new stumbling blocks in the way of both those options. That loophole will eventually get closed. It won’t happen overnight, but they will keep eroding the functionality and convenience of doing so until few if any people still bother.

    Cars are nearly a lost cause too, except where regulations say they must use some standard like OBD2 for “emissions reasons”, although that is obviously a limited scope and manufacturers try to find any way they can to sabotage or otherwise avoid it. Appliances and “smart homes”, all the way down to the light bulbs and LEDs, are full of proprietary, locked-down, unrepairable technology too, despite reliable open standards being available. The war for total control over our digital devices is in full swing, and there’s no area of our lives, large or small, that isn’t a battleground. People need to keep prioritizing the freedom of their devices, because once these technologies and features get entrenched, they’re going to be very hard to work around.


  • I mean, they did it with phones too. Android is just Linux. That was one of the main attractions, for me at least.

    At first, many people and groups supplied their own phone OSes; there was a whole thriving community ecosystem. Then manufacturers started making it really hard: locking bootloaders and shipping critical pieces of hardware that didn’t or couldn’t have open source drivers (look up WinModems for a very early example of this technique; it remains really effective), or that required extremely convoluted methods to access. The phone might only function marginally without some of those pieces fully working, but at least you could still install a custom ROM if you were stubborn enough.

    But even that wouldn’t last. Nowadays they’ve made it practically impossible to defeat the security on most phones, in the name of keeping hackers and criminals out. But a big part of their motivation is really blocking these “pirate” OSes that let you actually control the hardware and software in your phone, doing criminally nefarious things like stopping it from downloading ads (the horror!), preventing it from funneling all your data and activities back to Big Brother (how rude!), and, worst of all, updating it with modern functionality after they’ve declared it “obsolete”. The goal going forward is to sell you things you don’t and can’t control, so they can shut them down or make them gradually more and more useless and make you buy new ones forever. They want you to have a subscription for everything, including physical objects, without realizing you’ve been forced onto their regularly-scheduled-disposable-device-replacement plan for no actual reason.

    They’re coming for computers too, or at least they’ll try. They want control of everything we interact with. For profit, mostly, but I wouldn’t rule out other motives. It’s a powerful thing when you have control of everything people see and do.


  • Bernoulli’s explanation and Newton’s explanation are the same explanation viewed from different frames of reference. They’re equivalent; I don’t understand why people insist that one or the other is incomplete, or that they somehow make separate contributions to an airplane’s flight. The airplane flies because the air pushes it up, converting some of the energy of its substantial forward motion through that air into enough upward force to counteract gravity. That happens due to the pressure differential AND the deflection of air in exactly the same measure; they are directly linked and have to be equal.

    Bernoulli’s explanation is one particularly nuanced and clever way of understanding the exact mechanics of how that happens, and if you plug the resulting values into Newton’s math it matches perfectly. (The zero angle of attack for a cambered airfoil is actually defined this way, not by measuring the angles of the physical surfaces.) The Newtonian explanation is just another way of looking at it. Either way it takes intense computation to get exact numbers, but the numbers are the same.

    The pressure differential of the air IS the mechanical force of the air, arising equal and opposite to the deflection of the volume of air the plane is flying through; either one is what we call lift. They’re the same thing, happening at the same time, and looking at them from different perspectives doesn’t make one perspective wrong and the other right. Both accurately describe the same phenomenon. It’s useful to know both, but not necessary, and knowing only one doesn’t make either of them incorrect.
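    One way to see the equivalence, as a sketch of the standard control-volume momentum balance (my formulation, not anything specific from the thread): the same lift L shows up both as the pressure integral over the wing surface and as the rate of downward momentum imparted to the air, and the momentum theorem forces the two to agree.

```latex
% Lift from the pressure picture (surface integral over the wing)
% and from the Newtonian picture (rate of downward momentum given
% to the air); the momentum theorem makes them equal.
L \;=\; \oint_{\text{wing}} \left(p_{\infty} - p\right)\,\hat{n}\cdot\hat{y}\,\mathrm{d}S
  \;=\; \dot{m}\,\Delta v_{y}
```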

    This discussion always reminds me of the “airplane on a treadmill” argument where both sides read the premise differently and scream at each other that only their way of interpreting the question is right.






  • It’s not difficult, it’s just expensive and energy intensive. Frankly, boiling water is both cheaper and easier; we’ve had lots of experience doing it in massive quantities since the steam age, it works great, and it gives off steam or hot water that can be used for plenty of other things, like heating and even power generation. Ice is almost useless in comparison.

    As for why you can’t freeze salt into ice: they don’t mix. It’s like trying to mix oil and water. Technically, if you got the ice really, really cold and mashed it up with some equally cold salt, you could make some kind of mixture of ice and salt, and maybe even compress it together until it forms a solid mass again, but it wouldn’t be saltwater ice, just salt and ice mixed together like oil and water. They may appear mixed, but they don’t dissolve into each other. Ice’s crystal structure has nowhere for the salt to go, and salt’s crystal structure has nowhere for the water to go; they’re not compatible in any way.
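    A related back-of-envelope: while the salt is still dissolved in liquid water it depresses the freezing point, which is why seawater only freezes around −2 °C before the growing ice starts rejecting the salt as brine. A rough ideal-solution estimate (typical seawater salinity assumed; these numbers are mine, not the comment’s):

```python
# Ideal-solution freezing-point depression: delta_T = i * K_f * m
K_f = 1.86               # cryoscopic constant of water, K*kg/mol
molality = 35.0 / 58.44  # ~35 g NaCl per kg water, converted to mol/kg
i = 2                    # van 't Hoff factor: NaCl dissociates to Na+ and Cl-

delta_T = i * K_f * molality
print(delta_T)  # roughly 2.2 K below 0 degC
```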


  • Matrix and its implementations like Synapse have a very intimidating architecture (I’d go as far as to call most of the implementations somewhat overengineered), and the documentation ranges from inconsistent to horrific. I ran into this exact situation myself. Fortunately, for this particular step you’re overthinking it: you can use any random string you want. It doesn’t even have to be random, as long as what you put in the config file matches what you use later. It’s basically just a temporary admin password.
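    Assuming this is Synapse’s `registration_shared_secret` (my reading of the situation, not something stated above), a minimal sketch of generating one; any string works as long as the config file and the registration command agree:

```python
import secrets

# Any sufficiently unguessable string is fine; it only has to match what
# you later feed to Synapse's register_new_matrix_user tool.
secret = secrets.token_hex(32)
print(secret)

# homeserver.yaml (hypothetical placement):
#   registration_shared_secret: "<the string printed above>"
```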

    Matrix was by far the worst thing I’ve ever tried to self-host. It’s a hot mess. Good luck, I think you’re close to the finish line.


  • Most cheap non-dimmable LEDs have drivers that use resistors to set the current driven through the LEDs. As a rule, these are set too high, overdriving the LEDs (sometimes at as much as twice their rated current) for marginal brightness gains and to burn the bulb out prematurely. I’m obviously unable to see directly into the great minds that design LED lightbulbs, but logic leaves me with only those two plausible conclusions; I’ll let you decide which motivation you think is the bigger factor for most manufacturers.

    Conveniently, most manufacturers carefully fine-tune this value to destroy the LEDs at just the right time, which requires careful balancing of resistors. Even MORE conveniently (for us), the cheapest way for them to do this is typically to use two resistors. And MOST conveniently (for us), if you were to carelessly break one of that pair and leave the other intact, the current would immediately drop to a very reasonable and appropriate level: generating much less heat, drawing much less power, making LED death extremely unlikely, and in many cases only modestly reducing brightness, because LED brightness is non-linear and heavily overdriven LEDs are typically FAR past the point of diminishing returns. In some cases the reduction in power makes basically no visible difference in light output. You could argue they’re literally stealing extra power from your electricity bill and using it as an electric heater for no purpose other than burning out your own light bulbs prematurely so you have to replace them.

    The good news is, like I said, removing one of the responsible resistors instantly solves the design flaw, and it’s usually quite easy even without special tools or electronics knowledge. BigCliveDotCom calls this “Doobying” the bulbs, after the Dubai bulbs mentioned in other comments. If you watch some of his videos about LED bulbs you should be able to see the pattern of which resistors to remove; when they’re on the board, they will basically always be right next to each other, with relatively small values (typically in the 20 ohm to 200 ohm range). The only modification I make to his procedure is that I prefer to remove the HIGHER value of the two resistors instead of the lower one. That gives somewhat less lifetime preservation (still much more than the original setting) and less power savings, but more brightness, which is usually good enough for my purposes. I also use sturdy tweezers to remove the resistor instead of a screwdriver, which seems to me to carry a higher risk of collateral damage.
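    To put numbers on it (all values hypothetical, assuming a typical linear driver that regulates a fixed reference voltage across the parallel sense resistors):

```python
# Linear LED drivers commonly hold a fixed reference voltage across a
# sense resistor, so LED current = V_ref / R_sense. Two resistors in
# parallel give a lower R_sense (more current) than either one alone.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

v_ref = 0.6                  # hypothetical sense reference, volts
r_low, r_high = 47.0, 100.0  # hypothetical parallel pair, ohms

i_stock = v_ref / parallel(r_low, r_high)  # overdriven, as shipped
i_no_high = v_ref / r_low  # remove the HIGHER value: modest drop, brighter
i_no_low = v_ref / r_high  # remove the lower value: biggest drop, coolest

print(i_stock, i_no_high, i_no_low)  # ~0.019, ~0.013, 0.006 amps
```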

    Is it a lot of work for a single light bulb? Kind of, yes. But once you’ve done it a bunch of times, you’ll rarely have to do it again, because these bulbs last almost forever. In fact, I have yet to have one actually fail; I’m mostly just replacing the occasional old unmodified LED bulb from time to time.

    This will not work with dimmable bulbs or certain fancy high-end bulbs. Also, some are much, much easier to modify than others. Clive calls the relatively easy ones “hackable”, and it’s really a crapshoot to find them. Some have covers/bulbs/diffusers that are nearly impossible to remove without catastrophic damage to the bulb and/or your hands. Others simply use a different circuit design that doesn’t have resistors. Some have only a single resistor, meaning you’d need to solder a new one in its place to change the value. In my experience, the bargain-basement, junkiest, least reliable bulbs tend to be the easiest to hack this way, and they often skimp on things like gluing the lens on, so it’s easy to get off. But you’ll have to experiment to find a brand and style that works well for this.



  • While it sounds a bit hacky, I think this is an underrated solution. It’s actually quite a clever way to bypass the whole problem. Physics is your enemy here, not economics.

    This is kind of like trying to find an electric motor with the highest efficiency and torque at 1 RPM. While that’s not theoretically impossible, it’s not just a matter of price or design: you’re asking the equipment to do something it’s fundamentally not good at, and asking it to do that really well. It can’t, certainly not affordably or without significant compromises in other areas. With a motor, you’re better off letting it spin at its much higher optimal RPM and gearing it down; even though there’s a little loss in the geartrain, it’s a much better solution overall, and that’s why essentially every low-speed motor is designed this way.

    In the case of an ammeter, it seems totally reasonable to bring it up to a more ideal operating range by adding a constant artificial load. In fact, high-precision/low-range multimeters and oscilloscopes usually do almost exactly the same thing internally with their probes, just in a somewhat more complex way behind the scenes.
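    The trick reduced to arithmetic (a trivial sketch with made-up numbers): park the meter in its comfortable range with a known constant load, then subtract that constant from every reading.

```python
i_signal = 0.003  # 3 mA: too small for the meter to resolve well (made up)
i_bias = 1.000    # known constant artificial load, amps (made up)

meter_reading = i_signal + i_bias   # what the ammeter actually measures
recovered = meter_reading - i_bias  # subtract the known offset afterwards
print(recovered)
```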


  • The end result is exactly the same.

    The difference is that you can install from the full iso on a computer without an internet connection. The normal iso contains copies of most or all relevant packages; they may not all be the latest versions, but the bulk is enough to get you started. The net install, as the name suggests, requires an internet connection to download packages for anything beyond the most minimal, bare-bones configuration. That connection will hopefully be about as fast as reading from the iso, if not faster, and is guaranteed to have the latest updates, which the iso may not. But while a fast connection is usually taken for granted nowadays, it isn’t always available or convenient, and some hardware has network problems that are difficult to resolve before a full system is installed, or that require specialized configuration or diagnostic tools only available as packages.

    In almost all cases, the netinst works great and is the more efficient and sensible way to install. If it doesn’t work well in your particular situation, though, the iso is more reliable, at the cost of some redundancy that wastes disk space and time.

    Things like Windows updates and some large, complex software packages come in similar “web” and “offline” installers that make the same distinction for the same reasons. The tradeoff is the same, and both options have valid use cases.


  • To be fair, in the case of something like a Linux ISO, you are only a tiny fraction of the target, or may not need to be the target at all to become collateral damage. You only need to be worth $1 to the attacker if there are 99,999 other people downloading it too, or if there’s one other guy worth $99,999; and you don’t need to be worth anything if the organization they’re actually targeting is worth $10 million. Obviously there are other challenges involved in attacking a torrent swarm, like the fact that you’re unlikely to have a sole seeder with corrupted checksums, and a naive implementation will almost certainly produce a corrupted file instead of a working attack. But for someone with the resources and motivation to plan something like this, it could get dangerous pretty quickly.

    Supply chain attacks are increasingly becoming a serious risk, and we do need to start looking at upgrading security on things like the checksums we’re using to harden them against attackers, who are realizing that this can be a very effective and relatively cheap way to widely distribute malware.
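    On the checksum side, a minimal verification sketch (generic hashlib usage, not any distro’s actual tooling). Note that the expected hash must itself come from a channel the attacker can’t tamper with, such as a GPG-signed checksum file:

```python
import hashlib
import os
import tempfile

def verify_sha256(path, expected_hex, chunk_size=1 << 20):
    # Stream the file so a multi-gigabyte ISO never has to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Stand-in demo file; a real check would use the downloaded ISO and the
# hash published in a signed SHA256SUMS file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello\n")
ok = verify_sha256(
    tmp.name,
    "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03",
)
print(ok)  # True
os.unlink(tmp.name)
```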


  • I still use Nextcloud for syncing documents and other relatively simple stuff. But as more and more got added, I started getting glacial sync times, heavy CPU usage, and lots of conflicts. For higher-performance, more demanding sync tasks involving huge numbers of files, large file sizes, and rapid changes, I’ve switched to Syncthing and am much, much happier with it. Nextcloud sync seems to be a jack of all trades, master of none; Syncthing is a one-trick pony that does its trick very, very well.


  • I feel like you are the one confusing a “NAS device” or “NAS appliance” (a device specifically designed and primarily intended to provide NAS service, i.e. its main attribute is large disks, with little design weight given to processing, RAM, or other components beyond what’s needed to serve storage) with the NAS service itself, which can be provided by any generic device capable of both storage and networking, although often quite poorly.

    You are asserting that “NAS” in this thread refers exclusively to the former, the device/appliance; everyone else is assuming the latter. In fact, both are correct, and context suggests the latter. I’m sure, given your behavior in this thread, you’ll promptly reply that only your interpretation is correct and everyone else is wrong. If you want to assert that, go right ahead and make yourself look foolish.