Dad, architectural designer, former SMB sysadmin and still-current home-labber, sometimes sim-racing modder, enthusiastic everything-hobbyist. he/him.

  • 0 Posts
  • 38 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • It’s not strictly true that it didn’t mean anything, but I would say that it consisted of a couple of weakly-defined and often mutually incompatible visions of what it could be.

    Meta thought they could sell people on the idea of spending hundreds of dollars on specialized hardware to allow them to do real life things, but in a shitty Miiverse alternate reality where every activity was monetized to help Zuck buy the rest of the Hawaiian archipelago for himself.

    Cryptobros thought the Metaverse was going to be a decentralized hyper-capitalist utopia where they could live their best lives driving digital Lambos and banging their harem of fawning VR catgirl hotties after they all made their billions selling links to JPEGs of cartoon monkeys to each other.

    Everybody else conflated the decentralized part of the cryptobros’ vision with the microtransactionalized walled garden of Meta’s implementation, and then either saw dollar signs and scrambled to get a grift going, or ran off to write think pieces about a wholly-imaginary utopia or dystopia they saw arising from that unholy amalgamation.

    In reality, Meta couldn’t offer a compelling alternative to real life, and the cryptobros didn’t have the funds or talent to actually make their Snow Crash fever dream a reality, so for now the VR future remains firmly the domain of VRChat enthusiasts, hardcore flight simmers, and niche technical applications.



  • Energy is only “available” when there is a region of higher energy density and a region of lower energy density, that you can extract work from by allowing that energy to flow from the former to the latter until they are equalized, at which point no further energy can be extracted from that system.

    In the case of air conditioning, you can make heat flow “uphill”, so to speak, by applying additional energy from outside of the inside air / outside air system, usually in the form of electricity generated at a power plant. In the very large picture, though, it’s all just moving energy around between regions of higher and lower densities, losing usable energy with each transfer. That’s what entropy means.
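    The reservoir picture above can be sketched numerically. This is a minimal illustration (the temperatures, heat quantity, and function names are invented for the example): heat flowing "downhill" from hot to cold always increases total entropy, and only a fraction of that heat, the Carnot limit, can ever be extracted as work.

```python
# Illustrative sketch of two-reservoir thermodynamics (numbers are made up).

def entropy_change(q, t_hot, t_cold):
    """Net entropy change (J/K) when heat q (J) flows from a reservoir
    at t_hot to one at t_cold (temperatures in kelvin)."""
    return q / t_cold - q / t_hot

def max_work(q, t_hot, t_cold):
    """Carnot limit on the work extractable from heat q flowing
    from the hot reservoir to the cold one."""
    return q * (1 - t_cold / t_hot)

q = 1000.0  # joules of heat moved
print(entropy_change(q, 300.0, 280.0))  # positive: entropy increased
print(max_work(q, 300.0, 280.0))        # only ~67 J of the 1000 J is usable
print(max_work(q, 290.0, 290.0))        # 0.0: equal temperatures, no work
```

    Running the flow the other way, as an air conditioner does, costs at least that same Carnot amount of work as input, which is why the electricity from the power plant is needed.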

    Veritasium did a really good video on this idea a couple months ago, if you’re interested: https://youtu.be/DxL2HoqLbyA?si=bru50t1VYEKXKmKX





  • Along the lines of @AnonStoleMyPants – the trouble with longtermism and effective altruism generally is that, unlike more established religions, it’s become in vogue specifically amongst the billionaire class, precisely because it’s essentially just a permission structure for them to hoard stacks of cash and prioritize the hypothetical needs of their preferred utopian vision of the future over the actual needs of the present. Religions tend to have a mechanism (tithing, zakat, mitzvah, dana, etc.) for redistributing wealth from the well-off members of the faith towards the needy in an immediate way. Said mechanism may often be suborned by the religious elite or unenforced by some sects, but at least it’s there.

    Unlike those religions, effective altruism specifically encourages wealthy people to keep their wealth to themselves, so that they can use their billionaire galaxy brains to more effectively direct that capital towards long-term good. If, as they see it, Mars colonies tomorrow will help more people than healthcare or UBI or solar farms will today, then they have not just a desire, but a moral obligation to spend their money designing Mars rockets instead of paying more taxes or building green infrastructure. And if having a longtermist in charge of said Mars colony will more effectively safeguard the future of those colonists, then by golly, they have a moral obligation to become the autocratic monarch of Mars! All the dirty poors desperate for help today aren’t worth the resources relative to the net good possible by securing that utopian future they imagine.



  • as a counterpoint, when the use-case for the tool is specifically “I want a picture that looks like it was painted by Greg Rutkowski, but I don’t want to pay Greg Rutkowski to paint it for me” that sounds like the sort of scenario that copyright was specifically envisioned to protect against – and if it doesn’t protect against that, it’s arguably an oversight in need of correction. It’s in AI makers and users’ interest to proactively self-regulate on this front, because if they don’t somebody like Disney is going to wade into this at some point with expensive lobbyists, and dictate the law to their own benefit.

    That said, it’s working artists like Rutkowski, or friends of mine who scrape together a living off commissioned pieces, that I am most concerned for. Fantasy art like Greg makes, or personal character portraits of the sort you find on character sheets of long-running DnD games or as avatar images on forums like this one, make up the bread and butter of many small-time artists’ work, and those commissions are the ones most endangered by the current state of the art in generative AI. It’s great for would-be patrons that the cost of commissioning a mood piece for a campaign setting or a portrait of their fursona has suddenly dropped to basically zero, but it sucks for artists that their lunch is being eaten by an AI algorithm that was trained by slurping up all their work without compensation or even credit. For as long as artists need to get paid for their work in order to live, that’s inherently anti-worker.


  • M1 gets most of its performance-per-watt advantage by running much farther down the voltage curve than Intel or AMD usually tune their silicon for, and by having a really wide core design to take advantage of the extra instruction-level parallelism that can be extracted from the ARM instruction set relative to x86. It’s a great design, but the relatively minor gains from M1 to M2 suggest that there’s not much more optimization available in the architecture, and the x86 manufacturers have been able to close a big chunk of the gap in their own subsequent products by increasing their own IPC with things like extra cache and better branch prediction, while also ramping down power targets to put their competing thin-and-light laptop parts in better parts of the power curve, where they’re not hitting diminishing returns.

    The really dismal truth of the matter is that semiconductor fabrication is reaching a point of maturity in its development, and there aren’t any more huge gains to be made in transistor density in silicon. ASML is pouring Herculean effort into reducing feature sizes at a much lower rate than in years past, and each step forward increases cost and complexity by eyewatering amounts. We’re reaching the physical limits of silicon now, and if there’s going to be another big, sustained leap forward in performance, efficiency, or density, it’s probably going to have to come in the form of a new semiconductor material with more advantageous quantum behavior.






  • Speculations indicate that Navi 3.5 might enable integrated graphics with performance comparable to an Nvidia RTX 3070.

    Uh huh. Given that the Radeon 780M that represents the current state of the art in Zen4 iGPUs is still trailing a discrete 3050 (by no means a strong performer itself) by about 30% on average, this seems wildly optimistic. Don’t get me wrong, I would love a beastly iGPU, but this seems less like informed speculation and more like fanboy hype.



  • I’m about 99% certain that the image in the article is some AI-generated nightmare fuel. There’s a link to the actual paper at the bottom of the article, and it has this figure showing a few example organoids, which are ~10mm across and look a bit like white mushrooms.

    The ethical dilemma posed by a brain in a petri dish is an interesting hypothetical, but probably not one worth worrying about at this point. There’s less brain tissue here than in the average lab mouse, with no sensory inputs and little differentiation relative to a real human brain. The neurons in the organoids are probably able to do as neurons do individually, but they lack the structure or infrastructure required for them to have basic awareness, let alone consciousness.

    Organoids like these can be useful for in-vitro study of brain tissue without the ethical troubles of rooting around in somebody’s head, but that’s about it. We’re a very long way from growing a brain-in-a-jar and hooking it up to The Matrix.



  • I don’t know if there are many major outlets that are primarily investigative in the era of the 24/7 news cycle and the accompanying need to always have something fresh on the front page, but at least in the English-speaking world the various newspapers of record (think places like the New York Times or The Guardian) still have decent newsrooms and publish original investigative pieces. In audio formats, NPR and the various constellations of associated organizations like the Center for Investigative Reporting do excellent work as well. There are also organizations like Bellingcat that specialize in deep-dive investigations using open-source intelligence, presented in a “just-the-facts” format without editorialization.


  • Because a significant chunk of what gets passed off as journalism on such sites is just writing copy – for example, regurgitating press releases, or repackaging the work of another outlet that actually did do the legwork of investigative journalism. I don’t think there’s anything inherently wrong with using AI tools to speed up the task of summarizing some other text for republishing, but I do question the value of such work in the first place.

    It’s going to be a long, long time until artificial intelligence can do the work of a true investigative journalist.