• 1 Post
  • 291 Comments
Joined 5 months ago
Cake day: June 30th, 2024

  • It’s the before times, analog days, and the Internet was in its infancy. Stephen Hawking, a theoretical physicist, cosmologist, and author, said the following:

    For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.

    Computers have been very effective when applied to vehicles. In my life I’ve seen the advent of the aluminum block, anti-lock brakes and stability control, variable ignition and valve timing, more aerodynamic bodies, paddle-shift and continuously variable transmissions, drive-by-wire, and now even hybrid and electric drives. This has allowed leaps forward in safety, efficiency, and performance.

    Then, we enshittified. Today there’s barely any choice in the vehicle market. Toyota/Honda; Hyundai/Kia; Ford/Chevy/Chrysler; and a trim package defines everything but trucks. Half-ton trucks, sold as symbols of identity, break repeatedly if regularly used for payload and towing. “Choice” is a 1/4-ton Ranger, a 1/2-ton Chevy diesel, or a 3/4-ton Ford/Chevy/Ram. They didn’t make the first two for decades, and they’re still scarce and expensive for what they are. And, for all vehicles, one now often needs to remove inaccessible bolts in tight spaces, on several parts, just to reach the part that’s broken.

    Profit optimization through technology is why there’s little choice in vehicles; why you can envision a Walmart-and-Lowe’s strip mall and every American knows exactly what it looks like and where the closest couple of copies are; why we can’t replace phone batteries and screens. An out-of-the-box idea from AI that’s also conveniently practical for humans will probably cure cancer. AI is also what’s analyzing all the data being collected, just as inhumanely. The vehicle manufacturers want their cut.

    Did Herbert envision that the spice of prescience was computational cycles?








  • The US. Psychiatrists can Rx. Psychologists cannot. But they’ve always got a psychiatrist in their back pocket who’ll do whatever for a quick buck. If they label themselves “therapist,” they’re idiots who couldn’t even hack an MA.

    My experiences have clearly demonstrated that anyone accepting money will inhibit progress or keep one forever dependent upon pharmaceuticals. Those who provide the best help never asked me for anything in return.

    Same goes for education.





  • …who constantly says: “I agree with you in the goal you seek, but I cannot agree with your methods of direct action”; who paternalistically believes he can set the timetable for another man’s freedom; who lives by a mythical concept of time and who constantly advises the Negro to wait for a “more convenient season.”

    I think you’re MLK’s “white moderate”: our greatest stumbling block in our stride towards freedom.







  • I feel like it will get to the point where AI will start writing code that works but nobody can understand or maintain including AI

    Already there, and have been for a while. In my work we often don’t understand how the AI itself works. We independently test for accuracy. Then we begin trusting results without verification. But at no time do we really understand the logic of how the AI gets from input to output.

    If you are able to explain the requirements to an AI so fully that the AI can do it correctly it would have taken shorter time to program by yourself.

    This makes sense for a one-time job. But it doesn’t make sense when there are a hundred jobs with only minor differences. For example, the AI writes a hundred AIs, and we kill all but the three to five best models. A rough sketch of that generate-and-select loop is below.
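    To make that concrete, here’s a minimal sketch of the generate-and-select idea (the names and the toy “model” here are hypothetical stand-ins; real training and scoring would obviously be far heavier): build a hundred candidates from the same spec, score each on a held-out test set, and keep only the top handful.

        import random

        def train_candidate(seed):
            # Stand-in for "the AI writes an AI": one candidate built from the
            # same spec, varying only a seed / minor settings.
            rng = random.Random(seed)
            threshold = rng.uniform(0.0, 1.0)
            return lambda x: x > threshold

        def score(model, test_set):
            # Independent accuracy check on held-out data the generator never saw.
            correct = sum(1 for x, label in test_set if model(x) == label)
            return correct / len(test_set)

        # Held-out test set: (input, expected_label) pairs.
        test_set = [(x / 100, x / 100 > 0.5) for x in range(100)]

        # Generate a hundred candidates, then kill all but the best five.
        candidates = [train_candidate(s) for s in range(100)]
        ranked = sorted(candidates, key=lambda m: score(m, test_set), reverse=True)
        survivors = ranked[:5]

        print([round(score(m, test_set), 2) for m in survivors])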


  • education about CS/responsible use of technology

    The vast majority of what’s been suggested in the OP and comments focuses on the technical: CS and IT. But no one has focused on the “responsible use of technology” part. I’d like to see a course focused on the morality and ethics of usage.

    Examples of possible classroom topics:

    1. Is it moral and ethical to spread disinformation as a means to a “good” end? Is it acceptable to spread truth if the consequences are likely “bad”?

    2. Is it moral and ethical to use generative AI to effectively libel/slander a political opponent? Does the analysis change if it’s used for advertising?

    3. Is it moral and ethical to pirate media? Does it depend on what’s being pirated? Does it depend on why it’s being pirated?

    The "problems with such a course:

    1. It’d require prerequisites in basic philosophy/logic and basic CS/IT. It could be a lot of material to cover. Course construction and presentation need to be focused, rooted in experience, and likely a passion project.

    2. The audience may be too young to think in these terms. A little experience goes a long way toward understanding these topics well enough to have a good-faith classroom discussion. I don’t intend ageism; in fact, the opposite. I think today’s youth are more capable than I was at that age. Make it known that the course is “hard”; those who choose it will excel.