• 0 Posts
  • 6 Comments
Joined 10 months ago
Cake day: January 19th, 2024

  • After reading, the gist of it seems to be:

    • Vanilla far-right indoctrinated dumbo (his vision: “Reds” welcome, “Blues” not, “Anti-Blue Propaganda” on public view screens)
    • Wants exploitative capitalism on steroids with companies controlling everyone’s lives completely
    • Claims current capitalism is only bad because it’s “woke capitalism” which he claims the “ruling class” is pushing
    • Wants tech bros to butter up police and give security staff jobs to their children as a favor, i.e. intentional social classism


    In short, just another out-of-touch entrepreneur who sells snake-oil cures to people suffering in the current system, so that they may invite in the boot that stomps them down for good.


  • If you were alive (and online) during the 90s, you may remember the banter between Microsoft and General Motors:

    From https://crysa.fzu.cz/ondra/documents/cars_like_windows.html (the only online copy I could find)

    Bill Gates reportedly compared the computer industry with the auto industry and stated, “If GM had kept up with technology like the computer industry has, we would all be driving twenty-five-dollar cars that get 1,000 miles to the gallon.”

    In response to Bill’s comments, General Motors issued a press release stating: If GM had developed technology like Microsoft, we would all be driving cars with the following characteristics:

    […]

    1. The oil, water temperature, and alternator warning lights would be replaced by a single “general car error” warning light.

    2. New seats would force everyone to have the same size butt.

    3. The airbag system would ask “Are You Sure?” before going off.

    4. Occasionally, for no reason whatsoever, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key, and grabbed a hold of the radio antenna.

    30 years later, some of those jokes are finally becoming reality, thanks to Tesla.


  • A perfect demonstration of how Russian indoctrination works right here.

    Original reporting: a major disinfo attack against Europe, being prepared by Russia, is uncovered through diligent investigation, published, and reported on.

    The response:

      1. Divert to farmers’ dissatisfaction with several policies.
      2. Cast the disinfo reports as underhanded attempts (by the politician Russia wants gone) to arrogantly brush off farmers’ concerns (which the report never even related to).
      3. Claim Macron is selling out to the EU (here, have a serving of anti-EU sentiment, too).
      4. Vaccinate the reader against the disinfo being countered (“everyone who tells you otherwise belittles you and hates you; join us in our righteous anger”).

    Emotional framing:

    • nationalists, agricultural owner-operators, and farmers exposed to rising interest rates
    • “truckloads of exported Ukrainian agricultural salvage” vs. “fresh French produce”
    • we’re getting an earful about how all these local yokels are hoodwinked by anti-EU Russian propaganda
    • Macron for selling out the ag sector to financial interests in Brussels
    • “If you’re not in favor of (insert supposed evil acts described in lurid way), then you’re a secret spy for Putin and a traitor.”

    Result: the reader comes out the other end an angry person: outraged about the plight of farmers, outraged again at disinfo reports supposedly serving to silence them, outraged once more at a French politician selling them out to the EU, with the EU painted as a high-and-mighty villain, and primed with automatic anger, ready to trigger, against anyone who offers a different viewpoint.


  • I agree that a lot of human behavior (on the micro as well as macro level) is just following learned patterns. On the other hand, I also think we’re far ahead - for now - in that we (can) have a meta context - a goal and an awareness of our own intent.

    For example, when we solve a math problem, we don’t just let intuitive patterns run and blurt out numbers; we know that this is a rigid, deterministic discipline that needs to be followed. We observe and guide our own thought processes.

    That requires at least a recurrent network and, at higher levels, some form of self-awareness. And any LLM is, when it runs (rather than being trained), completely static and feed-forward: it gets a few thousand tokens (some 2,000 for the original GPT-3, or 128,000 as of GPT-4 Turbo) fed to its input synapses, each neuron layer gets to fire exactly once, and the final neuron layer then contains the likelihoods for each possible next token.
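
    A minimal sketch of that single feed-forward pass, assuming the Hugging Face transformers library with GPT-2 as a small stand-in model (the prompt and top-5 printout are illustrative only):

    ```python
    # One static forward pass: prompt tokens in, next-token likelihoods out.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()  # inference only: weights are frozen, nothing is learned

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():  # no recurrence, no persistent state
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Only the last position predicts the next token: the "likelihood
    # for each possible next word" described above.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
    ```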


    Is this a case of “here, LLM trained on millions of lines of text from Cold War novels, fictional alien invasions, nuclear apocalypses, and the like, please assume there is a tense diplomatic situation and write the next actions taken by either party”?

    But it’s good that the researchers made explicit what should be clear: these LLMs aren’t thinking/reasoning “AI” being consulted; they just serve up a remix of likely sentences that might reasonably follow the gist of the provided prior text (the “context”). A corrupted hive mind of fiction authors, echoing the actions that served their ends of telling a story.

    That being said, I could imagine /some/ use if an LLM were trained/retrained exclusively on verified information describing real actions and outcomes in 20th-century military history. It could serve as a brainstorming aid, pointing out possible actions, or possible responses by the opponent, that decision makers might not have thought of (a rough sketch of that use follows below).
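
    A hypothetical sketch of that brainstorming use, again assuming the Hugging Face transformers library; GPT-2 here merely stands in for the imagined model retrained on verified military history, and the scenario text is made up:

    ```python
    # Sample several continuations of a scenario prompt; each one is just a
    # plausible remix of the training text, not reasoned analysis.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    context = (
        "Country A has massed troops on the border of Country B. "
        "Diplomatic channels remain open. Possible next actions for Country B:"
    )
    inputs = tokenizer(context, return_tensors="pt")

    outputs = model.generate(
        **inputs,
        do_sample=True,          # sampling gives varied suggestions
        temperature=0.9,
        max_new_tokens=40,
        num_return_sequences=3,  # three brainstorming candidates
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    for seq in outputs:
        print(tokenizer.decode(seq[prompt_len:], skip_special_tokens=True))
        print("---")
    ```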