The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 2 Posts
  • 154 Comments
Joined 6 months ago
Cake day: January 12th, 2024

  • I thought about this a while ago. My conclusion was that the simplest way to handle this would be to copy multireddits, and expand upon them.

    Here’s how I see it working.

    Users can create as many multis (multi-communities) as they want. What goes within a multi is up to the user; for example, if you want to create a “myfavs” multi with !potatoism, !illegallysmolcats and !anime_art, you do you.

    The multi owner can:

    1. edit it - change name, add/remove comms to/from the multi
    2. make the multi public or private
    3. use the multi as their feed, instead of Subscribed/Local/All
    4. use the multi to bulk subscribe, unsub, or block comms

    By default a multi would be private, and available only for the user creating it. However, you can make it public if you want; this would create a link for that multi, available for everyone checking your profile. (Or you could share it directly.)

    You can use someone else’s public multi as your feed or to bulk subscribe/unsub/block comms. You can also “fork” (copy) it; that would create an identical multi associated with your profile, which you can then edit.
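    The feature set above could be modeled roughly like this. A minimal sketch, with all names and the link format being my own illustration, not Lemmy’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Multi:
    """A user-owned collection of communities (illustrative only)."""
    name: str
    owner: str
    communities: set = field(default_factory=set)
    public: bool = False  # private by default

    def edit(self, add=(), remove=()):
        # owner can add/remove comms to/from the multi
        self.communities |= set(add)
        self.communities -= set(remove)

    def share_link(self):
        # only public multis get a link on the owner's profile
        return f"/u/{self.owner}/multi/{self.name}" if self.public else None

    def fork(self, new_owner):
        # copying creates an identical multi under the new owner's profile
        return Multi(self.name, new_owner, set(self.communities))

m = Multi("myfavs", "alice",
          {"!potatoism", "!illegallysmolcats", "!anime_art"})
m.public = True
f = m.fork("bob")
f.edit(add={"!factorio"}, remove={"!anime_art"})
```

    Note that editing the fork leaves the original untouched, which is the whole point of forking instead of sharing.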



  • By far, my biggest issue with flags in r/place and Canvas does not apply to a (like you said) 20x30. It’s stuff like this:


    People covering and fiercely defending huge chunks of the canvas, for something that is completely unoriginal, repetitive, and boring. And yet it still gets a pass - unlike, say, The Void; everyone fights The Void.

    Another issue I have has to do with identity: the reason why we [people in general] “default” to a national flag for identity is that our media and governments bombard us with a nationalistic discourse, seeking to forge an identity that “happens” to coincide with what they want.

    But, once we go past that, there are far more meaningful things out there to identify ourselves with - such as our cultures and communities, and most of the time they don’t coincide with the countries and their flags.

    As such I don’t think that this is a discourse that we should promote, through the usage of the symbols associated with that discourse.

    Maybe where you’re from it’s easy to separate your government flag as its own symbol that doesn’t represent real people

    I think that this is more a matter of worldview than of where we’re from, given that some people in Brazil spam flags in a way that strongly resembles how they do it in the USA.




  • Yeah, it’s actually good. People use it even for trivial stuff nowadays; and you don’t need a pix key to send stuff, only to receive it. (And as long as your bank allows you to check the account through an actual computer, you don’t need a cell phone either.)

    Perhaps the only flaw is shared with the Asian QR codes - scams are a bit of a problem. You could, for example, tell someone that the transaction will be for one amount and generate a code demanding a bigger one. But I feel like that’s less an issue with the system and more with the customer, given that the system shows you who you’re sending money to, and how much, before confirmation.

    I’m not informed on Tikkie and Klarna, besides one being Dutch and the other Swedish. How do they work?


    Brazil ended up with a third system: Pix. It boils down to the following:

    • The money receiver sends the payer either a “key” or a QR code.
    • The payer opens their bank’s app and uses it to either paste the key or scan the QR code.
    • The payer defines the value, if the code is not dynamic (more on that later).
    • The payer confirms the transaction, and an electronic voucher is emitted.

    The “key” in question can be your cell phone number, your natural/legal person registry number (CPF/CNPJ), an e-mail address, or even a random number. You can have up to five of them.

    Regarding dynamic codes, it’s also possible to generate a key or QR code that applies to a single transaction. Then the value to be paid is already included.
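    The static-vs-dynamic distinction boils down to whether the value is baked into the code. A toy sketch of the flow (not the real BR Code payload format; keys and amounts are made up):

```python
def make_code(key, amount=None):
    """Static code: no amount baked in, the payer types the value.
    Dynamic code: single-transaction, the amount is already included."""
    return {"key": key, "amount": amount, "dynamic": amount is not None}

def pay(code, typed_amount=None):
    # the app shows recipient and value before the payer confirms
    amount = code["amount"] if code["dynamic"] else typed_amount
    if amount is None or amount <= 0:
        raise ValueError("no value defined for the transaction")
    return {"to": code["key"], "paid": amount}  # the "voucher"

static = make_code("+55 11 99999-0000")          # key-based, reusable
dynamic = make_code("+55 11 99999-0000", 50.0)   # single transaction
```

    The scam mentioned earlier lives in the gap between what the seller tells you and what the code actually demands - which is why the confirmation screen matters.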

    Frankly the system surprised me. It’s actually good and practical; and that’s coming from someone who’s highly suspicious of anything coming from the federal government, and who hates cell phones. [insert old man screaming at clouds meme]


  • Do you mind if I address this comment alongside your other reply? Both are directly connected.

    I was about to disagree, but that’s actually really interesting. Could you expand on that?

    If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with “made by AI”. To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

    In other words, to lie without getting caught you’re getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was people increasing their output by 900%, submitting ten really shitty pics or paragraphs that look a lot like someone else’s, instead of one decent and original one. Those are the ones who’d get caught, because they’re doing what you called “dumb” (and I agree) - not proof-reading their output.

    Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.





  • Think on the available e-books as a common pool, from the point of view of the people buying them: that pool is in perfect condition if all books there are DRM-free, or ruined if all books are infested with DRM.

    When someone buys a book with DRM, they’re degrading that pool, as they’re telling sellers “we buy books with DRM just fine”. And yet people keep doing it, because:

    • They had an easier time finding the copy with DRM than a DRM-free one.
    • The copy with DRM might be cheaper.
    • The copy with DRM is bought through services that they’re already used to, and registering to another service is a bother.
    • If copy with DRM stops working, that might be fine, if the buyer only needed the book in the short term.
    • Sharing is not a concern if the person isn’t willing to share in the first place.
    • They might not even know what the deal is, so they don’t perceive the malus of DRM-infested books.

    So in a lot of situations, buyers beeline towards the copy with DRM, as it’s individually more convenient, even if ruining the pool for everyone in the process. That’s why I said that it’s a tragedy of the commons.

    As you correctly highlighted, that model relies on the idea that the buyer is selfish; as in, they won’t care about the overall impact of their actions on others, only on themselves. That is a simplification and needs to be taken with a grain of salt; note, however, that people are more prone to act selfishly if being selfless takes too much effort out of them. And those businesses selling you DRM-infested copies know it - that’s why they enclose you, because leaving that enclosure to support DRM-free publishers takes effort.
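    That selfishness-vs-effort trade-off can be sketched as a toy model; all numbers here are made up purely for illustration:

```python
def buys_drm(pool_quality, convenience_gap, altruism):
    """True if this buyer grabs the DRM copy: the individual
    convenience outweighs their care for the shared pool."""
    return convenience_gap > altruism * pool_quality

def simulate(buyers, pool=1.0, convenience_gap=0.5, erosion=0.05):
    # pool: 1.0 = all books DRM-free, 0.0 = all DRM-infested
    for altruism in buyers:
        if buys_drm(pool, convenience_gap, altruism):
            pool = max(0.0, pool - erosion)  # each DRM sale degrades the pool
    return pool

selfish = simulate([0.1] * 10)   # pool erodes with every purchase
selfless = simulate([2.0] * 10)  # pool stays intact
```

    The point of the sketch: no single buyer ruins the pool, but each individually convenient choice nudges it downward - the classic commons dynamic.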

    I guess in the end we are talking about the same

    I also think so. I’m mostly trying to dig further into the subject.

    So the problem is not really consumer choice, but rather that DRM is allowed in its current form. But I admit that this is a different discussion

    Even being a different discussion, I think that one leads to another.

    Legislating against DRM might be an option, but easier said than done - governments are especially unruly, and they’d rather support corporations than populations.

    Another option, as weird as it might sound, might be to promote that “if buying is not owning, pirating is not stealing” discourse. It tips the scale from the business’ PoV: if people would rather pirate than buy books with DRM, might as well offer them DRM-free to increase sales.


  • Does this mean that I need to wait until September to reply? /jk

    I believe that the problem with the neolibs in this case is not the descriptive model (tragedy of the commons) that they’re using to predict a potential issue; it’s the “magical” solution that they prescribe for it, which “happens” to align with their economic ideology, while failing to address that:

    • in plenty of cases privatisation worsens the erosion of the common resource, due to the introduction of competition;
    • the model applies especially well to businesses, which behave more like the mythical “rational agent” than individuals do;
    • what you need to solve the issue is simply “agreement”. Going from “agreement” to “privatise it!!!1one” is an insane leap of logic on their part.

    And while all models break if you look too hard at them, I don’t think that this one does here - it explains well why individuals are buying DRM-stained e-books, even if this ultimately hurts them as a collective, by reducing the availability of DRM-free books.

    (And it isn’t like you can privatise it, as the neolibs would eagerly propose; it is a private market already.)

    I’m reading the book that you recommended (thanks for the rec, by the way!). At a quick glance, it seems to propose self-organisation as a way to solve issues concerning common pool resources; that might work in plenty of cases, but certainly not here, as there’s no way to self-organise the people who buy e-books.

    And frankly, I don’t know a solution either. Perhaps piracy might play an important and positive role? It increases the desirability of DRM-free books (you can’t share the DRM-stained ones), and puts a check on the amount of obnoxiousness and rug-pulling that corporations can submit you to.



  • I’m thinking that perhaps the community could/should go a step further, and create another instance to talk about open source and privacy. That would be IMO the best scenario - it would be a great counterpoint to .ml, and it would avoid centralising Lemmy around .world even further.

    (I also feel like this might be better even for the devs. Administrative work isn’t exactly pleasing, and if I had to take a guess they mostly maintain that instance because they need it for the software. But that’s just a guess, don’t trust me on that.)

    inb4: yes, I know - easier said than done. But I feel like it could be a good option.



  • This is going to be interesting. I’m already thinking on how it would impact my gameplay.

    The main concern for me is sci packs spoiling. Ideally they should be consumed in situ, so I’d consider moving research to Gleba and shipping the other sci packs to it. This way, if something does spoil, at least the spoilage is near where I can use it. Probably easier said than done - odds are that other planets have “perks” that would make centralising science there more convenient.

    You’ll also probably want to speed up production as much as possible, since products inherit spoilage from their ingredients. Direct insertion, speed modules, and upgrading machines ASAP will be essential - you want to minimise the time between harvesting the fruit and outputting something that doesn’t spoil (like plastic or science).
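    If inheritance works as an amount-weighted average of ingredient freshness (my assumption; the FFF doesn’t spell out the exact formula), mixing fresh and old ingredients would look roughly like this:

```python
def inherited_spoilage(ingredients):
    """Guess at product spoilage as an amount-weighted average.
    ingredients: (amount, spoil_fraction) pairs, where
    spoil_fraction is 0.0 for fresh and 1.0 for fully spoiled."""
    total = sum(amount for amount, _ in ingredients)
    return sum(amount * frac for amount, frac in ingredients) / total

# mixing fresh fruit with half-spoiled fruit taints the product,
# hence the rush to process everything right after harvest
mixed = inherited_spoilage([(10, 0.0), (10, 0.5)])
fresh = inherited_spoilage([(10, 0.0), (10, 0.0)])
```

    Under that assumption, even one stale ingredient drags the whole batch down, which is another argument for direct insertion over buffering in chests.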

    Fruits outputting pulp and seeds also hints at an oil-like problem, as you need to get rid of byproducts that you might not be using. Use only the seeds and you’re left with the pulp; use only the pulp and you’re left with the seeds. The FFF hints that you can burn stuff, but that feels wasteful.




  • I also apologise for the tone. That was a knee-jerk reaction from my part; my bad.

    (In my own defence, I’ve been discussing this topic with tech bros, and they rather consistently invert the burden of proof, often evoking Brandolini’s Law. You probably know which “types” I’m talking about.)

    On-topic. Given that “smart” is still an internal attribute of the black box, perhaps we could better gauge whether those models are likely to become an existential threat by 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.

    It’s also easier to work with your example productively this way. Here’s a counterpoint:


    The prompt asks for eight legs, and only one pic was able to output it correctly; two ignored it, and one of the pics shows ten legs. That’s 25% accuracy.

    I believe that the key difference between “your” unicorn and “my” eight-legged dragon is in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model with; while eight-legged dragons are something that I made up, so there’s no direct reference, even if you could logically combine other references (as a spider + a dragon).

    So their output is strongly limited by the training data, and it doesn’t seem to follow any strong logic. What they might output in the future depends on what we feed in; their potential for decision-making is rather weak, as they wouldn’t be able to deal with unpredictable situations - and so is their ability to go rogue.

    [Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, confirming my hypothesis - pics of eight-legged horses exist thanks to Sleipnir.]

    Neural nets

    Neural networks are a different can of worms for me, as I think that they’ll outlive LLMs by a huge margin, even if the current LLMs use them. However, how they’ll be used is likely considerably different.

    For example, current state-of-the-art LLMs are coded with some “semantic” supplementation near the embedding, added almost as an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.

    That would be considerably closer to a general intelligence than modern LLMs are - because you’d effectively demote language processing to input/output, which might as well be subbed with something else, like pictures. In this situation I believe the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some concerns about AI being an existential threat - because people would use such an AI for decision-making, and it might output decisions that go terribly right, as in that “paperclip factory” thought experiment.

    The fact that we don’t see developments in this direction yet shows, for me, that it’s easier said than done, and we’re really far from that.