• 0 Posts
  • 135 Comments
Joined 1 year ago
Cake day: July 18th, 2023


  • Custom domains mean that if the alias provider enshittifies, you can switch to any other provider near-instantly. As long as you never use the domains to host illegal or dodgy shit, it’s extremely unlikely you’ll ever lose them — far less likely than losing a Gmail or whatever.

    With SL you can avoid spam by using the “beta” (been beta for 3+ years lol) “auto create” option instead of a catch-all, meaning you can direct emails to different inboxes (or do nothing) based on specific regex strings you control — up to 100 of them. I had a catch-all regex (.*) as my #100 and it took 2 years to receive catch-all phishing spam. Then I removed it, so now I have only random strings (e.g. .*fgyu.*), and new aliases must contain one of them to get anywhere. Everything else bounces. All previously created aliases continue to work until you disable them individually.
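
    A minimal sketch of that routing logic in Python (not SL’s actual code; the tokens and inbox addresses below are made-up examples):

    ```python
    import re

    # Ordered alias-routing rules, first match wins -- roughly what the
    # "auto create" regexes do. All tokens/inboxes here are hypothetical.
    rules = [
        (r".*fgyu.*", "main-inbox@example.com"),   # my random-string token
        (r".*zqxw.*", "shopping@example.com"),     # a second made-up token
        # rule #100 used to be (r".*", ...) -- the catch-all that drew spam
    ]

    def route(alias: str) -> str | None:
        """Return the destination inbox for a new alias, or None to bounce."""
        local_part = alias.split("@")[0]
        for pattern, inbox in rules:
            if re.fullmatch(pattern, local_part):
                return inbox
        return None  # no token present -> bounce

    assert route("order-fgyu-amazon@mydomain.example") == "main-inbox@example.com"
    assert route("randomspam@mydomain.example") is None
    ```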

    I use a mix:

    • SL-domains: anything I don’t give a shit about.
    • Non-PII domain: anything I’d want to persist if I changed providers, but that doesn’t need my identity, or where I can give out a unique email in person.
    • PII-domain: banks and all other services tied to my identity.
    • Top-Secret-PII-domain: critical services that could compromise all others (password manager, email/OS accounts, domain name registrar).





  • The obvious solution to me is SponsorBlock switching to sampling pixels out of each frame. Like that project that encoded data into video streams (while staying resilient to compression), there are algorithms that could fingerprint any ad with an extremely high degree of accuracy. It’d be more complex than the current implementation, but it’d also be more resilient. I’d settle for it hiding the video and suppressing the audio for the ad’s duration, possibly displaying a countdown timer, vs actually watching the ad. Then YouTube would get paid but have no way of knowing you haven’t seen the ad, and the metrics around their ad effectiveness would ultimately suffer, so users still win.
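
    As a rough illustration of the pixel-sampling idea, here’s a toy “average hash” fingerprint in Python; the coarse downscaling is what lets it survive re-encoding and compression. A real system would need a shared database of ad fingerprints and temporal matching, and none of these names come from SponsorBlock:

    ```python
    import numpy as np

    def frame_fingerprint(frame: np.ndarray, size: int = 8) -> int:
        """Sample a coarse grid of pixels and hash each against the mean.
        Coarse sampling is why the fingerprint survives re-encoding."""
        gray = frame.mean(axis=2)                      # RGB -> rough luminance
        ys = np.linspace(0, gray.shape[0] - 1, size).astype(int)
        xs = np.linspace(0, gray.shape[1] - 1, size).astype(int)
        patch = gray[np.ix_(ys, xs)]
        bits = patch.flatten() > patch.mean()
        return sum(int(b) << i for i, b in enumerate(bits))  # 64-bit hash

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def looks_like_ad(frame: np.ndarray, ad_fingerprints: list[int],
                      max_dist: int = 6) -> bool:
        """Fuzzy match: a few flipped bits still count as the same ad frame."""
        fp = frame_fingerprint(frame)
        return any(hamming(fp, known) <= max_dist for known in ad_fingerprints)
    ```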

    You could even go so far as to have the client cache the video several minutes in advance, dropping all the ad frames, so it’s a seamless experience for the user. I got money, but will spend 10x as much ensuring Google gets less from me. It ain’t about money. It’s about sending a message!
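
    Continuing the sketch above, the read-ahead splice then reduces to a filter over the buffered frames (still hypothetical, and hand-waving away audio and container formats):

    ```python
    def seamless(buffered_frames, ad_fingerprints):
        """Yield only non-ad frames from a few minutes of read-ahead buffer."""
        for frame in buffered_frames:
            if not looks_like_ad(frame, ad_fingerprints):
                yield frame
    ```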


  • Opt-in should be mandatory for all services and data sharing. I would start my transition to Linux today if this were opt-out, though the way Apple handles this for other services makes me believe opt-in will be temporary.

    Currently, when you set up any device as new, even as an offline/local user on macOS, the moment you log into iCloud it opts almost every app and service into iCloud, even ones you have never used and have always disabled on every device. There’s seemingly no way to prevent this behavior on any device, let alone at the account level.

    Currently, even though my iPhone and language support offline (on-device) Siri, and I’ve disabled all analytics sharing options, I must still agree to Apple’s data sharing and privacy policy to use Siri. Why would I need to agree to a privacy policy if I only want to use Siri offline, locally on my device, and disable it from accessing Apple’s servers or anything external to the content on my phone? Likely because if you enable Siri, it auto-enables (opts in) for every app and service on your device. Again, no way to disable this behavior.

    I understand the majority of users do not care about privacy or surveillance capitalism, but for me to trust and use a personal AI assistant baked into my device’s OS, I need the ability to make it 100% offline, plus fine-grained network control for individual apps and processes, including all of the OS’s own processes. It would not be difficult to add a toggle at login to “enable iCloud/Siri for all apps/services” or “let me choose which apps/services to use with iCloud/Siri, individually”. Apple needs stronger and clearer offline controls in all its software, period.



  • Lucky! I finally got my mum to use the password manager I admin, but she still reuses the same dozen passwords for everything and manually enters them in… sigh. I’ve set strong passwords and 2FA for all critical accounts, so I just let her be a moron with the rest of them.

    Computers break her brain. She literally responds with questions like “it’s IN the computer?” Zoolander style. I just do most of her shit myself because it’s less painful than trying to teach her.







  • WhatAmLemmy@lemmy.world to Asklemmy@lemmy.ml · Search engines down? (edited, 1 month ago)

    I was thinking about this and imagined the federated servers handling the index db, search algorithms, and search requests, while leveraging each user’s browser/compute to do the actual web crawling/scraping/indexing; the server would simply perform CRUD operations to move the processed data from clients into the index db. This approach targets the core reason search engines fail (the cost of scraping and processing billions of sites), reduces the cost of hosting a search server, and spreads the expense across the user base.

    It may also have the added benefit of hindering surveillance capitalism, due to a sea of junk queries coming from every client, especially if the crawler requests were made from the same browser (obviously isolated from the user’s own data, extensions, queries, etc.). The federated servers would also probably need to operate as lighthouses that orchestrate which domains and IP ranges to crawl, and efficiently distribute the workload to client machines.
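
    A bare-bones sketch of that split in Python, with the lighthouse doing only assignment and CRUD while clients do the expensive fetch/parse work (all names hypothetical; a real version needs auth, dedup, rate limiting, and an actual index db):

    ```python
    import re
    from collections import deque

    class Lighthouse:
        """Federated server: hands out crawl work, stores processed results."""
        def __init__(self, seed_urls):
            self.todo = deque(seed_urls)   # URLs to assign to clients
            self.index = {}                # term -> set of URLs (stand-in db)

        def assign_work(self):
            return self.todo.popleft() if self.todo else None

        def submit(self, url, terms, discovered_links):
            for term in terms:             # cheap CRUD only; no parsing here
                self.index.setdefault(term, set()).add(url)
            self.todo.extend(discovered_links)

    def client_crawl(url, fetch):
        """Runs on the user's browser/compute: fetch, strip tags, tokenize."""
        html = fetch(url)
        text = re.sub(r"<[^>]+>", " ", html)
        terms = set(re.findall(r"[a-z]{3,}", text.lower()))
        links = re.findall(r'href="(https?://[^"]+)"', html)
        return terms, links

    # One round trip: server assigns, client processes, server stores.
    # lighthouse = Lighthouse(["https://example.com"])
    # url = lighthouse.assign_work()
    # lighthouse.submit(url, *client_crawl(url, my_fetch))
    ```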



  • WhatAmLemmy@lemmy.world to Linux@lemmy.ml · *Permanently Deleted* (edited, 2 months ago)

    Should … Should we tell OP that nobody understands all of any moderately large codebase, especially the sub-dependencies … or that even the thousands of developers who wrote most of that code don’t understand how their own code works anymore?

    I could read the same book every year and I still won’t remember most of the minor events on my deathbed. Doesn’t mean I won’t remember the key components that make up the story — coding is like that, except the minor events and key components can be rewritten or removed by someone else whenever you go to read them next.



  • I think this question might be missing the point of TOTP and the protection it provides. The reason 256/512 is used to encrypt data and hash passwords is to prevent brute force and other attacks (e.g. using data from other breaches). This doesn’t really matter with TOTP. Nobody can reverse-engineer a TOTP secret out of you. They can’t use your info from prior breaches to glean what your TOTP might be anywhere else. It’s not something where “cracking” the hash is likely to be attempted, as an attacker would still have to capture the generated codes and the time of input in some way, then brute-force candidate secrets until they find one that produces the correct codes at time x. Why would they ever do that when it would be a thousand times easier to compromise a device or TOTP app and scrape the secrets directly from it, negating any need to brute force?

    Note: I am not a cryptographer and have not implemented a TOTP server, so I could be completely wrong.

    TL;DR 256/512 wouldn’t necessarily increase the security of TOTP at all.
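
    For context, here’s RFC 6238 TOTP boiled down (the standard algorithm, SHA-1 by default; the secret is the usual docs example). Whatever hash you swap in, the output collapses to ~6 digits, which is why the hash size isn’t the weak point:

    ```python
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, step: int = 30,
             digestmod=hashlib.sha1) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, digestmod).digest()
        offset = mac[-1] & 0x0F                  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # sha1 and sha256 both end up as 6 digits -- same guessing space either way:
    print(totp("JBSWY3DPEHPK3PXP"))
    print(totp("JBSWY3DPEHPK3PXP", digestmod=hashlib.sha256))
    ```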