• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • I think it was more targeting the client ISP side than the VPN provider side. So something like having your ISP monitor your connection (voluntarily, or forced to with a warrant/law) and report if your connection activity matches that of someone accessing a certain site that your local government might not like, for example. In that scenario they would be able to isolate it to at least individual customer accounts of an ISP, and ISPs usually know who you are or where to find you in order to provide service. I may be misunderstanding it though.

    Edit: On second reading, it looks like they might just be able to buy that info directly from monitoring companies and get much of what they need to do correlation at various points along a VPN-protected connection’s route. The Mullvad post has links to Vice articles describing the data that is being purchased by governments.


  • One example:

    By observing that when someone visits site X, it loads resources A, B, C, etc. in a specific order with specific sizes, then with enough distinguishable resources loaded like that, an observer can determine that you’re loading that site, even if it’s loaded inside a VPN connection. Think about when you load Lemmy.world: it loads the main page, then specific images and style sheets that may be recognizable sizes and are generally loaded in a particular order as they’re encountered in the main page, in scripts, and in things included by scripts. With enough data, instead of writing static rules (“x of size n was loaded, then y of size m”, etc.), the traffic can be fed to an AI model trained on what connections to specific sites typically look like. They could even generate their own data for sites, in both normal traffic and VPN-encrypted form, and correlate the two to better train their model on what a site accessed over a VPN looks like. Overall, AI lets them simplify and automate the identification process when given enough samples.
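As a loose illustration of the size-and-order matching described above (the site names, byte sizes, and tolerance here are all invented, not from any real dataset), a fingerprint matcher might look like:

```python
# Hypothetical sketch of size-based traffic fingerprinting.

def similarity(observed, signature, tolerance=64):
    """Fraction of signature sizes matched in order, within a byte
    tolerance (padding and TLS overhead shift sizes slightly)."""
    matched, i = 0, 0
    for size in observed:
        if i < len(signature) and abs(size - signature[i]) <= tolerance:
            matched += 1
            i += 1
    return matched / len(signature)

def guess_site(observed, signatures):
    """Return the site whose resource-size signature best matches."""
    return max(signatures, key=lambda s: similarity(observed, signatures[s]))

# Signatures: ordered sizes of main page, stylesheet, images, scripts.
signatures = {
    "lemmy.world": [48_200, 12_340, 3_050, 77_900, 5_420],
    "example.org": [1_256, 648],
}

# Sizes seen inside the encrypted tunnel:
observed = [48_233, 12_310, 3_055, 77_940, 5_400]
print(guess_site(observed, signatures))  # lemmy.world
```

A real classifier would be statistical rather than a fixed tolerance, but the principle is the same: the sizes and order leak through the encryption.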

    Mullvad is working on enabling their VPN apps to: 1. pad the data to a single uniform size, so that the different resources are less identifiable, and 2. send random data in the background, so that there is more noise to filter out when matching patterns. I’m not sure about 3, to be honest.
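A minimal sketch of the padding idea (the bucket size and framing are hypothetical, not Mullvad's actual scheme): every record on the wire is padded to one fixed size, so an observer can no longer distinguish resources by length.

```python
import os

BUCKET = 16_384  # hypothetical fixed on-the-wire record size

def pad(payload: bytes) -> bytes:
    """Pad to the bucket size so an observer sees only uniform records.
    A 2-byte length prefix lets the receiver strip the padding."""
    if len(payload) > BUCKET - 2:
        raise ValueError("payload too large for one bucket")
    header = len(payload).to_bytes(2, "big")
    return header + payload + os.urandom(BUCKET - 2 - len(payload))

def unpad(record: bytes) -> bytes:
    n = int.from_bytes(record[:2], "big")
    return record[2 : 2 + n]

msg = b"GET /style.css"
record = pad(msg)
assert len(record) == BUCKET   # every record is the same size on the wire
assert unpad(record) == msg    # the receiver recovers the real payload
```

The trade-off is bandwidth: short payloads cost a full bucket, which is exactly why this is paired with cover traffic rather than used alone.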






  • They haven’t released the text publicly but they’re voting on it in less than a week. That’s also one of the many objections that Mozilla et al. have to this whole thing: it’s basically being done in secret, in a way that won’t give the public any time to react or object.

    Historically, browser vendors have only distrusted certificate authorities when they had concrete reasons not to trust them, not for arbitrary reasons.

    One of the examples of them preventing a CA from being trusted is Kazakhstan’s, which was specifically set up to enable them to intercept users’ traffic: https://blog.mozilla.org/netpolicy/2020/12/18/kazakhstan-root-2020/

    Even if all of the EU states turn out to be completely trustworthy, forcing browser vendors to trust the EU CAs would give more political cover for other states to force browser vendors to trust their CAs. Ones that definitely should not be trusted.

    I think there wouldn’t be nearly the same level of objection if it were limited to each country’s ccTLD, rather than any domain on the internet.
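For illustration, a ccTLD restriction amounts to a check like the one below. (This is a toy sketch; in real PKI this kind of limit is expressed with the X.509 name-constraints extension on the CA certificate, not ad-hoc code, and the domains here are just examples.)

```python
def allowed_by_cctld_constraint(ca_country: str, domain: str) -> bool:
    """Hypothetical rule: a state CA may only issue certificates
    for names under its own country-code TLD."""
    return domain.lower().rstrip(".").endswith("." + ca_country.lower())

# A French state CA could issue for French government sites...
assert allowed_by_cctld_constraint("fr", "impots.gouv.fr")
# ...but not for arbitrary domains on the internet:
assert not allowed_by_cctld_constraint("fr", "example.com")
```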


  • How is giving any EU state the ability to be a certificate authority in your browser, able to issue a certificate for any site, without needing to follow the rules browser vendors have for what makes an authority trustworthy, and with no option to disable them or add additional checks on their validity, “protecting their citizens from (American) corporate abuse”?

    From the Mozilla post:

    Any EU member state has the ability to designate cryptographic keys for distribution in web browsers and browsers are forbidden from revoking trust in these keys without government permission.

    […]

    There is no independent check or balance on the decisions made by member states with respect to the keys they authorize and the use they put them to.

    […]

    The text goes on to ban browsers from applying security checks to these EU keys and certificates except those pre-approved by the EU’s IT standards body - ETSI.




  • How it works: Chrome only displays the lookalike phishing-protection screen for sites whose domains are similar to ones you visit, and a server can detect this because its page fails to load (the warning appears first, so the request never arrives).

    Summary from the conclusion:

    Lookalike Warnings are arguably a great safety feature that protects users from common threats on the web. It’s hard to balance effectiveness and good user experience, making Site Engagement a vital source of information. However, since disabling Site Engagement or Lookalike Warnings is impossible, we believe it’s important to discuss these features’ privacy implications. For some people, the risk of exposing their browsing history to a targeted attack might be far worse than being tricked by lookalike phishing websites. Especially given that site engagement is also copied into incognito sessions.
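A toy simulation of the inference step (the probe domains below are invented): the attacker serves probe resources from lookalike domains and watches the server log. Probes that never arrive imply the warning fired, which implies the victim has engagement with the real site.

```python
# Hypothetical mapping of attacker-registered lookalikes to their targets.
probes = {
    "lemmy-world.example": "lemmy.world",
    "github-com.example": "github.com",
}

def infer_profile(request_log: set) -> list:
    """Targets whose lookalike probe was blocked (no request seen in the
    server log) are likely in the victim's browsing history."""
    return sorted(target for probe, target in probes.items()
                  if probe not in request_log)

# The victim's browser fetched only the github probe, so the
# lemmy.world lookalike warning must have fired:
print(infer_profile({"github-com.example"}))  # ['lemmy.world']
```

This is why the article frames Site Engagement as a side channel: the warning's presence or absence is itself observable information.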




  • Since this proposal covers the key exchange, both the client side and the server side need to be updated to make use of it. That kind of rollout can take years, but luckily it can fall back to older methods in the meantime.

    It also needs to be thoroughly vetted, which is why it’s a hybrid approach. If the quantum-resistant algorithm turns out to have problems (as some others have), connections are still protected by the traditional part just as they would have been, with no data leaked.
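The hybrid idea can be sketched like this: derive the session key from both shared secrets, so breaking one algorithm alone gains an attacker nothing. (The fixed byte strings stand in for e.g. an X25519 secret and a Kyber/ML-KEM secret; the domain label is invented for the demo.)

```python
import hashlib
import hmac

def hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    """Combine a classical and a post-quantum shared secret into one
    session key; it stays safe while EITHER algorithm is unbroken."""
    return hmac.new(b"hybrid-kx-demo", classical_ss + pq_ss,
                    hashlib.sha256).digest()

classical = b"A" * 32      # stand-in for an X25519 shared secret
post_quantum = b"B" * 32   # stand-in for a Kyber/ML-KEM shared secret

key = hybrid_secret(classical, post_quantum)
# An attacker who recovers only the PQ secret still can't compute the
# session key without the classical secret:
assert key != hybrid_secret(b"C" * 32, post_quantum)
```

Real hybrid schemes concatenate the two secrets into the handshake's key schedule in much the same spirit, just with the actual algorithms and a standardized KDF.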




  • I think the Firefox settings now call it the address bar when selecting whether you want it to do both functions or have separate boxes. It may still be called that internally.

    Also I just looked it up and apparently I was wrong anyway: the Chrome internal docs call it the omnibox…

    And the chromium developers blog calls it the address bar…

    And so does The Keyword (blog.google)…

    I think they’ve both given up on getting the public to use their special names now that it’s just an expected feature of a browser.


  • Are you talking about the “Make Chrome your own” page that walks you through a few customization options before asking if you want to sign in? You can just select “No thanks” and you’re not signed in. Incognito windows work just fine.

    Or are you talking about the “Set up your new Chrome profile” screen that pops up when you make a new profile? It shows two options: Sign in [to Google] and “Continue without an account”. “Continue without an account” just has you name the new profile to distinguish it from any others you may have and then lets you start using it. Incognito windows work just fine.

    This is Chrome on the desktop by the way.



  • While Signal does use Google’s notification system, my understanding is that they use it only as a way to wake up the app on the phone so it can check for new messages, rather than sending any message content that way. The operating system’s notification system would still have access to the message content in the locally generated notification, however, as would any apps you’ve given access to your notifications.

    It may be something like Android System Intelligence or another similar service doing it: https://support.google.com/pixelphone/answer/12112173?hl=en

    Notification management: Adds action buttons to notifications. For example, the action buttons could add directions to a place, help you track a package, or add a contact.
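The wake-up pattern described above can be sketched as follows (function names and the message content are invented; the real app would fetch and decrypt ciphertext from its own server):

```python
# Hypothetical sketch: the push payload is empty, the app fetches
# content itself, then posts a local notification.

def fetch_new_messages():
    """Stand-in for the app pulling and decrypting messages locally."""
    return [{"sender": "Alice", "body": "hi"}]

def on_push_received(payload: dict) -> list:
    """Handle a data-only push: no message content is in `payload`;
    it only tells the app to wake up and check for messages."""
    assert "body" not in payload  # the push itself carries no content
    notifications = []
    for msg in fetch_new_messages():
        # Content appears only in the locally generated notification,
        # which the OS (and notification-access apps) can read.
        notifications.append({"title": msg["sender"], "text": msg["body"]})
    return notifications

notes = on_push_received({"wakeup": True})
print(notes[0]["title"])  # Alice
```

This is why Google's servers see only that *something* arrived, while anything reading your notifications sees the decrypted text.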