One good thing doesn’t even outweigh one bad one. What do you call someone who tells 99 truths and one lie?
A liar.
It’s the same here; there’s an asymmetry between doing what’s right and betraying someone’s trust. When Mozilla can demonstrate consistent integrity, maybe I’ll stop using a fork.
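How did they betray you or their mission?

Found online: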
With the introduction of even more AI services to Firefox, I wanted to express that, to me, it does seem like Firefox is missing the mark with the development of these features. This sentiment is echoed by many people in my social bubble of technologists, ethicists, and other people whose priorities align with what one might consider the values Firefox was built on.
Those concerns are, in my opinion, very valid. Machine learning models have shown themselves to be unreliable; just some of the recent examples from AI products made by large corporations include telling users to put glue on pizza and providing false information about just about anything. There are a number of ethical issues yet to be resolved with the use of AI, from its intense use of computing resources, which adds to demands on the electric grid, through privacy and possible copyright violations in the datasets that power the models, to an entire bag of other issues monitored by excellent resources such as the AIAAIC repository.
AI/machine learning is an amazing field with many promising applications, and yet its recent rise to fame has been characterized by failures and issues in many implementations. Personally, I often don’t even see whether the application of AI is truly necessary; in many cases a human would do a faster and more trustworthy job of gathering information than a large machine learning model.
When we look at the current state of “AI”, does it reflect what Firefox stands for? Does it reflect Mozilla’s principles?
Let’s compare.
Principle 2
The internet is a global public resource that must remain open and accessible.
LLMs are known for being black boxes. Depending on our definition of “open and accessible”, an LLM can be a very free resource or a completely inaccessible black box of math.
Principle 4
Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
It’s clear that LLMs pose a privacy risk to Internet users, in at least two ways. First, the data they are trained on sometimes contains private information due to a negligent training process; in that case, users of the trained model may be able to extract that private information. The second risk is, of course, the use of third-party services that may use information to infringe on users’ privacy. While Mozilla’s blog assures that “we are committed to following the principles of user choice, agency, and privacy as we bring AI-powered enhancements to Firefox”, it’s unclear how supporting services such as “ChatGPT, Google Gemini, HuggingChat, and Le Chat Mistral” helps protect Firefox users’ privacy. Giving users choice should not compromise their safety and privacy.
Principle 10
Magnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.
In my opinion, the process of designing AI functionality on top of Firefox included no evaluation of how that functionality could benefit the public. As mentioned above, there are a number of issues with LLMs; they can be dangerous and work to the detriment of users. Investing in and supporting technology of this kind may lead to terrible consequences with little actual benefit.
In addition Mozilla claims:
We are committed to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.
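I argue that AI models are the opposite of that. AI output is not verifiable, and these models work against shared knowledge by producing seemingly accurate information that turns out to be false.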
I ask Mozilla to reevaluate the impact of AI considering all of these points. I ask on behalf of myself, as well as the many users I see on the Fediverse who are greatly worried and frustrated by the AI changes added on top of Firefox. There is certainly a lot of potential for great things to be done with AI, but those steps must be taken responsibly.
I’m glad you decided to copy-paste an overly padded ream of text instead of forming your own opinion, but sure.
P.2
All AI models currently used in Firefox are fully open source.
P.4
Those models are also run completely locally.
That linked blog refers to an OPT-IN experiment that lets you choose what model you want to use, including non-privacy-respecting ones, but this is left up to the user.
P.10
The two AI features currently in Firefox are alt-text generation for blind people and privacy-respecting page translation; I think you’d have a hard time justifying why those aren’t useful.
I’m glad you decided to copy-paste an overly padded ream of text instead of forming your own opinion
I endorse it.
P.4 Those models are also run completely locally.
Wrong.
None of the models provided by Mozilla as defaults are run locally. They are all run on their own respective providers’ clouds.
P.2 All AI models currently used in Firefox are fully open source.
Also wrong.
Even if we ignore the fact that Mozilla only allows you to connect to third parties that are running something in a black box, there is no such thing as open source AI, as far as I have seen. The models are always closed source black boxes. If you have downloaded a binary and cannot compile it yourself, you are not using something that is open source.
P.10
The two AI features currently in Firefox are alt-text generation for blind people and privacy-respecting page translation
Which was not being discussed in the linked post. The fact you are trying to veer off topic from “Mozilla has not broken our trust” to “this particular other thing has not broken our trust” is unhelpful.
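Just ask Seamus (the bridge builder) what people remember him for.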
The comment is going “grrr AI”; I was pointing out that the AI features currently in Firefox (translation and alt-text) are local and privacy-respecting. You can’t just ignore things that don’t fit your opinion.
Second, that chatbot thing you’re crying about is OPT IN AND only in nightly, lets you choose any chatbot available online, not just the ones they named, and on top of that it most likely will never make it into a non-nightly release, because they’ve decided to make that kind of AI feature an extension instead.
Also, when I say that a model is open source, I am referring to the binary being downloadable and the model weights being freely available.
You clearly saw the word “AI” and decided that Mozilla was as bad as Google, without looking into it at all.
You asked how Mozilla betrayed users, so let’s focus on that. I don’t care about the ways they haven’t betrayed users, in the same way I don’t care about how Seamus built a bridge.
Second, that chatbot thing you’re crying about is >>> only in nightly
No, it’s in Firefox 130. I know this because I use Firefox.
it most likely will never make it into a non-nightly release
LOL
Also, when I say that a model is open source, I am referring to the binary being downloadable
This is not what open source means. If that’s the case, Microsoft Windows is open source. Go nuts.
You clearly saw the word “AI” and decided that Mozilla was as bad as Google, without looking into it at all.
People aren’t blindly saying all AI is bad. They were pointing at a specific thing. Do not strawman.
No, it’s in Firefox 130. I know this because I use Firefox.
It’s in the “Labs” section; it’s disingenuous to imply it isn’t. I was wrong to say it’s only in nightly, but it’s still an opt-in experimental feature.
Also, truncating my arguments in quotes to make me look stupid and trying to exclude any facts you don’t like is a dick move, and you know it. I’m not going to respond to the rest of this because you are clearly not arguing in good faith.
One good thing will not outweigh ten bad ones.
Also, is this post on Mastodon? How’s Mozilla’s instance doing? I hope well. /s