Osmose
@Osmose

I could make this a really, really long post, but it's late and it's not all that complex at its core, even if there's history and nuance if you want to get into it:

Mozilla's mythology (i.e. the mixture of historical beliefs of leadership, the stories told within the community, and the Mozilla Manifesto) outlines the biggest successes of Mozilla as being when they take some existing technology and either sponsor or create a superior open alternative and either topple the competition or force standardization. Firefox did this for web browsers, Rust for compiled languages, and asm.js/WebAssembly for Google Native Client.

This is what mozilla.social (before mismanagement and now downsizing) was all about: doing the same for the fediverse, which puts decentralization at its core and already appeals to Mozilla's values of privacy and choice.

Mozilla assumes generative AI / LLMs are going to be big and are problematic, and thus it follows that the most Mozilla thing to do is to embrace them and try to push the field toward better principles via participation, like they have time and again. Common Voice is the more traditional attempt at doing this: mass farming of data without permission is one of the major problems with AI training, so why not build a dataset with proper rights and permissions? And while we're at it, why not try to include languages that don't traditionally have good representation in datasets like this, too?

However, the mounting pressure from Google continuing to be the primary funding source distorts this focus such that "participation" includes shoehorning AI into features that can be charged for, like MDN Plus' AI Help. Furthermore, it's much less clear in the case of LLMs that participation itself is ethical, given the power requirements and massive data requirements compared to prior AI techniques, and the unclear impacts on users of long-term reliance on them.

There are other nuances around the type of people Mozilla hires nowadays, the specific dynamics of leadership, etc., but in broad strokes this AI stuff from Mozilla is essentially an old, successful playbook being run on what is probably the wrong game at the wrong time.
