If you feel like a little schadenfreude, here's an article about Google admitting that they won't be able to keep pace with open-source LMs that are doing more with less:
The article also mentions LoRA as a way to slash the number of trainable parameters needed to fine-tune LMs:
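For context, here's a minimal sketch of the LoRA idea, assuming PyTorch (the class name `LoRALinear` and the hyperparameter values are my own illustration, not from the paper or the article): the pretrained weight stays frozen, and only a pair of low-rank factors gets trained, which is where the parameter savings come from.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: int = 16):
        super().__init__()
        # The pretrained projection is frozen; fine-tuning never touches it.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Trainable factors: rank * (in + out) parameters instead of in * out.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction B(Ax).
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(4096, 4096, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")  # 65,536 vs ~16.8M for full fine-tuning
```

For a 4096x4096 projection at rank 8, that's roughly 65K trainable parameters instead of about 16.8M, a cut of over 99%, which is why fine-tuning big models on modest hardware suddenly became practical.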
Some quotes from the article I appreciated (emphasis theirs):
We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.
[...]
In many ways, this shouldn’t be a surprise to anyone.
now we can get on with trying to avoid an entirely different kind of dystopia: the one where, instead of megacorps deciding how this stuff gets used, it's impossible to prevent unethical uses of these models because there's no single point of control.
this is a good thing; it's closer to our ideal world. you cannot have decentralized power structures while retaining the ability to prevent misuse of technology, because any mechanism that can prevent misuse can also prevent challenges to power, and will.
however, it places a huge moral burden on the community that works with these models: to foster public conversations about where the ethical lines need to be drawn and how we can hold each other accountable to them.
we look forward to that work. it is important. it is a core part of building the society we wish to live in.
