posts from @belarius tagged #09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0

catball
@catball

If you feel like a little schadenfreude, here's an article about Google admitting that they won't be able to keep pace with open-source LMs, which are doing more with less:

It also mentions LoRA as a way to cut down the number of trainable parameters when fine-tuning LMs:

Some quotes from the article I appreciated (emphasis theirs):

We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.

People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.

[...]

In many ways, this shouldn’t be a surprise to anyone.
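For anyone curious about the LoRA mention above: the core idea is to freeze the big pretrained weight matrix and train only a small low-rank update alongside it. A minimal numpy sketch of the idea (the dimensions and names here are illustrative, not from the article):

```python
import numpy as np

# LoRA: keep the pretrained weight W (d_out x d_in) frozen and train
# only a low-rank update B @ A, with A (r x d_in) and B (d_out x r),
# where the rank r is much smaller than d_in and d_out.
d_in, d_out, r = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init so the
                                           # model starts identical to W

def lora_forward(x):
    # Effective weight is W + B @ A, but we never materialize that sum;
    # the low-rank path costs two small matmuls.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
y = lora_forward(x)

full_params = W.size              # what full fine-tuning would update
lora_params = A.size + B.size     # what LoRA actually trains
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
# With r=8 on a 1024x1024 layer, that's about 1.6% of the parameters.
```

Because B starts at zero, the adapted model is exactly the pretrained model before training, and the learned update can later be merged back into W at no inference cost.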


belarius
@belarius

Our best hope is to learn from and collaborate with what others are doing outside Google.

(Sidles up to the bar next to Google engineer) Guess you're new around these parts.