lupi

cow of tailed snake (gay)

avatar by @citriccenobite

you can say "chimoora" instead of "cow of tailed snake" if you want. its a good pun.​


i ramble about aerospace sometimes
I take rocket photos and you can see them @aWildLupi


I have a terminal case of bovine pungiform encephalopathy, the bovine puns are cowmpulsory


they/them/moo where "moo" stands in for "you" or where it's funny, like "how are moo today, Lupi?" or "dancing with mooself"



Bovigender (click flag for more info!)
flag by @arina-artemis



catball
@catball

If you feel like a little schadenfreude, here's an article about Google admitting that they won't be able to keep pace with open-source LMs that are doing more with less:

The article also mentions LoRA in the context of reducing the parameters needed for LMs:
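
For anyone unfamiliar, the core LoRA trick is to freeze the pretrained weights and train only a small low-rank update next to them, which is where the parameter savings come from. Here's a minimal sketch of the idea (PyTorch assumed; the class name and layer sizes are just for illustration, not from the article):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)   # r x d_in
        self.B = nn.Parameter(torch.zeros(base.out_features, r))         # d_out x r, starts at zero
        self.scale = alpha / r

    def forward(self, x):
        # frozen base output plus the trainable low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 4096 = 65,536 trainable params vs ~16.8M in the full matrix
```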

Some quotes from the article I appreciated (emphasis theirs):

We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.

People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.

Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.

[...]

In many ways, this shouldn’t be a surprise to anyone.



in reply to @catball's post:

thank goodness tbh, I was hoping this was true but there was always a chance it wasn't. I think if Google doesn't think their sauce is special, then literally no one's is. There's 0% chance in my mind that there's some ridiculous advantage that Google wouldn't be aware of.

To be fair, Damore wasn’t hired to be a huge prick. This is someone good enough at LLMs to be paid to do it by one of the best talent-scout organizations in the business. I see your salt and I add a teaspoon, maybe.

Agree, @modulusshift!

I think the most compelling argument I've heard among folks in the field in favor of Google / OpenAI is that they have already built integrations between their language models and other services.

I feel like the edge Google has is mainly that they have a lot of cash to push a lot of people after this, plus the datacenters / hardware / infrastructure to support their pursuit

As research advances allow LMs to operate on fewer params and with less overhead, and as open corpora become more robust, Google's hardware and cash advantage will become less prominent in the market, and LLMs will become a more accessible commodity

My dream though is that everyone gets sick of LLM hype and realizes that putting a conversational agent in front of a service doesn't make the service any better (maybe even worse)

I guarantee that this is why the CEOs are in Congressional hearings talking about their fears of AI turning bad. They couldn't care less about "evil AI," because that's literally what a corporation is: a set of rules that diffuses responsibility when people feel exploited. They care about making AI too expensive for competitors to run, now through legal compliance costs, since hardware is no longer an obstacle...