ticky

im in ur web site

  • she/her

web dracat

made:
internet-ti.me, @Watch, Wayback Classic, etc.

avatars appearing:

in 2D by nox lucent
in 3D by Zcythe

"If it were me, I'd have [changed] her design to make [her species] more visually clear" - some internet rando

I post embeds of other people's things at @ticky-reposts


posts from @ticky tagged #Craig Newmark Philanthropies

also:

In late 2022 or early 2023, Craig Newmark deleted much of his profile from his "craig newmark philanthropies" site[1], leaving only a list of dot points of personal inspiration.

Later, in May 2023, he updated the profile[2] with the following somewhat outrageous claim:

He originally intended to study Large Language Models like ChatGPT in the early seventies, but decided he wanted gainful employment, and went into software development.

Let's do some back-of-the-napkin machine learning maths, shall we?

The best estimates I can find suggest ChatGPT requires about 8 Nvidia A100s to run[3], each capable of 312 teraFLOPS[4], so you are looking at about 2,496 teraFLOPS for real-time, "chat"-type responses.

Let's be generous and say they had access to a CDC STAR-100 supercomputer from the late 60s[5]. This machine is capable of 100 megaFLOPS.

That means you'd need about twenty-five million STAR-100s to achieve this on FLOPS alone. And that's not even getting into the memory requirements of a large language model - a clue is in the name.
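
If you want to check my napkin, here it is as a few lines of Python. The inputs are the rough figures cited above - the A100 count and per-card throughput are estimates, not gospel:

```python
# Napkin maths: how many CDC STAR-100s to match the rough
# FLOPS estimate for running ChatGPT?
a100_count = 8         # A100s reportedly needed to run ChatGPT [3]
a100_tflops = 312      # peak teraFLOPS per A100 [4]
star100_mflops = 100   # peak megaFLOPS of a CDC STAR-100

total_flops = a100_count * a100_tflops * 1e12  # teraFLOPS -> FLOPS
star100_flops = star100_mflops * 1e6           # megaFLOPS -> FLOPS

print(f"{total_flops / 1e12:,.0f} teraFLOPS total")     # 2,496
print(f"{total_flops / star100_flops:,.0f} STAR-100s")  # 24,960,000
```

24,960,000 machines, which I'm rounding to "about twenty-five million".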

But there's a bit of a problem there: they made three of the STAR-100. Just three. And all three went to U.S. government facilities.

All this is to say: bullshit, craig, what the hell are you talking about. Technology like large language models would not have been feasible or useful in the 70s.


  1. https://web.archive.org/web/20230202083112/https://craignewmarkphilanthropies.org/about-us/craig-newmark-bio/

  2. https://web.archive.org/web/20230501150333/https://craignewmarkphilanthropies.org/about-us/craig-newmark-bio/

  3. https://twitter.com/tomgoldsteincs/status/1600196988703690752

  4. https://www.nvidia.com/en-us/data-center/a100/#specifications

  5. the next big upgrade for supercomputing would be the Cray-1, which didn't arrive until 1976 - and I don't think that counts as "early seventies" anymore