By creating unpromptable questions, OpenAI begins the work of outlining unthinkable concepts, unmemorable desires, all in the name of empowering us and freeing us from overreliance. If prompting models is to be the new shape of the human cognitive process, then even if you are entirely optimistic about that hybridized potential, a mandated forgetting can still be enacted through what is made difficult to ask.
I am imagining a scenario in the near future when I will be working on writing something in some productivity suite or other, and as I type in the main document, my words will also appear in a smaller window to the side, wherein a large language model completes several more paragraphs of whatever I am trying to write, well before I have had the chance to conceive of them. In every moment in which I pause to gather my thoughts and consider what I am trying to say, the AI assistant will be thinking for me, showing me what it calculates I should be saying, and I'll have the option to fine-tune its settings, adjusting its output to whatever audience I am addressing or whatever mood I'm trying to evoke. If I am grasping for ideas, it will supply some. Maybe I will work deliberately to reject them, to come up with something different. Maybe I will use its output as a gauge of exactly what I must not say, in which case it is still dictating what I say to a degree. Or maybe I'll just import its language into my main document and tinker with it slightly, taking some kind of ownership over it, adapting my thinking to accommodate its ideas so that I can pretend to myself I would have eventually thought them too. I am wondering what I will have to pay to get that window, or worse, what I'll have to pay to make it disappear.
Just saying.
I'm waiting for them to make it so you can't turn those suggestions off.

