
Creating a shorter-timeline market, since the 2028 one seems to be very bullish.
I tried to get GPT-3 to draw some donuts. I really did. It kept giving me a shirtless man. Or a scary-looking spider.
I tried again with GPT-3.5, aka "ChatGPT". It was very bad at it.
Often it fails to make a hole in the donut, and I like telling the model "Donuts have holes in them." (Asking it to generate donuts with glaze, or a pretty or tasty donut, leads to many funny results. Also, why does it keep using Perl code blocks?)
sus
chill out
And now some results with GPT-4 are pretty underwhelming.
So, will there be a model in the GPT series within the next 2 years (by the end of 2024 or start of 2025) that consistently draws a donut using ASCII characters when prompted to do so?
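For reference, the target shape is simple to produce programmatically: any character whose distance from the center falls between an inner and an outer radius is part of the ring, and everything inside the inner radius is the hole. A minimal sketch (the function name, radii, and the x-axis scaling for terminal aspect ratio are my own choices):

```python
def ascii_donut(outer=8, inner=3, char="#"):
    """Return a donut (ring with a hole) drawn in ASCII characters."""
    lines = []
    for y in range(-outer, outer + 1):
        line = ""
        for x in range(-2 * outer, 2 * outer + 1):
            # Halve x because terminal cells are roughly twice as tall as wide,
            # so the ring looks circular rather than squashed.
            d = ((x / 2) ** 2 + y ** 2) ** 0.5
            line += char if inner <= d <= outer else " "
        lines.append(line.rstrip())
    return "\n".join(lines)

print(ascii_donut())
```

The middle row comes out as two runs of `#` separated by blank space, which is exactly the "hole" the model keeps failing to draw.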
@firstuserhere
GPT-4
"Think step by step to arrive at correct solution. draw me a the best donut you can in ASCII with outer and inner rings"
The second try was not as good though.

@MikhailDoroshenko Interesting. I'm curious whether it can do it consistently, even if I tweak the prompt a little, like asking for a "beautiful"-looking or "tasty" donut. I haven't played with GPT-4 myself yet, but to be sure: does it regenerate the donut if you ask it to try again? (I'd average over a few tries just to make sure it does it consistently.)
Will be interesting to see if GPT-4's visual understanding will transfer to ASCII art as well.
@Stefan Alas it does not, as far as I have been able to tell after an hour of trying to get it to draw ASCII cubes and donuts.
GPT-4 can draw the standard ASCII cube everyone has seen a zillion times, but can't follow orders to flip it left-to-right even though that just requires reversing each line stringwise and it could easily write the Python script to do that.
I gave up after trying to get it to draw a proper donut with a hole (though it could draw simple circles or discs). I wasn't able to reproduce Mikhail's success, but I believe it is possible as I saw attempts that came close (though with things like extra holes).
It is interesting how weak the connection is between your instructions, which it can verbally rephrase back to you, and its actual spatial abilities. It reminds me of trying to teach a physical skill (like throwing a ball, or playing a first-person shooter) to someone who has no aptitude for it at all.
I wonder if it'd do better if someone with API access provided an example image as part of the input.
It does seem like the kind of thing that ought to be more consistent by 2025 though, given how close it is now.
@ML I think the OP comment meant GPT-4 with vision, not the current no-vision version of GPT-4.
@ML Yeah, I've seen it give very good step-by-step instructions, do well on each instruction, and end up making a smiling guy with sunglasses at the end lol
@firstuserhere Interesting, thanks! I had been assuming that the model available via ChatGPT was the same weights as the model with image support, just with no image input connected to it. If it is a totally different model then I should not expect ChatGPT-4's current capabilities to provide particularly useful information.