This morning I was searching online for a certain image done on an Etch-A-Sketch and couldn't find it.
I even tried getting a bunch of AI engines to generate the image, and they all failed - miserably!
Then it struck me that this could be a neat programming challenge...
I think (in theory) that using QB64PE to draw a gray line on that silver-gray Etch-A-Sketch background, and having it look photoreal, shouldn't be too hard. We could google "Etch-A-Sketch", download a bunch of pictures, and zoom in real close to see what gradations of gray make up the individual lines.
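Just to show what I mean, here's a minimal QB64PE sketch of that first step. Both gray values are placeholder guesses of mine, not sampled from actual photos:

    ' Minimal proof-of-concept: a darker gray "stylus" line on a
    ' silver-gray background. Both RGB values are guesses and would be
    ' replaced by colors sampled from real Etch-A-Sketch photos.
    SCREEN _NEWIMAGE(640, 480, 32)
    CLS , _RGB32(170, 170, 165) ' placeholder silver-gray screen color
    LINE (100, 240)-(540, 240), _RGB32(105, 105, 100) ' placeholder line gray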
Then it becomes more of a problem of analyzing an image and vectorizing it, but only using movements that could be done with the two knobs (horizontal & vertical).
We could allow diagonal lines to emulate turning both knobs at the same time - maybe one knob is turned faster than the other, or, if the person has a really steady hand, both at the same rate.
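To make that concrete, here's how I picture the constraint. The KnobTick sub and the tick counts are just made up for illustration - each call moves the stylus at most one pixel per axis and draws every pixel it visits:

    ' Sketch of the two-knob constraint: one pixel per axis per "tick".
    DIM SHARED stylusX AS INTEGER, stylusY AS INTEGER

    SCREEN _NEWIMAGE(640, 480, 32)
    CLS , _RGB32(170, 170, 165)
    stylusX = 100: stylusY = 100
    PSET (stylusX, stylusY), _RGB32(105, 105, 100)

    FOR i = 1 TO 200: KnobTick 1, 0: NEXT ' horizontal knob only
    FOR i = 1 TO 100: KnobTick 1, 1: NEXT ' both knobs, same rate: 45-degree diagonal
    FOR i = 1 TO 50: KnobTick 1, 1: KnobTick 0, 1: NEXT ' vertical knob "faster": steeper diagonal

    SUB KnobTick (dx AS INTEGER, dy AS INTEGER)
        ' Advance the stylus at most one pixel on each axis, then draw.
        stylusX = stylusX + SGN(dx)
        stylusY = stylusY + SGN(dy)
        PSET (stylusX, stylusY), _RGB32(105, 105, 100)
    END SUB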
The actual analysis of the image to turn it into line art is way beyond me; maybe the program could use QB64PE's HTTP capability to call some free REST API that does the raster-to-vector conversion? Or maybe it could be done in QB64PE itself, or ported from existing logic written in some other language. Maybe Spriggsy or someone with a subscription to ChatGPT or similar could try having it generate that logic?
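On the REST idea, recent QB64PE versions can open an HTTP connection with _OPENCLIENT, so the plumbing might look something like the sketch below. To be clear, the vectorizing service URL is completely made up - I don't know of a real free one, and a real one would have its own parameters and auth:

    $UNSTABLE:HTTP ' HTTP support sits behind this flag in current QB64PE

    ' Hypothetical call to a raster-to-vector web service. The URL and
    ' the assumption that it returns SVG text are made up for illustration.
    DIM h AS LONG
    h = _OPENCLIENT("HTTP:https://api.example.com/vectorize?src=sketch.png")
    IF h = 0 THEN PRINT "Couldn't reach the service": END

    IF _STATUSCODE(h) = 200 THEN
        WHILE NOT EOF(h) ' read the response body in chunks
            _LIMIT 60
            GET #h, , chunk$
            svg$ = svg$ + chunk$
        WEND
        ' svg$ would now hold vector paths to translate into knob moves
    END IF
    CLOSE #h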
Anyway, I just thought it was an interesting idea that I'd share here with y'all!