09-19-2023, 12:42 AM
Why's it crash?
Simple answer: You're running out of memory to store it all in, and getting a seg fault. (Linux is fairly good at popping up a message that says, "Error -- segmentation fault at 123456", while Windows tends to just close and pretend it never ran such an uncouth program on its OS in its entire history!)
Complex breakdown of the problem:
Code:
struct img_struct {
    void *lock_offset;
    int64 lock_id;
    uint8 valid;   // 0,1 0=invalid
    uint8 text;    // if set, surface is a text surface
    uint8 console; // dummy surface to absorb unimplemented console functionality
    uint16 width, height;
    uint8 bytes_per_pixel;  // 1,2,4
    uint8 bits_per_pixel;   // 1,2,4,8,16(text),32
    uint32 mask;            // 1,3,0xF,0xFF,0xFFFF,0xFFFFFFFF
    uint16 compatible_mode; // 0,1,2,7,8,9,10,11,12,13,32,256
    uint32 color, background_color, draw_color;
    uint32 font;            // 8,14,16,?
    int16 top_row, bottom_row; // VIEW PRINT settings, unique (as in QB) to each "page"
    int16 cursor_x, cursor_y;  // unique (as in QB) to each "page"
    uint8 cursor_show, cursor_firstvalue, cursor_lastvalue;
    union {
        uint8 *offset;
        uint32 *offset32;
    };
    uint32 flags;
    uint32 *pal;
    int32 transparent_color; // -1 means no color is transparent
    uint8 alpha_disabled;
    uint8 holding_cursor;
    uint8 print_mode;
    // BEGIN apm ('active page migration')
    // everything between apm points is migrated during active page changes
    // note: apm data is only relevant to graphics modes
    uint8 apm_p1;
    int32 view_x1, view_y1, view_x2, view_y2;
    int32 view_offset_x, view_offset_y;
    float x, y;
    uint8 clipping_or_scaling;
    float scaling_x, scaling_y, scaling_offset_x, scaling_offset_y;
    float window_x1, window_y1, window_x2, window_y2;
    double draw_ta;
    double draw_scale;
    uint8 apm_p2;
    // END apm
};

// img_struct flags
#define IMG_FREEPAL 1 // free palette data before freeing image
#define IMG_SCREEN  2 // img is linked to other screen pages
#define IMG_FREEMEM 4 // if set, it means memory must be freed
The above is what an internal image type is stored and referenced as inside QB64. Anyone want to add up all those bits and bytes and tell us what the total is?
Sure, you might only be making a screen that's a single pixel -- just 4 bytes of actual image data -- but there's all the *extra* info which has to be stored for that screen. What's the font in use on it? What's its handle? How wide? How tall? What color palette does it use? What scale does it render at? Where does it appear in terms of X/Y screen coordinates? How many bytes does each pixel use in memory? (2 for a text screen, 1 for 256 colors, 4 for 32-bit screens.)
How many bytes of information is stored just to handle that image of ONE single pixel??
I dunno, but I'm guessing it's quite a few!!
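Actually, we can do a back-of-the-envelope tally right from the struct above. This is just a sketch assuming a 64-bit build (8-byte pointers); the real sizeof() will come out a bit higher once the compiler adds alignment padding:

Code:
#include <stdio.h>

int main(void) {
    // Field counts taken from the img_struct definition above.
    long bytes = 3 * 8   // pointers: lock_offset, offset/offset32 union, pal
               + 8       // int64: lock_id
               + 6 * 4   // uint32: mask, color, background_color, draw_color, font, flags
               + 7 * 4   // int32: transparent_color, view_x1..view_y2, view_offset_x/y
               + 10 * 4  // float: x, y, scaling_*, window_*
               + 2 * 8   // double: draw_ta, draw_scale
               + 7 * 2   // 16-bit: width, height, compatible_mode, rows, cursor x/y
               + 14 * 1; // uint8: valid, text, console, cursor bytes, apm markers, etc.
    printf("declared fields: %ld bytes (padding not included)\n", bytes);
    return 0;
}

That prints 168 bytes of declared fields -- and that's just the header, before any pixel data, palette, or font data those pointers lead to.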
And then we double that amount of memory used by making a _COPYIMAGE of that same image and tossing it into hardware memory.
So let's be conservative and say 1000 bytes of memory to hold each image structure. Plus 1000 bytes for each hardware image structure. Plus 4 bytes for each image data, plus 4 bytes for each hardware image data...
At 1028 pixels of width, that's 2008 * 1028 bytes -- over 2MB of memory required for each ROW of data. (The x in the formula.)
Once you start going by Y as well, that multiplies that 2MB by the row count for the total memory used. Say Y = 100; that's now an internal usage of 200MB of memory. And that's with just a very conservative estimate of 1000 bytes of overhead for each pixel. If you look at our image type closely, you'll notice that several of those points of data are just OFFSETS to actual blocks of memory stored elsewhere.
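If you want to poke at those numbers yourself, here's the same estimate as a quick sketch. The 1000-byte header figures are the guesses from above, not measured values:

Code:
#include <stdio.h>

int main(void) {
    // 1000-byte software image header + 1000-byte hardware image header
    // (conservative guesses) + 4 bytes of image data + 4 bytes of hardware
    // image data, for every one of the 1028 single-pixel images in a row.
    long long per_image = 1000 + 1000 + 4 + 4;   // 2008 bytes
    long long per_row   = per_image * 1028;      // one full pass of X
    long long at_y_100  = per_row * 100;         // 100 passes of Y
    printf("per image: %lld bytes\n", per_image);
    printf("per row:   %lld bytes (~%.1f MB)\n", per_row, per_row / 1e6);
    printf("at Y=100:  %lld bytes (~%.0f MB)\n", at_y_100, at_y_100 / 1e6);
    return 0;
}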
So we're storing large amounts of memory repetitively, with no real break for the OS to do its thing. Those aren't going to be written as continuous chunks of sequential memory; the OS is going to place them wherever it finds convenient open chunks of unused memory. So let's say Y crashes at around 50 on one run -- that's 1028 * 50 = 51400 chunks of memory allocated on the fly... Well, 51400 chunks of that one single image header, but also 51400 chunks for each of those offsets and 51400 chunks for each block of image data itself...
At what point does your OS run out of conveniently free chunks of memory??
Let's say I have 10 bytes of memory available for use.
0000000000 <-- My 10 bytes of free memory!
Now, let's say I write data in chunks of 3 bytes, allocated randomly and conveniently:
0011100000 <-- First write of memory.
0011101110 <-- Second write of memory.
0011101110 <-- Third time: we're not out of memory, but there's not a single free 3-byte block left! We have to optimize our internal memory structure first before we can store that single chunk!
1111110000 <-- Internal memory process to reorganize and reindex our memory to free up space!
1111110111 <-- And then we can finally write that 3rd block of memory!
See how the process is working here, with memory getting shuffled left and right and reorganized back and forth behind the scenes, without you ever having to deal with any of it personally??
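If it helps to see that shuffle as code, here's a toy first-fit allocator over that same 10-byte arena. This is purely an illustration (real allocators are far more sophisticated), but it shows exactly how a write can fail even though enough total bytes are free:

Code:
#include <stdio.h>
#include <string.h>

#define ARENA 10
static char arena[ARENA + 1];

// Try to place a chunk of `len` bytes at the first free gap; return the
// offset, or -1 if no contiguous gap is big enough -- even when enough
// total bytes are free.
static int place(int len) {
    for (int i = 0; i + len <= ARENA; i++) {
        int free_run = 1;
        for (int j = i; j < i + len; j++)
            if (arena[j] != '0') { free_run = 0; break; }
        if (free_run) { memset(arena + i, '1', len); return i; }
    }
    return -1;
}

int main(void) {
    memset(arena, '0', ARENA);
    arena[ARENA] = '\0';

    // Recreate the layout from the example: two 3-byte chunks already
    // written at offsets 2 and 6 -> 0011101110
    memset(arena + 2, '1', 3);
    memset(arena + 6, '1', 3);
    printf("%s <-- 4 bytes still free, but...\n", arena);

    if (place(3) < 0)
        printf("no contiguous 3-byte gap: compaction needed first!\n");
    return 0;
}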
Now, visualize that happening with the hundreds of thousands of chunks of memory that I just talked about above to register each image! How many writes is your OS trying to do concurrently? How many shuffles? How many chunks are being optimized and how many are being dumped on each pass of X? How many reshuffles of memory are required??
A million chunks of memory, all of various sizes, suddenly created, allocated, and shuffled into position *somewhere* in your OS's internal brain... All in the space of a few microseconds, with a million more chunks coming in the next microsecond, and then a million more, and a million more, and....
Eventually the OS just says, "PAH!! I give up! I'm dead!"
And that's when your program dies completely.
Now, WHY are we seeing "RANDOM" crashes and not one consistent point where an OS says, "I surrender!!"?
It all has to do with how many of those shuffles and optimizations the OS has to perform at once.
Let's go back and look at those 10 bytes of memory, with data written in 3-byte chunks:
0011101110 <-- This is 2 chunks of data written randomly (Steve-randomly, so I can illustrate the point). With only 2 writes, we already have to perform optimization before a third chunk can fit.
0001111110 <-- Now, let's say this is how those 2 chunks ended up in memory when placed randomly. We don't have to trigger that optimization shuffle to write the next chunk of data...
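With the toy allocator sketch from earlier, that difference is just the setup lines -- same free space, totally different outcome:

Code:
// Drop-in replacement for the setup lines in the earlier sketch:
memset(arena, '0', ARENA);
memset(arena + 3, '1', 3);  // chunk one
memset(arena + 6, '1', 3);  // chunk two -> 0001111110
printf("%s -> place(3) = %d\n", arena, place(3)); // fits at offset 0, no shuffle needed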
Sometimes things are going to shuffle and end up dying after 50 passes of Y. (Which is 1028 passes of X each, which is all that image header info and all those linked offsets, and a partridge in a pear tree besides...)
Other times, it might place itself so that it randomly doesn't run into issues until after 100 passes of Y.
It all depends on how tightly that data gets packed, how much spacing there is between it, how many calls are made to optimize it, how large the free chunks in between are... Yada yada yada!
IT'S COMPLICATED!! Get it?
Which brings us full circle to the simple explanation:
You ran out of memory.