Pointer in Basic
#21
(07-18-2023, 05:04 PM)SMcNeill Wrote:
(07-18-2023, 04:49 PM)SagaraS Wrote: The output shows you the memory addresses that are reserved for the variables.
Since they all follow each other in memory, you can view the addresses in a memory viewer.
This isn't guaranteed.  It's up to the OS to decide where your variables go.  If it's a clean run of a program, things might fall into a pattern of being one after the other, but that's not guaranteed.  Once you get to the point that you're starting to free memory and reuse memory and redimension memory and all that, the position of where variables and such all fall in memory gets shuffled into unpredictable order.  You really can't count on variable2 being 4 bytes after variable1 and 4 bytes before variable3, or any such thing as that.  If you want to know where a variable is stored in memory, use _OFFSET to get that value back for yourself, or else you might end up changing things you don't intend to.

I second this comment, especially since 32-bit brought in multitasking and threads, and 64-bit brought in multi-processing. A memory allocator that allocates blocks "consecutively" would not be desirable even if it were possible.

Quote:If it's a clean run of a program, things might fall into a pattern of being one after the other, but that's not guaranteed.

This has to be emphasized even more, because it applies only to the same run of the same executable file. Once one change, however minor, is made to the program that is compiled into that executable, things are thrown out of whack. This was easily shown by the old "DEBUG" program in MS-DOS while trying to step through a .COM file.
#22
(07-18-2023, 07:30 PM)SagaraS Wrote: I can spin this any way I want. With _BYTE, INTEGER, LONG, and the others, 8 bytes are reserved.
What you're seeing here is your OS's basic data ruleset at work.  You'll see this type of thing all the time in TYPE usage and with DECLARE LIBRARY. 

Let's take a step back and try and think about how a hard drive works for a moment.  You have such things as disk tracks, sectors, and clusters and all -- but what are they?? 


So basically, your hard drive's SECTOR setting is the smallest chunk of data that it can read or write to at a time.  If the sector size is 512 bytes, and you write a text file that is only "Hello World" and a total of 11 bytes, then that 11 byte file will still use a multiple of 512 bytes for storage.  (In this case, it'd use 512 bytes.  A 513 byte data file actually uses 1024 bytes of storage, as it *HAS* to align to the smallest sector size of 512 bytes.)

So why do we see such variances in disk sector sizes?  Some drives have 128 bytes for a sector.  Some have 4096.  Some have larger and smaller values.  WHY??  What's the difference?

Smaller sector sizes pack data tighter together, while larger sector sizes read faster.  <-- That's the overall truth of the matter.

If you have a drive with a 64 byte sector size, you can write 10 "Hello World" files and use 640 bytes of drive space -- each file is at the minimum of the sector size.  Now, compare that to a drive with a 4096 sector size, those exact same 10 "Hello World" files will use 40,960 bytes of drive space.  Small sector sizes pack data much more efficiently!!

On the other hand, say you have a file which is 640 bytes in size.  With the 64-byte sector size, that drive has to read 64-byte sectors, 10 different times, and move all that data into memory; whereas that 4096 sector size drive makes 1 simple pass from the drive and is done with it.  The larger sector size is much faster!!
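
The arithmetic behind those numbers is just "round the file size up to the next multiple of the sector size".  A quick sketch of that in QB64 (the numbers simply mirror the examples above):

DIM sectorSize AS LONG, fileSize AS LONG, spaceUsed AS LONG

' An 11-byte "Hello World" file on 512-byte sectors still occupies one full sector.
sectorSize = 512: fileSize = 11
spaceUsed = ((fileSize + sectorSize - 1) \ sectorSize) * sectorSize
PRINT spaceUsed ' 512

' One byte over a sector boundary costs a whole extra sector.
fileSize = 513
spaceUsed = ((fileSize + sectorSize - 1) \ sectorSize) * sectorSize
PRINT spaceUsed ' 1024

' Ten 11-byte files: 64-byte sectors use 640 bytes total, 4096-byte sectors use 40960.
PRINT 10 * (((11 + 64 - 1) \ 64) * 64), 10 * (((11 + 4096 - 1) \ 4096) * 4096)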


Now, with that basic concept in mind with hard drives, the way your OS handles memory isn't much different.   

Generally speaking, data for your programs is going to be aligned to the register size your app is built for.  For 32-bit apps, that's usually a 4-byte boundary.  For 64-bit apps, that's usually an 8-byte boundary.  It's why, when you write a DECLARE LIBRARY routine for a Microsoft library, you have to add padding for the OS.

TYPE foo
   a AS INTEGER
   p AS STRING * 6 'padding for 64-bit systems.  For a 32-bit system, this would only need 2 bytes of padding.
   b AS _INTEGER64
END TYPE
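
A quick way to check that layout for yourself (a small sketch -- LEN reports the declared size of a record, and array elements of a TYPE are stored back to back):

DIM records(1 TO 2) AS foo
PRINT LEN(records(1)) ' 16 = 2 (INTEGER) + 6 (padding) + 8 (_INTEGER64)
PRINT _OFFSET(records(2)) - _OFFSET(records(1)) ' 16 -- records sit back to back, so b keeps its alignment from one record to the next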

Your OS wants data structures laid out so that it can read and work with a full register of data in a single pass.  That's faster and more efficient than having to read a portion of a register, work with it, then write a portion back.

It's all about what's most efficient, in general cases, for the OS to read/write data.  (Note that things like #pragma pack and such alters such behavior.)  Generally speaking, the OS is going to write data in 4-bytes (sectors if you want) on a 32-bit App and in 8-byte sectors on a 64-bit App.

>> NOTICE I SAID APP AND NOT OS!!  64-bit OSes will still pack 32-bit programs into 4-byte sectors, rather than 8-byte sectors, so everything defaults for compatibility reasons.  <<

So what you're seeing is the clean positioning of data along the 8-byte boundary of your 64-bit apps -- and that's what you'll generally find on a fresh start of a new program.  As I mentioned however, the data isn't guaranteed to always remain in such a neatly organized order for you.  Let's think about the following program flow for a moment:

We start a program with 10 integers in use.  Those use the first 80 bytes of memory. 

11111111222222223333333344444444555555556666666677777777888888889999999900000000 <-- this is basically the memory map of those 10 integers each using 8 bytes of memory.  1 is the first integer.  2 is the second integer.  3 is the third integer, and so on...

We then start a routine which resizes that second integer to become an 8-element array.  It can't stay in the original 8 bytes of memory that it used -- it needs 16 bytes to hold that array now.

11111111XXXXXXXX33333333444444445555555566666666777777778888888899999999000000002222222222222222 <-- We now have a gap of 8 bytes of freed memory which isn't in use any longer.  Variable 1 comes first, then there's a gap, then there's variable 3, and variable 2 is now in memory after variable 10 (represented by 0).

And now we add a new variable, which is variable #11 into the program.

11111111AAAAAAAA33333333444444445555555566666666777777778888888899999999000000002222222222222222 <-- The As are representative of the 11th variable.  In this case, the first variable comes first in memory, followed by the 11th variable, with the 3rd through 10th next, and the 2nd variable using up the last bytes in memory.

The more a program runs, the more the memory tends to get shuffled over time.  Don't assume that it's always going to be a singular block of memory from variable 1 to variable N.  If you do, you'll eventually end up in a world of problems as you corrupt values you never intended to interact with.
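
And if you want to see where your variables actually land, ask the program instead of assuming adjacency -- something along these lines (just a sketch, the names don't matter):

DIM a AS LONG, b AS LONG, c AS LONG
DIM arr(1 TO 10) AS LONG

' Print the real addresses instead of assuming b sits right behind a.
PRINT _OFFSET(a), _OFFSET(b), _OFFSET(c)

' Array elements, by contrast, are packed one after the other, lowest index first.
PRINT _OFFSET(arr(2)) - _OFFSET(arr(1)) ' 4 -- the size of a LONG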
#23
(07-18-2023, 10:11 PM)SMcNeill Wrote: It's all about what's most efficient, in general cases, for the OS to read/write data.  (Note that things like #pragma pack and such alters such behavior.)  Generally speaking, the OS is going to write data in 4-bytes (sectors if you want) on a 32-bit App and in 8-byte sectors on a 64-bit App.

>> NOTICE I SAID APP AND NOT OS!!  64-bit OSes will still pack 32-bit programs into 4-byte sectors, rather than 8-byte sectors, so everything defaults for compatibility reasons.  <<
And why does QB64 reserve the same amount for my variables in a 32-bit app on a 32-bit OS? That is, always 8 bytes for each variable.
Shouldn't that be at least 4 bytes?

I don't quite get it.

QB64 would then have to reserve something different. And the gaps never really get filled. I've already declared umpteen variables and restarted the program, on a 64-bit as well as a 32-bit OS, with a 32-bit app as well as a 64-bit app of the same program.
And every time I observe the same thing in the memory viewer.

Do you have some example code that better illustrates that the memory behaves differently?

Because I can't reproduce it, no matter what I try.

My Visual Studio C++ app behaves the way you say. But somehow QB64 doesn't do what you're telling me it should.

Don't get me wrong, I know what you're saying and how the memory should work. But somehow QB64 doesn't want to stick to that ^^
#24
(07-19-2023, 10:27 AM)SagaraS Wrote: And why does QB64 reserve the same amount for my variables in a 32-bit app on a 32-bit OS? That is, always 8 bytes for each variable. Shouldn't that be at least 4 bytes?

Do you have some example code that better illustrates that the memory behaves differently? Because I can't reproduce it, no matter what I try.
You're right, from what I can tell.  O_o!  

Older versions used to align on 4-byte boundaries with 32-bit apps.  When I tested our latest version, it aligns on 8-byte boundaries -- most of the time!

PRINT _OFFSET(a), _OFFSET(b), _OFFSET(c)

Run the above, as is, without bothering to declare any variable type (it'll default to SINGLE), and see the spacing -- it's 32-bytes!

It's just another reason to keep in mind that it's all up to the OS for where/how it places your data in memory.  I wouldn't trust that the space between is always going to be 8-bytes for anything. (Especially if mem usage starts to increase and the OS starts to reshuffle and pack program memory.)

WHY the 32-bit apps now appear to align on 8-byte memory spaces is a mystery to me.  It may be that the g++ guys tweaked something moving up from v11 to v17 (or whatever the latest version is now).  Personally, I'd just always rely on _OFFSET or MEM.OFFSET to let the program tell me where my data is in memory.  If I was truly concerned about always packing things without any spaces, then I'd just declare a _MEMNEW block and micromanage placement manually.
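
For what it's worth, that last idea looks roughly like this -- a sketch of packing values back to back in a _MEMNEW block (the sizes and values are only for illustration):

DIM m AS _MEM
DIM x AS LONG, y AS LONG, z AS INTEGER
m = _MEMNEW(10) ' one 10-byte block that we lay out ourselves

' Pack two LONGs and an INTEGER back to back -- no padding, because we pick the offsets.
x = 12345: y = 67890: z = 777
_MEMPUT m, m.OFFSET + 0, x
_MEMPUT m, m.OFFSET + 4, y
_MEMPUT m, m.OFFSET + 8, z

DIM a AS LONG, b AS LONG, c AS INTEGER
_MEMGET m, m.OFFSET + 0, a
_MEMGET m, m.OFFSET + 4, b
_MEMGET m, m.OFFSET + 8, c
PRINT a, b, c ' 12345  67890  777

_MEMFREE m ' always free what you _MEMNEW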
#25
(07-19-2023, 01:42 PM)SMcNeill Wrote: It's just another reason to keep in mind that it's all up to the OS for where/how it places your data in memory.  I wouldn't trust that the space between is always going to be 8-bytes for anything. (Especially if mem usage starts to increase and the OS starts to reshuffle and pack program memory.)
I haven't been following too closely, but I just wanted to point out that the OS does _not_ determine where your data is placed in memory. That is entirely up to the application and the OS will never move it. Most of those decisions are decided at compile time by the compiler, but memory allocations are decided at runtime by various things (QB64 runtime, C runtime, etc.).

Additionally, you should note that (for reasons) QB64 allocates local variables at runtime; that's why things are a bit different compared to other languages. Currently QB64 has a `mem_static_malloc()` which is used to allocate local/stack variables, and it aligns all returned addresses to 8 bytes. Really I'd like to replace the whole thing, as that's not even the correct thing to do (`_FLOAT` needs 16-byte alignment), but that's why all variables appear to take up at least 8 bytes - we ask `mem_static_malloc()` to give us 1, 2, or 4 bytes, and it gives us the lowest multiple of 8 that's greater than or equal to what we asked for - in this case, 8.

None of that is documented because, as far as the programmer is concerned, it doesn't matter. Even if `mem_static_malloc()` reserves an 8-byte chunk of memory, the LONG value it's for is only going to use 4 of those bytes, as it's a 4-byte integer - the size of the variable does not change, and you'd never know unless you were manually comparing the addresses of variables. It wastes a minor amount of memory, but that's it. Additionally, this only applies to locally declared variables. Arrays are allocated separately and won't have that 'extra' space in between each array entry.
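
Both halves of that are easy to see from the BASIC side (a small sketch):

DIM a AS LONG, b AS LONG ' local scalars: each allocation request is rounded up to a multiple of 8
DIM arr(1 TO 4) AS LONG  ' array: entries are packed with nothing in between

PRINT LEN(a), _OFFSET(a), _OFFSET(b) ' LEN is still 4; the two addresses are typically 8 apart here
PRINT LEN(arr(1)), _OFFSET(arr(2)) - _OFFSET(arr(1)) ' 4 and 4 -- no 'extra' space between array entries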



