Posts: 520
Threads: 65
Joined: May 2022
Reputation:
86
Should I try GitHub? Alright, I'll give it a shot. I've never actively worked with it before, so first I'll have to figure out how it works. I would definitely like to contribute a few things to QB64PE. I'd use my own solutions, but of course I would submit them to the GitHub community for approval first.
However, I’d need to know what to watch out for, what the strict requirements are, and what criteria must be met for anything to be merged. I don't want to go against the rules.
I'm happy to give this collaboration a try.
@Jack – I have provided a detailed response.
Posts: 520
Threads: 65
Joined: May 2022
Reputation:
86
I think I’ve finally lost my mind, but I’m planning to attempt to add support for dynamic nested arrays to QB64PE. I’ve recently managed to get support for static nested arrays working, which has encouraged me to take the next step into the dynamic territory.
If we are to implement dynamic nested arrays - a task I am just beginning (first, I need to clarify the intended behavior) - the syntax for ReDim should look like this:
Imagine a type named "test" containing an integer X, a long array A, and a string array B. If we ReDim an array C(100) of this type, we should then be able to reach further in and call something like: Redim C(0).B(20, 50) as String.
In this case, we would be resizing the nested array B via ReDim. Does this approach seem correct to you? The same logic would apply to _Preserve. For example, Redim _Preserve C(0).B(1, 0) as String would increase the first dimension of the nested array B by one element.
Regarding Erase, I propose extending its functionality so we can clear a specific nested array, such as Erase C(0).B. To keep the implementation consistent with C++ logic (where a nested dynamic array acts similarly to a pointer or a vector member within a struct), I suggest the following:
First, the current implementation of Erase must remain fully functional and work on top-level arrays exactly as it does now. Second, Erase must become context-aware. If I call Erase C (the parent array), the parser must ensure a recursive cleanup of all nested dynamic arrays within every element of C to prevent memory leaks. Third, inside the Type definition, the nested array should be represented by a descriptor. This allows us to use ReDim and _Preserve on a per-element basis, mirroring how one would resize a member vector in C++.
Furthermore, I assume we should also support defining bounds within the Type itself, for example:
Type Test
A(1 to 30) as Long, B(minValue to maxValue) as Long
End Type
I guess so, anyway!
I would like to hear your suggestions and opinions before I start digging deep into the Qb64Pe.bas parser and beyond. As for GET and PUT with these dynamic nested arrays, direct support might be "impossible" - much like how _Mem "doesn't work" with variable-length strings... So, support is technically possible, just not directly. But that is a task for the very end.
These suggestions seem reasonable to me, but I want to hear your thoughts first.
Posts: 10
Threads: 0
Joined: Jun 2025
Reputation:
1
10 hours ago
(This post was last modified: 10 hours ago by CookieOscar.)
My 0.000002 cents for what it's worth:
(Yesterday, 03:54 PM)Petr Wrote: ...
Redim C(0).B(20, 50) as String
...
I assume this only redims B() in the 0-indexed element of C(), and not any other B() array from other existing C() elements?
Cool!
So, what about:
Redim C().B(20, 50) as String
or even:
Redim C.B(20, 50) as String
Could that redim ALL B() arrays of ALL existing elements of C()?
Or is that taking it too far?
PS: I think I've never had the need for an array inside an array/UDT inside a UDT like that before, though. But very cool if all this becomes possible (as long as there is no additional overhead in the underlying C++ code when not used -- especially speed-wise)! The most complex I ever needed was a dynamic array (with a fixed number of dimensions) of a built-in type inside a UDT.
PPS: Make sure you touch grass from time to time. Losing your mind is no bueno
Who remembers QB30, GWBASIC, C64, ZX80?
Posts: 352
Threads: 45
Joined: Jun 2024
Reputation:
32
So @Petr, for me.... (and I know I'm a very specific case!)
I'd dim the array for a model, then load it... during loading, its vertex, normal, and UV coordinates would be derived from the file, and then, based on the amount of each, we'd redim the arrays of said UDT and load the data... but I hope you're planning to allow for arrays of UDTs inside UDTs, right? This way a given face or edge or vertex can hold its required info (normals, connections, UV, etc.) and only require dimensioning once...
I used _mem for this and a counter: in my load sub I'd get the size from the file, create an array, load it, then dimension the mem block and copy the data to it; the rendering sub would use the counter to know how big an array to make (often based on verts per frame), get the data from the mem block, and then render it... i.e. constantly making and destroying arrays every loop for every model.... I was amazed at how fast it works, though! (Galleon did a good job here!) And it was my go-to method for simulating arrays in UDTs. To ME, fixed-size arrays are pointless... (though I appreciate the dev work and look forward to seeing REDIM as a thing!)
John
Posts: 190
Threads: 14
Joined: May 2024
Reputation:
20
@Petr
Tremendous work.
It should be included in QB64 Phoenix 4.5.
(Posting now. I tried to get into this site around 1500 hours, 12 March 2026, but kept getting a 403 error. Not sure why.)
hopeless addict of dying in the first few levels of two particular console viewport "roguelike" games