03-27-2024, 06:41 PM
@Dimster You'd have to check yours by timing the whole process -- the file access once for counting the lines and a second time for dimming and loading the array... and that's going to vary a LOT based on where the file is located and how you're accessing it.
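If you do want to put a number on it, wrapping each version in TIMER is about as simple as it gets -- just a rough sketch here, with "data.txt" standing in for whatever file you're actually loading:

Code:
DIM t AS DOUBLE
t = TIMER
' ... open "data.txt", count the lines, REDIM the array, load the data ...
PRINT USING "That took ###.### seconds"; TIMER - t

Run it once with each method (on the SAME drive!) and the difference shows up right in the printout.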
Load the file once into memory, parse it from there to count it, and then load it? That's not going to be so bad speed-wise, but it's going to use up the most memory possible, as you'll have 2 copies of the same data in memory until the original, unparsed data is freed.
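Just to show what I mean by that one, here's a rough sketch -- assuming a plain text file with one item per line and a line feed at the end of every line; "data.txt" is only an example name:

Code:
DIM count AS LONG, p AS LONG
OPEN "data.txt" FOR BINARY AS #1
whole$ = SPACE$(LOF(1))              ' one buffer the exact size of the file
GET #1, 1, whole$                    ' single read -- the whole file at once
CLOSE #1

p = INSTR(whole$, CHR$(10))          ' count the line feeds to size the array
DO WHILE p > 0
    count = count + 1
    p = INSTR(p + 1, whole$, CHR$(10))
LOOP

REDIM items(1 TO count) AS STRING
' ... parse whole$ into items() here, THEN whole$ = "" to free the second copy ...

Until that last step, the raw file AND the parsed array are both sitting in memory, which is where the double-memory hit comes from.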
Access the file from a normal drive, load it line by line for an item count, dim the array that size, and then load the data? Going to be slower, with your speeds varying according to whatever your drive speeds are. RAM drive? Speedy! SSD drive? A little slower. A 9800RPM SATA quad-drive? Fast as heck... well, maybe... how's that SATA configured? Speed, size, or security? An old 1980 1600RPM drive? Slow...
Access that file via a network drive, load it line by line for an item count, dim the array that size, and then load the data? Take a nap if that file is any size at all!! Network transfer rates and data validity checks and everything else have to be performed TWICE -- line by line -- as it transfers that file! There are posts on here about saving .BAS files across network drives -- and those taking 15 minutes or so just to save line by line!! The method you suggest is definitely NOT one I'd even want to consider for network use.
So, in all honesty, it'd be hard to get a true reading of just how much of a difference it would make overall, as you can only really test with your own system's data setup. To give an idea, though, of which method has to be faster, just look at what you're doing:
Method 1: Open file. Read file line by line to get number of items in the file. Close file. Open file. Dim array to that size. Read file line by line to get those items into the array. Close file.
vs.
Method 2: Dim oversized array. Open file. Read file line by line directly into array. Close file. Resize array to proper size.
One has to read the whole data file twice. The other only reads it once. Now, the difference in performance here is going to be based mainly on how fast your system transfers that data -- the physical access speed is going to matter more than anything going on with the memory manipulation in this case.
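For anybody who wants to see the two side by side, here's a bare-bones sketch of each -- the file name, array names, and the oversize cap are all just placeholders, and the trim at the end uses QB64's REDIM _PRESERVE so the loaded data isn't thrown away:

Code:
' Method 1: read the file twice -- once to count, once to load
DIM count AS LONG, i AS LONG
OPEN "data.txt" FOR INPUT AS #1
DO UNTIL EOF(1)
    LINE INPUT #1, junk$                ' first pass: just counting lines
    count = count + 1
LOOP
CLOSE #1
REDIM items(1 TO count) AS STRING
OPEN "data.txt" FOR INPUT AS #1
FOR i = 1 TO count
    LINE INPUT #1, items(i)             ' second pass: actually loading them
NEXT
CLOSE #1

' Method 2: oversize the array, read the file once, then trim it
CONST MAXITEMS = 100000                 ' whatever "big enough" means for your data
REDIM items2(1 TO MAXITEMS) AS STRING
DIM count2 AS LONG
OPEN "data.txt" FOR INPUT AS #2
DO UNTIL EOF(2)
    count2 = count2 + 1
    LINE INPUT #2, items2(count2)       ' single pass: straight into the array
LOOP
CLOSE #2
REDIM _PRESERVE items2(1 TO count2) AS STRING   ' shrink to fit, keep the data

Same data either way -- the only real difference is that Method 2 never has to go back to the drive for a second pass.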