Posts: 60
Threads: 9
Joined: Jul 2024
Reputation: 13
05-04-2025, 03:03 AM
(This post was last modified: 05-04-2025, 03:04 AM by aadityap0901.)
Is there a way in QB64 to improve code performance?
I know we can transpile the code to C++ header files and then include them as libraries (which boosts performance).
But is there any way to boost performance in pure QB64?
$Checking:Off only works when we have ZERO BUGS.
Posts: 3,001
Threads: 356
Joined: Apr 2022
Reputation: 279
Many little things can improve speed and performance. It all depends on what you're doing, when, where, and how complicated you want things to become.
Some general rules of thumb:
DO...LOOPS are faster than FOR...NEXT loops.
Working with ASCII values is faster than working with STRINGS. (IF ASC(text$, 1) = 65 is faster than IF MID$(text$, 1, 1) = "A", for example.)
String addition is slooow. If you know you're going to add a bunch of strings together, add up the length, create one string that size, and then use MID$ on that string rather than concatenating over and over. (See the first sketch after this list.)
If writing to network files, write all your data to a local drive first and copy it over whole, instead of passing it one line at a time and waiting for it to pass network security and protocols.
If you need to read a file and can read it with LINE INPUT, open that file FOR BINARY instead of FOR INPUT. Even better, use _READFILE$ and parse it yourself. (See the second sketch after this list.)
Skip use of POINT and PSET commands, using _MEM instead.
Use a default variable type that matches your system. _INTEGER64 variables are faster to process on 64-bit machines than LONG, INTEGER, or _BYTE variables.
Never use _BIT. Forget it exists unless you're using it for a large bit array. Even then, consider it twice. Then consider it again. And then use _BYTE instead.
Set the flags in the compiler options to optimize your EXE. They make a huge difference in many cases.
Simplify your math and loops as much as possible, moving as much outside the loop as possible. This includes logic decisions such as IF or SELECT CASE. Sometimes you'll write more code, but the difference in performance can be truly worth it.
Never add a progress bar to a display. If you do anyway, don't update it every 0.00001 percent. It takes time to print or draw to the screen, and sometimes you'll spend more time making and updating a progress bar in a loop than you'd spend on the loop itself. Update that bar as infrequently as possible to reduce any hit to performance.
Don't REDIM arrays by a count of 1. REDIM them much larger than you expect them to be, and then REDIM _PRESERVE them back down to the proper size afterwards. Each REDIM is a hit to performance. OVERDO it once, be done with it, and then resize it down. Don't grow it by one element over and over and over and over.... (See the third sketch after this list.)
If you're using input for movement (such as in a game), you may want to use _KEYDOWN instead of _KEYHIT or INKEY$. _KEYHIT and INKEY$ are limited to the key-repeat speed of your machine, while _KEYDOWN can generate a result with each pass of the loop. This can truly affect performance and responsiveness. (See the fourth sketch after this list.)
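To make the string-addition tip concrete, here's a minimal sketch (the piece and the count are placeholders, just for illustration):
Code: (Select All)
' Slow: result$ = result$ + piece$ reallocates the whole string every pass.
' Faster: allocate the full size once, then fill it in place with the MID$ statement.
piece$ = "ABCDE"
count = 10000
result$ = Space$(Len(piece$) * count) ' one allocation up front
p = 1
For i = 1 To count
    Mid$(result$, p) = piece$ ' overwrite in place, no reallocation
    p = p + Len(piece$)
Next
Print Len(result$)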
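And a minimal sketch of reading with _READFILE$ and parsing the lines yourself (the filename is a placeholder, and this assumes LF or CRLF line endings):
Code: (Select All)
content$ = _ReadFile$("data.txt") ' the whole file in one gulp
p = 1
Do While p <= Len(content$)
    q = InStr(p, content$, Chr$(10)) ' find the next line feed
    If q = 0 Then q = Len(content$) + 1 ' last line may have no terminator
    oneLine$ = Mid$(content$, p, q - p)
    If Right$(oneLine$, 1) = Chr$(13) Then oneLine$ = Left$(oneLine$, Len(oneLine$) - 1) ' strip CR
    ' ... process oneLine$ here ...
    p = q + 1
Loop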
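Here's a minimal sketch of the REDIM advice as well (the starting size, doubling factor, and stop condition are assumptions; tune them to your data):
Code: (Select All)
ReDim arr(10000) As Long ' overallocate once, up front
count = 0
Do
    value& = Int(Rnd * 100) ' placeholder data for the sketch
    count = count + 1
    ' Grow rarely, by doubling, never one element at a time
    If count > UBound(arr) Then ReDim _Preserve arr(UBound(arr) * 2) As Long
    arr(count) = value&
Loop Until count = 50000 ' placeholder stop condition
ReDim _Preserve arr(count) As Long ' trim to the real size once, at the end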
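And a minimal sketch of _KEYDOWN-based movement (the key codes are the standard QB64 values for the arrow keys and Esc):
Code: (Select All)
Do
    _Limit 60
    ' _KEYDOWN reports the key's current state on every pass, with no key-repeat delay
    If _KeyDown(19200) Then x = x - 1 ' left arrow
    If _KeyDown(19712) Then x = x + 1 ' right arrow
    If _KeyDown(18432) Then y = y - 1 ' up arrow
    If _KeyDown(20480) Then y = y + 1 ' down arrow
    ' ... draw at (x, y) ...
Loop Until _KeyDown(27) ' Esc quits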
Umm.... Lots of other things I'm sure, but that's what I can pop off, off the top of my head.
Posts: 11
Threads: 1
Joined: Apr 2022
Reputation:
6
(05-04-2025, 03:35 AM)SMcNeill Wrote: Many little things can improve speed and performance. It all depends on what you're doing, when, where, and how complicated you want things to become.
...
Umm.... Lots of other things I'm sure, but that's what I can pop off, off the top of my head.
Gotta say, this is really a great list. I can see it being the start of a style guide or something.
Posts: 175
Threads: 15
Joined: Apr 2022
Reputation: 25
I worry a lot about speed, writing batch processes and trying to bring runtimes back from hours to minutes, or minutes to seconds.
In my experience, the 'rules' SMcNeill has given are the most important factors in improving speed.
Better string handling and replacing Mid$() with Asc() sometimes improves performance 100x or more.
Most important, however, is to really take your time thinking about code structure: smart ordering of steps, not doing more than needed for the requirement, and keeping work in big loops to a minimum.
Saving a single condition/calculation/assignment in a loop running a billion times can make a noticeable difference.
As a last step, when you are fully confident about your code, use $Checking:Off for huge loops, e.g.:
Code: (Select All)
Dim As _Integer64 x, y, z, xmax, ymax, zmax, a, b, c
' some code that fills s$ and sets xmax, ymax, zmax (all within Len(s$))
$Checking:Off
x = 1
Do While x < xmax
    a = Asc(s$, x)
    'everything here that's only required for each new x
    y = 1
    Do While y < ymax
        b = Asc(s$, y)
        'everything here that's only required for each new x & y
        If a <> b Then
            z = 1
            Do While z < zmax
                c = Asc(s$, z)
                'everything here that's always required
                z = z + 1
            Loop
        End If
        y = y + 1
    Loop
    x = x + 1
Loop
$Checking:On
45y and 2M lines of MBASIC>BASICA>QBASIC>QBX>QB64 experience
Posts: 1,035
Threads: 140
Joined: Apr 2022
Reputation: 23
(05-04-2025, 03:35 AM)SMcNeill Wrote: Some general rules of thumb:
DO...LOOPS are faster than FOR...NEXT loops.
Working with ASCII values is faster than working with STRINGS. (IF ASC(text$, 1) = 65 is faster than IF MID$(text$, 1, 1) = "A", for example.)
...
Umm.... Lots of other things I'm sure, but that's what I can pop off, off the top of my head. 
Wow, a lot of these I hadn't seen - great info, thanks for sharing, Steve!
Posts: 3,001
Threads: 356
Joined: Apr 2022
Reputation: 279
(05-04-2025, 01:09 PM)mdijkens Wrote: As a last step, when you are fully confident about your code, use $Checking:Off for huge loops
One important thing to note here -- I don't recommend this unless you're careful and certain of what you're doing. The issue isn't even writing error free code; the issue is how QB64 handles its internal events. You could have something simple like this that works 100% fine:
Code: (Select All)
$Checking:Off
Do
    _Limit 30
    i = _LoadImage("blah blah.jpg", 32)
    t = _CopyImage(i)
    _FreeImage i
    _FreeImage t
Loop Until _KeyHit
Now, give that a run and see what happens. Click the X in the top right corner to close it. Hit any key to end it. Try to stop it as you wish...
...and then go into your task manager and terminate it manually because it's stuck.
That's 100% error free code, but it relies on an external file. A file that isn't included in the download. It can't report that to you, and it can't get past the glitch in the code, so it just... locks up in an endless loop of trying to report something it can't.
$CHECKING:OFF can help speed up your programs as it eliminates a lot of the QB64 internal checks, but it can also lead to serious issues. Lose a network connection? Dismount a drive? File not found? User inputs a 0 which would generate a division-by-0 error? Any and all errors that you might normally catch and be safe against can now potentially lock you up and utterly break your program.
For many things, the slight increase in speed isn't really worth it.
Where IS it worth it??
Around code-tested _MEM blocks. Around various arrays. Both of those have been altered to maximize performance once checking is off. (Ask @RhoSigma for more details about arrays and checking off, as he was the one who fixed those with us and I don't remember all the details for them off the top of my head now.)
Any other place where I'd use $Checking:Off? Small, tight loops that are going to do one thing, do it repeatedly, and where I'm convinced that I can't generate an error. (Around a sort routine or a shuffle routine, for example, as once written those tend to be fairly error-proof; see the sketch below.)
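For example, here's a minimal sketch of that idea: a Fisher-Yates shuffle whose indexes can never leave range, so the skipped checks cost you nothing (the deck is just a placeholder):
Code: (Select All)
Dim deck(1 To 52) As Long
For i = 1 To 52: deck(i) = i: Next
Randomize Timer

$Checking:Off
' Tight, self-contained loop: i runs 52 down to 2 and j stays within 1..i
For i = 52 To 2 Step -1
    j = Int(Rnd * i) + 1
    Swap deck(i), deck(j)
Next
$Checking:On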
Otherwise, I'd be cautious about overuse of $Checking:Off, particularly around large blocks of code. Those safeguards are there for a reason. Is it really worth a tenth of a second in processing times to skip them?
Let's compare what we're trimming out with that $Checking:Off above:
Code: (Select All)
S_1:;
do{
    if(qbevent){evnt(1);if(r)goto S_1;}
    do{
        sub__limit( 30 );
        if(!qbevent)break;evnt(2);}while(r);
    do{
        *__SINGLE_I=func__loadimage(qbs_new_txt_len("blah blah.jpg",13), 32 ,NULL,0|1);
        qbs_cleanup(qbs_tmp_base,0);
        if(!qbevent)break;evnt(3);}while(r);
    do{
        *__SINGLE_T=func__copyimage(qbr(*__SINGLE_I),NULL,0);
        if(!qbevent)break;evnt(4);}while(r);
    do{
        sub__freeimage(qbr(*__SINGLE_I),1);
        if(!qbevent)break;evnt(5);}while(r);
    do{
        sub__freeimage(qbr(*__SINGLE_T),1);
        if(!qbevent)break;evnt(6);}while(r);
    S_7:;
    dl_continue_1:;
}while((!(func__keyhit()))&&(!is_error_pending()));
dl_exit_1:;
if(qbevent){evnt(7);if(r)goto S_7;}
sub_end();
return;
And with $Checking:OFF?
Code: (Select All)
do{
    sub__limit( 30 );
    *__SINGLE_I=func__loadimage(qbs_new_txt_len("blah blah.jpg",13), 32 ,NULL,0|1);
    qbs_cleanup(qbs_tmp_base,0);
    *__SINGLE_T=func__copyimage(qbr(*__SINGLE_I),NULL,0);
    sub__freeimage(qbr(*__SINGLE_I),1);
    sub__freeimage(qbr(*__SINGLE_T),1);
    dl_continue_1:;
}while((!(func__keyhit()))&&(!is_error_pending()));
dl_exit_1:;
sub_end();
return;
}
That looks like an impressive amount of code all cleaned up, but what are those lines that we stripped out? Basically they're all this:
do{
    sub__limit( 30 );
    if(!qbevent)break;evnt(2);}while(r);
And that breaks down to basically becoming:
DO
    Try to implement the _LIMIT 30 command
LOOP until there's no internal error at this stage.
So if there's an error, we report it. Otherwise it's basically a simple IF check that we bypass quickly and quietly.
In total, we skipped all of 7 simple IF checks... How long do they take to process in a cycle? Can you even measure the hit to performance? The speed you're saving here is negligible, but the trouble you're opening yourself up to could be rather serious.
Save $Checking:Off for the places where it can make a true difference in performance without having to worry about something unexpected destroying your program. Small, tight blocks with _MEM and arrays are the places to remember it for. Everywhere else... just be very cautious, and think twice about whether it's worth the inherent risk of trouble in your code.
Posts: 124
Threads: 13
Joined: Apr 2022
Reputation: 59
Quote: (Ask @RhoSigma for more details about arrays and checking off, as he was the one who fixed those with us and I don't remember all the details for them off the top of my head now.)
If $CHECKING:OFF is in effect, then no "subscript out of range" checks are performed anymore. This may not improve much on a simple one-dimensional array, but it becomes more important for multi-dimensional arrays, as the array indexes are checked on each and every array access, for every single dimension; i.e., a simple line like array(x, y, z) = array(x, y, z) + 1 will already perform six of those index checks. So especially if many array accesses happen within a loop, it might be useful to turn off checking. But be warned: if any given array index runs out of range, accessing innocent memory, then this will almost immediately cause a seg fault crash.
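A minimal sketch of the kind of loop where this adds up (the sizes are purely illustrative):
Code: (Select All)
Dim a(1 To 100, 1 To 100, 1 To 100) As Long

$Checking:Off
' Each a(x, y, z) read and write normally costs 3 index checks apiece,
' so 6 checks per iteration, roughly 6 million across this million-pass loop.
For x = 1 To 100
    For y = 1 To 100
        For z = 1 To 100
            a(x, y, z) = a(x, y, z) + 1
        Next
    Next
Next
$Checking:On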
Posts: 424
Threads: 41
Joined: Jul 2022
Reputation: 41
Quote:This may not improve much on a simple one-dimensional array, but it becomes more important for multi-dimensional arrays, as the array indexes are checked on each and every array access, for every single dimension; i.e., a simple line like array(x, y, z) = array(x, y, z) + 1 will already perform six of those index checks.
So we can argue for a new rule of speed:
Prefer one-dimensional arrays over multidimensional arrays as much as possible; how far you can take this depends on how you have designed the data structures in your code.
Moreover, we can argue that, generally, while we are using one-dimensional arrays, $Checking:Off may make little difference in speed.
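For instance, a minimal sketch of that one-dimensional approach (the names and sizes are placeholders): compute the flat index yourself instead of letting a three-dimensional lookup run three sets of checks.
Code: (Select All)
Const XMAX = 100, YMAX = 100, ZMAX = 100
Dim a(XMAX * YMAX * ZMAX - 1) As Long

' One-dimensional stand-in for a(x, y, z), zero-based in every dimension
x = 5: y = 10: z = 20
i& = ((x * YMAX) + y) * ZMAX + z
a(i&) = a(i&) + 1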
Just putting the focus (for a dummy like me) on the info coming out of these explanations.
Thanks