Coding Efficiency
#1
To go kinda in tandem with Pete's post about coding styles, I wanted to take a second to get everyone's thoughts on Coding Efficiency.  What do you guys usually shoot for -- the most efficient code for you, or the most efficient code for your programs to run?

Everyone knows bubble sorts are crap.  They're slow.  They're inefficient as heck.   And, if all we're doing is sorting a small set of items, such as the order of a shopping list or a deck of cards, they're all we need!  Why bother with the time and effort to come up with something faster, when your biggest use case is sorting 100 things in a slow 0.2 seconds?  My personal time as a programmer is worth more than the 0.1 second you'd save as a user of my "Shopping App Pro" program.
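
For reference, here's the classic bubble sort in QB64 flavor -- just a minimal sketch sorting 100 random numbers, which is about the scale where it's perfectly fine:

Code: (Select All)
' Classic bubble sort -- O(n^2) comparisons, but fine for small lists like this.
DIM a(1 TO 100) AS LONG, i AS LONG, j AS LONG, t AS LONG
FOR i = 1 TO 100: a(i) = INT(RND * 1000): NEXT
FOR i = 1 TO 99
    FOR j = 1 TO 100 - i
        IF a(j) > a(j + 1) THEN t = a(j): a(j) = a(j + 1): a(j + 1) = t
    NEXT
NEXT
FOR i = 1 TO 10: PRINT a(i);: NEXT ' show the first few sorted values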

Most programmers shoot for the adage of "Less is more".  bplus is terrible about this, with his constant need to count lines of code and use them as a marker for success.  (Not picking on bplus at all -- just using him as an example we see here on the forums, because he posts and shares these short programs a ton.)

The thing is, a lot of times, "MORE IS LESS" when it comes to coding.   For example, let's say I have a variable x that can do 100 different things.  Usually it's written in code as something like:

SELECT CASE x
   CASE 1
   CASE 2
   CASE 3
   CASE 4
   CASE 5
   ... so on.
END SELECT

Simple for the programmer to write, understand, and work with.   But wouldn't it be a lot more efficient with:

SELECT CASE x
    CASE 1 TO 50
        SELECT CASE x
            CASE 1
            CASE 2
            ...on
        END SELECT
    CASE 51 TO 100
...

We've now doubled the code for decision making, but halved the total possible number of IF checks required to process the code.  If we break it down further, binary-tree style, we can reduce it to a long arse batch of code that has to make a maximum of about 7 decisions (2^7 = 128 covers all 100 cases), instead of 100!

It's a lot more code on the programmer, but a lot more efficient for the program itself.
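
Here's a tiny runnable sketch of that split, with hypothetical placeholder handlers:

Code: (Select All)
' The split-range idea: one range check rules out half the cases before
' the inner SELECT CASE ever runs. (Handlers here are just placeholders.)
DIM x AS LONG
x = 42
SELECT CASE x
    CASE 1 TO 50
        SELECT CASE x
            CASE 42: PRINT "handling 42"
            ' ... CASEs 1 through 50
        END SELECT
    CASE 51 TO 100
        SELECT CASE x
            CASE 99: PRINT "handling 99"
            ' ... CASEs 51 through 100
        END SELECT
END SELECT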



The question is, where do most of you guys try and draw the line?  How hard do you work for code efficiency versus programmer efficiency?

Curious My-Minds Want To Know!!
#2
I tend to write programs for speed using the tips and tricks learned over the years with QB64 like using DO...LOOP instead of FOR...NEXT and the like.
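
For anyone who wants to measure that on their own machine, here's a quick-and-dirty timing sketch -- nothing scientific, and results vary by machine:

Code: (Select All)
' Rough timing comparison of FOR...NEXT vs DO...LOOP over 10 million iterations.
DIM i AS LONG, total AS DOUBLE, t AS DOUBLE
t = TIMER(0.001)
FOR i = 1 TO 10000000
    total = total + 1
NEXT
PRINT "FOR...NEXT:"; TIMER(0.001) - t
i = 0: total = 0
t = TIMER(0.001)
DO
    i = i + 1
    total = total + 1
LOOP UNTIL i = 10000000
PRINT "DO...LOOP: "; TIMER(0.001) - t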

If those tips/tricks happen to work well with efficiencies like you point out then heck ya, I'm all for them. Your SELECT CASE use above is very ingenious and I'll start using that. I don't seem to be able to think outside of the box as far as you do when it comes to clever, but still understandable, optimizations.

One aspect of QB64 I'm trying to use more is the _MEM family of statements for extremely efficient code when working with large data sets, such as images. RhoSigma's image processing library was a huge help in my understanding of how to do this.
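
For anyone who hasn't dipped a toe in yet, this is the general shape of it -- a minimal sketch (not from RhoSigma's library, just the basic pattern) that inverts every pixel of a 32-bit image by walking its memory block directly:

Code: (Select All)
' Minimal _MEM sketch: invert the colors of a 32-bit image buffer in place.
DIM img AS LONG, m AS _MEM, o AS _OFFSET, c AS _UNSIGNED LONG
img = _NEWIMAGE(320, 240, 32)
m = _MEMIMAGE(img) ' map the image's pixel data as a raw memory block
DO WHILE o < m.SIZE
    _MEMGET m, m.OFFSET + o, c
    _MEMPUT m, m.OFFSET + o, c XOR &HFFFFFF~& ' flip R, G, B; leave alpha alone
    o = o + 4 ' 4 bytes per 32-bit pixel
LOOP
_MEMFREE m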

Sometimes I'll come up with very "clever" code, in my mind anyway, that uses very little code to achieve a big outcome. However, even with documentation, coming back to it six months or a year later will probably result in, "what the heck was I thinking here?" So if my "clever" code has no efficiency benefit, I'll usually break it down into something more understandable, for my sanity later on when I revisit it, or for someone else that may be looking through the code.
There are two ways to write error-free programs; only the third one works.
QB64 Tutorial
#3
Goal number 1:  a balance of maintainability/adaptability/discernability while trying to get a working program running as quickly as possible.

The quicker the "proof of concept" program (that will eventually evolve into the final product) is working, the quicker we can elicit overlooked requirements and identify unexpected potential stumbling blocks (or even unforeseen/unexpected opportunities.)

As the program evolves, keep refactoring for maintainability/adaptability/discernability.  And, wherever a segment of code looks like it can be refactored for better performance without worry of sticks in the wheels (i.e. new/changed requirements), then go for it if it hurts too much not to improve the performance.

Once the program has evolved to a significant milestone that seems to cover the bulk of requirements and stumbling blocks, then start tackling code performance when it hurts too much not to refactor that code for better performance.


Something like that.
#4
(08-11-2024, 01:21 AM)TerryRitchie Wrote: If those tips/tricks happen to work well with efficiencies like you point out then heck ya, I'm all for them. Your SELECT CASE use above is very ingenious and I'll start using that. I don't seem to be able to think outside of the box as far as you do when it comes to clever, but still understandable, optimizations.

I think the first place where I *really* had an explosion of code added to my program, which truly made a mind-boggling jump in performance, was when I was working on the QBDbase stuff back at the original forums that Galleon kept up and running for us.  I had a really simple patch of code that looked something like:

FOR x = 1 TO limit
    FOR y = 1 TO limit
        IF user_data$ = foo$ AND user_type = fart THEN
            '...do some stuff
        END IF
    NEXT
NEXT

Maybe 50 lines of code in total.  Nothing impressive, and it ran fine.   It just ran slow  -- (it was the sorting routines where you could sort your database by whichever field you desired, as I recall).   Incredibly slow, as in taking minutes to run with a 1,000,000+ record database.

I found that to be unacceptable, so I went back and stripped the hell out of that routine.   What I ended up with looked much more like:

IF user_data$ = foo$ AND user_type = fart THEN
    FOR x = 1 TO limit
        FOR y = 1 TO limit
            'sort stuff
        NEXT
    NEXT
ELSEIF user_data$ = foo$ AND user_type = fart2 THEN
    FOR x = 1 TO limit
        FOR y = 1 TO limit
            'sort stuff
        NEXT
    NEXT
ELSEIF ...
END IF

I moved that IF checking outside the FOR...NEXT loops, which required that the contents of the loop be basically copy/pasted inside each IF condition.  That resulted in 1000 lines of code instead of 50 -- and those routines now ran in 0.02 seconds instead of minutes!!



A change in structure that produced a TON more code (mainly, as I say, just copy-paste code), but which ran unbelievably faster!  It's where that concept of "MORE IS LESS SOMETIMES" really resonated with me.  It was like getting slapped upside the head with a week-old fish -- it left an impression which has carried with me, even to this day.

And thus, I really don't care about the line count anymore.  Can I write code in a fraction of the lines?  Sure I can!  But it's not just about writing the shortest code, or tossing it onto the screen as fast as I can, anymore.  Now, I try to think and organize a way to separate things into the smallest repetition possible, with the fewest decisions required, when I'm needing to build something around speed or efficiency.

For example -- I used to just write word search routines to load a list of words from 1 to limit, and then search them for the word/term in question.  Now, I tend to break those down to multiple word lists, each organized by length of word.

If you're searching for things that closely match "Stve" for a name, you don't need to compare it against "onomatopoeia".  Just compare it with the word lists that are LEN("Stve") - 2 TO LEN("Stve") + 2 in length and look for close matches.   Now, you're not searching and checking one 250,000-word database for matches.... You're checking 5 databases that are a fraction of that size -- containing only words 2 to 6 letters long.
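
Something like this, as a rough sketch -- the names and the "close match" test here are just placeholders for whatever fuzzy comparison you actually use:

Code: (Select All)
' Sketch: bucket words by length so a search only touches nearby lengths.
CONST MAXLEN = 24, MAXPER = 10000
DIM SHARED bucket(1 TO MAXLEN, 1 TO MAXPER) AS STRING
DIM SHARED wordCount(1 TO MAXLEN) AS LONG

AddWord "steve": AddWord "stove": AddWord "onomatopoeia"
PRINT FindNear$("Stve") ' only scans the length-2 to length-6 buckets

SUB AddWord (w$)
    DIM l AS LONG: l = LEN(w$)
    IF l >= 1 AND l <= MAXLEN THEN
        wordCount(l) = wordCount(l) + 1
        bucket(l, wordCount(l)) = w$
    END IF
END SUB

FUNCTION FindNear$ (target$)
    DIM l AS LONG, i AS LONG
    FOR l = LEN(target$) - 2 TO LEN(target$) + 2 ' nearby lengths only
        IF l >= 1 AND l <= MAXLEN THEN
            FOR i = 1 TO wordCount(l)
                ' placeholder test -- swap in a real closeness check here
                IF LCASE$(LEFT$(bucket(l, i), 2)) = LCASE$(LEFT$(target$, 2)) THEN
                    FindNear$ = bucket(l, i): EXIT FUNCTION
                END IF
            NEXT
        END IF
    NEXT
END FUNCTION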



It's more coding involved, but the performance improvements are definitely worth it.  It just requires doing a little more thinking than just "How do I make this work?"...  It involves coding around the thought, "How can I break this down to make it work, with as few checks as possible?"

Wink
#5
I still work with INKEY$, so Steve and I share the same idea of using SELECT CASE in a multi-conditional manner when it comes to efficiency. For instance, INKEY$ returns strings one or two bytes in length, so I'll set up my SELECT CASE as a nested routine where I first select on whether the key pressed is one or two bytes in length, and then I use two nested cases to get the key code.

Example:

Code: (Select All)
Do
    _Limit 30
    b$ = InKey$
    Select Case Len(b$)
        Case 1 ' ordinary single-byte key
            Select Case b$
                Case "a"
                    Print b$
                Case Chr$(27) ' ESC quits
                    System
            End Select
        Case 2 ' extended key: Chr$(0) + scan code
            Select Case Mid$(b$, 2, 1)
                Case ";" ' Chr$(59), F1's scan code
                    Print "F1"
            End Select
    End Select
Loop

For word processing and other office-related apps, I find QB64 to be plenty fast without being overly concerned with efficiency. However, if you intend on using graphics, changing font sizes, links, colors, etc., well then you are either checking character by character for attributes, or at least groupings. This can slow things down a bit too much, but the efficient workaround is to use string editing rather than constantly creating and replacing strings. In other words, add to the middle of an existing string.

The strings don't have to be DEFINED as fixed-length strings to accomplish this. I just create a string like a$ and then do something like a$ = STRING$(100, CHR$(0)). Now we have a 100-character non-fixed string and we can manipulate it by adding whatever we need into that string, like MID$(a$, 1, 17) = "Steve is Amazing!" Oh dammit, I've been coming here so long that now I'm brainwashed, too!
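
In code, the idea looks like this -- a tiny sketch, with the buffer size and text just for illustration:

Code: (Select All)
' In-place editing with the MID$ statement -- no string reallocation.
DIM a AS STRING
a = STRING$(100, CHR$(0)) ' preallocate a 100-character buffer once
MID$(a, 1, 17) = "Steve is Amazing!" ' overwrite bytes in place
PRINT LEFT$(a, 17)
' The slow alternative rebuilds the whole string on every edit:
' a = "Steve is Amazing!" + MID$(a, 18)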

Anyway (watch Steve quote this post before I come to my senses and edit it...), other things I like to do are what Terry described, including defining variables to use less memory, such as bits and integers.

Oh, I also like to make sure, as best I can, that the program flow is broken up into parts, so certain routines that don't need the 100 or so lines that are coming, and won't get sucked into any of those conditions, get routed directly to the part of the program they are needed in. Using EXIT in loops, or even _CONTINUE, are two examples beyond how the program flow is designed in the first place.

Neat topic! +1

Pete
Fake News + Phony Politicians = Real Problems

#6
In terms of efficiency, do you guys avoid logical operators in SELECT CASEs? Meaning the simpler the case operand, the more efficient the code.
#7
(08-12-2024, 06:34 PM)Dimster Wrote: In terms of efficiency, do you guys avoid logical operators in SELECT CASEs? Meaning the simpler the case operand, the more efficient the code.

I'd have to see an example to figure that one out.

What I can say is I like to use simple algorithms to substitute for conditional statements whenever possible.

For instance, instead of...

Code: (Select All)
Do
    _Limit 30
    b$ = InKey$
    Select Case UCase$(b$)
        Case "A" To "Z"
            If UCase$(b$) = "A" Then Print "1"
            If UCase$(b$) = "B" Then Print "2"
            If UCase$(b$) = "C" Then Print "3"
            ''' etc.
    End Select
Loop Until b$ = Chr$(27)

I'd code...

Code: (Select All)
Do
    _Limit 30
    b$ = InKey$
    Select Case UCase$(b$)
        Case "A" To "Z"
            Print LTrim$(Str$(Asc(UCase$(b$)) - 64))
    End Select
Loop Until b$ = Chr$(27)

Pete
Fake News + Phony Politicians = Real Problems

#8
Since this is a topic on Coding Efficiency, I want to take a moment to point out how foolish it is to listen to anything Pete might say on such a subject.  After all, as he's illustrated countless times before, he STILL uses Inkey$ in his programs.   

Hahahahahahahaha!!    (Don't worry.  If any of the rest of you do, I'm not laughing at you.  I just laugh at Pete. Tongue )

Let me take a moment to explain:

INKEY$ and _KEYHIT are the *exact*, 100% identical, no kidding, gosh darn same command -- with one *minute* difference:  one returns a string value, the other returns the ASCII numeric value of that string.

Inkey$ says "A", _KEYHIT says 65.
Inkey$ says CHR$(0) + CHR$(75), _KEYHIT says 19200.   <-- the exact same value either way, if you MKI$ or CVI between them.

Keyhit simply returns the numeric value of Inkey$.   That's the only difference between the two commands.
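
You can see the relationship for yourself with a quick sketch:

Code: (Select All)
' The two-byte INKEY$ codes and the _KEYHIT values line up via CVI.
PRINT CVI(CHR$(0) + CHR$(75)) ' 19200 -- Left Arrow
PRINT CVI(CHR$(0) + CHR$(59)) ' 15104 -- F1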

BUT, it's inherently faster for several reasons:

1) Numbers are always faster than strings.  Strings come with overhead that numbers just don't have.
2) When it comes to extended keypresses, there's a LOT less to process than with Inkey$.   
      IF User_Input = 19200 THEN.....
      IF User_Input$ = CHR$(0) + CHR$(75) THEN....

Compare the two IF statements above.  Take just a moment to analyze them in your head.  Which is OBVIOUSLY more efficient?
   IF number = number....   <--- single comparison of 2 numbers.  This isn't going to take long
   IF string = CHR$ function conversion of number to string, added to, CHR$ function conversion of number to string...

Umm...   I'm smart like Pete!  Me thinks the second method is better!  Me likes bananas and pew pew pistols and me brain is smaller than me hat!   Tongue

Pete's example code: 
Code: (Select All)
Do
    _Limit 30
    b$ = InKey$
    Select Case UCase$(b$)
        Case "A" To "Z"
            Print LTrim$(Str$(Asc(UCase$(b$)) - 64))
    End Select
Loop Until b$ = Chr$(27)

And Steve's exact same code:
Code: (Select All)
Do
    _Limit 30
    b = _KeyHit
    Select Case b
        Case 27: System
        Case 65 To 90, 97 To 122: Print _Trim$(Str$((b And Not 32) - 64)) ' AND NOT 32 folds a-z onto A-Z
    End Select
Loop

No UCASE$ needed -- the AND NOT 32 takes care of that, since lowercase ASCII letters differ from their uppercase counterparts by exactly bit 32 (97 "a" becomes 65 "A").  No ASC conversion needed.  No string overhead.  Just nice efficient code.

Unlike Pete's.   Big Grin
#9
Still not a fan of _KEYHIT because part of efficiency to me is how fast I can code. _KEYHIT slows me down. It's not intuitive like INKEY$. 

Actually INKEY$ would be just about perfect for my uses, except for the nagging problem it has identifying when a key is released. It gives false positives and can be problematic without coding a workaround routine, which really is inefficient.

I'm glad we have _KEYHIT and the other QB64 key statements but I'm not sure I will ever get comfortable using them. I don't have youth on my side like Steve. Oh, I won't go into sympathy because us old folks do get plenty of sympathy for that, but Steve, once again beats me in this department, as studies have proven time and time again that sympathy for ugly outweighs age every time!

Pete Big Grin
Fake News + Phony Politicians = Real Problems

#10
(08-13-2024, 11:23 PM)Pete Wrote: Still not a fan of _KEYHIT because part of efficiency to me is how fast I can code. _KEYHIT slows me down. It's not intuitive like INKEY$. 

Actually INKEY$ would be just about perfect for my uses, except for the nagging problem it has identifying when a key is released. It gives false positives and can be problematic without coding a workaround routine, which really is inefficient.

I'm glad we have _KEYHIT and the other QB64 key statements but I'm not sure I will ever get comfortable using them. I don't have youth on my side like Steve. Oh, I won't go into sympathy because us old folks do get plenty of sympathy for that, but Steve, once again beats me in this department, as studies have proven time and time again that sympathy for ugly outweighs age every time!

Pete Big Grin

I find _KeyHit utterly intuitive these days.

Quick, what are the INKEY$ key combos for Left Arrow, F1 and Delete?

Want me to give them to you instantly in the IDE for _KeyHit?

CTRL-K, Left Arrow
CTRL-K, F1
CTRL-K, Delete

Just press CTRL-K and then the key, and the IDE automatically inserts the value associated with that key into your program where your cursor is located.

Now, what can be any more intuitive than that??  Big Grin



