03-25-2023, 03:08 PM
Hi madscijr
About:
Quote:So, I am wondering what specifically is the bottleneck in the "Luke" version?
I think that the structure of the two dictionaries is quite different.
To search for a specific key, it seems that in Luke's dictionary you must run several different DO...LOOPs, and surely this can take more time.
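To illustrate the point, here is a rough sketch of my own (in Python rather than QB64, with hypothetical names) contrasting a lookup done by scanning with loops, like chained DO...LOOPs, against a single hashed lookup:

```python
# Hypothetical sketch: linear scan (like repeated DO...LOOPs) vs. hashed lookup.

def linear_lookup(keys, values, target):
    # Walks every slot until the key matches: O(n) work per search.
    for i, k in enumerate(keys):
        if k == target:
            return values[i]
    return None

def hashed_lookup(table, target):
    # Jumps straight to the bucket computed from the key: O(1) on average.
    return table.get(target)

keys = [f"key{i}" for i in range(1000)]
values = [i * 2 for i in range(1000)]
table = dict(zip(keys, values))

# Both find the same value; the linear version touches ~1000 slots to do it.
print(linear_lookup(keys, values, "key999"))
print(hashed_lookup(table, "key999"))
```

The difference only grows with the number of entries, which is why a structure that needs several loops per lookup falls behind in a timing test.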
However, I think the comparison is fair only after both dictionaries follow the same rules (expandable or fixed size? collision management: overwriting, shifting, or overlapping? type of data, etc.).
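As a hedged sketch of two of the collision policies mentioned above (my own Python approximation, not either dictionary's actual code): "overwriting" discards whatever already occupies the slot, while "shifting" (linear probing) steps forward to the next free slot.

```python
# Sketch of two collision-management policies; names and sizes are mine.
SIZE = 8

def h(key):
    # Integer keys give a deterministic slot; two keys 8 apart always collide.
    return hash(key) % SIZE

def insert_overwrite(slots, key, value):
    # Overwriting: a collision simply replaces the old entry, losing it.
    slots[h(key)] = (key, value)

def insert_probe(slots, key, value):
    # Shifting / linear probing: on collision, step to the next slot.
    i = h(key)
    for _ in range(SIZE):
        if slots[i] is None or slots[i][0] == key:
            slots[i] = (key, value)
            return
        i = (i + 1) % SIZE
    raise RuntimeError("table full")

def lookup_probe(slots, key):
    # Follow the same probe sequence used at insertion time.
    i = h(key)
    for _ in range(SIZE):
        if slots[i] is None:
            return None
        if slots[i][0] == key:
            return slots[i][1]
        i = (i + 1) % SIZE
    return None

slots = [None] * SIZE
insert_probe(slots, 0, "a")
insert_probe(slots, 8, "b")   # collides with key 0, shifts to the next slot
print(lookup_probe(slots, 0), lookup_probe(slots, 8))
```

The policy choice changes both the timing and the behavior (overwriting silently drops entries), so two dictionaries using different policies are not directly comparable.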
Here are the results after using the data structure of Luke's dictionary in your testing code...
![[Image: immagine-2023-03-25-155355444.png]](https://i.ibb.co/c2Dhbg5/immagine-2023-03-25-155355444.png)
About:
Quote:Do you think that the "Luke" version can be improved to get the speed AND the features?
This question depends on the previous one. How do you structure the dictionary? And which features do you think it should have?
Thanks for talking about this.
NB: I must stress that the string system with join and split (see Bplus's dictionary) works very fast in RAM!
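The join/split idea can be sketched like this (my own rough Python approximation, not Bplus's actual code): the whole dictionary lives in one delimited string, pairs are appended by joining, and lookup splits the string back into a list.

```python
# Sketch of a dictionary kept as one flat delimited string in RAM.
SEP = chr(1)  # an unlikely separator character, assumed absent from the data

def dict_set(store, key, value):
    # Append a "key<SEP>value<SEP>" pair to the single flat string.
    return store + key + SEP + value + SEP

def dict_get(store, key):
    # split() rebuilds the pair list in one pass; then scan it for the key.
    parts = store.split(SEP)
    for i in range(0, len(parts) - 1, 2):
        if parts[i] == key:
            return parts[i + 1]
    return None

store = ""
store = dict_set(store, "apple", "red")
store = dict_set(store, "banana", "yellow")
print(dict_get(store, "banana"))
```

The scan is still linear, but a single split plus an in-memory list walk is very cheap compared with re-parsing inside nested loops, which fits the timing advantage noted above.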