building a more resilient https downloader?
#1
When I try to download certain large files from certain sites using a web browser, the download times out (it just says "Failed - Network error" or "Couldn't download - Network issue"). I can usually click Resume and it will pick up where it left off and finish the download, but sometimes it fails more than once, and it's damn annoying having to manually restart it.

I'm wondering how one might build a basic HTTPS browser and file downloader that, if it detects the download has timed out, keeps trying to reconnect and automatically resumes the download?
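Roughly, the behaviour I'm after is something like this (just a sketch to show the logic I mean; I'm assuming an external tool like curl could do the actual transfer, and the URL and filename are made-up placeholders):

Code:
' Keep retrying until the download finishes cleanly (exit code 0).
' curl -L follows redirects, -C - resumes a partial file, and --fail
' makes curl return a nonzero exit code on HTTP errors.
url$ = "https://example.com/bigfile.zip" ' placeholder URL
out$ = "bigfile.zip"                     ' placeholder filename
q$ = CHR$(34) ' double-quote character for building the command line
DO
    exitcode& = SHELL("curl -L --fail -C - -o " + q$ + out$ + q$ + " " + q$ + url$ + q$)
    IF exitcode& = 0 THEN EXIT DO
    PRINT "Download interrupted (exit code"; exitcode&; "), retrying..."
    _DELAY 5 ' wait a few seconds before trying to resume
LOOP
PRINT "Download complete."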
#2
Oh I ain't even going to attempt to make a web browser. Not in v4.0, that's for sure. A downloader? Sure. I think I got rid of my code, though. I've got several different HTTPS downloaders. Even one that only uses QB64 functions.
The noticing will continue
#3
(01-03-2025, 12:14 PM)SpriggsySpriggs Wrote: Oh I ain't even going to attempt to make a web browser. Not in v4.0, that's for sure. A downloader? Sure. I think I got rid of my code, though. I've got several different HTTPS downloaders. Even one that only uses QB64 functions.
Cool! If you come across it, and are feeling generous, that would be great. 

I agree that making a Web browser is a Pandora's box nobody wants to casually open - even Google and Microsoft have to work hard to keep theirs up to date and bug free! 

The reason I included a Web browser in the functionality is that some sites display a file link in your browser, but if you try the same link in a different browser it doesn't work, so I'm guessing the link is tied to the identity of the browser? Maybe a cookie or something? No idea. Maybe if the downloader could somehow identify itself as an existing Web browser, that would work? (BTW, I'm so far removed from the Web and networking and how it all works, I could be talking out my ass and wouldn't know it, LoL!)
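For what it's worth, I gather command-line tools like curl can send a browser-style User-Agent string and reuse a cookie file exported from the browser, so maybe something along these lines would get around it (pure guesswork on my part; the URL, User-Agent string, and cookies.txt file are all made up):

Code:
' Identify as a regular browser by sending a browser-like User-Agent,
' and reuse cookies exported from the browser session (Netscape cookie file).
url$ = "https://example.com/protected-file.zip"   ' placeholder URL
ua$ = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" ' placeholder User-Agent
q$ = CHR$(34)
SHELL "curl -L -C - -A " + q$ + ua$ + q$ + " -b cookies.txt -o file.zip " + q$ + url$ + q$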
#4
I remember talking out of my ass once, many years ago... Haven't been able to navigate back, since.

I did some of this stuff maybe 15 years back with wget and curl. I don't know if wget had it back then, but there's a StackOverflow question about this that includes this quote:

Quote:Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.

https://stackoverflow.com/questions/1972...ly-in-curl

It might be worth a look, since wget is very easy to shell from QB64.
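Off the top of my head (untested, and the URL is just an example), shelling it with wget's standard resume/retry switches would look something like:

Code:
' -c resumes a partial download, --tries=0 keeps retrying indefinitely,
' --retry-connrefused also retries when the connection is refused,
' and --timeout sets the network timeout in seconds.
url$ = "https://example.com/bigfile.zip" ' example URL
SHELL "wget -c --tries=0 --retry-connrefused --timeout=30 " + CHR$(34) + url$ + CHR$(34)

With those switches wget handles the retry loop itself, so the QB64 side just waits for the SHELL to return.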

Pete
#5
(01-03-2025, 05:45 PM)Pete Wrote: It might be worth a look, since wget is very easy to shell from QB64.
Very interesting... thanks!