
Hitting Multiple Files Using Bash and Wget or Curl

I needed to hit a number of files on a remote HTTP server, with an index ranging from 1 to 150, and my first instinct was to use wget inside some sort of loop (a bash script seemed like the most logical approach).
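A minimal sketch of that loop approach (the example.com URL and file naming are placeholders for my actual target; the leading echo makes it a dry run that just prints each command):

```shell
#!/usr/bin/env bash
# Dry-run sketch: prints one wget command per index 1..150.
# Drop the leading "echo" to actually download the files.
for i in $(seq 1 150); do
  echo wget "http://example.com/file${i}.txt"
done
```

It works, but it spawns a fresh wget process per file and takes several lines to say something simple.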

Then I realized I could do the same thing in a simple one-liner using 'curl'.
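For my case (same placeholder URL as above), the whole job collapses into one command: curl expands the [1-150] range itself and -O saves each file under its remote name:

```shell
# curl generates the requests for file1.txt .. file150.txt on its own;
# -O writes each response to a local file named after the remote one.
curl -O "http://example.com/file[1-150].txt"
```

One process, one connection (reused across requests), no loop.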

It's super easy thanks to curl's URL globbing, described in its man page:

You can specify multiple URLs or parts of URLs by writing part sets within braces as in:

http://site.{one,two,three}.com

or you can get sequences of alphanumeric series by using [] as in:

ftp://ftp.numericals.com/file[1-100].txt
ftp://ftp.numericals.com/file[001-100].txt (with leading zeros)
ftp://ftp.letters.com/file[a-z].txt

Nested sequences are not supported, but you can use several ones next to each other:

http://any.org/archive[1996-1999]/vol[1-4]/part{a,b,c}.html
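Combined sequences like that pair nicely with curl's -o option, where #1, #2, #3 refer back to the first, second, and third glob in the URL, so each page lands in its own local file (using the man-page example URL above):

```shell
# #1 = year range, #2 = volume range, #3 = part letter set.
# Each of the 4 * 4 * 3 = 48 pages gets a distinct local filename.
curl "http://any.org/archive[1996-1999]/vol[1-4]/part{a,b,c}.html" \
     -o "archive#1_vol#2_part#3.html"
```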

You can specify any number of URLs on the command line. They will be fetched sequentially in the specified order. You can specify a step counter for the ranges to get every Nth number or letter:


http://www.numericals.com/file[1-100:10].txt
http://www.letters.com/file[a-z:2].txt
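The :N step works on both numeric and letter ranges; [1-100:10] hits 1, 11, 21, and so on up to 91. seq can mimic the numeric case, which is handy for previewing which indices a range covers before firing off real requests (echo keeps the curl line a dry run):

```shell
# Indices that curl's [1-100:10] glob generates: every 10th number from 1.
seq 1 10 100
# The matching curl command (drop the leading echo to actually download):
echo curl -O "http://www.numericals.com/file[1-100:10].txt"
```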