Blog Archives

#Linux : How to Slice an Array in #Bash

A lot of things can be done using just the command line.
In fact, the Bash shell scripting language is Turing-complete, so in principle anything can be done!

One important feature is the ability to slice an array (i.e. to select a contiguous subset of the elements of a collection).
So let's say, for example, we stored the list of installed packages in a variable PACKAGE_LIST as a Bash array:

PACKAGE_LIST=( $(dpkg -l | awk '{print $2}') )

and for some reason we want to select elements from 4 to 10:

PACKAGE_LIST=("${PACKAGE_LIST[@]:3:7}")

Let me explain. Here, we are using Bash parameter expansion:

  • The [@] following the array name returns the whole content of the array.
  • The :X:Y part does the slicing, taking a slice of length Y starting at zero-based position X; elements 4 to 10 therefore start at index 3 and span 7 elements. Note that if X is negative, that is, we start X elements from the end, we must put a space between the colon and the minus sign, otherwise Bash parses it as the :- default-value expansion.
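
Here is a minimal, self-contained sketch of the same idea on a toy array (the array values are made up for illustration):

FRUITS=(apple banana cherry date elderberry fig grape)

# elements 2 to 4: zero-based start index 1, length 3
SLICE=("${FRUITS[@]:1:3}")
echo "${SLICE[@]}"       # banana cherry date

# negative start: the last two elements (note the space before -2)
echo "${FRUITS[@]: -2}"  # fig grape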

#Linux : List All Packages Installed

Sometimes we want to quickly check if we have a required package installed, for example a developer library we need to compile our code.

If you are running a Debian-based Linux system such as Ubuntu, you have a couple of alternatives. Using apt, let's say we are looking for python3-dev, since it is required to debug Python code through GDB:

sudo apt list --installed | grep python3-dev

However, you will notice apt is pretty slow, and we will be greeted by the warning "WARNING: apt does not have a stable CLI interface. Use with caution in scripts." Moreover, the output is not so nice if we want to parse it in a script. A faster alternative is provided by dpkg:

sudo dpkg -l | grep python3-dev

On Red Hat and Red Hat-like Linuxes such as Fedora and CentOS (where development packages are conventionally suffixed -devel rather than -dev) we can use either rpm:

sudo rpm -qa | grep python3-devel

or yum:

sudo yum list installed | grep python3-devel
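
And as a small convenience, below is a sketch of a distribution-agnostic check (the function name pkg_installed is made up; it assumes either dpkg or rpm is on the PATH):

# hypothetical helper: returns 0 if the package is installed
pkg_installed() {
  if command -v dpkg >/dev/null 2>&1; then
    dpkg -s "$1" >/dev/null 2>&1
  elif command -v rpm >/dev/null 2>&1; then
    rpm -q "$1" >/dev/null 2>&1
  else
    return 2  # unknown package manager
  fi
}

pkg_installed python3-dev && echo installed || echo missing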

#MacOsX : Terminal Cheat Sheet

If you are a *nix geek like me, you can't help but love the command prompt.
One of the best tools to improve the plain old terminal is a utility called tmux, which you can install through Homebrew.
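
Assuming Homebrew is already set up, installing it is a one-liner:

brew install tmux
tmux -V   # print the version to verify the install
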
Now, there are many commands to remember to play nicely with the terminal, and sometimes a little reminder can be useful; that's why cheat sheets exist.
Here is mine, enjoy.

#cURL : HOWTO [UPDATED]

You can use the cURL library and the curl command to craft your own Request and explore the Response. There are many possible uses, e.g. API debugging, web hacking, and pen testing.
curl is a tool to transfer data from or to a server, using one of the supported protocols (e.g., FTP, GOPHER, HTTP, HTTPS, IMAP, LDAP, POP3, RTMP, SCP, SFTP, SMTP, TELNET). The command is designed to work without user interaction.
curl offers a busload of useful tricks like proxy support, user authentication, FTP upload, HTTP post, SSL connections, cookies, file transfer resume, Metalink, and more. As you will see below, the number of features will make your head spin!
So curl is a truly powerful command; however, that power comes at the cost of complexity. Here I will show some real-world use cases.

URL

The URL syntax is protocol-dependent. If you specify a URL without the protocol:// prefix, curl will attempt to guess what protocol you might want. It defaults to HTTP, but tries other protocols based on often-used host name prefixes. For example, for host names starting with "ftp." curl will assume you want to speak FTP.
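
For example, with a hypothetical host, the following two invocations are equivalent, because curl guesses FTP from the host name prefix:

curl ftp.example.com/README
curl ftp://ftp.example.com/README
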
You can specify multiple URLs or parts of URLs by writing part sets within braces as in:

curl en.wikipedia.org/wiki/{FTP,SCP,TELNET}

or you can get sequences of alphanumeric series by using [ ] as in:

curl forums.macrumors.com/showthread.php?t=[1673700-1673713]
curl numericals.com/file[1-100].txt
curl numericals.com/file[001-100].txt
curl letters.com/file[a-z].txt

Nested sequences are not supported, but you can use several ones next to each other:

curl any.org/archive[1996-1999]/vol[1-4]/part{a,b,c}.html

You can specify any number of URLs on the command line. They will be fetched sequentially in the specified order.
You can specify a step counter for the ranges to get every Nth number or letter:

curl numericals.com/file[1-100:10].txt
curl letters.com/file[a-z:2].txt

Trace Dump

To analyze in depth what we send and receive, we can save everything to a file. This is as easy as:

curl --trace-ascii DebugDump.txt URL
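
When a full trace is more than we need, a quick look at just the request and response headers can be had with the -v, --verbose flag (here the body is discarded so only the headers show up):

curl -v http://echo.httpkit.com -o /dev/null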

Save To Disk

If you want to save the Response to disk you can use the option -o <file>. If you are using {} or [] to fetch multiple documents, you can use '#' followed by a number in the specifier; that variable will be replaced with the current string for the URL being fetched. Remember to protect the URL from the shell by quoting it if you receive the error message "internal error: invalid pattern type (0)". Examples:

curl 'en.wikipedia.org/{FTP,TFTP,SFTP}' -o "#1.html"
curl arxiv.org/pdf/13[01-11].36[00-75].pdf -o "arXiv13#1.36#2.pdf"

Option -O writes the output to a local file named like the remote file we get (only the file part of the remote name is used, the path is cut off). The remote file name to use for saving is extracted from the given URL, nothing else. Consequently, the file will be saved in the current working directory. If you want the file saved in a different directory, make sure you change the current working directory before you invoke curl:

curl -O arxiv.org/pdf/1301.3600.pdf

Only the file part of the remote file is used, the path is cut off, thus the file will be saved as 1301.3600.pdf.
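
So, to drop the file into a specific directory, change into it first, e.g.:

cd ~/Downloads
curl -O arxiv.org/pdf/1301.3600.pdf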

Set HTTP Request Method

The curl default HTTP method, GET, can be changed to any method you would like using the -X <command> option. The usual suspects POST, PUT, DELETE, and even custom methods, can be specified:

curl -X POST echo.httpkit.com
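
The same pattern works for the other verbs, standard or custom (echo.httpkit.com simply echoes the request back, so it is a safe playground; PURGE is a non-standard method used here purely as an example):

curl -X DELETE echo.httpkit.com
curl -X PURGE echo.httpkit.com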

Normally you don’t need this option. All sorts of GET, HEAD, POST and PUT requests are rather invoked by using dedicated command line options.

Forms

Forms are the general way a web site presents an HTML page with fields for the user to fill in, together with a 'submit' button that sends the data to the server. The server then typically uses the posted data to decide how to act: for example, to search a database with the entered words, to file the info in a bug tracking system, to display the entered address on a map, or to treat it as login credentials, verifying that the user is allowed to see the page.
Using the -d option we can specify URL encoded field names and values:

curl -d "prefisso=051" -d "numero=806060" -d "Prosegui=Verifica" -d "form_name=verifica_copertura_ehiveco" http://www.ovus.it/verifica_copertura_ehiveco.php

A very common way for HTML-based applications to pass state information between pages is to add hidden fields to the forms. Hidden fields are already filled in; they aren't displayed to the user, and they get passed along just like all the other fields. To curl there is no difference at all: you just add them on the command line.
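
For instance, sending a visible field together with a hypothetical hidden one (both field names are made up for illustration):

curl -d "search=curl" -d "session_id=abc123" http://echo.httpkit.com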

Set Request Headers

Request headers allow clients to provide servers with meta information about things such as authorization, capabilities, and body content-type. OAuth2 uses an Authorization header to pass access tokens, for example. Custom headers are set in curl using the -H option:

curl -H "Authorization: OAuth 2c4419d1aabeec" http://echo.httpkit.com
curl -H "Accept: application/json" -H "Authorization: OAuth 2c3455d1aeffc" http://echo.httpkit.com

Note that if you add a custom header with the same name as one of the internal ones curl would use, your externally set header will be used instead of the internal one. You should not replace internally set headers without knowing perfectly well what you're doing. Remove an internal header by giving a replacement without content on the right side of the colon, as in -H "Host:".
To send a custom header with no value, terminate it with a semicolon, as in -H "X-Custom-Header;", which sends "X-Custom-Header:" with an empty value.
curl will make sure that each header you add or replace is sent with the proper end-of-line marker; you should thus not add that as part of the header content. Do not add newlines or carriage returns, they will only mess things up for you.
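
The three cases side by side, again against the echo service:

curl -H "User-Agent: my-agent/1.0" http://echo.httpkit.com   # replace an internal header
curl -H "Host:" http://echo.httpkit.com                      # remove an internal header
curl -H "X-Custom-Header;" http://echo.httpkit.com           # send a header with an empty value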

Referer

An HTTP request may include a Referer field (yes, it is misspelled), which can be used to tell from which URL the client got to this particular resource. Some programs/scripts check the Referer field of requests to verify that the request wasn't arriving from an external site or an unknown page. While this is a stupid way to check something so easily forged, many scripts still do it.
This can also be set with the -H, --header flag of course. When used with -L, --location you can append ";auto" to the --referer URL to make curl automatically set the previous URL when it follows a Location: header. The ";auto" string can be used alone, even if you don’t set an initial --referer.

curl -e google.com http://echo.httpkit.com
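
For instance, when following redirects with -L, appending ";auto" (or using it alone) makes curl update the Referer at every hop:

curl -L -e "google.com;auto" http://echo.httpkit.com
curl -L -e ";auto" http://echo.httpkit.com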

User Agent

To specify the User-Agent string to send to the HTTP server, you can use the -A, --user-agent flag. To encode blanks in the string, surround it with single quote marks. This can also be set with the -H, --header option, of course. Many applications use this information to decide how to display pages. At times, fetching a page with curl will not return the same page that you see in your browser; then you know it is time to set the User-Agent field to fool the server into thinking you're one of those browsers:

curl -A "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_2 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8H7 Safari/6533.18.5" http://echo.httpkit.com

Cookies

The way web browsers do "client side state control" is by using cookies. Cookies are just names with associated contents. The cookies are sent to the client by the server. The server tells the client for what path and host name it wants the cookie sent back, and it also sends an expiration date and a few more properties.
When a client communicates with a server with a name and path previously specified in a received cookie, the client sends back the cookies and their contents to the server, unless of course they have expired.
Many applications and servers use this method to connect a series of requests into a single logical session. To be able to use curl in such occasions, we must be able to record and send back cookies exactly the way the web application expects them, the same way browsers deal with them.

The cookie data to send is specified with the -b, --cookie <data> option. It is supposedly the data previously received from the server in a "Set-Cookie:" line, and should be in the format "NAME1=VALUE1; NAME2=VALUE2".
If no = symbol is used in the line, it is instead treated as a filename from which to read previously stored cookie lines, which are sent in this session if they match. Using this method also activates the "cookie parser", which makes curl record incoming cookies too; this may be handy if you're combining it with the -L, --location option. The file to read cookies from should contain plain HTTP headers or be in the Netscape/Mozilla cookie file format. Note that the file specified with -b, --cookie is only used as input: no cookies will be stored in it. To store cookies, use the -c, --cookie-jar option, or even save the HTTP headers to a file using -D, --dump-header:

curl --cookie "name=whitehatty" http://echo.httpkit.com
curl -c cookies.txt http://www.facebook.com
sed -i '' 's/#HttpOnly_\.facebook\.com/echo\.httpkit\.com/g' cookies.txt
curl --cookie cookies.txt http://echo.httpkit.com
curl -b cookies.txt --cookie-jar newcookies.txt http://echo.httpkit.com
curl --dump-header headers_and_cookies http://www.facebook.com
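
Putting -c and -b together gives the typical login-session flow (URL and field names below are hypothetical):

curl -c session.txt -d "user=alice" -d "pass=secret" http://example.com/login
curl -b session.txt http://example.com/private/page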

Work In Progress…

OK, there are many more options, but I will stop here for now. I will add more in the future, so if you have any requests (like using more real URLs), just leave a comment.