ITIC:Network tools and commands - Exercises

Intro IT - Network tools exercises

We'll fill in more text here soon.

Links (Intro IT and computing)

Where to go next

Next page is ITIC:Using_a_text_editor.


For now, inclusion of MoreBash:Exercises_-_Network_Tools below:

Work in progress

This chapter is a work in progress. Remove this section when the page is production-ready.


Prerequisite knowledge

These exercises assume that you have basic knowledge of Bash, computers and networks. We have books covering some of these basics, if you need to refresh them.

Make sure you have seen the lectures from the above chapters and done all the exercises, unless you have equivalent prior knowledge.

Introduction

The purpose of these exercises is to get you familiar with some of the network commands and tools available in bash. See the previous chapter's PDFs and video lectures for an introduction to network tools.

Some of the tools are:

General networking:

  • telnet
  • netcat (nc)

Connectivity between servers:

  • ssh/scp
  • rsync

Diagnostics:

  • ping
  • traceroute
  • netstat
  • nmap
  • tcpdump
  • wireshark
  • iftop

HTTP and Web stuff:

  • lwp-request
  • wget
  • curl

DNS stuff:

  • whois
  • host
  • dig
  • nslookup

Exercises on IP and domains

Look up the IP of www.gnu.org using dig

Use dig to find the IP address of the server www.gnu.org. Which DNS server was used?

Hint:

$ dig www.gnu.org

; <<>> DiG 9.10.4-P5-RedHat-9.10.4-4.P5.fc25 <<>> www.gnu.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20673
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.gnu.org.			IN	A

;; ANSWER SECTION:
www.gnu.org.		93	IN	CNAME	wildebeest.gnu.org.
wildebeest.gnu.org.	93	IN	A	208.118.235.148

;; Query time: 10 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Feb 03 11:04:52 CET 2017
;; MSG SIZE  rcvd: 81

From this we can see that the IP (via wildebeest.gnu.org) is 208.118.235.148.

The DNS used was 8.8.8.8.

Look up the IP of www.gnu.org using host

Use host to find the IP address of the server www.gnu.org. Which DNS server was used?

Hint:

$ host www.gnu.org
www.gnu.org is an alias for wildebeest.gnu.org.
wildebeest.gnu.org has address 208.118.235.148
wildebeest.gnu.org has IPv6 address 2001:4830:134:3::a

From this we can see that the IP (via wildebeest.gnu.org) is 208.118.235.148.

Note that host, unlike dig, does not print which DNS server it used; it queries the resolvers configured in /etc/resolv.conf (in this case 8.8.8.8).
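
If you want host to ask a specific DNS server instead of the configured one, you can pass the server as a second argument (host then starts its output with a "Using domain server" line showing which server it asked):

$ host www.gnu.org 8.8.4.4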

Look up the IP of www.gnu.org using nslookup

Use nslookup to find the IP address of the server www.gnu.org. Which DNS server was used?

Hint:

$ nslookup www.gnu.org
Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
www.gnu.org	canonical name = wildebeest.gnu.org.
Name:	wildebeest.gnu.org
Address: 208.118.235.148

From this we can see that the IP (via wildebeest.gnu.org) is 208.118.235.148.

The DNS used was 8.8.8.8.
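
nslookup similarly accepts the DNS server to use as an optional second argument; the Server: line in the output should then show the server you specified:

$ nslookup www.gnu.org 8.8.4.4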

Look up the IP using another DNS using dig

Use dig to find the IP address of the server www.gnu.org, but this time use the DNS server 8.8.4.4.

Hint:

$ dig @8.8.4.4 www.gnu.org

; <<>> DiG 9.10.4-P5-RedHat-9.10.4-4.P5.fc25 <<>> @8.8.4.4 www.gnu.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28503
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.gnu.org.			IN	A

;; ANSWER SECTION:
www.gnu.org.		13	IN	CNAME	wildebeest.gnu.org.
wildebeest.gnu.org.	13	IN	A	208.118.235.148

;; Query time: 9 msec
;; SERVER: 8.8.4.4#53(8.8.4.4)
;; WHEN: Fri Feb 03 11:11:02 CET 2017
;; MSG SIZE  rcvd: 81

From this we can see that the IP (via wildebeest.gnu.org) is 208.118.235.148. The same as when using 8.8.8.8.

Silently look up the IP using another DNS using dig

Use dig to find the IP address of the server www.gnu.org, using the DNS server 8.8.4.4. Make dig print out less information (hint: search for "short" in the manual).

Hint:

$ dig @8.8.4.4 +short www.gnu.org
wildebeest.gnu.org.
208.118.235.148

From this we can still see that the IP (via wildebeest.gnu.org) is 208.118.235.148, but the output is much shorter.

Look up the domain of the IP address 208.118.235.148

Use dig to find the domain of the IP address 208.118.235.148. Make dig print out less information (hint: search for "reverse lookups" in the manual).

Hint:

$ dig +short -x  208.118.235.148
wildebeest.gnu.org.

The reverse lookup of the IP address 208.118.235.148 gives the domain wildebeest.gnu.org.
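
As an aside, host can do the same reverse lookup without any special flag; given an IP address it prints the PTR record, something like:

$ host 208.118.235.148
148.235.118.208.in-addr.arpa domain name pointer wildebeest.gnu.org.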

Look up the domain of the IP address of the domain www.gnu.org

Use dig to

  1. find the IP address of www.gnu.org
  2. and reverse lookup that IP

To do this you need to use the output of the first dig execution as argument (not stdin) to the next call to dig.

If we do this manually we type:

$ dig +short www.gnu.org
wildebeest.gnu.org.
208.118.235.148

and copy/paste the IP (208.118.235.148) to dig;

$ dig +short -x 208.118.235.148
wildebeest.gnu.org.

Your task now is to automate this in one command.

Hint:

$ dig +short -x $(dig +short www.gnu.org | egrep -e "^[0-9\.]{4}")
wildebeest.gnu.org.

To get an idea of how we came up with this odd command line, we will guide you through our thinking.

We start with finding the IP of www.gnu.org

$ dig +short www.gnu.org
wildebeest.gnu.org.
208.118.235.148

This is too much information, so we use egrep to keep only the lines containing an IP address. An IP address is four numbers separated by dots, so we create a regular expression matching the start of an IP address: ^[0-9\.]{4}. This expression says:

  • the line shall start with (this is what ^ means)
  • four occurrences of a digit (0-9) or a dot (\.)

Using this we get:

$ dig +short www.gnu.org | egrep -e "^[0-9\.]{4}"
208.118.235.148

Nice! This is what we want as argument (again, not stdin, so we shall not pipe) to a new dig execution.

Note: this regular expression is not perfect, since it will also match some lines that are not IP addresses, but it will do fine for this exercise. A better regular expression would be ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$.

We could store the output of the command above in a variable and use the variable:

$ IP=$(dig +short www.gnu.org | egrep -e "^[0-9\.]{4}")
$ dig +short -x $IP
wildebeest.gnu.org.

The above is hopefully ok. But we think it's a bit cumbersome so we suggest skipping the variable:

$ dig +short -x $(dig +short www.gnu.org | egrep -e "^[0-9\.]{4}")
wildebeest.gnu.org.

... and for the keen student we would like to point out that we actually could use a pipe:

$ dig +short www.gnu.org | egrep -e "^[0-9\.]{4}" | xargs dig +short -x 
wildebeest.gnu.org.

In the last command line we pipe the output of dig +short www.gnu.org | egrep -e "^[0-9\.]{4}" to xargs. xargs reads from stdin and starts dig +short -x with the text read from stdin appended as an argument. So the second dig is started (by xargs) like this: dig +short -x 208.118.235.148

Write a small script that does the above

The script shall:

  • take the domain as an argument
  • exit with 1 and a printout to stderr if no argument was given
  • exit with the status code of dig

Hint:

Create a file called rcheck_domain.sh

#!/bin/bash

# Domain to look up, given as the first argument
DOMAIN=$1

if [ "$DOMAIN" != "" ]
then
    # Look up the IP of the domain, then reverse look up that IP
    dig +short -x $(dig +short "$DOMAIN" | grep "^[0-9]*\.")
    exit $?
else
    echo "Missing domain" 1>&2
    exit 1
fi
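
Make the script executable and test it (the expected output follows from the earlier exercises):

$ chmod +x rcheck_domain.sh
$ ./rcheck_domain.sh www.gnu.org
wildebeest.gnu.org.
$ ./rcheck_domain.sh
Missing domain
$ echo $?
1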

You can find complete source code for the suggested solutions in the . directory in this zip file or in the git repository.

Write a bash function that does the above

The function shall:

  • take the domain as an argument
  • exit with 1 and a printout to stderr if no argument was given
  • exit with the status code of dig

Hint:

$ rcheck_domain() { DOMAIN=$1; if [ "$DOMAIN" = "" ] ; then echo "Missing domain" 1>&2; return 1 ; fi ; dig +short -x $(dig +short "$DOMAIN" | grep "^[0-9]*\.") ; }

Note that the function uses return rather than exit: calling exit in a function defined in your interactive shell would close the shell itself.

You can now use the function above like this:

$ rcheck_domain www.gnu.org
wildebeest.gnu.org.
$ rcheck_domain www.sunet.se
webc.sunet.se.
$ rcheck_domain www.funet.fi
www.funet.fi.

If you want to be able to use it in the future, put the function in your ~/.bashrc file.
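
One way to do that, assuming the function is defined in your current shell, is to let bash print the definition and append it (declare -f prints the definition of a function):

$ declare -f rcheck_domain >> ~/.bashrc
$ source ~/.bashrc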

Exercises on checking network

Use ping to check if 8.8.8.8 is up

Hint:

$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=44 time=23.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=44 time=25.3 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=44 time=23.5 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=44 time=24.4 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=44 time=24.7 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=44 time=23.4 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5009ms
rtt min/avg/max/mdev = 23.485/24.202/25.341/0.706 ms

Press Ctrl-C to interrupt the program.

Ping the host again, but at most 3 times

Hint:

$ ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=44 time=25.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=44 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=44 time=24.7 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 23.851/24.845/25.935/0.872 ms

What was the return value of the above?

If you're on a proper network, the host 8.8.8.8 should be "pingable", so you should get 0 back. By a proper network we mean a network that does not block ping.

What can such return value be used for?

Hint:

$ ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=44 time=25.9 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=44 time=23.8 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=44 time=24.7 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 23.851/24.845/25.935/0.872 ms
$ echo $?
0

The return value can be used to check whether a host is up and to take action depending on the result.
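
For example, a minimal sketch of such a check (using iputils ping on Linux: -c 1 sends a single packet, -W 2 waits at most two seconds for the reply):

if ping -c 1 -W 2 8.8.8.8 > /dev/null 2>&1
then
    echo "8.8.8.8 is up"
else
    echo "8.8.8.8 is down (or ping is blocked)"
fi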

Exercises on web pages

Use curl to get the html page of www.gnu.org

Hint:

$ curl www.gnu.org
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">

<head>
<!-- start of server/head-include-1.html -->
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link rev="made" href="mailto:webmasters@gnu.org" />
<link rel="icon" type="image/png" href="/graphics/gnu-head-mini.png" />
<meta name="ICBM" content="42.355469,-71.058627" />
<meta name="DC.title" content="gnu.org" />
<link rel="stylesheet" href="/combo.css" media="screen" />
<link rel="stylesheet" href="/mini.css" media="handheld" />
<link rel="stylesheet" href="/layout.min.css" media="screen" />
<link rel="stylesheet" href="/print.min.css" media="print" />
<!-- end of server/head-include-1.html -->

<!-- end of server/header.html -->

<!-- Parent-Version: 1.79 -->

<title>The GNU Operating System and the Free Software Movement</title>

<meta http-equiv="Keywords" content="GNU, FSF, Free Software Foundation, Linux, Emacs, GCC, Unix, Free Software, Libre Software, Operating System, GNU Kernel, GNU Hurd" />
<meta http-equiv="Description" content="Since 1983, developing the free Unix style operating system GNU, so that computer users can have the freedom to share and improve the software they use." />
<link rel="alternate" title="Planet GNU" href="http://planet.gnu.org/rss20.xml" type="application/rss+xml" />
....


Use curl to get the html page of www.gnu.org and store it in a file

Use curl to get the html page of www.gnu.org and store it in a file called www-gnu-org.html. What is the return value?

Hint:

$ curl www.gnu.org  -o www-gnu-org.html
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 26560    0 26560    0     0  64738      0 --:--:-- --:--:-- --:--:-- 64780

On success curl returns 0.


Use curl to get a non-existing html page

Use curl to try to get www.sunet.se/this-page-does-not-exists.html. What exit code is returned?

Hint:

$ curl www.sunet.se/this-page-does-not-exists.html
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://www.sunet.se/this-page-does-not-exists.html">here</a>.</p>
<hr>
<address>Apache/2.4.7 (Ubuntu) Server at www.sunet.se Port 80</address>
</body></html>
$ echo $?
0

curl succeeds and returns 0; the HTTP-level 301 response still counts as a successful transfer from curl's point of view.
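
If you want HTTP-level errors to show up in the exit code, curl has a -f (--fail) option that makes it exit with 22 for HTTP status codes of 400 and above. Note that the 301 above still counts as success even with -f; you would also need -L to follow the redirect, and only if the redirected-to page really is missing (an assumption about this server) would you get:

$ curl -sfL www.sunet.se/this-page-does-not-exists.html
$ echo $?
22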


Use curl to get a html page from a non-existing server

Use curl to try to get http://madeupurl.thatreallydoesn.otexist.com/this-page-does-not-exists.html. What exit code is returned?

Hint:

$ curl http://madeupurl.thatreallydoesn.otexist.com/this-page-does-not-exists.html
curl: (6) Could not resolve host: madeupurl.thatreallydoesn.otexist.com
$ echo $?
6

curl fails to look up the server and returns 6 as exit code.

Use w3m to get the text of an html page

Use w3m to get the web page of www.nytimes.com.

Hint:

Check out the -dump option to w3m.

Suggested solution:

$ w3m -dump www.nytimes.com

Count the number of times London is mentioned

Use w3m to get the web page of www.nytimes.com and check how many times London is mentioned.

Hint:

w3m outputs the web content (with the html removed), so you should be able to pipe the output to grep. You can use the -c option to grep to count the occurrences.

Suggested solution:

$ w3m -dump www.nytimes.com | grep -c London
Received cookie: nyt-a=3405f3e1be1967838ead183860360c4934bd00a943d8ea9632efc3050e5b22ab
Received cookie: nyt-a=3405f3e1be1967838ead183860360c4934bd00a943d8ea9632efc3050e5b22ab
0
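
Note that grep matches case sensitively, so "LONDON" and "london" are not counted above. Adding the -i option makes the count case insensitive:

$ w3m -dump www.nytimes.com | grep -ci london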

Write a bash function that searches a web page for a search string

The function shall take two arguments:

  1. the site
  2. the search expression

If any of the arguments is missing the function shall output an error message to stderr.

Suggested solution:

$ wgrep() { SITE=$1; REG_EXP=$2; if [ "$REG_EXP" = "" ] ; then echo "Missing argument(s)" 1>&2 ; else w3m -dump $SITE | grep -c $REG_EXP ; fi ; }

You can now use the function to check the number of occurrences of words on a web site. Examples below:

$ wgrep www.nytimes.com Europe
Received cookie: nyt-a=3405f3e1be1967838ead183860360c4934bd00a943d8ea9632efc3050e5b22ab
Received cookie: nyt-a=3405f3e1be1967838ead183860360c4934bd00a943d8ea9632efc3050e5b22ab
0

Remove the printouts to stderr from w3m in the function above

Suggested solution:

$ wgrep() { SITE=$1; REG_EXP=$2; if [ "$REG_EXP" = "" ] ; then echo "Missing argument(s)" 1>&2 ; else w3m -dump $SITE 2>/dev/null | grep -c $REG_EXP ; fi ; }

You can now use the function to check the number of occurrences of words on a web site. Examples below:

$ wgrep www.nytimes.com Europe
0
$ wgrep www.nytimes.com Super
5
$ wgrep www.dailymirror.com London
11

Write a function that uses the function above to check several sites

Write a function that checks, using the function above, the following web sites for a search string:

  • www.nytimes.com
  • www.mirror.co.uk
  • www.daily-sun.com
  • www.washingtonpost.com
  • www.chicagotribune.com
  • www.theguardian.com/us
  • timesofindia.indiatimes.com
  • www.dailymail.co.uk

The search string shall be given as argument to the function.

Suggested solution:

To start off with, we can write a function that echoes the sites.

$ dwgrep() { for site in www.nytimes.com www.mirror.co.uk www.daily-sun.com www.washingtonpost.com www.chicagotribune.com www.theguardian.com/us timesofindia.indiatimes.com www.dailymail.co.uk ; do echo $site; done ; } 
$ dwgrep
www.nytimes.com
www.mirror.co.uk
www.daily-sun.com
www.washingtonpost.com
www.chicagotribune.com
www.theguardian.com/us
timesofindia.indiatimes.com
www.dailymail.co.uk

Ok, it seems to work. Let's check the search word (given as argument):

$ dwgrep() { REG_EXP=$1 ; if [ "$REG_EXP" = "" ] ; then echo "Missing argument(s)" 1>&2 ; else for site in www.nytimes.com www.mirror.co.uk www.daily-sun.com www.washingtonpost.com www.chicagotribune.com www.theguardian.com/us timesofindia.indiatimes.com www.dailymail.co.uk ; do echo $site $REG_EXP ; done ; fi ; }
$ dwgrep 
Missing argument(s)
$ dwgrep London
www.nytimes.com London
www.mirror.co.uk London
www.daily-sun.com London
www.washingtonpost.com London
www.chicagotribune.com London
www.theguardian.com/us London
timesofindia.indiatimes.com London
www.dailymail.co.uk London

Ok, let's invoke the function we wrote earlier:

$ dwgrep() { REG_EXP=$1 ; if [ "$REG_EXP" = "" ] ; then echo "Missing argument(s)" 1>&2 ; else for site in www.nytimes.com www.mirror.co.uk www.daily-sun.com www.washingtonpost.com www.chicagotribune.com www.theguardian.com/us timesofindia.indiatimes.com www.dailymail.co.uk ; do echo -n "$site: " ; wgrep $site $REG_EXP ; done ; fi ; }
$ dwgrep London
www.nytimes.com: 0
www.mirror.co.uk: 11
www.daily-sun.com: 0
www.washingtonpost.com: 0
www.chicagotribune.com: 1
www.theguardian.com/us: 0
timesofindia.indiatimes.com: 0
www.dailymail.co.uk: 19

Not a lot of code and lots of work done :)
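
If you want to keep the function around, the same one-liner is easier to read written out over several lines, e.g. in a script or in your ~/.bashrc (a sketch with the same behaviour, using return since it is a function):

dwgrep()
{
    REG_EXP=$1
    if [ "$REG_EXP" = "" ]
    then
        echo "Missing argument(s)" 1>&2
        return 1
    fi
    # Loop over the sites and count the occurrences on each
    for site in www.nytimes.com www.mirror.co.uk www.daily-sun.com \
                www.washingtonpost.com www.chicagotribune.com \
                www.theguardian.com/us timesofindia.indiatimes.com \
                www.dailymail.co.uk
    do
        echo -n "$site: "
        wgrep "$site" "$REG_EXP"
    done
}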

Netcat and telnet to the rescue

Use telnet to get a webpage

Use telnet to open a connection to www.apache.org (the default port for web/http is 80). Then request the start page by typing GET / HTTP/1.0 followed by an empty line.


Suggested solution:

$ telnet www.apache.org 80 
Trying 88.198.26.2...
Connected to www.apache.org.
Escape character is '^]'.
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Mon, 06 Feb 2017 08:42:55 GMT
Server: Apache/2.4.7 (Ubuntu)
Last-Modified: Mon, 06 Feb 2017 08:10:28 GMT
ETag: "d572-547d82a205060"
Accept-Ranges: bytes
Content-Length: 54642
Vary: Accept-Encoding
Cache-Control: max-age=3600
Expires: Mon, 06 Feb 2017 09:42:55 GMT
Connection: close
Content-Type: text/html

<!DOCTYPE html>
<html lang="en">
<head>

 <meta charset="utf-8">
 <meta http-equiv="X-UA-Compatible" content="IE=edge">
 <meta name="viewport" content="width=device-width, initial-scale=1">
 <meta name="description" content="Home page of The Apache Software Foundation">
 

......

Use netcat to get a webpage

Use netcat to open a connection to www.apache.org (the default port for web/http is 80) and request the start page the same way as with telnet.


Suggested solution:

$ nc www.apache.org 80 
GET / HTTP/1.0

HTTP/1.1 200 OK
Date: Mon, 06 Feb 2017 08:39:15 GMT
Server: Apache/2.4.7 (Ubuntu)
Last-Modified: Mon, 06 Feb 2017 08:10:28 GMT
ETag: "d572-547d82a205060"
Accept-Ranges: bytes
Content-Length: 54642
Vary: Accept-Encoding
Cache-Control: max-age=3600
Expires: Mon, 06 Feb 2017 09:39:15 GMT
Connection: close
Content-Type: text/html

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta name="description" content="Home page of The Apache Software Foundation">
  
........
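
If you do not want to type the request by hand, you can let printf produce it and pipe it to netcat (the second \r\n is the empty line that ends an HTTP request; depending on your netcat variant you may need an option such as -q to wait for the reply before exiting):

$ printf 'GET / HTTP/1.0\r\n\r\n' | nc www.apache.org 80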

Use telnet to send an email

We do not intend this to be an exercise where we expect you to learn and remember every detail. Think of it more as a way to get an understanding of a protocol and how a client talks to a server.

Connect to an email server:

$ telnet mail.youremailprovider.com 25
Trying 89.18.105.40...
Connected to mail.youremailprovider.com.
Escape character is '^]'.


Say hi:

EHLO myown
250-mail.youremailprovider.com Hello myown [81.170.163.11]
250-SIZE 52428800
250-8BITMIME
250-PIPELINING
250-AUTH CRAM-MD5
250-STARTTLS
250 HELP


Specify who the email is from:

MAIL FROM: myemail@myemailserver.com
250 OK

Specify who the email is to:

RCPT TO: hesa@youremailprovider.com
250 Accepted

Write some content:

DATA
354 Enter message, ending with "." on a line by itself
Hi there
.
250 OK id=1cagHb-00020O-Ab

The mail server mail.youremailprovider.com is of course a fake one. When writing this exercise, Henrik used his own mail server.

Use netcat to launch a webserver

In one terminal, start netcat listening on port 8080 with the arguments -l -p 8080.

$ nc -l -p 8080

This will open up a listening netcat, waiting for someone to connect on port 8080.

In another terminal, start netcat as a client connecting to that port (note: the destination port is given as a plain argument, not with -p).

$ nc localhost 8080

This will open up a connection to port 8080, which is where your listening netcat "awaits your call".

Your netcat sessions are now "connected" over the local network, so if you type something in one of the terminals you should see the same text in the other, and vice versa.
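
The same pair of commands can move a file between the two terminals by redirecting stdin and stdout (a sketch; somefile.txt is just a made-up name, and note that the OpenBSD netcat variant writes the listen port without -p, as nc -l 8080):

$ nc -l -p 8080 > received.txt         # terminal 1: listen, save what arrives
$ nc localhost 8080 < somefile.txt     # terminal 2: connect and send the file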


Write a small script that outputs a valid web page to stdout

The script shall output something like this:

HTTP/1.1 200 OK
Connection: close
Date: Mon Feb  6 10:22:50 CET 2017
Server: netcat special deal
Content-Length: 136
Content-Type: text/html; charset=utf-8
Cache-Control: max-age=60



<!DOCTYPE html>
<html>

<head>
<title>Page Title</title>
</head>

<body>
Current date is: Mon Feb  6 10:22:50 CET 2017
</body>

</html>

Suggested solution:

Here's a sample script:

#!/bin/bash

# Prints the html document (the body of the http response) to stdout
content()
{
    echo "<!DOCTYPE html>"
    echo "<html>"
    echo ""
    echo "<head>"
    echo "<title>Page Title</title>"
    echo "</head>"
    echo ""
    echo "<body>"
    echo "Current date is: $(date)"
    echo "</body>"
    echo ""
    echo "</html>"
}

# Prints the http response headers; a single empty line terminates the headers
header()
{
    echo "HTTP/1.1 200 OK"
    echo "Connection: close"
    echo "Date: $(date)"
    echo "Server: netcat special deal"
    echo "Content-Length: $LENGTH"
    echo "Content-Type: text/html; charset=utf-8"
    echo "Cache-Control: max-age=60"
    echo ""
}

# Size of the body in bytes, needed for the Content-Length header
LENGTH=$(content | wc -c)

header
content



You can find complete source code for the suggested solutions in the webserver-nc directory in this zip file or in the git repository.
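
A quick sanity check of the script: stripping the headers (everything up to and including the first empty line) and counting the remaining bytes should give the same number as the Content-Length header:

$ ./webserver.sh | sed '1,/^$/d' | wc -c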

Use the script together with netcat to make a web server

The script prints to stdout. Use a pipe to redirect the output of your script to stdin of netcat. Add the same listening flags as above to netcat.

Once started, you should be able to go to the URL localhost:8080 in a browser. Try reloading the page. Explain what happens when reloading.

Suggested solution:

$ ./webserver.sh | nc -l -p 8080

The page loads fine once. After that, the netcat session is done and the page cannot be reloaded.



Make the server start all over again in a loop

Start the above command in a loop (use while true; do ......; done in bash). Once started, you should be able to go to the URL localhost:8080 in a browser. Try reloading the page. Explain what happens when reloading.

Suggested solution:

$ while true; do ./webserver.sh | nc -l -p 8080 ; done

The page loads fine once per netcat session. After the page has been loaded and netcat exits, a new netcat session is started, so the page can be reloaded.

Referer

Check the output from your netcat session. It most likely looks something like this:

GET / HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
DNT: 1
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,sv;q=0.6

GET /favicon.ico HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36
Accept: image/webp,image/*,*/*;q=0.8
DNT: 1
Referer: http://localhost:8080/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,sv;q=0.6

The important line in this printout right now is

Referer: http://localhost:8080/


Now, access this page (the same as before) by clicking here: localhost:8080. The output in the terminal where you're running your netcat/webserver should look something like this:

GET / HTTP/1.1
Host: localhost:8080
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
DNT: 1
Referer: http://virt08.itu.chalmers.se/mediawiki/index.php?title=MoreBash:Exercises_-_Scripts_-_Network_Tools&action=submit
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,sv;q=0.6

GET /favicon.ico HTTP/1.1
Host: localhost:8080
Connection: keep-alive
User-Agent: Mozilla/5.0 (X11; Fedora; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36
Accept: image/webp,image/*,*/*;q=0.8
DNT: 1
Referer: http://localhost:8080/
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8,sv;q=0.6

The important lines in this printout right now are

GET / HTTP/1.1
Referer: http://virt08.itu.chalmers.se/mediawiki/index.php?title=MoreBash:Exercises_-_Scripts_-_Network_Tools

and

GET /favicon.ico HTTP/1.1
Referer: http://localhost:8080/

The first Referer comes from the browser (client), which tells the server that it reached this page by clicking a link on the page http://virt08.itu.chalmers.se/mediawiki/index.php?title=MoreBash:Exercises_-_Scripts_-_Network_Tools. The second Referer comes from the browser checking, without us asking it to, whether the server has a favicon; that request refers back to the web page that triggered it.
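
You can also set the Referer header yourself. With the netcat web server from the earlier exercise listening on port 8080, curl's -e (--referer) option sends any referer you like (the www.example.org URL is just a made-up value), and it should show up in the netcat terminal:

$ curl -e 'http://www.example.org/some-page' http://localhost:8080/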


END_INCLUSION of MoreBash:Exercises_-_Network_Tools