tag:blogger.com,1999:blog-75001443340831582912024-03-14T00:20:34.084-06:00scyph.us (sī′fəs)A cup of random knowledge.Unknownnoreply@blogger.comBlogger5125tag:blogger.com,1999:blog-7500144334083158291.post-40323337978852059352020-07-29T12:54:00.002-06:002020-07-29T12:58:00.609-06:00tcpdump - Show only http headers<div>To show only the HTTP header lines:</div>
<pre>tcpdump -A -l -s 0 'tcp port 8088 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | grep ': '</pre>
<div>If you want the request listed as well:</div>
<pre>tcpdump -A -l -s 0 'tcp port 8088 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | egrep '(GET|: )'</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-7500144334083158291.post-68600633843061831672020-07-28T16:49:00.002-06:002020-07-28T16:55:44.892-06:00Ansible - Task executed multiple times due to a Broken Pipe<h1 style="text-align: left;">Why is this a problem?</h1><div><br /></div><div>If a task runs a non-idempotent command, that command will be started over from the beginning, which may cause a failure in your playbook execution. I recently ran into a situation where Ansible was rerunning a task on a node even though the task was not configured to be retried. It turns out that the <a href="https://github.com/ansible/ansible/blob/b4184aa50e902131e1d970ffcd2588fb199d11d2/lib/ansible/plugins/connection/ssh.py#L418-L426">ssh connection plugin will retry actions auto-magically</a> if the ssh connection fails while waiting for the task to execute. Here's how I was able to reproduce the issue.</div>
<h2 style="text-align: left;">
Environment Setup</h2>
I started by setting up a fake environment using containers. My <a href="https://github.com/mwhahaha/scale-ssh">scale-ssh</a> project launches a set of containers running ssh that Ansible can then be run against.<br />
<pre>$ git clone https://github.com/mwhahaha/scale-ssh
$ cd scale-ssh
$ ./build-container.sh
$ cp ~/.ssh/id_rsa.pub authorized_keys
$ ./run-containers.sh 10
</pre>
After the containers have launched, we have an Ansible inventory file that can be used to execute Ansible via SSH against our freshly launched containers.<br />
<br />
<h2 style="text-align: left;">
Setting up the Broken Pipe</h2>
<div>
In order to reproduce the connection failure, we want to inject ourselves between Ansible and the target hosts. We can do this with an ssh ProxyCommand when connecting to our container network (e.g. 172.16.86.0/24); killing the proxy process later will simulate the broken pipe, and the aggressive ServerAliveInterval/ServerAliveCountMax settings make the ssh client notice the dead connection almost immediately. Add the following configuration to your ~/.ssh/config file.</div>
<div>
<br /></div>
<div>
<pre>Host 172.16.86.*
ProxyCommand ssh localhost nc %h %p
UserKnownHostsFile=/dev/null
StrictHostKeyChecking=no
ControlMaster=auto
ControlPersist=60s
ServerAliveInterval=1
ServerAliveCountMax=1
PreferredAuthentications=publickey
</pre>
</div>
<div>
For this to work, you will need ncat available on your system. Make sure to install it using your favorite package manager.</div>
<h2 style="text-align: left;">
Running Ansible</h2>
<div>
We will create a playbook that just runs a shell command against the target containers via SSH. We need to create an ansible.cfg to correctly set up the SSH options we want Ansible to use when connecting. Our ansible.cfg will look like...</div>
<div>
<br /></div>
<div>
<pre>[defaults]
forks = 10
[ssh_connection]
control_path_dir = /tmp/ansible-ssh
retries = 8
pipelining = True
</pre>
<div>Next we need a dummy playbook that runs and waits ~5 minutes while we mess with our connections. Create broken-pipe.yaml:</div>
<pre>- hosts: all
  gather_facts: false
  tasks:
    - shell: sleep 301
</pre>
</div>
Now we can run this playbook and see what happens.<div><h2 style="text-align: left;">Ansible Command Execution</h2><div>Our containers mount the systemd socket, so when Ansible executes our shell command in a container we get an entry in the journal on the host running the containers. In a separate window, you'll want to run `journalctl -f -t ansible-command` to watch the journal and see when Ansible is running the shell task.</div><div><br /></div><pre>$ ansible-playbook -i inventory.ini broken-pipe.yaml</pre><div><br /></div><div>In the other window, where journalctl is running, you should see some ansible-command log lines.</div><div>
<pre>$ journalctl -f -t ansible-command
-- Logs begin at Tue 2020-07-28 10:14:26 MDT. --
Jul 28 16:38:39 myhostname ansible-command[23231]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 28 16:38:39 myhostname ansible-command[23252]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 28 16:38:39 myhostname ansible-command[23255]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 28 16:38:39 myhostname ansible-command[23277]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 28 16:38:39 myhostname ansible-command[23279]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 28 16:38:39 myhostname ansible-command[23284]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None</pre></div><h2 style="text-align: left;">Break the Pipe</h2><div>Now that the command is running, pick one of the ncat proxy processes and stop it to simulate a broken pipe.</div><div>
<pre>$ ps -x | grep nc
22703 pts/0 S+ 0:00 ssh localhost nc 172.16.86.89 22
22704 pts/0 S+ 0:00 ssh localhost nc 172.16.86.95 22
22705 pts/0 S+ 0:00 ssh localhost nc 172.16.86.97 22
22706 pts/0 S+ 0:00 ssh localhost nc 172.16.86.93 22
22707 pts/0 S+ 0:00 ssh localhost nc 172.16.86.90 22
22708 pts/0 S+ 0:00 ssh localhost nc 172.16.86.92 22
22709 pts/0 S+ 0:00 ssh localhost nc 172.16.86.94 22
22710 pts/0 S+ 0:00 ssh localhost nc 172.16.86.96 22
22711 pts/0 S+ 0:00 ssh localhost nc 172.16.86.91 22
22712 pts/0 S+ 0:00 ssh localhost nc 172.16.86.88 22
$ kill 22712
</pre>
</div><div>Once you've run the kill command, you should see a new ansible-command entry pop up in the journalctl output. This is Ansible retrying the command.</div><div><pre>Jul 28 16:38:39 myhostname ansible-command[23284]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jul 28 16:38:49 myhostname ansible-command[23333]: Invoked with _raw_params=sleep 301 _uses_shell=True warn=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None</pre></div><div>
<h2 style="text-align: left;">How do I fix it?</h2><div>If you have a long-running process, you can use async and poll on the task to prevent connectivity-related issues from restarting the execution. The example playbook can be adjusted with async and poll; note that the async value is effectively an upper time limit on the execution, and poll is how often Ansible checks whether the job has finished.</div><div>
<pre>- hosts: all
  gather_facts: false
  tasks:
    - shell: sleep 301
      async: 305
      poll: 3</pre>
</div><div><br /></div><div><br /></div>
<br /></div></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-7500144334083158291.post-64571423547363470512010-07-02T09:21:00.000-06:002010-07-02T09:21:01.772-06:00Vyatta -- SIP Connection Tracking for VOIPIf you're going to be running VOIP devices behind a <a href="http://www.vyatta.com/">Vyatta</a> router, you may need to enable some extra connection tracking options on your firewall to handle the SIP traffic correctly.<br />
<br />
To enable sip tracking, log in to your router and do the following:<br />
<pre>vyatta@rtr01:~$ configure
vyatta@rtr01# set firewall conntrack-options sip enable-indirect-media
vyatta@rtr01# set firewall conntrack-options sip enable-indirect-signalling
vyatta@rtr01# commit
vyatta@rtr01# save</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-7500144334083158291.post-19569843130559322182010-07-02T09:14:00.000-06:002010-07-02T09:14:34.989-06:00Vyatta -- Grouping Two VRRP Interfaces TogetherIf you're using <a href="http://www.vyatta.com/">Vyatta</a> as a router and you want to group two vrrp interfaces together for redundancy, use the sync-group option to have the two interfaces fail over together. This is useful if you have two <a href="http://www.vyatta.com/">Vyatta</a> routers on two separate switches and you want to fail over if one of the switches fails or if only one interface on the server fails.<br />
<br />
Here is an example with rtr01 being the master router. If just eth0 or just eth1 fails, both vrrp groups fail and service is transferred to rtr02.<br />
<br />
rtr01 network configuration<br />
- eth0 real 10.0.0.2<br />
- eth0 vrrp 10.0.0.1<br />
- eth1 real 10.1.0.2<br />
- eth1 vrrp 10.1.0.1<br />
<br />
<a href="http://www.vyatta.com/">Vyatta</a> interface config:<br />
<br />
<pre>interfaces {
    ethernet eth0 {
        address 10.0.0.2/16
        hw-id 00:13:72:65:b4:cf
        vrrp {
            vrrp-group 1 {
                advertise-interval 1
                priority 150
                <b>sync-group failover</b>
                virtual-address 10.0.0.1/16
            }
        }
    }
    ethernet eth1 {
        address 10.1.0.2/16
        hw-id 00:13:72:65:b4:d0
        vrrp {
            vrrp-group 2 {
                advertise-interval 1
                priority 150
                <b>sync-group failover</b>
                virtual-address 10.1.0.1/16
            }
        }
    }
}</pre>
<div><br />
</div><br />
rtr02 network configuration<br />
- eth0 real 10.0.0.3<br />
- eth0 vrrp 10.0.0.1<br />
- eth1 real 10.1.0.3<br />
- eth1 vrrp 10.1.0.1<br />
<br />
<a href="http://www.vyatta.com/">Vyatta</a> interface config:<br />
<br />
<pre>interfaces {
    ethernet eth0 {
        address 10.0.0.3/16
        hw-id 00:13:72:65:69:a9
        vrrp {
            vrrp-group 1 {
                advertise-interval 1
                priority 20
                <b>sync-group failover</b>
                virtual-address 10.0.0.1/16
            }
        }
    }
    ethernet eth1 {
        address 10.1.0.3/16
        hw-id 00:13:72:65:69:aa
        vrrp {
            vrrp-group 2 {
                advertise-interval 1
                priority 20
                <b>sync-group failover</b>
                virtual-address 10.1.0.1/16
            }
        }
    }
}</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-7500144334083158291.post-8906027005473951202010-06-28T09:02:00.004-06:002010-06-28T09:07:49.928-06:00mysql -- Error reading master configurationIf you are trying to set up a slave and you repeatedly see this error message in the error log, it may be that the master configuration isn't correct. You may need to try resetting the slave with:<br /><pre>mysql> reset slave;
Query OK, 0 rows affected (0.00 sec)</pre><div>Then reissue the change master command to set up the slave:</div><pre>mysql> CHANGE MASTER TO
    -> MASTER_HOST='xxx.xxx.xxx.xx',
    -> MASTER_USER='xxxxxxxx',
    -> MASTER_PASSWORD='xxxxxxx',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='mysql-bin.000003',
    -> MASTER_LOG_POS=1372;
Query OK, 0 rows affected (0.01 sec)</pre><div>Restart the slave:</div><pre>mysql> start slave;
Query OK, 0 rows affected (0.00 sec)</pre>Unknownnoreply@blogger.com