AWS - Clean up left over EBS volumes

Sometimes you mess up and forget to have EC2 instances delete their volumes on termination. When this happens you may need to clean them up. If you have a list of the AMI IDs that were used in each region, here is a script that uses the AMI snapshot IDs to find volumes that are no longer attached and need to be cleaned up:

```python
import boto3
import botocore.exceptions as boto_exc

AMI_MAP = {
    "us-east-1": "ami-0123456789abcef12",
    "us-east-2": "ami-0123456789abcef13",
    "us-west-1": "ami-0123456789abcef14",
    "us-west-2": "ami-0123456789abcef15",
}

def get_snapshot_from_id(client, ami_id):
    # The snapshot backing the AMI's first block device
    resp = client.describe_images(ImageIds=[ami_id])
    return resp["Images"][0]["BlockDeviceMappings"][0]["Ebs"]["SnapshotId"]

session = boto3.Session()
ec2 = session.client("ec2")
count = 0
for region in ec2.describe_regions()["Regions"]:
    name = region["RegionName"]
    if name not in AMI_MAP:
        continue
    client = session.client("ec2", region_name=name)
    try:
        snapshot_id = get_snapshot_from_id(client, AMI_MAP[name])
    except boto_exc.ClientError as err:
        print(f"{name}: unable to look up AMI: {err}")
        continue
    # "available" volumes exist but are not attached to any instance
    volumes = client.describe_volumes(
        Filters=[
            {"Name": "snapshot-id", "Values": [snapshot_id]},
            {"Name": "status", "Values": ["available"]},
        ]
    )["Volumes"]
    for volume in volumes:
        print(f"{name}: {volume['VolumeId']}")
        count += 1
print(f"Found {count} leftover volumes")
```
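Once you've reviewed the list, the delete step is straightforward. Here is a hedged sketch of a helper for it (the function name is mine; `client` is assumed to be a boto3 EC2 client, whose `delete_volume` call accepts a `DryRun` flag that raises an error instead of deleting, making a safe first pass possible):

```python
def delete_unattached(client, volume_ids, dry_run=True):
    """Delete unattached EBS volumes; with dry_run=True, AWS raises
    a DryRunOperation error instead of deleting anything."""
    deleted = []
    for volume_id in volume_ids:
        try:
            client.delete_volume(VolumeId=volume_id, DryRun=dry_run)
            deleted.append(volume_id)
        except Exception as err:  # botocore raises ClientError here
            print(f"{volume_id}: {err}")
    return deleted
```

Run it once with `dry_run=True` to confirm permissions and the volume list, then again with `dry_run=False` to actually delete.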

tcpdump - Show only http headers

Only the headers:

```shell
tcpdump -A -l -s 0 'tcp port 8088 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | grep ': '
```

If you want the request listed as well:

```shell
tcpdump -A -l -s 0 'tcp port 8088 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | egrep '(GET|: )'
```
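The filter expression is just header arithmetic: `ip[2:2]` is the IPv4 total length, `(ip[0]&0xf)<<2` is the IP header length in bytes, and `(tcp[12]&0xf0)>>2` is the TCP header length, so the match keeps only segments whose TCP payload is non-empty. A minimal Python sketch of the same check (the function name is mine, not part of tcpdump):

```python
def has_tcp_payload(ip_header: bytes, tcp_header: bytes) -> bool:
    """Mirror of the BPF expression: IP total length minus both
    header lengths leaves the TCP payload size."""
    total_len = int.from_bytes(ip_header[2:4], "big")  # ip[2:2]: 16-bit total length
    ip_hdr_len = (ip_header[0] & 0x0F) << 2            # IHL in 32-bit words -> bytes
    tcp_hdr_len = (tcp_header[12] & 0xF0) >> 2         # data offset in words -> bytes
    return total_len - ip_hdr_len - tcp_hdr_len != 0
```

An empty ACK (20-byte IP header + 20-byte TCP header, total length 40) fails the check, while anything carrying data passes.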

Ansible - Task executed multiple times due to a Broken Pipe

Why is this a problem? If you are running a command that is non-idempotent, the command will be started over again, which may cause a failure in your playbook execution. Recently I ran into a situation where Ansible was rerunning a task on a node even though the task was not configured to be retried. It turns out that the SSH connection will retry actions automagically if the connection fails while waiting for the task to execute.

Here's how I was able to reproduce the issue.

Environment Setup

I started by setting up a fake environment using containers. I have a scale-ssh project that can be used to launch a set of containers running ssh that Ansible can be run against.

```shell
$ git clone
$ cd scale-ssh
$ ./
$ cp ~/.ssh/ authorized_keys
$ ./ 10
```

After the containers have launched, we have an Ansible inventory file that can be used to execute Ansible via SSH against the freshly launched containers.
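The failure mode is easy to model outside of Ansible. This is a hedged sketch, not Ansible's actual connection plugin code, but it shows why retry-on-connection-failure is dangerous: a transport that re-invokes the action when the connection drops will happily run a non-idempotent task twice.

```python
def run_with_retry(action, retries=1):
    """Re-invoke action when the 'connection' drops, the way the
    ssh transport retries automatically."""
    attempts = 0
    while True:
        try:
            return action()
        except ConnectionError:
            attempts += 1
            if attempts > retries:
                raise

counter = {"runs": 0}

def non_idempotent_task():
    counter["runs"] += 1                      # side effect happens first...
    if counter["runs"] == 1:
        raise ConnectionError("broken pipe")  # ...then the connection dies
    return "ok"

run_with_retry(non_idempotent_task)
print(counter["runs"])  # prints 2 -- the side effect ran twice
```

If the side effect were `useradd`, a database insert, or an append to a file, the second run is where the playbook blows up.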

Vyatta -- SIP Connection Tracking for VOIP

If you're going to be running VoIP devices behind a Vyatta router, you may need to enable some extra connection-tracking options on your firewall to handle the SIP traffic correctly. To enable SIP tracking, log in to your router and do the following:

```
vyatta@rtr01:~$ configure
vyatta@rtr01# set firewall conntrack-options sip enable-indirect-media
vyatta@rtr01# set firewall conntrack-options sip enable-indirect-signalling
vyatta@rtr01# commit
vyatta@rtr01# save
```

Vyatta -- Grouping Two VRRP Interfaces Together

If you're using Vyatta as a router and you want to group two VRRP interfaces together for redundancy, use the sync-group option to have the two interfaces fail over together. This is useful if you have two Vyatta routers on two separate switches and you want to fail over if one of the switches fails or if only one interface on the server fails.

Here is an example with rtr01 being the master router. If just eth0 or just eth1 fails, both VRRP groups fail and service is transferred to rtr02.

rtr01 network configuration:

- eth0 real
- eth0 vrrp
- eth1 real
- eth1 vrrp

Vyatta interface config:

```
interfaces {
    ethernet eth0 {
        address
        hw-id 00:13:72:65:b4:cf
        vrrp {
            vrrp-group 1 {
                advertise-interval 1
                priority 150
                sync-group failover
                virtual-address
            }
        }
    }
    ethernet eth1 {
        vrrp {
            vrrp-group 2 {
                advertise-interval 1
                priority 150
                sync-group failover
            }
        }
    }
}
```

mysql -- Error reading master configuration

If you are trying to set up a slave and you get this error message in the error log repeatedly, it may be that the master configuration isn't correct. You may need to try resetting the slave with:

```sql
mysql> reset slave;
Query OK, 0 rows affected (0.00 sec)
```

Then reissue the CHANGE MASTER command to set up the slave:

```sql
mysql> CHANGE MASTER TO
    -> MASTER_HOST='',
    -> MASTER_USER='xxxxxxxx',
    -> MASTER_PASSWORD='xxxxxxx',
    -> MASTER_PORT=3306,
    -> MASTER_LOG_FILE='mysql-bin.000003',
    -> MASTER_LOG_POS=1372;
Query OK, 0 rows affected (0.01 sec)
```

Restart the slave:

```sql
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
```
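After restarting the slave, it's worth confirming that replication is actually running:

```sql
mysql> SHOW SLAVE STATUS\G
```

Look for `Slave_IO_Running: Yes`, `Slave_SQL_Running: Yes`, and an empty `Last_Error` in the output; anything else means the master configuration still isn't right.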