Thursday, 2 April 2020

AWS CLI

AWS CLI Quickstart

what is aws cli?

essentially a command-line python tool you can use to query/create/change the state of all things in aws.
pretty cool for performing ad-hoc commands.

installing:

% pip3 install awscli
.
.

check it installed ok:

% aws --version
aws-cli/1.17.8 Python/3.8.1 Darwin/19.3.0 botocore/1.14.8
%
% aws help   (or: aws <command> help)
% complete -C aws_completer aws   <== add this to your .bashrc
^-^ stuart@stuartah ~ $ aws s3 (tab-tab here shows possible options below)
cp        ls        mb        mv        presign   rb        rm        sync      website

configure

% aws configure
AWS Access Key ID [None]: **************
AWS Secret Access Key [None]: *******************
Default region name [None]: eu-west-2
Default output format [None]: json
%
you can also add credentials as env variables - e.g. export AWS_ACCESS_KEY_ID=... and export AWS_SECRET_ACCESS_KEY=...

here we're adding the credentials to allow connectivity.
instead of running aws configure you can also edit ~/.aws/config and ~/.aws/credentials directly:

% ls -l
total 9
-rw------- 1 stuart stuart 42 Apr  2 17:03 config
-rw------- 1 stuart stuart 89 Apr  2 17:03 credentials
% cat config 
[default]
region = eu-west-2
output = json
% cat credentials 
[default]
aws_access_key_id = **************
aws_secret_access_key = *******************
%
% aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************Y47I shared-credentials-file    
secret_key     ****************ti17 shared-credentials-file    
    region                eu-west-2      config-file    ~/.aws/config


instead of just having the default section you can have multiple sections (profiles) and use them with
aws --profile otherone ...
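a minimal sketch of what the credentials file looks like with an extra profile (the name otherone is just illustrative):

% cat ~/.aws/credentials
[default]
aws_access_key_id = **************
aws_secret_access_key = *******************

[otherone]
aws_access_key_id = **************
aws_secret_access_key = *******************

you can also select a profile per-shell with export AWS_PROFILE=otherone rather than passing --profile each time.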

Examples


list s3 buckets:
% aws s3 ls
2020-03-23 15:45:43 bucket1
2020-03-29 20:10:27 bucket2
%
% aws s3 ls s3://bucket1
2020-04-01 11:23:28         69 index.html

% aws s3 cp file s3://bucket1/
% aws s3 rm s3://bucket1/index.html


% aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped --region eu-west-2 --output json | jq -r '.Reservations[].Instances[].StateReason.Message'
% aws ec2 start-instances --instance-ids ...
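putting those two together - a sketch of starting every stopped instance in one go (uses the CLI's built-in --query/JMESPath instead of jq; assumes there is at least one stopped instance, as xargs will error on empty input):

% aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped \
      --query 'Reservations[].Instances[].InstanceId' --output text \
  | xargs aws ec2 start-instances --instance-ids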

other commands worth investigating:-

Wednesday, 26 June 2019

changing remote in git

generally the remote is set by default to the place you originally cloned your repo from, however you may want to change this.

a simple way on first push is to use:

git push -u origin branchname


the above can be configured as the default in your gitconfig file:

[push]
default = current

git remote set-url .. is the command we need.

normally the remote has the name origin, which is an alias for a remote repository, stored locally as a key in place of the remote repo's full url.

to make the change it's pretty simple:

$ git remote set-url origin https://urlname.com/USERNAME/REPONAME.git

an interesting feature is that we can have multiple remotes (or aliases pointing at them) in your repo... you can then push or pull to multiple remotes.. but be careful.. it can get confusing!
e.g.


git remote add alt different-machine:/path/to/repo
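
to sanity-check what's configured, git remote -v lists each alias and its fetch/push url (the names below are just illustrative, reusing the examples above):

$ git remote -v
origin  https://urlname.com/USERNAME/REPONAME.git (fetch)
origin  https://urlname.com/USERNAME/REPONAME.git (push)
alt     different-machine:/path/to/repo (fetch)
alt     different-machine:/path/to/repo (push)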

Monday, 24 September 2018

bash best practices


Bash best practices

A few hints on bash best practice (a short template pulling these together follows the lists):


* use #!/usr/bin/env bash .. this is more portable, but you can't rely on a specific version
* use set inside the script, don't pass options to bash - if someone runs your script with bash scriptname then the options on the shebang line are ignored
* use {} to enclose variables - can cause mistakes if you don't - e.g. $VAR_ext is read as the variable VAR_ext rather than ${VAR}_ext, which is what you wanted
* to ensure you always have a value, set defaults - e.g. MYNAME="${MYNAME:-Stuart}" - MYNAME defaults to Stuart if not already set
* use spaces, not tabs - tab width is not portable
* max line length of 80 characters for readability - use \ to split lines if needed
* don't have whitespace at the end of lines as it may confuse source code control like git
* use $(command) instead of backticks
* variable and function names lowercase with underscores - use meaningful names
* constants should be in caps, declare them first in the file
* use readonly to set a variable read-only
* can use local to make a variable specific to a function
* put functions together below the constants - I order the functions alphabetically as they're easier to find
* use a main function if using multiple functions
* check return values from functions
* avoid eval - it munges input
* [[ .. ]] is better than test or /usr/bin/[ - stops pathname expansion and word splitting
* comment difficult bits of code
* setting suid/sgid on shell scripts is insecure - avoid it
* i prefer to have a .sh extension so it's easy to recognise the file
### good set options:

* -e exit script immediately if command fails
* -o pipefail fails if any part of a pipe fails
* -u treat unset variables as an error and exit immediately
* suggested second line of a bash script: set -euo pipefail
* -x prints each command before executing it - expands arguments also
* use -E if script contains traps
* to start with using a program like shellcheck can be useful - gives your scripts a quick checkover
* a useful tool if you use vim is the bash support plugin - see https://www.thegeekstuff.com/2009/02/make-vim-as-your-bash-ide-using-bash-support-plugin/
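
pulling a few of these together - a minimal template sketch (the constant and function names are just illustrative):

#!/usr/bin/env bash
# template demonstrating the practices above
set -euo pipefail

# constants first, in caps, read-only; default used if not already set
readonly LOG_DIR="${LOG_DIR:-/tmp}"

# functions next, lowercase with underscores
log_message() {
  local msg="$1"    # local keeps msg out of the global scope
  echo "$(date '+%F %T') ${msg}" >> "${LOG_DIR}/run.log"
}

main() {
  log_message "starting"
  log_message "done"
}

main "$@"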

/var/log/journal

Currently working on creating a large repo - copied a lot of the packages to a VM in my home test lab - on extracting 15GB of packages I noticed I was running short on space (had about 20GB free but wanted to ensure the extract didn't fail).

running the old du -hs * from / I noticed /var had a lot of storage being used.
(yes, I don't have separate partitions on my smaller VMs)

Looking further I could see that it was pretty much being used in /var/log/journal:

# journalctl --disk-usage
Archived and active journals take up 3.9G in the file system.


I'm really not that interested in these logs - and so will add the following modification to my ansible scripts:

modify /etc/systemd/journald.conf => SystemMaxUse=100M
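
for reference, the setting lives under the [Journal] section of that file:

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=100M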


I then run:

systemctl kill --kill-who=main --signal=SIGUSR2 systemd-journald.service

and:

systemctl restart systemd-journald.service

running check again:

# journalctl --disk-usage
Archived and active journals take up 80.0M in the file system.

cool - lots of space cleared on my VM!
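
(an alternative one-shot way to trim the existing journals without touching the config is journalctl's built-in vacuum:

# journalctl --vacuum-size=100M

- though without the config change they'd just grow back over time.)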

(I do recommend having separate filesystems on servers, and bigger ones than I use - however on my home system I have 2TB worth of ssd - this has to be shared amongst numerous smaller VMs)

Tuesday, 11 September 2018

TCP/IP

What is TCP/IP

tcp/ip is basically a set of rules/standards - see the darpa standard
transmission control protocol/internet protocol
often compared to the OSI model - but slightly different - 4 layers instead of 7

[image: the 4 layers of tcp/ip]


tcp is essentially the transport layer - responsible for splitting up the data and posting it on the physical link. it's like a clerk in an office getting lots of things ready to deliver to a customer - splitting them into manageable parcels. ip is like the postman - he picks up the parcels and routes them to their destination.

A TCP packet runs on top of an IP packet.

TCP Packet:


[image: tcp packet header layout]

IP packet:


[image: ip packet header layout]

Three way handshake:

1. A tcp connection is established via a three-way handshake - the client sends a SYN (synchronize) packet to the server with a random sequence number.

2. the server sends back a SYN-ACK - containing another random sequence number and an ACK number to acknowledge the client's sequence number.

3. the client then sends an ACK number to the server, which acknowledges the server's sequence number.

now that the sequence numbers are synchronized, both ends can send and receive data independently.
[image: three-way handshake]
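
a quick way to actually watch a handshake on a live box (the interface and port are illustrative):

# tcpdump -i eth0 -n 'tcp port 80 and tcp[tcpflags] & (tcp-syn|tcp-ack) != 0'

the first three packets of a new connection show flags [S], [S.] and [.] - the SYN, SYN-ACK and ACK respectively.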





Saturday, 21 April 2018

connect ubuntu to wifi from command line

so I have a server install of a beta release of bionic beaver - ubuntu 18.04 - I want to connect this to wifi as I'm not near my cobbler server and can't connect to it via ethernet at the moment.

If we've connected before we can view previous connections:

nmcli c

- haven't had any previous connections so no luck there.

to see wifi hotspots near me:

root@bionic-beaver-x8664:~# nmcli d wifi list
IN-USE  SSID            MODE   CHAN  RATE         SIGNAL  BARS  SECURITY
        abramshumps     Infra  13    270 Mbit/s   100     ▂▄▆█  WPA2
        abramshumps_5G  Infra  36    270 Mbit/s   69      ▂▄▆_  WPA1 WPA2
        abramshumps     Infra  13    270 Mbit/s   64      ▂▄▆_  WPA1 WPA2

cool - so wifi is working out of the box this time - not going to have to go through the pain of building drivers for my laptop (Dell Latitude 7280)

so next let's see my wifi device:

root@bionic-beaver-x8664:~# ip link show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s31f6: mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether a4:4c:c8:21:7c:44 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether cc:2f:71:73:a3:3e brd ff:ff:ff:ff:ff:ff

we can try and connect to it now:

root@bionic-beaver-x8664:~# nmcli d wifi connect abramshumps password supersecretpw
Device 'wlp2s0' successfully activated with 'ead1b11c-3d70-4ba8-9ca9-48389235c7db'.
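
the connection is now saved, so from here on nmcli c will list it and it should auto-connect at boot - something like this (exact columns vary by nmcli version):

root@bionic-beaver-x8664:~# nmcli c
NAME         UUID                                  TYPE  DEVICE
abramshumps  ead1b11c-3d70-4ba8-9ca9-48389235c7db  wifi  wlp2s0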

now let's test connectivity to the internet:

root@bionic-beaver-x8664:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=13.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=13.3 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 13.292/13.344/13.397/0.126 ms
root@bionic-beaver-x8664:~#


yay - I can now add ubuntu-desktop and other goodies I want :-)








Monday, 4 December 2017

Puppet - Roles and Profiles



Forgetting about Puppet for a moment - if we are to conceptually describe how our servers are built as part of a large distributed group, we would describe them initially in terms of business function - e.g. this server stores data for financial transactions, it also needs to allow certain people to log in to it, it needs to be monitored, etc. - whilst another server might be described at a high level as running a web server, needing the ability to allow certain people to log in, etc.

The high level description of what a server does we call a role (e.g. data store role and web server role in the example above).

The technical sub-components of a role we call a profile (e.g. install postgres database module, install ldap client module, install nginx module).

We can also define profiles that are common to many systems, e.g. we can define a common profile that contains:

common profile -> ldap module + packages module + monitoring module + security module

then we could define a web server profile:

webserver profile -> nginx module + haproxy module

we can define multiple profiles that contain the technical building blocks which we can put together for a specific role (business function). Each server is described by one role. Each role is formed of at least one profile - usually more. Profiles can be reused multiple times in different roles.

then our role would look like:

webserver role -> common profile + webserver profile
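
a minimal sketch of what this looks like as Puppet code (the class names are illustrative, and each include assumes the corresponding module is on the modulepath):

# site/profile/manifests/common.pp
class profile::common {
  include ldap
  include packages
  include monitoring
  include security
}

# site/profile/manifests/webserver.pp
class profile::webserver {
  include nginx
  include haproxy
}

# site/role/manifests/webserver.pp - roles only ever include profiles
class role::webserver {
  include profile::common
  include profile::webserver
}

a node then needs exactly one line in site.pp: include role::webserver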


I have a git repository that shows a simple example of using roles and profiles - puppet rolesandprofiles

Sunday, 1 October 2017

Arista EOS - CLI quickstart

This is just a brain dump for me as I seem to very occasionally have to work on Arista switches - which by the way are pretty nice - a mini version of linux on them.

Why do I like Arista? Andy Bechtolsheim (one of the founders of Sun Microsystems, who I used to work for) - nice to have a mad scientist doing well ;-) - also good for Cisco to have some competition - monopolies are never a good thing. They also provided the first (that I'm aware of) ULL (ultra-low-latency) switches. Other reasons I like it:

  • it has linux (and so bash)
  • sysdb - a database on the switch that holds important data - used similarly to IPC
  • MLAG - allows port channels to exist on multiple switches at the same time
  • VARP - allows multiple switches to respond to arp requests for the same IP
  • ZTP (zero touch provisioning) - loads config from the network
  • LANZ - latency analyser
  • email, job scheduler (is this just cron?), tcpdump, event handlers, event monitors


to login:
ssh admin@switch

what version are we running:
show ver

what interfaces are attached and whats their status:
show interfaces
show int status
show interfaces  ethernet 1-5 status
show running-config all (detailed config with defaults)

what have I been doing here ;-) :
show history

let's go crazy and make changes outside of change control and config management:
chicmsw01>enable
chicmsw01#configure
chicmsw01(config)#interface ethernet 8
chicmsw01(config-if-Et8)#comment
Enter TEXT message. Type 'EOF' on its own line to end.
testing rancid
EOF
chicmsw01(config-if-Et8)#end

misc:
bash - gets us into a bash shell - nice if you're from a linux background (see the one-liner below this list)
bash python - wow!
environment -> can change fan speed etc
show logging ?
show log last 5 min
can configure syslog/email
show reload cause full (see why the switch rebooted)
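
you can also run a one-off linux command straight from the EOS cli without dropping into the shell - a sketch (the path is what I've seen on the boxes I've used, treat it as illustrative):

chicmsw01#bash ls /mnt/flash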

let's ensure my crazy changes persist after a restart:
copy running-config startup-config


Also worked with implementing rancid - also pretty cool - dumps configs, saves to change control - and alerts me on any diffs.

Friday, 16 June 2017

Google Cloud Platform

So recently I received an email from google inviting me to use gcp free for a year - it also came with $300 credit to put towards chargeable services - very nice.

Previously I've used aws - pretty familiar - it's also almost becoming the de-facto standard in most companies I've contracted for. I think competition is good - so having an alternative should be welcomed.

so - google has multiple data centres worldwide - similarly to amazon it has 3-4 zones in each region.
when creating a machine you choose a region and a zone.

google also has its own fiber network - never more than 500 miles from an access point


  • compute engine - iaas
  • app engine - paas
  • managed services - elastic resources, machine learning, big data
  • container engine (docker/kubernetes)
  • flexible machine types - any cpu/memory configuration
  • simpler firewall rules
  • bills by minute, not hour
accessing gcp:

web console: cloud.google.com
android or iphone app
can ssh from command line after server built
programmatically - api - rest: https://developers.google.com/oauthplayground/ and developers.google.com/apis-explorer/#p
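
once the SDK is installed, a minimal sketch of the command-line route with gcloud (the instance name, zone and machine type are illustrative):

% gcloud compute instances create test-vm --zone europe-west2-a --machine-type f1-micro
% gcloud compute ssh test-vm --zone europe-west2-a

(you need to run gcloud init first to authenticate and pick a default project.)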

stackdriver looks interesting - this is for diagnostics, logging, monitoring - integration into other tools such as elk/splunk/patrol etc / debug console for java and python apps

the equivalent of lambda is cloud functions - serverless/ephemeral functions that respond to events - not worked on this but sounds very interesting - a replacement for stuff like jenkins/ci?

cloud storage buckets are similar to s3 (which is uber cool)

machine learning - mlaas? - machine learning as a service - this is something that I certainly want to learn more about 


Monday, 5 June 2017

linux changing screen resolution

So recently I installed Linux Mint/sarah - I installed this under vmware.

All the latest vmware tools installed - however trying to change the screen resolution didn't give me an option corresponding to the actual resolution of my screens (1920x1080).

To fix this:


stuart@stuart-virtual-machine ~ $ cvt 1920 1080
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

stuart@stuart-virtual-machine ~ $ xrandr --newmode "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

stuart@stuart-virtual-machine ~ $ xrandr --addmode Virtual1 "1920x1080_60.00"

now the required resolution appears in display settings and I'm happy :-)
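
(these xrandr changes don't survive a reboot - one sketch of making them stick, assuming your session reads ~/.xprofile, is to put the same commands in there:

# ~/.xprofile - recreate and select the custom mode at login
xrandr --newmode "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
xrandr --addmode Virtual1 "1920x1080_60.00"
xrandr --output Virtual1 --mode "1920x1080_60.00"

)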



Wednesday, 12 April 2017

FPM - easy way to build packages

FPM ('flipping' rpm ;-) ) is a really cool way to build packages without the general grind of creating spec files etc .. here's my quick howto for CentOS:


  • yum -y install rpm-build
  • gem install fpm
  • mkdir -p stupack/files stupack/etc stupack/logs (just making an arbitrary dir tree)
  • touch stupack/files/test stupack/etc/stu.conf stupack/logs/stulog (dumping some dumb files there)
  • fpm -s dir -t rpm -n stupack stupack
              Created package {:path=>"stupack-1.0-1.x86_64.rpm"}
  • rpm -qlp ./stupack-1.0-1.x86_64.rpm
            /stupack/etc/stu.conf
            /stupack/files/test
            /stupack/logs/stulog

      yay - we have an rpm in 2 secs of easiness!

      we can add a version number with -v x.y.z on the fpm command
      x86_64 was chosen for the architecture - but you could set this to anything

      options: 

      -t (rpm,deb,solaris..)
      -s input type (directories,gem,rpm..)
      -f force - overwrite existing
      -n name to give to package
      --license - name of lic for the package
      --provides - what this pkg provides
      --config-files - marks a file in package as a config file
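
      putting a few of those together (the version, license and config file are just illustrative):

      % fpm -s dir -t rpm -n stupack -v 1.2.3 --license MIT --config-files /stupack/etc/stu.conf stupack
      Created package {:path=>"stupack-1.2.3-1.x86_64.rpm"}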

      overall - very nice, wish I'd had it when first tediously creating rpms!


