Posts

Extend Linux Filesystem for AWS EC2 instance

How to grow a volume on an aws ec2 instance
so - space is getting a little tight on this box and we can't archive or delete anything - let's grow it. another inherited box - i would really have preferred the app dir to be on a separate partition to root, but moving it would require downtime, so let's grow the volume online whilst our devs are still using it!
prior to growing:

# df -hl
/dev/xvda1       36G   32G  4.1G  89% /

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   36G  0 disk
└─xvda1 202:1    0   36G  0 part /
we then go into the aws console, look at the instance under ec2 instances, click on actions -> modify volume and then increase the size to 100G. alternatively (ideally!) use terraform or cloudformation to increase the volume size, or:

aws ec2 modify-volume --size 100 --volume-id vol-yourvolidhere
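the resize carries on in the background for a while - you can watch its progress from the CLI too (reusing the placeholder volume id above):

aws ec2 describe-volumes-modifications --volume-ids vol-yourvolidhere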
df remains the same but lsblk now shows:

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0  100G  0 disk
└─xvda1 202:1    0   36G  0 part /

we can extend the space onto partition 1 by:

# growpar…
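for reference, the usual shape of the remaining steps (a sketch assuming growpart from cloud-utils and an ext4 root filesystem - an xfs root would use xfs_growfs / instead of resize2fs):

# growpart /dev/xvda 1
# resize2fs /dev/xvda1
# df -hl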

Change Jenkins home directory in Ubuntu

We recently needed to change the home directory of Jenkins on Ubuntu. The underlying misconfiguration on this old system is that it wasn't using LVM and so couldn't be expanded on the fly, but it is what it is.
To move it we create a new volume - after adding more storage if needed - this time putting it under LVM so further expansions can be done on the fly if needed.
* as ever, ensure we have backups before making changes
* shut down jenkins safely - wait for current jobs to stop and also don't start new jobs: https://jenkins-url/safeExit
* once stopped, copy /var/lib/jenkins to /newjenk/dir
* chown -R jenkins:jenkins /newjenk/dir
* edit /etc/default/jenkins - change JENKINS_HOME to point to the new dir
* start jenkins (systemctl start jenkins)
* wait a second .. go to the url, login, run a test job that has previously succeeded and ensure it's ok

and that's it!
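the copy/chown/edit steps above as a minimal shell sketch (run as root; the paths are just the examples from the list, and the sed assumes the usual JENKINS_HOME=... line in /etc/default/jenkins):

# copy the old home across, preserving ownership and permissions
rsync -a /var/lib/jenkins/ /newjenk/dir/
chown -R jenkins:jenkins /newjenk/dir
# point JENKINS_HOME at the new directory
sed -i 's|^JENKINS_HOME=.*|JENKINS_HOME=/newjenk/dir|' /etc/default/jenkins
systemctl start jenkins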

AWS Elastic Load Balancing Solutions

Overview

Elastic Load Balancing - commonly referred to as ELB - helps with distributing traffic across multiple EC2 instances in several availability zones, which helps with:

* scaling
* performance
* reliability

Initially AWS provided the Classic Load Balancer (which can confusingly sometimes be referred to as ELB also, for historical reasons).
Usually requests are round-robin; however, you can make them sticky so that the connection between a client and an endpoint is persistent for defined periods of time - at least with ALB.
Both ALB and NLB are capable of load balancing to multiple ports on the same instance.
You don't necessarily need to define individual instances at the backend - you can create 'target groups', which represent a logical grouping of resources such as EC2 instances, microservices, containers etc.
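creating a target group from the CLI looks something like this (the name and vpc id here are placeholders):

aws elbv2 create-target-group --name my-targets --protocol HTTP --port 80 --vpc-id vpc-yourvpcidhere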
Network Load Balancers expose a public static IP, whereas an Application or Classic Load Balancer exposes a static DNS name.

AWS also has wizards to assist in migrat…

AWS CLI

AWS CLI Quickstart

what is aws cli? essentially a command line python tool you can use to query/create/change the state of all things in aws. pretty cool for performing ad-hoc commands.

installing:

% pip3 install awscli
. .

check it installed ok:

% aws --version
aws-cli/1.17.8 Python/3.8.1 Darwin/19.3.0 botocore/1.14.8
%
% aws help (or: aws <command> help)
% complete -C aws_completer aws   <== add this to your .bashrc
^-^ stuart@stuartah ~ $ aws s3 (tab-tab here shows possible options below)
cp        ls        mb        mv        presign   rb        rm        sync      website
configure:

% aws configure
AWS Access Key ID [None]: **************
AWS Secret Access Key [None]: *******************
Default region name [None]: eu-west-2
Default output format [None]: json
%

you can also add credentials as env variables - e.g. export AWS_ACCESS_KEY_ID=... and export AWS_SECRET_ACCESS_KEY=...
here we're adding the credentials to allow connectivity. instead of running aws configure you can also edit ~/.aws/config and credentials:
%…
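those two files look roughly like this - a sketch of the standard layout, reusing the region/output values from above:

~/.aws/credentials:

[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

~/.aws/config:

[default]
region = eu-west-2
output = json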

changing remote in git

generally the remote is set by default to the place you originally cloned your repo from; however, you may want to change this.

a simple way is, on your first push, to use:

git push -u origin branchname

(note it's the branch name that follows the remote - -u sets the upstream so a plain git push / git pull works afterwards)


the above can be configured as the default in your gitconfig file:

[push]
    default = current
git remote set-url .. is the command we need.

normally the remote has the name origin, which is an alias for a remote repository - a key set locally in place of the remote repo's full url.

to do the change it's pretty simple:

$ git remote set-url origin https://urlname.com/USERNAME/REPONAME.git
an interesting feature is that we can have multiple remotes (or aliases pointing at them) in a repo... we can then push or pull to multiple remotes - but be careful, it can get confusing!
e.g.


git remote add alt different-machine:/path/to/repo
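once added, you can address each remote by name (branchname here is a placeholder):

git push alt branchname
git remote -v   <== lists all configured remotes and their urls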

bash best practices

A few hints on bash best practice (a small sketch pulling several of these together follows the list):


* use #!/usr/bin/env bash .. this is more portable, but you can't rely on a specific version
* use set inside the script rather than options to bash - if someone runs your script with bash scriptname then options on the shebang line are ignored
* use {} to enclose variables - it can cause mistakes if you don't - e.g. $VAR_ext is read as the variable VAR_ext rather than ${VAR}_ext, which is what you wanted
* to ensure you always have a value, set defaults - e.g. MYNAME="${MYNAME:-Stuart}" - MYNAME defaults to Stuart if not already set
* use spaces for indentation, not tabs - tabs aren't rendered consistently
* max line length of 80 characters for readability - use \ to split lines if needed
* don't have whitespace at the end of lines as it may confuse source code control like git
* use $(command) instead of backticks
* variable and function names lowercase with underscores - use meaningful names
* constants should be in caps, declared first in the file
* use readonly to mark a constant read-only
* can use loc…
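a minimal sketch pulling several of these together (the names and values are purely illustrative):

#!/usr/bin/env bash
# set options inside the script, not on the bash command line
set -euo pipefail

# constant: caps, declared first, readonly
readonly GREETING="hello"

# default value if MYNAME isn't already set
my_name="${MYNAME:-Stuart}"

# lowercase function name with underscores
greet_user() {
  local name="$1"
  # {} so the shell can't misparse ${name}_ext style expansions
  echo "${GREETING}, ${name}"
}

greet_user "${my_name}"

# $(command) instead of backticks
today="$(date +%F)"
echo "today is ${today}"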

/var/log/journal

Currently working on creating a large repo - I copied a lot of the packages to a VM in my home test lab - on extracting 15GB of packages I noticed I was running short on space (I had about 20GB free but wanted to ensure the extract didn't fail).

running the old du -hs * from / I noticed /var had a lot of storage being used.
(yes, I don't have separate partitions on my smaller VMs)

Looking further I could see that it was pretty much being used in /var/log/journal:

# journalctl --disk-usage
Archived and active journals take up 3.9G in the file system.


I'm really not that interested in these logs - and so will add the following modification to my ansible scripts:
modify /etc/systemd/journald.conf => SystemMaxUse=100M
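in journald.conf that setting sits under the [Journal] section, i.e. the file ends up containing:

[Journal]
SystemMaxUse=100M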

I then run:
systemctl kill --kill-who=main --signal=SIGUSR2 systemd-journald.service
and:
systemctl restart systemd-journald.service
running check again:
# journalctl --disk-usage
Archived and active journals take up 80.0M in the file system.
cool - lots of space cleared on my VM!
(I …