Saturday, 21 April 2018

connect ubuntu to wifi from command line

so I have a server install of a beta release of Bionic Beaver - Ubuntu 18.04. I want to connect this to wifi as I'm not near my cobbler server and can't connect to it via ethernet at the moment.

If we've connected before we can view previous connections:

nmcli c

- haven't had any previous connections, so no luck there.
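
if a previous connection did exist we could just bring it back up by name - a minimal sketch, assuming the connection is named after the SSID:

nmcli c up abramshumps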

to see wifi hotspots near me:

root@bionic-beaver-x8664:~# nmcli d wifi list
IN-USE  SSID            MODE   CHAN  RATE        SIGNAL  BARS  SECURITY
        abramshumps     Infra  13    270 Mbit/s  100     ▂▄▆█  WPA2
        abramshumps_5G  Infra  36    270 Mbit/s  69      ▂▄▆_  WPA1 WPA2
        abramshumps     Infra  13    270 Mbit/s  64      ▂▄▆_  WPA1 WPA2

cool - so wifi is working out of the box this time - I'm not going to have to go through the pain of building drivers for my laptop (Dell Latitude 7280)

so next let's see my wifi device:

root@bionic-beaver-x8664:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether a4:4c:c8:21:7c:44 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether cc:2f:71:73:a3:3e brd ff:ff:ff:ff:ff:ff

we can try and connect to it now:

root@bionic-beaver-x8664:~# nmcli d wifi connect abramshumps password supersecretpw
Device 'wlp2s0' successfully activated with 'ead1b11c-3d70-4ba8-9ca9-48389235c7db'.
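
before testing the internet, a quick sanity check that the device is up and has an address (output omitted):

root@bionic-beaver-x8664:~# nmcli d
root@bionic-beaver-x8664:~# ip addr show wlp2s0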

now let's test connectivity to the internet:

root@bionic-beaver-x8664:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=60 time=13.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=60 time=13.3 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 13.292/13.344/13.397/0.126 ms
root@bionic-beaver-x8664:~#


yay - I can now add ubuntu-desktop and other goodies I want :-)
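
for the record, pulling in the desktop is just the following (standard package name in the Ubuntu repos - expect a long download over wifi):

root@bionic-beaver-x8664:~# apt update
root@bionic-beaver-x8664:~# apt install ubuntu-desktop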








Monday, 4 December 2017

Puppet - Roles and Profiles



Forgetting about Puppet for a moment - if we were to conceptually describe how our servers are built as part of a large distributed group, we would describe them initially in terms of business function. For example: this server stores data for financial transactions, it also needs to allow certain people to log in to it, and it needs to be monitored. Another server might be described at a high level as running a web server, and it too needs the ability to allow certain people to log in.

The high level description of what a server does we call a role (e.g. data store role and web server role in the example above).

The technical sub-components of a role we call a profile (e.g. install postgres database module, install ldap client module, install nginx module).

We can also define profiles that are common to many systems, e.g. we can define a common profile that contains:

common profile -> ldap module + packages module + monitoring module + security module

then we could define a web server profile:

webserver profile -> nginx module + haproxy module

we can define multiple profiles containing the technical building blocks, which we then put together for a specific role (business function). Each server is described by exactly one role. Each role is formed of at least one profile - usually more. Profiles can be reused multiple times in different roles.

then our role would look like:

webserver role -> common profile + webserver profile
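
in puppet code a minimal sketch of this might look like the following (module and node names are illustrative):

# profiles wrap the technical building blocks (modules)
class profile::common {
  include ldap
  include packages
  include monitoring
  include security
}

class profile::webserver {
  include nginx
  include haproxy
}

# a role is purely a collection of profiles - the business function
class role::webserver {
  include profile::common
  include profile::webserver
}

# each server gets exactly one role
node 'web01.example.com' {
  include role::webserver
}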


I have a git repository that shows a simple example of using roles and profiles - puppet rolesandprofiles

Sunday, 1 October 2017

Arista EOS - CLI quickstart

This is just a brain dump for me, as I seem to very occasionally have to work on Arista switches - which, by the way, are pretty nice - they run a mini version of Linux.

Why do I like Arista? Andy Bechtolsheim (one of the founders of Sun Microsystems, who I used to work for) - nice to have a mad scientist doing well ;-) - and it's good for Cisco to have some competition - monopolies are never a good thing. They also provided the first ultra-low-latency (ULL) switches that I'm aware of. Other reasons I like it: it has linux (and so bash); sysdb - a database on the switch that holds important data, used similarly to IPC; MLAG - allows port channels to exist on multiple switches at the same time; VARP - allows multiple switches to respond to arp requests for the same IP; ZTP (zero touch provisioning) - loads config from the network; LANZ - the latency analyser; email; a job scheduler (is this just cron?); tcpdump; event handlers; and event monitors.


to login:
ssh admin@switch

what version are we running:
show ver

what interfaces are attached and what's their status:
show interfaces
show int status
show interfaces ethernet 1-5 status
show running-config all (detailed config including defaults)

what have I been doing here ;-) :
show history

let's go crazy and make changes outside of change control and config management:
chicmsw01>enable
chicmsw01#configure
chicmsw01(config)#interface ethernet 8
chicmsw01(config-if-Et8)#comment
Enter TEXT message. Type 'EOF' on its own line to end.
testing rancid
EOF
chicmsw01(config-if-Et8)#end

misc:
bash - gets us into a bash shell - nice if you're from a linux background (see the example after this list)
bash python - wow!
environment -> can change fan speed etc
show logging ?
show log last 5 min
can configure syslog/email
show reload cause full (see why switch rebooted?)
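
as a quick illustration of the bash escape hatch (prompts follow the session above; it's plain linux underneath):

chicmsw01#bash
[admin@chicmsw01 ~]$ uname -a
[admin@chicmsw01 ~]$ exit
chicmsw01#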

lets ensure my crazy changes persist after a restart:
copy running-config startup-config


I've also worked with implementing rancid - also pretty cool - it dumps configs, saves them to change control, and alerts me on any diffs.

Friday, 16 June 2017

Google Cloud Platform

So recently I received an email from google inviting me to use GCP free for a year - it also came with $300 to pay towards chargeable services - very nice.

Previously I've used aws - so I'm pretty familiar with it - and it's almost becoming the de facto standard in most companies I've contracted for. I think competition is good, so having an alternative should be welcomed.

so - google has multiple data centres worldwide - similarly to amazon it has 3-4 zones in each region.
when creating a machine you choose a region and a zone.

google also has its own fiber network - never more than 500 miles from an access point


  • compute engine - IaaS
  • app engine - PaaS
  • managed services - elastic resources, machine learning, big data
  • container engine (docker/kubernetes)
  • flexible machine types - any cpu/memory configuration
  • simpler firewall rules
  • bills by the minute, not the hour
accessing gcp:

web console: cloud.google.com
android or iphone app
can ssh from the command line after the server is built (see the example below)
programmatically - api - rest: https://developers.google.com/oauthplayground/ and developers.google.com/apis-explorer/#p
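
for example, with the gcloud CLI (instance name and zone here are hypothetical):

gcloud compute instances list
gcloud compute ssh my-instance --zone us-central1-a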

stackdriver looks interesting - this is for diagnostics, logging and monitoring - integration into other tools such as elk/splunk/patrol etc; there's a debug console for java and python apps

the equivalent of lambda is cloud functions - serverless/ephemeral functions that respond to events - I've not worked on this but it sounds very interesting - a replacement for stuff like jenkins/ci?

cloud storage buckets are similar to s3 (which is uber cool)

machine learning - MLaaS? - machine learning as a service - this is something that I certainly want to learn more about


Monday, 5 June 2017

linux changing screen resolution

So recently I installed Linux Mint (Sarah) under vmware.

All the latest vmware tools are installed - however trying to change the screen resolution didn't give me an option corresponding to the actual resolution of my screens (1920x1080).

To fix this:


stuart@stuart-virtual-machine ~ $ cvt 1920 1080
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

stuart@stuart-virtual-machine ~ $ xrandr --newmode "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

stuart@stuart-virtual-machine ~ $ xrandr --addmode Virtual1 "1920x1080_60.00"
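
note these xrandr changes don't survive a reboot - a minimal way to make them stick, assuming your session sources ~/.xprofile (the mode line and output name Virtual1 are taken from the output above):

stuart@stuart-virtual-machine ~ $ cat >> ~/.xprofile <<'EOF'
xrandr --newmode "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
xrandr --addmode Virtual1 "1920x1080_60.00"
xrandr --output Virtual1 --mode "1920x1080_60.00"
EOF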

now the required resolution appears in display settings and I'm happy :-)



Wednesday, 12 April 2017

FPM - easy way to build packages

FPM ('flipping' rpm ;-) ) is a really cool way to build packages without the general grind of creating spec files etc .. here's my quick howto for CentOS:


  • yum -y install rpm-build
  • gem install fpm
  • mkdir -p stupack/files stupack/etc stupack/logs (just making an arbitrary dir tree)
  • touch stupack/files/test stupack/etc/stu.conf stupack/logs/stulog (dumping some dumb files there)
  • fpm -s dir -t rpm -n stupack stupack
        Created package {:path=>"stupack-1.0-1.x86_64.rpm"}
  • rpm -qlp ./stupack-1.0-1.x86_64.rpm
        /stupack/etc/stu.conf
        /stupack/files/test
        /stupack/logs/stulog

yay - we have an rpm in 2 secs, with ease!

we can add a version number with -v x.y.z on the fpm command
x86_64 was chosen for the architecture - but this could be set to anything..

options (combined example after this list):

-t output type (rpm,deb,solaris..)
-s input type (directories,gem,rpm..)
-f force - overwrite existing
-n name to give to the package
--license - name of the licence for the package
--provides - what this pkg provides
--config-files - marks a file in the package as a config file
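
putting a few of those together - the version, licence and config-file values here are illustrative:

fpm -s dir -t rpm -n stupack -v 1.2.3 --license MIT --config-files stupack/etc/stu.conf stupack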

overall - very nice, I wish I'd had it when first tediously creating rpms!



Wednesday, 5 April 2017

How to use strace


really cool unix utility - lets us inspect what an executable is doing - we don't need:

• source code
• knowledge of what program the executable came from
• a debugger
one big caveat to note is that strace stops/starts the traced process, so it can make it a lot, lot slower - e.g.:

[root@einstein ~]# time dd if=/dev/zero of=./test bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 11.0078 s, 95.3 MB/s

real    0m11.050s
user    0m0.001s
sys     0m0.377s
[root@einstein ~]#  sync; echo 3 > /proc/sys/vm/drop_caches
[root@einstein ~]# time strace -f -o out  dd if=/dev/zero of=./test bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 14.4345 s, 72.6 MB/s

real    0m14.491s
user    0m0.005s
sys     0m0.406s
[root@einstein ~]#
        
so approximately a 30% degradation in performance here (11.0s vs 14.4s) - something you wouldn't want in prod.

How does a program interact with my computer?

so when a program runs in user mode it doesn't have direct access to the hardware (unless it's something cool like a Solarflare or Mellanox network card..).
For the program to get access to the hardware it needs to use a system call (have a look at man 2 syscalls) - system calls are essentially how a user program enters the kernel to perform a privileged task.

the categories of system calls are:
• process control (load/exec/abort/create/terminate/get/set attributes/wait/allocate/free memory)
• file management (create/delete/open/close/read/write/reposition/get-set attributes)
• device management (request/release/read/write/reposition/attach/detach)
• info maintenance (get/set time date/data/process file or dev attributes)
• communication (create/del connection/send/rcv message/transfer status info/attach or detach remote dev)

examples


you don't need to be root to use strace - you just need permission to read the process (i.e. generally your own processes)
lots of output - try

# strace ls

following the output of this we'll see:
• execve - (os starts the process)
• brk(0) - kludge to read the end of the data segment
• open of filenames, assigning a file descriptor number
• read using the file descriptor
• fstat - details perms, owner etc
• close
to see which files a program is opening we could use:

strace -e open ls

as a practical example - let's see what config files a program uses (e.g. if I type bash does it use profile/bashrc or bash_profile - I sometimes forget!):
strace -e open bash
(truncated output)
open("/home/sabramshumphries/.bashrc", O_RDONLY) = 3
open("/etc/bashrc", O_RDONLY)           = 3

yay - we can see the files it opens!!

useful flags:

-e trace only the system calls specified - e.g. strace -e open,close
-f follow any subprocesses that are created
-p attach to a process that started earlier
-o write to a file so we can look through the output more easily
-s print out lines of the given length (give a larger number so strace doesn't truncate output)
-c statistical summary of calls - shows output really nicely
-t show timestamps (-tt for even finer time)
-r relative time between calls (see the combined example after this list)
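
a typical combination of the flags above (note that on newer distros open() often shows up as openat() instead):

strace -f -tt -e trace=open,close -s 256 -o /tmp/ls.trace ls
less /tmp/ls.trace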


syscall     function
open        opens a file, returns a fd
close       closes a file descriptor
read        reads bytes from a fd
write       writes to a fd
fork        creates a new process - copy-on-write of the parent
exec        executes a new program
brk         extends the heap pointer
mmap        maps a file into the process address space
stat        reads info about a file
ioctl       sets io properties




Monday, 27 March 2017

What on earth is devops?

Devops is a pretty contentious word - lots of old-school people say it's nothing new and that it's just a label for automating everything - which people have tried (and are still trying!) to do.

Most people agree devops isn't:


• a job title
• a job description
• an organisation/team
I've been guilty of all three of the above misdemeanours.

From wikipedia there's a pretty good definition of devops:

In traditional functionally separated organizations there is rarely cross-departmental integration of these functions with IT operations. devops promotes a set of processes and methods for thinking about communication and collaboration between development, QA, and IT operations.

So - what this is saying is essentially that devops is about bringing systems admins/engineers, developers and testers closer together - rather than keeping them in silos. Hence why having a devops team is something I consider not a brilliant idea - we shouldn't be creating an extra silo - we should be bringing teams closer.

Technology isn't so important here - the main thing is culture/behaviour - we also need to think more in terms of the overall toolchain rather than in terms of a devops tool, such as puppet/ansible..

• code - version control, frequent merges
• build - ci tools
• test - test code works (verify/validate) - performance
• package - stage software
• release (tricky - always seen a lot of human interaction here) - change management/approvals
• configure - infrastructure as code
• monitor - is it all working!
so essentially it's a large and complex set of practices and tools to bring code more quickly and reliably to the customer.

Tuesday, 10 May 2016

latest puppet features

Just discussing opensource puppet here - puppet 4.4.2 features are included in puppet enterprise 2016.1

naming conventions / version numbers are confusing - there is an overall package - e.g. 4.4 here - that includes different versions of puppet, ruby, facter, hiera ...

there's also a separate puppetserver package (this is version 2.3!)

there's also a puppetdb package - installs puppetdb 3

Migrating from puppet 3 to 4 isn't without some effort - existing puppet dsl may no longer work.

pretty good guide to upgrading at upgrade puppet from 3 to 4 (summary - use the puppet_agent module)

4.4.2 - essentially bug fixes
4.4.1 - bug fixes + a minor hiera enhancement
4.4.0 - iterables, iterator types, type aliases, produce arrays from iterators with splat (*)


• pluginsync now deprecated - this is now the default behaviour - same as the value use_cached_catalog
• all-in-one (AIO) packaging now - as per my initial moan re diverse version numbers
• stringified facts
• puppet kick gone - pity
• node inheritance gone
• now have epp (embedded puppet) templates as well as erb - erb still works
• new locations for files and directories:
          /opt/puppetlabs/bin -> linux executables
          /etc/puppetlabs/puppet -> confdir
          codedir is now /etc/puppetlabs/code
          dir environments are always on (yay!)
          vardir moved - /opt/puppetlabs/puppet/cache
          rundir moved - /var/run/puppetlabs
• no longer dependent on the OS version of ruby - AIO now bundles it
• biggest diff is that the DSL has a new parser and evaluator (this was the 'future parser' in puppet 3):
          - iteration/type checking/http api changes/manifest ordering by default (iteration example below)
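
for example, the new iteration support in the dsl looks like this (package names are illustrative):

['nginx', 'haproxy'].each |String $pkg| {
  package { $pkg:
    ensure => installed,
  }
}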





Tuesday, 22 March 2016

Ansible 2.0 New Features

big software refactoring - previous rapid growth/technical debt meant bolted-on features like roles weren't implemented perfectly. lots of code cleanup to make it easier to add new features going forward..

added blocks - try/except/finally, cf. python; allows you to try things, catch errors, and fall through to finally.

blocks allow you to group related tasks together - you don't need to use tags:

- block:
    - name: stuff 1
...
you can have nested blocks - though too many can be overly complex to debug (e.g. you could nest per-OS blocks)

blocks allow a subset of variables to be set only for tasks within that block - local scope
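
a fuller sketch of the try/except/finally analogy (task names and module arguments here are illustrative):

- block:
    - name: attempt the risky thing
      command: /usr/local/bin/might-fail
  rescue:
    - name: runs only if a task in the block failed
      debug:
        msg: "caught a failure"
  always:
    - name: runs whether the block failed or not
      debug:
        msg: "cleanup"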

improved error messages - shows the line/file/column of the task that failed

new option any_errors_fatal: true (if any host fails in a block then all hosts go to the rescue section - all-or-nothing deployment) (from 2.0.1)

execution strategy plugins - linear (traditional - wait for all hosts to complete a task before moving on to the next) or free - each host runs all its tasks asap without waiting for others to catch up.

now have dynamic includes

improved variable management - variable precedence is more consistent

200+ new modules - e.g. ec2, openstack, windows (beta), docker

new inventory scripts

more OO-oriented/inheritance etc - more of an internals feature

what might break in V2? it should be almost 100% backwards compatible - most possible issues are due to variable precedence and dynamic includes, and yaml parsing is stricter now.







Monday, 14 March 2016

ansible vs puppet


ok - so I should be comparing more than this - chef/salt/cfengine (whatever happened to that?) - however my experience of salt was minimal and in the early days - it turned out to be enough to make me happy not to look at it again. No real experience with chef - something I should look at more.

cfengine is an odd one - something I did work with a few years ago and quite liked - however from a commercial viewpoint it seems to have disappeared.

Also - and I think this will lead to a bit of a paradigm shift - how will running apps in containers such as docker, under the control of Kubernetes, mean we no longer need such fully featured config management systems? if (if!) an organisation has a few standard builds then they can easily be replicated using tools such as docker. (is docker ready for prod yet - possibly not!)

I first started using puppet in 2006 - I was extremely impressed with it - the abstraction it gave was really useful when handing things over to sysadmins with little experience; also at the time the install was easy, and - generally using the trinity of file, package, service - most modules were very easy to use.

moving forward - my last few contracts have exposed me to some serious problems with puppet:

• install - it's now so big and complex - enterprise.
• no standard install patterns - not even from puppet. for example, how does one provide HA for puppetdb.. always home-brewed. However this can be an advantage - you have the flexibility to do things your own way.
• issues with scalability - performance, especially with ruby, sucks. This is being addressed (e.g. cfacter, the rewrite of multiple things in C++ rather than ruby - ultimately moving to golang?) - however ansible performance doesn't seem to be exceptionally faster (yes, I've tried accelerated mode and reusing ssh sockets)
• running ad hoc commands - mcollective is a wonderful bit of coding, and from a computer science perspective I love the ideas behind it .. however it's not fit for purpose. One of the first things most experienced puppet devops do is install parallel shell or something else..
• after multiple issues with mcollective I installed ansible to run one-off commands and queries - then progressed to basic playbooks.
• I found that ansible is so easy in comparison - for example I recently upgraded ansible at my current client from 1.9.4 to 2. it took me literally 30 seconds - nothing needed other than a simple change on my server - and obviously nothing needed on the clients.
Now there are lots of arguments and discussions re push vs pull models/ordering/clients/use of ssh/dependencies/relationship graphs .. these don't matter a great deal to me - what makes ansible the clear winner for me vs puppet enterprise is that I feel like a devops again - I'm looking after the business problems of the company rather than just debugging issues with the puppet setup and installs/upgrades. Once you spend more time administering the software itself - well, that's the time to look at other solutions.

One of the big advantages puppet has is the userbase - a large number of people are experienced in it, and most modules you'll need have already been written (google puppet forge). With puppet's maturity also come advantages - ansible does have a few bugs, and it's lacking certain basic features (e.g. the mount module doesn't do proper remounts for nfs, and there are issues with variable parsing)


Now - don't get me wrong - if you have a stable puppet install with simple modules then there's no need to move away from it - however do take a look at ansible!
