How to Install ISC DHCP Server on Ubuntu 16.04

The Internet Systems Consortium (ISC) Dynamic Host Configuration Protocol (DHCP) server is free, open-source, and easy to install. Both enterprises and small networks have used ISC DHCP in production for many years.

In this guide, I’ll demonstrate how to locate your current DHCP server and then install and set up an ISC DHCP server. We’ll then move on to gaining control of your new DHCP server, best practices, monitoring the logs, and setting up static address reservations.

Read the rest of the article here:

https://4sysops.com/archives/install-isc-dhcp-server-on-ubuntu-16-04/

Set up Ubuntu as a domain controller with SAMBA on VirtualBox

If you want to run a domain controller on your network but don’t have access to a Windows Server license, you can use SAMBA, the free open-source software, and VirtualBox, the free virtualization software. We’ll describe the procedure for setting up a virtual server using VirtualBox and netboot.xyz iPXE and move on to setting up your domain controller with SAMBA.

Read my full article here:

https://4sysops.com/archives/set-up-ubuntu-as-a-domain-controller-with-samba-on-virtualbox/

Clone a Ubuntu server in Hyper-V 2012 R2

Ubuntu runs on Hyper-V perfectly fine, so you may want to run many Ubuntu Virtual Machines (VMs) on Hyper-V Server 2012 R2. This article will show you how to clone or duplicate a single Ubuntu server on Hyper-V with different network interfaces and host names. Cloning Linux servers on Hyper-V is easy and quick when you have the right knowledge and tools.

Read my full article here:

https://4sysops.com/archives/clone-a-ubuntu-server-in-hyper-v-2012-r2/

Raspberry Pi 3 Model B 32 GB Setup and Config


So my first Raspberry Pi 3 Model B 32GB just arrived as part of a “CanaKit” bundle on Amazon, which can be found here. Below is a general step-by-step setup and config guide to get you started with your own Pi. I may follow up this post with the progress of my pie-in-the-sky project to build a miniature hyperloop “pod” and track, controlled by a stand-alone Raspberry Pi. But before I get into powering my Pi with an electrified rail, we need to get the Pi set up and configured.

  1. Unbox, connect peripherals & power on. (I swear, every video and tutorial spends too much time on this topic.) But I will say that the little machine is very quick and responsive, and I’m really happy with its performance so far!
  2. Initial power up. The red LED powers on, but my monitor stays black. Hmm, it’s not booting… Oh, duh, it will help if I insert the included 32GB MicroSD card. Important to note: the SD card slot does not click-eject or click-insert; it’s a friction-based receptacle.
  3. OK, second power up – it’s alive! It boots to the desktop and looks great – except for the resolution. I used the WiFi config to get connected to my WAP easily. But it’s strange that I can’t ping anything. A quick search found this post, and the command:
     sudo chmod u+s /bin/ping

    With that command (which sets the setuid bit on the ping binary so it can open raw sockets as a non-root user), I can now ping out and about.

  4. Now I can go online with the default web browser (Web 3.8.2) and find a fix for my monitor’s resolution. To set the Raspberry Pi 3 monitor resolution, follow the instructions in this post, which told me to first check my monitor’s capabilities with the commands:
    tvservice -d edid
    edidparser edid

    and then, seeing that I do have a DMT monitor (group 2) and HDMI mode 82, I edited /boot/config.txt with the new editor named Geany. I started Geany with sudo so I could edit the protected file, using the command:

    sudo geany
    

    This brings up the Geany file editor running as root; I could then browse to File System > /boot/ and edit config.txt to include the following lines:

    hdmi_group=2
    hdmi_mode=82

    After a restart my monitor’s resolution is correct.

  5. So now to make sure my OS is up to date. By default, the version I installed is (as reported by uname -a):
    Linux raspberrypi 4.4.9-v7+ #884 SMP Fri May 6 17:28:59 BST 2016 armv7l GNU/Linux
    

    So it is time to run an update. It’s so nice that the distro is Debian-based and the familiar apt package management system is installed. No dist-upgrade necessary.

    sudo apt-get update
    sudo apt-get upgrade
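    Afterward, you can re-run uname -a to confirm the running kernel (the version string above came from the same command):

    uname -a        # show the running kernel version
    sudo reboot     # only needed if the upgrade pulled in a new kernel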

Modern PHP development environment – Setup of Ansible, PyCharm, SourceTree, and workflow with Bitbucket

When getting started with development using a cloud-hosted Git repository such as Bitbucket, it may be a little daunting to decide where to begin. With some help from an associate, I put together a short, simple guide to setting up a development environment on OS X. I hope this information gives someone a good start with Bitbucket, version control, and PHP development in conjunction with a cloud repository.

Bitbucket is similar to GitHub, but allows free private repos. We prefer to use Bitbucket as the repository for our code so that we can manage changes to our Ubuntu servers and files. Bitbucket is the “Book of Truth” and will be the keeper of all files and things that are good. Ansible runs on a dedicated management Ubuntu server and pushes out changes (playbooks) to a single server, a few servers, or all of our Linux servers. Either way, by pulling/pushing data from our code repository, we can control what is deployed on our systems and use our repo as our backup. If a server dies, we can set up a new system and pull in the good data.

Setup

First, you need a Bitbucket account and sign-on. Once signed on to https://bitbucket.org/brooksinstitute/ you should be able to create your first repo. You might want to create your own private repo for notes, configs, etc. As mentioned earlier, Bitbucket is where we keep our known-good source code; changes should only be made from your own computer’s copy of the repo, and only with commits – more on this later.

SourceTree

Next, on your local machine, download SourceTree: https://www.sourcetreeapp.com/

Once it’s downloaded and installed, tell SourceTree where your repos live on Bitbucket (simple username/password login).

Next, SourceTree will ask which remote repository you want to clone to your local machine. You want to clone the remote repos from Bitbucket so that you can make changes to your local versions before you commit them back to Bitbucket. If you work with a group of developers, you will probably want someone to review your files before you commit. You should also “check out” local copies within PyCharm if someone else will also be working on your local files.
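If you’d rather clone from the command line instead of SourceTree, the equivalent is a plain git clone (the URL below is just a placeholder for your own repo):

git clone https://bitbucket.org/yourteam/yourrepo.git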

PyCharm

Now it’s time to install and configure PyCharm Community Edition: https://www.jetbrains.com/pycharm/ . PyCharm is an Integrated Development Environment (IDE) that provides code completion, nice pretty colors, and integration with VCS/Git for version control of your local (cloned) repo. In PyCharm, go to the File → Open menu, browse your local machine, and choose the root folder of the cloned repo of your choice. This will get you to the point where you can begin to edit files.

Ansible

Ansible (http://www.ansible.com/) is a management utility that helps you easily manage systems and deploy apps. Here is some introductory documentation: http://docs.ansible.com/ansible/intro_getting_started.html. Ansible usually runs on a dedicated admin server, and this is the server that issues commands or “playbooks”. Although your admin server contains the Ansible playbook files, we only want to make changes to the files in the Bitbucket repo, and then pull them into the admin server before executing the commands.
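To give a feel for how it runs, here is a sketch of the kinds of commands issued from the admin server; the inventory and playbook file names are placeholders:

# Ad-hoc connectivity test against every host in the inventory
ansible all -i inventory.ini -m ping
# Apply a playbook that was pulled down from the Bitbucket repo
ansible-playbook -i inventory.ini site.yml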

 

Vagrant

Vagrant (https://www.vagrantup.com/) provides easy-to-configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize productivity. First download, install, and run VirtualBox (https://www.virtualbox.org/wiki/Downloads), then open a terminal and start up a Vagrant “box” with the following:

   $ vagrant init hashicorp/precise32
   $ vagrant up

Vagrant will download and install the ‘precise32’ “box”, and you will now see the new virtual machine in VirtualBox. Next, from the command line, you can issue the command ‘vagrant ssh’, which will open a shell to your new precise32 VM. You can use this VM to test your configurations and playbooks against before you roll them out to your production servers.
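The basic lifecycle from the command line looks like this (all standard Vagrant subcommands):

   $ vagrant ssh       # open a shell inside the precise32 VM
   $ exit              # leave the VM
   $ vagrant halt      # shut the VM down
   $ vagrant destroy   # delete the VM entirely when you’re done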

Workflow

When you’ve changed something in your local (cloned) repo and you want it to become the “truth” on Bitbucket, do the following:

  1. Open the file from your local repo in PyCharm (double-click on the file icon in the menu tree).
  2. Edit the file.
  3. When done editing, right-click on the file → Git → Commit File.
  4. Now you want to push this edited file up to Bitbucket. Review the code, add comments, and then push.
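If you prefer the command line, the equivalent Git operations are roughly the following (the file path, commit message, and branch name are placeholders):

git add path/to/changed-file.php
git commit -m "Describe the change"
git push origin master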

 

Ubuntu Linux Server setup guide – Setup ssh, keygen, brew, and ssh-copy-id on Mac OS X

 

 

(Screenshot: ssh config file in iTerm on OS X)

What follows is an Ubuntu/Linux server setup guide that can be used to (1) configure a new Linux server and (2) set up an OS X workstation to easily connect to your Linux servers with preshared keys.

  • Build the server on Hyper-V, then set up your initial account during the Ubuntu LTS 14.04.2 setup.
  • Log in as the initial user and add accounts as necessary:
    • “sudo su -” – this does a sudo and gives you root’s path and all environment variables
    • useradd -m -s /bin/bash jcoltrin
    • passwd jcoltrin
    • vi /etc/sudoers
      • (at the end of the file) add the line: jcoltrin ALL=NOPASSWD: ALL
    • su jcoltrin – make sure you can su.
    • sudo su – this sequence has allowed you to sudo without having to type in your password.
    • Just a note: modifying /etc/group and putting users in there is the wrong way of adding sudoers – there’s no granular control, and users added there will still be required to enter their password when doing sudo.
  • ctrl+l clears screen
  • Add static IP address and dns-nameservers to /etc/network/interfaces
    • Get the name of your network interface with command:
    • ifconfig -a

      In my case, the network interface name is ens33. So to make my ens33 interface static, I configure /etc/network/interfaces with the text editor vi. The first interface is lo, which is the loopback interface. The line ‘auto ens33’ is necessary because it is used to start the interface when the system boots.

    • The relevant portion of the resulting /etc/network/interfaces:
      source /etc/network/interfaces.d/*
      
      # The loopback network interface
      auto lo
      iface lo inet loopback
      
      # The primary network interface
      auto ens33
      iface ens33 inet static
              address 10.0.10.151
              netmask 255.255.255.0
              gateway 10.0.10.254
              dns-nameservers 8.8.8.8 8.8.4.4

       

  • apt-get:
    • apt-get update – refreshes the package lists from the online repositories
    • apt-get upgrade – installs updates and security patches
    • apt-get dist-upgrade – note: make sure the /boot directory is not more than 80% full. If it is, it probably contains old kernels; see the cleanup sketch just after this list.
    • reboot
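One common way to reclaim space in /boot is to purge automatically installed packages that are no longer needed, which usually includes old kernels; review what apt plans to remove before confirming:

df -h /boot
sudo apt-get autoremove --purge
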
Setup ssh, keygen, brew, and ssh-copy-id on Mac OS X

Now we need to establish a secure and easy connection from our Mac to the new server. On our Mac, issue the following commands:

  • Install iTerm on your Mac. Configure it to your liking, but it’s a good idea to set the scroll-back limit in the Terminal settings to either 99,999 or unlimited. Now, in our new iTerm window, issue the command ssh-keygen – this generates both public and private keys in the .ssh directory in our home directory.
    • Install Homebrew on your Mac in order to get Unix tools installed:
      • Make sure your account on your Mac is an administrator by going into System Preferences → Users and Groups → (unlock) → Select Account → checkmark Allow user to administer this computer.
      • First install Xcode, then open a terminal again and paste in the command for installing Homebrew from http://brew.sh
      • Install Homebrew as it prompts, and run brew doctor so that we know we’re ready to install packages
      • brew install nmap ssh-copy-id wget htop ccze – this installs the Linux tools we want on our Mac
  • ssh-copy-id jcoltrin@serverIPaddress (enter password) – this copies our public key to the server we connected to. Now we can log into the server from our Mac terminal without having to type the password.
    • Also on the Mac, we want to make it easy to ssh into, for example, server.domain.com.
    • vi .ssh/config
    • Line 1: host server
    • Line 2: hostname server.domain.com
    • Line 3: User jcoltrin
    • Line 4: KeepAlive yes
    • Hit Esc, then type :wq! to save and exit
    • The result should look like the following:

jcmbp:.ssh jcoltrin$ cat config

Host	    server
    Hostname server.domain.com
    User jcoltrin
    KeepAlive yes
    ServerAliveInterval 15

Host    myAmazonAWS1
    Hostname jasoncoltrin.com
    user ubuntu
    IdentityFile ~/.ssh/jasoncoltrin_keypair1.pem
    KeepAlive yes  
    ServerAliveInterval 15
  • ssh server – now we are able to issue this command and get in immediately without having to enter a password, and we can also run sudo commands without having to enter our password again. As you can see in the config file above, we can also copy our .pem files into our .ssh directory and have config point to them so that we can easily ssh into our Amazon AWS servers as well.
  • If we will be running websites, we now want to install Virtualmin. Go to http://www.virtualmin.com/download.html#gpl and follow the instructions there for downloading install.sh.
Adding a new remote Administrative User’s ssh keys to a Linux Server

useradd -m -s /bin/bash newadmin1
mkdir ~newadmin1/.ssh
echo ssh-dss ****key data***..xxblahblahACBAM……kpucyrGw== [email protected] >> ~newadmin1/.ssh/authorized_keys
chown -R newadmin1:newadmin1 ~newadmin1/.ssh
chmod 700 ~newadmin1/.ssh
chmod 600 ~newadmin1/.ssh/authorized_keys

vi /etc/sudoers

newadmin1 ALL=NOPASSWD: ALL
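
As a quick sanity check (the hostname below is a placeholder), confirm from the new admin’s workstation that both the key and the NOPASSWD entry work:

ssh newadmin1@yourserver 'sudo whoami'   # should print "root" with no password prompt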

While this is not meant to be a comprehensive step-by-step guide, it should provide you with enough to set up an OS X workstation with pre-shared keys and copy those keys to your new server. Working with iTerm and pre-shared keys is, I think, vastly superior to PuTTY on Windows. I hope this guide helps a few admins become more efficient and versatile working on OS X and Linux.

 

WP-Filebase plugin for WordPress – changing maximum upload file size limits in PHP


So my new favorite plugin for WordPress is WP-Filebase: a free, easy way to upload files into WordPress that makes those files easy for others to download. While the plugin seems a little daunting to manage at first, it pretty much follows the typical methods other plugins employ, such as shortcodes. While editing a page or post, there is a WP-Filebase button next to the other editing buttons such as “insert link” or “Italic”. Once the basic concepts are mastered, it becomes a pleasure to create categories, upload files, and post them for download. There are a ton of other features to categorize, post, and track hits with WP-Filebase if you want.

One issue I encountered when using WP-Filebase is that by default the upload size limits in PHP and WordPress are pretty small, so the upload size for my entire site had to be increased. When I tried to upload any file larger than 2MB with WP-Filebase, the upload would quit and fail without much information or any error message. When you’re expecting to see “File added” and instead the page just refreshes without an error, it can be a little frustrating. Here are the steps I took to increase the file upload size:

Login to an ssh session on the server running WordPress.

Before you edit php.ini, it’s always a good idea to make a copy of the original file with a command something like:

cp /etc/php5/apache2/php.ini /home/jcoltrin/php.ini.original

Edit /etc/php5/apache2/php.ini with the command:

sudo vi /etc/php5/apache2/php.ini

Below this paragraph are the PHP settings to find and change in the vi editor. To find the settings, it helps for vi to be in command mode (the vi editor starts in command mode by default). While in command mode, hit the forward slash key /, type the keyword, then hit [enter/return]. The vi editor will jump to the first instance of the keyword it finds. You can then simply hit the n key to cycle to the next instance of that keyword. Hit the i key to go into insert mode; now you can use the delete/backspace and arrow keys to edit the settings. When you’re done editing, hit the Escape key, then the : key, type wq, and hit enter. There are a ton of other shortcuts, tips, and cheats for vi here: http://www.lagmonster.org/docs/vi.html .

upload_max_filesize = 20M
post_max_size = 20M
max_execution_time = 500
max_input_time = 500
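
Before restarting Apache, you can also spot-check the new values from the shell; note that the CLI can read a different php.ini than Apache does, so this is only a rough sanity check:

php -i | grep -E 'upload_max_filesize|post_max_size'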

After making these changes, I wanted to be sure they stuck after Apache restarts. I did this by restarting Apache and then viewing the PHP settings from the web browser itself.

Restarted apache2 with:
sudo /etc/init.d/apache2 restart

Made sure these limits stuck by creating a new file in the root of my apache website files location:
sudo touch /var/www/phpinfo.php

Insert the following line into phpinfo.php (and nothing else):
<?php phpinfo(); ?>

Then visited the file by going to http://domainname.com/phpinfo.php

I found that the settings were active and applied successfully by looking at the phpinfo settings website.

I then tested uploading a 12.9MB file that previously failed, and it now uploads successfully.

Delete the phpinfo.php you created (you probably don’t want this file hanging around for the world to find.)

That’s it, enjoy using WP-Filebase, and uploading/downloading files of any size to your heart’s content.

Linux iptables intro and basic network information

Introduction: iptables – the standard Linux firewall

iptables is a standard firewall built into common Linux distributions such as Ubuntu, Debian, and CentOS.

First, packets are logical containers of data representing the flow of data. Protocols are languages and sets of rules used by network devices to send and/or receive data. Ports are numbers that identify specific services and are used throughout TCP/IP networking. Well-known ports run from 0 through 1023, registered ports run from 1024 through 49151, and the dynamic or private (ephemeral) ports run from 49152 through 65535; IANA maintains the official list of these ranges. Ephemeral ports are commonly used as the client’s source port when connecting to one of a server’s well-known listening ports, and the server replies to that ephemeral port to continue the conversation. Here is a list of about 250 well-known ports.

iptables accepts or drops network packets when those packets match rules organized into pre-defined CHAINS stored in the computer’s memory. The chains can be bound in different orders, and they organize the firewall.

A packet, or datagram, is a series of bits that forms a container which can be examined, routed, dropped, and filtered with regard to its headers, source, destination, and content.

The packet is organized into different fields. The IP header is built from 32-bit words and carries, among other things, the source and destination IP addresses, while the surrounding layer-2 frame header carries the source and destination MAC addresses. A cyclic redundancy check (CRC) is computed over the frame before it is sent; when the frame reaches its destination, the checksum is recalculated and compared against the CRC field. If the two match, the frame is considered intact; if they differ, the frame is discarded and a higher-layer protocol such as TCP detects the loss and has the source retransmit the data.

On the wire, datagrams are really just electrical signals (copper Ethernet), pulses of light (optical links), or radio waves modulated in frequency and amplitude (wireless).

CSMA/CD is used to manage collisions on wired (Ethernet) networks, while wireless networks use CSMA/CA to avoid simultaneous transmission of data.

Layer 3 of the OSI model is where routers route packets to different VLANs and subnets based on their field values, using static routes and dynamic protocols such as RIP and OSPF. Layer 2 switches create connections between nodes using the addresses in their MAC tables, implemented in Application-Specific Integrated Circuits (ASICs).

Services running on a server rely on field data in each datagram. The traffic is organized by standard protocols that are bound to specific ports. Each port is represented by a number, and traffic is filtered by opening or closing ports to accept or drop packets whose field data matches that port.

Like other firewalls, iptables manages ports on a NIC where packets can enter, pass, or exit. Ports can be open, listening, or closed for each service or kind of traffic that will be allowed; other ports are closed so their traffic is denied.

Chains are sets of rules that manage network traffic by opening or closing ports, and they can be applied or bound to a network interface in a particular order.

There are three kinds of CHAINS:

  1. INPUT – packets coming into the PC.
  2. OUTPUT – packets leaving our PC.
  3. FORWARD – packets that pass through the PC if it’s multi-homed and being used as a router.
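
You can list the built-in chains and whatever rules are currently loaded with the standard listing command:

sudo iptables -L -n -v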

Here are common iptables switches used in chains:

  • -s = source address
  • -d = destination address
  • -p = protocol
  • -j = action
  • -P = specify default policy for a chain
  • -D = delete a rule for a chain
  • -R = replace a rule for a chain
  • -F = remove all rules for specified chain.
  • -L = list all chain rules
  • -A = add/append a rule to the end of a chain

In iptables you first define rules for the traffic you want to ALLOW. Then you add the last rule, the catch-all rule, at the bottom of the list; it blocks all other traffic not previously allowed.

Examples of rules applied to the INPUT chain:

  1. Allow HTTP traffic for an Apache2 web server on port 80 on the interface named eth0:

iptables -A INPUT -j ACCEPT -p tcp --destination-port 80 -i eth0

  2. Allow FTP packets for the VSFTPD daemon/service on port 21:

iptables -A INPUT -j ACCEPT -p tcp --destination-port 21 -i eth0

  3. Allow SSH traffic for Secure Shell connections on port 22:

iptables -A INPUT -j ACCEPT -p tcp --destination-port 22 -i eth0

  4. Apply a CATCH-ALL rule:

iptables -A INPUT -j DROP -p tcp -i eth0

*Note – catch-all rules must be entered and applied LAST.

You can define your own iptables chains as well as view the built-in chains present. Many users will define their own iptables rules in a shell script that is run automatically at boot.
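
Here is a minimal sketch of what such a boot-time script might look like, assuming the eth0 interface and the ports used in the examples above; adapt it before using it anywhere real:

#!/bin/sh
# Flush existing INPUT rules, allow SSH and HTTP, then drop all other TCP traffic
iptables -F INPUT
iptables -A INPUT -i eth0 -p tcp --destination-port 22 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --destination-port 80 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -j DROP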

 

Usage of suid and sgid in Linux

So when it comes to certain files and executable scripts, as a Linux admin you may want to allow certain users to run them with elevated privileges.

setuid and setgid allow you to grant limited elevated privileges (root) without having to add the users to the sudoers file.

Similar to the permission digits used with chmod, you set these bits with a leading digit of 4, 2, or 1: suid = 4, sgid = 2, sticky bit = 1.

To do a suid:

$chmod 4777 script  – would give you permissions of

-rwsrwxrwx 1 jason jason

to do sgid use:

$chmod 2777 script – would give you

-rwxrwsrwx 1 jason jason

by using $chmod 6777 script – you would get

-rwsrwsrwx 1 jason jason

For setting back to normal you would use

$chmod 0777 script

SGID is often used with folders. For example:

$mkdir groupFolder

#chmod 2775 groupFolder

would give you:

drwxrwsr-x 2 jason jason

When you set the group ID bit on the folder, any file added to that folder will inherit the group ownership of the folder.

Since a file that is suid could be malicious, you can find files on your system that have the suid and/or sgid bit set:

find .  -perm +6000

find .  -perm +2000

find .  -perm +4000

You should occasionally look for these files so you know which files and/or folders on your system run with, or hand out, elevated permissions.
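
To scan the whole system rather than just the current directory, something like the following works; note that newer versions of GNU find replace the +6000 syntax shown above with /6000:

sudo find / -xdev -type f -perm /6000 -ls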