Thursday, December 29, 2011

Active Directory UserMod Assistant

The Active Directory UserMod Assistant is an awesome application that lets your end users change their own AD profile data, such as telephone number or address. It's a free solution, and it's built on HTML and VBScript - so all the core requirements should already be on your users' workstations. In addition, the program is not compiled, allowing easy editing. For example, we put extensions in the 'Business' telephone number field and don't want to validate them.

I'm deploying it via a shortcut to a shared folder (the shortcut is auto-created on the user's desktop at login if it doesn't exist), so users who need VPN access can add their mobile number(s) for use with PhoneFactor.
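For reference, a minimal sketch of that login-script logic (the share path, file name, and shortcut name are placeholders for whatever your environment uses):

' Create the desktop shortcut if it doesn't already exist
Set oShell = CreateObject("WScript.Shell")
Set oFSO = CreateObject("Scripting.FileSystemObject")

strLnk = oShell.SpecialFolders("Desktop") & "\AD UserMod Assistant.lnk"

If Not oFSO.FileExists(strLnk) Then
    Set oLink = oShell.CreateShortcut(strLnk)
    oLink.TargetPath = "\\fileserver\apps\ADUserModAssistant.hta" ' placeholder share path
    oLink.Save
End If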

Note - there may be security issues when running the application from a share that is not in your Trusted Sites.
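If you hit those, one workaround is to map the file server into a higher-trust IE zone. A sketch using the ZoneMap registry keys (the server name is a placeholder; zone 1 = Local intranet, 2 = Trusted Sites), which can also be pushed domain-wide via the 'Site to Zone Assignment List' group policy:

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\fileserver" /v file /t REG_DWORD /d 1 /f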

Wednesday, December 21, 2011

'Enterprise' System Monitoring with OSS tools

I've been building an IT Department, and of course, one of the basic requirements is metrics and monitoring. I played with AlienVault for a bit, but it's more of a vulnerability management/log consolidation tool. I then stumbled upon ZenOSS via Proxmox (it's an included VM appliance), and it seems to work quite well.

After some setup, I now get both WMI and SNMP notifications (like Nagios) from various devices. I also have traffic graphs (like Cacti) built from SNMP data for the interfaces on those devices. Note - Exchange sucks, and when it crashes I get no notifications. Time to get the Qmail SMTP relay in place for reliable email delivery.
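On the Cisco gear, for example, the device-side SNMP config is just the usual couple of lines; something like this (collector IP and community string are placeholders):

snmp-server community public RO
snmp-server enable traps
snmp-server host 192.168.1.10 version 2c public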

What I was missing was NetFlow data. Not a big deal - throw an OpenVZ Debian VM onto Proxmox and set it up with nfdump and nfsen (modified from http://www.linuxscrew.com/2010/11/25/how-to-monitor-traffic-at-cisco-router-using-linux-netflow/ )

apt-get update
apt-get upgrade
dpkg-reconfigure tzdata
apt-get install nfdump

Whoa! Do you need to see ASA NetFlow? It's non-standard - install an older 'nsel' version of nfdump from source instead!
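I won't pretend to remember the exact tarball name, but after grabbing the NSEL-capable nfdump source from the nfdump SourceForge project, the build is the standard autotools routine; roughly (--enable-nfprofile is needed for nfsen integration, and the NSEL flag only exists on builds that offer it):

cd /usr/src/nfdump-nsel    # unpacked source directory; name varies by release
./configure --enable-nfprofile --enable-nsel
make
make install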

aptitude install rrdtool librrd2-dev librrd-dev librrd4 librrds-perl librrdp-perl
apt-get install libmailtools-perl
apt-get install apache2 php5
apt-get install tcpdump

cd /usr/src/
wget -O nfsen-1.3.5.tar.gz http://sourceforge.net/projects/nfsen/files/stable/nfsen-1.3.5/nfsen-1.3.5.tar.gz/download
tar -xvzf nfsen-1.3.5.tar.gz
cd nfsen-1.3.5
cp etc/nfsen-dist.conf etc/nfsen.conf

mkdir -p /data/nfsen

Before continuing, edit etc/nfsen.conf to specify where nfsen should be installed, the web server's username, its document root directory, etc. The file is well commented, so it shouldn't give you any serious problems.
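The variables I ended up touching were along these lines (the paths match this setup and a stock Debian Apache install; adjust to taste):

$BASEDIR  = "/data/nfsen";
$HTMLDIR  = "/var/www/nfsen/";
$WWWUSER  = "www-data";
$WWWGROUP = "www-data";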

One of the major sections of nfsen.conf is 'Netflow sources'; it should contain exactly the same port number(s) you configured on the Cisco side - recall the 'ip flow-export …' line where we specified port 23456. E.g.

%sources = (
'Router1' => { 'port' => '23456', 'col' => '#0000ff', 'type' => 'netflow' },
);
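For completeness, the matching router-side export config looks something like this (the interface name and collector IP are placeholders):

ip flow-export version 5
ip flow-export destination 192.168.1.10 23456
!
interface GigabitEthernet0/0
 ip flow ingress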

Now it’s time to finish the installation:

./install.pl etc/nfsen.conf

If the install succeeds, you'll see a confirmation, after which you'll need to start the nfsen daemon to get the ball rolling:
/path/to/nfsen/bin/nfsen start

From this point on, nfdump is collecting the NetFlow data exported by the Cisco router, and nfsen is hard at work visualizing it - just open a web browser and go to http://linux_web_server/nfsen/nfsen.php to make sure. If you see empty graphs, just wait a while to let nfsen collect enough data to visualize.

That’s it!

Parts taken from linuxscrew.com

Now I just need to figure out why my other ASA doesn't seem to support NetFlow exports...
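For the record, NSEL export on an ASA needs 8.2+ code plus a policy-map entry, roughly like this (interface name and collector IP are placeholders), so the OS version and policy-map are the first things I'll be checking:

flow-export destination inside 192.168.1.10 23456
!
policy-map global_policy
 class class-default
  flow-export event-type all destination 192.168.1.10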

Friday, December 16, 2011

ZFS speed

So I ran some quick dd-based write tests before choosing a filesystem for my new Proxmox-based virtual server. I really wanted ZFS, but I need OpenVZ and KVM more, and ZFS-Fuse just doesn't cut it.

In short - ext4 is the winner, though ZFS sure looks pretty damn fast. SolarisInternals has more info on ZFS write speed.

Another thing to note, though the ZFS-Fuse numbers below don't reflect it, is that ZFS performs better with write-back cache turned off on your RAID controller. Had I tested OpenSolaris with write-back cache on (boot-up was ungodly slow with it on), I'm sure it would have shown in the numbers.
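For reference, the zfs-fuse pool/dataset setup for the runs below was roughly this (pool name, dataset name, and device names are placeholders):

# raidz pool across the four drives (the 'RAID 5' runs sat on the PERC volume instead)
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs create tank/images

# the '64k recordsize' run only changes the dataset's record size
zfs set recordsize=64K tank/images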

4x 600GB, RAID 5, WB on, PERC 7, ext4

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 2.53637 s, 423 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 20.3379 s, 528 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync
107374182400 bytes (107 GB) copied, 192.164 s, 559 MB/s


---------------------------------------------------------------------------------------
4x 600GB, RAID 5, WB on, PERC 7, ext3

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 4.58121 s, 234 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 37.0681 s, 290 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync
107374182400 bytes (107 GB) copied, 371.047 s, 289 MB/s


---------------------------------------------------------------------------------------
4x 600GB, RAID 5, WB on, PERC 7, reiser3

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 2.91366 s, 369 MB/s


proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 28.923 s, 371 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync
Hmm, not sure what happened here...

---------------------------------------------------------------------------------------
4x 600GB, RAID 5, WB on, PERC 7, reiser4

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 2.87369 s, 374 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 28.9769 s, 371 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync

Hmm, not sure what happened here...


---------------------------------------------------------------------------------------
4x 600GB, RAID 5, WB on, PERC 7, zfs-fuse (defaults)

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 15.0122 s, 71.5 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 159.544 s, 67.3 MB/s


100GB - didn't bother

---------------------------------------------------------------------------------------
4x 600GB, RAID 5, WB off, PERC 7, zfs-fuse (defaults)


proxmox2:/# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 23.7539 s, 45.2 MB/s

---------------------------------------------------------------------------------------
4x 600GB, RAID 5, WB off, PERC 7, zfs-fuse (64k recordsize)

proxmox2:/# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 21.3764 s, 50.2 MB/s


---------------------------------------------------------------------------------------
4x 600GB, raidz, WB off, PERC 7, zfs-fuse

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 19.1704 s, 56.0 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 187.938 s, 57.1 MB/s

---------------------------------------------------------------------------------------
4x 600GB, raidz, WB off, PERC 7, ZFS OpenSolaris (Nexenta)

root@proxmox2:/volumes# dd if=/dev/zero of=/volumes/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 1.30164 seconds, 825 MB/s

root@proxmox2:/volumes# dd if=/dev/zero of=/volumes/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 5.18124 seconds, 2.1 GB/s

root@proxmox2:/volumes# dd if=/dev/zero of=/volumes/images/file bs=1024k count=102400 conv=fsync
107374182400 bytes (107 GB) copied, 45.0769 seconds, 2.4 GB/s