Thursday, December 29, 2011

Active Directory UserMod Assistant

The Active Directory UserMod Assistant is an awesome application that lets your end users change their own AD profile data, such as telephone number or address. It's a free solution built on HTML and VBScript, so all the core requirements should already be on your users' workstations. In addition, the program isn't compiled, which makes it easy to edit. For example, we put extensions in the 'Business' telephone number field and don't want that field validated.

I'm deploying it via a shortcut to a shared folder (the shortcut is auto-created on the user's desktop at login if it doesn't already exist), so users who need VPN access can add their mobile number(s) for use with Phone Factor.

Note - There may be Security Issues when running the application from a share that is not in your Trusted Sites.

Wednesday, December 21, 2011

'Enterprise' System Monitoring with OSS tools

I've been building an IT department, and of course one of the basic requirements is metrics and monitoring. I played with AlienVault for a bit, but it's more of a vulnerability management/log consolidation tool. I stumbled upon ZenOSS via Proxmox (it's an included VM), and it seems to work quite well.

After some setup, I now get both WMI and SNMP notifications (like Nagios) from various devices. I also have traffic graphs (like Cacti) from SNMP for interfaces on those devices. Note - Exchange sucks, and when it crashes I get no notifications. Time to get the Qmail SMTP relay in place for reliable email delivery.

What I was missing was NetFlow data. Not a big deal - throw an OpenVZ Debian VM onto Proxmox and set it up with nfdump and nfsen (steps adapted from http://www.linuxscrew.com/2010/11/25/how-to-monitor-traffic-at-cisco-router-using-linux-netflow/ )

apt-get update
apt-get upgrade
dpkg-reconfigure tzdata
apt-get install nfdump

Whoa! Do you need to see ASA NetFlow? It's non-standard (Cisco's NSEL), so install an older 'nsel' build of nfdump from source instead!
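
Roughly, the from-source build on the same Debian VM looks like the sketch below. The tarball name is a placeholder - grab whichever *-NSEL release is current on the nfdump SourceForge downloads page - and --enable-nfprofile is what nfsen needs, so do this build after the rrdtool dev packages below are installed.

apt-get install build-essential flex bison
cd /usr/src
# download the *-NSEL tarball from the nfdump project page into /usr/src first
tar -xvzf nfdump-1.5.8-2-NSEL.tar.gz
cd nfdump-1.5.8-2-NSEL
./configure --enable-nfprofile
make
make install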

aptitude install rrdtool librrd2-dev librrd-dev librrd4 librrds-perl librrdp-perl
apt-get install libmailtools-perl
apt-get install apache2 php5
apt-get install tcpdump

cd /usr/src/
wget http://sourceforge.net/projects/nfsen/files/stable/nfsen-1.3.5/nfsen-1.3.5.tar.gz/download
tar -xvzf nfsen-1.3.5.tar.gz
cd nfsen-1.3.5
cp etc/nfsen-dist.conf etc/nfsen.conf

mkdir -p /data/nfsen

In order to continue, you should edit etc/nfsen.conf to specify where to install nfsen, the web server's username, its document root directory, etc. The file is well commented, so there shouldn't be any serious problems with it.
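
The settings you'll most likely touch look something like this. The variable names come from nfsen-dist.conf; the paths and accounts below are examples matching this Debian VM and the /data/nfsen directory created above, so adjust to taste:

$BASEDIR = "/data/nfsen";
$HTMLDIR = "/var/www/nfsen/";
$PREFIX  = "/usr/bin";      # where the Debian nfdump binaries ended up
$USER    = "www-data";
$WWWUSER = "www-data";
$WWWGROUP = "www-data";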

One of the major sections of nfsen.conf is 'Netflow sources'; it should contain exactly the same port number(s) you've configured the Cisco with - recall the 'ip flow-export ...' line where we specified port 23456. E.g.

%sources = (
'Router1' => { 'port' => '23456', 'col' => '#0000ff', 'type' => 'netflow' },
);
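
Before blaming nfsen for empty graphs, it's easy to confirm flows are actually reaching the box on that port - that's what tcpdump was installed for above (swap in whatever interface your VM actually uses):

tcpdump -n -i eth0 udp port 23456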

Now it’s time to finish the installation:

./install.pl etc/nfsen.conf

If it succeeds, you'll see a confirmation message, after which you have to start the nfsen daemon to get the ball rolling:

/path/to/nfsen/bin/nfsen start

From this point on, nfdump is collecting the netflow data exported by the Cisco router and nfsen is hard at work visualizing it - just open a web browser and go to http://linux_web_server/nfsen/nfsen.php to make sure. If you see empty graphs, just wait a while to let nfsen collect enough data to visualize.
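
If the graphs stay empty for a long time, a couple of quick checks from the shell will tell you whether the collector side is healthy. The profiles-data path below assumes the default layout under the $BASEDIR from nfsen.conf and the 'Router1' source name used above:

/path/to/nfsen/bin/nfsen status
# a new nfcapd.* file should appear here roughly every 5 minutes
ls -l /data/nfsen/profiles-data/live/Router1/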

That’s it!

Parts taken from linuxscrew.com

Now I just need to figure out why my other ASA doesn't seem to support Netflow exports...
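
For reference, NSEL export on the ASA side looks roughly like this - it needs 8.2(1) or newer code, which may well be the problem with the other box. The interface name, collector IP, and port here are placeholders:

flow-export destination inside 192.168.1.50 23456
policy-map global_policy
 class class-default
  flow-export event-type all destination 192.168.1.50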

Friday, December 16, 2011

ZFS speed

So I ran some quick dd-based write tests before choosing a filesystem for my new Proxmox-based virtual server. I really wanted ZFS, but I need OpenVZ and KVM more, and ZFS-Fuse just doesn't cut it.

In short - ext4 is the winner, though native ZFS (under OpenSolaris) sure looks pretty damn fast. SolarisInternals has more info on ZFS write speed.

Another thing to note, though the ZFS-Fuse numbers below don't reflect it, is that ZFS performs better with the write-back cache turned off on your RAID controller. Had I tested OpenSolaris with the write-back cache on (boot-up was ungodly slow with it on), I'm sure it would have been reflected in the numbers.

4x 600GB, RAID 5, wb on, PERC 7, ext4

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 2.53637 s, 423 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 20.3379 s, 528 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync
107374182400 bytes (107 GB) copied, 192.164 s, 559 MB/s


---------------------------------------------------------------------------------------
4x 600GB, RAID 5, wb on, PERC 7, ext3

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 4.58121 s, 234 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 37.0681 s, 290 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync
107374182400 bytes (107 GB) copied, 371.047 s, 289 MB/s


---------------------------------------------------------------------------------------
4x 600GB, RAID 5, wb on, PERC 7, reiser3

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 2.91366 s, 369 MB/s


proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 28.923 s, 371 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync
Hmm Not sure what happened here...

---------------------------------------------------------------------------------------
4x 600GB, RAID 5, wb on, PERC 7, reiser4

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 2.87369 s, 374 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 28.9769 s, 371 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=102400 conv=fsync

Hmm Not sure what happened here...


---------------------------------------------------------------------------------------
4x 600GB, RAID 5, wb on, PERC 7, zfs-fuse (defaults)

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 15.0122 s, 71.5 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 159.544 s, 67.3 MB/s


100GB - didn't bother

---------------------------------------------------------------------------------------
4x 600GB, RAID 5, wb off, PERC 7, zfs-fuse (defaults)


proxmox2:/# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 23.7539 s, 45.2 MB/s

---------------------------------------------------------------------------------------
4x 600GB, RAID 5, wb off, PERC 7, zfs-fuse (64k recordsize)

proxmox2:/# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 21.3764 s, 50.2 MB/s


---------------------------------------------------------------------------------------
4x 600GB, raidz, wb off, PERC 7, zfs-fuse

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 19.1704 s, 56.0 MB/s

proxmox2:~# dd if=/dev/zero of=/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 187.938 s, 57.1 MB/s





---------------------------------------------------------------------------------------
4x 600GB, raidz, wb off, PERC 7, ZFS OpenSolaris (Nexenta)

root@proxmox2:/volumes# dd if=/dev/zero of=/volumes/images/file bs=1024k count=1024 conv=fsync
1073741824 bytes (1.1 GB) copied, 1.30164 seconds, 825 MB/s

root@proxmox2:/volumes# dd if=/dev/zero of=/volumes/images/file bs=1024k count=10240 conv=fsync
10737418240 bytes (11 GB) copied, 5.18124 seconds, 2.1 GB/s

root@proxmox2:/volumes# dd if=/dev/zero of=/volumes/images/file bs=1024k count=102400 conv=fsync
107374182400 bytes (107 GB) copied, 45.0769 seconds, 2.4 GB/s

Tuesday, November 29, 2011

gpupdate /force fails on Windows 7 x64

So I'm back in the thick of things as IT Manager of a small company, and having spent the last year specifically in a security role, my first action is to push out Group Policy changes to make sure updates are installed and, more importantly, that Flash is up to date.

I've discovered that Windows 7 x64 machines on Win2k3 domains have major issues.

First, they won't join the domain in the normal manner. Then I found one that was previously set up (I have no idea how) that won't run gpupdate /force. When you do, all you get is:

The processing of Group Policy failed. Windows could not resolve the computer name. This could be caused by one or more of the following:
a) Name Resolution failure on the current domain controller.
b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).
Computer Policy update has completed successfully.
To diagnose the failure, review the event log or invoke gpmc.msc to access information about Group Policy results.


I hate Windows.

To fix both issues - add the following Key to your registry:

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\RPC]
"Server2003NegotiateDisable"=dword:00000001

Or save that in a text file, give it a .reg extension, and double-click it.
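
If you go the .reg route, note that Registry Editor expects the version header line at the top of the file for the double-click import to work:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\RPC]
"Server2003NegotiateDisable"=dword:00000001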

Enjoy.

Wednesday, July 27, 2011

FreeBSD 64bit php 5.3 ioncube Zend xcache

This is mostly a reminder for myself: when installing ionCube from FreeBSD ports, an ioncube.ini loader file is created (or already exists) in /usr/local/etc/php

If you have xcache installed, there is also a /usr/local/etc/php/xcache.ini

When firing up php (php -v / php -m), you will receive an error:

PHP Fatal error: [ionCube Loader] The Loader must appear as the first entry in the php.ini file in Unknown on line 0

The reason is the defaults in xcache.ini: XCache is loaded as a full module instead of a Zend extension. Change the xcache.ini file to load it as a Zend extension, and prepend the ionCube loader lines ahead of the XCache lines, like so:

[Zend]
zend_extension="/usr/local/lib/php/20090626/ioncube/ioncube_loader.so"
zend_extension_ts="/usr/local/lib/php/20090626/ioncube/ioncube_loader_ts.so"

[xcache-common]
;; install as zend extension (recommended, but not working yet)
zend_extension = /usr/local/lib/php/20090626/xcache.so
zend_extension_ts = /usr/local/lib/php/20090626/xcache.so
;; or install as extension
;;extension = xcache.so


With that change in place, php -v shows both loaders:

PHP 5.3.6 with Suhosin-Patch (cli) (built: Jul 7 2011 09:16:37)
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
with the ionCube PHP Loader v4.0.9, Copyright (c) 2002-2011, by ionCube Ltd., and
with XCache v1.3.2, Copyright (c) 2005-2011, by mOo

Wednesday, January 26, 2011

AVS / ZFS Seamless (Copy of Original Sun Blog)

I can't let this die... it's a solution with such high potential. It seems that with the Oracle purchase of Sun, the Sun blog links are all dead, so I'm reposting the original page from the Wayback Machine for reference.


How 'suite' it is... - Jackie Gleason The "Availability Suite"

Tuesday Jun 12, 2007
AVS and ZFS, seamless?

A question was recently posted to zfs-discuss@opensolaris.org on the subject of AVS replication vs. ZFS send/receive for odd-sized volume pairs, asking whether the use of AVS makes it all seamless. Yes, the use of Availability Suite makes it all seamless, but only after AVS is initially configured.

Unlike ZFS, which was designed and developed to be very easy to configure, Availability Suite requires explicit and somewhat overly detailed configuration information to be set up, and set up correctly, for it to work seamlessly.

Recently I worked with one of Sun's customers on the configuration of two Sun Fire x4500 servers - a remarkably well-performing system: a four-way x64 server with the highest storage density available, 24 TB in 4U of rack space. The customer's desired configuration was simple: two servers in an active-active, high availability configuration, deployed 2000 km apart, with each system acting as the disaster recovery system for the other. Replication needed to be CDP (Continuous Data Protection), running 24/7 by 365 in both directions, and once set up correctly, CDP would work seamlessly as a lights-out operation.

Each x4500, or Thumper, comes with 48 disks, two of which will be used as the SVM mirrored system disk (can't have a single point of failure), leaving 46 data disks. Since each system will also act as the disaster recovery system for the other site, this leaves 23 disks available on each system as local data disks. The decision as to what type of ZFS-provided redundancy to use, the number of volumes in each pool, or whether compression or encryption is enabled, is not a concern to Availability Suite, since whatever vdevs are configured, the ZFS volume and file metadata will get replicated too.

For testing out this replicated ZFS on AVS scenario on my Thumper, here are the steps followed:

1). Take one of the 46 disks that will eventually be placed in the ZFS storage pool. Use the ZFS zpool utility to correctly format this disk, an action which will create an EFI-labeled disk with all available blocks in slice 0. Then delete the pool.

# zpool create -f temp c4t2d0; zpool destroy temp

2). Next, run the AVS 'dsbitmap' utility to determine the size of the SNDR bitmap needed to replicate this disk's slice 0, saving the results for later use.

# dsbitmap -r /dev/rdsk/c4t2d0s0 | tee /tmp/vol_size
Remote Mirror bitmap sizing

Data volume (/dev/rdsk/c4t2d0s0) size: 285196221 blocks
Required bitmap volume size:
Sync replication: 1089 blocks
Async replication with memory queue: 1089 blocks
Async replication with disk queue: 9793 blocks
Async replication with disk queue and 32 bit refcount: 35905 blocks

The selection here will be synchronous replication with memory-based queues. Other replication types also work with ZFS, but synchronous replication is best if network latency is low.

3). To assure redundancy of the SNDR bitmaps, each will be mirrored via SVM, hence we need to double the number of blocks needed, rounded up to a multiple of 8 KB (16 blocks).

# VOL_SIZE="`cat /tmp/vol_size| grep 'size: [0-9]' | awk '{print $5}'`"
# BMP_SIZE="`cat /tmp/vol_size| grep 'Sync ' | awk '{print $3}'`"
# SVM_SIZE=$(( (((BMP_SIZE + (16-1)) / 16) * 16) * 2 ))
# ZFS_SIZE=$((VOL_SIZE-SVM_SIZE))
# SVM_OFFS=$(((34+ZFS_SIZE)))
# echo "Original volume size: $VOL_SIZE, Bitmap size: $BMP_SIZE"
# echo "SVM soft partition size: $SVM_SIZE, ZFS vdev size: $ZFS_SIZE"

5). Use the 'find' utility below, adjusting its first parameter, to produce the list of volumes that will be placed into the ZFS storage pool. Carefully examine this list, and adjust the first search parameter and/or use 'egrep -v "disk|disk"' for one or more disks, to exclude from this list any volumes that are not to be part of this ZFS storage pool configuration.

The resulting list produced by "find ..." is key to reformatting all of the LUNs that will be part of a replicated ZFS storage pool.

# find /dev/rdsk/c[45]*s0
or
# find /dev/rdsk/c[45]*s0 | egrep -v "c4t2d0s0|c4t3d0s0"

6). Re-use the corrected find command from above as the driver to change the format of all of those volumes.

# find /dev/rdsk/c[45]*s0 | xargs -n1 fmthard -d 0:4:0:34:$ZFS_SIZE
# find /dev/rdsk/c[45]*s0 | xargs -n1 fmthard -d 1:4:0:$SVM_OFFS:$SVM_SIZE
# find /dev/rdsk/c[45]*s0 | xargs -n1 prtvtoc |egrep "^ [01]|partition map"

7). Re-use the corrected find command from above, with the additional selection of only even numbered disks, placing slice 1 of all selected disks into the SVM metadevice d101

# find /dev/rdsk/c[45]*[24680]s1 | xargs -I {} echo 1 {} | xargs metainit d101 `find /dev/rdsk/c[45]*[24680]s1 | wc -l`

8). Re-use the corrected find command from above, with the additional selection of only odd numbered disks, placing slice 1 of all selected disks into the SVM metadevice d102

# find /dev/rdsk/c[45]*[13579]s1 | xargs -I {} echo 1 {} | xargs metainit d102 `find /dev/rdsk/c[45]*[13579]s1 | wc -l`

9). Now mirror metadevices d101 and d102 into mirror d100, ignoring the WARNING that both sides of the mirror will not be the same. When the bitmap volumes are created, they will be initialized, at which time both sides of the mirror will be equal.

# metainit d100 -m d101 d102

10). Now, from the mirrored SVM metadevice (d100), allocate bitmap volumes out of SVM soft partitions, one for each SNDR replica

# OFFSET=1
# for n in `find /dev/rdsk/c[45]*s1 | grep -n s1 | cut -d ':' -f1 | xargs`
do
metainit d$n -p /dev/md/rdsk/d100 -o $OFFSET -b $BMP_SIZE
OFFSET=$(((OFFSET + BMP_SIZE + 1)))
done

11). Repeat steps 1 - 10 on the SNDR remote system (NODE-B)

12). Generate the SNDR enable on NODE-A

# DISK=1
# for ZFS_DISK in `find /dev/rdsk/c[45]*s0`
do
sndradm -nE NODE-A $ZFS_DISK /dev/md/rdsk/d$DISK NODE-B $ZFS_DISK /dev/md/rdsk/d$DISK ip sync g zfs-pool
DISK=$(((DISK + 1)))
done

13). Repeat step 12 on NODE-B

14). Then perform the ZPOOL enable

# find /dev/rdsk/c[45]*s0 | xargs zpool create zfs-pool

15). Enable SNDR replication, and take a look at what you have done!

# sndradm -g zfs-pool -nu
# sndradm -g zfs-pool -P
# metastat -P
# zpool status zfs-pool

Posted at 07:04PM Jun 12, 2007 by jilokrje
Comments:

I'm trying this on Solaris Express b77

Step 14 doesn't work. I get the error:-

cannot use '/dev/rdsk/c3d0s0': must be a block device or regular file

"sndradm -g zfs-pool -P" should be a lowercase "-p"

Despite this I cannot get it to work. I can set it up OK on both machines, but nothing is being replicated between nodes at all.

Posted by Nathan on December 26, 2007 at 11:28 PM EST #

Sorry, I should have said "metastat -P" should be "metastat -p"

I kickstarted everything off with a full volume copy "sndradm -m" and then turned autosync on with "sndradm -a on" on both nodes. Now there's a lot of activity.

Posted by Nathan on December 27, 2007 at 04:15 AM EST #