Resurrecting Insignia SoftWindows95 for SGI IRIX

Posted by: admin  :  Category: Operating Systems, Virtualization

Well, well, it feels like ten years ago: playing around with my rather aged SGI O2, which I hadn't touched in years, doing a fresh reinstall of IRIX and some apps.

While flipping through my CDs I stumbled across SoftWindows95 for IRIX. I just couldn't resist and popped the disc in, having the software installed just minutes later, only to find that I didn't have the license any more 🙁

But there’s hope …
Read more…

Strange compilation error on MySQL

Posted by: admin  :  Category: FreeBSD, Programming

Yesterday I started digging around for a solution to create per-user or per-database statistics on MySQL, one of the more important pieces I had been missing from it for a long time.

Luckily enough, some folks out there had already done some work on this topic, so I wouldn't have to start from scratch 🙂

Read more…

Enabling ReiserFS, XFS, JFS on RedHat Enterprise Linux

Posted by: admin  :  Category: RHEL

Despite the Linux kernel having support for so many file systems, not all of them are enabled in RedHat Enterprise Linux by default. This may well be because some of them don't yet qualify as “enterprise grade” in the eyes of RedHat, who knows… Luckily, support for missing file systems such as ReiserFS, XFS and JFS can be added easily as outlined below.
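Just to give a rough idea of where this is heading (the full walkthrough is behind the link): whatever route gets the kernel module onto the box, actually using the file system afterwards boils down to loading the module and creating a file system. XFS is taken as the example here, with /dev/sdb1 as a stand-in device:

# modprobe xfs
# grep xfs /proc/filesystems
# mkfs.xfs /dev/sdb1
# mount -t xfs /dev/sdb1 /mnt/test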

Read more…

Killing a Windows Terminal Session from remote

Posted by: admin  :  Category: Windows

Darn it!
Imagine what happens when a Windows box, which is configured for remote administrative terminal mode only, is left with two zombie terminal sessions: since that mode allows just two concurrent sessions, you're effectively locked out.
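The rescue (detailed after the jump) comes from another machine: list the sessions remotely and reset the dead ones. A minimal sketch using the stock qwinsta/rwinsta tools, where “badbox” and the session ID 1 are made-up examples:

C:\> qwinsta /server:badbox
C:\> rwinsta 1 /server:badbox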
Read more…

An AutoFS executable map to automount device nodes

Posted by: admin  :  Category: Operating Systems, Utilities

For my company’s hard disk-based backup system I needed the ability to automount disk drives by their device name into a standard directory structure.

One possible approach would be to add a line like this to fstab:

/dev/sda1       /mnt/sda1       ext3    defaults,noauto 0       0

This may be good enough in some cases, but it wasn't sufficient for me, as there were dozens of device nodes that might eventually get mounted.

So I basically wanted something that would let me simply access a directory and have the underlying disk mounted automatically, then unmounted again when not in use, while staying dynamic in nature so it would auto-adjust to the devices actually present.
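AutoFS with an executable map does exactly that. As a teaser, here's a minimal sketch of such a map; the file name /etc/auto.disks is hypothetical, and the script must be marked executable so automount treats it as a program map:

#!/bin/sh
# Executable AutoFS map: automount passes the lookup key
# (e.g. "sda1") as $1 and expects a map entry on stdout,
# or no output at all if the key cannot be resolved.
key="$1"
[ -b "/dev/$key" ] || exit 0   # only answer for existing block devices
echo "-fstype=ext3 :/dev/$key"

Hooked into /etc/auto.master with a line like “/mnt/disks /etc/auto.disks --timeout=60”, accessing /mnt/disks/sda1 then mounts /dev/sda1 on the fly and expires the mount again after 60 idle seconds.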
Read more…

Is RAID1 possible on a USB stick?

Posted by: admin  :  Category: FreeBSD, HA

Last week we had a discussion at the office whether it would be possible to span a RAID across USB sticks.
That question came up as a joke while I was working on some RAID system for evaluation purposes.
Well, my friend doubted it when I replied that it would definitely work out with a FreeBSD software RAID using gmirror (as a matter of fact, geom vinum works, too).
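For the impatient, the gmirror side of the experiment boils down to something like this, assuming the two sticks attach as da0 and da1:

# gmirror load
# gmirror label -v usbmirror /dev/da0 /dev/da1
# newfs /dev/mirror/usbmirror
# mount /dev/mirror/usbmirror /mnt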

Proof?
Read more…

ufs_dirbad panic with mangled entries in ufs

Posted by: admin  :  Category: FreeBSD

FreeBSD’s ufs usually does an excellent job of preventing file system corruption. But even the best system manages to mess up once in a while.

One thing you may eventually stumble across are so-called mangled entries, which are usually not fixable with fsck and result in kernel panics upon access.
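From memory, the resulting panic reads roughly like this (inode and offset are made-up values):

panic: ufs_dirbad: /mnt: bad dir ino 12345 at offset 512: mangled entry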
Read more…

FreeBSD software RAID0: gvinum vs. gstripe

Posted by: admin  :  Category: FreeBSD, HA

Some time back I announced a review of FreeBSD's geom software RAID implementations.

Today's article compares geom stripe (gstripe) with geom vinum (gvinum) for disk striping (RAID0).

All testing was done on the same hardware as before to get results comparable to previous tests.

Benchmarks were taken using stripe sizes of 64k, 128k and 256k and measured using dd, bonnie++ and rawio as before.

Technology-wise, gstripe follows the same approach as gmirror, which I looked at previously.
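To give an idea of the two setups, creating a striped device with a 64k stripe size looks roughly like this for each framework. The device names da0/da1 and the volume names are placeholders, and the gvinum part follows classic vinum configuration syntax, so take this as a sketch rather than my literal commands:

# gstripe label -v -s 65536 st0 /dev/da0 /dev/da1

# cat > stripe.conf << EOF
drive d0 device /dev/da0
drive d1 device /dev/da1
volume vstripe
  plex org striped 64k
    sd length 0 drive d0
    sd length 0 drive d1
EOF
# gvinum create stripe.conf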

# rawio benchmark results

rawio was chosen to measure I/O speed during concurrent access. rawio was set to run all tests (random read, sequential read, random write, sequential write) with eight processes on the /dev/stripe/* and /dev/gvinum/* devices.

Results for the single disk are provided as well to compare performance not only between the different frameworks but also against the native disk performance.

Click the images to see the actual result values and a chart.

[rawio result charts for gstripe and gvinum]

# dd benchmark results

dd was chosen to measure raw block access to the /dev/stripe/* and /dev/gvinum/* devices. dd was set to run sequential read and write tests using block sizes from 16k to 1024k.
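A single run with 64k blocks would look something like this (count chosen arbitrarily here to move 1 GB; the write direction of course destroys any data on the device):

# dd if=/dev/stripe/st0 of=/dev/null bs=64k count=16384
# dd if=/dev/zero of=/dev/stripe/st0 bs=64k count=16384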

Click the images to see the actual result values and a chart.

[dd result charts for gstripe and gvinum]

# bonnie++ benchmark results

Finally, bonnie++ was used to get pure file system performance.
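A typical invocation, assuming the device under test carries a file system mounted at /mnt/test and that 2 GB of test data is enough to defeat caching on the box:

# bonnie++ -d /mnt/test -s 2048 -u root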

Click the images to see the actual result values and a chart.

[bonnie++ result charts for gstripe and gvinum]

# conclusion

Looking at raw disk access I must conclude that neither framework beats single disk performance overall when it comes to blockwise input/output with dd.
gvinum generally performs better than gstripe except when using 256k stripe sizes.

Now, since dd is very synthetic by its nature, rawio is much better suited to show how the devices would perform in a more “real-life” situation.
Although the rawio benchmark results may look low, these numbers were achieved by running 8 processes at once. They best reflect what could be expected in a true multi-user environment with concurrent access.
The results show no absolute winner: depending on the stripe size, either implementation outperforms the other.

Finally for bonnie++ we see some interesting results. Performance is almost identical for all implementations.
One notable exception was seen with gvinum (64k stripe size), which clearly outperformed its competitors.
One must keep in mind that the first six tests performed by bonnie++ (random delete/read/create, sequential delete/read/create) are limited by the I/O performance of both the system bus and the device itself. The hardware I used for testing was capable of about 160 – 170 I/Os per second. I admit that results could be different if the tests were re-run on decent hardware with higher I/O throughput. It's possible that modern hardware would reveal an I/O barrier for abstracted devices which cannot be seen from my tests.

Personally, I prefer gstripe over gvinum because of its simpler configuration approach. In terms of performance, however, gvinum seems to offer superior results when it comes to disk striping.

The next article will discuss gvinum and gstripe for RAID10.

FreeBSD’s loader fails with wrong hard disk geometry in BIOS

Posted by: admin  :  Category: FreeBSD

It's been a while since I last saw issues with FreeBSD's loader(8).

The error I came along today read like this:

can't load kernel

Read more…

No "sleep" command for batch files? Make it a choice!

Posted by: admin  :  Category: DOS, Scripting, Windows

I just caught myself out while hacking up a batch file.
Being used to shell scripting, I wanted to add a delay to the batch using “sleep”.

D'oh! Bad idea! Bad command or filename. Smash your head here to continue {(x)}!
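The workaround the title hints at: abuse CHOICE with a default answer and a timeout as a poor man's sleep. With the classic DOS 6 syntax that would read roughly as follows (newer Windows versions ship a choice.exe that takes /T 5 /D y instead):

CHOICE /C:y /N /T:y,5 > NUL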
Read more…