pv_sapl's blog

as-Kiosk'ing the Browser (Firefox)

Few options exist when it comes to forcing a particular browser interface down the throats of patrons. In my case, I just wanted a minimal set of tools, with the least amount of hassle, when browsing the library catalogue.

Enter kiosk mode. Simply put "-k" in either the IE or Firefox shortcut's Target box, after the closing quotation mark of the executable path, and - shebang! The whole monitor gets covered in pure HTML goodness.
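
In practice, the shortcut's Target line ends up looking something like this (the program paths and catalogue URL here are placeholders, not our real ones):

"C:\Program Files\Internet Explorer\iexplore.exe" -k http://catalogue.example.org/
"C:\Program Files\Mozilla Firefox\firefox.exe" -k http://catalogue.example.org/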

There is a downside. This unfortunately leaves the user with no UI elements at all - no back button, print button, search box, menu bar or address bar. Well, removing those elements is "A Good Thing" (tm). I knew the catalogue had provisions enough for the user to navigate around. Too bad my superiors preferred something with a few more options - the button bar, to be exact.

Linux, Windows & OpenBSD; Oh My! (part 1)

Wanting to exploit my latest workstation hardware's fullest potential and commit to some research at the same time, I've grown my desktop from its humble beginnings as a Socket A 750MHz, then an Athlon XP 2500+ Barton, then an Athlon64 3000+, to a full-fledged dual-core 3800+ complete with a shiny 250 gig hard drive and two gigabytes of RAM.

For what purpose? Why, to run multiple operating systems in virtual machines, of course! Simply put, there can be no better cross-platform web development experience than co-operatively running Windows, Linux, and OpenBSD at the same time (although I'm not too sure about running X.Org in an OpenBSD VM on top of a host OS, but I digress).

My first task with this beast of a system was to find out which Linux distribution worked best with VMware. Sadly, I was unaware of other excellent virtualization products that might accomplish the same thing (more on that later). Of course the host OS needed to run in 64-bit mode (damn Macromedia and their lack of 64-bit Flash support), and of course it needed to be SMP-ready. A pure 64-bit Linux environment for VMware is, actually, not possible: apparently VMware requires 32-bit libraries on your system for certain things, and without them you cannot complete the VMware installation. That left my favourite KDE distro, Kanotix, out of the picture. At the time I could not figure out how to install the 32-bit libraries VMware required on Kanotix, so I left that distribution for Ubuntu, which was recommended as a host OS on the VMware website.

I used VMware and Ubuntu for a while, but I felt that the GNOME interface was a touch heavy. I snooped around a bit and discovered XFCE, a lightweight desktop environment with some pretty good features, and I'm sticking with it to this very day.

However, one of the biggest flaws in XFCE is that there is no easy way to add printers. After searching the Internet for what seemed like far too long for such a simple task, I finally came across the answer and wanted to post it here:

sudo adduser cupsys shadow          # let the CUPS user read /etc/shadow so it can authenticate you
sudo /etc/init.d/cupsys restart     # restart CUPS so the change takes effect

That's it. Those two lines above. After typing them, load Mozilla and point it to localhost:631 and follow the prompts. When it asks you for a user name and password, simply provide your login information as if you were going to use sudo.

Anyways, part one is done, mainly because the 2.6.17.4 smp-k8 kernel I was compiling has finished. I want to install the kernel and boot it so I can download the 1.0 release of VMware Server. Enjoy the rest of your day, peepz!

Adventures in Digital TV Recording

Okay. I'm tired of my VCR. Granted, a good HiFi stereo one is only $80cdn, but the quality is 20 years old and the time has come to use something else. I could go out and purchase the likes of this, but those devices ballpark around $220cdn, more if you want a hard drive included. I can't afford that. I have too much stuff to pay off first.

But I can afford $30 for a software-based tuner. The Asus TV FM Tuner card was on sale locally for $30cdn. Now that I can afford. I've also got a Celeron 1.1GHz with 128k of L2 cache sitting on a Slot 1 adapter that I'm not using; add that to the good ole classic Abit BH6 and a Rage 128 Magnum (TV out) video card from ATi. Put all that in a case, with a hard drive given to me by a friend, and I have a unique opportunity to test out building a video recorder on a PC.

Well, getting this going wasn't all roses 'n cherries. First, the machine didn't boot at all. Ha-ha on me: the power switch wasn't connected to the panel array on the mainboard. Then there was the not-so-minor issue of getting the Celeron 1.1GHz to play nice on the BH6. It works, but you have to work at it: being a Celeron, the BIOS wants to put the FSB at 66MHz, not 100, so you have to coax the BIOS into learning that the new speed is actually OK for the CPU (persistence here is the key).

Once I had the hardware set up so that the components liked each other, the next step was operating system software. Two options exist, really: KnoppMyth or Microsoft Windows. To start with, I chose KnoppMyth.

KnoppMyth is a Linux distribution based on Knoppix but customized for MythTV. Installation is pretty slick, although it is worth mentioning that I highly recommend you let the installer handle partitioning the hard drive; doing it by hand with the given interface is difficult to do properly. The automatic partitioning scheme is quite sensible: as I found out later, there is a boot partition, an O/S partition and a data partition, all handled nicely by the install and its subsequent scripts, so it makes no sense to do much else.

All complete, remove the CD and reboot. The next part of the process loads Linux and begins the painstaking part of prompting you for various Myth-specific decisions that I will not get into. Suffice it to say that the only important part of this process is the channel grid information from DataDirect, which is pretty much detailed here.

Unfortunately I could not get it to work. At the time all I could get was "1: Television". No sound, no nothing. Busted. I couldn't even tell whether the TV card was a dud or not. On to trying Windows...

Although my Windows experience was OK, I chose W2K as the OS, since the machine is older, slightly less powerful, and has less memory than the average workstation today. To make a long story short: everything works. I did change to a different sound card, though, because I could not find a mixer control that would let me enable the internal aux-in jack on the original card. Why? So I could not only see the TV programme, but hear it as well. Guess what? I heard the TV programme with the new card, right up until I installed the real sound card drivers, which didn't have the mixer control option for that input, and then I had no sound at all. Go figure. So I re-installed Windows (because when I reg-edited all the mixer control settings I broke mixer control entirely), went to record a TV programme, and got no sound. Looks to me like the audio from the TV needs a mixer record option or it won't record the sound. Unless I give up my CD input, because that has a mixer control for both playback and recording.

Given some time to think on it, I wonder if I needed to run some kind of TV tuning script in KnoppMyth before the channels would come out, or at least change the channel to channel two. Anyway, since KnoppMyth is miles better than the software that came bundled with the card for Windows, I intend to spend the time to get KnoppMyth working for my setup, even if I have to give up straight audio CDs to do it; after all, most of the music I play back these days is MP3.

Author's note: I'm going to find out if the Asus TV FM multimedia PCI card, based on the Philips 7135 chip, is supported on OpenBSD, and if so, give that OS a try.

Anatomy of a Hack

Microsoft put out a good article this year about how an intruder might get into your network. I stumbled upon this gem whilst looking for something else, so I haven't quite read it all yet, but you can, right here.

One of the things he mentions, though, might be an oversimplification:

ICMP traffic should be sent to /dev/null at the border. Even a half decent firewall should block ICMP, but it is surprising how often administrators forget to ensure that it is actually disabled. No response should even be sent.

Interestingly enough, not one but two papers agree that indiscriminate blocking of ICMP can invariably lead to trouble. The RFC summarized it best, I think:

ICMP messages are commonly blocked at firewalls because of a perception that they are a source of security vulnerabilities. This often creates "black holes" for Path MTU Discovery [3], causing legitimate application traffic to be delayed or completely blocked when talking to systems connected via links with small MTUs.

As for the firewalls I've set up, I leave ICMP alone. There have been too many instances where I've been thankful to be able to ping the firewall's interfaces from all attached networks.
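
On OpenBSD's pf, for instance, the gist of what I do looks something like this (a minimal sketch; the interface name is an assumption):

ext_if="fxp0"
# pass ICMP instead of blackholing it; the unreach messages are what
# Path MTU Discovery depends on
pass in on $ext_if inet proto icmp all icmp-type { echoreq, unreach } keep state
pass out on $ext_if inet proto icmp all keep state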

IPAC / HIP (In)Security

Yet another flaw has been found in IPAC (if you use 2.xx) or HIP (if you use 3.xx), this one sent to me by my Dept. Head, who hangs out on the Horizon-L mailing list.

The core of the message goes like this:

My security officer notified me of the following JBOSS vulnerability and his
investigation. I have notified Dynix but so far have had no reply. We have
port 8083 blocked to the world but it is open internally and cannot easily
be blocked. How are others coping with this? Has anyone implemented the
suggested fix?

Oh yeah, in other news, did you know that Sirsi and Dynix have merged? All I can say is I hope they pool their resources and come out with better, more complete and secure products. Did somebody hack their website or something? At the time of this writing, all I get is a generic Apache webpage. Anyways, I digress...

Well, I suppose it really isn't SirsiDynix's fault; I mean, they took advantage of the open-source software JBoss. A simple update to JBoss, assuming one has been released already, and all is well. The real question is when Dynix is going to get around to offering an update, and if they do, whether they will push the update to existing customers.

Want to know what I think? Tough - I'm going to tell you anyway: I think Dynix will not push out the fix, and will not notify existing customers, because it costs too much time and money. You'll only get the fixes when you upgrade to the latest and greatest of the existing products.

I'm trying to be positive, I'm trying to be positive [fade into background]. I mean, I too have found a security-related hole in HIP that has been around since the early days of IPAC. If you have a Dynix customer support login, see here. It covers some, but not all, of what you need to know about combating that problem. Hell, I even offered Dynix a method of fixing this issue - but they didn't use it. Instead they released that %&*#&@ incomplete LogExpress. And I will not hesitate to say that I am mad as hell that they didn't fix the problem. By the same token, however, HIP will soon no longer be using Interbase/Firebird to run HIP's administrative database - and once Interbase/Firebird is gone, so goes this problem. Until then, what are you to do?

My solution to the problem? Stick a bridging packet filter in front of your HIP/IPAC server. I used OpenBSD for that purpose, and it has worked nicely ever since. Only allow through the ports the world needs to see: the true port to your catalogue, and nothing else.
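
For the curious, a hedged sketch of that setup on OpenBSD (the interface names and catalogue port here are assumptions; adjust to taste):

# /etc/bridgename.bridge0 - join the two NICs into an IP-less filtering bridge
add fxp0
add fxp1
up

# /etc/pf.conf - default deny on the outside leg, then let the world
# reach only the catalogue's port (assumed to be 80 here)
block in on fxp0 all
pass in on fxp0 proto tcp from any to any port 80 keep state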

Two years before our upgrade to Horizon early this year, I put into place an Intel Pentium 166MHz box with 64 megs of EDO RAM, two decent Intel NICs and a 512 meg hard drive for this purpose, and that unit still sits in front of our now static Windows 2000 Server webserver, working hard at protecting open Microsoft ports from malicious intent. Hehe, sometimes old hardware just never dies.

Need a site search?

Why bother with complicated software when you can create an HTML form/button that has Google do it for you? Simply change the hidden input values to your website's domain (not host) and instantly you have a your-site-only Google search.

<form method="get" action="http://www.google.com/search">
<input type="text" name="q" maxlength="255" value="">
<input type="hidden" name="domains" value="your.domain.here">
<input type="hidden" name="sitesearch" value="your.domain.here">
<input type="submit" name="btnG" value="Search">
</form>

Want a Google Email Account?

I've got 47 more Gmail invites to give away. Just email whyzzi - at - gmail.com (take out the spaces and the "-" signs) for a chance at yours today!

On a side note: if you are looking for a gigabyte's worth of storage and already have a Yahoo mail account, don't jump to Gmail just yet. There was an announcement recently that Yahoo is due to increase its mailbox size to that amount around mid-April. Don't believe me? Start here.

M$ DFS: The Love - Hate relationship

Distributed File System is not Micro$oft's idea of a joke. In reality, it is their attempt to - oh, how did they put it - "help simplify access to files and folders, system maintenance, help enhance availability and performance, and help lower total cost of ownership (TCO)". Curiously enough, let me break this down into the sum of its parts:

  • help simplify access to files and folders
  • system maintenance
  • help enhance availability and performance
  • help lower total cost of ownership

Help simplify access to files and folders

This is true. By unifying different servers and their shares under one share, DFS just makes things easier. Rather than spew a whole bunch of Micro$oft propaganda, just watch this flash video.
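
For example (the server and share names here are invented), instead of staff having to remember which box holds what:

Before DFS:  \\server1\policies   and   \\server2\reports
After DFS:   \\example.local\public\policies   and   \\example.local\public\reports

One domain-based root, and the servers behind it can come and go.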

System Maintenance

This could be true. I don't have DFS working for this purpose, so I don't really know (and honestly, don't really care).

Help enhance availability and performance

I would say most of this is true. By using the File Replication Service in Windows Server 2003, availability is increased by having document data automagically replicated between different file servers. Performance is increased in the sense that the downtime of a replicated share, for the end user, is (or at least should be) no time at all. Network performance, on the other hand, is decreased, because of the necessity of replication: bandwidth gets chewed up between the participating servers every time a change is made.

Help lower total cost of ownership

This is pure BUNK. Should DFS - actually the underlying service, FRS - fail system-wide, you have to rebuild it from scratch. The funny part is, for the event ID I went through, I cannot find a web-based version anywhere on the Micro$oft website describing the recovery process. I did, however, find a copy posted in a forum, and I will post it here for all to read:

The File Replication Service is in an error state. Files will not replicate
to or from one or all of the replica sets on this computer until the
following recovery steps are performed:

Recovery Steps:

[1] The error state may clear itself if you stop and restart the FRS
service. This can be done by performing the following in a command window:

net stop ntfrs
net start ntfrs

If this fails to clear up the problem then proceed as follows.

[2] For Active Directory Domain Controllers that DO NOT host any DFS
alternates or other replica sets with replication enabled:

If there is at least one other Domain Controller in this domain then restore
the "system state" of this DC from backup (using ntbackup or other
backup-restore utility) and make it non-authoritative.

If there are NO other Domain Controllers in this domain then restore the
"system state" of this DC from backup (using ntbackup or other backup-restore
utility) and choose the Advanced option which marks the sysvols as primary.

If there are other Domain Controllers in this domain but ALL of them have
this event log message then restore one of them as primary (data files from
primary will replicate everywhere) and the others as non-authoritative.

[3] For Active Directory Domain Controllers that host DFS alternates or
other replica sets with replication enabled:

(3-a) If the Dfs alternates on this DC do not have any other replication
partners then copy the data under that Dfs share to a safe location.
(3-b) If this server is the only Active Directory Domain Controller for
this domain then, before going to (3-c), make sure this server does not have
any inbound or outbound connections to other servers that were formerly
Domain Controllers for this domain but are now off the net (and will never be
coming back online) or have been fresh installed without being demoted. To
delete connections use the Sites and Services snapin and look for
Sites->NAME_OF_SITE->Servers->NAME_OF_SERVER->NTDS Settings->CONNECTIONS.
(3-c) Restore the "system state" of this DC from backup (using ntbackup or
other backup-restore utility) and make it non-authoritative.
(3-d) Copy the data from step (3-a) above to the original location after
the sysvol share is published.

[4] For other Windows servers:

(4-a) If any of the DFS alternates or other replica sets hosted by this
server do not have any other replication partners then copy the data under
its share or replica tree root to a safe location.
(4-b) net stop ntfrs
(4-c) rd /s /q c:\windows\ntfrs\jet
(4-d) net start ntfrs
(4-e) Copy the data from step (4-a) above to the original location after
the service has initialized (5 minutes is a safe waiting time).

Note: If this error message is in the eventlog of all the members of a
particular replica set then perform steps (4-a) and (4-e) above on only one
of the members.

What this is basically saying is that you need to move your data out of the existing share directory, because the directory will be emptied if you choose to replicate that share again!

This is what happened to me. I mean, who has time for this nonsense when your users need access to that data NOW? Luckily, Server 2003 moves all that data into a "pre-existing" directory within the shared folder, so all it takes is a copy back into the share and all is well.

What I don't understand is why they couldn't build in some kind of crash-recovery feature - like a file merge similar to rsync. Is that really so much to ask?

Active Directory, Take Three thhbbb's

Well, the move to Active Directory is long since done. Once I got the memory in, the migration was all pretty straightforward: install Windows Server 2003, follow the wizards to join it as a directory server in Active Directory, transfer the FSMO roles, retire the temporary server, etc., etc. In reality, after the migration everything worked as before. Some of the services I had on OpenBSD were transferred to MS - like dynamic DHCP and DNS.

Somewhere in the mix was a call to purchase a new file server. And then they let me configure it (hehehe). OK, so the hardware I got was overkill: dual Opteron 246s, 2 gigs of RAM, two 36 gig 15k RPM SCSI hard drives in a mirror config via MS, a dual layer DVD burner, and a Seagate DAT72 backup tape drive. Slipped all that hardware into a cool triple-power-supplied 3U rack box for the light price of around $6,500cdn. For the software, a second copy of Windows Server 2003 with 75 CALs needed to be added - but with MS Volume License educational pricing, it came in for less than you might think. Of course I got the system custom built; it is near impossible to find a "Tier One" like HP, Compaq, or IBM selling something near that config - let alone with a dual layer DVD burner!

So, originally I was going to retire the old Pentium 3 600 file server, but once I got into researching some of the new features of Server 2003, I formed a different plan. I discovered the Distributed File System, or DFS. DFS is an awesome idea: a central place to organize and publish shares from. This way, you share from the domain and not the server. And not only that, you can set up mirroring between two distinct shares on two or more different machines. Instant on-the-fly backups! If one machine goes down - hard disk failure, power supply, mainboard - your staff and their workflow are not interrupted at all. Unless, of course, the other machine(s) go down too, but that is a whole different category. How can there be a downside, you ask? Oh, there is, believe me. DFS is not all sweet and juicy cherries. I'll cover what I learned in my next journal entry.

Oh, and our migration from Dynix to Horizon is long since past. I have some beefs about it too, but I figure I'm quite lucky because I don't have to deal with it every day. One surprising note is that Dynix has moved the HIP backend database from Interbase 6 to Firebird - at least, that is what they installed for our HIP installation. For this, Dynix, I applaud you.

Am I somewhat sorry for putting the library through this pain without actual training? To be honest, I highly doubt that I would have done much better even with proper training. Now, <knock on much wood> as long as this Acer motherboard lasts, everything should be just dandy! I've ordered more memory for it, and am going to fill up all the banks in the box for a whopping 1GB. I guess that's not so whopping anymore, eh?

Active Directory Migration ... Take Two

Take 2 <insert your favourite mind-numbing, brain-hallucinating, aphrodisiac or other drug(s) here> and don't call me until next month, when I've finally got this stupid thing going...

It really is that serious. In "take 1" I talked about using ADMT 2.0 to upgrade our library's infrastructure to Windows Server 2003 and Active Directory. But what I didn't describe last time was what happened to my domain afterward, and why I chose a different method to migrate.

So I had Server 2003 running, along with my ghosted image of NT4 on a temporary server. I downloaded and installed ADMT 2.0, but what was the proper order to upgrade? I started out with this Microsoft Knowledge Base article. But that wasn't quite enough, so I found and followed this article on TechRepublic, up to a point. The stumbling block I hit was how to set up the trust relationship between the two domains. The article didn't tell me how that was done, so I had to dig deeper. After some time I finally came across this.

Now that the trust relationship between the two domains was working, I was able to carry the users over via ADMT, no problem. But the computers were not transferring. I had the trust relationship between domains established, the migration user in both domains with administrative rights, and even the migration user listed locally on the test migration XP machines, but the computer information transfer still failed. It wasn't until I added "[Active Directory Domain Name]\Domain Admins" into the local Administrators group of the machine I was attempting to migrate that ADMT would properly transfer the computer into Active Directory. Putting it simply, each XP workstation needed to "trust" the new domain to allow such a transfer, as in the example below.
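
In other words, something like this had to be run on each workstation first (the domain name here is hypothetical):

net localgroup Administrators "NEWDOMAIN\Domain Admins" /add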

However, all this research was unfortunately taking too long. In order to ease the pain for the staff and public, I decided to switch all the clients back to the NT4 domain until I had the migration method practised enough to work without flaws. Ah, the joys of running around to each machine in order to switch the domains around.

Then I started installing all of the new software required (e.g. Backup Exec) and toying with some of Server 2003's new services and features (e.g. Shadow Copy Service and Software Update Services). I realized that the 256 megs of memory installed in our 4-year-old file server was not going to be enough, especially since Backup Exec installed an instance of MSDE. MSDE is really like an MSSQL 2000 "Lite" edition, and a hefty application to run on an already service-laden server.
Realizing that the memory in this server was too little, I set out to purchase another 256 megs of ECC registered SDRAM from Crucial. Guess what? The finicky server rejected it, and rejected the next DIMM Crucial sent too. So, to make a long, boring story short, three weeks later I plugged the equivalent stick from Kingston into the server and it worked immediately.

While waiting for the memory to arrive I had some time to contemplate the workload before me. Running around again to each staff machine to set it up so that ADMT could work its wonders was starting to get on my nerves. In consultation with some friends of mine, they kept asking why I didn't set up a couple of temporary upgrade servers and perform the migration to Server 2003 that way. The more I thought about it, the more I liked the idea of only running around to three machines instead of sixty (OK, 16 of those are Windows 98 OPACs... but we have soo many public XP boxes with DeepFreeze...); this route for the upgrade just made a lot more sense.

Coming Soon to a journal near you: Active Directory, Take Three

Active Directory Migration: Take One ..

To the hospital, that is.
Work like this is never fun to undertake, especially when you are unprepared for such a feat. And that was me, on August 12. You heard right: I've got no formal training in Active Directory. Or Windows NT. Or unix, for that matter. I've turned down training for Server 2003 because the courses that are offered cover things I already know how to do. I know DNS. I know DHCP. OK, well, I admit, I don't know WINS. But I do know permissions. And file/printer sharing. I also have experience with the Windows registry, the Group Policy editor, and MMC. I know what I know from working the front lines, along with perhaps a certain aptitude and patience for dealing with situations like this.
Anyway, I've been itching to move along our library's upgrade to Windows Server 2003. Dealing with the SCSI failures on the file server made me want to quickly remove the interim IDE hard drive. The Adaptec HostRAID drivers installed themselves simply enough. I had to disable the system drives on the old controller first (Windows has this *thing* about copying files to the first hard drive on the first controller it finds, and in this case Windows was finding the failing hard drives first). After that, Windows Server 2003 was reasonably straightforward, even through the reboot and the configuration of Active Directory via the "Configure Your Server" and "Manage Your Server" wizards. I retyped all the groups that existed on the previous domain, retyped all the users, and recreated the login scripts - I even found the right place to store them.
The only problem I had with DNS was with reverse DNS lookups. Their Reverse Lookup Zone wizard does the reversing for you: you give it '192.168.0.x' and it builds the zone name '0.168.192.in-addr.arpa', octets reversed - the same name you have to type out by hand, in reverse, in ISC's BIND.
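
For comparison, here is what that same zone looks like declared by hand in BIND (the zone file name is whatever you choose to call it):

zone "0.168.192.in-addr.arpa" {
        type master;
        file "db.192.168.0";    // PTR records for 192.168.0.x live here
};
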
Once everything was said and done, I (with the help of my boss) ran around to all the client machines and either installed the Active Directory Client for Windows 9x or joined them to the new domain name. I thought, "Hooray, we're done".
There was hell to pay the next day.
Questions like "Why can't I print?" and "Where is my email?" rang out from around the library as I tried to get a grip on what went wrong. I was thinking, "Since all user profiles are local to the machine, why aren't the XP machines using the profiles already on the client?"
The answer, in my humble opinion (if this is all, half, or none of the correct answer, please feel free to flame this journal entry using the comment section below): the greatest strength of Microsoft's domain security structure, its flexibility, is also its greatest weakness when it comes to changing the network/domain structure of the client computers. Those profiles are so tied to the domain security that they cannot be carried over into a new domain - at least, not without help.
The help, as I discovered later, comes in the form of the Active Directory Migration Tool. There are some tricks to getting the tool to work properly, which will be covered in Take Two of this library's network upgrade to Server 2003. To use the migration tool, however, it is imperative that the existing NT domain server be alive and well. So, I took a spare P3 machine, slapped in the IDE HD that had Windows NT installed on it, re-applied NT and Service Pack 6 (had to - the hardware was not the same), and began reading how-tos on migrating to Server 2003. Oh jolly, what fun.
Some saving graces during the 3 days I forced the library to be offline (no printing, no shared files, limited drive C access - Internet was OK though) were:

  1. Library Director away that week
  2. NT already on IDE drive
  3. Had spare same-generation hardware with lots of memory
  4. Original NT CDROM, and service packs
  5. An understanding boss

So I didn't quite go to the hospital. Sure felt like hell during that time though.

Thank goodness for Ghost

With the SCSI problems on the file server lately, something needed to be done, and fast. The caching SCSI controller on the file server kept marking the hard drives as bad. The fix I had worked out lasted only three weeks, and then last week the array went down again. And then the next day, and the next.

Add to that, the dumb Mylex SCSI controller does not have the array BIOS bootable from the adapter card. Nope, I have to load DOS from a floppy and run a special exe to get access to the RAID building functions.

The controller kept deciding that the drives were bad, but not right away. During that last week it only worked one day at a time - and the next day the server was either powered off or every client was seeing strange error messages. We actually lost a document in that mess, and we have one staff member stuck guesstimating the total number of discards and which categories they belong in.

For the interim, I marked the drive good again, but this time used Ghost to clone the 18 gig SCSI drive to a spare (slower) 20 gig IDE. Data transfer successful, and no messing with mount locations or anything; it worked the first time I powered the server back up. I unplugged power to the hot-swap box so that Windows would not try to load from SCSI.

I can't pull the SCSI hard drives out of the box and test them to determine the actual cause, because the file server has the only "hot-swappable" SCSI array I can get my hands on, and we need that server live.

Oh well, our network software upgrade is just around the corner, and if it wasn't for Ghost, there would have been a lot more pressure to get the new drives working. Instead, our network upgrade can proceed on schedule.

IDE RAID: The good, the bad, the ugly

*Sigh* Spent all morning rebuilding the web server's IDE RAID array. And that's IDE as in "Integrated Drive Electronics", not the "Integrated Development Environment" most programmers mean.

Back in the day (we're going nearly 4 years back), it came time to move our web server off a P-II Windows box onto something else. You know, one of those "There is a surplus in our budget, can you think of anything we need?" type questions you might get from a mid-size library department head.

Oh, I was thinking a few things, believe me. Then it was said: "You can only spend $3000". So I had this "Great Idea" about giving our web site a new server.

Best bang for the buck, I configured a workstation-turned-server out of an Athlon 800MHz, 512 megs of PC-133, and four 30 gig IBM DeskStar hard drives set up in a RAID 0+1 config, using the infamous Abit KT7-A RAID mainboard with its HighPoint HPT370 Ultra100 software RAID controller.

Time went on, and a year later I was offered another chance to purchase a proper tape backup drive with a SCSI controller. Had to fight with that a bit: the HighPoint BIOS and the Adaptec BIOS overlapped each other, preventing boot. I updated the Adaptec BIOS and all was well.

IPAC for Dynix grew and grew, so additional hardware needed to be purchased. Needing to keep the existing RAID config, I went with an Athlon XP 2100+ and a gig of DDR RAM, carrying the HighPoint controller on via an Abit KR7-RAID.

That upgrade worked by pure fluke, I think, as there were spots on the web telling horror stories of RAID arrays not carrying over correctly between controllers/mainboards. Fortunately for me, I encountered no problems on the mainboard exchange, and the old webserver hardware went into the making of the new email server (which had been running on a Pentium 90 - the IMAP protocol is not fast on a Pentium 90).

Add another gig of RAM later, and that brings us up to the present day. Last week I got to questioning the RAID array on the web server (after dealing with a RAID failure on the file server that same day), so I resolved to research and install the RAID monitoring software on the web server to find out the actual status of the array, scheduling a consistency check for last Sunday evening.

You know it, or I wouldn't be writing this: the consistency check failed.

Various attempts between yesterday and today to restore the RAID array to working condition failed. But which disk had the problem? The HighPoint 372 RAID controller software isn't programmed to tell you. Or they have it hiding in a really obscure place.

So yesterday I bolted off to the computer hardware store to pick up a couple of extra IDE drives in the hope of fixing this little problem. Since I was only replacing 30 gig hard drives, I grabbed some Western Digital 40 gig drives. Normally I don't recommend buying Western Digital, but these had the 3-year warranty on them plus an 8 meg (rather than the standard 2 meg) hardware cache.

Now the second stage of the RAID rebuilding has begun. And again I'm thinking that, given the way the RAID is set up, I had some luck (considering which devices had the bad blocks on them):

Controller 0, Drive 0   !Bad block - needs replacement!
mirrored w/
Controller 0, Drive 1   *OK*

those two drives were striped with

Controller 1, Drive 0   *OK*
mirrored w/
Controller 1, Drive 1   !Bad block - needs replacement!

You see, even with those two drives having bad blocks, because of the way this RAID is set up, the array can be rebuilt without difficulty: each mirrored pair still has one good drive to rebuild from.

The Good: RAID is good. Even with two problematic drives, a 0+1 config is very recoverable without resorting to backup tape. I think it's a good thing I caught this problem before it really got out of control.

The Bad: I had hoped I was out of the range of the bad batch of DeskStar hard drives. I guess I was wrong. RAID rebuilding on the HighPoint controller could be better too: the rebuild process should work on a per-drive basis rather than a per-array basis. I found myself sitting through 3.5 complete array rebuilds because the HighPoint controller didn't tell me that the second drive had problems, and neither did the third-party diagnostic utility until I ran an advanced scan rather than a quick scan.

The Ugly: Shame on HighPoint for not being able to point out which drive on which controller is actually causing the problem. "Consistency check failed" or "Unable to rebuild" error messages don't tell me enough about how to fix the problem or what needs to be replaced.

Limited to the bad-block scenario only? Perhaps, but it would have saved me a lot of time not having to download a third-party testing utility to get the answer - and then spend more time running an advanced scan to find the bad-block problem on the second hard disk.

Well, the rebuild is almost complete, and I need to make sure Windows 2000 Server reloads as expected. I'll reschedule a consistency check for Sunday.

On a side note: I really should think about getting a third fan in that webserver; those 7200rpm drives run real hot.

Horizon on something other than Sun

Or Microsoft Windows, for that matter.

The official platforms thus far for both Horizon and "Horizon Information Portal" are MS Windows, Solaris (SPARC only, methinks), and Red Hat Enterprise Linux. Horizon can use MS SQL, Sybase or soon (as they proclaim) Oracle. HIP uses Borland's Interbase and Sun's Java plus JBoss.

Now that Horizon can run on the Linux kernel, this opens up a whole new realm of possibilities. Although the enterprising systems librarian would have to get around all the nuances/paths/shared libs/etc., it is quite conceivable to run Horizon on, say, Debian or SuSE. And with Linux binary emulation on BSD, you could try to run it on Free/Net/OpenBSD too.
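
Switching that emulation on is close to a one-liner; this is a sketch from memory, so check your release's documentation:

kldload linux               # FreeBSD: load the Linux binary compatibility module
sysctl kern.emul.linux=1    # OpenBSD: allow Linux binaries to execute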

What Horizon stumbles over is the database software. With only expensive databases to choose from, only two are really geared to run on open-source unix, and neither really supports BSD as an operating system to run on. Searching the Sybase site, one can find references to SunOS (derived from BSD), but usually in the context of migrating over to Solaris.

On the Horizon Information Portal side, things are a little different. Many HIP admins already know that the Information Portal uses the open-sourced version of Interbase 6 for the administrative database, which was placed back into closed source for subsequent releases. What admins might not know is that Firebird, the open-source derived work from Interbase 6, is essentially 98% compatible with Interbase 6 (along with many bug fixes), and that Firebird can easily be found for Linux and FreeBSD. With JBoss available as a .deb or .rpm package, and as a port under FreeBSD, one could quite quickly and easily create a HIP portal server using the JBoss source code from Dynix (under their license, of course) without emulation. And, if necessary, under emulation on NetBSD or OpenBSD.

All this assumes, of course, that the Java code that Dynix wrote is as portable as Java claims to be.

You can turn me upside-down and take all my change, but I think it is safe to say that this reasoning, at least partly, is why Dynix does not offer the Linux version of either their HIP or Horizon for download off their website. ;)

PeeCees

This is the year of the personal computer swap-out. Recently finished 5 computers in Bib; add the one in Adult and the 7 waiting to be Ghost'd. Completed another two as public word processing boxes. And then there are 20 or so Compaqs slowly coming down the line to be browser catalogue terminals only. Those are to be Win98 boxes...

(gets up, presses the 'finish' button on the Windows XP service pack install).

At least I have a private office. I'd go crazy in here without these MP3s.

(Crimps some RJ-45 ends onto some Cat5 cable, plugs one end into the box and the other into the jack 7 on the hub).

What is this? No light? I was very delicate and sure of the crimp work I just completed.

(removes cable from jack 1, moves cable from jack 7 to jack 1. Light comes on)

That's better. I'd already tried another cable earlier today, but I knew that cable was flaky. Bah, guess I'd better get back to system prepping.

I once looked on in wonder, plenty of years back, when some guy told me that he would not touch basic system support any more. "I'm way above that crap," I remember him saying. I didn't understand it then, but you'd better believe I understand it now.

Is setting up vlans really that easy?

We all have to start somewhere. Whether you are a giggling lunatic looking to stop laughing at the slightest little thing, or the village idiot looking for some brains, it all begins with the decision to start.

I was looking forward (in time, mostly) to connecting up to the Alberta SuperNet when it finally gets dropped at our door. What is SuperNet, as it's referenced here in Alberta? From this pdf, then:

In its simplest terms, SuperNet is a high-capacity fibre-optic and wireless network linking government offices, schools, health-care facilities and libraries in more than 400 Alberta communities.

A drop is the pipe that connects our location to the rest of SuperNet. The device that sits at our location, the piece of hardware that begins the journey of moving our library's content over the fibre, is called a SuperNet Edge Device, or SED. This is usually some highly priced Cisco device. The idea is that organizations connecting to SuperNet buy some other basic 802.1Q device, called a Customer Edge Device (CED), also from Cisco.

Now, anyone who has read my journals by now knows that I would have no intention of lowering myself to such a level (of purchasing a Cisco device, that is - being forced into it is another matter entirely). Why should I? I know of an awesome OS that can do the job and more. Plus I have plenty of retiring hardware in the "to be re-assigned" queue. All that was left was the know-how.

So I knew I needed basic connectivity using something called "vlans". But how the heck does it all work together? To find out, I set up two PIIIs side by side, each with 2 NICs, assigned some bogus IP addresses, and installed the latest version of my favourite OS on them. The real key was knowing how to use the "vlan" device. After some digging, I finally came upon this example. Well, a problem post on a very good forum board, actually. But for my simple test, it serves as my example very nicely.

  1. Machine 1:
    • ifconfig vlan0 create
    • ifconfig vlan0 inet 172.16.0.10 netmask 255.255.255.0 vlan 0 vlandev fxp0
  2. Machine 2:
    • ifconfig vlan0 create
    • ifconfig vlan0 inet 172.16.0.20 netmask 255.255.255.0 vlan 0 vlandev fxp0
  3. Then ping "172.16.0.10" or "172.16.0.20", depending on which machine you are on. You should see something like this:

    # ping 172.16.0.10
    PING 172.16.0.10 (172.16.0.10): 56 data bytes
    64 bytes from 172.16.0.10: icmp_seq=0 ttl=255 time=0.390 ms
    64 bytes from 172.16.0.10: icmp_seq=1 ttl=255 time=0.158 ms

Yes. It really is that simple. Scary, eh? For the cost of the OS and the re-use of some old hardware, you could save your library $500+ too.
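
And to make it stick across reboots on OpenBSD, if memory serves, a one-line /etc/hostname.vlan0 in the hostname.if(5) format of the day does the trick (addresses from the example above):

inet 172.16.0.10 255.255.255.0 NONE vlan 0 vlandev fxp0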

Did I write that code?

Found myself today with a wee bit of extra time on my hands. Not that there isn't plenty to do, mind you. After all, I have an MPLS Internet connection to plan for, which means a firewall to upgrade, an email server that needs upgrading, more computers to roll along (which also means old PC re-location), garbage to toss (soo many boxes from the last PC roll-out)... the list goes on, but I'd rather not bore you.

But instead I decided to have a look at some old Visual Basic ASP code I wrote some time last year. Skimming it over, I begin to wonder: Did I write that code? What in the name of heck was I thinking? I can't read this!

"Hold on, there", I tell myself. I suppose that I actually sat down to re-read it thouroughly, it would make some kind of sense. Even the nested For-Next loop that determines if ip address falls within a certain subnet.

Anyway, what the ASP script is supposed to do is decide, via the IP address of the visitor, whether that visitor should be sent directly to a link or sent somewhere else. Say you have a website that is used both inside the library and by patrons at home, and there are licensed databases on that website. This script allows users inside the library to go directly into the vendor's database, while users outside the library are sent to a web page that requests authentication (is that individual a library patron?).
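
The real thing is messier, but a hedged sketch of the check at its core - my reconstruction for illustration, not the original code - might look like this in VBScript/ASP:

' Returns True when ip sits in the given /24 subnet (255.255.255.0),
' i.e. when the first three octets match.
Function InSubnet(ip, subnet)
    Dim ipParts, netParts, i
    ipParts = Split(ip, ".")
    netParts = Split(subnet, ".")
    InSubnet = True
    For i = 0 To 2
        If ipParts(i) <> netParts(i) Then InSubnet = False
    Next
End Function

' e.g. InSubnet(Request.ServerVariables("REMOTE_ADDR"), "10.1.2.0")
' decides whether to hand out the direct vendor URL or the auth page.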

I know most vendors already provide this type of functionality with their database service, but our case specifically involves a multi-library joint venture, with true authentication to discover whether someone is a library member in good standing, and whether or not to grant that patron access to the licensed database.

So, this means that unless the URL is parsed by some kind of script, only one URL is allowed in an HREF tag. In our case it would be the link to the authentication page for our patrons at home. And that means every time library staff clicked on a database from our web site, they would be asked to authenticate. Bummer, dude.

Now that the background on the whole script is over with, I was debating ways of improving it. Not that the script isn't good already; after all, it can parse the correct URL based on network subnets (255.255.255.0). I was thinking of including it in our weblinks database (a project also written in ASP) somehow. But it would need its own separate database table because of the extra URL, plus additional fields to handle hit counts. It wasn't that part that stopped me from enhancing the script this way; it was the thought of re-writing our database links page.

More work than I've got time for right now.

On the bright side I cleaned up the databases page, ditching an unneeded table and removing tonnes of whitespace. Ah, the little things that make us feel good.

If anyone is interested in the source of either of these two mini-projects, just email me at pverhagen-at-sapl.ab.ca and I'd be happy to send it to you.

Burn that Burnin' Software

I have a lot of machines to prepare this year. Our upcoming move to Horizon has us running around like chickens with our heads cut off, no kidding around.

So, in an effort to make things easier for myself (yeah right, eh?), I asked for and received a new DVD burner and Symantec Norton Ghost, along with Nero 6 OEM. I figure, why monkey around with several CDs of an XP image with two office suites and extra software, when all that will fit on one DVD?

As easy as it sounds, yes? Actually yes, unless you decide to take it one step farther.

In my case, I not only wanted to have all of the ghost images on DVD-RW and have it bootable, I wanted the install process to be *almost* input free. This is where a custom boot disc is needed and an extra file needs to be on the DVD, so the standard Ghost bootable image creation is just not quite enough.

This is where Nero comes in. Or would come in, had it actually supported DVD writing on the LG GSA-4081B. Nope: according to this page, only the model below it, the GSA-4040B, is supported for writing in DVD mode. Come on - I thought generic ATAPI support was easy enough to do, and if the product can actually make a data DVD, then why can't the generic ATAPI support make that DVD? But I was wrong, and out of luck for making a completely custom bootable DVD of a SysPrep'd ghost image. Where to turn?

First I tried cdrecord, a burning application commonly used in the unix world to make CDs. A port of the latest cdrecord even exists for Win32 systems and cygwin. It works, but not for burning DVDs. Then I tried ProDVD from the same page, but it only supports burning images up to 1 gigabyte for personal use, unless you get some kind of key (which I guess you had to write the author for). I knew of growisofs, but could not find a suitable win32 port of it (if you know of one, add a comment and let me know - I would be most interested in trying it).

But I do have a copy of the latest Knoppix handy. I figured, maybe it can burn using growisofs. So I grabbed a spare hard drive from my local parts cabinet and hooked it up to my workstation to copy the ghost images from the DVD onto the hard drive. Grabbed a machine I was Win98-prepping for Internet browser terminals, slapped the spare hard drive and DVD burner into it, popped the Knoppix CD into the CDROM drive that was already there and presto! Instant data DVD burning machine via free software.

More work than it was worth? Likely. It took up too much of my valuable time - time spent monkeying with different DVD burning software, and finding out the hard way not to reverse the 80-wire UDMA133 hard drive cable (no wonder one end is colour coded, you dunce! :)

To finish up the long story, the bootable DVDROM created under Knoppix (Debian Linux) via K3b works like a charm. I wonder if I can find a spare P2 machine around here so I can slap Knoppix on it and use it as my emergency burn station? We'll see.
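
For the record, the same burn from a Knoppix shell, without K3b, would look something like this (device path and image name assumed):

growisofs -dvd-compat -Z /dev/dvd=ghost-images.iso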

23-5

"Some people feel uncomfortable using software that is freely available on the Internet, particularly for security critical applications."

From Building Internet Firewalls

Kinda sums up what I've been into for the last few years, doesn't it?

To PIX or not to PIX; that is the question

I'm talking firewalls! Sheesh, some people...

PIX is a type of Cisco firewall. You'd think there'd be something super intelligent in an entry-level firewall from Cisco that costs $1,500. Just to satisfy your curiosity, check out this link.

But nooo - what you're looking at right there is likely a 350MHz P2 box with a customized OS and almost no memory. You can get the general idea from here and here.

So why pay $1,000 to $1,500 of your library's specially budgeted dollars for that retired doorstop? I don't know, you tell me - since I can get this system for under $300cdn. OK, I give, it has no floppy or monitor. But then, why buy new when that retired P2 is doing nothing but collecting dust on your shelf?

All you need to do is pay for a top quality, security-conscious O/S and a couple of decent network cards (Intel NICs are highly recommended), add a little time and voilà: you have yourself a high quality firewall, capable of VPN/IPsec and, let's not forget, redundancy. A minimal ruleset sketch follows.
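
To give you the flavour, here is a minimal two-NIC pf.conf sketch (interface names assumed; a real ruleset would be tighter):

ext_if="fxp0"    # NIC facing the Internet
int_if="fxp1"    # NIC facing the LAN
nat on $ext_if from $int_if:network to any -> ($ext_if)
block in all                                  # default deny inbound
pass out on $ext_if keep state                # statefully allow outbound
pass in on $int_if from $int_if:network to any keep state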

A great comedic spoof has been written and performed in honour of OpenBSD's next release, 3.5. I encourage all fans of the great Monty Python troupe to visit this web page for a hilarious take on how and why redundancy was built into OpenBSD.

Yes, you caught me. I'm done plugging OpenBSD (for now, anyway :). A little evangelism never hurt anybody, right?
