
More On Net Neutrality

Another great opinion on Net Neutrality that closely (if not exactly) mirrors my own. For those too lazy to go read it for themselves, here's a quick snippet.

We need policy to help cut a path for more competition, rather than protecting incumbents -- a Bandwidth Competition Act of 2008, not bogus net neutrality. All takers should be allowed access to poles or underground conduits. This is where neutrality should be enforced, instead of being a choke point.

As I've long said, a government bureaucracy isn't going to solve the problem. It's going to create less incentive for Internet companies (like mine, full disclosure) to even toss their hat in the ring. Try forming your own telephone system and you'll know what I mean. The rules are ridiculously complicated and it takes an army of lawyers to sort through them. Please please please don't turn the Internet into the phone system.


Chown

Sometime last week I happened upon a handy little shortcut with the chown command. I keyed the command in wrong and it worked anyway, so I investigated. It turns out that by leaving off the group name but keeping the colon, chown will automatically use the default group of the specified user. That's so handy. What's surprising is how much I really use that trick. Why, I must save literally seconds every other day or so. That's gonna add up, baby.

Here's an example for you impatient, graphical learners:

chown tensai: file.txt
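
To see it in action without needing root, here's a quick sketch using a throwaway file owned by the current user (no special usernames assumed; `id` supplies whatever names apply on your box):

```shell
# Create a scratch file, then use the bare-colon form of chown to set
# its group to the owning user's default (login) group.
tmpfile=$(mktemp)
chown "$(id -un):" "$tmpfile"   # same effect as: chown "$(id -un):$(id -gn)"
stat -c '%G' "$tmpfile"         # group now matches `id -gn`
rm -f "$tmpfile"
```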


Grudge Match: scp, tar+ssh, rsync+ssh

The question came up today about relative speeds of scp, tar and rsync (the latter two using ssh as a transport mechanism). While anecdotes and rumors are great for defining security policy (think TSA), I wanted some more concrete numbers so I ran a test.

I set up a script to copy a directory 5 times from my laptop to a server on the same subnet. I routinely pull 3MB/s from that server (over wifi), so bandwidth wasn't an issue. I used /var/lib/dpkg as my source directory; it weighed in at 57MB and contained 6896 files. Because rsync will compare changes between source and destination, I made sure to nuke the directory off the server after every run.

Method:            scp  rsync+ssh   tar+ssh
Average Time:  269.75s      33.6s    24.43s
Bandwidth (mbps): 1.69      13.57     18.66

The results are what I expected, at least as far as scp is concerned. It does not do well with large numbers of small files. It copied each file over completely before it started with the next one. Tar of course put the whole thing together and then shipped it off. Rsync read all the files first, then compared them to the server and then shipped them all in one go. Apparently there were some significant I/O savings to be had that way.

One other important item of note is that scp did not handle symlinks the way tar and rsync did. It dereferenced each symlink and copied the contents of the target rather than the link itself. That was a problem because I had picked some self-referential directories before I settled on /var/lib/dpkg.

For your reference, here are the commands I ran to test:

for i in 1 2 3 4 5; do time scp -qrp /var/lib/dpkg [server]:/tmp; ssh [server] rm -fr /tmp/dpkg; done
for i in 1 2 3 4 5; do time rsync -ae ssh /var/lib/dpkg [server]:/tmp; ssh [server] rm -fr /tmp/dpkg; done
for i in 1 2 3 4 5; do time tar -cf - /var/lib/dpkg |ssh [server] tar -C /tmp -xf - ; ssh [server] rm -fr /tmp/dpkg; done


Hibernate Ubuntu Edgy

I took the plunge yesterday and upgraded my laptop from Kubuntu Dapper to Edgy. For the most part I like it. Evolution is snappier, Firefox 2 is awesome, Amarok 1.4.3 works almost perfectly with my mp3 player. One thing I lost, though, was the ability to hibernate my laptop. I did gain back the capability to suspend, which I'm sure I'll use because it's a lot quicker. But when I leave work at the end of the night I prefer to hibernate, because who knows how long the system might sit in my bag.

But I got it working. Here's what I did. Now understand that this is just based on a few things I pulled together so that it Just Works(tm), but it may not be the Right Way(tm).

MegaRAID Nagios Script

Last year we bought some Dell PowerEdge 2850 servers with a PERC 4e/DC RAID controller. It's based on the LSI MegaRAID chipset, which we really liked. It's fast, which is great, although so far it hasn't been entirely stable, which is greatly annoying. To that end, I was tasked with getting a Nagios script in place to monitor the array and alert us if it fails (again!).

With the server came a disk with some utilities. One of those is MegaServ, with its corresponding MegaCtrl. It seems like a good idea, but the blasted thing doesn't work in any sane manner. It generates alerts for things like what percentage a rebuild has reached and when the battery starts charging. It can get bad. Worse still, it eventually stopped sending any alerts at all.

But today I found another utility from Dell. It's an extension to snmpd, named percsnmp, that polls a daemon for the current status of the controllers. It's great and full of good info. For now I'm just looking at the online state, but given all the other fields I may find further uses for it. I'm especially interested in the settable rebuildRateInPercent field, since the rebuild rate can't be set through megamgr (a copy of the BIOS-level tool).
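
To give a flavor of where the Nagios side is headed, here's a sketch: take whatever state string the poll returns and map it onto Nagios exit codes. The state strings are illustrative, and the snmpget line in the comment is an assumption with a placeholder OID, not the real percsnmp one (look it up in the MIB that ships with the package):

```shell
# Sketch of a Nagios check for the array state. The state-to-exit-code
# mapping follows the Nagios convention: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
raid_status() {
    case "$1" in
        online)   echo "OK: array online";        return 0 ;;
        degraded) echo "WARNING: array degraded"; return 1 ;;
        failed)   echo "CRITICAL: array failed";  return 2 ;;
        *)        echo "UNKNOWN: state '$1'";     return 3 ;;
    esac
}

# In the real check the state would come from percsnmp via snmpget.
# The OID below is a made-up placeholder -- substitute the real one:
#   state=$(snmpget -v1 -c public -Ovq localhost .1.3.6.1.4.1.XXXX)
#   raid_status "$state"; exit $?
```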

Printing Multiple Pages

I'm working on a publication for my wife. I can't say more because it's really hush-hush (nuclear secrets, you know). We decided to print it on 5.5"x8.5" paper, which the astute among you may recognize as a half-sheet of US Letter size paper. We just didn't need a larger size (atoms are pretty small). The hard part was getting OpenOffice.org to double up the pages.

My first try was to change the page to Landscape and create two columns. That worked, but the page numbers got messed up and I couldn't find an analogous "column number" field. That's a problem because we use automatic tables of contents and indexes extensively.
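
Another route I've seen for this kind of 2-up job, which sidesteps the column trick (and its page-numbering problem) entirely: export or print the document to PostScript at the 5.5"x8.5" page size and let psnup, from the psutils package, do the imposition. The file names here are made up for illustration:

```shell
# Place two 5.5"x8.5" pages side by side on each US Letter sheet.
# booklet.ps (input) and printable.ps (output) are illustrative names.
psnup -2 -pletter booklet.ps printable.ps
```

Since the doubling-up happens after layout, the automatic tables of contents and indexes keep their real page numbers.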

Save tcpdump to file and print to stdout

Today I found myself needing to save packets from tcpdump to a file but also view them on screen. I've wanted to do that in the past, but today it became more important. It was suggested to me to use two instances of tcpdump, but I thought there had to be a better way. Luckily there is: -U packet-buffers the pcap stream written with -w, tee splits that stream into the file and down the pipe, and the second tcpdump decodes it live (-l line-buffers the printout, -n skips DNS lookups).

# tcpdump -U -s 1500 -w - <bpf> |tee <file> | tcpdump -lnr -

Asterisk: The Future of Telephony

Title: Asterisk: The Future of Telephony
Author: Jim Van Meggelen, Jared Smith & Leif Madsen
Published: 2005 by O'Reilly
ISBN: 0-596-00962-3

I've been playing with Asterisk for about a year, and I've been interested in it for twice that, pretty much since I started working with proprietary PBX systems. First was a Nortel, and now a Vodavi and an NEC. I can't overstate the simplicity of having a computer-controlled (especially a Linux-based) phone system.


Manual route override in Exim 3

Ever heard of publicaster.com? Yeah, me neither. Apparently one of our customers has, as he attempted to send an email to them last week. Unfortunately, it got stuck in the queue because of a combination of a weird configuration in publicaster.com's DNS and the way Exim 3 handles it.
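
For reference, the Exim 3 mechanism for this sort of override is a domainlist router with a hard-coded route_list, placed ahead of the normal DNS-based router. This is a sketch from memory rather than a tested config (check it against the Exim 3 specification), and mail.example.com is a placeholder for whichever host actually accepts the mail:

```
# In the routers section of exim.conf, before the usual lookuphost router:
publicaster_override:
  driver = domainlist
  transport = remote_smtp
  route_list = "publicaster.com  mail.example.com  byname"
```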

make_server_info_info3: pdb_init_sam failed!

I converted a few Samba 3.0 servers from doing local authentication to using our primary domain controller. I'm not quite sure why, but I thought it would be fun. It seemed to work but every time I would try to connect I would get this error:

[2005/05/16 14:11:15, 0] auth/auth_util.c:make_server_info_info3(1134)
make_server_info_info3: pdb_init_sam failed!

Took me a while to find the right answer on Google, so I figured I'd preserve it here for posterity's sake. This post had the correct answer: I was using a username map to map my name on the domain controller to the local *nix user name, which happened to be different. Apparently that's the message that means "unknown local user".
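
For anyone hitting the same error, the moving parts look roughly like this. The account and domain names below are illustrative, and the key point is that the Unix name on the left-hand side of the map must actually exist locally:

```
# /etc/samba/smb.conf
[global]
    security = domain
    workgroup = EXAMPLE
    # Map domain account names onto local Unix users:
    username map = /etc/samba/smbusers

# /etc/samba/smbusers -- each line is "unixname = domain name(s)", e.g.:
#   tensai = EXAMPLE\dthompson
```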
