GoDaddy Cert, Safari and the BigIP
Ran into an issue after our deploy with Safari not believing our secure cert for Gocomics.com was valid. It took some searching, but I ran across a post on F5 DevCentral which showed how to go to the Verisign or GoDaddy repository, download their intermediate certificate and add it to your chain. The steps are, essentially, import the certificate:
1. Log in to the Configuration utility.
2. Click Local Traffic.
3. Click SSL Certificates.
4. Click Import.
5. Select Certificate from the Import Type menu.
6. Click the Create New option.
7. Type intermediate for the Certificate Name.
8. Click Browse and navigate to select the intermediate certificate or chain certificate to import.
9. Click Open.
10. Click Import.
and then add it to the Client SSL profile:
1. Log in to the Configuration utility.
2. Click Local Traffic.
3. Click Profiles.
4. Select Client from the SSL menu.
5. Select the Client SSL profile to configure.
6. Select Advanced from the Configuration menu.
7. Select intermediate from the Chain menu.
8. Click Update.
Cake.. once you figure it out.
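Once the chain is imported and attached, a quick way to sanity-check that the intermediate is actually being served (just a sketch, using our hostname as the example):
# print every certificate the BigIP presents, including the intermediate
openssl s_client -connect www.gocomics.com:443 -showcerts < /dev/null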
BigIP pwn3d
It took a whole lot of looking, but I finally figured it out: SSL pass-through on the BigIP LTM.
When passing a connection through from HTTPS to HTTP, the Virtual Server needs to have SSL Profile (Client) pointed at an SSL profile you created using your certs, HTTP Profile set to "http", Port Translation set to Enabled and then (the final thing that was kicking my ass) SNAT Pool set to Auto Map.
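To confirm it end to end, a quick smoke test from outside the BigIP (the hostname here is just a stand-in for your virtual server address):
# -v shows the TLS handshake against the VIP; -k skips CA validation for a quick check
curl -vk https://www.example.com/ -o /dev/null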
I have now defeated thee.
+500 exp
Not so friendly stable Ruby
Went to install Ruby on a RHEL4 machine and wanted to get the latest and greatest, because the 1.8.1 packaged for RHEL4 wouldn't cut it. So I went out to the Ruby site and grabbed the package marked latest stable release. I believe this ended up untarring to be 1.8.8.1, which I then installed. Upon trying to install gems, it wouldn't go because RubyGems doesn't like that version number of Ruby. I had to back down and get an older version of Ruby, and everything works now.
I don't spend my day doing Ruby or Rails and don't follow the release cycle closely. When you put out a release and tag it "stable", I think it would be good practice to make sure it at least works with the package manager for the product.
How To Create Linux LVM In 3 Minutes
In this 3-minute Linux LVM guide, let's assume that LVM is not currently configured or in use. That said, this is the tutorial for you if you're setting up LVM from the ground up on a production HP server, with a new partition allocated on your RAID controller.
How to set up Linux LVM in 3 minutes at the command line?
- Log in with the root user ID and avoid using sudo, for simplicity's sake.
- Use the whole new partition as an LVM partition:
fdisk /dev/cciss/c0d1
- At the Linux fdisk command prompt:
- press n to create a new disk partition,
- press p to create a primary disk partition,
- press 1 to denote it as the 1st disk partition,
- press ENTER twice to accept the default first and last cylinder, converting the whole disk to a single disk partition,
- press t (fdisk will automatically select the only partition, partition 1) to change the default Linux partition type (0x83) to the LVM partition type (0x8e),
- press L to list all the currently supported partition types,
- type 8e (as per the L listing) to change partition 1 to the Linux LVM partition type,
- press p to display the disk partition setup; note that on this cciss controller the first partition is denoted as /dev/cciss/c0d1p1,
- press w to write the partition table and exit fdisk upon completion.
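Before moving on, you can sanity-check the new partition table (same device as above):
# confirm the partition now shows type 8e (Linux LVM)
fdisk -l /dev/cciss/c0d1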
- Next, this LVM command will create an LVM physical volume (PV) on a regular hard disk or partition:
pvcreate /dev/cciss/c0d1p1
- Now, another LVM command to create an LVM volume group (VG) called VolGroup01:
vgcreate VolGroup01 /dev/cciss/c0d1p1
- Create a 400MB logical volume (LV) called LogVol3 on volume group VolGroup01:
lvcreate --size 400M --name LogVol3 VolGroup01
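The guide stops at lvcreate. To actually use the volume you would typically put a filesystem on it and mount it; this bit is my addition rather than part of the original 3-minute guide, and the mount point is arbitrary:
# create an ext3 filesystem on the new logical volume and mount it
mkfs.ext3 /dev/VolGroup01/LogVol3
mkdir -p /mnt/logvol3
mount /dev/VolGroup01/LogVol3 /mnt/logvol3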
Some useful LVM commands for reference:
vgdisplay vg0
– Check or display volume group settings, such as physical extent size (PE Size), volume group name (VG Name), maximum logical volumes (Max LV), maximum physical volumes (Max PV), etc.
pvscan
– Check or list all physical volumes (PV) created for volume groups (VG) on the current system.
vgextend
– Dynamically add more physical volumes (PV), i.e. a new hard disk or disk partition, to an existing volume group (VG) while it is online. You'll have to manually execute vgextend after the pvcreate command that creates the LVM physical volume (PV).
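As a quick sketch of that last point (not from the linked tutorial, and the second disk is hypothetical):
# prepare a new partition as a PV, then grow the existing VG onto it
pvcreate /dev/cciss/c0d2p1
vgextend VolGroup01 /dev/cciss/c0d2p1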
Contents mostly from: http://www.walkernews.net/2007/07/02/how-to-create-linux-lvm-in-3-minutes/
netcat dd
At one point I had this on here but I can’t find it via search this morning so I’ll just go ahead and add it again.
Useful for cloning drive partitions when you don't have the advantage of a SAN. (taken from here)
- On the Slave, run
nc -l -p 9000 | dd of=/dev/sda
(note that it is important to start with the Slave)
- On the Master, run
dd if=/dev/sda | nc 192.168.1.220 9000
- Go have a drink
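A variation I find handy (my addition, not from the original source): compress the stream to make better use of a slow link.
# on the Slave
nc -l -p 9000 | gunzip | dd of=/dev/sda
# on the Master
dd if=/dev/sda | gzip -c | nc 192.168.1.220 9000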
RT respond on resolve
I'd done a search before on switching RT to reply to requestors when resolving a ticket, rather than it going out as a comment, and came up blank. Today provided this gem:
In <path-to-your-RT>/share/html/Ticket/Elements/Tabs, search for the part where it says title => loc('Resolve') (which, in my code, is $actions->{'B'}), and change it from Action=Comment to Action=Respond in the Update.html URL.
Eric Schultz, United Online
In my revision it was $actions->{'G'}, at line 180 of /opt/rt3/share/html/Ticket/Elements/Tabs.
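If you need to track it down in a different revision, something like this narrows the search (path as in my install):
# list the Update.html links still using Action=Comment
grep -n "Action=Comment" /opt/rt3/share/html/Ticket/Elements/Tabs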
mac hosts again
They changed host-cache flushing on OS X; the latest way to do it is
dscacheutil -flushcache
so the post I made earlier is no longer valid.
mac hosts
For future reference, I am not insane when editing /etc/hosts and the changes don't show up immediately on OS X. The Mac uses lookupd, so I need to send a /usr/sbin/lookupd -flushcache.
Virtualization, Rails and CDNs
A few weeks ago we (not me, but my company) started work on our GoComics iGoogle gadget for the release of iGoogle v2. We weren't really sure how well it was going to go over, but it didn't take long before we were suddenly at 150,000 users and listed along with the New York Times and the Wall Street Journal on the front of iGoogle as the gadgets to get. Load generated by the gadget itself is extremely minimal, as Google has an excellent caching and proxy system that keeps the load off of us. We started to run some numbers on what we might see for users coming through from the gadget to browse the actual GoComics site. Those numbers started to look a little scary and we (this is me now, as the servers are my deal) got a bit concerned. Quick math led us to figures that could easily double the traffic to our site, and the amount of traffic we handle now isn't trivial.
Our servers are virtualized using Xen, and I had thought ahead; with some extra resources in place, replicating a few more servers out didn't take long. One of the things on our roadmap was to add a CDN in the near future, and that got moved up a couple notches. In the original plan of the site we'd talked about separate asset and application servers, but as things were working well at launch we tabled that additional complication for later, knowing we could add it in if need be. Later came upon us quickly, as often seems to be the case. I believe it took the Rails dev less than a day to get the ability into the codebase for a distinct asset address, do testing and get things rolled out to a live environment. Going from the decision to implement the CDN to having the site running and using it took less than a working day. The speed at which we can do things in my company amazes me; I think of the extended projects I hear about other places and realize how special that is. Working with the talented crew that I do makes handling the back end so much easier, and I can't thank them enough.
Monday will be our big day, and it looks like we've gone from 150k to nearly 250k gadgeteers just starting into the weekend; I can't guess how many we'll have come the first "official" work day of the week. There's a lot more tuning that can be done, but it's my view that we need to learn to run on high-octane gas before we switch over to the specialized pieces needed to run rocket fuel.
Lost some boxcars
Pushed a new site live today and I got a call this evening from my VP: she was getting a 500 error on the site. There is no way this site is getting too much traffic, and it works just fine when I pull it up from home. I start log hunting and make my way to the Rails log:
/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/mongrel_rails:281
/usr/bin/mongrel_rails:16:in `load'
/usr/bin/mongrel_rails:16
/!\ FAILSAFE /!\ Wed Oct 08 19:33:05 -0500 2008
Status: 500 Internal Server Error
IP spoofing attack?!
HTTP_CLIENT_IP="1.2.3.4"
HTTP_X_FORWARDED_FOR="1.2.3.4, 4.5.6.7, 10.168.1.81"
Awesome, so it's Rails that's tossing an error; at least we know what is up now. I send out an email to the code gurus and get a quick response back pointing out this site (because I'm in a company of freaks that likes to stay up late, read work email and figure out problems… it's a good kind of freak). The answer is that RoR is pissy in later versions: if your HTTP_CLIENT_IP header differs from your HTTP_X_FORWARDED_FOR header, it's going to put the brakes on for you and throw a 500 error. Unfortunately we're working in an environment with a load balancer as well as Apache/mod_proxy in front of Mongrel, so this will happen to us a lot. The solution is to add
RequestHeader unset Client-IP
to your VirtualHost config and make sure you have mod_headers enabled. At this point it should clear HTTP_CLIENT_IP and stop the error.
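A quick way to verify the fix once Apache reloads (the hostname is a placeholder): send a bogus Client-IP through the front end and make sure you get a 200 back instead of the 500.
# prints just the HTTP status code for the request
curl -s -o /dev/null -w "%{http_code}\n" -H "Client-IP: 1.2.3.4" http://www.example.com/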
Now I can go back to eating my chili.
In the Cloud
Today I got my first EC2 instance running. It took a total of about two hours from registering for the service to a point where I had a full CentOS 5 install up and going, serving out a static web page. Responsiveness of my server was fast (even at the cheapest level) and my experience on the command line felt like I was running on my own hardware. I have to say I was pretty high on the whole thing, that is until I realized there is no way to get a static IP or hostname. Not getting a static IP I can understand. The inability to have a static hostname I don't understand; this kills most of the uses I would have for the service. Sure, there are ways you can get around this with dynamic DNS, and I can hear the hacker side in its Darth Vader voice: "Put together an XMPP-based service to update a master and share server locations as they come up. Only then will you know the true power of that Dark Side." While my sysadmin side manifests itself as Obi-Wan's ethereal image and says, "Don't listen to the guy whose life support system shot craps and let him die."
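If you do go the dynamic DNS route, the instance can at least look up its own current public hostname from the EC2 metadata service and feed that to an updater (a sketch; the metadata path is as I understand it and may change):
# ask the metadata service for this instance's current public DNS name
curl http://169.254.169.254/latest/meta-data/public-hostname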