Getting Found
One of the biggest problems faced today by companies in the business of content is discovery. It used to be that if you could weasel your way into a spot in the newspaper, odds were a reasonable number of people would look at your product. Ideally those people would tell their friends to check it out in the paper, and eventually you'd have an old-school social media explosion. In the last ten years this has really started to change. The internet has provided an experience where you not only go get exactly what you are looking for, but people have also been trained to look away from things that look different or aren't specifically what they want. In the old days a reader would pick up the paper, thumb through looking for the Dave Barry column, get a chuckle, and probably read a few columns around it. These days a person pops open the web page, reads the column, and is off to do something somewhere else on the internet, or doesn't want to click that side link because who knows where it will take them. With the newspaper no longer being the ubiquitous medium, the built-in mass marketing and cross promotion that came with it is simply gone.
The new battle is building brand digitally from the start, and most content-producing companies are still in the process of figuring this out. You have to let people know the "New Tide" is out there; simply putting it on the shelves isn't going to make it a best seller. Socially engineered digital marketing has to be the first step, because the keyholders no longer hold a set of golden keys. Content producers need to go directly to the public, which in the ideal case, when the content appeals, will create a demand for more of it (note the TV show life cycle). This build-the-brand-from-the-get-go outlook is starting to show its head in the content world, but it's a slow process. Old habits, old ways and old comfort zones are hard to break out of. If you are comfortable, though, someone is probably doing it better than you.
enabling ESXi ssh
After installing ESXi I found I needed to ssh into the hypervisor, which isn't enabled by default. To get it going you need to:
- At the console of the ESXi host press Alt+F1
- Alt+F1
- Type in “unsupported” and hit enter. This is done blindly and you won’t see any indication you are doing anything
- unsupported
- If you typed it correctly it will prompt you for your root account password now.
- <root password>
- You will be dropped to a command prompt and need to edit the inetd.conf file
- vi /etc/inetd.conf
- Uncomment the lines pertaining to ssh by removing the # sign in front of them
- this can be done in vi by going to the # sign you want to remove and hitting the ‘x’ key
- Save your changes and exit; typing :wq!<enter> should force the file to write (save) and quit (exit) you out of vi
- :wq!
- Next you need to restart the inetd process. Figure out what the pid of the inetd process is
- ps -ef | grep inetd
- The left most number will be the pid of the process that you need to restart. If the number was 5128 the restart command would be
- kill -HUP 5128
- You are all done now and ssh should be accessible so log out
- exit
If you don’t like my instructions just search for ESXi + ssh and you should find a million other write ups and videos on doing it.
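For reference, once you're at the unsupported console the whole edit-and-restart dance condenses down to roughly the following; this assumes the busybox sed on your ESXi build supports -i and that the ssh lines are commented out with a leading # (if not, stick with vi as above):
sed -i 's/^#ssh/ssh/' /etc/inetd.conf
kill -HUP $(ps -ef | grep inetd | grep -v grep | awk '{print $1}')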
Changes
These last couple of weeks I've been taking a look at some major infrastructure changes. We're looking at doing a gadget for Yahoo, which means I had a chance to implement some new infrastructure options: things I'd thought about at one time or another, or that someone else had brought up to me in the office, and was waiting on a good project to test out.
The first decision was servers, and I decided to shop for some used Dell 1950s. I recently bought a dual quad-core HP G5 with 16GB of RAM; for about the same price I got five dual dual-core Dell 1950s with 16GB of RAM each from Stallard. While there isn't much like a fresh-from-the-box server, the amount of compute power per dollar I can get with a set of these older servers is amazing. They still have virtualization extensions, and I'm getting 20 cores rather than 8, 80GB of RAM rather than 16, and 5x as much potential throughput. I think it's a win thus far.
Next up is a switch from Redhat Xen to VMWare ESXi. I like Redhat and I've been working in it for most of my career; the problem with Xen is that it doesn't have a decent administration interface that I can get someone else excited about using. VMWare just makes too many things simple that are ugly in Xen. [ One thing it doesn't make simple is installing the damn management client if you are not a Windows user; someone in corporate needs to realize how silly that is. ] Once the client is up and running, the process of connecting to the host, adding servers and allocating resources is a breeze. The built-in reporting functions and resource monitoring are another nice bonus. The biggest reason for all of this is that we're running VMWare in the office, so this could possibly consolidate our two hypervisor environments down to one, enable quick DR, and let our office sysadmin better cover my ass when I'm on vacation (and me cover his too, I suppose).
Along with the switch to ESXi is a switch to Ubuntu. The nice thing with running Redhat Xen as your hypervisor is that you can run four or unlimited guests, depending upon which subscription level you purchase. The flip side is that the unlimited subscription carries a yearly $1000 fee on a piece of hardware I just paid about $1000 for, which seems wrong in concept. This leads me back to Ubuntu Server being the guest to work with here; in a few choice places we've already been running it flawlessly since 6.06 LTS. I love Ubuntu desktop, and of our four developers I would say two are fluent in Ubuntu, which leads to potential for leveraging current knowledge bases.
Thus far I've got a couple of the hosts built out and basic builds of the variety of servers we'll need for this project: memcached, MySQL and Apache/Passenger. All of this is the simple stuff; over the next week I'll be getting them ready for Webistrano builds and some F5 load balancing, followed by hardcore load testing. The interesting thing will be whether skipping out on the para-virtualization I adore is going to cause too much of a resource hit to make the switch work.
CMYK to RGB
One of the real beasts I've fought for a few years is getting ImageMagick to do a good CMYK to RGB conversion of files. In the past I've messed with the color profiles but always came out on the losing end of the deal. This week, though, I got all my image conversions to look good. My command ends up looking like:
convert CMYK.tif -quiet -profile Generic_CMYK_Profile.icc -profile Generic_RGB_Profile.icc -strip RGB.png
The problem that I've always had in the past is that I had neglected to set a profile in the image to start with, as my images were coming in without one. Once I gave it the CMYK profile and told it to transform to the RGB profile, all my colors started to come together. I strip off the profiles at the end in order to get back to a non-profiled image. No more neon greens, hot pinks or electric blues in those images.
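A quick way to sanity-check the conversion is to ask ImageMagick what colorspace it thinks each file is in before and after (file names here match the example above):
identify -verbose CMYK.tif | grep Colorspace
identify -verbose RGB.png | grep Colorspace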
OpenX email ads
I've been getting an instance of OpenX up and running since yesterday morning and it's looking to be some pretty kick-ass software. My gripe is that one of the major things I'm trying to tackle is a replacement for marking up email templates every time a new ad rolls in. OpenX has this capability but apparently it broke recently. I found this post in their forum, and there's a fix posted there that works too. Here's the guts of it:
Anyhow, you’re not supposed to edit this file directly, but if you need this fixed ASAP like we did, try opening up deliver/ck.php (note that we are on version 2.8.1) and adding the following at line 3174:
reset($zoneLinkedAds['lAds']);
This is just before the line:
list($adId, $ad) = each($zoneLinkedAds['lAds']);
in the function _getZoneAd(). This fixed the problem for us, though you might have to monkey around with it (carefully) a bit if this doesn’t work for you.
getting my stat on
I’ve spent some of the last weekend and a few nights working on getting my stats on. Specifically I now have memcached and mysql pushing stats out via SNMP, which I’m polling and graphing with my install of OpenNMS. Next up will be a quick piece to check the queues of the postfix and qmail servers. Once I get my graph output squared away ( pretty colors and all ) I’ll push up a quick guide detailing what I did and why.
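The queue checks themselves should be simple; something along these lines is what I have in mind, though the exact commands and parsing will probably change once they're feeding SNMP:
# postfix: summary line at the end of the queue listing
postqueue -p | tail -n 1
# qmail: prints "messages in queue: N"
/var/qmail/bin/qmail-qstat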
Thus far, simply having the graphs going for a couple of days has given me good insight into what I've got running, and the trends being shown let me know there are some server tweaks that need to happen, as well as some application logic that will need to be traced back to figure out how it's leading to certain results.
In the end popping this information out and graphing it was much easier than I thought it would be.
MySQL I Deny Thee Extended Insert
--extended-insert=false
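In mysqldump terms that ends up looking something like this (the database name is just an example); you trade a bigger dump file for one INSERT statement per row instead of the giant multi-row inserts:
mysqldump --extended-insert=false -u root -p mydatabase > mydatabase.sql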
rest_client not equal to rest_client
Note: when a Ruby setup, Sinatra in this instance, says it's missing rest_client, it is actually missing the gem rest-client.
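The fix is just installing under the gem's actual name:
gem install rest-client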
RabbitMQ – 1.5.5 Denied – 1.5.3 Rock On
No sooner had I got my RabbitMQ up and running than I found out there wasn't a perl module out there to talk AMQP; so wrong. So google and twitter high jinks ensued, and I found that there is a STOMP perl module and it seems a decent number of people are talking STOMP with ruby. Awesomely enough, there is a STOMP addon module you can compile up for RabbitMQ, available from the RabbitMQ group itself. Unfortunately the 1.5.5 release of RabbitMQ doesn't seem to play happily with the latest version of the STOMP addon, which sucks. I did find out that the 1.5.3 RabbitMQ and STOMP addon are all good to go though. So I ended up backing my install down a couple of versions, and once I grabbed the corresponding STOMPer it was back up and running. I've tested it with perl Net::Stomp, another developer has hit on it with a java AMQP implementation, and I believe he'll be giving it a go with ruby tonight.
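If you want to follow along on the perl side, Net::Stomp installs straight from CPAN:
perl -MCPAN -e 'install Net::Stomp'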
I'm REALLY looking forward to messing with AMQP to pass things around. There's more than a few places where I can see we could have had a much smoother implementation, and smoother systems running, with a decent message queue or broker on the backside, especially a language-independent one. I'm not a proponent (in my world anyway) of everything having to live on the BUS. It's my belief that getting this component in our arsenal is going to allow us to make an evolutionary step forward in our systems.
RHEL5 RabbitMQ Install
Just did a RabbitMQ install on a RHEL5 server and it was insanely easy.
- Get the RabbitMQ rpm from rabbitmq.com (here at time of this post)
- Become EPEL enabled if not so:
su -c 'rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm'
- yum install erlang
- rpm --install rabbitmq-server-YourVersion.rpm
- /etc/init.d/rabbitmq-server start
- open up port 5672
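On a stock RHEL5 iptables setup that last step can look something like this (adjust for your own firewall rules, and run service iptables save if you want it to persist across restarts):
iptables -I INPUT -p tcp --dport 5672 -j ACCEPT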
That’s all there is to that. If it’s going to be in the wild make sure to change the default user ‘guest’ using
rabbitmqctl add_user username password
rabbitmqctl delete_user guest
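Depending on which rabbitmqctl your version ships with, you can double-check that the user swap took:
rabbitmqctl list_users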