Not sure if anyone noticed, but both the blog and the new GPF beta test site were down last night. Our hosting service, Slicehost, informed us that a breaker blew in their data center and they were forced to bring a number of machines down to protect them. In addition, the blog server (which also hosts a number of other private sites I run) stopped responding, so they had to reboot it again.
Unfortunately, while Slicehost was very informative and sent me several e-mails to keep me apprised of the situation, the sites continued to be down until early this morning. That’s when I discovered that for some bizarre reason the MySQL and Apache services were not configured to start at boot time. This is baffling, in my opinion, as I thought this was automatic with Fedora. You install the application package and, if it’s a service like this, it also installs the appropriate links in the init directories to make sure the services start on boot. Not so, apparently. I’m not sure if this is Fedora’s fault, Slicehost’s, or mine, to be honest, but it should be fixed now.
There’s one part of me that thinks this outage is an ominous sign on the eve of my leaving Keenspot. Then again, it also helped me catch a critical flaw that would have been extremely annoying if it had happened a week later, after the move, when thousands of readers would be hitting the new site. So I don’t know whether to be paranoid or relieved. (O_O)
Here’s my line of reasoning: In this episode, Leo Laporte and his unusual round of suspects are joined by Jonathan Coulton, geek musician extraordinaire. Aside from discussing a few topics of current note (like the death of HD DVD), they discuss a recent concert by Coulton where Leo and company joined him to play Rock Band before a nerd-filled audience. They go on to talk about the “new” Internet phenomenon of niche entertainment targeting: skipping the big, mass-market blitzkrieg typically used by music, TV, and movie studios to canvass thousands or millions of potential customers, and instead going directly to your core fans, the few dedicated people who will really appreciate what you do. Coulton talks of making a living catering to a small handful of hard-core fans and how this is much more fulfilling than the big media alternative, where both the artist and the audience are faceless statistics on the bottom line of a balance sheet. And they discuss this with such freshness and enthusiasm, as if this were the next new thing, some epiphany that no one has yet uncovered.
What I find so funny about it is… those of us in webcomics have already been doing this… for years. 😀
I’ve noticed this a lot over the past near-decade of GPF‘s existence. Blogs, podcasts, and other forms of grass-roots media have all cropped up during that time, putting publishing power in the hands of the masses and becoming “innovative” and “groundbreaking” in bringing content production to the people. But a fair number of the “new” trends (and problems) associated with these technologies are things I remember seeing crop up among webcartoonists several years earlier. Long before the term “blog” was coined, I remember chatting with other cartoonists on mailing lists and newsgroups, swapping ideas about search engine optimization (before that term was coined as well), getting and retaining readers, how to monetize your site, and so on. It’s entertaining now to watch the tech headlines and see “fresh” ideas crop up that I personally tried, and abandoned, a couple of years before. It’s like the wheel reinventing itself every couple of years, only with different colors and/or materials.
Of course, I would never be so conceited as to believe webcomics “did it first.” Webcomics themselves borrow heavily from the underground comics movement of the 1950s, 60s, and 70s, when small independent publishers ducked under government censors to push innovative and controversial content directly to the people who wanted it. What changed between then and now is that the interconnectivity of the Internet moved this from basements and back rooms to hidden mailing lists and chat rooms, eventually making its way to the mainstream, all while expanding the sphere of availability from isolated pockets of common interest to global reach. It would also be naive to believe this flow of “innovation” is one-way; RSS and other syndication technologies took off first in the blogosphere, and were only later retconned and shoehorned into webcomic automation systems as a handy update notification system.
Perhaps one of the reasons bloggers and podcasters didn’t learn any lessons from webcartoonists is the difference in the skill level, real or perceived (take your pick), required for entry. Cartooning obviously requires some level of artistic talent; in all of its myriad forms, it is a form of art. It’s often a commercial art, intended more to generate revenue than anything else, but an art nonetheless, conveying ideas and emotions graphically. And while a well-crafted blog certainly requires a talent for writing, that is often easier to come by than the ability to both write and draw. Thus the critical mass of webcartoonists is much smaller than that of bloggers and podcasters, making the field less noticeable to the mainstream. That’s also why “break-out” blogs now seem to be a dime a dozen, but it’s still major news when an online comic gets noticed by big media and optioned for TV or movie deals. Everyone knows about blogs and maybe even reads a few, but there are other comics on the “intraweb” besides Dilbert?
I’m not sure if there’s anything useful in these observations, other than the fact that they amuse me occasionally and give me something to post about. I’m not sure if anyone else has made these kinds of observations or, for that matter, whether anybody else cares. But I’ve often wondered if those underground cartoonists of yesteryear thought the same way about us webcartoonists as I have about bloggers. I’d like to think so, just because it creates a nice symmetry. I can’t wait for bloggers to sit around in the old bloggers’ home, thinking such thoughts about whatever comes next. “Those kids with their holocasts… if they had learned the lessons we did about AI search, they’d be raking in the quatloos by now….”
By now, I’m assuming most of you have read Monday’s GPF News item. (If you haven’t, shame on you.) GPF is leaving Keenspot, and I’m neck-deep in unit testing the new site with hopes of releasing it to beta testers soon. If you’re interested in beta testing, you can volunteer in this thread on the old forum.
However, I’ve hit upon one little programming snag, so I thought I’d put out an appeal for help. I thought the blog would be a more appropriate venue for this than the forum; that assumption could be wrong, but I’ll go with it anyway. For those of you with some Web-based programming knowledge, especially in the areas of PHP and cookies, please put on your thinking caps.
As part of the new site, I’m implementing my own version of Keenspot’s PREMIUM service, reusing the old “GPF Premium” branding. Keenspot PREMIUM is going away (for several reasons I won’t go into here), but as the service’s biggest proponent and largest beneficiary, I’d hate to lose that functionality. So the new site will launch with its own independent Premium functionality, including all the old service’s features (optional ad-free surfing, weekly archives, High-Def archives, tons of exclusives like Jeff’s Sketchbook, etc.) plus a few new features that I’ve been wanting to implement but haven’t had the time or the technological hoop-jumping expertise to work on at Keen.
I want to handle Premium sign-ups and account management over secure HTTP (HTTPS). The benefits should be obvious: by encrypting the account creation and management pages, you keep passwords and account details from being sniffed off the wire and you protect user privacy. While these pages may still be susceptible to other forms of attack (and I’ve coded them to be as resilient as I know how), encrypting the traffic end-to-end goes a long way toward cutting off those attack vectors.
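For the curious, here’s a rough sketch of how a page can force itself onto HTTPS in PHP. This is just an illustration with a made-up helper name, not the actual code from the new site:

```php
<?php
// Hypothetical helper (not the actual site code): redirect any plain-HTTP
// request for an account page to its HTTPS equivalent.
function require_https()
{
    $is_https = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';
    if (!$is_https) {
        $url = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
        header('Location: ' . $url);
        exit;
    }
}

// Called at the top of the sign-up and account management pages.
require_https();
?>
```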
The problem occurs when I set the cookie over the encrypted HTTPS connection, then try to read it over unencrypted HTTP. It appears that none of my test browsers send the cookie back when the encryption state changes. The reverse is also true: if I switch to the unencrypted URL and set the cookie over HTTP, then try to access a page via HTTPS, the encrypted page can’t see the cookie either. It works like an either-or situation, when what I really want is both. If I set a cookie over HTTPS, I want to see it in both HTTP and HTTPS mode.
PHP’s primary cookie interface is the setcookie() function (for setting) and the $_COOKIE array (for reading).
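For anyone following along at home, the two fit together roughly like this (the cookie name here is just an example, not what the Premium code actually uses):

```php
<?php
// Setting a cookie: setcookie() must be called before any page output,
// since it works by adding a Set-Cookie header to the response.
// Signature (PHP 5): setcookie(name, value, expire, path, domain, secure)
setcookie('premium_session', 'abc123', time() + 3600, '/');

// Reading a cookie: the value shows up in $_COOKIE on *subsequent*
// requests, once the browser has sent it back to the server.
if (isset($_COOKIE['premium_session'])) {
    echo 'Cookie value: ' . htmlspecialchars($_COOKIE['premium_session']);
} else {
    echo 'No cookie received yet.';
}
?>
```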
setcookie() includes a boolean parameter for secure cookies, i.e. cookies that will only be sent via HTTPS. What’s annoying is that even when I set this flag to false to force it to be insecure, the scripts continue to exhibit the same behavior: cookies set via HTTP can only be read via HTTP and vice versa. I’ve also tried setting the same cookie both ways–first in one protocol, then the other, without erasing the first cookie–but that didn’t seem to work. The second cookie overwrites the first one, effectively turning it off.
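To make the test concrete, what I’ve been doing boils down to something like the following pair of scripts (simplified here; the file and cookie names are just placeholders):

```php
<?php
// set_test.php -- load this once via https:// and once via http://.
// The sixth argument is the "secure" flag; passing false should, in
// theory, allow the cookie to be sent over both protocols.
setcookie('gpf_test', 'hello', time() + 3600, '/', '', false);
echo 'Cookie set under ' . (empty($_SERVER['HTTPS']) ? 'HTTP' : 'HTTPS');
?>
```

```php
<?php
// read_test.php -- then load this via the *other* protocol.
// In my tests, the cookie only shows up when the protocol here matches
// the one it was set under.
if (isset($_COOKIE['gpf_test'])) {
    echo 'Got the cookie: ' . htmlspecialchars($_COOKIE['gpf_test']);
} else {
    echo 'No cookie visible from this protocol.';
}
?>
```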
I had heard that IE 6 exhibited this behavior as a bug. However, I tried the exact same tests in Firefox 2.0.0.11, Opera 9.24, and Safari 3.0.4 (all on Windows) as well as IE 7, and all reacted the same way: cookies set over HTTP could not be read over HTTPS and vice versa. It’s a bit frustrating. Obviously, I don’t want my Premium folks to be forced to use the new site in encrypted mode all the time, as this would slow down all the pages and put a significant extra load on the server as the number of subscribers increases. But I want to protect my users’ privacy and settings (and one of my important revenue streams) by encrypting their account access.
So I guess I’m looking for answers to two questions: Is this cross-protocol cookie behavior standard across browsers, or is something misconfigured on my end? And if it is standard, is there a way to set a cookie during an HTTPS session that my scripts can still read over plain HTTP?
Any responses via e-mail or (preferred) comments below will be appreciated.
Update March 5, 2008: Thanks to the input of many commenters below, it looks like I’ve got a solution. The problem, as usual, was somewhere between the chair and the keyboard, and the faulty component has been sufficiently flogged with a wet noodle. Immense thanks to everyone who provided feedback and suggestions.