If you’ve been following my Twitter account at all, you’ve probably noticed by now that I’ve become an avid mobile device (i.e. smartphone) user, and a fan of Android in particular. This isn’t just a passing phase for me, nor is this a technology fad that’s just going to fade away. Mobile technology is really taking off, and I wouldn’t be surprised if a paradigm shift occurs—if it hasn’t already—where more people use smartphones and mobile devices to access the Internet and other online services than full desktops or laptops. There are other contenders vying to be our one-and-only window to the digital world, like set-top boxes, digital TVs, and such, but nothing is as personal and portable as the smartphone and its bigger brother, the tablet.
That said, I’m not in the camp that believes that the Web is dead and that mobile apps are the way of the future. I’ve expressed my feelings on that here before. Apps won’t and can’t be the end-all, be-all interface to data, and the mobile Web will always have a place. Thus the mobile browser is one of the most important apps a smartphone can have. Unfortunately, most browsers on smartphones are anemic, underpowered, and severely lacking in important functionality. Smartphone manufacturers and OS authors want us to believe that we can leave the laptop behind and work entirely from that wondrous miracle in our pocket, but they fail to deliver the tools we need to make that dream a reality.
My case in point: client-certificate authentication. As a very brief summary, the entire e-commerce industry rests on a set of encryption technologies such as HTTPS, SSL, TLS, etc., that allow secure, private communication between a client (such as an online shopper) and a server (an online store). The server authenticates itself to the client by using a digital certificate, signed by a trusted certificate authority which has investigated and authenticated the server as a legitimate entity. The client can rest assured that the server belongs to the authenticated entity because the certificate uses strong public-key cryptography to provide a chain of trust back to the authenticating authority. Without this technology in place, we wouldn’t be able to tell legitimate businesses such as online retailers and banks from the phishing scams so prevalent on the Web. (This doesn’t always solve problems between the keyboard and the chair, of course, but it is effective as long as the wetware interface is working properly.)
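To make that concrete, here’s roughly what a browser does under the hood when it validates a server’s certificate. This is just a rough sketch in C# (the file name is made up), not any particular browser’s actual code:

    using System;
    using System.Security.Cryptography.X509Certificates;

    class ChainCheck
    {
        static void Main()
        {
            // Load the server's certificate (file name is hypothetical).
            X509Certificate2 cert = new X509Certificate2("server.cer");

            // Walk the chain of trust back to a trusted root CA,
            // checking for revocation along the way.
            X509Chain chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
            bool valid = chain.Build(cert);

            Console.WriteLine(valid ? "Trusted" : "Untrusted: " +
                chain.ChainStatus[0].StatusInformation);
        }
    }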
But digital certificates can be used to authenticate the client as well as the server. Many businesses and governments use client certificates to authenticate users to secure systems. For example, I use a government-issued Smart Card to authenticate with my client’s servers. On this card is a chip that contains my digital certificate, signed by a private certificate authority. When I authenticate with the client’s services, the private key on the card creates a digital signature which the server can authenticate against my public key, the inverse of what happens between the online shopper and the storefront. Thus, I can trust the validity of the government’s certificate and know I’m connecting to their servers and no one else, and they in turn can validate that I (or the person who has my card) am who I say I am and let me in. I use a similar technology with GPF, although I import my certificates directly into the browser rather than use an external card. I created my own private certificate authority and issue client certificates to each browser I wish to use to access my admin interfaces. That way, I know only certain machines can access those portions of the site, offering a lot more security than just a simple password can provide.
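In code terms, offering a client certificate is just the mirror image of the usual handshake. Here’s a hedged .NET sketch of the client side; the host name, file name, and password are all made up:

    using System;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Authentication;
    using System.Security.Cryptography.X509Certificates;

    class ClientCertDemo
    {
        static void Main()
        {
            // Connect to the server (the host is hypothetical).
            TcpClient tcp = new TcpClient("secure.example.gov", 443);
            SslStream ssl = new SslStream(tcp.GetStream());

            // Our certificate and private key, e.g. imported from a card.
            X509Certificate2 mine = new X509Certificate2("client.pfx", "password");
            X509CertificateCollection certs = new X509CertificateCollection();
            certs.Add(mine);

            // The server proves itself to us while we prove ourselves to it.
            ssl.AuthenticateAsClient("secure.example.gov", certs,
                SslProtocols.Tls, true);
            Console.WriteLine("Mutually authenticated: " +
                ssl.IsMutuallyAuthenticated);
        }
    }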
This isn’t a new technology. SSL has been around almost as long as the Web itself, and it wasn’t long before the model was flipped around to authenticate clients to servers as well as servers to clients. This is a tool used by businesses every day all over the world. Every desktop browser supports client certificates because they are a standard. Any browser that doesn’t support them is likely to be overlooked or ignored in favor of browsers that do.
Yet the support for client certificates on mobile devices is appallingly absent. I know the built-in Android browser doesn’t support it, and I created an issue in Google’s official Android issue tracker to complain about it. Android supports client certs for WiFi authentication, but not in the browser, e-mail, or any other key service vital to secure business communications. Supposedly support for this functionality is going to be added in future versions of Android, but that doesn’t help me or any of the millions of current Android users until it comes time to upgrade our devices. I’ve read in various places that the iPhone supports client certs, but I’ve never been able to get any of the solutions to work with my iPod Touch (essentially an iPhone minus the annoying contract and poor service of AT&T). The only success I’ve had in this area has been with Firefox Mobile, which is pretty much a Firefox 4 release candidate smooshed and crunched down to fit on a mobile device. It’s bloated and a lot slower than Android’s built-in browser, and there’s no handy UI for importing certs like there is on the desktop, but if you take a sledgehammer to it and do some manual file tweaking, you can import your client and CA certs into the certificate database and use it effectively.
Seriously, guys… you want your devices and mobile OSes to be taken seriously by businesses as tools to take our work out of the office and on the road. Yet you don’t give us the essential tools required to take advantage of this amazing freedom. Sure, you tell us “there’s an app for that”, but frankly, there isn’t. I’ve looked, and they’re not there. Apple won’t let third-party browsers compete with Safari on iOS, and none of the Android add-on browsers support client certs either. Only Firefox, a desktop browser masquerading as a mobile app, comes close, and it takes a bit of technical wizardry to do something that should be a quick five-second import. Someone’s got to step up to the plate and make some progress here, or no business that really understands security is going to take the mobile space seriously.
By now, the tech savvy among you have probably heard of Firesheep, the infamous unofficial Firefox plugin that lets you swipe other people’s session cookies and impersonate them on various popular, less-than-secure websites if you and they share the same unencrypted WiFi access point. The less tech savvy among you probably couldn’t care less, or are so terrified and spooked that you’ve turned off and unplugged your computers, buried them in a 20-foot-deep hole in the backyard, and layered on top of them concrete, asbestos, Kevlar, lard, and ten thousand old AOL CDs you’ve been hoarding in the closet since 1990.
OK, I was only kidding about the lard.
Last week I tweeted that “Firesheep makes me want to weep for the Internet and laugh maniacally, both simultaneously”. That’s no exaggeration. On one hand, it’s performing wonders by raising awareness of just how insecure many of our favorite sites really are. The problem Firesheep exposes has been around for ages; hard-core hackers could perform all the tasks that this plugin does through readily available tools and a lot of dedicated logging and log scanning. What Firesheep does is take a complicated, hard-core hacker task and make it bone-headedly simple: install, scan, infiltrate. It provides a wake-up call to Web 2.0 developers that they need to look seriously at security rather than just pay it lip service. And at this task it seems to be doing quite well; already Google has made moves to force SSL for all GMail access and Facebook is mumbling under its breath that they’re “looking into it”.
What scares me about Firesheep is the bone-headedly simple aspect. I won’t get into the ethics of responsible disclosure of security flaws, but releasing a tool like this that makes such a questionable task as simple as clicking a button is bound to have repercussions. Putting this tool in the hands of everyone means putting it in the hands of everyone, no matter what color hat they wear. Yes, we’ll hopefully see security improve at many of the websites we use every day, but how many innocent and ignorant users will be maliciously attacked before those changes occur? The gun was a very useful tool for early pioneers to hunt and protect their families, but it’s also useful for criminals to steal, coerce, and murder their victims. Technology is inherently amoral; it is people that are moral or immoral.
I won’t go into the details of how Firesheep works or the many ways it can be easily thwarted. A quick spin by your favorite search engine will likely provide all the information you may need. However, I did want to take a few minutes to publicly analyze the various aspects of this site and the GPF site and reassure all my readers that your information should be reasonably safe. Right now, it looks like the person most likely to be impacted would be me, directly or indirectly, and the risks are actually pretty darn low.
First up, this site: Firesheep does indeed include information on how to “hack” WordPress. Well, how to hack WordPress.com. Since Neural Core Dump is self-hosted, the built-in attack against WordPress.com hosted blogs won’t affect us here. However, Firesheep is open source, so it is trivial to modify the code to attack specific domains, meaning the WordPress.com attack can be tweaked to attack an individual self-hosted WordPress blog. My original assumptions here proved to be incorrect; looking back over the Firesheep code, it doesn’t look specifically for WordPress.com domains, but for common cookie names used by all instances of WordPress, whether it’s self-hosted or not. Thus, any logged-in user here could potentially be exposed. However, this blog’s small size becomes its advantage; the likelihood that anyone will directly attack it is pretty low, and even then I keep extensive backups and can easily back out malicious comments or posts. (Mind you, being too small should not be used as an excuse not to be concerned, just that the threat can be downplayed for the time being.) I rarely use public, open WiFi hot spots (to be honest, there aren’t that many of them around where I live), and in the rare cases that I do, it’s easy enough for me to create an SSH tunnel to my home Linux box and proxy all my HTTP traffic through it.
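Of course, tunneling only protects me. For site owners, the real fix is to serve logged-in traffic over SSL and flag session cookies so they never travel over plain HTTP in the first place. WordPress would handle this in PHP, but the idea looks the same in any language; here’s a hedged C# illustration of the concept (the cookie name is made up, and this is definitely not WordPress’s code):

    using System.Web;

    public static class SecureCookieExample
    {
        // The general defense against Firesheep-style sniffing: mark the
        // session cookie Secure so the browser only sends it over HTTPS.
        public static void IssueSessionCookie(HttpResponse response, string value)
        {
            HttpCookie cookie = new HttpCookie("session_id", value);
            cookie.Secure = true;    // HTTPS only
            cookie.HttpOnly = true;  // hide it from page JavaScript as a bonus
            response.Cookies.Add(cookie);
        }
    }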
As for GPF, all logins occur over SSL, so no passwords are ever sent in the clear. Of course, Firesheep does not sniff passwords but rather session cookies, so this isn’t really the problem. I thought of a few scenarios where Firesheep could be used against GPF to varying degrees of success:
Again, GPF’s probably far too small a target for anyone to really bother with, but the fact is that so little attack surface is visible that the only person likely to be hurt by it is me.
There, I hope I laid all your GPF/Firesheep fears to rest. What was that? The only person really concerned about this was me? Oh… well, in that case… um… never mind, I guess.
UPDATED November 4, 2010: Updated the paragraph about this blog to correct an incorrect assumption about only WordPress.com blogs being affected.
In the ongoing spirit of releasing pointless Open Source software, I semi-proudly announce the release of Cryptnos 1.0 for Microsoft .NET 2.0.
So what is it? Cryptnos is a secure password generator. By now, I’m sure many of you have heard of various programs, especially browser plug-ins, that let you generate unique passwords for all your various online logins. They usually do this by combining the domain name of the site with a master password you supply, then running those inputs through an MD5 hash to give you a “strong” password that is unique for that site. Many of these applets also search the page you’re currently on for the login form and attempt to pre-populate the password box for you. Well, Cryptnos is kind of like that. Only it’s not.
Like these other apps, Cryptnos generates a password from your master password and from some mnemonic or “site token” that you supply. But that’s where the similarities end. First of all, Cryptnos does not live in your browser, so it can be used for any application where you need a strong password. As a corollary, the mnemonic does not have to be a domain name, although it certainly can be; it can be whatever you want it to be, so long as it is unique and it helps you remember what the password is used for. Next, Cryptnos gives you unparalleled flexibility in how your password is generated. You’re not stuck using just MD5, a broken cryptographic hash that is horribly out of date and which should no longer be used. You can select from a number of hashing algorithms, as well as how many times the hash should be applied. Cryptnos also uses Base64 rather than hexadecimal to encode the output, meaning your generated passwords can have up to 64 possible options per character instead of 16, making it stronger per character than the other guys. You can further tweak your generated password by limiting the types of characters used (for those times where a site requires you to only use letters and numbers) and the length of your password. Best of all, Cryptnos remembers all of these options for you, storing them in an encrypted state that is nearly impossible to crack. Your master password is NEVER stored, nor are your generated passwords; your passwords are generated on the fly, as you need them, and cleared from memory once the application closes.
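For the technically inclined, the core idea boils down to just a few lines. This is a simplified sketch of the general approach, not the actual Cryptnos source; it glosses over the character filtering and parameter storage entirely:

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class CryptnosSketch
    {
        static string Generate(string siteToken, string master,
            int iterations, int maxLength)
        {
            // The hash algorithm is user-selectable in Cryptnos;
            // SHA-256 is just one example.
            HashAlgorithm hasher = SHA256.Create();
            byte[] data = Encoding.UTF8.GetBytes(siteToken + master);
            for (int i = 0; i < iterations; i++)
                data = hasher.ComputeHash(data);
            // Base64 gives 64 possible options per character versus
            // hexadecimal's 16.
            string result = Convert.ToBase64String(data);
            return result.Substring(0, Math.Min(maxLength, result.Length));
        }

        static void Main()
        {
            Console.WriteLine(Generate("example.com", "my master password", 1, 16));
        }
    }

The real program layers the character-class restrictions and the encrypted parameter storage on top of this core.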
Cryptnos originally sprang from the “Hash Text” function of WinHasher, which I used to generate passwords in a similar fashion for a long time. I quickly ran into limitations in using WinHasher this way, especially when it came to sites where I had to tweak the password after it was generated. I thought to myself, “I’ll never be able to remember all these tweaks for all these passwords. Why can’t I just rip this function out of WinHasher and wrap a program around it to let the computer do all the work for me?” And that’s exactly what I did. I’ve been using Cryptnos to generate and “store” my passwords for months now and I finally decided it was stable enough to release it to the world at large.
Right now, Cryptnos is only available for Microsoft .NET 2.0, which means by default it runs on Windows. However, I’m also working on a Google Android version, which means a pure Java implementation should be simple to extract after that. I’ve even been pursuing a PHP and/or JavaScript implementation that does everything except storing the parameter data. I’m not sure when any of these will escape from my hard drive, but anyone interested in them can drop me an e-mail and I’ll happily open a dialog.
Oh, and the name? Um, well, I wanted a better one, but that’s the only thing I could find that sounded “passwordy” that didn’t have a lot of hits on Google.
Wow! A non-Twitter digest post! Amazing!
This is a quickie to let you guys know I’ve just released WinHasher 1.6. This is a minor release containing a few cosmetic and minor functional changes, so there’s no need to upgrade unless the features or bug fixes listed below seem worth the effort.
For those who don’t know, WinHasher is a cryptographic hash generator for Microsoft .NET. It is roughly analogous to digest programs on other platforms (such as “openssl dgst” from OpenSSL) but designed for Windows and other .NET platforms. It lets you verify the integrity of downloads and determine whether changes have been made to files. It does NOT guarantee the authenticity of a file; for that, use cryptographic signatures such as those produced by PGP or GnuPG. It also lets you create hashes of arbitrary text, which is handy for generating strong “passwords”, although I’m working on a different project that will do a much better job of this particular task. [Looks around shifty-eyed.]
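At its heart, the operation is the same one “openssl dgst” performs. In .NET terms it boils down to something like this (a rough sketch, not WinHasher’s actual code):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    class HashFileSketch
    {
        static void Main(string[] args)
        {
            // Hash the file named on the command line. WinHasher offers
            // several algorithms; SHA-1 is just one example.
            using (FileStream fs = File.OpenRead(args[0]))
            using (SHA1 sha1 = SHA1.Create())
            {
                byte[] hash = sha1.ComputeHash(fs);
                Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
            }
        }
    }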
There’s an interesting trend in webcomics of pushing onto mobile devices. I think it started with Clickwheel.com (which apparently no longer exists, hence no link), which tried to bring comics to the iPod by encoding them as short video files syndicated like a podcast. I thought this was an interesting idea, and I was even offered an opportunity to get into it on the ground floor, right when it started. However, I had a number of technical and rights management questions about the service and dragged my feet, eventually losing out on the deal and never following up on it. Given that the domain is now owned by a Norwegian ISP that apparently serves up malware, I’d say apathy may have been the right choice.
Nowadays the hot new distribution medium is to put an app on the (seemingly) ubiquitous iPhone (or its GSM-crippled sibling, the iPod Touch). Keenspot was the first place I remember seeing webcomic iPhone apps showing up, although I can’t say for certain that they started the trend. Since then, I’ve seen iPhone apps for various comics popping up here and there. The one I’ve been watching the closest has been Howard Tayler’s Schlock Mercenary (since Howard and I follow each other on Twitter and Facebook). It’s a curious trend to be certain, and it certainly has an element of “hipness” to it. After all, the iPhone is the “it” mobile device these days. And one thing every webcartoonist wants is more eyeballs reading their comics. Certainly it makes sense to go where those eyeballs are, to reach as many potential readers as possible.
Then a thought occurred to me: No one has really asked me why there’s no GPF iPhone app. Certainly it’s a valid question, and I’m even more surprised it hasn’t been brought up yet. I know a number of you out there use iPhones, as I’ve read your comments and seen your screen shots of the GPF site in the past. So I thought about this for a while and came up with a list of reasons why we don’t have an app, then decided to document those reasons here so I can point folks to one place so I won’t have to repeat myself. I thought about putting this in the GPF News, but since it’s more of an opinion piece than a news item, it probably belongs here instead. (There will probably be links from the FAQ eventually, if nothing else.)
The primary reason there is no dedicated GPF app for the iPhone will surely come as a shock to those out there who can’t get enough of their favorite beloved Apple device. I’ve never been one for great diplomacy or delicacy, so I’m afraid I can only be my blunt, bullish, blundering self. I really hate to say this, but it has to be said:
The iPhone isn’t the last word in mobile computing.
Now, before the fan boys pick up their torches and pitchforks, let me elaborate. I have nothing against the iPhone. In fact, at one point, I seriously considered getting one. The GPF Year Nine story “iDilemma” is actually semi-autobiographical. (GPF Premium subscribers should check out the Author’s Notes for that story to see how it diverges from real life.) In the end, it all boiled down to economics, just as it did for Nick and Ki; it was less expensive for me to buy my current Treo 700p without subsidy than for me to break my contract with my current carrier, switch to AT&T, buy the iPhone plus another phone for my wife, and so on. While I passed on the device itself, several of my coworkers at my day job have iPhones, so I can pretty much get access to one to play with any time I wish. Thus I’m familiar enough with how it works and all the whiz-bang spiffiness it purports to have. I know a thing or two about what it does right, what it does wrong, and how it’s revolutionized the mobile computing or “smartphone” industry.
That said, the iPhone’s 30+ million units pales in comparison to the number of BlackBerry devices in circulation. The iPhone represents one device, one platform, on one network. BlackBerries are available in many form factors from almost every wireless carrier. On top of that, Android is a rapidly-growing platform; while it hasn’t yet matched the numbers of the iPhone, like the BlackBerry it comes in many flavors from many manufacturers and can be found on almost every network. It won’t be long before Android phones overtake iPhones in number by mere aggregation of disparate devices. And while some folks dismiss Palm as a has-been in the market, the Pre and the Pixi are selling modestly and may represent a comeback for the company. (Don’t forget the many of us who, ahem, still use good ol’ Palm OS, myself included, despite its age.) No matter how much we’d all wish it just went away, Windows Mobile still exists and people are still suckered into buying phones with it installed. And all of this ignores the biggest player of all in the field: Symbian, which runs about half of all mobile phones in the world.
Right there, I’ve listed off seven mobile platforms, including the iPhone. To pick one would severely limit the potential to reach new customers. To pick one with such a small market share (~14% as of Q2 2009) would be even more limiting. If my goal were to reach as many eyeballs as possible, why would I focus on one tiny segment of the market, simply because it’s the one everyone is talking about at the moment? After all, everyone might be talking about something else in a couple months.
Of course, this plethora of platforms opens up another can of worms. My goal with GPF has always been to be as accessible as possible to as many people as possible. Although the comic is (currently) confined to the English-speaking world, it is available to just about anyone with a Web browser. I carefully designed the site to be as cross-browser compatible as possible, sometimes even sticking with older technologies longer than I should so the site will keep working in older browsers. If nothing else, it degrades gracefully and is still functional if you don’t have something top of the line. For that matter, thanks to our Oh No Robot transcriptions, you can even read 95+% of the archives with a text browser! That also means screen readers for the visually impaired can be used to enjoy the strip. It’s not ideal, of course, but it’s functional, and it’s helped us garner fans in ways you might not expect.
So if I’m not going to limit myself by building a specialized app for one mobile platform, does that mean I’m going to end up making applications for all mobile platforms? No, that too is an exercise in futility. Every mobile platform has its own SDK with its own quirks. webOS uses HTML/CSS/JavaScript, the iPhone uses Objective-C, Android uses its own version of Java, and BlackBerry, Palm OS, Symbian, and everything else require specialized cross-compilers and development environments. No, developing for individual platforms isn’t the answer. It just turns everything into a development and maintenance nightmare, one that is ridiculously expensive from a financial, time, and resource perspective. What I need is something that works everywhere, regardless of platform, using resources common to all devices out there.
And the answer, my friend, is the same as it is on the desktop: the Web browser.
What piece of software do all the nifty little gadgets listed above have in common? A Web browser, of course. Some make it the core of everything the device does, like in webOS and to some extent the iPhone. To others, it’s just another app available among many. But even the most rudimentary phones have simple browsers these days, enough to grab small snippets of HTML and display it competently. Even my Treo, which most iPhone users would likely scoff at, allows me to do the odd bit of online banking, news reading, and forum checking. While no single mobile platform is ubiquitous, the Web browser itself comes alarmingly close.
So I’m happy to announce the creation of GPF Mobile, the official mobile-optimized version of the GPF site. There’s nothing special to learn or type in; just visit the main GPF site at the usual URL and it will detect your mobile device and bounce it to the mobile site seamlessly. With the exception of one or two multimedia-rich updates, you can read the entire comic archive, browse the News archive, read the forum, or search the wiki. If you are a Premium subscriber, you can do all of this ad free, as well as get mobile access to the Jeff’s Sketchbook and Rumor Mill archives. The entire mobile site is specially optimized to minimize clutter and trim bandwidth, so it loads fast and doesn’t break your data plan. But if you have a smartphone with a bit more horsepower and a fatter pipe, switching to the “full” site is as simple as a few extra clicks. Just use our site to set a cookie (you choose its duration) and you’ll have access to the full site for as long as you like. I’ve been using the mobile site myself for months now, especially to keep track of the forum while I’m on the road, and it’s been beta-tested by a number of hand-picked Faulties. It’s not necessarily pretty (in fact, it’s downright Spartan), but it does let you get your GPF fix on the go.
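For the technically curious, the detection itself is just garden-variety user-agent sniffing with a cookie override. GPF isn’t built on .NET, so consider this only the general shape of the logic, sketched in C# with a made-up cookie name and mobile URL:

    using System;
    using System.Web;

    public class MobileRedirect
    {
        public static void Check(HttpRequest request, HttpResponse response)
        {
            string ua = request.UserAgent ?? String.Empty;
            // If the visitor set the (hypothetical) override cookie,
            // honor their choice and serve the full site.
            bool wantsFull = request.Cookies["fullsite"] != null;
            bool looksMobile = ua.Contains("BlackBerry") || ua.Contains("Android")
                || ua.Contains("iPhone") || ua.Contains("Symbian");
            if (looksMobile && !wantsFull)
                response.Redirect("http://m.example.com/");
        }
    }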
Best of all, it works with BlackBerries, Android, webOS, Palm OS, Symbian, Windows Mobile, and… yes, folks, wait for it… the iPhone. I guarantee that bookmark will take up less valuable storage space than some bloated, unnecessary “app”.
Recently, our family took a long, hard look at some stock options my wife had been sitting on for a while and discovered that, even in the current questionable economic climate, these options were looking pretty good. Well, a bit better than just “pretty good”. How about we say, “even after taxes, ‘pretty good’ still looks like an understatement”. After agonizing for a while over whether we should pull them all now or wait for the chance for the stock to go up even further, we decided to pull the trigger and take them all at once. After immediately moving the money to the savings account (where it will earn the most interest while still remaining liquid), we sat down and figured out how to slice up our piping hot and fresh money pie. Healthy chunks have gone or will go into numerous investments, of course, including the boy’s college fund and both long and short-term investments with decent returns. But we also wanted to keep some of that for ourselves, just to have a little fun. We’re planning on getting Ben a nice play set next spring or summer, and earmarked some to buy a few “toys” for ourselves.
The biggest “toys” are a new 55″ (139.7 cm for you metric-heads) LED LCD high-definition television, wall mounted, and a Blu-ray capable home theater system. Let me tell you folks, I was one of those people skeptical of the “high definition” craze when I had no basis of comparison. But after watching good ol’ standard DVDs on this thing and comparing them to what we got on our old 57″ (144.78 cm) projection TV, the difference is amazing. And that’s with “standard” definition DVDs! In fact, I don’t think we’ve even played an actual Blu-ray disc in this thing yet. And while surround sound is generally relegated to gimmick status in my book, I will admit that at times it’s a pretty good gimmick. I only wish now I actually had time to watch anything.
But none of that is the point of this post.
Rather, this is about the unchecked proliferation of the ubiquitous remote control. You know what I’m talking about. Every A/V device comes with one, and no matter what the manual tells you, programming it to control your other devices inevitably fails. Either one device partially works but the rest don’t, or there are one or two critical buttons that you absolutely need that never get mapped, or your device x from manufacturer y is not supported by the remote for device a from manufacturer b. So you end up with three or more remote controls sitting on the arm of your couch, each dedicated to one device and only halfheartedly supporting one or more others, if you’re lucky. You might be able to use the DVD player remote to turn on the TV and control the volume, but you have to switch back to the TV remote to get the aspect ratio right or switch the input mode.
Our recent purchase made our ever-breeding collection of remotes even worse. We were fortunate enough that the Tivo remote fully replaced the cable box remote (since the Tivo controls the cable box anyway), but now we were stuck with the Tivo, the TV, the home theater, and the old five-disc DVD player (kept in the loop mostly for its multi-disc capacity), all leaving remotes on the couch. (After about ten seconds of thought, we opted to retire the old VCR completely, eliminating a potential fifth remote.) Turning things on or switching activities required the “remote shuffle”, switching from one device to another to get everything just right. Worst of all, many times there were only a handful of buttons on each remote that were really needed for everyday use, meaning a lot of space, plastic, and silicon was being wasted.
Like any good geek, I thought that there had to be a better way. Larry Wall‘s first and second virtues of a great programmer are laziness and impatience, and I have both in spades. (Hubris, the third virtue, is something I struggle with as I have a chronic case of humility.) If only there were a way for me to consolidate all those useless remotes into one, a single device that would let me push a single button and have everything just do what it needed to do: turn on what needed to be on and only those devices, put the TV and home theater on the right inputs, adjust settings for a device for one activity and then again when the activity changes, and make sure everything gets turned off when we’re heading out the door. I wanted something “scriptable”, something that with one button press would send off a chain of commands and “just do it”. Yes, there are “universal” remotes with macro languages out there that you can program to do just that. But I’m lazy (virtue #1); I wouldn’t mind a good starting point where most of the work is already done, and I don’t want to exert any more effort than I have to in order to make everything “just right”.
If you hadn’t guessed, we eventually purchased a Logitech Harmony remote, a Harmony One to be exact. For those whose definition of a “universal remote” consists of a $25-50 cheap plastic brick you can pick up at any drug store that “learns” by you pressing buttons on the old remote while pointing it at the new one, the Harmony line might seem like overkill. With prices starting around $100 and skyrocketing from there, Harmony remotes aren’t cheap. But for the premium price you get a ton of premium features that quickly make you wonder why you ever put up with the remote shuffle in the first place.
Harmony remotes are driven primarily by a single online database of devices. Using the Harmony software, you enter all the model numbers and it will look them up in the database, returning a pretty good mapping for all their remote keys. The database is pretty extensive, with tens of thousands of devices from thousands of manufacturers. Even our brand new TV (just released when we purchased it according to the manufacturer’s website) and home theater (which still doesn’t show up on their website) were there, ready to go. Best of all, all of the Harmony remotes share the same database, so the cheapest of the line can control the exact same devices as the most expensive. Of course, sometimes the database entries are inaccurate or incomplete since they are often populated by other users. However, Harmony remotes can learn just like the cheap universal remotes can. I’ve been able to add a number of buttons from our home theater remote that were missed in the database import, and hopefully others will be able to share that effort.
To control your devices, Harmony uses an “activity” based process that may take a little bit of getting used to. You first need to decide what activities you plan to perform with your devices, such as “watch TV”, “watch DVD”, “play a game console”, etc. Once you have this list, you select what devices are needed for each activity and either let the software map the buttons for you or manually map them yourself. For example, our “watch TV” activity involves the TV, home theater (for audio), and Tivo box (which controls the cable). Many of the buttons on the remote map to the Tivo’s controls, so that’s how we switch channels, control video flow, etc. The volume and mute buttons are mapped to the home theater (the TV speakers are turned off). For the Harmony One, old remote buttons that don’t have an easy mapping (like the infamous Tivo “thumbs up” and “thumbs down” buttons) are mapped to “soft buttons” on an LCD touch screen; cheaper Harmony remotes have a simpler text LCD with hard buttons next to each option. Default mappings are easy enough to modify with the Harmony software. When the activity is started, all the relevant devices are turned on if necessary and are switched to whatever inputs and settings you specify. While you remain in that activity, the buttons remain mapped to where you set them. At any time you can switch to a “device mode” that controls a single device exclusively, mapping all the buttons to control that one device. Once you’re done with that, you can simply switch back to activity mode to restore the activity mappings. When you finish the activity or switch to a different one, devices are turned off and reconfigured as necessary to fit the new role and your button mappings change as appropriate. Hitting the “power” button doesn’t technically turn everything off, but rather ends the current activity and turns off all the devices currently in use… which is often the same thing.
The Harmony is not without its quirks, of course. As previously mentioned, the database isn’t always accurate and most likely you’ll need to learn a few commands and remap a few keys. This is simple enough and just requires a few minutes of button pressing and a sync with your computer. Initial setup isn’t for the faint of heart, so non-techies may want their favorite tech-savvy relative to set things up for them at first. After that, though, using the remote can be very intuitive if your key mappings are set up correctly. Although technically not the Harmony’s fault, some devices still require you to tweak things after an activity has started. For example, our TV does not provide a direct way to specify the aspect ratio (i.e. you have to cycle through the options by repeatedly pressing a single button), so that can’t be scripted as part of the activity. However, it’s easy enough to map the TV aspect ratio button to a soft button in any activity, making that function readily available at all times. It obviously can’t control hardware switches—for example, our five-disc DVD and the Wii share the same component video input on the TV, so a small splitter box combines both streams into one—so you may still have to walk up and flip a switch every now and then. And while it often does a good enough job of it, the remote occasionally forgets what state a certain device is in and turns it off when it’s supposed to be turning it on. That, however, is simple enough to fix using an integrated help function. (You can’t just go in and turn the device back on in device mode, though; you have to use the help so the device state manager knows that the device is supposed to be on.)
So now we have a single remote controlling, either directly or indirectly, five A/V devices. We’ve only pulled out the old remotes once or twice, primarily to learn the missing keys and add them to the Harmony database. We feel more confident that we can hand this remote to one of our less tech-savvy relatives and not come back to find infinite picture-in-picture nesting going on and all the colors shifted blue. I definitely think this thing was a worthwhile purchase for us, and I’d heartily recommend it for anyone tired of doing the remote shuffle.
(I should add the disclaimer, of course, that I was not paid for this “endorsement”, nor was I given any promotions, samples, or “freebies” in return for a favorable review. No, I’m just a happy customer who paid full price for a nifty device that I really enjoy and I want to share that enjoyment with others. Make of that claim anything you see fit.)
I have a bit of a quandary that’s got me effectively stuck on a task at my day job. Thus far, Google and every other resource I’ve searched have been little help. In the unlikely event somebody out there who reads this blog (or at least gets the update notices via RSS, Twitter, or the other various feeds) can help me, I’m going to throw this out and hope it garners some feedback.
I’ll try to keep this as short as possible. Our production Web site, built in ASP.NET and C# and running in IIS on Windows Server 2003, recently added authentication via client certificates stored on users’ smart cards. We allow users to attach their smart card certificates to their existing account, then authenticate them by verifying their certificate, looking up the user account by that certificate’s fingerprint, and loading their profile. These certificates are signed by a trusted third-party certificate authority (CA) owned by the client and every morning we download the latest certificate revocation lists (CRLs) so we can reject certificates as they are revoked by the CA. My download process is working fine and dandy, so that’s not the problem; neither is the actual import process, as I know the command line options for Microsoft’s certutil command that will import the CRLs.
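(For context, the certificate-to-account lookup itself is plain ASP.NET; it looks roughly like the following, with every name here being illustrative rather than our actual code.)

    using System;
    using System.Security.Cryptography.X509Certificates;
    using System.Web;

    public partial class SecureLogin : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            HttpClientCertificate clientCert = Request.ClientCertificate;
            if (clientCert.IsPresent && clientCert.IsValid)
            {
                // Wrap the raw bytes to get at the thumbprint (fingerprint),
                // then map it to an existing account. The helper below is
                // hypothetical.
                X509Certificate2 cert =
                    new X509Certificate2(clientCert.Certificate);
                LoadUserProfileByThumbprint(cert.Thumbprint);
            }
        }

        private void LoadUserProfileByThumbprint(string thumbprint)
        {
            // Look up the user account by certificate fingerprint here.
        }
    }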
My problem stems from removing the old CRLs, which so far I haven’t been able to accomplish without going into the Microsoft Management Console and clicking through the GUI. We’ve had problems with the size of the certificate store, as the CRLs tend to be very large, and we have to remove the old ones before the new ones can be imported. I’ve tried the few suggestions I’ve found online, but none of them seem to work; for example, there’s supposedly a command-line switch for certutil that overwrites the old CRL with the new one, but it just imports the new one and leaves the old one in place. We want to automate this process into a scheduled task, so it can run early in the morning when our users aren’t on the system and without human intervention.
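For reference, the import half that already works amounts to shelling out to certutil for each downloaded CRL. The real thing is a plain batch script, but here’s the same idea sketched in C# (the directory path is hypothetical):

    using System.Diagnostics;
    using System.IO;

    class CrlImport
    {
        static void Main()
        {
            // Add each freshly downloaded CRL to the local machine's CA
            // store via certutil. Removing the old ones is the part I
            // haven't solved yet.
            foreach (string crl in Directory.GetFiles(@"C:\crls", "*.crl"))
            {
                ProcessStartInfo psi = new ProcessStartInfo("certutil",
                    "-addstore CA \"" + crl + "\"");
                psi.UseShellExecute = false;
                using (Process p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            }
        }
    }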
Here are the tools available to me:

- certutil (part of Microsoft’s Certificate Services package)

I’ll tell you, I’m pretty frustrated and exhausted by this task. It’s not that I can’t do the research and figure it out for myself; I have done the research, and everything I’ve read applies to certificates and not CRLs, and they’re not exactly a direct swap in usage. I’d prefer not to provide much more detail than this for security reasons.
For the time being, I’ve been manually removing the old CRLs through MMC and then running a batch script to do the import every morning as my first task. That’s working fine for now, when I’m in the office every morning, but I’ll be taking some vacation time soon that will start to cause problems. I swear, if this were OpenSSL and Apache on Linux, I’d have this solved in a heartbeat (or at least an afternoon). If you have any suggestions, please feel free to post a comment or shoot me a direct e-mail at the usual address.
Just posting a quick note to let you guys know I’ve bumped good ol’ WinHasher to version 1.5. This is both a bug and feature release, so both of you using it will probably want to upgrade. Here’s a quick list of the changes:
- The 2GB bug: It turns out the System.IO.FileStream object uses a 64-bit integer for its Length attribute, meaning this was totally my mistake, not Microsoft’s. The end result here is that WinHasher would crash on files larger than 2GB, since it would end up trying to calculate its percent complete on an overflowed negative value. I’ve updated the code so that the single-file length calculations also use 64-bit integers, and now I can finally validate that Fedora 11 DVD ISO download. (See the quick sketch at the end of this post for the gist of the fix.) Note that there’s still a hard cap at 8.05EB, whether you’re hashing a single file or you sum up multiple files together. While it’s possible to bump this up to an unsigned 64-bit integer and go for even more ridiculously large numbers, I seriously doubt anyone is going to be running a SHA-1 hash that large any time soon.
- Copyable hash results: When displaying a computed hash, WinHasher previously used a simple MessageBox object, meaning the hash was displayed in a read-only form that couldn’t be copied and pasted elsewhere to be compared. (It’s much easier to copy and paste two hashes into a text editor, for example, and visually scan the two lines for differences.) Well, I wasn’t the only one to find this annoying. WinHasher user Todd Henry had issues with this too and suggested that I either put the hash result in a text box that could be copied and pasted elsewhere, or add a box where an externally produced hash (say from a Web site) could be pasted into the dialog and have WinHasher compare them. Interestingly enough, I was already planning to make that change when he wrote me, and now it’s there. Once WinHasher is done, it will display a new result dialog with both a copyable hash result field and a new “compare to” field that will take an external hash string and tell you if it matches or not.

I realized after I updated the files and the site that I forgot to make any changes to the documentation to reflect these updates. Oh, well. I don’t think they’re major enough to sweat over, so I’ll leave those alone for now and make sure they get updated by the next release.
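As promised, here’s a quick sketch of the gist of the 2GB fix (not the literal WinHasher source):

    using System.IO;

    class ProgressMath
    {
        // Cramming FileStream.Length into a 32-bit int overflows to a
        // negative number past 2GB. Length is already a long (Int64),
        // so the fix is simply keeping the math 64-bit throughout.
        static int PercentComplete(FileStream fs)
        {
            long processed = fs.Position;
            long total = fs.Length;
            return (int)(processed * 100L / total);
        }
    }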
I hate breaking the drought of (real, non-Twitter summary) updates with a gripe fest, but this has been bothering me for a couple weeks and I just wanted to get this off my chest. If you don’t listen to the podcast “This Week in Tech” (or TWiT), feel free to ignore this post. Of course, if you’ve considered listening but haven’t gotten around to it yet, this might be informative enough to help you reconsider, but I’ll leave that up to you to decide.
I’ve been a fan of Leo Laporte for a number of years, ever since we first discovered TechTV (before it died a miserable death at the hands of Paul Allen and G4). “The Screen Savers” was one of our favorite shows and became a nightly staple in our house for several years. When Laporte left TechTV and “The Screen Savers” was canceled (or, more properly, devolved into “Attack of the Show”), we had a small sense of loss. The show was entertaining and informative, and a big part of the entertainment value was Laporte’s friendliness and personality. The network was never the same after that, and now we largely ignore G4’s existence on our cable listings. (“X-Play”, the only remaining show from TechTV’s original line-up, is the only thing still worth watching on G4, and even then it’s not nearly as good as it used to be.)
When I discovered a year or so ago that Laporte had gone on to create his own podcasting network, I was thrilled. Several old TechTV alumni were among the guests and cohosts, and the selection of podcasts has been diverse, engaging, and ever expanding. The flagship of the network, of course, is TWiT, a weekly roundtable of tech industry players and journalists discussing the latest tech news. The show is often wild and unpredictable, spiraling down rabbit holes and meandering in bizarre directions, but that’s often part of the fun of the show. The bottom line, though, was that the show was about tech news, and it and Slashdot were two of my main ways of keeping on top of what’s been happening in the tech world.
Something happened in recent weeks to change that, however. Since I can’t follow the live streams (both for practical and technical reasons), I can only guess the sequence of events based on what’s been released in the podcasts or written by others after the fact. But from what I can tell, the TWiTters have been hosting a live wine tasting show right before TWiT starts recording on Sundays. Now I’m a teetotaler myself, but I won’t condemn anyone who wants to imbibe their spirits. What self-destructive behavior they engage in on their own time is up to them. As long as no one’s forcing me or anyone else to participate and nobody’s operating motorized vehicles, they are free to destroy their own livers to their hearts’ content. But what’s really annoying is that once TWiT starts taping, everyone in the studio is already tipsy, if not totally soused. The wine continues to flow as the show progresses, and what follows is a train wreck of drunken giddiness and squabbling that’s only really entertaining to those who are equally inebriated. To top everything off, from what I’ve read, the final podcast (what I’m actually hearing and complaining about) is heavily edited before it’s released; the live feed is even worse.
The latest episode is a perfect example. Subtitled “Corked” (which is appropriate; I originally intended to say “ironically” but I’m pretty sure the choice of subtitle was intentional), the show is a disaster of panelists talking on top of each other about nothing worth talking about. Leo, who is usually an excellent host and often does a great job of keeping everyone else in line, is interrupting his guests and spinning things even further out of control. John C. Dvorak, whose input I always find amusing and often enlightening, is equally rude and—from what I’ve read from those who saw the live feed—apparently egged on the other guests to get them even further inebriated. I was originally going to complain that neither of the female guests, Lisa Bettany or Shira Lazar, could manage to finish a sentence before being trampled upon by Leo or Dvorak, but Lazar was just drunk enough to be an unstoppable stampede of rambling who couldn’t let a topic go. As previously stated, one of the appeals of TWiT is its unpredictable nature, but this show was so far off the beaten path that there was no path left to beat. Somewhere, deep inside the tangled mess of four people talking at once about Twitter drinking games, is only the vaguest hint of tech news, a thin whiff of the scent of information that rapidly gets swept away by the torrent of uselessness that follows. And for the cherry on top, several times during the show Leo pauses to read complaints from the live chat room about how terrible the show has become… and makes fun of them. This following a single glimmer of insightfulness in a discussion about how important the community has become in modern online media.
Now, I’ve been a webcartoonist for a decade, so I’m no stranger to the vast swing between amateurism and professionalism when it comes to online media. Before there were basement-dwelling podcasters, there were basement-dwelling webcartoonists, and you can tell in both cases which ones take their craft seriously and which just throw things out without any care for quality. I consider Laporte an accomplished pro, and virtually every other show on his network stands as shining proof of that. “Security Now!” is brilliantly informative (and my personal favorite), “FLOSS Weekly” (when it updates) shines the spotlight on some great open source projects, and “Jumping Monkeys” (before it went on indefinite hiatus) was a great parenting podcast for tech-savvy parental units. In all three of these examples, Leo is an excellent cohost to the show’s main star, showing his versatility with rare skill. He asks the questions many of us are thinking, assuming the role of the everyman so the expert can answer to the fullest. The TWiT Network as a whole is an example that many podcasters should look up to, a yardstick of professionalism by which all others should be compared.
All except for TWiT itself. Leo, what the heck happened?
I won’t stop listening to “Security Now!” or “FLOSS Weekly”, both of which I enjoy immensely. If “Jumping Monkeys” ever comes back, I’ll resubscribe in a heartbeat. My wife loves “net@nite”, “The Daily Giz Whiz”, and “Munchcast” and keeps bugging me to listen to them. But TWiT… oh, TWiT, how the mighty have fallen. What was arguably the best show on the network is now the worst.
What’s incredibly ironic is that in a recent episode of “net@nite” (unfortunately, I don’t know which, but my wife thinks it’s either #85 or #86), Leo chastised Kevin Rose for a drunken comment he made on-air that caused a bit of an Internet stir. He commented that in today’s world of streaming media, celebrities have to assume that they’re always on the air and that anything and everything they do will be rebroadcast repeatedly, even stating that it’s a big mistake to be drunk while recording. Maybe it’s time Leo listened to his own advice.
I’m still not sure whether or not I’m dropping TWiT now or if I’ll give it one last chance. Leo posted on FriendFeed that the “message [was] received” and, based on overwhelmingly negative feedback, there will be “a little less wine and a little more tech in future TWiTs”. We’ll see. What’s ironic is that it was Audible.com‘s sponsorship of TWiT that turned me on to audio books, and now there’s a good chance that audio books will completely replace TWiT during my long, boring commute each morning. It’s Leo’s loss, not mine.
This week a couple of errors were reported in the custom CMS application I built at work a couple of years ago. I haven’t touched this code in at least a year, so it took me a bit to swap some mental virtual memory and recall how everything worked. I’m not sure if these “bugs” were something new that manifested after a recent platform upgrade or design flaws that had been there since the beginning, only to be recently noticed. None of that really matters for the sake of this post, however. Suffice it to say there were two problems, one of which was likely to be entirely my fault but relatively easy to fix with a little bit of C# hacking.
The other problem was a bit obscure. The application is built in ASP.NET 2.0 and written entirely in C#. It also makes use of Microsoft‘s AJAX Toolkit for ASP.NET to “pretty up” some of the interface interactions. Unfortunately, one particular user began to experience problems with the system recently. Since she’s the project manager, needless to say the problem was escalated to top priority with little to no delay. To make things more difficult, the problem was especially cryptic. In true Microsoft fashion, the pop-up JavaScript error dialog offered little to no useful information:
Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned from the server was: 500
Google, of course, is my friend and found no shortage of pages where this turned up. The odd thing was that none of the purported causes for the error were anything that I was using.
After much searching, I finally happened upon this site. It seems Ted Jardine hit the same problem I did. He had narrowed it down to something to do with the .NET session, which he wasn’t really using but I was using extensively. What I found most interesting was his solution:
So, based on one of the comments in one of the above posts, even though I’m not touching session on one of the problem pages, I tried a hack in one of the problem page’s Page_Load:
Session[“FixAJAXSysBug”] = true;
And lo and behold, we’re good to go!
I followed the various links he provided—as well as Googling for “FixAJAXSysBug” itself—and found lots more anecdotal evidence to support its usefulness. I applied this “fix” to the common header of the application to make sure it took effect everywhere and, so far, all reports seem to indicate its success.
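In my case, that boiled down to a one-liner in the shared code every page runs through; something like this, with the base-page name being purely illustrative:

    using System;

    public class CommonBasePage : System.Web.UI.Page
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            // Touch the session early so ASP.NET AJAX partial postbacks
            // always see an initialized session.
            Session["FixAJAXSysBug"] = true;
        }
    }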
Needless to say, I was instantly reminded of this GPF strip from the crossover with Help Desk. I can’t remember now if that joke was my idea or Chris Wright’s. It doesn’t really matter, though… its audacity is as brilliant now as it was eight years ago. The idea of setting a simple Boolean flag to “turn off bugs” is something I will always find hilarious.
Now if only all Microsoft bugs were so easy to fix….