The value of automated UI testing

During my recent job hunt, test automation came up in practically every interview, typically as some broad question like, "So, how would you go about implementing test automation?"
My standard answer is that you generally get the best bang for your buck the deeper in your code you test. As an example, I contrast the maintenance of unit tests (deep end) with that of automated UI tests: you have to update a UI test almost any time you make a change to the UI, but you only have to update a unit test if you change an existing method, and the UI typically changes much more frequently than individual classes. Furthermore, UI changes frequently necessitate changing a whole string of user actions in your automated tests, whereas unit tests, by definition, are isolated and therefore typically much quicker to update.
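To make the contrast concrete, here's a minimal sketch of the two kinds of tests (all names are hypothetical; it uses pytest-style tests and assumes a Selenium-style browser fixture):

```python
def compute_discount(price, rate):
    """Toy function under test."""
    return price * (1 - rate)

def test_compute_discount():
    # Unit test (deep end): breaks only if compute_discount itself changes.
    assert compute_discount(price=100, rate=0.1) == 90

def test_discount_via_ui(browser):
    # UI test (shallow end): breaks whenever the page flow or element IDs
    # change, even if the underlying discount logic is untouched.
    browser.get("https://example.test/cart")
    browser.find_element("id", "promo-code").send_keys("SAVE10")
    browser.find_element("id", "apply-promo").click()
    assert "$90.00" in browser.find_element("id", "total").text
```

If the promo flow gains an extra confirmation step, every UI test that walks through it has to change; the unit test doesn't care.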
This morning, I ran across a new blog post by B. J. Rollison, a.k.a. I.M. Testy, titled "UI Automation Out of Control," in which he lists some of the shortcomings of automated UI tests and some other ways you should try testing before you resort to automated UI testing. It's a good read.

My job application workflow

When I was job hunting this year, I developed the following order of preference for getting my resume in front of a hiring manager:

  1. Have a friend or FOAF who works at the hiring company hand my resume directly to the hiring manager
  2. Have a trusted recruiter represent me
  3. Have the recruiter who advertised the position represent me
  4. Apply blindly myself

Yesterday, I tried to explain my process to a friend, and I realized what I really needed was to document my workflow for applying for jobs, so I whipped it up in Visio. Click on the image below to see the full-size version:
job_application_workflow.png

Job hunting tip: Beware of HCI International

NOTE: The original version of this post contained much stronger accusatory language. In October 2010, I was contacted by someone who claimed to be associated with the companies in this post. He was very upset at my accusation that the companies were running a scam. Although I have no intention of simply buckling to his pressure, I re-read the post and realized that my accusations were not based on my direct experience. All the information I've gathered leads me to believe strongly that these people misrepresent themselves to scam money out of desperate job hunters, but I have no direct evidence of it. Therefore, I changed the language in this post to reflect that distinction.

When I was a few weeks into my recent job hunt, I received a call from HCI International that went something like this:

Caller: Hi, I'm so-and-so with HCI International. Our vice president, So-and-so, would like to meet with you to see if we can help you with your job search.
Me: Great; he can call me at this number at any time, or we can schedule a time to talk on the phone.
Caller: The vice president can only meet you in person at our offices.
Me: Well, does the vice president have a particular opening in mind that I might be a good fit for, or is this just a general intake interview?
Caller: The vice president can tell you that when you come in for your interview.

The direction of this conversation was so different from the conversations I had had with other recruiters that warning bells were going off. Also, I figure recruiters are opportunists: if they had an opening for which they thought I was really a good match, they would be much more accommodating to get me to work with them. I just ended the call at that point and blew them off.
After the call, I did a little research and found some pretty damning comments about HCI on yelp.com: here and here. It seems they charge the candidate a large retainer fee (apparently several thousand dollars) to try to match them with jobs or provide career coaching services or something:

This is basically a company that attempts to charge you a ‘retainer’ fee after meeting with you three or more times. I went for the initial ‘interview’ and noticed that the agent i spoke to, a person by the name of Linda Whitney (with the title of vice-president) did not even look at my resume. The initial meeting was less than 10 minutes long. I noted that there were a few clients in a classroom environment undergoing improving interview techniques. I was scheduled for a followup “second level” interview the following week. The next day I called, deciding to corner Ms. Whitney about the retainer fee. I asked how much it was, she refused to answer citing that each client is individually researched. I asked for a ballpark estimate, saying I needed to know if this service was going to be in the hundreds, the thousands, or over ten thousand. Upon being pressed she informed me that maybe HCI and myself were not a good fit. An interesting thing to say to a client and definitely a red flag, after all what’s wrong with asking for a ballpark estimate of what you’re going to be charged for a service? All in all I got the impression that they were using Amway like techniques from her demeanor, and use of the phrase “fair enough?” which she used over and over again. I’ve attended aggressive sales training technique seminars and recognized it for what it was.

Perhaps HCI International isn't a scam; maybe they do offer career counseling services that some people find useful. However, their tactics for getting you in the door are certainly deceptive at best, and according to what I've read online from people who've been through their sales process, they use high-pressure sales tactics. I'm glad I smelled a skunk in the initial telephone call.
In high tech at least (the industry I'm familiar with), you should absolutely never have to pay a recruiter to find you a job. And if you need help with job hunting or 'career counseling', I think there are plenty of reputable firms who can sell you specific services (e.g., resume review, interview techniques) a la carte and at much more reasonable prices than HCI International seems to be charging.

One unanticipated value of blogging

Over at Snarkmarket, I ran across this thought today:

I always tell people that blogging is useful, even if nobody’s reading, because it forces you to have an opinion on things. You don’t realize how blankly you experience most of the stuff you read every day until you force yourself to say something—even something very simple—about it.

When I was job hunting earlier this year, I benefited greatly from this blog: I had given far more thought to many of the issues that came up in interviews than I had the last time I was interviewing, before I started this blog.

Unnecessary abstraction

At my new job, I’m currently putting together a defect management process, something I’ve done at pretty much every company I’ve ever worked at. Part of the process includes defining data fields and values associated with defect reports.
A typical defect tracking system has a combo box field named 'Severity' with the values 'high', 'medium', and 'low'.
I wish I had a dime for every time I've answered the question, "So, what's the difference again between 'severity' and 'priority'?" or "What's the difference between a high- and a medium-severity bug?"
Many companies I’ve worked at have tried to solve this problem by creating documentation that defines the fields and values. This type of documentation keeps me from having to repeat myself–I can just refer the person to the documentation–but it does not really address the source of the problem: both the field name and its values are abstractions of real-world data.
Over the years, I've begun to propose that we simply give fields and their values names that succinctly reflect their concrete meaning. Granted, this is easier with field names than with values, as the values tend to require more explanation.
'Severity' would then become 'Customer severity' or, even better, 'Impact on user', with the following values:

  • Critical functionality broken, no workaround
  • Non-critical functionality broken, or critical with workaround
  • Minor functional defect
  • Cosmetic or minor usability issue

Granted, those long values make the UI of your defect management system and your reports a little messy, but in my experience, it’s a worthy sacrifice for the lack of ambiguity that the verbiage provides.
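For illustration, here's a minimal sketch of how such a field might be modeled in code (the enum name and member names are my own, not from any particular defect tracker):

```python
from enum import Enum

class ImpactOnUser(Enum):
    """Defect 'severity' values named for their concrete, real-world meaning."""
    CRITICAL_NO_WORKAROUND = "Critical functionality broken, no workaround"
    BROKEN_WITH_WORKAROUND = "Non-critical functionality broken, or critical with workaround"
    MINOR_FUNCTIONAL = "Minor functional defect"
    COSMETIC = "Cosmetic or minor usability issue"

# The UI shows the long, unambiguous label; reports can use either form.
print(ImpactOnUser.CRITICAL_NO_WORKAROUND.value)
```

Whether it's an enum in code or a combo box in a tracker, the point is the same: the value itself carries its definition, so nobody has to go look one up.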
An aside: in that example, I'm still trying to force my values to fit another common convention: hierarchical levels of severity. But if you think about it, why should I force "Non-critical functionality broken" and "Critical functionality broken, with a workaround" into one value? Why not break them into separate values without worrying about whether one is 'more severe' than the other? I'll save that discussion for another blog post.
My question to the millions of people who read this blog: why do we have these conventions regarding abstractions and hierarchical values in the first place? How did they come about? I have my opinions, but I’d like to hear yours.

Reaching the unreachable

The other day, I ran across this blog post by Jeff Atwood in which he argues, essentially, that those who would benefit most from learning more about their profession (e.g., reading programming books or blogs, studying process methods, etc.) are most often precisely those who are least likely to seek out such education on their own.
At the end of the article, Jeff concludes, “All those incredibly detailed rules, guidelines, methodologies, and principles? YAGNI [You Aren’t Gonna Need It]. If it can’t be explained on a single double-spaced sheet of paper, it’s a waste of your time.”
This is where agile differs from other methodologies, and that difference is largely responsible for its success: you can get across the basics with four values, twelve principles based on those values, and a handful of intentionally simple practices (e.g., Scrum, XP).

The $23,148,855,308,184,500 bug

The story of Visa charging a number of customers $23,148,855,308,184,500 has been all over the news the last couple of days. Slashdot commenter rickb928 provides a plausible explanation for the error.

I work in this industry. The only novelty here is that the error got into production, and was not caught and corrected before it went that far.
Submitters send files to processors which are supposed to be formatted according to specifications.
Note I wrote ‘supposed to be’.
Some submitters do, from time to time, change their code, and sometimes they get it wrong. For instance padding a field with spaces instead of zeros. Woopsie…!
Seems that’s what happened here. Sounds like a hex or dec field got padded with hex 20, and boom.
This is annoying, especially when the processor gets to help correct the overwhelming number of errors, and then tries to explain that it wasn’t their fault. Plenty of blame to go around with this one.
And then explains why they don’t both validate/sanitize input, and test for at least some reasonable maximum value in the transaction amount. A max amount of $10,000,000 would have fixed this. That and an obvious lapse in testing. This is what keeps my bosses awake sometimes, fearing they will end up on the front page of the fishwrap looking stupid ’cause their overworked minions screwed something up, or didn’t check, or didn’t test very well. I love one of the guys we have testing. He’s insufferable, and he catches genuine show-stoppers on a regular basis. They can’t pay him what he’s been worth, literally $millions, just in avoiding downtime and re-working code that went too far down the wrong path.
Believe me, this is in some ways preferable to getting files with one byte wrong that doesn’t show up for a month, or sending the wrong data format (hex instead of packed binary or EBCDIC, for instance) and crashing the process completely. Please, I know data should never IPL a system. Tell it to the architects, please. As if they don’t know now, after the one crash…
If you knew what I know, you’d chuckle and share this story with some of your buddies in development and certification.
And pray a little.
At least it didn’t overbill the cardholders by $.08/transaction. That would suck. This is easy by comparison. Just fix the report data. Piece of cake. Evening’s worth of coding and slam it out in off-peak time. Hahahahaha!
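For the curious: rickb928's "padded with hex 20" theory checks out arithmetically. Here's my own back-of-the-envelope sketch (assuming an 8-byte binary amount field denominated in cents), which lands within a rounding error of the reported charge:

```python
# An 8-byte amount field accidentally padded with ASCII spaces (0x20)
# instead of zeros, then read back as a big-endian binary integer of cents.
padded = b"\x20" * 8
amount_cents = int.from_bytes(padded, "big")

print(hex(amount_cents))            # 0x2020202020202020
dollars, cents = divmod(amount_cents, 100)
print(f"${dollars:,}.{cents:02d}")  # $23,148,855,308,184,535.36
# ...matching the reported $23,148,855,308,184,500.00 charge
# to 15 significant digits.

# The sanity check rickb928 suggests would have caught it trivially:
MAX_AMOUNT_CENTS = 10_000_000 * 100  # his proposed $10,000,000 cap
if amount_cents > MAX_AMOUNT_CENTS:
    print("transaction rejected: amount exceeds maximum")
```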

That’s quite a missed test case!