Since Mike has moved on and Scott and I have been looking for a replacement, we have also been closely evaluating our current business and technical model. At this time, we no longer believe it to be feasible within our current level of resources to pursue resume synchronization; we will be changing our focus and moving down the ATS route.
Be warned, this is a bit of a long post. I’m writing it to explain what is happening, why it is happening and what to expect in the future both to everyone already involved with the project and to the other people who had signed up to try the service but had not yet gained access.
To recap, the original idea behind RE was to provide a resume synchronization service to job seekers, alleviating the need to re-enter the same information on site after site. We were planning on doing this in one of two ways:
- The API provided by the site
- Screen scraping the sites and roboting the forms
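To illustrate the second approach, here is a minimal sketch of form roboting, using only the Python standard library. The page markup and field names (`full_name`, `current_title`, etc.) are hypothetical stand-ins, not any real job site's schema: the idea is to parse whatever input fields the site's profile form exposes and map the candidate's resume data onto them.

```python
from html.parser import HTMLParser

class FormFieldParser(HTMLParser):
    """Collects the input field names from a profile-edit page."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            name = attrs.get("name")
            if name and attrs.get("type") != "submit":
                self.fields.append(name)

# Stand-in for a fetched profile-edit page (hypothetical field names).
PAGE = """
<form action="/profile/update" method="post">
  <input type="text" name="full_name">
  <input type="text" name="current_title">
  <input type="text" name="summary">
  <input type="submit" value="Save">
</form>
"""

def build_payload(page_html, resume):
    """Map resume data onto whatever fields the site's form exposes."""
    parser = FormFieldParser()
    parser.feed(page_html)
    return {field: resume.get(field, "") for field in parser.fields}

resume = {"full_name": "Jane Doe", "current_title": "Engineer", "summary": "10 years..."}
payload = build_payload(PAGE, resume)
# payload would then be POSTed to the form's action URL under the user's session
```

The hard part, of course, is not building the payload but keeping the field mapping current as each site's markup changes.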
The service was to be free to job seekers. As you can probably imagine, we had them lined up at the door. Seriously; we signed up a few hundred people on a simple landing page strictly from word of mouth (mostly me going to DC area networking events) and a very limited ad campaign. Even more encouraging, our LinkedIn ad campaign enjoyed a 72.341% conversion rate (as compared to 12.441% from AdWords and 0.000% from Facebook ads). Obviously, the FB part wasn’t so spiffy, but LI was very attractive.
We were planning on charging recruiters for access to the system; we had interest and continue to move along with validating the pricing model — even with our pivot, recruiters and sourcers will remain our primary target demographic.
Proof Of Concept
We enjoyed rudimentary success with the Washington Post interaction, and pseudo-success with LinkedIn. Monster was a loss, start to finish.
The LI API is extremely limited, dealing with only a small portion of the resume data LI captures, and even then strictly one way (out of LI only). Given that LI’s API was insufficient, we retrenched and tried to screen scrape LI instead. We had something that worked for a period of time, and then we ran into a CAPTCHA. Interestingly enough, we only encountered the CAPTCHA when running our scraping program from AWS or Rackspace; I suspect LI has instituted a CAPTCHA challenge for traffic originating from known cloud server farms for just this reason.
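In practice, a scraper run from a cloud IP needs to notice that it has been served a challenge page rather than the content it expected. A crude heuristic sketch follows; the marker strings and the retry/fallback strategy are my assumptions, not LI’s actual markup or behavior:

```python
# Substrings that suggest we got a challenge page instead of content
# (illustrative markers, not any specific site's real markup).
CAPTCHA_MARKERS = ("captcha", "verify you are human", "security check")

def looks_like_captcha(html_text):
    """Heuristic: the page we got back is a challenge, not the profile."""
    lowered = html_text.lower()
    return any(marker in lowered for marker in CAPTCHA_MARKERS)

def scrape_profile(fetch, url, max_retries=2):
    """Fetch a page, retrying a few times before giving up and routing
    the job to a fallback (e.g. the manual processing queue)."""
    for _ in range(max_retries + 1):
        page = fetch(url)
        if not looks_like_captcha(page):
            return page
    raise RuntimeError("CAPTCHA wall at %s; route to manual fallback" % url)
```

Retrying from the same cloud IP rarely helps, which is why the fallback path (a different network origin, or a human in the loop) matters more than the retry count.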
We explored some alternative means, including what Kevin DeWalt calls manulating. While that would have sufficed (barely), and I did find some cost-effective providers in both the Philippines and Bangladesh, it was only workable as a tertiary fallback mechanism, not as a primary means of scalable production.
Actually, we made good progress on screen scraping the Washington Post site. We stopped killing ourselves over it given the problems with LinkedIn & Monster, as a synchronization process linking one site to itself isn’t all that useful.
The kicker of the whole thing is that even if we had been completely successful with all three sites, we would always be at the mercy of the designers of the respective sites we were synchronizing. If they changed one little thing, the screen scraper would require inspection, if not refactoring. And, in today’s world of continuous deployment, they would always be changing something. It’s the epitome of a Red Queen’s Race. Initially, we accepted this risk on two assumptions:
- We would be able to move quickly enough to add sites at a sufficient rate to attract job seekers (and, in turn, recruiters)
- We would be able to demonstrate sufficient traction to acquire funding, which would support hiring enough coders to run that Red Queen’s Race until we had enough momentum we could influence the actions of the synchronization sites
The arrogance in that last statement is palpable, isn’t it? In any case, the amount of work required to maintain even the very limited interfaces to the synchronization sites proved this approach to be too expensive to sustain at any level of scale.
Too Many Sites
We’ve acknowledged this before, but there are a lot of job sites. And it seems like more are popping up every week. Even though our name is “Resume Everywhere,” we were never going to be able to cover every job site everywhere. From our initial estimates, we were going to need a junior- to mid-level screen scrape developer for every 20 sites we signed up. We had some ideas and tools in the pipeline that would have acted as a force multiplier for that type of development, but they hadn’t been built yet. Strike three for synchronization in the scalable business competition.
Pivoting To ATS
A month or so ago, I was in NYC for SourceCon 2011, a conference dedicated to the needs of the sourcing industry. I went to do some customer validation; specifically, would these people be interested in searching the database of resumes collected through the synchronization process, and how much would they be willing to pay for the privilege (the answers were “sure” and “it depends,” respectively). It was a good conference; I met lots of good people, won an iPad and learned quite a bit.
Recruiters & Sourcers Hate Their Current One
One of the things I learned was that nearly all of the people present expressed their intense dislike of their Applicant Tracking System, or ATS. The informal poll taken at the start of the conference revealed that only one — one — person rated their ATS as a 3 on a 1-5 scale; three or four more gave their system a 2 and everyone else gave it a 1 (with many saying they would give lower if they could). This is pretty clearly a market that is being underserved.
Extensive But Partial Domain Knowledge
In essence, an ATS is fundamentally a workflow application in which candidates are routed through a system of decision gates where various people (hiring manager, HR, senior management, etc.) have to approve a candidate for him/her to move forward in the process. Between Scott and myself, we’ve architected, designed, developed, deployed and maintained over 30 different workflow systems for customers such as the US Department of State US-VISIT program and the FAA OE/AAA (Obstruction Evaluation/Airport Airspace Analysis) system, just to name two off the top of my head. We know how to move data around efficiently, and we get scale.
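The decision-gate model described above can be sketched as a tiny state machine. The gate names and approver roles here are illustrative assumptions, not any real ATS schema:

```python
# Ordered gates a candidate must clear; each names the role that approves it
# (hypothetical pipeline for illustration).
GATES = [
    ("phone_screen", "recruiter"),
    ("interview", "hiring_manager"),
    ("offer", "senior_management"),
]

class Candidate:
    def __init__(self, name):
        self.name = name
        self.gate_index = 0          # position in the GATES pipeline
        self.rejected = False

    @property
    def current_gate(self):
        """Name of the gate awaiting a decision, or None if done/rejected."""
        if self.rejected or self.gate_index >= len(GATES):
            return None
        return GATES[self.gate_index][0]

    def decide(self, role, approved):
        """Record a decision; only the gate's designated role may decide."""
        gate, required_role = GATES[self.gate_index]
        if role != required_role:
            raise PermissionError("%s cannot decide %s" % (role, gate))
        if approved:
            self.gate_index += 1     # candidate moves forward in the process
        else:
            self.rejected = True

c = Candidate("Jane Doe")
c.decide("recruiter", approved=True)
c.decide("hiring_manager", approved=True)
# c.current_gate is now "offer", awaiting senior_management
```

A real ATS layers reporting, compliance, scheduling and communication on top of this core, but the routing-through-gates skeleton is the part our workflow background maps onto directly.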
Where we are lacking is the domain knowledge for what specifically makes an ATS a good system and what does not. That is going to be foremost on our minds for the next several weeks, as we interview potential customers to learn of their pain points in much greater detail.
So, No Syncing?
Yes, no syncing. At least not for now. I still believe it’s a compelling business model and that there is significant user demand from job seekers for such a service. However, we are going to table synchronization until either we have the resources to pursue it correctly or the technical landscape has changed to make such a solution feasible, scalable and cost effective.
So, that’s about it. We’re starting to execute our pivot, by which I mean we’re stopping coding and going full time on customer development until we figure out exactly what a lean, stripped-down and bulletproof ATS looks like. Then, we’ll be back in heads-down, staying-out-of-the-big-blue-room coding mode until we get things done. Hang on; it should be an interesting ride!