The inspiration for Career & Company came from personal experience. Job titles in the primary field I was searching (UX) tend to shift every year. There are also various specializations within the field, and each company has its own definition for commonly used terms. A search for job titles or keywords often returns poor matches while omitting perfectly good ones. That doesn’t even touch the fact that I have competencies in other fields that are best applied in particular contexts. Those searches never retrieved what I needed. The whole experience was broken for people like me.
In everyday conversations, I’d heard many others express similar frustrations. While wrapping up my Master’s in Information Science, I decided to apply data science methodologies, combined with the resources available at the University and my previous experience in product development and entrepreneurship, to see if I could solve the problem myself.
I had identified a palpable pain point. But before I could move on to the solution (though I had ideas), I needed answers to some questions. What is the market like? What kinds of competitors are there? Is someone I’m not aware of already addressing the problem? What data is available to train algorithms?
The short answers: There was a large market with lots of opportunities. There were some companies that had noticed the problem, and some even claimed to be working on a solution, but there had yet to be a successful, publicly available product that I could find. By playing with datasets such as O*NET in class projects, I was able to familiarize myself with what measurements were available and how I could tie them to each other (and to future users).
After finding answers about the market and the competitive landscape, I needed to understand my prospective customers better. Unless you really have a grasp of WHO you are selling to (and WHY they need your product), everything else can fall apart very quickly. Whether it is the product itself, the revenue model, marketing, or any other aspect of your business, if you don’t know ‘who’ and ‘why,’ failure is practically guaranteed.
I used the 4x4x4 method to iteratively uncover who my customers really were and get a proper sense of how I could solve their problems, reach them with the solution, and build a business around that effort. I tested a lot of hypotheses, quickly discarded the (many) incorrect assumptions I had, and moved forward with a business model based on what I learned from talking with job seekers, job posters, and everyone else involved in the ecosystem.
Whether it’s garage bands, student organizations, or startups, I’ve had a lot of experience bringing together people who otherwise wouldn’t have met and rallying them around a vision to create something. As I moved forward with this project, I needed to leverage that skill set again.
I couldn’t pay anyone at first, so I had to sell people on the idea and sweeten the pot however I could (an entry in their portfolio, experience in a field they’d been wanting to enter, or simply being part of something exciting). Later, I was able to use funds from an accelerator to pay enterprising grad students who assisted with research, design, and development.
While ideas and models were still coalescing in the early stages, there wasn’t much of an official process. However, as the ground beneath us began to solidify and the team grew, I steadily introduced Agile methods, instituted morning standups, sprint retros, etc.
Sometimes we needed to be more agile than Agile, though, so there was a lot of bend to the rules. Generally speaking, we followed one-week sprints. We occasionally ran ultra-short 48-hour sprints, especially in the areas of user research and product testing. Different parts of the group sometimes operated at different speeds, with research running one-week sprints while development worked on a two-week horizon, for instance.
Like I did with RecBob, I used the walls in our office to full advantage. This time around I was more focused on the big picture of the business, so my scribblings tended to be more related to the business model or customer discovery insights. However, we still had our personas, research results, and everything related to development plastered on every wall.
With my background, there was no way we were getting away without extensive user research (on top of the customer discovery). Fortunately, I had recruited a couple of industrious UX acolytes who required no convincing, and the rest of the group followed suit. The research fit into the greater purpose of business model formation and product development. We were iterative, we were lean, and we were relentless.
We conducted interviews and surveys, ran card sorts and modified participatory design sessions, and gathered feedback on mockups and prototypes. One of those prototypes is described in the next section.
Once we got going, we were moving at light speed. But as we progressed and laid the groundwork for product development, we realized we had to test some core product features. Would the product function the way we expected, and the way users hoped? When a key offering is a system that learns from user decisions to deliver better results than the unaided user could find, how can that be tested without building the product? Our solution was to run the technology “by hand.”
The first step was to scour job listings for the attributes used to describe responsibilities and requirements. Using these attributes, we created a sample space of listings within a handful of pre-determined fields. The fields were chosen to provide a breadth of job types while limiting the work of building the database (a comprehensive library was obviously out of scope).
Once we recruited participants, we emailed links to short Google Forms through which they rated job posts. We fed the responses into a starter version of our algorithm, which produced recommendations from our sample library, and then built a new personalized Google Form for each participant around those recommendations, starting the next round.
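For the technically curious: the starter algorithm isn’t reproduced here, but the core of an attribute-based recommender like the one we hand-ran can be very small. Below is a minimal Python sketch under assumed simplifications: jobs represented as sets of attribute strings, ratings on a 1-to-5 scale, and a per-user weight for each attribute. Every name and library entry is illustrative, not our actual code or data.

```python
from collections import defaultdict

# Hypothetical sample library: job id -> set of attribute strings.
LIBRARY = {
    "ux-researcher-01": {"user research", "interviews", "reporting"},
    "ux-designer-02": {"wireframing", "prototyping", "user research"},
    "data-analyst-03": {"sql", "reporting", "dashboards"},
}

def update_profile(profile, ratings, neutral=3.0):
    """Fold one round of form ratings (job_id -> 1..5) into the profile:
    attributes of well-rated posts gain weight, poorly rated ones lose it."""
    for job_id, rating in ratings.items():
        for attr in LIBRARY[job_id]:
            profile[attr] += rating - neutral
    return profile

def recommend(profile, seen, k=3):
    """Score each unseen post by the sum of its attribute weights and return
    the top k; these would fill the next personalized form."""
    scores = {
        job_id: sum(profile[attr] for attr in attrs)
        for job_id, attrs in LIBRARY.items()
        if job_id not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# One round: the participant loved the researcher post, disliked the analyst one.
profile = update_profile(defaultdict(float), {"ux-researcher-01": 5, "data-analyst-03": 2})
print(recommend(profile, seen={"ux-researcher-01", "data-analyst-03"}))
```

In the prototype, we were effectively playing the part of this loop ourselves, one email round at a time.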
There were significant challenges to pulling this off. First and foremost, creating the attribute and job libraries was labor-intensive. We also had to craft the forms and the testing methodology. On top of it all, to keep the experience from being so slow that participants dropped out, we had to be responsive to their timelines and schedules, which meant performing multiple rounds of data acquisition, recommendation generation, form building, email communication, and tracking each day.
Our email prototype was designed to test three basic propositions:
H1: Our method of producing recommendations (a basic attribute-based machine learning algorithm) will produce increasingly accurate results the more a participant uses the product. (A simple version of this check is sketched after the list.)
H2: Our summarized versions of job postings will be more efficient (users will make decisions more quickly).
H3: Our summarized job postings will result in job ratings by the user that are minimally different from those received for the original long-form job postings.
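Checking H1 didn’t require anything elaborate; the question reduces to whether the mean rating a participant gives the recommended posts climbs from round to round. A minimal sketch of that check, with made-up numbers standing in for real responses:

```python
from statistics import mean

# Hypothetical H1 check: does the mean rating a participant gives the
# recommended posts climb from round to round? Ratings are illustrative.
rounds = [
    [2, 3, 1, 2],  # round 1: random seed postings
    [3, 4, 3, 2],  # round 2: first algorithmic recommendations
    [4, 4, 5, 3],  # round 3
]
round_means = [mean(r) for r in rounds]
print(round_means)  # [2.0, 3.0, 4.0]
print("improving:", all(a <= b for a, b in zip(round_means, round_means[1:])))
```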
Early on, the most troubling finding was that the algorithm corrected too quickly and too permanently, which trapped some subjects in suboptimal job fields after a round or two. Granted, part of the problem would be mitigated in a real product, as (A) users would not be responding to truly random job postings as an initial seed, and (B) there would be a greater quantity and quality of data for the recommendation engine to draw on. However, those are not assumptions to rest on.
Fortunately, the early stages are when significant algorithmic gains come relatively cheaply. After limited tweaking and testing, we made large strides toward improving the engine’s behavior, and we brainstormed some creative ways to allow limited user curation of the relevant data (without overwhelming the user with tasks).
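The specific tweaks aren’t detailed here, but two standard remedies for this kind of lock-in are damping the updates and reserving a little randomness in each batch of recommendations. Continuing the earlier sketch (same caveats: illustrative only, not our production code):

```python
import random

def update_profile_damped(profile, ratings, lr=0.3, neutral=3.0):
    """Like update_profile above, but each round nudges weights by only a
    fraction (lr) of the raw signal instead of committing to it wholesale."""
    for job_id, rating in ratings.items():
        for attr in LIBRARY[job_id]:  # LIBRARY from the earlier sketch
            profile[attr] += lr * (rating - neutral)
    return profile

def recommend_with_exploration(profile, seen, k=3, epsilon=0.2):
    """Epsilon-greedy batch: each slot usually exploits the learned profile,
    but occasionally takes a random unseen post, so one bad early round in a
    field can't permanently wall that field off."""
    ranked = recommend(profile, seen, k=len(LIBRARY))  # full unseen ranking
    picks = []
    for _ in range(min(k, len(ranked))):
        if random.random() < epsilon:
            picks.append(ranked.pop(random.randrange(len(ranked))))
        else:
            picks.append(ranked.pop(0))
    return picks
```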
Our summarized format definitely allowed for quicker decision making (an average of 1.5 minutes per post, versus nearly 6 minutes for the traditional version). We were pleased at how much time participants were saving.
When we compared the ratings of the long format against those of our summarized format, the average difference in ratings meant that users were typically selecting approximately the same response in both versions. However, there was enough variance to suggest that our summarization technique, as it stood, would not necessarily be a reliable substitute for the long form. Further testing would be required to determine why this was happening and what could be done about it.
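For anyone replicating this kind of comparison, the arithmetic is simple: pair each participant’s summarized-format rating with their long-form rating of the same posting, then look at both the mean and the spread of the differences. A small mean says the formats agree on average; a wide spread says they often disagree on individual posts. The numbers below are illustrative, not our study data:

```python
from statistics import mean, stdev

# Hypothetical paired ratings: same participant, same posting, both formats
# (1-5 scale; values invented for illustration).
long_form = [4, 2, 5, 3, 4, 1, 3, 5]
summarized = [4, 3, 5, 2, 4, 2, 3, 4]

diffs = [s - l for s, l in zip(summarized, long_form)]
print(f"mean difference: {mean(diffs):+.2f}")  # near zero: same response on average
print(f"spread (stdev): {stdev(diffs):.2f}")   # wide spread: not yet a reliable substitute
```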
I couldn’t secure any outside funding after the accelerator. Continuing would have meant putting literally everything else in life on hold indefinitely, at the worst possible time. Although I was confident in the need for a product like ours, I was not confident that we would be the ones to successfully bring it to market.
Since that time, I’ve seen some companies make progress. LinkedIn’s job matching experience has gotten much better, for example, which makes sense: they acquired a company with a goal similar to ours. Major market movers like that are what we would have been up against.