Big data analytics as a whole, and Hadoop in particular, are relatively new fields, which means that recruiting for positions in these areas is something of an unknown quantity, both for recruiters and for candidates.
For many recruiters, finding the perfect Hadoop candidate is hampered by not knowing exactly who that candidate is, or even what they need them to do. Does the company need someone to set up the infrastructure required to run Hadoop, or is the infrastructure already in place, with someone needed to run the Hadoop side of things?
If you are applying for a Hadoop job, it’s worth taking the time to find out the answer, so that you know exactly what position you are applying for and can present your CV accordingly. Knowing the strength of your CV is like knowing the value of your hand at poker: you need to appreciate the value of each card and know how to play them for maximum effect.
When it comes to that all-important resume, a few factors can make the difference between interview and in-the-bin. Remember, recruiters don’t have much experience with Hadoop vacancies, so they are likely to stick to set criteria and be far less flexible than they would be with more familiar roles. Make sure you tick those boxes.
The number one tick box is strong experience in the right programming languages. Hadoop is Java-based software, so the more Java experience you have the better. A career path that shows progression from C++ to Java to Hadoop is the perfect track record, so the closer you can get to this the better your chances. Weak Java skills or limited experience will simply not make the first round cut.
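To make that concrete: the canonical first exercise in Hadoop's own MapReduce tutorial is word count, and the kind of Java fluency recruiters screen for is the ability to write its core logic cleanly. Below is an illustrative sketch of that map-and-sum logic in plain Java, with no Hadoop dependencies so it runs standalone; the class and method names are my own, not from any Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: the word-count logic that Hadoop's MapReduce
// tutorial distributes across a cluster, shown here in plain Java.
public class WordCount {
    // "Map" step: split the text into lowercase tokens.
    // "Reduce" step: sum the occurrences of each token.
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : text.toLowerCase().split("\\s+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("Big data needs big tools"));
    }
}
```

In a real Hadoop job the map and reduce steps would live in separate `Mapper` and `Reducer` classes and run over data split across many machines; being able to explain that difference in an interview is exactly the kind of depth this section is about.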
Another big bonus, though perhaps not as essential as Java experience, is experience working with big data, and the bigger the better. Hadoop is built for really big data, not a small site with a few thousand users, so you’ll need to show experience with the big boys here. This is not just a name-dropping exercise; direct, hands-on experience with distributed data systems is essential.
The likes of Google, Facebook, eBay and Amazon all have masses of data to analyze every day, giving you the best hands-on experience. The same is true for many other online retailers and social media sites. If you have only worked with Hadoop using smaller data sets, then you haven’t really got the experience that most recruiters are looking for. If you are up against Hadoop engineers from one of the major players, you need to have the experience to match or, once again, your CV will quickly be moved into the circular file.
So, assuming you have survived the cull of the inexperienced, how do you make your CV stand out from the equally experienced and qualified competition? It’s time to show your true geek credentials. You need to back up your workplace experience with a solid commitment to big data outside of work. Attending conferences, joining (or, better still, founding and leading) local industry groups and pursuing professional certifications will all show a real passion for big data.
You also need to show a genuine interest in open source software and programming, particularly the open source projects behind Hadoop. With so many people able to drive the Hadoop car these days, recruiters are interested in those who can get their hands dirty fixing the engine and squeeze out those extra few horsepower by fine-tuning the carburetors.
Showing an advanced interest in Hadoop, in other Apache projects such as Flume and Storm, and in other open source systems like MongoDB and Riak will soon set you apart from the crowd and move your CV to the top of the pile.
After that, it’s up to you. Just remember, on such new territory, your recruiter is probably feeling as nervous and out of their depth as you are, so a friendly smile and relaxed attitude will go a long way to making the right impression.