All You Need to Know About Full-Cycle Recruiting


Recruiting talented candidates can boost a business's productivity and contribute to growth. But even with a vast pool of candidates, recruiting isn't as simple as picking someone at random to fill an open position.

Many processes go into recruiting, and one misstep resulting in a bad hire could cause a 36 percent drop in business productivity, not to mention lost time and effort.

A full-cycle recruitment process provides a holistic approach to vetting and hiring candidates from start to finish. This article discusses all you need to know about full-cycle recruiting and how to get started.

What is full-cycle recruiting?

Full-cycle recruiting consolidates all the recruitment stages — preparing, sourcing, screening, selection, hiring, and onboarding — into one holistic cycle. The process starts with recognising the need for new employees, moves through writing job descriptions and following up with qualified candidates, and ends with onboarding the successful hire.

In small- to medium-sized organisations, a full-cycle recruiter, often an HR generalist, oversees the end-to-end recruitment process and is typically involved in every stage. By contrast, large organisations might have different recruitment team members handle specific stages as part of their business systemisation strategy.

Having an HR specialist handle the most technical recruiting aspects has its benefits, but a full-cycle recruiter brings advantages of their own. Full-cycle recruitment drives efficiency by aligning every stage of the recruitment process, and it creates a consistent candidate experience that reduces churn.

For this reason, even large organisations with separate departments usually have an HR head overseeing the recruitment process for each cycle.

Now, let’s dig into getting started with full-cycle recruiting a bit more.

6 steps to full-cycle recruiting

The full-cycle recruiting process consists of six main steps. However, depending on the organisation and the position involved, each step might have several sub-stages that reflect a company's hiring culture.

Of course, companies can compress or skip some of these steps to run a leaner recruitment cycle.

But still, consider the following steps to start your full-cycle recruiting journey.

Preparing for the job vacancy

This stage involves identifying a hiring need, defining the qualities that make a great candidate, and posting inclusive job descriptions on social media and job boards.

To prepare, consult stakeholders to ensure you gather the right information for the job vacancy. You'll need to understand their expectations: the personality traits, skills, demographics, qualifications, and work experience they seek in an ideal candidate.

This will help you create the ideal candidate persona.

You should also glean more insights about the job responsibilities, benefits, perks, and company culture to enable you to craft job descriptions that appeal to the right candidates.

Moreover, interviewing existing employees will give you a unique insight into the kind of people the organisation prefers. These employees' qualities often make good reference points to include in the candidate persona.

Sourcing the talent

Now that you know what you want, you have to source suitable active and passive candidates for the role.

Professional full-cycle recruiters can source top talent using different methods. But since nearly four out of ten candidates are passive, you'll have to do more than post open positions on career websites and job boards. Proactively go where qualified candidates hang out — conferences, industry forums, social media, and so on.

Many active and passive job seekers hang out on LinkedIn, so it’s possible to find your next best hire there. Using LinkedIn’s recruiter tool, you can filter suitable candidates by their current job title, company, qualifications, or location, and send them a well-crafted cold email.

When recruiting on social media platforms, use social media ads to streamline your search for qualified candidates and reach a wider audience. For instance, you may try running LinkedIn ads to cast a wider net.

Better still, use an applicant tracking system (ATS) and a recruitment CRM to streamline your efforts by filtering qualified candidates from previously created talent pools.
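To make this concrete, here is a minimal sketch (in Python) of the kind of skills-and-experience filter an ATS or recruitment CRM runs over a talent pool; the Candidate fields, helper function, and sample data are hypothetical and not tied to any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    skills: set = field(default_factory=set)
    years_experience: float = 0.0

def filter_talent_pool(pool, required_skills, min_years):
    """Return candidates who hold every required skill and meet the experience bar."""
    required = {s.lower() for s in required_skills}
    return [
        c for c in pool
        if required <= {s.lower() for s in c.skills} and c.years_experience >= min_years
    ]

pool = [
    Candidate("A. Rivera", {"SQL", "Python", "Data modelling"}, 4),
    Candidate("B. Chen", {"SQL", "Excel"}, 6),
]
shortlist = filter_talent_pool(pool, ["sql", "python"], min_years=3)
print([c.name for c in shortlist])  # ['A. Rivera']
```

Real systems layer résumé parsing and ranking on top of this, but the core operation is the same: matching structured candidate data against the role's requirements.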

Although recruiting candidates externally gives you a bigger playground, in-house recruiting might improve your time to hire and save you resources. So, if you have an urgent opening with a steep learning curve, consider recruiting existing qualified employees to fill that role.

Alternatively, leverage employees' existing networks through structured employee referral programs to source qualified candidates for open positions.

Screening

After the sourcing phase, screen the applications — resumes or CVs, cover letters, and/or portfolios — you've received to weed out unqualified candidates. The goal is to whittle the applications down to the best few whose qualifications, skills, and experience match the job description.

Afterward, hold a phone screen interview with those who meet the requirements. At this point, the purpose isn't to determine whom to hire; it's to identify which candidates won't make the final cut. The interview lets you test their soft skills and determine whether they'll be a great fit for the team. You may ask some standard interview questions for a fair evaluation, but look out for these traits:

The candidate’s personality and interests

Are they professional? Polite? Confident? Do they use humour when appropriate? You should check whether a candidate's personality suits the position. For example, if you're looking for sales associates, you'll want someone who is outgoing.

And if their interests and plans don’t align with what’s necessary for the role, they might not be a great fit. For instance, let’s say they tell you they have plans to migrate to Canada in two years. They might not be the best person to move forward in the hiring process if you’re looking for a manager who needs to physically report to the office in Atlanta.

The candidate’s thought process and communication style

A person's thought process often reveals itself when answering questions. They should be able to say what motivates them: is it philosophical, or perhaps more business-driven? Also, how a person communicates on the phone can give you insight into how they'll handle communication in other areas, depending on the role.

If they consistently fly off on a tangent during a conversation, they may struggle with focus.

For example, if you’re looking to hire a project manager, you’ll need an organised, outspoken, and confident person. They might not be a great fit if they struggle to share their story during a 10-15 minute conversation.

Or, if you’re looking for a sales associate, you need someone who can speak confidently. At the same time, though, you want them to have the ability to put people at ease.

Nonetheless, ensure you’re not overbearing in your interview. Give candidates time to speak and be themselves while you listen attentively.

Phone screen interviews are convenient, but they won't always catch candidates who misrepresent themselves.

To complement them, you can employ several other screening methods using pre-selection tools like Interview Mocha or Talent Sorter. These tools help you sort through a high volume of applicants using assessments, such as personality or cognitive ability tests, that predict the quality of a new hire.

As a full-cycle recruiter, you can also include a realistic job preview to manage the candidate's expectations. Think of it as an accurate “day in the life of” video where you showcase the positive as well as the not-so-pretty sides of the job and the organisation.

When done effectively, realistic job previews help with self-selection, leaving behind high-quality candidates who’ll likely perform better with less attrition.

Once you've chosen the candidates you'll take to the next stage of the hiring process, send them an email informing them of the decision. Make sure you verify their email addresses first so your messages reach the intended people. Many email finder tools don't just help you look up addresses; they come with verification features, too.
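If you want a quick in-house sanity check before (or alongside) a dedicated verification service, here is a minimal sketch assuming the third-party dnspython package; it only tests address syntax and the domain's MX records, which is far weaker than what commercial email verifiers do.

```python
import re

import dns.exception
import dns.resolver  # both modules come from the third-party dnspython package

EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def looks_deliverable(address: str) -> bool:
    """Cheap two-step check: plausible syntax, then an MX record on the domain."""
    if not EMAIL_RE.match(address):
        return False
    domain = address.rsplit("@", 1)[1]
    try:
        dns.resolver.resolve(domain, "MX")  # raises if the domain has no MX records
        return True
    except dns.exception.DNSException:
        return False

# The result depends on live DNS for the address's domain.
print(looks_deliverable("jane.doe@example.com"))
```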

But don’t just email the candidates who made it to the next stage. Email the ones who didn’t make it, too.

Thank them for taking the time to apply, and let them know, with regret, that they didn't make the cut. Make sure you don't close all lines of communication, though. Include an electronic business card in your email signature that contains all your contact details. Take the opportunity to promote your brand, too: add a QR code linking to the company website so candidates can reach it with a quick scan.
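As an illustration, generating such a QR code takes only a couple of lines with the open-source qrcode Python package; the URL below is a placeholder for your own careers or company page.

```python
import qrcode  # third-party package: install with `pip install qrcode[pil]`

# Placeholder URL: point this at your careers or company page.
COMPANY_URL = "https://www.example.com/careers"

img = qrcode.make(COMPANY_URL)  # returns a PIL image of the QR code
img.save("company-qr.png")      # embed this image in your email signature
```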

Selecting the right candidate

The fourth step of the full-cycle recruiting process involves interviewing top applicants and giving them feedback. Here, the full-cycle recruiter works closely with the hiring manager, keeping them in the loop through back-to-back interviews.

Because things move quickly at this stage, you'll need to schedule several activities, such as assignment reviews, interviews, and feedback sessions, so you don't get overwhelmed or skip critical steps in the pipeline.

An interview guide helps streamline the interviewing process and gives each candidate a fair chance and the same great experience.

Negotiating the offer

Recruiting is a two-way street. You want something the best talent has, and they want the best you can offer. Before you make an offer, consider their work experience, qualifications, and expectations, as well as the company's budget for that position.

Bottom line? A negotiation.

Ideally, you'll want to meet the best candidate halfway with a compromise. While you're at it, use these best practices to negotiate the best offer.

Don’t ask for salary history

Asking top candidates about their salary history during the recruiting process might put them off. You risk losing quality candidates who hesitate to reveal it for fear of being priced out of the role. Instead, focus on the candidate's expectations and the company's offerings to reach a compromise quickly.

Determine where you rank with counter offers

You're trying to hire the best talent in the market, and top talent doesn't stay on the market for long. This means some other company might be making a better offer. If you want to remain top of mind, determine where you stand on the ‘leaderboard.’ If you're not first, re-negotiate your offer.

Use compensation as a negotiation tool

Different roles have different salary expectations, but you shouldn't go below the minimum wage in the country you're hiring in, whether the role is on-site, hybrid, or remote.

Thus, if you're hiring someone in the US, ask yourself: what is the minimum wage in the US?
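If you want a programmatic guardrail, here is a minimal sketch that checks an hourly offer against a wage-floor table. Only the US federal rate of $7.25/hour is a real figure; every other entry is a placeholder you would maintain for the jurisdictions you actually hire in.

```python
# Wage floors in USD per hour. The federal rate is real; add the state or city
# rates that apply to the role's location, since many are higher.
WAGE_FLOORS_USD_PER_HOUR = {
    "US-federal": 7.25,
    # "US-CA": ...,  # placeholder: fill in the applicable state/city rate
}

def meets_wage_floor(hourly_offer: float, jurisdiction: str = "US-federal") -> bool:
    """Return True if the offer is at or above the recorded wage floor."""
    floor = WAGE_FLOORS_USD_PER_HOUR.get(jurisdiction)
    if floor is None:
        raise ValueError(f"No wage floor recorded for {jurisdiction!r}")
    return hourly_offer >= floor

print(meets_wage_floor(22.50))  # True
```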

Other than that, check with the hiring manager beforehand about the entire compensation package for that role to enable you to negotiate effectively.

If the candidate rejects your offer, find out why. If it's because of compensation, offer more. And if you can't, emphasise other benefits like career growth, insurance, and paid leave. Show that your role offers a better career trajectory than competing offers.

When they agree to job terms, send an offer letter and prepare for onboarding.

Onboarding process

Starting a new job can cause anxiety for new employees. But a good onboarding process will make them feel comfortable in their new work environment. It shouldn’t be all about paperwork.

Onboarding involves introduction, orientation, and training. Assign a co-worker to answer the new hire's questions and show them how things work. Introduce them to other employees and show them around the workplace.

An excellent onboarding experience gives new employees the conviction that they made the right choice and sets the tone for the working relationship with employers. Give them an orientation on company guidelines, culture, and values. A training schedule will also help them settle in nicely.

In closing

The full-cycle recruiting process can improve the relationship between recruiters and candidates, creating a positive experience for everyone involved.

As this model follows one clear strategy from beginning to end, it is more efficient and streamlined. It enables smoother negotiations and faster time to hire. Nonetheless, you should consider the company’s needs before implementing full-cycle recruitment.

That said, when you’re ready to begin your full-cycle recruiting journey, optimise your processes with relevant tools. Follow the best practices discussed in this article for a seamless process.

Author’s bio:

Claron is a brand nut. He has an unceasing curiosity about what brands do to break through the clutter to stay relevant to their audience. He also loves to explore how simple tech (QR Codes lately) can be used to improve customer experiences and consequently, scale up brands.

 


Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

Posted by Cheng-Yu Hsieh, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team

Large language models (LLMs) have enabled a new data-efficient learning paradigm wherein they can be used to solve unseen new tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications due to their sheer size. For instance, serving a single 175 billion parameter LLM requires at least 350GB of GPU memory (roughly two bytes per parameter in half precision) on specialized infrastructure, not to mention that today's state-of-the-art LLMs are composed of over 500 billion parameters. Such computational requirements are out of reach for many research teams, especially for applications that require low-latency performance.

To circumvent these deployment challenges, practitioners often choose to deploy smaller specialized models instead. These smaller models are trained using one of two common paradigms: fine-tuning or distillation. Fine-tuning updates a pre-trained smaller model (e.g., BERT or T5) using downstream manually-annotated data. Distillation trains the same smaller models with labels generated by a larger LLM. Unfortunately, to achieve comparable performance to LLMs, fine-tuning methods require human-generated labels, which are expensive and tedious to obtain, while distillation requires large amounts of unlabeled data, which can also be hard to collect.

In “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes”, presented at ACL 2023, we set out to tackle this trade-off between model size and training data collection cost. We introduce distilling step-by-step, a simple new mechanism that allows us to train smaller task-specific models with much less training data than standard fine-tuning or distillation require, while still outperforming few-shot prompted LLMs. We demonstrate that distilling step-by-step enables a 770M parameter T5 model to outperform the few-shot prompted 540B PaLM model using only 80% of the examples in a benchmark dataset, a more than 700x reduction in model size achieved with far less training data than standard approaches require.

While LLMs offer strong zero-shot and few-shot performance, they are challenging to serve in practice. On the other hand, traditional ways of training small task-specific models require a large amount of training data. Distilling step-by-step provides a new paradigm that reduces both the deployed model size and the amount of data required for training.

Distilling step-by-step

The key idea of distilling step-by-step is to extract informative natural language rationales (i.e., intermediate reasoning steps) from LLMs, which can in turn be used to train small models in a more data-efficient way. Specifically, natural language rationales explain the connections between the input questions and their corresponding outputs. For example, when asked, “Jesse’s room is 11 feet long and 15 feet wide. If she already has 16 square feet of carpet, how much more carpet does she need to cover the whole floor?”, an LLM can be prompted by the few-shot chain-of-thought (CoT) prompting technique to provide intermediate rationales, such as, “Area = length * width. Jesse’s room has 11 * 15 square feet.” This better explains the connection from the input to the final answer, “(11 * 15) – 16”, i.e., 149 square feet. These rationales can contain relevant task knowledge, such as “Area = length * width”, that might otherwise require many examples for small models to learn. We utilize these extracted rationales as additional, richer supervision to train small models, in addition to the standard task labels.

Overview of distilling step-by-step: first, we utilize CoT prompting to extract rationales from an LLM. We then use the generated rationales to train small task-specific models within a multi-task learning framework, where we prepend task prefixes to the input examples and train the model to output differently based on the given task prefix.

Distilling step-by-step consists of two main stages. In the first stage, we leverage few-shot CoT prompting to extract rationales from LLMs. Specifically, given a task, we prepare few-shot exemplars in the LLM input prompt where each example is composed of a triplet containing: (1) input, (2) rationale, and (3) output. Given the prompt, an LLM is able to mimic the triplet demonstration to generate the rationale for any new input. For instance, in a commonsense question answering task, given the input question “Sammy wanted to go to where the people are. Where might he go? Answer Choices: (a) populated areas, (b) race track, (c) desert, (d) apartment, (e) roadblock”, distilling step-by-step provides the correct answer to the question, “(a) populated areas”, paired with the rationale that provides better connection from the question to the answer, “The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people.” By providing CoT examples paired with rationales in the prompt, the in-context learning ability allows LLMs to output corresponding rationales for future unseen inputs.
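To illustrate this first stage, here is a minimal sketch of how such a few-shot CoT prompt could be assembled from (input, rationale, output) triplets. The exemplar text is taken from the example above; the helper function is hypothetical, and the call to an actual LLM is left as a placeholder.

```python
# Exemplar taken from the commonsense QA example discussed above.
EXEMPLARS = [
    {
        "input": (
            "Sammy wanted to go to where the people are. Where might he go? "
            "Answer Choices: (a) populated areas, (b) race track, (c) desert, "
            "(d) apartment, (e) roadblock"
        ),
        "rationale": (
            "The answer must be a place with a lot of people. Of the above choices, "
            "only populated areas have a lot of people."
        ),
        "output": "(a) populated areas",
    },
]

def build_cot_prompt(exemplars, new_input):
    """Assemble a few-shot CoT prompt from (input, rationale, output) triplets."""
    blocks = [
        f"Q: {ex['input']}\nRationale: {ex['rationale']}\nA: {ex['output']}"
        for ex in exemplars
    ]
    blocks.append(f"Q: {new_input}\nRationale:")  # the LLM continues from here
    return "\n\n".join(blocks)

prompt = build_cot_prompt(EXEMPLARS, "<new question with its answer choices>")
# Send `prompt` to the LLM of your choice and parse the generated rationale and answer.
```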

We use few-shot CoT prompting, which pairs each exemplar's rationale with its label, to elicit rationales from an LLM on new input examples. The example is from a commonsense question answering task.

After the rationales are extracted, in the second stage, we incorporate the rationales in training small models by framing the training process as a multi-task problem. Specifically, we train the small model with a novel rationale generation task in addition to the standard label prediction task. The rationale generation task enables the model to learn to generate the intermediate reasoning steps for the prediction, and guides the model to better predict the resultant label. We prepend task prefixes (i.e., [label] and [rationale] for label prediction and rationale generation, respectively) to the input examples for the model to differentiate the two tasks.
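As a rough illustration of this second stage, the sketch below expands one labeled example into two (source, target) training pairs, one per task prefix; the Example fields and helper are hypothetical, and the note on combining the two losses summarises the multi-task setup described above rather than showing a full training loop.

```python
from dataclasses import dataclass

@dataclass
class Example:
    question: str
    label: str      # ground-truth answer
    rationale: str  # rationale extracted from the LLM in the first stage

def to_multitask_pairs(ex: Example):
    """Turn one example into two (source, target) pairs, one per task prefix."""
    return [
        (f"[label] {ex.question}", ex.label),          # label prediction task
        (f"[rationale] {ex.question}", ex.rationale),  # rationale generation task
    ]

ex = Example(
    question=("Jesse's room is 11 feet long and 15 feet wide. If she already has "
              "16 square feet of carpet, how much more carpet does she need?"),
    label="149 square feet",
    rationale=("Area = length * width. Jesse's room has 11 * 15 = 165 square feet, "
               "so she needs 165 - 16 = 149 more."),
)
for source, target in to_multitask_pairs(ex):
    print(source, "->", target)

# During training, the losses on the two tasks are combined (e.g., as a weighted sum);
# at inference time only the [label] prefix is used, so rationale generation adds no
# serving cost.
```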

Experimental setup

In the experiments, we consider a 540B PaLM model as the LLM. For task-specific downstream models, we use T5 models. For CoT prompting, we use the original CoT prompts when available and curate our own examples for new datasets. We conduct the experiments on four benchmark datasets across three different NLP tasks: e-SNLI and ANLI for natural language inference; CQA for commonsense question answering; and SVAMP for arithmetic math word problems. We include two sets of baseline methods. For comparison to few-shot prompted LLMs, we compare to few-shot CoT prompting with a 540B PaLM model. For task-specific model training, the paper compares to both standard fine-tuning and standard distillation; in this blog post, we focus on the comparisons to standard fine-tuning for illustration purposes.

Less training data

Compared to standard fine-tuning, the distilling step-by-step method achieves better performance using much less training data. For instance, on the e-SNLI dataset, we achieve better performance than standard fine-tuning when using only 12.5% of the full dataset. Similarly, we achieve dataset size reductions of 75%, 25%, and 20% on ANLI, CQA, and SVAMP, respectively.

Distilling step-by-step compared to standard fine-tuning using 220M T5 models on varying sizes of human-labeled datasets. On all datasets, distilling step-by-step outperforms standard fine-tuning trained on the full dataset while using far fewer training examples.

Smaller deployed model size

Compared to few-shot CoT prompted LLMs, distilling step-by-step achieves better performance using much smaller model sizes. For instance, on the e-SNLI dataset, we achieve better performance than 540B PaLM by using a 220M T5 model. On ANLI, we achieve better performance than 540B PaLM by using a 770M T5 model, which is over 700X smaller. Note that on ANLI, the same 770M T5 model struggles to match PaLM’s performance using standard fine-tuning.

We perform distilling step-by-step and standard fine-tuning on varying sizes of T5 models and compare their performance to LLM baselines, i.e., Few-shot CoT and PINTO Tuning. Distilling step-by-step is able to outperform LLM baselines by using much smaller models, e.g., over 700× smaller models on ANLI. Standard fine-tuning fails to match LLM’s performance using the same model size.

Distilling step-by-step outperforms few-shot LLMs with smaller models using less data

Finally, we explore the smallest model sizes and the least amount of data with which distilling step-by-step can still outperform PaLM's few-shot performance. For instance, on ANLI, we surpass the 540B PaLM using a 770M T5 model trained on only 80% of the full dataset. Meanwhile, standard fine-tuning cannot catch up with PaLM's performance even when using 100% of the full dataset. This suggests that distilling step-by-step simultaneously reduces both the model size and the amount of data required to outperform LLMs.

We show the minimum size of T5 models and the least amount of human-labeled examples required for distilling step-by-step to outperform the LLM's few-shot CoT, found by a coarse-grained search. Distilling step-by-step outperforms few-shot CoT not only with much smaller models, but also with far fewer training examples than standard fine-tuning requires.

Conclusion

We propose distilling step-by-step, a novel mechanism that extracts rationales from LLMs as informative supervision for training small, task-specific models. We show that distilling step-by-step reduces both the training data required to curate task-specific smaller models and the model size required to achieve, and even surpass, a few-shot prompted LLM's performance. Overall, distilling step-by-step presents a resource-efficient paradigm that tackles the trade-off between model size and the training data required.

Availability on Google Cloud Platform

Distilling step-by-step is available for private preview on Vertex AI. If you are interested in trying it out, please contact vertex-llm-tuning-preview@google.com with your Google Cloud Project number and a summary of your use case.

Acknowledgements

This research was conducted by Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Thanks to Xiang Zhang and Sergey Ioffe for their valuable feedback.
