The Downfall of Human Connection: Why High Volume Recruiting is Broken


The rise of technology has turned high volume recruiting into a numbers game, often at the expense of human connection. It’s time to fix that.

Why high volume recruiting is broken

As an employer, there’s no worse feeling than having to go out and find new talent. Recruiting is often not part of your daily responsibilities, so it adds extra work to your plate, and filling multiple roles in a short amount of time only adds to the pressure. Enter: online job boards. When these sites launched, they provided great value: simply post a job and come back to review hundreds of applications. While this seems great in theory, the technology has taken a real toll on the candidate experience.

Picture this – you’ve been unemployed for a few weeks and are desperate to find a job. You smash the apply button on hundreds of job boards yet hear nothing back from employers. When you finally do hear back, you have to jump through hoops to create an account in an employer’s applicant tracking system, and you get spammed with automated reminder emails if you don’t finish setting it up. This model, while efficient for employers, creates a terrible candidate experience.

High volume recruiting is broken. It’s become a numbers game with the goal of attracting as many applicants as possible, even if it means spending 10+ hours screening out all of the low-quality ones. In high volume hiring, applicant volume is often treated as a measure of success. This is a mistake. Applicant volume is a vanity metric – one that looks good on the surface but doesn’t translate into meaningful business results. After all, you can get 500 applicants, but if none of them are qualified, your process isn’t a success.

Rather than scrambling to get as many applicants as possible, consider metrics that better reflect the success of your hiring process, such as candidate quality, time to hire and cost per hire. When evaluating candidate quality, look at retention rates across your different sourcing channels to see which talent pools contribute to the highest retention. If you notice that your longest-retained employees were sourced the same way, consider how you can lean into that trend. When looking at time and cost per hire, reflect on which talent pools or technologies bring you the fastest time to hire and the lowest cost per hire. After all, time is money. Reducing the time and money you spend attracting talent gives you more of both to focus on retaining existing talent.

How recruiting technology can help

Keep in mind that not all technology is bad – especially when used in the right place at the right time. Technology can make us more efficient as employers, freeing up time to focus on other parts of our roles. Generative AI tools like ChatGPT can help you generate interview rubrics, create training scenarios, or personalise development plans. These are appropriate uses because they are back office tasks that don’t negatively affect anyone’s experience or interactions with you. Unfortunately, technology that sits between two parties, such as recruiting technology, can create robotic experiences and damage the candidate experience.

When it comes to using technology in your candidate experience, consider applying to your own jobs or those of your competitors to get a thorough understanding of what it’s like on the other side of the table. As employers, it’s easy to get caught up in the pressure of trying to fill a high volume of roles quickly. Don’t forget that those numbers are real people with feelings. If your Applicant Tracking System (ATS) is clunky and asks candidates to re-enter the same information, they will feel the friction. If your application process asks six screening questions, reflect on whether three or four would do. Is the information you’re gathering critical at the application stage, or can you collect it at the interview stage instead? Use technology to enhance your candidate experience without sacrificing the efficiency you need to do your job quickly.

I put this advice into action and recently applied to a cashier position on Indeed. I was given an online customer service skills assessment after applying. I didn’t mind this, as it gave me an opportunity to differentiate myself from other candidates. The rest of the process went downhill, though. I received four automated emails over the next three days asking me to finish my application by creating a profile in the company’s ATS. I was confused, as I’d already submitted my application on Indeed and completed the skills assessment. After the fourth reminder email, I finally caved and made an account in the ATS. It asked me a series of questions irrelevant to the decision-making process. For example, one question asked whether I was capable of doing the job. It’s unlikely that any candidate will choose an answer other than yes. This is an example of an unnecessary screening question that can be removed. If this story raised some eyebrows, consider applying to your own jobs or those of your competitors to understand what job seekers go through every day.

What job seekers are looking for

FindWRK conducted a survey to understand the experience of job seekers and the impact technology has had on it. The FindWRK Picture of the Hourly Workforce report highlights data-driven insights directly from job seekers about what they’re looking for during their job search and how employers can use technology to improve their experience.

The rise of online job boards has put thousands of roles at job seekers’ fingertips. How do they choose one job over another? Many employers assume compensation is the driving factor in a worker’s interest in a role, but budgets are often tight and compensation can’t be adjusted. The report found that job seekers are most interested in learning and growth opportunities as well as scheduling flexibility. These are two great levers you can pull to entice job seekers to your roles if compensation isn’t one you can adjust.

Diving further into what job seekers are looking for, the report asked why they left their previous role. The overwhelming response pointed to management. Avoiding micromanagement and ensuring your employees feel their voices are heard and valued are key to retaining staff. After all, what better way to fill open roles than to keep them from opening up in the first place? Instead of conducting exit interviews after people resign, consider conducting stay interviews to better understand how you can support current employees.

The key thing to remember is that not all job seekers in every industry, company or department are the same. Many companies invite new hires to provide feedback on their hiring experience. However, the real learnings are more likely to come from unsuccessful candidates or those who ghost during the hiring process. Instead of automating reminder emails asking candidates to finish their applications, consider sending a note asking for feedback. Acknowledge that their unfinished application might be due to difficulties in the application process and invite suggestions on how you can improve it. Worst case, you get some authentic feedback you can implement. Best case, the candidate finishes their application and becomes more loyal to you because you’ve shown you’re receptive to feedback.

How to improve your candidate experience

Including compensation in your job posting is the top request from job seekers on how you can improve your candidate experience. This saves both parties time up front and eliminates unnecessary steps. You know what you can afford to pay – it’s not a big secret. There shouldn’t be a stigma around disclosing compensation or discussing it early in the process.

Job seekers also want to see a streamlined application process that doesn’t require them to re-enter their information. A people leader in the hospitality and tourism industry saw a 300% increase in applicant volume after removing the requirement for candidates to re-enter their information. Adopting a test-and-learn mindset while closely reviewing your recruiting data can help you continuously improve your hiring experience.

Finally, candidates are interested in receiving confirmation of next steps after they apply to roles. Consider automating a message to thank them for their application and specify the timelines and next steps, even if you don’t have capacity to start reviewing applications and responding to candidates right away. Keep in mind that automations can be personal and don’t have to feel cold. You can automate by segments and use engaging language to build rapport with candidates so they know you’re not ghosting them.

How to drive change internally

After reading this blog, you might be excited to start making changes to your high volume recruiting practices. The problem is, these changes can take time and often require input and resources from other teammates and departments. Analyse your current hiring practices before meeting with your team: identify your current recruiting metrics, set goals for where you’d like them to be, and benchmark those goals against industry averages to strengthen your credibility internally. Reach out to previous applicants to gather statistics and qualitative feedback that help make your case. You’ll have a much easier time selling change if it’s driven by candid feedback from job seekers in your geography and industry.

Creating a case for change is key to securing resource allocation and buy-in from colleagues. Speak with colleagues to understand where their frustrations lie. One team might be frustrated with low retention rates; taking the angle of improving the candidate experience to increase retention and reduce turnover might be a good play for that conversation.

If speaking with the marketing team, you might highlight the benefits an improved candidate experience can have on employer branding, or on customer acquisition costs if satisfied job seekers are more likely to refer friends and family to your organisation.

Executive buy-in might come via the time-to-hire angle: filling roles faster means less downtime from vacancies and helps the organisation reach its goals sooner.

Finally, focusing on cost per hire when engaging with financial decision makers and budget holders will likely bring positive attention to the change you’re trying to create.

In Summary

As technology continues to shape the landscape of high volume recruiting, it is crucial for employers to strike a balance between efficiency and human connection. By addressing candidate frustrations, understanding job seekers’ decision-making factors, and implementing strategies to enhance the candidate experience, employers can find ways to authentically attract and retain top talent. Empathy, transparency, and personalisation should remain top of mind to create an exceptional candidate experience.

Writer Bio

Matt Parkin is the Business Development Lead at FindWRK. He shares talent, branding and entrepreneurship advice with 10,000 LinkedIn followers and his writing has been featured in publications like LinkedIn News and HR.com.



Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

Posted by Cheng-Yu Hsieh, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team

Large language models (LLMs) have enabled a new data-efficient learning paradigm in which they can solve unseen tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications due to their sheer size. For instance, serving a single 175 billion parameter LLM requires at least 350GB of GPU memory using specialized infrastructure, not to mention that today’s state-of-the-art LLMs are composed of over 500 billion parameters. Such computational requirements are out of reach for many research teams, especially for applications that require low-latency performance.

To circumvent these deployment challenges, practitioners often choose to deploy smaller specialized models instead. These smaller models are trained using one of two common paradigms: fine-tuning or distillation. Fine-tuning updates a pre-trained smaller model (e.g., BERT or T5) using downstream manually-annotated data. Distillation trains the same smaller models with labels generated by a larger LLM. Unfortunately, to achieve comparable performance to LLMs, fine-tuning methods require human-generated labels, which are expensive and tedious to obtain, while distillation requires large amounts of unlabeled data, which can also be hard to collect.

In “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes”, presented at ACL 2023, we set out to tackle this trade-off between model size and training data collection cost. We introduce distilling step-by-step, a new, simple mechanism for training smaller task-specific models that outperform few-shot prompted LLMs while using much less training data than standard fine-tuning or distillation requires. We demonstrate that distilling step-by-step enables a 770M parameter T5 model to outperform the few-shot prompted 540B PaLM model using only 80% of the examples in a benchmark dataset: a more than 700x reduction in model size, achieved with far less training data than standard approaches require.

While LLMs offer strong zero-shot and few-shot performance, they are challenging to serve in practice. On the other hand, traditional ways of training small task-specific models require large amounts of training data. Distilling step-by-step provides a new paradigm that reduces both the deployed model size and the amount of training data required.

Distilling step-by-step

The key idea of distilling step-by-step is to extract informative natural language rationales (i.e., intermediate reasoning steps) from LLMs, which can in turn be used to train small models in a more data-efficient way. Specifically, natural language rationales explain the connections between the input questions and their corresponding outputs. For example, when asked, “Jesse’s room is 11 feet long and 15 feet wide. If she already has 16 square feet of carpet, how much more carpet does she need to cover the whole floor?”, an LLM can be prompted with the few-shot chain-of-thought (CoT) prompting technique to provide intermediate rationales, such as “Area = length * width. Jesse’s room has 11 * 15 square feet.”, which better explain the connection from the input to the final answer, “(11 * 15) – 16” = 149 square feet. These rationales can contain relevant task knowledge, such as “Area = length * width”, that might otherwise require many examples for small models to learn. We use these extracted rationales as additional, richer supervision when training small models, on top of the standard task labels.

Overview of distilling step-by-step: first, we use CoT prompting to extract rationales from an LLM. We then use the generated rationales to train small task-specific models within a multi-task learning framework, where we prepend task prefixes to the input examples and train the model to output differently based on the given task prefix.

Distilling step-by-step consists of two main stages. In the first stage, we leverage few-shot CoT prompting to extract rationales from LLMs. Specifically, given a task, we prepare few-shot exemplars in the LLM input prompt where each example is composed of a triplet containing: (1) input, (2) rationale, and (3) output. Given the prompt, an LLM is able to mimic the triplet demonstration to generate the rationale for any new input. For instance, in a commonsense question answering task, given the input question “Sammy wanted to go to where the people are. Where might he go? Answer Choices: (a) populated areas, (b) race track, (c) desert, (d) apartment, (e) roadblock”, distilling step-by-step provides the correct answer to the question, “(a) populated areas”, paired with the rationale that provides better connection from the question to the answer, “The answer must be a place with a lot of people. Of the above choices, only populated areas have a lot of people.” By providing CoT examples paired with rationales in the prompt, the in-context learning ability allows LLMs to output corresponding rationales for future unseen inputs.
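To make the first stage concrete, here is a minimal Python sketch of how such a few-shot CoT prompt could be assembled from (input, rationale, output) triplets. It assumes a generic text-completion function llm(prompt) standing in for the few-shot prompted LLM, and it paraphrases the commonsense question answering exemplar above; it illustrates the prompting pattern rather than the exact prompts used in the paper.

FEW_SHOT_EXEMPLARS = [
    {
        "input": (
            "Sammy wanted to go to where the people are. Where might he go? "
            "Answer Choices: (a) populated areas, (b) race track, (c) desert, "
            "(d) apartment, (e) roadblock"
        ),
        "rationale": (
            "The answer must be a place with a lot of people. Of the above "
            "choices, only populated areas have a lot of people."
        ),
        "output": "(a) populated areas",
    },
    # ... further (input, rationale, output) triplets for the task
]

def build_cot_prompt(new_input):
    # Each exemplar is rendered as an input/rationale/output triplet; the new
    # input is appended with an open "Rationale:" slot for the LLM to complete.
    parts = []
    for ex in FEW_SHOT_EXEMPLARS:
        parts.append(
            "Q: " + ex["input"] + "\nRationale: " + ex["rationale"] + "\nA: " + ex["output"]
        )
    parts.append("Q: " + new_input + "\nRationale:")
    return "\n\n".join(parts)

def extract_rationale_and_label(new_input, llm):
    # Assumes the completion mimics the exemplar format "<rationale>\nA: <label>".
    completion = llm(build_cot_prompt(new_input))
    rationale, _, label = completion.partition("\nA:")
    return rationale.strip(), label.strip()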

We use few-shot CoT prompting, in which each exemplar contains an example rationale and a label, to elicit rationales from an LLM on new input examples. The example is from a commonsense question answering task.

After the rationales are extracted, in the second stage, we incorporate the rationales in training small models by framing the training process as a multi-task problem. Specifically, we train the small model with a novel rationale generation task in addition to the standard label prediction task. The rationale generation task enables the model to learn to generate the intermediate reasoning steps for the prediction, and guides the model to better predict the resultant label. We prepend task prefixes (i.e., [label] and [rationale] for label prediction and rationale generation, respectively) to the input examples for the model to differentiate the two tasks.
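As a rough illustration of the second stage, the sketch below shows how one training example could be expanded into the two tasks and how their losses could be combined. The model interface (a generic seq2seq call that returns a loss), the rationale_weight hyperparameter, and the exact prefix strings are assumptions made for illustration; the paper’s actual training setup is more detailed than this simplification.

LABEL_PREFIX = "[label] "
RATIONALE_PREFIX = "[rationale] "

def make_training_pairs(example):
    # Expand one example (with its extracted rationale) into the two tasks:
    # standard label prediction and rationale generation.
    return [
        (LABEL_PREFIX + example["input"], example["label"]),
        (RATIONALE_PREFIX + example["input"], example["rationale"]),
    ]

def training_step(model, example, rationale_weight=1.0):
    # Combined objective: loss = label_loss + rationale_weight * rationale_loss.
    (label_src, label_tgt), (rationale_src, rationale_tgt) = make_training_pairs(example)
    label_loss = model(label_src, label_tgt)              # seq2seq loss on the task label
    rationale_loss = model(rationale_src, rationale_tgt)  # seq2seq loss on the rationale
    return label_loss + rationale_weight * rationale_loss

At inference time only the label-prediction prefix is used, so the rationale generation task adds training-time supervision without adding any cost when serving predictions.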

Experimental setup

In the experiments, we use a 540B PaLM model as the LLM. For task-specific downstream models, we use T5 models. For CoT prompting, we use the original CoT prompts when available and curate our own examples for new datasets. We conduct experiments on four benchmark datasets across three different NLP tasks: e-SNLI and ANLI for natural language inference; CQA for commonsense question answering; and SVAMP for arithmetic math word problems. We include two sets of baselines. For comparison to few-shot prompted LLMs, we compare to few-shot CoT prompting with a 540B PaLM model. For comparison to standard task-specific model training, the paper considers both standard fine-tuning and standard distillation; in this blog post, we focus on the comparisons to standard fine-tuning for illustration purposes.

Less training data

Compared to standard fine-tuning, distilling step-by-step achieves better performance using much less training data. For instance, on the e-SNLI dataset, we achieve better performance than standard fine-tuning while using only 12.5% of the full dataset. Similarly, we achieve dataset size reductions of 75%, 25% and 20% on ANLI, CQA, and SVAMP, respectively.

Distilling step-by-step compared to standard fine-tuning using 220M T5 models on varying sizes of human-labeled datasets. On all datasets, distilling step-by-step outperforms standard fine-tuning trained on the full dataset while using far fewer training examples.

Smaller deployed model size

Compared to few-shot CoT prompted LLMs, distilling step-by-step achieves better performance using much smaller model sizes. For instance, on the e-SNLI dataset, we achieve better performance than 540B PaLM by using a 220M T5 model. On ANLI, we achieve better performance than 540B PaLM by using a 770M T5 model, which is over 700x smaller. Note that on ANLI, the same 770M T5 model struggles to match PaLM’s performance using standard fine-tuning.

We perform distilling step-by-step and standard fine-tuning on T5 models of varying sizes and compare their performance to LLM baselines, i.e., few-shot CoT and PINTO tuning. Distilling step-by-step outperforms the LLM baselines with much smaller models, e.g., over 700x smaller on ANLI. Standard fine-tuning fails to match the LLM’s performance at the same model size.

Distilling step-by-step outperforms few-shot LLMs with smaller models using less data

Finally, we explore the smallest model sizes and the least amount of data distilling step-by-step needs to outperform PaLM’s few-shot performance. For instance, on ANLI, we surpass the performance of the 540B PaLM model with a 770M T5 model that uses only 80% of the full dataset. Meanwhile, standard fine-tuning cannot catch up with PaLM’s performance even using 100% of the full dataset. This suggests that distilling step-by-step reduces both the model size and the amount of data required to outperform LLMs.

We show the minimum T5 model size and the least amount of human-labeled examples required for distilling step-by-step to outperform the LLM’s few-shot CoT performance, found by a coarse-grained search. Distilling step-by-step outperforms few-shot CoT not only with much smaller models but also with far fewer training examples than standard fine-tuning requires.

Conclusion

We propose distilling step-by-step, a novel mechanism that extracts rationales from LLMs and uses them as informative supervision for training small, task-specific models. We show that distilling step-by-step reduces both the amount of training data needed to curate task-specific smaller models and the model size required to match, and even surpass, a few-shot prompted LLM’s performance. Overall, distilling step-by-step presents a resource-efficient paradigm that tackles the trade-off between model size and the training data required.

Availability on Google Cloud Platform

Distilling step-by-step is available for private preview on Vertex AI. If you are interested in trying it out, please contact vertex-llm-tuning-preview@google.com with your Google Cloud Project number and a summary of your use case.

Acknowledgements

This research was conducted by Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. Thanks to Xiang Zhang and Sergey Ioffe for their valuable feedback.
