Factors That Reflect a High-Performance Culture


Navigating the corporate world can feel like steering a ship through turbulent waters. You’re searching for that winning formula: a high-performance culture.

It’s not merely about profits and productivity but how you foster engagement, trust, and continuous learning among your team.

Definition and importance of a high-performance culture

A high-performance culture is an organisational framework built on universally accepted behaviours and norms, established by leaders, and communicated effectively across the team. The definition of this culture encompasses not just rules but also a mindset embraced by every employee.

Clear goals, open communication, accountability, and mutual trust characterise it.

One key characteristic is striving for excellence, where you’re continually encouraged to do your best. This culture reaps significant benefits such as increased productivity, higher employee morale, and better company performance.

Fostering a high-performance culture gives your organisation a competitive edge while ensuring your employees feel valued and driven to contribute their utmost.

Clear communication and goal-setting

Effective communication is crucial in fostering a high-performance culture as it promotes transparency, trust, and mutual understanding among team members. To achieve this, you must develop robust communication strategies to ensure everyone’s on the same page. This includes setting clear expectations, sharing company goals, and providing regular feedback.

Employee empowerment is another essential aspect of a high-performance culture. By giving your team decision-making autonomy, you bolster their confidence and encourage innovation. Empowered employees often feel more invested in their work, increasing productivity.

Furthermore, promoting continuous learning can help your team adapt to changing circumstances and grow professionally and personally. Therefore, consider implementing development programmes that facilitate skill development and knowledge acquisition.

The importance of setting clear and achievable goals for employees

Setting clear and achievable employee goals drives productivity and fosters organisational engagement. The importance of goal alignment cannot be overstated: it ensures everyone works towards the same objectives, creating a cohesive, unified team.

Strategies for practical goal setting include making them specific, measurable, achievable, relevant, and time-bound (SMART). This gives your employees a clear path to success. Involve them in the goal-setting process—it boosts their commitment and motivation.

Communication techniques and tools that support a high-performance culture

In fostering a vibrant workplace, utilising communication techniques and tools that bolster productivity and collaboration is essential. Effective communication strategies are vital to creating an environment that encourages synergy and cooperation.

These strategies may involve regular team meetings, open forums for feedback, or digital platforms for continuous dialogue.

Practical tools like project management apps or social intranet platforms can streamline workflows, promoting transparency and openness. These tools foster collaboration by keeping everyone on the same page and making information accessible to everyone.

Effective communication is a two-way street; it’s about delivering messages and listening to your team’s input.

Empowering and trusting employees

Giving your team the autonomy to make decisions and take ownership fuels their motivation and elevates your organisation’s overall performance.

Below, we explore strategies that foster empowerment and trust, with real-world examples you can apply within your team.

The significance of empowering employees to make decisions and take ownership

Empowering employees to make decisions significantly boosts their sense of ownership and commitment, which are key drivers in a high-performance culture. Decision-making empowerment isn’t just about making employees feel good; it also enhances employee engagement.

When you let your team make choices, they feel valued and involved, which drives them to invest more effort into achieving the company’s goals.

In an ownership culture, everyone feels responsible for the business’ success or failure. It’s not ‘us vs. them,’ but rather ‘we.’ So trust your employees with decisions, because that’s how you build this culture.

Building trust within the organisation to enhance performance

Building trust within an organisation is essential for enhancing overall performance and productivity. Fostering a climate of trust directly impacts your organisational culture, positively influencing cooperation and collaboration.

Trust-building activities should be an integral part of your leadership strategy. Encourage open communication and transparency to build trust among team members. You’re building trust and enhancing performance by showing that you value their ideas and feedback. Remember, when employees feel trusted, they become more committed and motivated.

Promote accountability in your organisational culture to strengthen this bond of trust further. When everyone understands their role and responsibilities better, it eliminates ambiguity and fosters a sense of reliability – another stepping stone in building trust.

Examples of strategies that empower and trust employees

Having established the importance of building trust within your organisation to enhance performance, let’s now delve into strategies for employee empowerment.

Empowering your team goes hand in hand with fostering trust in the workplace. As a leader, you should promote autonomy and decision-making among your employees; this boosts their confidence and reinforces their faith in the organisation.

To do this effectively, create opportunities for professional development and continuous learning to facilitate a growth mindset. Encourage open communication and provide constructive feedback regularly to help them grow.


Continuous learning and improvement

You’ll find immense value in fostering a growth mindset within your team and seizing every chance for learning.

By implementing consistent performance evaluations and feedback systems, you can track progress and show your team that improvement is expected and rewarded.

Promoting a culture of innovation and adaptability lays the foundation for a resilient, forward-thinking workforce ready to take on future challenges.

The value of encouraging a growth mindset and embracing learning opportunities

Encouraging a growth mindset and embracing learning opportunities is essential in a high-performance culture. It fosters innovation and continuous improvement. Understanding that abilities can be developed through dedication and hard work is critical. You’re not limited by what you know now; there’s always room to learn more. This mindset fuels the desire for employee development, pushing everyone to strive to be better than they were yesterday.

As an HR manager or business owner, create an environment where learning is valued. Provide myriad opportunities for your staff to acquire new skills or deepen their knowledge in their field of expertise.

Implementing regular performance evaluations and feedback systems

Moving on from fostering a growth mindset, let’s talk about implementing regular performance evaluations.

It’s not just about creating learning opportunities; it’s also about consistently measuring how well they’re implemented and their impact.

Regular performance reviews provide the necessary feedback for employees to understand where they stand regarding meeting expectations. This is where feedback systems come in handy. They offer an organised way to give constructive criticism and praise when due, leading to performance improvement over time.

Remember, these evaluations mustn’t be one-sided lectures but two-way conversations promoting open dialogue and mutual understanding.

Promoting a culture of innovation and adaptability

Promoting innovation and adaptability starts with cultivating a culture of creativity. Encourage fresh ideas, and don’t be afraid to take calculated risks.

An adaptable mindset allows you to pivot when necessary, turning challenges into opportunities.

Fostering innovation means empowering your team to think outside the box. Create an environment where everyone feels comfortable sharing their wildest ideas without fear of judgement. Reward innovative thinking to reinforce its importance further.

Being adaptable isn’t about making constant, drastic changes, but about flexibility and responsiveness in the face of change.

Teamwork and collaboration

You’ve likely noticed how a collaborative environment can be a game-changer in your workplace. But are you aware of the full range of benefits? Not only does cross-functional teamwork boost performance, but it also encourages diverse thinking and fosters innovation. A Western University study notes that businesses prioritising a high-performance culture have seen remarkable improvements in productivity, team morale, and innovation.

The benefits of fostering a collaborative environment

Fostering a collaborative environment is essential. It significantly boosts employee engagement and productivity, hallmarks of a high-performance culture. When you promote teamwork, you’re not just creating a positive work atmosphere. You’re instilling in your team the mutual respect and understanding that fuel progress.

Collaboration has numerous benefits. It allows for diverse ideas to merge into innovative solutions while enhancing communication within the team. This open dialogue fosters transparency and trust, strengthening team members’ relationships.

How cross-functional teamwork boosts performance

Now that you’ve seen the benefits of fostering a collaborative environment, let’s delve into cross-functional teamwork.

The benefits of cross-functional collaboration are manifold; it sparks innovation, breaks down silos, and accelerates decision-making. To harness these advantages, implementing strategies for improving cross-functional teamwork is essential.

Start by defining clear roles and objectives for every team member to promote accountability. Foster an environment where every opinion matters to encourage active participation. Remember, the role of effective communication in cross-functional performance cannot be overstated.

Regular meetings and open communication channels ensure everyone remains aligned on goals and progress. Enhancing your team’s cross-functional collaboration will significantly boost your company’s performance.

Remote teams and high-performance culture

Adapting a high-performance culture in remote settings poses unique challenges and opportunities. Regular check-ins, digital collaboration tools, and precise documentation become essential. Empowering remote employees is crucial: it’s about trust, ensuring they have the resources to perform their roles effectively, and recognising their contributions from afar.

Additionally, promoting a sense of unity and shared purpose among dispersed team members is vital. Hosting virtual team-building sessions, encouraging cross-functional online collaboration, and maintaining transparent communication can bridge the gap between remote and in-house teams and foster a vibrant, high-performing remote team culture.

Techniques for improving collaboration and team dynamics

Improving collaboration and team dynamics is crucial for your company’s success; several techniques can help achieve this.

Start by improving communication. Encourage open dialogue where everyone feels heard. This transparency nurtures trust, enhancing collaboration within your team.

Next up, invest in staff development programmes focused on fostering teamwork. These could involve team-building activities or workshops teaching effective collaborative techniques. Remember, the goal isn’t just to work together but to learn how to leverage each other’s strengths effectively.

Finally, use technology for better collaboration, such as project management tools or digital platforms supporting real-time interaction among team members.

Leadership and role modelling

You’re about to delve into a crucial discussion on leadership and its impact on high-performance culture.

You’ll explore how effective leadership behaviours inspire and are vital in creating and sustaining this environment.

Additionally, you’ll learn the importance of developing leaders who genuinely embody and actively promote the desired cultural values within an organisation.

The role of leadership in creating and maintaining a high-performance culture

Leadership plays a significant role in establishing and upholding a high-performance culture. As a leader, you are instrumental in fostering trust within your team. Your actions directly influence the level of trust between you and your employees, affecting their performance.

It is through this trust that employee empowerment becomes not just possible but practical. Your leadership influence sets the tone for the entire organisation, shaping its culture.

By setting clear expectations and providing consistent feedback, you empower your team to take ownership of their work and tackle challenges with resilience. This empowerment is vital to maintaining a high-performance culture where individuals are motivated to give their best.

Developing leaders who embody and promote the desired culture

Developing leaders who embody and promote the desired culture is about more than just selecting individuals with the right skills. It’s about nurturing those who demonstrate company values in their actions and decisions.

It’s crucial to note that leadership behaviours play a significant role in culture promotion. As you’re developing leaders, look for those who walk the talk. These are individuals whose actions align with what they preach.

Remember, your employees aren’t just following orders – they’re observing behaviours and absorbing cultural cues from their leaders. So, when you champion leaders who embody your company’s values, you’re building strong leadership and promoting a positive culture that can drive business success.

Effective leader development goes hand-in-hand with culture promotion for long-term organisational growth.

In Summary

You’ve got this! Building a high-performance culture might seem daunting, but research suggests businesses with such cultures are around 12% more productive. That’s massive! So, keep empowering your team, fostering collaboration, and promoting continuous learning. It’ll not just boost productivity but also create a positive environment.

You’re on the path to success!

About the Author

Sophia Smith is a lifestyle and social media blogger, and graphic designer. She is an aesthete and photography lover by heart who absolutely loves everything that includes visual communication. Sophia is also very passionate about yoga and mindful living. Lately, she has written about digital marketing topics, from content to social.

The post Factors That Reflect a High-Performance Culture appeared first on The 6Q Blog.


Best of both worlds: Achieving scalability and quality in text clustering

Posted by Sara Ahmadian and Mehran Kazemi, Research Scientists, Google Research

Clustering is a fundamental, ubiquitous problem in data mining and unsupervised machine learning, where the goal is to group together similar items. The standard forms of clustering are metric clustering and graph clustering. In metric clustering, a given metric space defines distances between data points, which are grouped together based on their separation. In graph clustering, a given graph connects similar data points through edges, and the clustering process groups data points together based on the connections between them. Both clustering forms are particularly useful for large corpora where class labels can’t be defined. Examples of such corpora are the ever-growing digital text collections of various internet platforms, with applications including organizing and searching documents, identifying patterns in text, and recommending relevant documents to users (see more examples in the following posts: clustering related queries based on user intent and practical differentially private clustering).
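To make the two problem forms concrete, here is a minimal Python sketch (our own illustration, not code from the post) showing how a metric clustering instance can be turned into a graph clustering instance by connecting every pair of points closer than a threshold:

```python
import math

def threshold_graph(points, tau):
    """Connect every pair of points at distance less than tau, turning a
    metric clustering instance (points + distances) into a graph clustering
    instance (nodes + similarity edges)."""
    edges = set()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) < tau:
                edges.add((i, j))
    return edges

# Two nearby points become connected; the distant one stays isolated.
print(threshold_graph([(0, 0), (0.1, 0), (5, 5)], tau=1.0))  # {(0, 1)}
```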

The choice of text clustering method often presents a dilemma. One approach is to use embedding models, such as BERT or RoBERTa, to define a metric clustering problem. Another is to utilize cross-attention (CA) models, such as PaLM or GPT, to define a graph clustering problem. CA models can provide highly accurate similarity scores, but constructing the input graph may require a prohibitive quadratic number of inference calls to the model. On the other hand, a metric space can efficiently be defined by distances of embeddings produced by embedding models. However, these similarity distances are typically of substantially lower quality than the similarity signals of CA models, and hence the produced clustering can be of much lower quality.

An overview of the embedding-based and cross-attention–based similarity scoring functions and their scalability vs. quality dilemma.

Motivated by this, in “KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals”, presented at ICLR 2023, we describe a novel clustering algorithm that effectively combines the scalability benefits from embedding models and the quality from CA models. This graph clustering algorithm has query access to both the CA model and the embedding model, however, we apply a budget on the number of queries made to the CA model. This algorithm uses the CA model to answer edge queries, and benefits from unlimited access to similarity scores from the embedding model. We describe how this proposed setting bridges algorithm design and practical considerations, and can be applied to other clustering problems with similar available scoring functions, such as clustering problems on images and media. We demonstrate how this algorithm yields high-quality clusters with almost a linear number of query calls to the CA model. We have also open-sourced the data used in our experiments.

The clustering algorithm

The KwikBucks algorithm is an extension of the well-known KwikCluster algorithm (Pivot algorithm). The high-level idea is to first select a set of documents (i.e., centers) with no similarity edge between them, and then form clusters around these centers. To obtain the quality from CA models and the runtime efficiency from embedding models, we introduce the novel combo similarity oracle mechanism. In this approach, we utilize the embedding model to guide the selection of queries to be sent to the CA model. When given a set of center documents and a target document, the combo similarity oracle mechanism outputs a center from the set that is similar to the target document, if present. The combo similarity oracle enables us to save on budget by limiting the number of query calls to the CA model when selecting centers and forming clusters. It does this by first ranking centers based on their embedding similarity to the target document, and then querying the CA model for the pair (i.e., target document and ranked center), as shown below.

A combo similarity oracle that for a set of documents and a target document, returns a similar document from the set, if present.
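As a rough sketch of this mechanism (our own illustration; `embed`, `ca_edge`, and `budget` are assumed names, not the paper's API), the oracle ranks centers by cheap embedding similarity and spends expensive CA queries only on the top-ranked candidates:

```python
def combo_similarity_oracle(centers, target, embed, ca_edge, budget):
    """Return a center similar to `target`, if one is found within `budget`
    cross-attention queries. `embed(doc)` is the cheap signal (a vector);
    `ca_edge(a, b)` is the expensive signal (a boolean edge query)."""
    t = embed(target)

    # Cheap step: rank centers by embedding (dot-product) similarity.
    def sim(c):
        v = embed(c)
        return sum(x * y for x, y in zip(v, t))

    ranked = sorted(centers, key=sim, reverse=True)

    # Expensive step: query the CA model only for the top-ranked pairs.
    for c in ranked[:budget]:
        if ca_edge(c, target):
            return c
    return None  # no similar center found within the query budget
```

With a budget of one, the oracle asks the CA model about only the single most promising center, which is how KwikBucks keeps the total number of CA calls near-linear.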

We then perform a post-processing step to merge clusters if there is a strong connection between two of them, i.e., when the number of connecting edges is higher than the number of missing edges between two clusters. Additionally, we apply the following steps for further computational savings on queries made to the CA model, and to improve performance at runtime:

We leverage query-efficient correlation clustering to form a set of centers from a set of randomly selected documents instead of selecting these centers from all the documents (in the illustration below, the center nodes are red).

We apply the combo similarity oracle mechanism to perform the cluster assignment step in parallel for all non-center documents and leave documents with no similar center as singletons. In the illustration below, the assignments are depicted by blue arrows and initially two (non-center) nodes are left as singletons due to no assignment.

In the post-processing step, to ensure scalability, we use the embedding similarity scores to filter down the potential mergers (in the illustration below, the green dashed boundaries show these merged clusters).

Illustration of progress of the clustering algorithm on a given graph instance.
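The merge rule above can be sketched as follows (an illustration under our own assumptions; `has_edge` stands in for the budgeted, embedding-filtered CA queries the algorithm actually uses):

```python
from itertools import product

def should_merge(cluster_a, cluster_b, has_edge):
    """Merge two clusters when the number of similarity edges between them
    exceeds the number of missing edges. `has_edge(u, v)` is an assumed
    pairwise similarity predicate."""
    pairs = list(product(cluster_a, cluster_b))
    connecting = sum(1 for u, v in pairs if has_edge(u, v))
    missing = len(pairs) - connecting
    return connecting > missing
```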

Results

We evaluate the novel clustering algorithm on various datasets with different properties using different embedding-based and cross-attention–based models. We compare the clustering algorithm’s performance with the two best performing baselines (see the paper for more details):

To evaluate the quality of clustering, we use precision and recall. Precision is the percentage of similar pairs out of all co-clustered pairs, and recall is the percentage of co-clustered similar pairs out of all similar pairs. To measure the quality of the obtained solutions from our experiments, we use the F1-score, the harmonic mean of precision and recall, where 1.0 is the highest possible value, indicating perfect precision and recall, and 0 is the lowest, indicating that either precision or recall is zero. The table below reports the F1-score for KwikBucks and various baselines in the case that we allow only a linear number of queries to the CA model. We show that KwikBucks offers a substantial boost in performance, with a 45% relative improvement over the best baseline when averaged across all datasets.
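These pair-based metrics can be computed in a few lines of Python (our own illustration of the standard definitions, not the paper's evaluation code):

```python
from itertools import combinations

def pairwise_f1(clusters, similar_pairs):
    """Pair-based clustering evaluation: precision over co-clustered pairs,
    recall over ground-truth similar pairs, combined as the harmonic mean.
    `clusters` is a list of sets of item ids; `similar_pairs` holds the
    ground-truth similar pairs (order within a pair does not matter)."""
    co_clustered = {frozenset(p) for c in clusters for p in combinations(c, 2)}
    gold = {frozenset(p) for p in similar_pairs}
    if not co_clustered or not gold:
        return 0.0
    tp = len(co_clustered & gold)
    precision = tp / len(co_clustered)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Clusters {1,2,3},{4,5} co-cluster 4 pairs; 3 of them are truly similar.
print(pairwise_f1([{1, 2, 3}, {4, 5}], [(1, 2), (2, 3), (4, 5)]))
```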

The figure below compares the clustering algorithm’s performance with baselines using different query budgets. We observe that KwikBucks consistently outperforms other baselines at various budgets.

A comparison of KwikBucks with top-2 baselines when allowed different budgets for querying the cross-attention model.

Conclusion

Text clustering often presents a dilemma in the choice of similarity function: embedding models are scalable but lack quality, while cross-attention models offer quality but substantially hurt scalability. We present a clustering algorithm that offers the best of both worlds: the scalability of embedding models and the quality of cross-attention models. KwikBucks can also be applied to other clustering problems with multiple similarity oracles of varying accuracy levels. This is validated with an exhaustive set of experiments on various datasets with diverse properties. See the paper for more details.

Acknowledgements

This project was initiated during Sandeep Silwal’s summer internship at Google in 2022. We would like to express our gratitude to our co-authors, Andrew McCallum, Andrew Nystrom, Deepak Ramachandran, and Sandeep Silwal, for their valuable contributions to this work. We also thank Ravi Kumar and John Guilyard for assistance with this blog post.


Zero-shot adaptive prompting of large language models

Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team

Recent advances in large language models (LLMs) are very promising as reflected in their capability for general problem-solving in few-shot and zero-shot setups, even without explicit training on these tasks. This is impressive because in the few-shot setup, LLMs are presented with only a few question-answer demonstrations prior to being given a test question. Even more challenging is the zero-shot setup, where the LLM is directly prompted with the test question only.

Even though the few-shot setup has dramatically reduced the amount of data required to adapt a model for a specific use case, there are still cases where generating sample prompts can be challenging. For example, handcrafting even a small number of demos for the broad range of tasks covered by general-purpose models can be difficult or, for unseen tasks, impossible. For instance, for tasks like summarization of long articles or those that require domain knowledge (e.g., medical question answering), it can be challenging to generate sample answers. In such situations, models with high zero-shot performance are useful since no manual prompt generation is required. However, zero-shot performance is typically weaker as the LLM is not presented with guidance and thus is prone to spurious output.

In “Better Zero-shot Reasoning with Self-Adaptive Prompting”, published at ACL 2023, we propose Consistency-Based Self-Adaptive Prompting (COSP) to address this dilemma. COSP is a zero-shot automatic prompting method for reasoning problems that carefully selects and constructs pseudo-demonstrations for LLMs using only unlabeled samples (that are typically easy to obtain) and the models’ own predictions. With COSP, we largely close the performance gap between zero-shot and few-shot while retaining the desirable generality of zero-shot prompting. We follow this with “Universal Self-Adaptive Prompting” (USP), accepted at EMNLP 2023, in which we extend the idea to a wide range of general natural language understanding (NLU) and natural language generation (NLG) tasks and demonstrate its effectiveness.

Prompting LLMs with their own outputs

Knowing that LLMs benefit from demonstrations and have at least some zero-shot abilities, we wondered whether the model’s zero-shot outputs could serve as demonstrations for the model to prompt itself. The challenge is that zero-shot solutions are imperfect, and we risk giving LLMs poor quality demonstrations, which could be worse than no demonstrations at all. Indeed, the figure below shows that adding a correct demonstration to a question can lead to a correct solution of the test question (Demo1 with question), whereas adding an incorrect demonstration (Demo 2 + questions, Demo 3 with questions) leads to incorrect answers. Therefore, we need to select reliable self-generated demonstrations.

Example inputs & outputs for reasoning tasks, which illustrates the need for carefully designed selection procedure for in-context demonstrations (MultiArith dataset & PaLM-62B model): (1) zero-shot chain-of-thought with no demo: correct logic but wrong answer; (2) correct demo (Demo1) and correct answer; (3) correct but repetitive demo (Demo2) leads to repetitive outputs; (4) erroneous demo (Demo3) leads to a wrong answer; but (5) combining Demo3 and Demo1 again leads to a correct answer.

COSP leverages a key observation of LLMs: that confident and consistent predictions are more likely correct. This observation, of course, depends on how good the uncertainty estimate of the LLM is. Luckily, in large models, previous works suggest that the uncertainty estimates are robust. Since measuring confidence requires only model predictions, not labels, we propose to use this as a zero-shot proxy of correctness. The high-confidence outputs and their inputs are then used as pseudo-demonstrations.

With this as our starting premise, we estimate the model’s confidence in its output based on its self-consistency and use this measure to select robust self-generated demonstrations. We ask LLMs the same question multiple times with zero-shot chain-of-thought (CoT) prompting. To guide the model to generate a range of possible rationales and final answers, we include randomness controlled by a “temperature” hyperparameter. In an extreme case, if the model is 100% certain, it should output identical final answers each time. We then compute the entropy of the answers to gauge the uncertainty — the answers that have high self-consistency and for which the LLM is more certain, are likely to be correct and will be selected.
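The entropy-based consistency measure can be sketched as follows (our own minimal illustration; the paper's full scoring also accounts for repetition and diversity):

```python
import math
from collections import Counter

def answer_entropy(samples):
    """Entropy of the final answers across multiple sampled outputs for the
    same question. Low entropy means high self-consistency, which serves as
    a zero-shot proxy for correctness."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(answer_entropy(["8", "8", "8"]))  # 0.0  (fully consistent)
print(answer_entropy(["8", "9"]))       # 1.0  (maximally split over two answers)
```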

Assuming that we are presented with a collection of unlabeled questions, the COSP method is:

Input each unlabeled question into an LLM, obtaining multiple rationales and answers by sampling the model multiple times. The most frequent answers are highlighted, followed by a score that measures consistency of answers across multiple sampled outputs (higher is better). In addition to favoring more consistent answers, we also penalize repetition within a response (i.e., with repeated words or phrases) and encourage diversity of selected demonstrations. We encode the preference towards consistent, un-repetitive and diverse outputs in the form of a scoring function that consists of a weighted sum of the three scores for selection of the self-generated pseudo-demonstrations.
We concatenate the pseudo-demonstrations into test questions, feed them to the LLM, and obtain a final predicted answer.

Illustration of COSP: In Stage 1 (left), we run zero-shot CoT multiple times to generate a pool of demonstrations (each consisting of the question, generated rationale and prediction) and assign a score. In Stage 2 (right), we augment the current test question with pseudo-demos (blue boxes) and query the LLM again. A majority vote over outputs from both stages forms the final prediction.

COSP focuses on question-answering tasks with CoT prompting for which it is easy to measure self-consistency since the questions have unique correct answers. But this can be difficult for other tasks, such as open-ended question-answering or generative tasks that don’t have unique answers (e.g., text summarization). To address this limitation, we introduce USP in which we generalize our approach to other general NLP tasks:

Classification (CLS): Problems where we can compute the probability of each class using the neural network output logits of each class. In this way, we can measure the uncertainty without multiple sampling by computing the entropy of the logit distribution.
Short-form generation (SFG): Problems like question answering where we can use the same procedure mentioned above for COSP, but, if necessary, without the rationale-generating step.
Long-form generation (LFG): Problems like summarization and translation, where the questions are often open-ended and the outputs are unlikely to be identical, even if the LLM is certain. In this case, we use an overlap metric in which we compute the average of the pairwise ROUGE score between the different outputs to the same query.
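Two of these confidence measures can be sketched as follows (our own illustration; `overlap` stands in for a pairwise ROUGE scorer, which we mock here):

```python
import math
from itertools import combinations

def cls_confidence(logits):
    """CLS: negative entropy of the softmax distribution over class logits
    (higher = more confident). No multiple sampling needed."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return sum(p * math.log(p) for p in probs if p > 0)  # = -entropy

def lfg_confidence(outputs, overlap):
    """LFG: average pairwise overlap (e.g. ROUGE) between sampled outputs
    to the same query; identical outputs score highest."""
    pairs = list(combinations(outputs, 2))
    return sum(overlap(a, b) for a, b in pairs) / len(pairs)
```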

Illustration of USP in exemplary tasks (classification, QA and text summarization). Similar to COSP, the LLM first generates predictions on an unlabeled dataset whose outputs are scored with logit entropy, consistency or alignment, depending on the task type, and pseudo-demonstrations are selected from these input-output pairs. In Stage 2, the test instances are augmented with pseudo-demos for prediction.

We compute the relevant confidence scores depending on the type of task on the aforementioned set of unlabeled test samples. After scoring, similar to COSP, we pick the confident, diverse and less repetitive answers to form a model-generated pseudo-demonstration set. We finally query the LLM again in a few-shot format with these pseudo-demonstrations to obtain the final predictions on the entire test set.

Key Results

For COSP, we focus on a set of six arithmetic and commonsense reasoning problems, and we compare against 0-shot-CoT (i.e., “Let’s think step by step” only). We use self-consistency in all baselines so that they use roughly the same amount of computational resources as COSP. Compared across three LLMs, we see that zero-shot COSP significantly outperforms the standard zero-shot baseline.

USP improves significantly on 0-shot performance. “CLS” is an average of 15 classification tasks; “SFG” is the average of five short-form generation tasks; “LFG” is the average of two summarization tasks. “SFG (BBH)” is an average of all BIG-Bench Hard tasks, where each question is in SFG format.

For USP, we expand our analysis to a much wider range of tasks, including more than 25 classifications, short-form generation, and long-form generation tasks. Using the state-of-the-art PaLM 2 models, we also test against the BIG-Bench Hard suite of tasks where LLMs have previously underperformed compared to people. We show that in all cases, USP again outperforms the baselines and is competitive to prompting with golden examples.

Accuracy on BIG-Bench Hard tasks with PaLM 2-M (each line represents a task of the suite). The gain/loss of USP (green stars) over standard 0-shot (green triangles) is shown in percentages. “Human” refers to average human performance; “AutoCoT” and “Random demo” are baselines we compared against in the paper; and “3-shot” is the few-shot performance for three handcrafted demos in CoT format.

We also analyze the working mechanism of USP by validating the key observation above on the relation between confidence and correctness, and we found that in an overwhelming majority of the cases, USP picks confident predictions that are more likely better in all task types considered, as shown in the figure below.

USP picks confident predictions that are more likely better. Ground-truth performance metrics against USP confidence scores in selected tasks in various task types (blue: CLS, orange: SFG, green: LFG) with PaLM-540B.
Conclusion

Zero-shot inference is a highly sought-after capability of modern LLMs, yet success in this setting poses unique challenges. We propose COSP and USP, a family of versatile, zero-shot automatic prompting techniques applicable to a wide range of tasks. We show large improvements over state-of-the-art baselines across numerous task and model combinations.

Acknowledgements

This work was conducted by Xingchen Wan, Ruoxi Sun, Hootan Nakhost, Hanjun Dai, Julian Martin Eisenschlos, Sercan Ö. Arık, and Tomas Pfister. We would like to thank Jinsung Yoon and Xuezhi Wang for providing helpful reviews, and other colleagues at Google Cloud AI Research for their discussion and feedback.
