
Comprehensive Guide for Interview Questions on Transfer Learning

source link: https://www.analyticsvidhya.com/blog/2022/11/comprehensive-guide-for-interview-questions-on-transfer-learning/

This article was published as a part of the Data Science Blogathon.

Figure: Transfer Learning (Source: Canva)

Introduction

Competitive Deep Learning models rely on a wealth of training data, computing resources, and time. However, there are many tasks for which we don’t have enough labeled data at our disposal. Moreover, the need for running deep learning models on edge devices with limited processing power and training time is also increasing. The workaround for these issues is Transfer Learning!

Given the gravity of the issues and the popularity and wide usage of transfer learning in companies, startups, business firms, and academia to build new solutions, it is imperative to have a crystal clear understanding of Transfer Learning to bag a position for yourself in the industry.

In this article, I have compiled a list of twelve important questions on Transfer Learning that you could use as a guide to get more familiar with the topic and also formulate an effective answer to succeed in your next interview.

Interview Questions on Transfer Learning

Following are some frequently asked interview questions on Transfer Learning, with detailed answers.

Question 1: What is Transfer Learning in NLP? Explain with Examples.

Answer: Transfer Learning is an ML approach where a model trained for a source task is repurposed for other related tasks (target task).

Usually, models are developed and trained to perform isolated tasks. However, to leverage our resources well and cut down the training time, the knowledge gained from a model used in one task (source task) can be reused as a starting point for other related tasks (target tasks).

The more related the tasks, the easier it is for us to transfer or cross-utilize our knowledge.

In essence, Transfer Learning is inspired by human beings’ ability to transfer and generalize knowledge gathered in a related domain (source domain) to improve learning performance on the target-domain task. Let’s understand this with the help of the following examples:

Example 1: Let’s say you know how to play Xiangqi and now want to learn Chess. Given the overlap between the two games, you can apply and generalize your Xiangqi knowledge while learning Chess and pick it up more quickly.

Example 2: Assume you know how to play the violin. At least some of the knowledge you gathered while learning it (musical notes, nuances, etc.) can be applied to learning the piano, again speeding up the process.

Example 3: Similarly, if you are well-versed in riding a bicycle, it would be beneficial to leverage that “knowledge/experience” while trying to learn how to ride a scooter/bike.

We don’t learn everything from the outset in each of these scenarios. We cross-transfer and apply/generalize knowledge from what we have learned in the past!


Figure 1: We can infer intuitive examples of transfer learning from our day-to-day lives (Source: Arxiv)

During transfer learning, the application of knowledge refers to leveraging the source task’s attributes and characteristics, which are applied and mapped onto the target task.


Figure 2: Illustration of Transfer Learning where knowledge could be in the form of instances/features/parameters/relations (Source: Arxiv)

During transfer learning, the body of a pre-trained model (which has already learned the source task) is reused, and an untrained head consisting of a few dense layers is attached to learn the target task. The body weights capture broad source-domain features, and they are used to initialize a new model for the target task.


Figure 3: Traditional supervised learning (left) Vs. Transfer Learning (right) (Source: NLP with Transformers book)

Note: The applied knowledge does not necessarily have a beneficial effect on new tasks. We will take a look at this in Question 9.
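The body-plus-new-head idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the article: the “pre-trained” body weights here are random stand-ins for weights you would normally load from a source model, and only the newly attached head is trained on a toy target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pre-trained" body: in practice these weights would be loaded from a
# --- source model; here they are random stand-ins for illustration.
W_body = rng.normal(size=(4, 8))  # maps 4 raw inputs -> 8 learned features

def body(x):
    """Frozen feature extractor: its weights are never updated."""
    return np.tanh(x @ W_body)

# --- New head: a small logistic-regression layer trained on the target task.
X = rng.normal(size=(200, 4))                # toy target-task inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy binary labels

w_head = np.zeros(8)
b_head = 0.0
lr = 0.5

feats = body(X)                  # frozen features, computed once
for _ in range(300):             # gradient descent on the head only
    p = 1 / (1 + np.exp(-(feats @ w_head + b_head)))
    grad = p - y
    w_head -= lr * feats.T @ grad / len(y)
    b_head -= lr * grad.mean()

acc = (((feats @ w_head + b_head) > 0) == y).mean()
print(f"head-only accuracy: {acc:.2f}")
```

In a real framework such as Keras or PyTorch you would instead load saved weights and mark the body’s parameters as non-trainable; the principle, freezing the body and fitting only the head, is the same.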

Question 2: Why Should We Use Transfer Learning?

Answer: When the target task is at least somewhat related to the source domain and task, it’s recommended to leverage Transfer Learning, since it can help us in the following aspects:

1. Saves Time: Training a large model from scratch takes a lot of time, from a few days to weeks. This time can be cut down substantially by reusing the knowledge of a pre-trained (source) model.

2. Saves Resources + Economical + Environment-Friendly: Transfer learning doesn’t involve training the model from scratch, so it saves compute resources, is more economical, and is more environment-friendly.

3. Helps in building effective models when labeled data is scarce: In scenarios where we have very little data at our disposal, with the help of transfer learning, an effective machine learning model can be built using little training data.

4. Better Performance: During the positive transfer, it often yields better results than a model trained using supervised learning from scratch.


Figure 4: Effect of Transfer Learning on the performance of the model (Source: Machine Learning Mastery)

Question 3: List the Different Types of Transfer Learning.

Answer: Transfer Learning can be classified based on Problem and Solution. The following diagram summarizes the taxonomy.

transfer learning

Figure 5: Types of Transfer Learning (Source: Arxiv)

1. Problem Categorization:

Label-Setting-Based Categorization:

  • Transductive Transfer Learning
  • Inductive Transfer Learning
  • Unsupervised Transfer Learning

Space-Setting-Based Categorization:

  • Homogeneous Transfer Learning
  • Heterogeneous Transfer Learning

2. Solution Categorization:

  • Instance-based Approach
  • Feature-based Approach
  • Parameter-Based Approach
  • Relational-Based Approach
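As a rough illustration of the parameter-based approach, the sketch below (assumed for illustration, not from the article) initializes a target model with a copy of the source model’s parameters and then fine-tunes it with one gradient step on toy target data; the other approaches instead reuse instances, features, or relational knowledge.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical source model: a single weight matrix "learned" on the source
# task (random stand-in here; in practice these would be trained weights).
source_weights = rng.normal(size=(8, 3))

# Parameter-based transfer: the target model does not start from random
# weights -- it is initialized with a copy of the source parameters ...
target_weights = source_weights.copy()

# ... and then fine-tuned on target-task data (one illustrative MSE step).
X = rng.normal(size=(16, 8))
Y = rng.normal(size=(16, 3))
lr = 0.01
grad = X.T @ (X @ target_weights - Y) / len(X)   # gradient of mean squared error
target_weights -= lr * grad

# The source model is untouched; the target model has drifted from it slightly.
print(float(np.abs(target_weights - source_weights).max()))
```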

Conclusion

This article covers the twelve most important interview questions on Transfer Learning. Using them as a guide, you can better understand the fundamentals of the topic, formulate effective answers, and present them to the interviewer.

To summarize, in this article, we learned the following:

  1. In transfer learning, the knowledge gained from a model used in the source task can be reused as a starting point for other related target tasks.
  2. Transfer Learning saves training time and resources and helps build competitive models even when labeled data is scarce.
  3. Sequential Transfer Learning is the process of learning multiple tasks sequentially, i.e., the knowledge is transferred to multiple tasks (T1, T2, …, Tn) one after another.
  4. Fine-tuning a pre-trained model on a massive semi-related dataset has proved to be a simple and effective approach for many problems.
  5. Multi-Task Learning is the process of learning multiple tasks in parallel; for a given pre-trained model M, the learning is transferred to multiple tasks (T1, T2, …, Tn) at once.
  6. During the negative transfer, we witness degradation in the performance of the model.
  7. During transfer learning, we must consider what to transfer, when, and how to transfer. Moreover, the model’s input must be the same size it was primarily trained with.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
