Questions tagged [self-supervised-learning]

20 questions
12
votes
3 answers

Metric learning and contrastive learning difference

I researched some materials and learned that the goal of both contrastive learning and metric learning is to learn an embedding space in which similar sample pairs stay close to each other while dissimilar ones are far apart. But what is the…
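
A minimal sketch of the two loss families being contrasted, assuming PyTorch; the batch size, embedding dimension and temperature below are illustrative only:

    import torch
    import torch.nn.functional as F

    # Metric learning: triplet loss pulls an anchor toward a positive and pushes
    # it away from one explicit negative by at least a margin.
    anchor, positive, negative = torch.randn(3, 8, 128).unbind(0)  # batch of 8, dim 128
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)

    # Contrastive learning (InfoNCE / NT-Xent style): every other sample in the
    # batch acts as a negative, and the loss is a softmax classification of the
    # positive among all candidates.
    z1 = F.normalize(torch.randn(8, 128), dim=1)  # view-1 embeddings
    z2 = F.normalize(torch.randn(8, 128), dim=1)  # view-2 embeddings
    logits = z1 @ z2.t() / 0.1                    # cosine similarities / temperature
    labels = torch.arange(8)                      # positive pairs sit on the diagonal
    info_nce = F.cross_entropy(logits, labels)

    print(triplet.item(), info_nce.item())

The structural difference is mainly in how negatives are sourced: explicit anchor/positive/negative tuples versus the rest of the batch.
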
4
votes
1 answer

SimCLR does not learn representations

So I'm trying to train a SimCLR network with a custom lightweight ConvNet backbone (I already tried it with a ResNet) on a dataset containing the first 5 letters of the alphabet, out of which two are randomly selected and placed at random positions in the…
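
A common sanity check for this symptom is a linear probe on frozen features; the sketch below assumes a PyTorch backbone module and labelled train/test loaders (all hypothetical names), with an illustrative epoch count and learning rate:

    import torch
    import torch.nn as nn

    def linear_probe(backbone, train_loader, test_loader, feat_dim, num_classes, device="cpu"):
        """Freeze the encoder and train only a linear classifier on its features."""
        backbone.eval().to(device)
        clf = nn.Linear(feat_dim, num_classes).to(device)
        opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(10):
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                with torch.no_grad():          # encoder stays frozen
                    feats = backbone(x)
                loss = loss_fn(clf(feats), y)
                opt.zero_grad()
                loss.backward()
                opt.step()

        correct = total = 0
        with torch.no_grad():
            for x, y in test_loader:
                preds = clf(backbone(x.to(device))).argmax(dim=1).cpu()
                correct += (preds == y).sum().item()
                total += y.numel()
        return correct / total

If probe accuracy stays near chance, the problem usually lies in the augmentations or the contrastive loss rather than in the classifier.
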
3
votes
0 answers

CLIP for multi-label classification

I am using CLIP to determine the similarity between words and images. For now I am using this repo and the following code, and for classification it gives great results. I would need it for multi-label classification, in which I would need to use…
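
The repo referenced in the question is not shown here, so the sketch below assumes the openai/CLIP package; the label set, image path and similarity threshold are illustrative. The idea is to threshold each label's similarity independently instead of taking a softmax over all labels:

    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    labels = ["a dog", "a cat", "a person", "a car"]          # illustrative label set
    image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
    text = clip.tokenize([f"a photo of {l}" for l in labels]).to(device)

    with torch.no_grad():
        img_feat = model.encode_image(image)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        sims = (img_feat @ txt_feat.t()).squeeze(0)            # one similarity per label

    # Multi-label: treat each label independently instead of softmax over labels.
    threshold = 0.25                                           # tune on a validation set
    predicted = [l for l, s in zip(labels, sims.tolist()) if s > threshold]
    print(predicted)
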
2
votes
1 answer

Improving Deep Learning Model to Detect Train Wagon Gaps in Variable Conditions

Our team records video streams of moving trains from various camera locations with differing backgrounds and distances from the rails. Our task is to collect information about each wagon, which requires detecting gaps between them. We have trained a…
2
votes
1 answer

How to use k-means clustering to visualise learnt features of a CNN model?

Recently I was going through the paper "Intriguing Properties of Contrastive Losses" (https://arxiv.org/abs/2011.02803). In the paper (Section 3.2), the authors try to determine how well the SimCLR framework has allowed the ResNet50 model to learn…
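
A minimal sketch of the usual recipe with plain PyTorch/scikit-learn; the headless ResNet-50 stands in for the SimCLR-pretrained backbone, and `loader` is an assumed DataLoader over the evaluation images:

    import numpy as np
    import torch
    import torchvision
    from sklearn.cluster import KMeans
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    # Stand-in encoder: a ResNet-50 with its classification head removed.
    # In the question's setting this would be the SimCLR-pretrained backbone.
    encoder = torchvision.models.resnet50(weights=None)
    encoder.fc = torch.nn.Identity()
    encoder.eval()

    @torch.no_grad()
    def extract_features(model, loader):
        feats = [model(x).numpy() for x, _ in loader]   # loader yields (images, labels)
        return np.concatenate(feats, axis=0)

    # `loader` is assumed to be a torch DataLoader over the evaluation images.
    features = extract_features(encoder, loader)        # shape (N, 2048)
    clusters = KMeans(n_clusters=10, n_init=10).fit_predict(features)

    # t-SNE is only for the 2D plot; k-means runs in the full 2048-d feature space.
    coords = TSNE(n_components=2, init="pca").fit_transform(features)
    plt.scatter(coords[:, 0], coords[:, 1], c=clusters, s=5, cmap="tab10")
    plt.title("k-means clusters of learned features (t-SNE projection)")
    plt.show()
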
1
vote
0 answers

DistilBert for self-supervision - switch heads for pre-training: MaskedLM and SequenceClassification

Say I want to train a model for sequence classification, so I define my model as: model = DistilBertForSequenceClassification.from_pretrained("bert-base-uncased"). My question is: what would be the optimal way if I want to pre-train this…
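
One hedged sketch of the head-swapping pattern with the Hugging Face transformers API; the checkpoint name and save path are illustrative:

    from transformers import (
        DistilBertForMaskedLM,
        DistilBertForSequenceClassification,
    )

    # 1) Pre-train (or continue pre-training) with the masked-LM head.
    mlm_model = DistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")
    # ... run the MLM training loop on unlabeled, in-domain text here ...
    mlm_model.save_pretrained("./distilbert-domain-mlm")     # illustrative path

    # 2) Re-load the same transformer weights under a classification head.
    #    The shared encoder weights are restored; the new classification head
    #    is randomly initialised (transformers warns about this on load).
    clf_model = DistilBertForSequenceClassification.from_pretrained(
        "./distilbert-domain-mlm", num_labels=2
    )
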
1
vote
1 answer

Unable to train a self-supervised (SSL) model using the Lightly CLI

I am unable to train a self-supervised (SSL) model to create image embeddings using the Lightly CLI (Lightly Platform link). I intend to select diverse examples from my dataset to create an object detection model further downstream, and the image…
1
vote
1 answer

PyTorch: How exactly does the DataLoader get a batch from a Dataset?

I am trying to use PyTorch to implement self-supervised contrastive learning. There is a phenomenon that I can't understand. Here is my transformation code to get two augmented views from the original data: class ContrastiveTransformations: def…
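
A minimal sketch of what the default collate function does with a transform that returns a list of two views, using dummy tensors in place of real images:

    import torch
    from torch.utils.data import Dataset, DataLoader

    class ContrastiveTransformations:
        """Return n_views independently augmented copies of one sample."""
        def __init__(self, base_transform, n_views=2):
            self.base_transform = base_transform
            self.n_views = n_views

        def __call__(self, x):
            return [self.base_transform(x) for _ in range(self.n_views)]

    class DummyDataset(Dataset):
        def __init__(self, transform):
            self.data = torch.randn(100, 3, 32, 32)   # stand-in images
            self.transform = transform

        def __len__(self):
            return len(self.data)

        def __getitem__(self, idx):
            return self.transform(self.data[idx])

    noise = lambda x: x + 0.1 * torch.randn_like(x)   # toy "augmentation"
    ds = DummyDataset(ContrastiveTransformations(noise, n_views=2))
    loader = DataLoader(ds, batch_size=8, shuffle=True)

    views = next(iter(loader))
    # default_collate keeps the outer list structure and stacks along dim 0 inside
    # it: `views` is a list of 2 tensors, each of shape (8, 3, 32, 32).
    print(type(views), len(views), views[0].shape)
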
1
vote
0 answers

Is there a way to visualize the embeddings obtained from Wav2Vec 2.0?

I'm looking to train a wav2vec 2.0 model from scratch, but I am a bit new to the field. Crucially, I would like to train it on a large dataset of non-human speech (i.e. cetacean sounds) in order to capture the underlying structure. Once the…
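
A hedged sketch of extracting and plotting frame-level embeddings with the Hugging Face transformers API; the checkpoint name and the random audio clip are illustrative, and a model trained from scratch on cetacean audio would instead be loaded from its own checkpoint directory:

    import numpy as np
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    name = "facebook/wav2vec2-base"                        # illustrative checkpoint
    feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
    model = Wav2Vec2Model.from_pretrained(name).eval()

    audio = np.random.randn(16000).astype(np.float32)      # stand-in 1 s clip at 16 kHz
    inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")

    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state          # (1, frames, 768)

    # One embedding per ~20 ms frame; project them to 2D to inspect structure.
    frames = hidden.squeeze(0).numpy()
    coords = TSNE(n_components=2, perplexity=5).fit_transform(frames)
    plt.scatter(coords[:, 0], coords[:, 1], s=5)
    plt.title("t-SNE of wav2vec 2.0 frame embeddings (one clip)")
    plt.show()
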
1
vote
1 answer

Downstream Task In Reinforcement Learning

I have read some paragraphs on self-supervision-based reinforcement learning, which enables an agent to learn without human supervision and is an effective strategy for training on unlabeled data. But I came across the term "downstream task" many times. Now, what…
0
votes
0 answers

How can I execute the VideoMoCo training command with --log_dir ./logs_moco?

This is the training command from the GitHub page of VideoMoCo: python train.py \ --log_dir ./logs_moco \ --ckp_dir ./checkpoints_moco \ -a r2plusd_18 \ --lr 0.04 \ -fpc 32 \ -b 256 \ -j 128 \ --epochs 200 \ --schedule 120 160 \ …
0
votes
1 answer

Contrastive Learning for Segmentation

I'm trying to use contrastive learning or self-supervised learning to segment 2D medical images. I want to use something like SimCLR or SimSiam; however, I'm getting stuck on how it should work (for example, using this code). What should be my…
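
The code linked in the question is not reproduced here; the sketch below only illustrates the generic two-stage pattern under assumed names: contrastive pretraining of a headless encoder on unlabeled slices, then a small supervised segmentation head on top:

    import torch
    import torch.nn as nn
    import torchvision

    # Stage 1 (self-supervised): pretrain an encoder on unlabeled 2D slices with a
    # SimCLR/SimSiam-style objective. Here the encoder is just a headless ResNet-18.
    encoder = torchvision.models.resnet18(weights=None)
    encoder.fc = nn.Identity()
    # ... contrastive pretraining loop over unlabeled slices goes here ...

    # Stage 2 (supervised): reuse the pretrained encoder inside a segmentation
    # model and fine-tune on the (typically small) labeled subset.
    class SegModel(nn.Module):
        def __init__(self, encoder, num_classes=2):
            super().__init__()
            # keep everything up to the last conv stage (drop avgpool + fc)
            self.backbone = nn.Sequential(*list(encoder.children())[:-2])
            self.head = nn.Sequential(
                nn.Conv2d(512, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, num_classes, kernel_size=1),
            )

        def forward(self, x):
            feats = self.backbone(x)                      # (B, 512, H/32, W/32)
            logits = self.head(feats)
            # upsample coarse logits back to the input resolution
            return nn.functional.interpolate(
                logits, size=x.shape[-2:], mode="bilinear", align_corners=False
            )

    model = SegModel(encoder)
    out = model(torch.randn(2, 3, 224, 224))              # (2, 2, 224, 224)
    print(out.shape)
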
0
votes
0 answers

How to create a custom labelled dataset (of rotated images) for self-supervised learning on images

[SOLVED] The code has been updated: I wish to create an image dataset for self-supervised learning. I have a dataset of 1000 unlabelled images (.jpg files) and want to create 4000 labelled images for a rotation-angle-detection pretext task. For each…
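
A minimal sketch of the standard RotNet-style construction, assuming the unlabelled .jpg files sit in one folder (the directory path and transform are illustrative); each of the 1000 source images yields four rotated, labelled samples:

    from pathlib import Path
    from PIL import Image
    from torch.utils.data import Dataset
    import torchvision.transforms as T
    import torchvision.transforms.functional as TF

    class RotationDataset(Dataset):
        """Each source image yields 4 samples, labelled by rotation:
        0 -> 0 deg, 1 -> 90 deg, 2 -> 180 deg, 3 -> 270 deg."""
        def __init__(self, image_dir, transform=T.ToTensor()):
            self.paths = sorted(Path(image_dir).glob("*.jpg"))
            self.transform = transform

        def __len__(self):
            return 4 * len(self.paths)            # 1000 images -> 4000 samples

        def __getitem__(self, idx):
            path = self.paths[idx // 4]
            label = idx % 4                       # rotation class
            img = Image.open(path).convert("RGB")
            img = TF.rotate(img, angle=90 * label)
            return self.transform(img), label

The dataset can then be wrapped in a DataLoader like any labelled image dataset.
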
0
votes
1 answer

I want to call some layers during training (but not inference) - the gradients don't seem to flow through these layers

I am using a custom PPO model with ray.tune(), and I want to add some self-supervised learning that depends on batch['obs'], batch['done'], batch['action'] and batch['next_obs']. I have defined some layers in my model that are called only during…
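
This is not specific to RLlib, but the symptom usually reduces to a generic PyTorch rule: a layer only receives gradients if its output feeds into the loss that is actually backpropagated. A toy sketch with stand-in heads and an illustrative loss weight:

    import torch
    import torch.nn as nn

    policy_head = nn.Linear(4, 2)       # stand-in for the PPO policy layers
    aux_head = nn.Linear(4, 4)          # training-only self-supervised layers

    obs = torch.randn(8, 4)
    next_obs = torch.randn(8, 4)

    policy_loss = policy_head(obs).pow(2).mean()          # stand-in PPO loss
    aux_loss = (aux_head(obs) - next_obs).pow(2).mean()   # stand-in SSL loss

    # If only `policy_loss` reaches the optimizer, aux_head gets no gradient:
    policy_loss.backward(retain_graph=True)
    print(aux_head.weight.grad)                           # None

    # The auxiliary loss has to be folded into the loss that is optimized:
    total_loss = policy_loss + 0.1 * aux_loss             # 0.1 is an illustrative weight
    total_loss.backward()
    print(aux_head.weight.grad is not None)               # True

In the RLlib setting this typically means returning the combined loss from whatever loss hook the custom model uses, rather than computing the auxiliary term on the side.
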
0
votes
0 answers

Bibliographic References on Denoising Distributed Acoustic data with Deep Learning

Distributed Acoustic Sensing (DAS): I have an iDAS (intelligent distributed acoustic sensing) dataset obtained from an undersea optical fibre. iDAS data have a 2D representation. On one axis we have the channel axis, i.e. the point on…