A recent machine learning study demonstrates background bias in deep metric learning (DML) by running experiments on three standard DML datasets with five common DML loss functions.

An image retrieval system is a computer system used to browse, search, and retrieve images from a large database of digital images. Feature extraction is the most important aspect of image retrieval: the features form the image representation and should enable effective retrieval. Deep Metric Learning (DML) is a technique for training a neural network to map input images to a low-dimensional embedding space so that similar images lie closer together than dissimilar ones. Unfortunately, DML does not resolve background bias, which causes the network to extract irrelevant features.
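As a concrete illustration of the embedding idea, here is a minimal NumPy sketch. The linear "embedding" below is a toy stand-in for a trained network (it is not the authors' model, and the dimensions are chosen arbitrarily); it simply shows what "similar images map to nearby embeddings" means in practice:

```python
import numpy as np

def embed(x, W):
    """Toy embedding: a linear projection followed by L2 normalization,
    standing in for the neural network used in DML."""
    z = W @ x
    return z / np.linalg.norm(z)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))  # hypothetical 32-d input -> 8-d embedding

img = rng.standard_normal(32)
similar = img + 0.01 * rng.standard_normal(32)  # slight perturbation of img
dissimilar = rng.standard_normal(32)            # an unrelated "image"

d_sim = np.linalg.norm(embed(img, W) - embed(similar, W))
d_dis = np.linalg.norm(embed(img, W) - embed(dissimilar, W))
assert d_sim < d_dis  # similar inputs end up closer in embedding space
```

In a real DML system the projection is a deep network trained with a ranking loss so that this ordering holds for semantically similar images, not just small perturbations.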

In the literature, two main approaches attempt to overcome the problem of background bias: background augmentation and attribution regularization. These methods are designed for classification networks and cannot be used directly for DML networks. Background augmentation techniques replace the backgrounds of the images used for training or inference with random images. Attribution regularization computes the attribution map of an input sample during training in order to identify the regions of the image on which the network focuses. A German research team proposes a study analyzing the background effect on DML using three standard datasets and five common loss functions.

The study presented in this article pursues two main goals:

1) Demonstrating that models trained with DML are not robust against background bias.

2) Proposing a data augmentation technique to address this problem.

The authors measure the dependence of trained DML models on the image background using a new test setup. They assume that the more a DML model takes the background into account when creating an embedding, the more the embedding will change when the image's background is altered. Consequently, if the model relies on the background, retrieval performance should drop sharply when the backgrounds of the test images are replaced at random. They therefore create a new test dataset by combining the region of interest of each image with a background drawn from the popular stock photography site Unsplash; a U-Net is used to segment the region of interest.
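The dataset-construction step amounts to a mask-based composite: keep the segmented object, swap everything else. The sketch below is a minimal NumPy illustration (the function name and the tiny synthetic "image" are invented for this example; a real pipeline would use an actual U-Net mask and Unsplash photos):

```python
import numpy as np

def replace_background(image, mask, background):
    """Composite the segmented region of interest onto a new background.
    `mask` is 1 inside the object (e.g. from a U-Net) and 0 elsewhere."""
    mask = mask[..., None]  # broadcast the 2-D mask over the color channels
    return mask * image + (1 - mask) * background

# Tiny synthetic example: a 4x4 RGB "image" with a 2x2 object in one corner.
image = np.full((4, 4, 3), 0.8)       # object pixels have value 0.8
background = np.zeros((4, 4, 3))      # the replacement background
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0                    # object occupies the top-left 2x2 block

out = replace_background(image, mask, background)
assert np.allclose(out[:2, :2], 0.8)  # object pixels are preserved
assert np.allclose(out[2:, 2:], 0.0)  # background pixels are replaced
```

Soft (fractional) mask values from a segmentation network blend the two images at the object boundary, which the same formula handles without modification.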

To overcome background bias in DML, the authors apply a new strategy, BGAugment, which performs data augmentation during training and validation, inspired by the literature on background bias in classification networks. It follows the same process used to create the test dataset; to avoid overlap with the backgrounds of the test set, the Unsplash images selected for augmentation differ from those used in the test set.

To validate the above two assumptions, a pilot study compared three ranking losses (the contrastive loss, the triplet loss, and the Multi-Similarity loss) and two classification losses (the ArcFace loss and the Normalized Softmax loss). Experiments were performed on three standard deep metric learning datasets: Cars196, CUB200, and Stanford Online Products. The results confirm that a model trained without BGAugment loses performance when encountering test images from the modified test dataset. Using the proposed data augmentation, on the other hand, improves these results and makes the model more robust against background bias.
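For reference, the triplet loss in the comparison follows the standard formulation: the distance to a positive (same-class) example should be smaller than the distance to a negative example by at least a margin. A minimal NumPy sketch (the margin value and the 2-D points here are arbitrary illustration choices, not the paper's settings):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: max(0, d(a, p) - d(a, n) + margin)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor embedding
p = np.array([0.1, 0.0])   # same class, already close
n = np.array([1.0, 1.0])   # different class, far away

assert triplet_loss(a, p, n) == 0.0  # constraint satisfied, no gradient
assert triplet_loss(a, n, p) > 0.0   # violated when the roles are swapped
```

The other losses in the study differ in how pairs are weighted or how class centers are used, but all share this goal of ordering distances by semantic similarity.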

In this paper, the authors demonstrate that it is worthwhile for retrieval settings such as object retrieval or re-identification systems to investigate and address background bias in DML. They claim to be the first to show how background bias affects DML models, and they propose a simple new approach to reduce background bias in DML that requires no additional labeling work, no model modifications, and no longer inference times.

This article is written as a research summary by Marktechpost staff based on the research paper 'On Background Bias in Deep Metric Learning'. All credit for this research goes to the researchers on this project. Check out the paper and code.


Mahmoud is a PhD researcher in machine learning. He also holds a Bachelor's degree in physical sciences and a Master's degree in communication systems and networks. His current research interests include computer vision, stock market prediction, and deep learning. He has produced several scholarly articles on person re-identification and on studying the robustness and stability of deep networks.

