• Question-Answering in association with RoBERTa

    Take a sip of chaii and refresh your mood with chaii - Hindi and Tamil Question Answering, by Google. Introduction: May God bless you with all the data-structure skills. Transformers have been revolutionary models that yield state-of-the-art variants such as BERT, GPT, mT5, T5, TAPAS, ALBERT, RoBERTa, and many more from their families. The Hugging Face library has provided…
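
    For a quick taste of what the full post builds up to, here is a minimal sketch of extractive question answering with a RoBERTa checkpoint through Hugging Face’s pipeline API. The checkpoint name is an illustrative assumption, not necessarily the one the post uses:

        # Minimal extractive QA with a RoBERTa checkpoint (illustrative model name).
        from transformers import pipeline

        qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

        result = qa(
            question="Who released the chaii competition?",
            context="chaii - Hindi and Tamil Question Answering is a Kaggle "
                    "competition released by Google Research India.",
        )
        print(result["answer"], result["score"])  # answer span plus a confidence score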

  • Let’s start a music production house

    Audio Generation Using DeepMind’s WaveNet. Published by Amit Nikhade, July 7, 2021. WaveNet by Google DeepMind; the time is now. Generating Audio with WaveNet. “Music is the language of the spirit. It opens the secret of life, bringing peace, abolishing strife.” - Kahlil Gibran. One of the most awaited and wished-for features was interaction with…
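
    As a hedged illustration of WaveNet’s core trick, stacked dilated causal 1-D convolutions whose receptive field doubles at every layer, here is a toy PyTorch sketch. The channel count and depth are illustrative assumptions; the real model adds gated activations plus residual and skip connections:

        # Toy stack of dilated causal convolutions, the backbone of WaveNet.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class CausalConv1d(nn.Module):
            def __init__(self, channels, dilation):
                super().__init__()
                # Left-padding of (kernel_size - 1) * dilation keeps the layer causal.
                self.pad = dilation
                self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

            def forward(self, x):
                return self.conv(F.pad(x, (self.pad, 0)))  # never sees future samples

        stack = nn.Sequential(*[CausalConv1d(16, 2 ** i) for i in range(6)])
        audio = torch.randn(1, 16, 16000)  # (batch, channels, samples)
        print(stack(audio).shape)          # length preserved, receptive field of 64 samples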

  • Quantum neural networks | Effortless reading

    We are going to explore Quantum Neural Networks (QNNs) in a much simplified manner, covering all the fundamental concepts needed for a firm grasp. I’ll keep the mathematics to a minimum so that beginners get a clearer overview. Introduction: Artificial Neural Networks (ANNs) have been the foundational algorithm on which the whole of deep learning…
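
    The post promises minimal mathematics, and in that spirit here is a classically simulated toy, written in plain NumPy as my own illustration rather than anything from the post. A single parameterized rotation already behaves like a tunable neuron whose output is an expectation value:

        # One-qubit "neuron": rotate |0> by RY(theta), read out <Z> = cos(theta).
        import numpy as np

        def ry(theta):
            c, s = np.cos(theta / 2), np.sin(theta / 2)
            return np.array([[c, -s], [s, c]])  # rotation about the Y axis

        def qnn_output(theta):
            state = ry(theta) @ np.array([1.0, 0.0])  # |0> passed through the gate
            pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
            return state @ pauli_z @ state            # expectation value <Z>

        print(qnn_output(0.0))    # 1.0
        print(qnn_output(np.pi))  # -1.0; training a QNN means tuning theta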

  • VAE in no time

    A quick tour of the Variational Autoencoder (VAE). In recent times, generative models have gained huge attention for their state-of-the-art performance, and they have achieved massive importance in the marketplace and are widely used. Variational Autoencoders are deep learning techniques that learn latent representations; they are among the finest approaches to unsupervised learning. VAEs show exceptional…
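
    As a hedged sketch of the two ingredients that define a VAE, the reparameterization trick and the ELBO loss (reconstruction error plus a KL-divergence term), here is a tiny PyTorch example; the layer sizes are illustrative assumptions, not taken from the post:

        # Minimal VAE: one linear encoder, one linear decoder, ELBO loss.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinyVAE(nn.Module):
            def __init__(self, x_dim=784, z_dim=16):
                super().__init__()
                self.enc = nn.Linear(x_dim, 2 * z_dim)  # predicts mean and log-variance
                self.dec = nn.Linear(z_dim, x_dim)

            def forward(self, x):
                mu, logvar = self.enc(x).chunk(2, dim=-1)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
                return torch.sigmoid(self.dec(z)), mu, logvar

        def elbo_loss(x, x_hat, mu, logvar):
            recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return recon + kl

        x = torch.rand(8, 784)  # fake batch, just a smoke test
        x_hat, mu, logvar = TinyVAE()(x)
        print(elbo_loss(x, x_hat, mu, logvar))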

  • A sudden change to the encoder!

    Transformers have gained tremendous popularity since their creation thanks to their impressive performance. They have ruled NLP as well as computer vision, and Transformer-based models have been all-time favorites. Overview: “Attention Is All You Need”. The paper describes the Transformer architecture, in which encoder and decoder layers are stacked up. Both the architecture…
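
    As a small illustration of that stacking, here is a hedged sketch built from PyTorch’s ready-made encoder layers; the dimensions match the paper’s base model, but the snippet is my own illustration, not code from the post:

        # Six identical encoder layers stacked, as in the base Transformer.
        import torch
        import torch.nn as nn

        layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
        encoder = nn.TransformerEncoder(layer, num_layers=6)

        tokens = torch.randn(2, 10, 512)  # (batch, sequence, embedding)
        print(encoder(tokens).shape)      # torch.Size([2, 10, 512])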