4 minutes | Jan 13, 2021
Word Embeddings - A simple introduction to word2vec
Hey guys, welcome to another episode on word embeddings! In this episode we talk about another popular word embedding technique known as word2vec. We use word2vec to capture contextual meaning in our vector representations. I've found this useful reading for word2vec. Do read it for an in-depth explanation. p.s. Sorry for always posting episodes after a significant delay; I myself am learning various things, I have different blogs to handle, and multiple projects in place, so my schedule is packed almost daily. I hope you all get some value from my podcasts and that they help you build an intuitive understanding of various topics. See you in the next podcast episode!
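The "contextual meaning" mentioned above comes from word2vec's skip-gram objective: predict the words that appear near each word. As a rough, library-free sketch of just the first step (generating the (center, context) training pairs; the real model then trains a shallow neural network on them), something like this:

```python
# Toy sketch of the skip-gram idea behind word2vec: for each word,
# the model learns to predict words inside a small context window.
# This only generates the (center, context) training pairs.

def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs for a skip-gram style model."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox".split()
print(skipgram_pairs(sentence, window=1))
```

In practice you would hand a tokenized corpus to a library such as gensim's `Word2Vec` rather than build this yourself.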
3 minutes | Dec 22, 2020
Introduction to word embeddings and One-hot encoding in NLP
In this podcast episode we discuss why word embeddings are required and what they are, and we also cover one-hot encoding. In the next episode we will talk about specific word embedding techniques individually. Stay tuned. Sponsored by www.stacklearn.org
2 minutes | Dec 13, 2020
Learn about the TF-IDF model in Natural Language Processing
In this podcast episode we talk about the TF-IDF model in Natural Language Processing. TF-IDF stands for term frequency-inverse document frequency. We use the TF-IDF model to give more weight to important words compared with common words like the, a, in, there, where, etc. To learn Python programming visit www.stacklearn.org. See you in the next podcast episode!
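The weighting described above can be computed from scratch: TF is how often a term appears in a document, and IDF is the log of (number of documents / documents containing the term), so words that appear everywhere score zero. A small sketch (real projects usually use scikit-learn's `TfidfVectorizer`, which adds smoothing and normalization):

```python
import math
from collections import Counter

# From-scratch TF-IDF: tf = term frequency within the document,
# idf = log(total documents / documents containing the term).

def tfidf(docs):
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({w: (tf[w] / len(doc)) * math.log(n / df[w])
                       for w in tf})
    return scores

docs = [["the", "cat", "sat"], ["the", "dog", "ran"]]
scores = tfidf(docs)
print(scores[0]["the"])  # 0.0 -- "the" appears in every document
```

This shows exactly the behaviour the episode describes: the common word "the" is weighted down to zero, while "cat" keeps a positive score.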
4 minutes | Oct 9, 2020
Bag of Words in Natural Language Processing
In this podcast episode we talk about the Bag of Words model in natural language processing. Bag of Words is simply a feature extraction method used in NLP. We mainly discuss why the Bag of Words model is required and what it is. In summary, BOW is simply a set of tuples pairing words with their frequencies. To learn more about BOW: visit this. Gensim Introduction: visit this. Also, to support me do visit www.stacklearn.org
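The "set of tuples with words along with their frequency pairs" summarized above fits in a few lines of Python; gensim's `corpora.Dictionary` does the same thing with integer ids instead of strings:

```python
from collections import Counter

# Bag of Words: discard word order and keep only (word, frequency)
# pairs for the text.

def bag_of_words(text):
    return sorted(Counter(text.lower().split()).items())

print(bag_of_words("the cat sat on the mat"))
# [('cat', 1), ('mat', 1), ('on', 1), ('sat', 1), ('the', 2)]
```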
3 minutes | Sep 24, 2020
Review of Preprocessing steps in NLP and More!
In this episode we review preprocessing steps such as making text lowercase, removing unwanted characters, and other related cleaning tasks which we discussed in the previous episodes. We also talk about the Gensim package in Python and how it simplifies preprocessing in Natural Language Processing.
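The steps reviewed here can be chained into one small function; gensim bundles similar steps into a single call, `gensim.utils.simple_preprocess`. A library-free sketch of the same idea:

```python
import re

# The preprocessing steps from the episode, chained together:
# lowercase, strip unwanted characters, split into tokens.

def preprocess(text):
    text = text.lower()                    # make text lowercase
    text = re.sub(r"[^a-z\s]", " ", text)  # remove unwanted characters
    return text.split()                    # split on whitespace

print(preprocess("Hello, World!! 123"))  # ['hello', 'world']
```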
1 minute | Sep 23, 2020
Lemmatization in Natural Language Processing
In this podcast episode we will be talking about lemmatization in natural language processing. It is a text normalization step which we need to perform to normalize words. Lemmatization improves on a shortcoming of stemming in Natural Language Processing; in this episode we talk about that shortcoming and also how to perform lemmatization using the nltk library. Learn Python: www.stacklearn.org Python package to save snippets: PyPi - codesnip
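Lemmatization maps a word to its dictionary form (lemma) using vocabulary knowledge, which is exactly where it improves on stemming: no suffix-chopping rule can turn "better" into "good". In practice you would use nltk's `WordNetLemmatizer`; the tiny lookup table below is a made-up stand-in just to illustrate the idea:

```python
# Hypothetical mini-dictionary standing in for WordNet; a real
# lemmatizer consults an actual lexical database.
LEMMAS = {"better": "good", "ran": "run", "geese": "goose", "was": "be"}

def lemmatize(word):
    # Return the dictionary form if known, else the word unchanged.
    return LEMMAS.get(word, word)

print([lemmatize(w) for w in ["better", "geese", "cats"]])
# ['good', 'goose', 'cats'] -- unknown words pass through unchanged
```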
1 minute | Sep 17, 2020
Stemming in Natural Language Processing
In this podcast episode we will be talking about stemming in natural language processing. It is a text normalization step which we need to perform so that words such as run, runs, and running count as the same word. Stemming involves chopping off affixes such as ing, ly, etc.
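A naive suffix-chopping stemmer shows the idea (nltk's `PorterStemmer` implements a much more careful set of rules), including the kind of imperfect output that motivates lemmatization:

```python
# Crude stemmer: chop a known suffix off the end of the word,
# keeping at least a 3-letter stem. Purely illustrative.

def crude_stem(word):
    for suffix in ("ing", "ly", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([crude_stem(w) for w in ["running", "runs", "run"]])
# ['runn', 'run', 'run'] -- note the imperfect stem 'runn',
# a known weakness of pure suffix chopping
```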
2 minutes | Sep 14, 2020
Tokenization in Natural Language Processing
In this episode we discuss tokenization in Natural Language Processing. As discussed in the previous episode, tokenization is an important step in data cleaning, and it entails dividing a large piece of text into smaller chunks. In this episode we discuss some of the basic tokenizers available from nltk.tokenize in nltk. If you liked this episode, do follow, connect with me on Twitter @sarvesh0829, and follow my blog at www.stacklearn.org. If you sell something locally, do it using the BagUp app available on the Play Store; it would help a lot.
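In the spirit of the basic tokenizers in nltk.tokenize, a single regex can split text into word and punctuation tokens (this regex is my own simplification, not nltk's actual implementation):

```python
import re

# Split text into word tokens and individual punctuation tokens.
def tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("Don't stop, keep going!"))
# ['Don', "'", 't', 'stop', ',', 'keep', 'going', '!']
```

nltk's `word_tokenize` handles contractions and abbreviations far better; this just shows the core chunking step.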
3 minutes | Sep 13, 2020
Data Cleaning in Natural Language Processing
In this episode we talk about the various steps in the data cleaning process in Natural Language Processing. Data cleaning is almost a given whenever you want to perform natural language processing on a given text. Data cleaning in natural language processing involves tokenization, lowercasing the words, lemmatization, and so on. We also briefly talk about how you can implement those steps. To install codesnip, mentioned in the last part, open your terminal and run pip install codesnip
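The steps listed above chain naturally into one cleaning function. A minimal sketch (the stopword list here is a tiny example of my own, not a standard list; nltk ships a proper one):

```python
import re

# A small data-cleaning pipeline: lowercase, tokenize, drop stopwords.
def clean(text):
    text = text.lower()                   # lowering the words
    tokens = re.findall(r"[a-z]+", text)  # tokenization
    stopwords = {"the", "a", "in"}        # toy stopword list
    return [t for t in tokens if t not in stopwords]

print(clean("The cat, in a HAT!"))  # ['cat', 'hat']
```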
2 minutes | Oct 4, 2019
Natural language processing
A general discussion about natural language processing. In this episode we discuss what natural language processing encompasses, along with the topics we will be covering in further episodes.
4 minutes | Jul 9, 2019
In this podcast episode we talk about the basics of searching in computer programming. We discuss two approaches: the naive (linear) way and binary search. These two techniques are suitable for many applications and will help you find your way around algorithm design.
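The two approaches from the episode side by side: the naive scan checks every element in order, while binary search halves a sorted list at each step.

```python
# Naive (linear) search: check each element in turn. O(n).
def linear_search(items, target):
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

# Binary search: requires a sorted list, halves the range each step. O(log n).
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 5, 8, 12, 16]
print(linear_search(data, 12), binary_search(data, 12))  # 3 3
```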
6 minutes | May 21, 2019
Let's dive into linked lists this time!
Hello everyone, this time we will learn about linked lists! I mean, they are amazing and very useful. They are used for various purposes like creating stacks, queues, and lists, and in various management systems like library management systems, admission management systems, hospital management systems, and the list goes on! In this podcast episode we look into what linked lists are and the various types of linked lists.
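A minimal singly linked list, the simplest of the types covered in the episode: each node stores a value and a pointer to the next node.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # pointer to the next node, or None at the tail

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, value):
        """Walk to the tail and attach a new node there."""
        node = Node(value)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next:
            cur = cur.next
        cur.next = node

    def to_list(self):
        """Collect values head-to-tail into a Python list."""
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out

ll = LinkedList()
for v in [1, 2, 3]:
    ll.append(v)
print(ll.to_list())  # [1, 2, 3]
```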
3 minutes | May 12, 2019
Let's look into Stacks
Stacks are linear data structures used in many applications and domains. In this podcast episode we look at the basics of stacks and where they are used.
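In Python, a plain list already behaves as a stack (last in, first out): push and pop at the same end.

```python
stack = []
stack.append("a")  # push
stack.append("b")  # push
top = stack.pop()  # pop -> "b", the most recently pushed item
print(top, stack)  # b ['a']
```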
5 minutes | Mar 20, 2019
Let's look into Minimum Spanning Trees (part 2)
In this episode we discuss how we can implement a minimum spanning tree algorithm. Along the way we also discuss the union-find data structure and how it helps us find whether including an edge will lead to a cycle in the tree or not.
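The union-find idea from the episode in code, here inside Kruskal's algorithm (one standard MST algorithm): find tells us which component a vertex belongs to, union merges components, and an edge is added only when its endpoints are in different components, i.e. when it would not create a cycle.

```python
def kruskal(n, edges):
    """Minimum spanning tree of an n-vertex graph via Kruskal + union-find.
    edges: list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):
        # Follow parent pointers to the component's root, compressing paths.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: no cycle
            parent[ru] = rv            # union the two components
            mst.append((u, v, w))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (3, 1, 3)]
print(kruskal(4, edges))  # [(0, 1, 1), (1, 2, 2), (1, 3, 3)]
```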
3 minutes | Mar 4, 2019
Let's look into Minimum Spanning Trees (part 1)
So, in this podcast we discuss what connected graphs are, what trees are, how they differ from each other, and how to convert a graph into a tree. After getting a grasp of these basic concepts, we will try to understand how to get a minimum spanning tree from a given connected graph.
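One way to pin down the graph-versus-tree distinction discussed here: a graph on V vertices is a tree exactly when it is connected and has V - 1 edges. A small check, sketched with a depth-first traversal:

```python
def is_tree(n, edges):
    """True if the undirected graph on vertices 0..n-1 is a tree."""
    if len(edges) != n - 1:          # a tree has exactly n - 1 edges
        return False
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:                     # depth-first traversal from vertex 0
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == n            # connected?

print(is_tree(3, [(0, 1), (1, 2)]))          # True
print(is_tree(3, [(0, 1), (1, 2), (0, 2)]))  # False: the extra edge forms a cycle
```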
3 minutes | Feb 18, 2019
Let's learn a bit about queues
Queues are linear data structures which are quite important in operating systems, where they are used to implement algorithms such as round robin scheduling, priority queue scheduling, and multilevel queue scheduling. In this podcast we glance over these concepts and discuss a bit about how a queue works and how we can implement one.
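A queue is first in, first out: enqueue at the back, dequeue from the front. Python's `collections.deque` gives O(1) operations at both ends, which is why it is preferred over a plain list here.

```python
from collections import deque

queue = deque()
queue.append("p1")       # enqueue process p1
queue.append("p2")       # enqueue process p2
first = queue.popleft()  # dequeue -> "p1", the earliest arrival
print(first, list(queue))  # p1 ['p2']
```

A round robin scheduler follows the same pattern, re-appending a dequeued process to the back if its time slice expires before it finishes.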
4 minutes | Jan 23, 2019
Palindromic numbers and finding the reverse of a number
In this podcast we will mainly discuss finding the reverse of a given number, using which we will check whether the given number is palindromic or not.
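The approach outlined here: peel off the last digit with `% 10`, build up the reversed number, then compare with the original.

```python
def reverse_number(n):
    """Reverse a non-negative integer digit by digit."""
    rev = 0
    while n > 0:
        rev = rev * 10 + n % 10  # pull off the last digit
        n //= 10                 # drop that digit from n
    return rev

def is_palindrome(n):
    return n == reverse_number(n)

print(reverse_number(123), is_palindrome(121))  # 321 True
```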
2 minutes | Jan 16, 2019
Factorials and code
In this episode we discuss factorials and how to write code to get the factorial of a given number n: iteratively (method 1) and recursively (method 2).
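Both methods from the episode side by side:

```python
# Method 1: iterative -- multiply 2..n into an accumulator.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Method 2: recursive -- n! = n * (n-1)!, with 0! = 1! = 1.
def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

print(factorial_iterative(5), factorial_recursive(5))  # 120 120
```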
3 minutes | Jan 7, 2019
A little talk about the Fibonacci series
In this episode we talk about the Fibonacci series, how to calculate it, and how to develop its code logic as we go.
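The core logic is a two-variable loop: each term is the sum of the previous two.

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting from 0."""
    a, b = 0, 1
    series = []
    for _ in range(n):
        series.append(a)
        a, b = b, a + b  # slide the window forward
    return series

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```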
2 minutes | Dec 22, 2018
Why logic development?
In this podcast we will discuss the importance of logic development and problem-solving skills: why they are required and how they help.
© Stitcher 2021