<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>NLP Archives - Petamind</title>
	<atom:link href="https://petaminds.com/tag/nlp/feed/" rel="self" type="application/rss+xml" />
	<link>https://petaminds.com/tag/nlp/</link>
	<description>A.I, Data and Software Engineering</description>
	<lastBuildDate>Mon, 17 Jan 2022 20:31:40 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://petaminds.com/wp-content/uploads/2019/09/ic_launcher.png</url>
	<title>NLP Archives - Petamind</title>
	<link>https://petaminds.com/tag/nlp/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Latent Dirichlet Allocation (LDA) and Topic Modelling in Python</title>
		<link>https://petaminds.com/latent-dirichlet-allocation-lda-and-topic-modelling-in-python/</link>
		<comments>https://petaminds.com/latent-dirichlet-allocation-lda-and-topic-modelling-in-python/#respond</comments>
		<dc:creator><![CDATA[Tung Nguyen]]></dc:creator>
		<pubDate>Sun, 16 Jan 2022 22:09:59 +0000</pubDate>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[latent dirichlet allocation]]></category>
		<category><![CDATA[lda]]></category>
		<category><![CDATA[modelling]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[topic]]></category>
		<guid isPermaLink="false">https://petaminds.com/?p=3590</guid>

		<description><![CDATA[<p>Topic modelling is a type of statistical modelling for discovering the abstract “topics” that occur in a collection of documents. Latent Dirichlet Allocation (LDA) is an example of a topic model and is used to classify the text in a document as belonging to a particular topic. It builds a topic-per-document model and a words-per-topic model, modelled as Dirichlet [&#8230;]</p>
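<p>As a quick, hedged illustration of such a pipeline (a sketch, not code from the post), the snippet below fits a two-topic LDA model with gensim; the toy corpus and the <code>num_topics</code>/<code>passes</code> values are illustrative assumptions.</p>
<pre><code>
# A minimal LDA sketch with gensim (toy corpus, illustrative parameters).
from gensim import corpora
from gensim.models import LdaModel

# Each document is a list of tokens; a real pipeline would tokenise,
# lowercase, and remove stop words first.
docs = [
    ["court", "case", "judge", "criminal", "trial"],
    ["model", "topic", "word", "distribution", "corpus"],
    ["judge", "trial", "evidence", "case", "court"],
]

dictionary = corpora.Dictionary(docs)            # token -> integer id
corpus = [dictionary.doc2bow(d) for d in docs]   # bag-of-words per document

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
</code></pre>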
<p>The post <a href="https://petaminds.com/latent-dirichlet-allocation-lda-and-topic-modelling-in-python/">Latent Dirichlet Allocation (LDA) and Topic Modelling in Python</a> appeared first on <a href="https://petaminds.com">Petamind</a>.</p>
]]></description>
		<wfw:commentRss>https://petaminds.com/latent-dirichlet-allocation-lda-and-topic-modelling-in-python/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	</item>
	<item>
		<title>Understanding Latent Dirichlet Allocation (LDA)</title>
		<link>https://petaminds.com/understanding-latent-dirichlet-allocation-lda/</link>
		<comments>https://petaminds.com/understanding-latent-dirichlet-allocation-lda/#respond</comments>
		<dc:creator><![CDATA[Tung Nguyen]]></dc:creator>
		<pubDate>Sun, 02 Jan 2022 03:37:00 +0000</pubDate>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[latent dirichlet allocation]]></category>
		<category><![CDATA[lda]]></category>
		<category><![CDATA[NLP]]></category>
		<guid isPermaLink="false">https://petaminds.com/?p=3625</guid>

		<description><![CDATA[<p>Imagine a large law firm takes over a smaller law firm and tries to identify the documents corresponding to different types of cases, such as civil or criminal cases, which the smaller firm has dealt with or is currently dealing with. The presumption is that the documents are not already classified by the smaller law firm. [&#8230;]</p>
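<p>As a hedged illustration of LDA's generative story (a sketch, not code from the post), the snippet below samples per-document topic mixtures from a Dirichlet prior with numpy; the two hypothetical topics and the sparse <code>alpha</code> are assumptions chosen to mirror the civil-versus-criminal split above.</p>
<pre><code>
# Illustrative sketch: document-topic mixtures drawn from a Dirichlet prior.
import numpy as np

alpha = [0.1, 0.1]                 # sparse prior over two hypothetical topics
rng = np.random.default_rng(0)

# Each row is one document's topic mixture; a sparse alpha makes most
# documents concentrate on a single topic (e.g. "civil" vs "criminal").
theta = rng.dirichlet(alpha, size=5)
print(theta.round(2))
</code></pre>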
<p>The post <a href="https://petaminds.com/understanding-latent-dirichlet-allocation-lda/">Understanding Latent Dirichlet Allocation (LDA)</a> appeared first on <a href="https://petaminds.com">Petamind</a>.</p>
]]></description>
		<wfw:commentRss>https://petaminds.com/understanding-latent-dirichlet-allocation-lda/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
	</item>
	<item>
		<title>Word2vec with gensim &#8211; a simple word embedding example</title>
		<link>https://petaminds.com/word2vec-with-gensim-a-simple-word-embedding-example/</link>
		<comments>https://petaminds.com/word2vec-with-gensim-a-simple-word-embedding-example/#comments</comments>
		<dc:creator><![CDATA[Tung Nguyen]]></dc:creator>
		<pubDate>Wed, 11 Apr 2018 05:58:27 +0000</pubDate>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[Project]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[CBOW]]></category>
		<category><![CDATA[GENSIM]]></category>
		<category><![CDATA[neural network]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[skip-grams]]></category>
		<guid isPermaLink="false">https://petaminds.com/?p=1127</guid>

		<description><![CDATA[<p>In this short article, we show a simple example of how to use gensim and word2vec for word embedding. Word2vec is a well-known algorithm for natural language processing (NLP) created by Tomas Mikolov's team. It is a group of related models used to produce word embeddings, e.g. CBOW and skip-gram. The models are [&#8230;]</p>
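<p>A minimal sketch of the kind of example the post describes, assuming gensim 4 or later (where the embedding dimension is <code>vector_size</code>); the two-sentence corpus and the parameter values are illustrative. <code>sg=0</code> selects CBOW, while <code>sg=1</code> would select skip-gram.</p>
<pre><code>
# Sketch: training word2vec with gensim on a toy corpus.
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# sg=0 selects CBOW; vector_size and window are kept small only
# because the toy corpus is tiny.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

print(model.wv["cat"][:5])           # first 5 dimensions of one embedding
print(model.wv.most_similar("cat"))  # nearest neighbours by cosine similarity
</code></pre>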
<p>The post <a href="https://petaminds.com/word2vec-with-gensim-a-simple-word-embedding-example/">Word2vec with gensim &#8211; a simple word embedding example</a> appeared first on <a href="https://petaminds.com">Petamind</a>.</p>
]]></description>
		<wfw:commentRss>https://petaminds.com/word2vec-with-gensim-a-simple-word-embedding-example/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
	</item>
	</channel>
</rss>
