Text Data Preparation
Text Analytics Toolbox™ includes tools for processing raw text from sources such as equipment logs, news feeds, surveys, operator reports, and social media. Use these tools to extract text from popular file formats, preprocess raw text, extract individual words or multiword phrases (n-grams), convert text into numerical representations, and build statistical models. For an example showing how to get started, see Prepare Text Data for Analysis.
Text Analytics Toolbox supports English, Japanese, German, and Korean. Most Text Analytics Toolbox functions also work with text in other languages. For more information, see Language Considerations.
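As a rough sketch of how these pieces fit together, the following MATLAB code tokenizes a few raw strings and applies common preprocessing steps. This is illustrative only: it assumes Text Analytics Toolbox is installed, and the example strings are made up.

```matlab
% Raw text to clean up; any string array works here.
str = ["The quick brown fox jumped over the lazy dog."
       "An example of a short sentence, with punctuation!"];

documents = tokenizedDocument(str);        % split raw text into tokens
documents = lower(documents);              % convert to lowercase
documents = erasePunctuation(documents);   % strip punctuation
documents = removeStopWords(documents);    % drop "the", "of", and similar words
documents = addPartOfSpeechDetails(documents);          % POS tags improve lemmatization
documents = normalizeWords(documents,'Style','lemma');  % reduce words to lemma forms
```

The resulting `documents` array can then be passed on to modeling functions such as `bagOfWords`, as in Prepare Text Data for Analysis.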
Tokenization and Preprocessing
- Array of tokenized documents for text analysis
- Erase punctuation from text and documents
- Erase HTML and XML tags from text
- Erase HTTP and HTTPS URLs from text
- Remove stop words from documents
- Remove short words from documents or bag-of-words model
- Remove long words from documents or bag-of-words model
- Remove selected words from documents or bag-of-words model
- Stem or lemmatize words
- Replace words in documents
- Replace n-grams in documents
- List of stop words
- Convert HTML and XML entities into characters
- Convert documents to lowercase
- Convert documents to uppercase
- Search documents for word or n-gram occurrences in context
- Details of tokens in tokenized document array
- Add sentence numbers to documents
- Add part-of-speech tags to documents
- Add lemma forms of tokens to documents
- Add language identifiers to documents
- Add entity tags to documents
- Add grammatical dependency details to documents
- Add token type details to documents
- Split text into sentences
- Detect language of text
- Table of common abbreviations
- List of top-level domains
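To illustrate the token-detail functions above, this short sketch tags a document with part-of-speech details and then inspects the per-token table. The example sentence is made up; the functions behave as documented for tokenized document arrays.

```matlab
documents = tokenizedDocument("The quick brown fox jumps over the lazy dog.");
documents = addPartOfSpeechDetails(documents);  % adds a PartOfSpeech variable
tdetails = tokenDetails(documents);             % table of per-token details
head(tdetails)                                  % preview token, type, POS, ...
```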
Word and N-Gram Counting
- Add documents to bag-of-words or bag-of-n-grams model
- Remove documents from bag-of-words or bag-of-n-grams model
- Remove words with low counts from bag-of-words model
- Remove infrequently seen n-grams from bag-of-n-grams model
- Remove n-grams from bag-of-n-grams model
- Remove empty documents from tokenized document array, bag-of-words model, or bag-of-n-grams model
- Most important words in bag-of-words model or LDA topic
- Most frequent n-grams
- Encode documents as matrix of word or n-gram counts
- Term frequency–inverse document frequency (tf-idf) matrix
- Combine multiple bag-of-words or bag-of-n-grams models
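A minimal sketch of the counting workflow, assuming Text Analytics Toolbox is installed: build a bag-of-words model from tokenized documents, list its most frequent words, and derive numeric matrices from it. The example documents are made up.

```matlab
documents = tokenizedDocument([
    "an example of a short sentence"
    "a second short sentence"]);

bag = bagOfWords(documents);      % word counts across the documents
tbl = topkwords(bag,3);           % most frequent words and their counts
M = tfidf(bag);                   % tf-idf weighted count matrix
counts = encode(bag,documents);   % document-by-word count matrix
```

The matrices returned by `tfidf` and `encode` can feed directly into statistical models such as LDA.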
Spelling Correction and Edit Distance
- Correct spelling of words
- Find edit distance between two strings or documents
- Edit distance nearest neighbor searcher
- Find nearest neighbors by edit distance
- Find nearest neighbors by edit distance range
- Split string into graphemes
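As a sketch of the edit distance workflow, the following computes the distance between two strings and then looks up the nearest word in a small vocabulary via an edit distance searcher. The vocabulary and misspellings here are invented for illustration.

```matlab
% Distance between two words (Levenshtein by default).
d = editDistance("analytics","analytcs");   % one deletion apart

% Nearest neighbor search over a vocabulary, maximum distance 2.
vocabulary = ["mathworks" "analytics" "documents"];
eds = editDistanceSearcher(vocabulary,2);
idx = knnsearch(eds,"documnets");   % index of the closest vocabulary word
```

This pattern underlies the Create Custom Spelling Correction Function Using Edit Distance Searchers example listed below.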
Document Manipulation and Conversion
- Apply function to words in documents
- Check if word is member of documents
- Check if n-gram is member of documents
- Check if pattern is substring in documents
- Replace substrings in documents
- Replace text in words of documents using regular expression
- Length of documents in document array
- Convert documents to cell array of string vectors
- Convert documents to string by joining words
- Convert scalar document to string vector
- Unicode composed normalized form (NFC)
- Unicode decomposed normalized form (NFD)
- Unicode compatibility composed normalized form (NFKC)
- Unicode compatibility decomposed normalized form (NFKD)
- Unicode UTF-32 string representation
- Unicode character categories
- Convert UTF-32 representation to hexadecimal values
- Convert UTF-32 representation to string
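A short hedged sketch of the document manipulation functions: measure document lengths, apply a function to every word, and convert documents back to strings. The input strings are illustrative.

```matlab
documents = tokenizedDocument(["one two three" ; "four five"]);

N = doclength(documents);               % number of tokens in each document
upperDocs = docfun(@upper,documents);   % apply a function to every word
str = joinWords(documents);             % join words back into a string array
```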
- Extract Text Data from Files
This example shows how to extract the text data from text, HTML, Microsoft® Word, PDF, CSV, and Microsoft Excel® files and import it into MATLAB® for analysis.
- Parse HTML and Extract Text Content
This example shows how to parse HTML code and extract the text content from particular elements.
- Data Sets for Text Analytics
Discover data sets for various text analytics tasks.
- Prepare Text Data for Analysis
This example shows how to create a function that cleans and preprocesses text data for analysis.
- Analyze Text Data Containing Emojis
This example shows how to analyze text data containing emojis.
- Correct Spelling in Documents
This example shows how to correct spelling in documents using Hunspell.
- Create Extension Dictionary for Spelling Correction
This example shows how to create a Hunspell extension dictionary for spelling correction.
- Create Custom Spelling Correction Function Using Edit Distance Searchers
This example shows how to correct spelling using edit distance searchers and a vocabulary of known words.
- Analyze Sentence Structure Using Grammatical Dependency Parsing
This example shows how to extract information from a sentence using grammatical dependency parsing.
- Language Considerations
Information on using Text Analytics Toolbox features for other languages.
- Japanese Language Support
Information on Japanese support in Text Analytics Toolbox.
- Analyze Japanese Text Data
This example shows how to import, prepare, and analyze Japanese text data using a topic model.
- German Language Support
Information on German support in Text Analytics Toolbox.
- Analyze German Text Data
This example shows how to import, prepare, and analyze German text data using a topic model.