# Wolfram Function Repository

Instant-use add-on functions for the Wolfram Language

Function Repository Resource:

Decompose a matrix into two non-negative matrix factors

Contributed by:
Anton Antonov

ResourceFunction["NonNegativeMatrixFactorization"][mat, k] finds non-negative matrix factors for the matrix mat using k dimensions.

Non-negative matrix factorization (NNMF) is a matrix factorization method that reduces the dimensionality of a given matrix.

NNMF produces two factors: the left factor is the reduced-dimensionality representation matrix and the right factor is the corresponding new-basis matrix.

The argument matrix need not be square.

NNMF allows easier interpretation of extracted topics in the framework of latent semantic analysis.

When applying NNMF to collections of images or documents, more than one run is often required in order to build confidence in the interpretation of the extracted topics.

In comparison with the thin singular value decomposition, NNMF provides a non-orthogonal basis with vectors that have non-negative coordinates.
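As a concrete illustration of the factorization itself, here is a minimal Python/NumPy sketch of NNMF via the classic Lee–Seung multiplicative updates. This is an illustrative analogue, not the Wolfram Language implementation of this resource function; the `nnmf` helper and its defaults are hypothetical.

```python
import numpy as np

def nnmf(A, k, steps=200, eps=1e-9, seed=0):
    """Factor a non-negative m x n matrix A into W (m x k) and H (k x n)
    with non-negative entries, using Lee-Seung multiplicative updates.
    The small eps offsets the denominators (cf. the "Epsilon" option)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(steps):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update the new-basis factor
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update the representation factor
    return W, H

A = np.random.default_rng(1).random((10, 6))   # non-negative test matrix
W, H = nnmf(A, 2)
# W and H are elementwise non-negative and W @ H approximates A.
```

Because the updates only multiply by non-negative ratios, factors initialized with non-negative entries stay non-negative throughout.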

Here are the options taken by ResourceFunction["NonNegativeMatrixFactorization"]:

| Option | Default value | Description |
| --- | --- | --- |
| "Epsilon" | 10^-9 | denominator (regularization) offset |
| MaxSteps | 200 | maximum iteration steps |
| "NonNegative" | True | should the non-negativity be enforced at each step |
| "Normalization" | Left | which normalization is applied to the factors |
| PrecisionGoal | Automatic | precision goal |
| "ProfilingPrints" | False | should profiling times be printed out during execution |
| "RegularizationParameter" | 0.01 | regularization (multiplier) parameter |

Create a random integer matrix:

In[1]:= |

Out[3]= |

Compute the NNMF factors:

In[4]:= |

Out[4]= |

Here is the matrix product of the obtained factors:

In[5]:= |

Out[8]= |

Note that the elementwise relative errors between the original matrix and the reconstructed matrix are small:

In[9]:= |

Out[12]= |
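The elementwise check can be sketched in Python as follows (illustrative; `A`, `W` and `H` below are hypothetical stand-ins for the original matrix and its NNMF factors):

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 1.0]])   # original matrix (rank 1, so exactly factorable)
W = np.array([[2.0], [1.0]])             # left factor
H = np.array([[2.0, 1.0]])               # right factor

# Elementwise relative errors between the original and reconstructed matrices:
rel_err = np.abs(A - W @ H) / np.abs(A)
```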

Here is a random matrix with its first two columns having much larger magnitudes than the rest:

In[13]:= |

Out[13]= |

Here we compute NNMF factors:

In[14]:= |

Find the relative error of the approximation by the matrix factorization:

In[15]:= |

Out[15]= |

Here is the relative error for the first three columns:

In[16]:= |

Out[16]= |

Here are comparison plots:

In[17]:= |

Out[17]= |

NNMF factors can be normalized in two ways: (i) the Euclidean norms of the columns of the left factor are all equal to 1; or (ii) the Euclidean norms of the rows of the right factor are all equal to 1. Here is a table that shows NNMF factors for different normalization specifications:

In[18]:= |

Out[18]= |
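In either case the scale is simply moved between the factors, so the product is unchanged. Here is a Python sketch of the left normalization (illustrative; `normalize_left` is a hypothetical helper, not part of the resource function):

```python
import numpy as np

def normalize_left(W, H):
    """Scale each column of W to unit Euclidean norm and move the
    scale into the corresponding row of H, leaving W @ H unchanged."""
    norms = np.linalg.norm(W, axis=0)
    return W / norms, H * norms[:, None]

W = np.array([[3.0, 0.0], [4.0, 2.0]])
H = np.array([[1.0, 2.0], [0.5, 1.0]])
W2, H2 = normalize_left(W, H)
# Columns of W2 have unit norm; W2 @ H2 equals W @ H.
```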

The implemented NNMF algorithm uses gradient descent. The regularization parameter controls the “learning rate” and can dramatically influence both the number of iteration steps and the approximation precision. Compute NNMF with different regularization multiplier parameters:

In[19]:= |

Plot the results:

In[20]:= |

Out[20]= |

Here are the corresponding norms:

In[21]:= |

Out[21]= |
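A minimal Python sketch of this scheme (an assumed analogue, not the resource function's exact update rule): projected gradient descent on the squared reconstruction error, with a multiplier `lr` playing the learning-rate role of "RegularizationParameter" and non-negativity enforced by clipping after each step:

```python
import numpy as np

def nnmf_gd(A, k, steps=500, lr=0.01, seed=0):
    """Gradient descent on ||A - W H||^2; the multiplier lr scales each
    step, and clipping enforces non-negativity after every update."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k))
    H = rng.random((k, A.shape[1]))
    for _ in range(steps):
        R = W @ H - A                            # current residual
        gW, gH = R @ H.T, W.T @ R                # gradients w.r.t. W and H
        W = np.clip(W - lr * gW, 0.0, None)
        H = np.clip(H - lr * gH, 0.0, None)
    return W, H

A = np.array([[1.0, 2.0], [2.0, 4.0]])           # rank-1 example
W, H = nnmf_gd(A, 1)
rel_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
# Too large an lr overshoots and stalls; too small an lr needs many more steps.
```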

One of the main motivations for developing NNMF algorithms is the easier interpretation of extracted topics in the framework of latent semantic analysis. The following code illustrates the extraction of topics over a dataset of movie reviews.

Start with a movie reviews dataset:

In[22]:= |

Out[22]= |

Change the labels “positive” and “negative” to have the prefix “tag:”:

In[23]:= |

Concatenate to each review the corresponding label and create an Association:

In[24]:= |

Out[24]= |

Select movie reviews that are nontrivial:

In[25]:= |

Out[25]= |

Split each review into words and delete stopwords:

In[26]:= |

Convert the movie reviews association into a list of ID-word pairs (“long form”):

In[27]:= |

Out[27]= |

Replace each word with its Porter stem:

In[28]:= |

Out[28]= |

Find the frequency of appearance of all unique word stems and pick words that appear in more than 30 reviews:

In[29]:= |

Out[29]= |

Filter the ID-word pairs list to contain only words that are sufficiently popular:

In[30]:= |
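The frequency filtering in the two steps above can be sketched in Python (illustrative; the `pairs` data and the threshold of 2 are hypothetical stand-ins for the review data and the 30-review threshold):

```python
from collections import Counter

# Hypothetical (review ID, word stem) pairs in "long form".
pairs = [(1, "film"), (1, "great"), (2, "film"), (2, "bad"),
         (3, "film"), (3, "great"), (4, "plot")]

# Document frequency: in how many distinct reviews does each word appear?
doc_freq = Counter(word for _, word in set(pairs))

min_reviews = 2                                  # stand-in for 30
popular = {w for w, c in doc_freq.items() if c > min_reviews}
filtered = [(i, w) for i, w in pairs if w in popular]
```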

Compute an ID-word contingency matrix:

In[31]:= |
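A contingency (document-term count) matrix can be sketched in dense form in Python (illustrative; the actual computation on this page uses Wolfram Language sparse arrays):

```python
import numpy as np

# Hypothetical filtered (review ID, word) pairs.
pairs = [(0, "film"), (0, "great"), (1, "film"), (2, "great"), (2, "film")]
ids = sorted({i for i, _ in pairs})
words = sorted({w for _, w in pairs})

# Rows correspond to review IDs, columns to words; each entry counts
# how many times that word occurs in that review.
C = np.zeros((len(ids), len(words)), dtype=int)
for i, w in pairs:
    C[ids.index(i), words.index(w)] += 1
```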

Here is a sample of the contingency matrix as a dataset:

In[32]:= |

Out[32]= |

Visualize the contingency matrix:

In[33]:= |

Out[33]= |

Take the contingency sparse matrix:

In[34]:= |

Here is a plot generated by the resource function ParetoPrinciplePlot that shows the Pareto principle adherence of the selected popular words:

In[35]:= |

Out[35]= |

Judging from the Pareto plot, we apply the inverse document frequency (IDF) formula:

In[36]:= |
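For reference, the standard IDF weighting can be sketched in Python. The log-of-ratio form below is an assumed variant; the page's actual formula may differ in detail:

```python
import numpy as np

# Hypothetical document-term count matrix: rows = documents, columns = terms.
C = np.array([[1, 0, 2],
              [0, 1, 1],
              [3, 0, 1]], dtype=float)

n_docs = C.shape[0]
doc_freq = (C > 0).sum(axis=0)        # number of documents containing each term
idf = np.log(n_docs / doc_freq)       # rare terms get large weights
M = C * idf                           # terms present in every document are zeroed out
```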

Normalize each row with the Euclidean norm:

In[37]:= |

To speed up the computations, take a sample submatrix:

In[38]:= |

Out[38]= |

Apply NNMF to extract 24 topics using at most 12 iteration steps, normalizing the right factor:

In[39]:= |

Out[39]= |

Show the extracted topics using the right factor (H):

In[40]:= |

Out[40]= |
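Reading topics off the right factor amounts to sorting each row of H and taking the highest-weight words. Here is a Python sketch with a hypothetical H and vocabulary:

```python
import numpy as np

words = ["film", "great", "bad", "plot"]       # hypothetical vocabulary
H = np.array([[0.1, 0.9, 0.0, 0.2],            # hypothetical right factor:
              [0.3, 0.0, 0.8, 0.7]])           # rows = topics, columns = words

# Interpret each topic by its two highest-weight words.
top_words = [[words[j] for j in np.argsort(row)[::-1][:2]] for row in H]
```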

Here we show statistical thesaurus entries for random words, selected words and the labels (“tag:positive”, “tag:negative”):

In[41]:= |

Out[41]= |

It is instructive to compare NNMF with the singular value decomposition (SVD) and independent component analysis (ICA).

Generate 3D points:

In[42]:= |

Compute matrix factorizations using NNMF, SVD and ICA and make a grid of comparison plots (rotate the graphics for better perception):

In[43]:= |

Out[43]= |

- Handwritten digits recognition by matrix factorization – Mathematica for Prediction
- antononcube GitHub – NonNegativeMatrixFactorization

- 1.0.0 – 19 December 2019

This work is licensed under a Creative Commons Attribution 4.0 International License