VariableImportanceByClassifiers

Tech Notes

  • Importance of variables investigation

Symbols

  • AccuracyByVariableShuffling
Importance of variables investigation
Introduction
Procedure outline
Implementation description
Concrete application over the “Titanic” dataset
Concrete application over the “Mushroom” dataset
References
Introduction
This blog post demonstrates a procedure for variable importance investigation of mixed categorical and numerical data.
The procedure was used in a previous blog post, “Classification and association rules for census income data”, [1]. It is implemented in the package VariableImportanceByClassifiers.m, [2]. I took it from [5] (it is also described below).
The document “Importance of variables investigation guide”, [3], has much more extensive descriptions, explanations, and code for importance of variables investigation using classifiers, mosaic plots, decision trees, association rules, and dimension reduction.
Procedure outline
Here we describe the procedure used (it is also followed in [3]).
1. Split the data into training and testing datasets.
2. Build a classifier with the training set.
3. Verify using the test set that good classification results are obtained. Find the baseline accuracy.
4. If the number of variables (attributes) is k, then for each i, 1 ≤ i ≤ k:
  • Shuffle the values of the i-th column of the test data and find the classification success rates.
5. Compare the obtained k classification success rates with each other and with the success rates obtained over the un-shuffled test data.
6. The variables for which the classification success rates are the worst are the most decisive.
Note that instead of using the overall baseline accuracy we can make the comparison over the accuracies for selected, more important class labels. (See the examples below.)
The procedure is classifier agnostic. With certain classifiers, such as naive Bayes classifiers and decision trees, the importance of variables can be concluded directly from the structure obtained after training.
The procedure can be enhanced by using dimension reduction before building the classifiers. (See [3] for an outline.)
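As an illustration of steps 3 and 4, here is a minimal sketch of the shuffle-and-score idea written directly with ClassifierMeasurements. This is not the package code; it assumes clFunc is a trained ClassifierFunction and testSet is a list of input -> class rules, as produced by ExampleData in the examples below.
baselineAccuracy = ClassifierMeasurements[clFunc, testSet, "Accuracy"];
shuffledAccuracies =
  Table[
   Module[{inputs = Keys[testSet]},
    (* shuffle the values of the i-th column of the test inputs *)
    inputs[[All, i]] = RandomSample[inputs[[All, i]]];
    ClassifierMeasurements[clFunc, Thread[inputs -> Values[testSet]], "Accuracy"]],
   {i, Length[First[Keys[testSet]]]}];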
Implementation description
The implementation of the procedure is straightforward in Mathematica -- see the package VariableImportanceByClassifiers.m, [2].
The paclet can be installed and loaded with the commands:
In[1]:=
(*PacletInstall["AntonAntonov/VariableImportanceByClassifiers"];*)
Needs["AntonAntonov`VariableImportanceByClassifiers`"];
At this point the package has only one function, AccuracyByVariableShuffling, that takes as arguments a ClassifierFunction object, a dataset, optional variable names, and the option "ClassLabels" that allows the use of accuracies over a custom list of class labels instead of the overall baseline accuracy.
Here is the function signature:
AccuracyByVariableShuffling[clFunc_ClassifierFunction, testData_, variableNames_:Automatic, opts:OptionsPattern[]]
The returned result is an Association that contains the baseline accuracy and the accuracies corresponding to the shuffled versions of the dataset. I.e. steps 3 and 4 of the procedure are performed by AccuracyByVariableShuffling. Returning the result as an Association means we can treat the result as a list with named elements. (Similar to the list structures in Lua and R.)
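For example, assuming accs holds a result like the one computed in the Titanic section below, individual entries can be picked out by name:
accs[None]              (* baseline accuracy over the un-shuffled test data *)
accs["passenger sex"]   (* accuracy after shuffling the "passenger sex" column *)
Keys[accs]              (* names of the shuffled variables *)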
For the examples in the next section we are also going to use the paclet MosaicPlot, [4], which can be installed and loaded with the following commands:
In[2]:=
PacletInstall["AntonAntonov/MosaicPlot"];
Needs["AntonAntonov`MosaicPlot`"];
We also use the paclet DataReshapers for data summaries and tabulated displays:
In[4]:=
PacletInstall["AntonAntonov/DataReshapers"];
Needs["AntonAntonov`DataReshapers`"];
Concrete application over the “Titanic” dataset
1. Load some data.
In[6]:=
testSetName = "Titanic"; (* "Mushroom" or "Titanic" *)
trainingSet = ExampleData[{"MachineLearning", testSetName}, "TrainingData"];
testSet = ExampleData[{"MachineLearning", testSetName}, "TestData"];
    2. Variable names and unique class labels.
    In[9]:=
    varNames=Flatten[List@@ExampleData[{"MachineLearning",testSetName},"VariableDescriptions"]]
    Out[9]=
    {passenger class,passenger age,passenger sex,passenger survival}
    In[10]:=
    classLabels=Union[ExampleData[{"MachineLearning",testSetName},"Data"]〚All,-1〛]
    Out[10]=
    {died,survived}
    3. Here is a data summary.
    In[11]:=
Grid[List@RecordsSummary[(Flatten /@ (List @@@ Join[trainingSet, testSet])) /. _Missing -> 0, varNames], Dividers -> All, Alignment -> {Left, Top}]
Out[11]=
1 passenger class: 3rd 709, 1st 323, 2nd 277
2 passenger age: Min 0, 1st Qu 7., Mean 23.8775, Median 24., 3rd Qu 35., Max 80.
3 passenger sex: male 843, female 466
4 passenger survival: died 809, survived 500
    4. Make the classifier.
    In[12]:=
clFunc = Classify[trainingSet, Method -> "RandomForest"]
    Out[12]=
ClassifierFunction[...]  (Input type: {Nominal, Numerical, Nominal}; Classes: died, survived)
    
    5. Obtain accuracies after shuffling.
    In[13]:=
accs = AccuracyByVariableShuffling[clFunc, testSet, varNames]
    Out[13]=
    None0.778626,passenger class0.725191,passenger age0.768448,passenger sex0.631043
    6. Tabulate the results.
    In[14]:=
Grid[
 Prepend[
  List @@@ Normal[accs/First[accs]],
  Style[#, Bold, Blue, FontFamily -> "Times"] & /@ {"shuffled variable", "accuracy ratio"}],
 Alignment -> Left, Dividers -> All]
    Out[14]=
shuffled variable    accuracy ratio
None                 1.
passenger class      0.931373
passenger age        0.986928
passenger sex        0.810458
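The same comparison can also be made by sorting: ordering the shuffled accuracies in increasing order ranks the variables from most to least decisive (a small follow-up sketch using the accs association computed above):
(* smallest shuffled accuracy = largest damage = most decisive variable *)
Sort[Rest[accs]]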
7. Further confirmation of the variable importance found above can be obtained with mosaic plots.
We can see that female passengers are much more likely to survive, especially female passengers from first and second class.
    In[15]:=
t = DeleteCases[(Flatten /@ (List @@@ trainingSet)), {___, _Missing, ___}];
MosaicPlot[t〚All, {1, 3, 4}〛, ColorRules -> {2 -> ColorData[7, "ColorList"]}, "LabelRotation" -> {{1, 0.45}, {0, 1}}, ImageSize -> Medium]
Out[16]= (mosaic plot of passenger class, passenger sex, and passenger survival)
5a. In order to use F-scores instead of the overall accuracy, the desired class labels are specified with the option "ClassLabels".
    In[17]:=
accs = AccuracyByVariableShuffling[clFunc, testSet, varNames, "ClassLabels" -> classLabels]
    Out[17]=
    None{0.753247,0.870588},passenger class{0.680124,0.661972},passenger age{0.750799,0.9},passenger sex{0.66879,0.582278}
5b. Here is another example that uses the class label with the smallest F-score. (Probably the most important class label, since it is the most misclassified.)
    In[18]:=
accs = AccuracyByVariableShuffling[clFunc, testSet, varNames,
  "ClassLabels" -> Position[#, Min[#]]〚1, 1, 1〛 & @ ClassifierMeasurements[clFunc, testSet, "FScore"]]
    Out[18]=
    None{0.870588},passenger class{0.731343},passenger age{0.842697},passenger sex{0.552632}
5c. It is a good idea to verify that we get similar results using different classifiers. Below is code that computes the shuffled accuracies and returns the relative damage scores for a set of Classify methods.
    In[19]:=
mres = Association @ Map[
    Function[{clMethod},
     cf = Classify[trainingSet, Method -> clMethod];
     accRes = AccuracyByVariableShuffling[cf, testSet, varNames, "ClassLabels" -> "survived"];
     clMethod -> (accRes[None] - Rest[accRes])/accRes[None]],
    {"LogisticRegression", "NearestNeighbors", "RandomForest", "SupportVectorMachine"}];
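To compare the methods side by side, the resulting association of damage scores can be displayed in tabular form; for example (a simple sketch, assuming mres was computed as above):
(* rows: Classify methods; columns: shuffled variables; larger values mean a larger relative drop of the "survived" F-score *)
Dataset[mres]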
    Concrete application over the “Mushroom” dataset
    1. Load some data.
    2. Variable names and unique class labels.
    3. Here is a data summary.
    4. Make the classifier.
    5. Obtain accuracies after shuffling.
    6. Tabulate the results.
    7. Further confirmation of the found variable importance can be done using the mosaic plots.
Looking at the plot we can see why “odor” is so decisive -- the odor values for “poisonous” and “edible” intersect very little.
    Here is a mosaic plot showing the conditional probabilities from a different direction.
5a. In order to use F-scores instead of the overall accuracy, the desired class labels are specified with the option "ClassLabels".
5b. Here is another example that uses the class label with the smallest F-score. (Probably the most important class label, since it is the most misclassified.)
5c. It is a good idea to verify that we get similar results using different classifiers. Below is code that computes the shuffled accuracies and returns the relative damage scores for a set of Classify methods.
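The Mushroom steps mirror the Titanic ones. Here is a minimal sketch of the main steps, obtained by re-running the code above with testSetName = "Mushroom" (the option choices are the same as in the Titanic example):
testSetName = "Mushroom";
trainingSet = ExampleData[{"MachineLearning", testSetName}, "TrainingData"];
testSet = ExampleData[{"MachineLearning", testSetName}, "TestData"];
varNames = Flatten[List @@ ExampleData[{"MachineLearning", testSetName}, "VariableDescriptions"]];
clFunc = Classify[trainingSet, Method -> "RandomForest"];
(* baseline and per-variable shuffled accuracies *)
accs = AccuracyByVariableShuffling[clFunc, testSet, varNames];
(* accuracy ratios relative to the baseline, as tabulated for Titanic above *)
Dataset[Normal[accs/First[accs]]]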
    References
    [5] Leo Breiman et al., Classification and regression trees, Chapman & Hall, 1984, ISBN-13: 978-0412048418.
