Train batch feature density
FeatureDensityMetric
Bases: Metric
Feature density metric.
Percentage of samples in which each feature was active (i.e. the neuron "fired") in a training batch.
Generally we want only a small number of features to be active for each sample, so the average feature density should be low. By contrast, a high average feature density means the features are not sparse enough.
Example
```python
>>> metric = FeatureDensityMetric(num_learned_features=3, num_components=1)
>>> learned_activations = torch.tensor([
...     [  # Batch 1
...         [1., 0., 1.]  # Component 1: learned features (2 active neurons)
...     ],
...     [  # Batch 2
...         [0., 0., 0.]  # Component 1: learned features (0 active neurons)
...     ]
... ])
>>> metric.forward(learned_activations)
tensor([[0.5000, 0.0000, 0.5000]])
```
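The same result can be reproduced with plain PyTorch operations. This is a standalone sketch of what the metric measures, not the library's implementation: a feature's density is the fraction of batch samples in which its activation was non-zero.

```python
import torch

# Activations with shape (batch, component, learned_feature),
# matching the example above.
learned_activations = torch.tensor([
    [[1.0, 0.0, 1.0]],  # batch item 1, component 1
    [[0.0, 0.0, 0.0]],  # batch item 2, component 1
])

fired = learned_activations > 0      # bool: did each feature fire per sample?
density = fired.float().mean(dim=0)  # average over the batch dimension
print(density)  # tensor([[0.5000, 0.0000, 0.5000]])
```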
Source code in sparse_autoencoder/metrics/train/feature_density.py
__init__(num_learned_features, num_components=None)
Initialise the metric.
Source code in sparse_autoencoder/metrics/train/feature_density.py
compute()
Compute the metric.
Source code in sparse_autoencoder/metrics/train/feature_density.py
update(learned_activations, **kwargs)
Update the metric state.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `learned_activations` | `Float[Tensor, names(BATCH, COMPONENT_OPTIONAL, LEARNT_FEATURE)]` | The learned activations. | required |
| `**kwargs` | `Any` | Ignored keyword arguments (to allow use with other metrics in a collection). | `{}` |
Source code in sparse_autoencoder/metrics/train/feature_density.py
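The `update`/`compute` split follows the usual stateful-metric pattern: `update` accumulates firing counts across batches, and `compute` turns them into densities. The following is a plausible sketch of that accumulation under this assumption, written with plain tensors rather than the library's internal state:

```python
import torch

# Assumed state: a per-feature firing count and the number of samples seen.
neuron_fired_count = torch.zeros(1, 3)  # (components, learned features)
num_activation_vectors = 0

# Two synthetic batches, shape (batch, component, learned_feature).
batches = [
    torch.tensor([[[1.0, 0.0, 1.0]], [[0.0, 0.0, 0.0]]]),
    torch.tensor([[[0.0, 1.0, 1.0]], [[1.0, 0.0, 0.0]]]),
]

for learned_activations in batches:
    # update(): count how often each feature fired in this batch.
    neuron_fired_count += (learned_activations > 0).float().sum(dim=0)
    num_activation_vectors += learned_activations.shape[0]

# compute(): fraction of all seen samples in which each feature fired.
density = neuron_fired_count / num_activation_vectors
print(density)  # tensor([[0.5000, 0.2500, 0.5000]])
```

Accumulating counts rather than storing raw activations keeps memory use constant regardless of how many batches are seen before `compute` is called.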