src.experiments package
Submodules
src.experiments.asap_sgd module
Test ASAP-SGD against standard SGD on linear regression with synthetic overparameterized data.
From the base repository directory, run: python -m src.experiments.asap_sgd
The implemented ASAP-SGD algorithm is taken from: “Instance-based Adaptiveness to Staleness in Asynchronous SGD” (https://proceedings.mlr.press/v162/backstrom22a/backstrom22a.pdf).
- src.experiments.asap_sgd.main()[source]
Main function for running the experiments comparing the ASAP-SGD algorithm and standard SGD.
In the script, a comparative experiment between ASAP-SGD and standard SGD is performed by:
1. Generating a synthetic linear regression dataset (with specified overparameterization).
2. Training models with both SGD and ASAP-SGD across multiple random seeds (200 by default).
3. Evaluating trained models on multiple metrics: test loss, weight properties (L2 norm, sparsity, kurtosis), and convergence statistics.
4. Comparing results via statistical tests (paired t-tests).
5. Visualizing results: loss distributions, staleness patterns, weight characteristics.
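The data-generation step can be sketched as follows. This is a minimal illustration, not the repository's actual API; the function name and its parameters (sample count, feature count, noise level) are assumptions chosen only to show what "overparameterized" means here (more features than samples):

```python
import numpy as np

def make_overparameterized_regression(n_samples=50, n_features=200, noise=0.1, seed=0):
    """Synthetic linear regression with more features than samples."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, n_features))  # design matrix
    w_true = rng.standard_normal(n_features)          # ground-truth weights
    y = X @ w_true + noise * rng.standard_normal(n_samples)
    return X, y, w_true

X, y, w_true = make_overparameterized_regression()
# X has shape (50, 200): fewer samples than features, so the
# least-squares problem is underdetermined.
```

In this regime SGD variants can converge to different interpolating solutions, which is why the script compares weight properties (norm, sparsity, kurtosis) and not only test loss.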
Strong reproducibility is ensured by using a fixed master seed. Checkpoints are created to save losses, weight properties, and staleness distributions for both ASAP-SGD and SGD training.
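The cited ASAP-SGD paper defines a family of adaptiveness functions that map each gradient's staleness to a step-size scale. The sketch below is an assumption-laden stand-in that uses a simple inverse-staleness scale, not the paper's specific functions; it only illustrates the shape of a staleness-adaptive update:

```python
import numpy as np

def adaptive_step(w, grad, lr, staleness, scale=lambda tau: 1.0 / (1.0 + tau)):
    """One ASAP-style update: the step is scaled by an adaptiveness
    function of the applied gradient's staleness (illustrative rule)."""
    return w - lr * scale(staleness) * grad

w = np.zeros(3)
g = np.array([1.0, -2.0, 0.5])
w_fresh = adaptive_step(w, g, lr=0.1, staleness=0)  # fresh gradient: full step
w_stale = adaptive_step(w, g, lr=0.1, staleness=9)  # stale gradient: 10x smaller step
```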
src.experiments.dasgd module
Test DASGD against standard SGD on linear regression with synthetic overparameterized data.
From the base repository directory, run: python -m src.experiments.dasgd
The implemented DASGD algorithm is taken from: “Asynchronous SGD with stale gradient dynamic adjustment for deep learning training” (https://www.sciencedirect.com/science/article/pii/S0020025524011344?via%3Dihub).
- src.experiments.dasgd.main() → None [source]
Main function for running the experiments comparing the DASGD algorithm and standard SGD.
In the script, a comparative experiment between DASGD and standard SGD is performed by:
1. Generating a synthetic linear regression dataset (with specified overparameterization).
2. Training models with both SGD and DASGD across multiple random seeds (200 by default).
3. Evaluating trained models on multiple metrics: test loss, weight properties (L2 norm, sparsity, kurtosis), and convergence statistics.
4. Comparing results via statistical tests (paired t-tests).
5. Visualizing results: loss distributions, staleness patterns, weight characteristics.
Strong reproducibility is ensured by using a fixed master seed. Checkpoints are created to save losses, weight properties, and staleness distributions for both DASGD and SGD training.
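The paired t-tests in step 4 compare per-seed metric pairs (the two optimizers see the same seed, so the samples are paired, not independent). A from-scratch sketch of the statistic is below; the script may instead call a library routine such as scipy.stats.ttest_rel, and the loss arrays here are placeholder values, not experiment results:

```python
import numpy as np

def paired_t_statistic(a, b):
    """Paired t-statistic over per-seed metric pairs (a_i, b_i)."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))

rng = np.random.default_rng(0)
sgd_losses = rng.normal(1.00, 0.10, size=200)                  # placeholder per-seed losses
dasgd_losses = sgd_losses - rng.normal(0.05, 0.02, size=200)   # placeholder improvement
t = paired_t_statistic(sgd_losses, dasgd_losses)  # large t => consistent difference
```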
src.experiments.saasgd module
Test SA-ASGD against standard SGD on linear regression with synthetic overparameterized data.
From the base repository directory, run: python -m src.experiments.saasgd
The implemented SA-ASGD algorithm is taken from: “Staleness-aware Asynchronous SGD for Distributed Deep Learning” (https://arxiv.org/pdf/1511.05950).
- src.experiments.saasgd.main()[source]
Main function for running the experiments comparing the SA-ASGD algorithm and standard SGD.
In the script, a comparative experiment between SA-ASGD and standard SGD is performed by:
1. Generating a synthetic linear regression dataset (with specified overparameterization).
2. Training models with both SGD and SA-ASGD across multiple random seeds (200 by default).
3. Evaluating trained models on multiple metrics: test loss, weight properties (L2 norm, sparsity, kurtosis), and convergence statistics.
4. Comparing results via statistical tests (paired t-tests).
5. Visualizing results: loss distributions, staleness patterns, weight characteristics.
Strong reproducibility is ensured by using a fixed master seed. Checkpoints are created to save losses, weight properties, and staleness distributions for both SA-ASGD and SGD training.
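The staleness-aware rule in the cited paper modulates the learning rate by the gradient's staleness, dividing the base rate by the delay. A minimal sketch of one such update follows; the function name and the clipping of the delay at 1 are illustrative assumptions, not the repository's exact implementation:

```python
import numpy as np

def sa_asgd_step(w, grad, base_lr, staleness):
    """Staleness-aware step: effective learning rate = base_lr / tau,
    with tau clipped below at 1 so fresh gradients take the full step."""
    tau = max(int(staleness), 1)
    return w - (base_lr / tau) * grad

w = np.zeros(2)
g = np.array([2.0, -1.0])
w_fresh = sa_asgd_step(w, g, base_lr=0.1, staleness=1)  # full step
w_stale = sa_asgd_step(w, g, base_lr=0.1, staleness=4)  # quarter-size step
```

Compared with the fixed scaling families above, this rule damps stale gradients proportionally to their delay, which is what produces the staleness distributions saved in the checkpoints.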