remove old docs
olivierlabayle committed Feb 11, 2025
1 parent b699693 · commit 2d89e19
Showing 4 changed files with 20 additions and 21 deletions.
7 changes: 4 additions & 3 deletions docs/make.jl
@@ -29,15 +29,16 @@ makedocs(;
         "Home" => "index.md",
         "Walk Through" => "walk_through.md",
         "User Guide" => [joinpath("user_guide", f) for f in
-            ("scm.md", "estimands.md", "estimation.md", "misc.md")],
+            ("scm.md", "estimands.md", "estimation.md")],
         "Examples" => [
             joinpath("examples", "super_learning.md"),
             joinpath("examples", "double_robustness.md")
         ],
-        "Integrations" => "integrations.md",
         "Estimators' Cheat Sheet" => "estimators_cheatsheet.md",
-        "Resources" => "resources.md",
+        "Learning Resources" => "resources.md",
         "API Reference" => "api.md",
+        "Integrations" => "integrations.md",
+
     ],
     pagesonly=true,
     clean = true,
2 changes: 1 addition & 1 deletion docs/src/user_guide/estimation.md
@@ -167,7 +167,7 @@ There are some practical considerations

 - Choice of `resampling` Strategy: The theory behind sample-splitting requires the nuisance functions to be sufficiently well estimated on **each and every** fold. A practical aspect of it is that each fold should contain a sample representative of the dataset. In particular, when the treatment and outcome variables are categorical it is important to make sure the proportions are preserved. This is typically done using `StratifiedCV`.
 - Computational Complexity: Sample-splitting results in ``K`` fits of the nuisance functions, drastically increasing computational complexity. In particular, if the nuisance functions are estimated using (P-fold) Super-Learning, this will result in two nested cross-validation loops and ``K \times P`` fits.
-- Caching of Nuisance Functions: Because the `resampling` strategy typically needs to preserve the outcome and treatment proportions, very little reuse of cached models is possible (see [Caching Models](@ref)).
+- Caching of Nuisance Functions: Because the `resampling` strategy typically needs to preserve the outcome and treatment proportions, very little reuse of cached models is possible (see [Using the Cache](@ref)).

 ## Using the Cache

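For context on the `resampling` consideration discussed in the hunk above, here is a minimal, hypothetical sketch of requesting sample-splitting with a stratified strategy. It assumes the `TMLEE` estimator constructor with the `resampling` keyword described in the user guide, and MLJ's `StratifiedCV`; `Ψ` and `dataset` are placeholders defined elsewhere (e.g. in the walk-through).

```julia
using TMLE
using MLJBase  # provides StratifiedCV

# Ψ (an estimand) and dataset are assumed to be defined as in the walk-through.
# StratifiedCV keeps treatment/outcome class proportions comparable across the
# K folds, which is what the sample-splitting theory above requires.
cv_tmle = TMLEE(resampling=StratifiedCV(nfolds=3))
result, cache = cv_tmle(Ψ, dataset)
```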
17 changes: 0 additions & 17 deletions docs/src/user_guide/misc.md

This file was deleted.

15 changes: 15 additions & 0 deletions src/cache.jl
@@ -3,8 +3,23 @@ MLJBase.report(factors::MLCMRelevantFactors) = MLJBase.report(factors.outcome_me

 MLJBase.report(cache) = MLJBase.report(cache[:targeted_factors])

+"""
+    gradients(cache)
+Retrieves the gradients corresponding to each targeting step from the cache.
+"""
+gradients(cache) = MLJBase.report(cache[:targeted_factors]).gradients

+"""
+    estimates(cache)
+Retrieves the estimates corresponding to each targeting step from the cache.
+"""
+estimates(cache) = MLJBase.report(cache[:targeted_factors]).estimates

+"""
+    epsilons(cache)
+Retrieves the fluctuations' epsilons corresponding to each targeting step from the cache.
+"""
+epsilons(cache) = MLJBase.report(cache[:targeted_factors]).epsilons
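As a usage note for the new accessors, a minimal sketch is shown below. It assumes an estimator call returning a `(result, cache)` pair as in the package's documentation; only `gradients`, `estimates` and `epsilons` come from this commit, everything else is a hypothetical placeholder.

```julia
using TMLE

# Assuming `result, cache = estimator(Ψ, dataset)` has already been run,
# the cache exposes one entry per targeting step:
eps   = epsilons(cache)   # fluctuation parameters (epsilons) fitted at each targeting step
grads = gradients(cache)  # gradient (influence curve) values computed at each targeting step
ests  = estimates(cache)  # intermediate estimates produced at each targeting step

# For instance, inspect the epsilon of the final targeting step:
println(last(eps))
```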
