Core (Base Classes) of DA (Inversion) Methods#
Abstract classes for data assimilation algorithms (inverse problems), including both filtering (time-independent) and smoothing (time-dependent). In filtering, a single observation is used to update the model state/parameter or the underlying probability distribution, while in smoothing, multiple observations (e.g., at different times) are used. The output is an estimate of the model state/parameter. An uncertainty measure, e.g., a posterior covariance, is provided if a Bayesian approach is followed.
- class InverseProblemConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)[source]#
Bases:
PyOEDConfigs
Configurations class for the
InverseProblem
abstract base class. This class inherits functionality from PyOEDConfigs
and only adds new class-level variables which can be updated as needed. See
PyOEDConfigs
for more details on the functionality of this class along with a few additional fields. Otherwise, InverseProblemConfigs
provides the following fields:
- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
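The sketch below illustrates the two configuration styles described above: leaving the optimizer fields empty (sampling/particle-based methods) versus supplying an optimizer class with a partial configurations dictionary (optimization-based methods). The import path and the optimizer class name are assumptions and not part of this reference; only InverseProblemConfigs and its fields come from the signature above.

from pyoed.assimilation import InverseProblemConfigs  # assumed module path; adjust to your installation

# Sampling-based method (e.g., EnKF/PF): no optimizer is needed, so both fields stay None.
enkf_configs = InverseProblemConfigs(name="ensemble filter")

# Optimization-based method (e.g., 3DVar): an Optimizer subclass plus a partial configs dict.
# `MyOptimizer` is a hypothetical subclass of pyoed.optimization.Optimizer.
# var_configs = InverseProblemConfigs(
#     name="variational filter",
#     optimizer=MyOptimizer,
#     optimizer_configs={"maxiter": 100},  # keys depend on the chosen optimizer
# )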
- name: str | None#
- model: None | SimulationModel#
- prior: None | ErrorModel#
- observation_error_model: None | ErrorModel#
- observation_operator: None | ObservationOperator#
- observations: None | Any#
- optimizer_configs: None | dict | OptimizerConfigs#
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)#
- class InverseProblem(configs=None)[source]#
Bases:
PyOEDObject
Base class for implementations of Inversion/Inference/DA (Data Assimilation) methods/approaches.
Note
The optimizer configuration attribute can be assigned an optimization routine to be used for solving the inverse problem (for optimization-based DA methods). For optimization-based DA methods, the method
solve()
relies on this object for solving the underlying optimization problem. Since not all DA methods are optimization-based, this is None by default and needs to be created by derived classes. One can, however, discard this attribute and write the full functionality in the solve()
method; employing this attribute is nevertheless expected for consistency.
- Parameters:
configs (dict | InverseProblemConfigs | None) – (optional) configurations for the inverse problem object
- Raises:
PyOEDConfigsValidationError – if passed invalid configs
- validate_configurations(configs, raise_for_invalid=True)[source]#
Each derived class SHOULD implement its own function that validates its own configurations. If the validation is self-contained (validates all configurations), then that’s it. However, one can just validate the configurations of the immediate class and call super to validate configurations associated with the parent class (see the sketch after this entry).
If one does not wish to do any validation (we strongly advise against that), simply add the signature of this function to the derived class.
Note
The purpose of this method is to make sure that the settings in the configurations object self._CONFIGURATIONS are of the right type/values and are conformable with each other. This function is called upon instantiation of the object, and each time a configuration value is updated. Thus, this function needs to be inexpensive and should not do heavy computations.
- Parameters:
configs (dict | InverseProblemConfigs) – configurations to validate. If an
InverseProblemConfigs
object is passed, validation is performed on the entire set of configurations. However, if a dictionary is passed, validation is performed only on the configurations corresponding to the keys in the dictionary.
- Raises:
PyOEDConfigsValidationError – if the configurations are invalid and raise_for_invalid is set to True.
AttributeError – if any (or a group) of the configurations does not exist in the optimizer configurations
InverseProblemConfigs
.
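A minimal sketch of the validation pattern described in this entry: a derived class cheaply validates only the configuration keys it introduces and delegates everything else to super(). The class name, the inflation_factor key, the boolean return value, and the use of a plain ValueError are illustrative assumptions; only the method signature comes from this reference.

from pyoed.assimilation import Filter  # assumed module path; adjust to your installation

class MyEnsembleFilter(Filter):
    def validate_configurations(self, configs, raise_for_invalid=True):
        # Cheap checks for the (hypothetical) key this class introduces; no heavy computation.
        if isinstance(configs, dict) and "inflation_factor" in configs:
            value = configs["inflation_factor"]
            if not (isinstance(value, (int, float)) and value > 0):
                if raise_for_invalid:
                    # PyOED raises PyOEDConfigsValidationError here; ValueError keeps the sketch self-contained.
                    raise ValueError("`inflation_factor` must be a positive number")
                return False
        # Delegate the remaining (inherited) configurations to the parent class.
        return super().validate_configurations(configs, raise_for_invalid=raise_for_invalid)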
- update_configurations(**kwargs)[source]#
Take any set of keyword arguments, look up each in the configurations, and update as necessary/possible/valid
- Raises:
PyOEDConfigsValidationError – if invalid configurations passed
- register_model(model=None)[source]#
Register (and return) the passed simulation model.
- Raises:
TypeError – if the type of passed model is not supported
- register_prior(prior=None)[source]#
Register (and return) the passed prior.
- Raises:
TypeError – if the type of passed prior is not supported
- register_observation_operator(observation_operator=None)[source]#
Register (and return) the passed observation operator.
- Raises:
TypeError – if the type of passed observation operator is not supported
- register_observation_error_model(observation_error_model=None)[source]#
Register (and return) the passed observation error model.
- Raises:
TypeError – if the type of passed observation error model is not supported
- register_observations(observations)[source]#
Register (and return) the observational data.
- Raises:
TypeError – if the type of passed observational data is not supported
- register_optimizer(optimizer, *args, **kwargs)[source]#
Register (and return) the passed optimizer.
Note
This method does not create a new optimizer instance. It just takes the created optimizer, makes sure it is an instance derived from the
pyoed.optimization.Optimizer
and associates it with this assimilation (DA) object.
Note
A derived class is expected to create this optimizer and pass it up by calling super().register_optimizer(optimizer) so that it can be registered properly (see the sketch below).
- Returns:
the registered optimizer.
- Raises:
TypeError – if the type of passed optimizer is not supported
- Return type:
Optimizer | None
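A sketch of the pattern described in the note above: the derived class instantiates (or accepts) the optimizer and passes it up so the base class can verify it derives from pyoed.optimization.Optimizer and associate it with the DA object. The class names are hypothetical; only register_optimizer itself comes from this reference.

from pyoed.assimilation import VariationalFilter  # assumed module path; adjust to your installation

class My3DVar(VariationalFilter):
    def register_optimizer(self, optimizer, *args, **kwargs):
        # If a class (rather than an instance) was passed, instantiate it first.
        if isinstance(optimizer, type):
            optimizer = optimizer(*args, **kwargs)
        # The base class performs the type check and the actual registration.
        return super().register_optimizer(optimizer)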
- solve_inverse_problem(*args, **kwargs)[source]#
Start solving the inverse problem (DA) for the registered configuration with passed arguments.
Warning
This method is added for backward compatibility, and it will be deprecated soon. Users need to call
solve()
instead.
- apply_forward_operator(*args, **kwargs)[source]#
Apply F, the forward operator, to the passed state. The forward operator here refers to the simulation model followed by the observation operator. The result is a data point (an observation) or a dictionary of observations indexed by time (for time-dependent models).
For time-dependent simulations with multiple observation points (e.g., in 4D-Var settings), the observations are evaluated at the simulation time instances (corresponding to registered observations) over the registered time window.
- apply_forward_operator_adjoint(*args, **kwargs)[source]#
Apply F^*, the adjoint of the forward operator, to the passed observation.
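A toy, dense-matrix illustration of these two methods (all matrices and sizes below are made up; the actual methods act on the registered simulation model and observation operator): the forward operator composes the model with the observation operator, and its adjoint applies the transposes in reverse order.

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))   # stand-in for one model integration (state -> state)
H = np.eye(3, 5)                  # stand-in observation operator (observe the first 3 entries)

def toy_forward_operator(x):
    """F(x) = H(M(x)): propagate the state, then map it to observation space."""
    return H @ (M @ x)

def toy_forward_operator_adjoint(y):
    """F^*(y) = M^T (H^T y): adjoint of the observation operator, then of the model."""
    return M.T @ (H.T @ y)

x, y = rng.standard_normal(5), rng.standard_normal(3)
# Adjoint consistency check: <F x, y> == <x, F^* y> up to round-off.
print(np.isclose(toy_forward_operator(x) @ y, x @ toy_forward_operator_adjoint(y)))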
- show_registered_elements(display=True)[source]#
Compose and (optionally) print out a message containing the elements of the inverse problem and show what’s registered and what’s not
- Parameters:
display – if True print out the composed message about registered/unregistered elements
- Returns:
the composed message
- Return type:
None
- check_registered_elements(*args, **kwargs)[source]#
Check if all elements of the inverse problem (simulation model, observation operator, prior, observation error model, and observational data) are registered or not.
Note
This method SHOULD be modified by derived classes to check other elements. For example, a smoother requires observation times, assimilation window, etc.
- solve(init_guess=None)[source]#
Start solving the inverse problem.
Note
This method needs to be replicated (rewritten) for any DA (inverse problem) object so that it can replace the returned results object with the appropriate one.
- Parameters:
init_guess – The initial guess (e.g., of the model state/parameter) to be used as the starting point of the optimization routine
- Returns:
an instance of (derived from) InverseProblemResults holding results obtained by solving the inverse problem
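A sketch of the intended call sequence, tying the register_* methods above to solve(); ip is any InverseProblem-derived instance and the remaining arguments are user-supplied objects of the registered types.

def run_inverse_problem(ip, model, prior, obs_operator, obs_error_model, observations,
                        init_guess=None):
    """Wire up a DA object and solve it; returns an InverseProblemResults-derived object."""
    ip.register_model(model)
    ip.register_prior(prior)
    ip.register_observation_operator(obs_operator)
    ip.register_observation_error_model(obs_error_model)
    ip.register_observations(observations)
    ip.show_registered_elements()           # report what is registered and what is still missing
    return ip.solve(init_guess=init_guess)  # init_guess seeds the optimizer (if one is registered)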
- plot_results(results, overwrite=False, bruteforce_results=None, num_active=None, uncertain_parameter_sample=None, exhaustive_parameter_search_results=None, show_legend=True, output_dir=None, keep_plots=False, fontsize=20, line_width=2, usetex=True, show_axis_grids=True, axis_grids_alpha=(0.25, 0.4), plots_format='pdf')[source]#
Generic plotting function for inverse problems. Given the results returned by
solve()
, visualize the results. Additional plotters can be added to this method or to derived methods…
- Raises:
TypeError – if no valid optimizer is registered
- property optimizer#
The registered optimizer.
- property model#
The registered simulation model.
- property prior#
The registered prior.
- property posterior#
The posterior.
- property observation_error_model#
The registered observation error model.
- property observation_operator#
The registered observation operator.
- property observations#
The registered observational data.
- class InverseProblemResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
PyOEDData
Base class to hold DA (inverse problems) data/results
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- inverse_problem: InverseProblem | None#
- optimization_results: OptimizerResults | None#
- write_results(saveto)[source]#
Save the underlying InverseProblem results to a pickled dictionary.
- Parameters:
saveto – name/path of the file to save data to.
- Raises:
TypeError – if saveto is not a valid file path
IOError – if writing failed
- classmethod load_results(readfrom)[source]#
Inspect a pickled file and load the inverse problem results.
- Raises:
IOError – if loading failed
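A small round-trip sketch for the two methods above (the file name is arbitrary):

def save_and_reload(results, saveto="inverse_problem_results.pkl"):
    """Pickle the results to disk, then load them back via the classmethod."""
    results.write_results(saveto)
    # Using type(results) keeps the sketch valid for any InverseProblemResults subclass.
    return type(results).load_results(saveto)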
- __init__(*, inverse_problem=None, optimization_results=None)#
- class FilterConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)[source]#
Bases:
InverseProblemConfigs
Configurations class for the
Filter
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)#
- class Filter(configs=None)[source]#
Bases:
InverseProblem
Base class for all filtering DA implementations. Currently, this mirrors
InverseProblem
.- register_model(model=None)[source]#
Register (and return) the passed simulation model. This calls InverseProblem.register_model and adds extra assertions/functionality specific to filters.
- Raises:
TypeError – if the type of passed model is not supported
- register_prior(prior=None)[source]#
Register (and return) the passed prior. This calls InverseProblem.register_prior and adds extra assertions/functionality specific to filters.
- Raises:
TypeError – if the type of passed prior is not supported
- register_observation_operator(observation_operator=None)[source]#
Register (and return) the passed observation operator. This calls InverseProblem.register_observation_operator and adds extra assertions/functionality specific to filters.
- Raises:
TypeError – if the type of passed observation operator is not supported
- register_observation_error_model(observation_error_model=None)[source]#
Register (and return) the passed observation error model. This calls InverseProblem.register_observation_error_model and adds extra assertions/functionality specific to filters.
- Raises:
TypeError – if the type of passed observation error model is not supported
- class FilterResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
InverseProblemResults
Base class for objects holding results of
Filter
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
- class SmootherConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)[source]#
Bases:
InverseProblemConfigs
Configurations class for the
Smoother
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
window (None | Iterable) – the assimilation window (t0, tf)
- window: None | Iterable#
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)#
- class Smoother(configs=None)[source]#
Bases:
InverseProblem
Base class for all smoothing DA implementations. Currently, this mirrors
InverseProblem
.- register_model(model=None)[source]#
Register (and return) the passed simulation model. This calls InverseProblem.register_model and adds extra assertions/functionality specific to smoothers.
- Raises:
TypeError – if the type of passed model is not supported
- register_prior(prior=None)[source]#
Register (and return) the passed prior. This calls InverseProblem.register_prior and adds extra assertions/functionality specific to smoothers.
- Raises:
TypeError – if the type of passed prior is not supported
- register_window(window)[source]#
Update the assimilation/inversion time window (cycle)
- Parameters:
window (None | Iterable) – an iterable with two entries \((t_0, t_1)\) indicating the beginning and the end of the inversion time window.
- Raises:
TypeError – if window is not a valid iterable with two entries
- register_observation_operator(observation_operator=None)[source]#
Register (and return) the passed observation operator. This calls InverseProblem.register_observation_operator and adds extra assertions/functionality specific to smoothers.
- Raises:
TypeError – if the type of passed observation operator is not supported
- register_observation_error_model(observation_error_model=None)[source]#
Register (and return) the passed observation error model. This calls InverseProblem.register_observation_error_model and adds extra assertions/functionality specific to smoothers.
- Raises:
TypeError – if the type of passed observation error model is not supported
- register_observations(observations)[source]#
Register (and return) the observational data. This calls InverseProblem.register_observations and adds extra assertions/functionality specific to smoothers.
- Raises:
TypeError – if the type of passed observational data is not supported
- register_observation(t, observation, overwrite=False)[source]#
Given an observation instance/vector and the associated time t, update observation/data information
- Parameters:
t (float) – time at which the passed observation is registered
observation – an observation vector; this should be specified by the forward or the observation operator
overwrite (bool) – overwrite an existing observation if already registered at the passed time
- Raises:
ValueError – raised if (i) overwrite=False and another observation already exists at time t, (ii) no valid observation operator has been registered yet, or (iii) the assimilation window has not been registered yet
TypeError – is raised if the observation is not validated by the associated observation operator
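A usage sketch for the time-dependent setup described above: register the assimilation window first (otherwise register_observation raises ValueError), then attach one observation per time instance. The smoother instance and the observation vectors are user-supplied.

def register_window_and_observations(smoother, t0, tf, timed_observations):
    """`timed_observations` is an iterable of (time, observation_vector) pairs within [t0, tf]."""
    smoother.register_window((t0, tf))                  # must precede register_observation
    for t, obs in timed_observations:
        smoother.register_observation(t, obs, overwrite=False)
    return smoother.observation_times                   # numpy array of the registered times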
- find_observation_time(t, time_keys=None)[source]#
Local function to find the key in self.observations representing a time equal to, or within _TIME_EPS of, the passed time
- Parameters:
t – the time to lookup
time_keys – the timespan to look into. If None, the registered observation times will be used.
- Returns:
the matched time (if found) or None
- check_registered_elements(*args, **kwargs)[source]#
Check if all elements of the inverse problem (simulation model, observation operator, prior, observation error model, and observational data) are registered or not.
Note
This method SHOULD be modified by derived classes to check other elements. For example, a smoother requires observation times, assimilation window, etc.
- show_registered_elements(display=True)[source]#
Compose and (optionally) print out a message containing the elements of the inverse problem and show what’s registered and what’s not
- Parameters:
display – if True print out the composed message about registered/unregistered elements
- Returns:
the composed message
- Return type:
None
- property observation_times#
Return a numpy array holding the registered observation times
- property window#
- class SmootherResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
InverseProblemResults
Base class for objects holding results of
Smoother
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
- class GaussianPosteriorConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', size=None, mean=0.0, variance=1.0, design=True, sparse=True, random_seed=None)[source]#
Bases:
GaussianErrorModelConfigs
Just renaming (possibly add more features later) around Gaussian distribution/model configurations.
- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
size (int | None) – dimension of the error model space. Detected from mean if None.
mean (float | ndarray) – mean of the error model
variance (float | ndarray | spmatrix) – variance/covariance of the error model
design (None | bool | Sequence[bool] | ndarray) –
an experimental design to define active/inactive entries of the random variable (mean, variance/covariance matrix).
If the design is None, it is set to all ones; that is everything is observed (default)
If the design is a binary vector (or an int-dtype array with 0/1 entries), the mean, the covariance, and all random vectors are projected onto the space identified by the 1/True entries.
sparse (bool) – convert covariance to scipy.csc_matrix used if
size > 1
random_seed (int | None) – random seed used when the Gaussian model is initiated. This is useful for reproducibility.
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', size=None, mean=0.0, variance=1.0, design=True, sparse=True, random_seed=None)#
- class GaussianPosterior(configs=None)[source]#
Bases:
GaussianErrorModel
A class approximating the posterior distribution of the 3D-Var problem around the MAP estimate. Everything is the same as in GaussianErrorModel, except for the attribute __STDEV, which is involved in the methods covariance_matvec and generate_noise. These two functions are replaced with matrix-free versions. Here, we do not construct the (posterior) covariance; instead, we use the fact that the posterior covariance is (or can be approximated by):
\[\mathbf{A} := \left( \mathbf{B}^{-1} + \mathbf{M}^T \mathbf{H}^T \mathbf{R}^{-1} \mathbf{H} \mathbf{M} \right)^{-1} \,,\]
where \(\mathbf{B}\) is the prior covariance matrix, \(\mathbf{H}\) is the linear observation operator (or its linearization, e.g., the Jacobian, around the MAP estimate),
\(\mathbf{R}\) is the observation error covariance matrix, and \(\mathbf{M}\) is the tangent linear (TLM) of the simulation model evaluated at the MAP estimate.
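A small dense toy illustrating the approximation above (sizes and matrices are made up, and the explicit precision matrix is formed only for clarity; the class itself applies \(\mathbf{A}\) matrix-free): applying the posterior covariance to a vector amounts to solving a linear system with the posterior precision \(\mathbf{B}^{-1} + \mathbf{M}^T \mathbf{H}^T \mathbf{R}^{-1} \mathbf{H} \mathbf{M}\).

import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3                                  # toy state and observation dimensions
B = np.diag(rng.uniform(0.5, 2.0, size=n))   # prior covariance
R = 0.1 * np.eye(m)                          # observation error covariance
M = rng.standard_normal((n, n))              # tangent linear model (TLM) at the MAP estimate
H = np.eye(m, n)                             # (linearized) observation operator

# Posterior precision: A^{-1} = B^{-1} + M^T H^T R^{-1} H M
posterior_precision = np.linalg.inv(B) + M.T @ H.T @ np.linalg.solve(R, H @ M)

def posterior_covariance_matvec(x):
    """Apply A to x without forming A explicitly: solve A^{-1} z = x for z."""
    return np.linalg.solve(posterior_precision, x)

print(posterior_covariance_matvec(rng.standard_normal(n)))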
- class ComplexGaussianPosteriorConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', size=None, mean=0.0, variance=1 + 0j, relation=0j, design=1.0, sparse=True, map_to_real=False, random_seed=None)[source]#
Bases:
ComplexGaussianErrorModelConfigs
Just renaming (possibly add more features later) around Complex-valued Gaussian distribution/model configurations.
- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
size (int | None) – dimension of the error model space. Detected from mean if None.
mean (complex | Sequence[complex] | ndarray) – mean of the error model
variance (complex | Sequence[complex] | Sequence[Sequence[complex]] | ndarray | spmatrix) – variance/covariance of the error model
sparse (bool) – convert covariance to scipy.csc_array used if
size > 1
map_to_real (bool) – apply computations (e.g., pdf, etc.) by mapping to the real domain using the duality with the composite real vector
random_seed (int | None) – random seed used when the Gaussian model is initiated. This is useful for reproducibility.
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', size=None, mean=0.0, variance=1 + 0j, relation=0j, design=1.0, sparse=True, map_to_real=False, random_seed=None)#
- class ComplexGaussianPosterior(configs=None)[source]#
Bases:
ComplexGaussianErrorModel
A class approximating the posterior distribution of the 3D-Var problem around the MAP estimate.
Abstract classes for filtering algorithms (Bayesian and variational approaches). A HybridFilter class is also added, but without clear functionality yet. Unlike smoothing algorithms, in the case of filters a single observation is given (e.g., time-independent problems). The output of a filtering algorithm is a point estimate of the truth. An uncertainty measure, e.g., a posterior covariance, is provided if a Bayesian approach is followed.
- class VariationalFilterConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)[source]#
Bases:
FilterConfigs
Configurations class for the
VariationalFilter
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)#
- class VariationalFilter(configs=None)[source]#
Bases:
Filter
Base class for variational filters. In this case, a single point estimate is obtained by solving a weighted least-squares optimization problem to minimize the mismatch between the model prediction and the observation (in the appropriate projected space, e.g., the observation space). The mismatch is usually regularized using a penalty term (typically derived from the prior).
- objective(init_guess, data_misfit_only=False)[source]#
Evaluate the objective function, and the associated gradient
- Parameters:
init_guess – model parameter/state to evaluate the objective function and the associated gradient at
data_misfit_only (bool) – discard the prior/regularization term if True. This is added for flexibility
- Returns:
objective function value, and the gradient; (the objective value is a scalar, and the gradient is a one-dimensional Numpy array)
- abstract objective_function_value(init_guess, data_misfit_only=False)[source]#
A method to evaluate the variational objective function.
- objective_function_gradient(init_guess, data_misfit_only=False)[source]#
A method to evaluate the gradient of the variational objective function. This implementation (by default) provides a gradient approximation using finite differences. For efficient evaluation of the gradient, the derived class needs to provide an implementation of the analytical gradient.
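A self-contained toy of the objective/gradient pair described above for a linear-Gaussian (3DVar-like) case. The forward finite-difference scheme is in the spirit of the default gradient approximation mentioned in objective_function_gradient; the exact scheme used by a concrete filter is not specified here, so treat this as illustrative.

import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2
B = np.eye(n)                    # prior covariance
R = 0.25 * np.eye(m)             # observation error covariance
H = np.eye(m, n)                 # linear observation operator
xb = rng.standard_normal(n)      # background (prior mean)
y = rng.standard_normal(m)       # observation

def objective_value(x, data_misfit_only=False):
    """0.5*(Hx - y)^T R^{-1} (Hx - y) [+ 0.5*(x - xb)^T B^{-1} (x - xb)]."""
    misfit = H @ x - y
    value = 0.5 * misfit @ np.linalg.solve(R, misfit)
    if not data_misfit_only:
        dx = x - xb
        value += 0.5 * dx @ np.linalg.solve(B, dx)
    return value

def objective_gradient_fd(x, data_misfit_only=False, eps=1e-6):
    """Forward finite-difference approximation of the gradient (illustrative only)."""
    f0 = objective_value(x, data_misfit_only)
    grad = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        grad[i] = (objective_value(xp, data_misfit_only) - f0) / eps
    return grad

x0 = np.zeros(n)
print(objective_value(x0), objective_gradient_fd(x0))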
- class VariationalFilterResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
FilterResults
Base class for objects holding results of
Filter
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
- class BayesianFilterConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)[source]#
Bases:
FilterConfigs
Configurations class for the
BayesianFilter
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)#
- class BayesianFilter(configs=None)[source]#
Bases:
Filter
Base class for Bayesian filtering algorithms. In this case, the probability distribution (or an estimate thereof) of the model state/parameter is considered. The goal is to apply Bayes’ theorem and retrieve the exact/approximate posterior or samples from the posterior.
- class BayesianFilterResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
FilterResults
Base class for objects holding results of
Filter
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
- class HybridFilterConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)[source]#
Bases:
FilterConfigs
Configurations class for the
HybridFilter
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None)#
- class HybridFilter(configs=None)[source]#
Bases:
Filter
Base class for Bayesian-variational filters.
- class HybridFilterResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
FilterResults
Base class for objects holding results of
Filter
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
Abstract classes for smoothing algorithms (Bayesian and variational approaches). A HybridSmoother class is also added, but without clear functionality yet. Unlike filtering algorithms, in the case of smoothers multiple observations are given (e.g., time-dependent problems). The output of a smoothing algorithm is a point estimate of the truth. An uncertainty measure, e.g., a posterior covariance, is provided if a Bayesian approach is followed.
- class VariationalSmootherConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)[source]#
Bases:
SmootherConfigs
Configurations class for the
VariationalSmoother
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
window (None | Iterable) – the assimilation window (t0, tf)
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)#
- class VariationalSmoother(configs=None)[source]#
Bases:
Smoother
Base class for variational smoothers. In this case, a single point estimate is obtained by solving a weighted least-squares optimization problem to minimize the mismatch between the model prediction and the observation (in the appropriate projected space, e.g., the observation space). The mismatch is usually regularized using a penalty term (typically derived from the prior).
- objective(init_guess, data_misfit_only=False)[source]#
Evaluate the objective function, and the associated gradient
- Parameters:
init_guess – model parameter/state to evaluate the objective function and the associated gradient at
data_misfit_only (bool) – discard the prior/regularization term if True. This is added for flexibility
- Returns:
objective function value, and the gradient; (the objective value is a scalar, and the gradient is a one-dimensional Numpy array)
- abstract objective_function_value(state, data_misfit_only=False)[source]#
A method to evaluate the variational objective function.
- objective_function_gradient(state, data_misfit_only=False)[source]#
A method to evaluate the gradient of the variational objective function. This implementation (by default) provides a gradient approximation using finite differences. For efficient evaluation of the gradient, the derived class needs to provide an implementation of the analytical gradient.
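For orientation, the weighted least-squares objective described above typically takes the strong-constraint 4DVar form below (the notation matches the Gaussian posterior discussion earlier in this page; the exact objective used by a concrete smoother may differ):
\[\mathcal{J}(\mathbf{x}_0) = \frac{1}{2} \left(\mathbf{x}_0 - \mathbf{x}_b\right)^T \mathbf{B}^{-1} \left(\mathbf{x}_0 - \mathbf{x}_b\right) + \frac{1}{2} \sum_{k} \left(\mathbf{H}_k \mathcal{M}_{0 \rightarrow t_k}(\mathbf{x}_0) - \mathbf{y}_k\right)^T \mathbf{R}_k^{-1} \left(\mathbf{H}_k \mathcal{M}_{0 \rightarrow t_k}(\mathbf{x}_0) - \mathbf{y}_k\right) \,,\]
where the sum runs over the observation times registered within the assimilation window, \(\mathbf{x}_b\) is the prior (background) state, \(\mathbf{H}_k\) and \(\mathbf{R}_k\) are the observation operator and observation error covariance at time \(t_k\), and \(\mathcal{M}_{0 \rightarrow t_k}\) propagates the initial state to time \(t_k\).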
- class VariationalSmootherResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
SmootherResults
Base class for objects holding results of
Smoother
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
- class BayesianSmootherConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)[source]#
Bases:
SmootherConfigs
Configurations class for the
BayesianSmoother
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
window (None | Iterable) – the assimilation window (t0, tf)
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)#
- class BayesianSmoother(configs=None)[source]#
Bases:
Smoother
Base class for Bayesian smoothing algorithms. In this case, the probability distribution (or an estimate thereof) of the model state/parameter is considered. The goal is to apply Bayes’ theorem and retrieve the exact/approximate posterior or samples from the posterior.
- class BayesianSmootherResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
SmootherResults
Base class for objects holding results of
Smoother
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
- class HybridSmootherConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)[source]#
Bases:
SmootherConfigs
Configurations class for the
HybridSmoother
abstract base class.- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
name (str | None) – name of the DA (inverse problem solver) approach/method.
model (None | SimulationModel) – the simulation model.
prior (None | ErrorModel) – Background/Prior model (e.g.,
GaussianErrorModel
)observation_operator (None | ObservationOperator) – operator to map model state to observation space
observation_error_model (None | ErrorModel) – Observation error model (e.g.,
GaussianErrorModel
)observations (None | Any) – Observational data (the data type is very much dependent on the DA method)
optimizer (None | Optimizer) –
the optimization routine (optimizer) to be registered and later used for solving the DA (inverse) problem. This can be one of the following:
None: In this case, no optimizer is registered, and the
solve()
won’t be functional until an optimization routine is registered. An optimizer instance (an object that inherits from Optimizer). In this case, the optimizer is registered as is and is updated with the passed configurations if available.
The class (subclass of
Optimizer
) to be used to instantiate the optimizer.
optimizer_configs (None | dict | OptimizerConfigs) –
the configurations of the optimization routine. This can be one of the following:
None, in this case configurations are discarded, and whatever default configurations of the selected/passed optimizer are employed.
A dict holding full/partial configurations of the selected optimizer. These are either used to instantiate or update the optimizer configurations based on the type of the passed optimizer.
A class providing implementations of the configurations (this must be a subclass of
OptimizerConfigs
). An instance of a subclass of
OptimizerConfigs
which is used to set/update optimizer configurations.
Note
Not all DA (inverse problem) objects are optimization-based. For example, particle-based methods (EnKF, PF, etc.) employ a sample to estimate the flow of the distribution through the model dynamics (prior -> posterior). Thus, the optimizer (and configs) in this case (the default) are set to None. For optimization-based methods such as 3DVar, 4DVar, etc., an optimizer must be registered for the inverse problem to be solved.
window (None | Iterable) – the assimilation window (t0, tf)
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name=None, model=None, prior=None, observation_error_model=None, observation_operator=None, observations=None, optimizer=None, optimizer_configs=None, window=None)#
- class HybridSmoother(configs=None)[source]#
Bases:
Smoother
Base class for Bayesian-variational smoothers.
- class HybridSmootherResults(*, inverse_problem=None, optimization_results=None)[source]#
Bases:
SmootherResults
Base class for objects holding results of
Smoother
- Parameters:
inverse_problem (InverseProblem | None) – instance of a class derived from
InverseProblem
.
- __init__(*, inverse_problem=None, optimization_results=None)#
This module provides a blueprint for Goal-Oriented Inverse Problems and data assimilation. The goal operator is an additional operator that operates on the inference parameter/state. This includes prediction to a future time, etc.
- class GoalOrientedOperatorConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name='Goal-Oriented Operator', inverse_problem=None)[source]#
Bases:
PyOEDConfigs
Configurations class for the
GoalOrientedOperator
abstract base class. This class inherits functionality from PyOEDConfigs
and only adds new class-level variables which can be updated as needed.
- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
inverse_problem (None | InverseProblem) – The inverse problem instance to be used with the goal operator (e.g., prediction operator)
- name: str#
- inverse_problem: None | InverseProblem#
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name='Goal-Oriented Operator', inverse_problem=None)#
- class GoalOrientedOperator(configs=None)[source]#
Bases:
PyOEDObject
Base class implementing a Goal-Oriented Operator that operates on an inverse problem.
Note
Every Goal-Oriented Operator SHOULD inherit this class.
- Parameters:
configs (dict | GoalOrientedOperatorConfigs | None) – (optional) configurations for the goal-oriented operator object
- Raises:
PyOEDConfigsValidationError – if passed invalid configs
- validate_configurations(configs, raise_for_invalid=True)[source]#
Each derived class SHOULD implement its own function that validates its own configurations. If the validation is self-contained (validates all configurations), then that’s it. However, one can just validate the configurations of the immediate class and call super to validate configurations associated with the parent class.
If one does not wish to do any validation (we strongly advise against that), simply add the signature of this function to the derived class.
Note
The purpose of this method is to make sure that the settings in the configurations object self._CONFIGURATIONS are of the right type/values and are conformable with each other. This function is called upon instantiation of the object, and each time a configuration value is updated. Thus, this function needs to be inexpensive and should not do heavy computations.
- Parameters:
configs (dict | GoalOrientedOperatorConfigs) – configurations to validate. If a
GoalOrientedOperatorConfigs
object is passed, validation is performed on the entire set of configurations. However, if a dictionary is passed, validation is performed only on the configurations corresponding to the keys in the dictionary.
- Raises:
PyOEDConfigsValidationError – if the configurations are invalid and raise_for_invalid is set to True.
AttributeError – if any (or a group) of the configurations does not exist in the optimizer configurations
GoalOrientedOperatorConfigs
.
- update_configurations(**kwargs)[source]#
Take any set of keyword arguments, look up each in the configurations, and update as necessary/possible/valid
- Raises:
PyOEDConfigsValidationError – if invalid configurations passed
- register_inverse_problem(inverse_problem=None)[source]#
Register (and return) the passed inverse problem.
- Raises:
TypeError – if the type of passed inverse problem is not supported
- generate_vector(*args, **kwargs)[source]#
Create a vector conformable with the goal operator (goal-vector)
- apply_adjoint(*args, **kwargs)[source]#
Apply the adjoint (Jacobian-transposed, etc.) of the goal-oriented operator
- property inverse_problem#
A reference to the underlying inverse problem.
- property size#
Dimension/Size of the goal space (size of a goal vector)
Note
A better implementation should be provided to return the dimension without creating a vector.
- class GoalOrientedInverseProblemConfigs(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name='Goal-Oriented Inverse Problem', inverse_problem=None, goal_operator=None)[source]#
Bases:
PyOEDConfigs
Configurations class for the
GoalOrientedInverseProblem
base class. This class inherits functionality from PyOEDConfigs
and only adds new class-level variables which can be updated as needed.
- Parameters:
verbose (bool) – a boolean flag to control verbosity of the object.
debug (bool) – a boolean flag that enables adding extra functionality in a debug mode
output_dir (str | Path) – the base directory where the output files will be saved.
inverse_problem (None | InverseProblem) – The inverse problem instance to be used with the goal operator (e.g., prediction operator)
goal_operator (None | GoalOrientedOperator) – a goal operator (function of the inverse problem); this needs to be None or an instance (derived from GoalOrientedOperator).
- name: str#
- inverse_problem: None | InverseProblem#
- goal_operator: None | GoalOrientedOperator#
- __init__(*, debug=False, verbose=False, output_dir='./_PYOED_RESULTS_', name='Goal-Oriented Inverse Problem', inverse_problem=None, goal_operator=None)#
- class GoalOrientedInverseProblem(configs=None)[source]#
Bases:
PyOEDObject
Base class implementing Goal-Oriented Inverse Problems.
Note
Every Goal-Oriented inverse problem SHOULD inherit this class.
- Parameters:
configs (dict | GoalOrientedInverseProblemConfigs | None) – (optional) configurations for the goal-oriented inverse problem
- Raises:
PyOEDConfigsValidationError – if passed invalid configs
- validate_configurations(configs, raise_for_invalid=True)[source]#
Each derived class SHOULD implement its own function that validates its own configurations. If the validation is self-contained (validates all configurations), then that’s it. However, one can just validate the configurations of the immediate class and call super to validate configurations associated with the parent class.
If one does not wish to do any validation (we strongly advise against that), simply add the signature of this function to the derived class.
Note
The purpose of this method is to make sure that the settings in the configurations object self._CONFIGURATIONS are of the right type/values and are conformable with each other. This function is called upon instantiation of the object, and each time a configuration value is updated. Thus, this function needs to be inexpensive and should not do heavy computations.
- Parameters:
configs (dict | GoalOrientedInverseProblemConfigs) – configurations to validate. If a
GoalOrientedInverseProblemConfigs
object is passed, validation is performed on the entire set of configurations. However, if a dictionary is passed, validation is performed only on the configurations corresponding to the keys in the dictionary.
- Raises:
PyOEDConfigsValidationError – if the configurations are invalid and raise_for_invalid is set to True.
AttributeError – if any (or a group) of the configurations does not exist in the optimizer configurations
GoalOrientedOperatorConfigs
.
- update_configurations(**kwargs)[source]#
Take any set of keyword arguments, look up each in the configurations, and update as necessary/possible/valid
- Raises:
PyOEDConfigsValidationError – if invalid configurations passed
- register_inverse_problem(inverse_problem=None)[source]#
Register (and return) the passed inverse problem.
- Raises:
TypeError – if the type of passed inverse problem is not supported
- register_goal_operator(goal_operator=None)[source]#
Register (and return) the passed goal-oriented operator.
- Raises:
TypeError – if the type of the passed goal-oriented operator is not supported
- property inverse_problem#
A reference to the underlying inverse problem.
- property goal_operator#
A reference to the underlying goal-operator.
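A wiring sketch for the goal-oriented layer (all arguments are user-supplied; only the register_* methods and the two properties come from this reference):

def attach_goal(goal_ip, inverse_problem, goal_operator):
    """Attach a DA object and a goal operator (e.g., prediction to a future time)
    to a GoalOrientedInverseProblem-derived instance goal_ip."""
    goal_ip.register_inverse_problem(inverse_problem)
    goal_ip.register_goal_operator(goal_operator)
    return goal_ip.inverse_problem, goal_ip.goal_operator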